Replies: 3 comments
-
We should also consider integrating TensorBoard Dev features into this.
-
Just a quick comment about this. Many papers use slightly different implementation details; even widespread strategies like EWC are often implemented in different ways. If you do (1), config files are nice to have. If you do (2), it is difficult to represent all the experimental details in a config file, so you probably need some custom code. To be truly reproducible you also need much more than that, such as all the library versions. Maybe we can collect reproducibility experiments using Avalanche in a separate repository.
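(A minimal sketch of the "library versions" point: recording the Python and package versions next to the experiment config so a run can be reproduced later. The dump_reproducibility_info helper and the default package list are assumptions for illustration, not part of Avalanche.)

import sys
from importlib.metadata import version, PackageNotFoundError

import yaml  # PyYAML


def dump_reproducibility_info(config_path, out_path,
                              packages=("torch", "torchvision", "avalanche-lib")):
    """Copy an experiment config and attach Python/library versions to it.

    Hypothetical helper: shows one way to store environment details
    alongside the experimental configuration.
    """
    with open(config_path) as f:
        config = yaml.safe_load(f) or {}

    versions = {"python": sys.version.split()[0]}
    for pkg in packages:
        try:
            versions[pkg] = version(pkg)
        except PackageNotFoundError:
            versions[pkg] = "not installed"

    config["environment"] = versions
    with open(out_path, "w") as f:
        yaml.safe_dump(config, f)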
-
We launched this effort as a separate repo here: https://github.com/ContinualAI/reproducible-continual-learning
-
We should start thinking of a way to check whether a strategy implementation can reach the same performance as in the original papers in which it was proposed or used.
I was thinking this could be done with config files (cfg or yml) (please keep in mind #198) in the extra module, which a strategy should be able to load through a named parameter, like:
clmodel = Naive(..., config="exp1.yml")
Or, more generally, we should think of a way to parametrize the whole experiment (including benchmarks and the logger), in line with the Sacred integration idea.
This will also be useful for the slow tests described in #172 (a rough sketch of the config-loading idea is below).
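(A rough sketch only, showing how the proposed config argument could be emulated today by reading hyper-parameters from a YAML file and forwarding them to a strategy. The exp1.yml file, its keys, and the load_strategy_config helper are assumptions; Naive does not currently accept a config parameter, and the exact import path depends on the Avalanche version.)

import yaml
from torch.nn import CrossEntropyLoss
from torch.optim import SGD

from avalanche.models import SimpleMLP
from avalanche.training import Naive  # older versions: avalanche.training.strategies


def load_strategy_config(path):
    """Read strategy hyper-parameters (lr, batch sizes, epochs, ...) from YAML."""
    with open(path) as f:
        return yaml.safe_load(f)


# e.g. {"lr": 0.001, "train_mb_size": 32, "train_epochs": 4, "eval_mb_size": 128}
cfg = load_strategy_config("exp1.yml")
model = SimpleMLP(num_classes=10)

clmodel = Naive(
    model,
    SGD(model.parameters(), lr=cfg.get("lr", 0.001)),
    CrossEntropyLoss(),
    train_mb_size=cfg.get("train_mb_size", 32),
    train_epochs=cfg.get("train_epochs", 1),
    eval_mb_size=cfg.get("eval_mb_size", 128),
)

A built-in config parameter would essentially move this unpacking inside the strategy constructor, so that the same YAML file could also be reused by the slow tests.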