Online Continual Learning #789
HamedHemati
started this conversation in
Ideas
Replies: 2 comments
-
A couple of things that we might want to add to help with OCL scenarios: …
-
I think the alternative you proposed could be very useful; having something like …
-
Hi everyone!
For online continual models, the buffer may need to be updated with every batch of incoming data. By default, in the standard CL benchmarks, all batches from the same task are treated as a single experience, so the buffer is updated only once, after the task ends. As suggested by @AntonioCarta, I converted each batch of data into an individual experience. This can be done by slicing the experience's dataset into consecutive per-batch subsets and using each subset as its own experience.
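A minimal sketch of this per-batch splitting, using plain `torch.utils.data.Subset` (the helper name and exact slicing are illustrative, not the code from the original post or Avalanche's API):

```python
import torch
from torch.utils.data import Subset, TensorDataset


def split_into_sub_experiences(dataset, batch_size):
    """Split a dataset into consecutive Subsets of at most batch_size
    samples, so each mini-batch can be treated as its own experience."""
    indices = list(range(len(dataset)))
    return [
        Subset(dataset, indices[start:start + batch_size])
        for start in range(0, len(indices), batch_size)
    ]


# Example: a 10-sample dataset split into sub-experiences of 4 samples
data = TensorDataset(torch.arange(10).float().unsqueeze(1), torch.arange(10))
subs = split_into_sub_experiences(data, batch_size=4)
# subs now holds three Subsets of sizes 4, 4, and 2
```

Each returned `Subset` can then replace the experience's dataset before being fed to the strategy, so the buffer update fires once per batch.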
Now, the strategy's buffer will be updated as expected in an online setup. The only possible issue is that by creating "sub-experiences" out of each batch, the optimizer is re-initialized for every batch. This can be problematic when the optimizer uses momentum or any other state accumulated from previous updates within the same experience.
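To illustrate the caveat: with SGD plus momentum, the optimizer accumulates a momentum buffer per parameter across steps, and re-creating the optimizer at each sub-experience discards it. A small self-contained sketch (model and training loop are my own toy example, not the strategy's actual code):

```python
import torch

model = torch.nn.Linear(2, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)


def train_step(batch_x, batch_y):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(batch_x), batch_y)
    loss.backward()
    opt.step()


# Two "sub-experiences" trained with the SAME optimizer:
# the momentum buffers persist across them.
for _ in range(2):
    train_step(torch.randn(4, 2), torch.randn(4, 1))

has_momentum = all(
    "momentum_buffer" in opt.state[p] for p in model.parameters()
)

# Re-creating the optimizer per sub-experience (what implicit
# re-initialization does) starts from empty state again:
fresh_opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
```

Keeping a single optimizer instance alive across the sub-experiences of one task, instead of rebuilding it per sub-experience, would sidestep the problem.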
Does anyone know a better or more general way to do it?
Thanks :)