Hi,
This approach was also implemented in rainbow-memory here. In Avalanche, there is an example of online replay here, but the implementation seems a bit different because there is no way to re-train the model with multiple passes or epochs using the buffer only (please CMIIW). Is there an implementation of this particular setting in Avalanche, or is there a way I can implement it myself? Your help would be very much appreciated, or a pointer to an existing discussion would also be helpful! Thanks
From what I see, the ClovaAI RM implementation seems to do this in the main loop. They train on the experience, update the dataset to just the buffer, and call train again. I'm not familiar with a canonical Avalanche way, but I think you could recreate that logic pretty easily in your main training loop with a nested for loop. Something like:

```python
for i, exp in enumerate(benchmark.train_stream):
    # split the experience into an online stream
    ocl_stream = split_online_stream([exp], experience_size=32)
    # break the ocl_stream down even further
    for online_exp in ocl_stream:
        cl_strategy.train(online_exp)
    # add some logic to build an exp from the buffer on the fly
    buffer_exp = ...
    buffer_train_epochs = ...
    for epoch in range(buffer_train_epochs):
        cl_strategy.train(buffer_exp)
```

Disclaimer: I haven't really worked with the online setting, so this may not be as well thought out an answer as someone with more experience could give! I can also imagine ways you could achieve the same thing with a strategy plugin, but I personally think having this logic in the main training loop reads the clearest.
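As for the `buffer_exp = ...` placeholder, one rough sketch of how you might fill it in (not a canonical Avalanche recipe, and the import paths and signatures may differ across versions, so treat this as an illustration of the idea): keep a storage policy such as `ReservoirSamplingBuffer`, update it after each online experience, and wrap its dataset into a one-experience benchmark with `dataset_benchmark` so that `cl_strategy.train` can consume it.

```python
from avalanche.benchmarks.generators import dataset_benchmark
from avalanche.training.storage_policy import ReservoirSamplingBuffer

# hypothetical helper: turn the current replay buffer into a single
# "experience" that cl_strategy.train() can iterate over
storage_policy = ReservoirSamplingBuffer(max_size=2000)

def make_buffer_exp(online_exp):
    # add the data we just trained on to the replay buffer
    storage_policy.update_from_dataset(online_exp.dataset)
    # wrap the buffer dataset into a tiny benchmark; its train_stream
    # then contains exactly one experience built from the buffer
    buffer_benchmark = dataset_benchmark(
        train_datasets=[storage_policy.buffer],
        test_datasets=[storage_policy.buffer],
    )
    return buffer_benchmark.train_stream[0]
```

In the loop above you could then do `buffer_exp = make_buffer_exp(online_exp)` right after the inner training call.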
Thanks a lot for the quick response.
After extensively digging into the Avalanche code base, I found a way to re-train (or fine-tune) the model with the buffer.
I did not re-split the experience as suggested by @niniack or as in this online replay example. Instead, I want to pass the current stream through the model for one epoch and then fine-tune the model again with the buffer.
If I call `cl_strategy.train` again, it will trigger `before_training_exp`, `after_training_exp`, etc., which I do not want. I only want to call `_before_training_epoch`, `training_epoch`, and `_after_training_epoch`, since I am only interested in fine-tuning the model using the buffer with multiple epochs. Instead, I created a new templa…
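For reference, a minimal sketch of what a template with such an epoch-only fine-tuning loop could look like (this is a hypothetical illustration, not the actual template described above, which is cut off; the class and method names here are made up, and it assumes the strategy's dataloader can simply be repointed at the buffer dataset):

```python
from torch.utils.data import DataLoader
from avalanche.training.templates import SupervisedTemplate

class BufferFinetuneTemplate(SupervisedTemplate):
    """Hypothetical sketch: fine-tune on a buffer dataset by running only
    the epoch-level callbacks, without triggering before_training_exp /
    after_training_exp again."""

    def finetune_on_buffer(self, buffer_dataset, epochs=1, batch_size=32, **kwargs):
        self.model.train()
        # assumption: point the strategy's dataloader at the buffer so
        # training_epoch() iterates over buffer mini-batches
        self.dataloader = DataLoader(
            buffer_dataset, batch_size=batch_size, shuffle=True
        )
        for _ in range(epochs):
            self._before_training_epoch(**kwargs)
            self.training_epoch(**kwargs)
            self._after_training_epoch(**kwargs)
```

With something along these lines, the outer loop could call `cl_strategy.finetune_on_buffer(buffer_dataset, epochs=buffer_train_epochs)` after each experience instead of calling `train` a second time.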