
Online CL scenario using memory buffer only #1538

Answered by mhwasil
mhwasil asked this question in Q&A


Thanks a lot for the quick response.
After digging extensively into the Avalanche code base, I found a way to re-train (or fine-tune) the model with the buffer.

I did not re-split the experience as suggested by @niniack, nor follow the online replay approach. Instead, I want to pass the current stream to the model for one epoch and then fine-tune the model again with the buffer.

If I call cl_strategy.train again, it will trigger before_training_exp, after_training_exp, etc., which I do not want.

I only want to call _before_training_epoch, training_epoch, and _after_training_epoch, since I am only interested in fine-tuning the model on the buffer for multiple epochs.

Instead, I created a new templa…

Replies: 1 comment, 2 replies (@AntonioCarta, @mhwasil)

Answer selected by mhwasil