Support for Active Learning? #818
-
Hi all. I am considering using Avalanche for my research, but I'm not sure whether it can do what I need. I would like to train a system on a curriculum of tasks that adapts to the performance of the network. Ideally, it should even be possible for the network to use Active Learning and choose which tasks it wants to tackle next. I am hoping this would allow a neural network to mitigate catastrophic forgetting in much the same way that humans do: when we notice that we are no longer as good at a task as we used to be, we can look at specific examples of that task to refresh our memory. Is this something Avalanche can do? Are there any benchmarks in Avalanche that you think could be particularly fitting for this?
-
Hi @FlorianDietz,

Thanks for your interest in Avalanche and for considering it for your research :)

At the moment the Avalanche benchmarks module generates benchmark instances that are nothing more than a collection of multiple data streams (for train, test, etc.). Such streams are composed of what we call "experiences". In your case, each experience would be a separate task. While a benchmark instance is generated with these streams fixed (not dynamically generated), in Avalanche you can already process the experiences in a stream however you like (you can index a stream object like a normal Python list): you can process them backward, skip experiences, etc., for training or evaluation. So even if it is not really "semantically aligned" with the current benchmarks design, I think your scenario can be reasonably implemented in Avalanche without any structural change. You can simply treat the stream as a set rather than an ordered list. I don't know if @lrzpellegrini has better ideas.

As for your second question, the easiest way would be to use the computer vision datasets we already support. However, adding custom datasets to Avalanche is really straightforward, so here the limit is only your imagination!

Let me know if you have any doubts or further questions!
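The idea above (treat the stream as a set, and pick the next experience based on current performance) can be sketched in a few lines. This is only a hypothetical illustration with stand-in stubs: the plain list of experience ids and the `train_on` / `pick_next` functions are my inventions, and a real setup would index Avalanche's `benchmark.train_stream` and call a strategy's `train()` / `eval()` methods instead.

```python
import random

random.seed(0)

n_experiences = 4
# Simulated per-experience accuracy; a real setup would run an eval pass
# on each experience's test set to obtain these numbers.
accuracy = [0.0] * n_experiences

def train_on(exp_id):
    """Stub training step: nudges the chosen task's accuracy upward.
    In Avalanche this would be e.g. strategy.train(train_stream[exp_id])."""
    accuracy[exp_id] = min(1.0, accuracy[exp_id] + random.uniform(0.2, 0.4))

def pick_next():
    """Active-learning-style choice: revisit the currently weakest task."""
    return min(range(n_experiences), key=lambda i: accuracy[i])

for step in range(10):
    exp_id = pick_next()  # the stream is treated as a set, not an ordered list
    train_on(exp_id)

print(all(a > 0 for a in accuracy))  # prints True: every task was revisited
```

Because the selection rule always targets the worst-performing task, the loop naturally cycles back to experiences whose performance has degraded, which is exactly the "refresh my memory" behavior described in the question.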
-
Thanks! I'm going to have to check it out to see how much work adding things would be. An extension to my second question: I am interested in logic problems or other tasks where later experiences build on earlier experiences in a significant way. If solving the later tasks is completely impossible without first solving the earlier ones, that would be more interesting for me, because it gives the system a stronger incentive not to disturb the things it has already learned. Vision tasks are not suitable for this. Are there any tasks you think would fit these criteria, like logic, math, or some other domain where harder instances require using easier instances as subtasks?
-
Thanks. I will check it out. Are you aware of any benchmarks along these lines?