I'm trying to train an ensemble of models on multiple devices. However, Avalanche strategies only take a single device as a parameter and keep putting the model on that device, which makes sense. Is there a hidden feature that would allow me to train on multiple devices without rewriting the whole base_strategy?
Replies: 1 comment
Unfortunately, we do not currently support distributed training... you'd need to do it using basic PyTorch functionality. But we are discussing this internally and we'd be very interested in hearing your thoughts about it if you go ahead and try to implement it :) Maybe @lrzpellegrini, who has worked with distributed training in PyTorch recently, has a better idea about this.
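
For reference, here is a minimal sketch of what "basic PyTorch functionality" could look like for this use case, outside of any Avalanche strategy: one ensemble member is placed on each visible GPU and trained with a plain PyTorch loop. The model architecture, data, and hyperparameters below are placeholders, not part of Avalanche's API.

```python
# Minimal sketch (not Avalanche-specific): train one ensemble member per GPU
# with plain PyTorch. Model, data, and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

n_gpus = torch.cuda.device_count()
# One device per ensemble member; fall back to CPU if no GPU is available.
devices = [torch.device(f"cuda:{i}") for i in range(n_gpus)] or [torch.device("cpu")]

# Hypothetical ensemble: identical small MLPs, each living on its own device.
members = [
    nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10)).to(d)
    for d in devices
]
optimizers = [torch.optim.SGD(m.parameters(), lr=0.01) for m in members]
criterion = nn.CrossEntropyLoss()

# Dummy data just to make the sketch runnable.
dataset = TensorDataset(torch.randn(256, 32), torch.randint(0, 10, (256,)))
loader = DataLoader(dataset, batch_size=32, shuffle=True)

for epoch in range(2):
    for x, y in loader:
        # Each member trains on its own device. This loop is sequential, but
        # since the members are independent the per-device steps could be
        # parallelized (e.g. with threads or torch.multiprocessing).
        for model, opt, dev in zip(members, optimizers, devices):
            xb, yb = x.to(dev), y.to(dev)
            opt.zero_grad()
            loss = criterion(model(xb), yb)
            loss.backward()
            opt.step()
```

If instead the goal is to spread a single model's training across GPUs, the usual plain-PyTorch options are torch.nn.DataParallel or DistributedDataParallel, which is the kind of integration that would currently require modifying the strategy itself.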