Hi, thanks for sharing.
When I tried to use multiple GPUs to train knowledge distillation:
python3 -m torch.distributed.run --nproc_per_node $N_GPU distillation.py ...
I got the error:
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
distillation.py FAILED
Failures:
<NO_OTHER_FAILURES>
KD currently does not support multi-GPU training.
We adopted the KD method from "End-to-End Semi-Supervised Object Detection with Soft Teacher", and generating the teacher's features (I would say they are guide features for the student) is a heavy operation.
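For reference, feature-based KD of this kind typically runs the teacher in a no-grad forward pass and trains the student to match the resulting guide features. The sketch below uses hypothetical TeacherNet/StudentNet stand-ins and a plain MSE feature loss; it is not the repository's actual distillation.py implementation.

```python
# Minimal feature-based KD sketch. TeacherNet/StudentNet are placeholder
# modules, not the models used in this repository.
import torch
import torch.nn as nn

class TeacherNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())

    def forward(self, x):
        # Produces the "guide" features used to supervise the student.
        return self.backbone(x)

class StudentNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())

    def forward(self, x):
        return self.backbone(x)

teacher = TeacherNet().eval()      # frozen teacher
student = StudentNet().train()
optimizer = torch.optim.SGD(student.parameters(), lr=0.01)

images = torch.randn(2, 3, 32, 32)  # dummy batch

with torch.no_grad():               # heavy forward pass, no gradients needed
    guide_feat = teacher(images)

student_feat = student(images)
distill_loss = nn.functional.mse_loss(student_feat, guide_feat)

optimizer.zero_grad()
distill_loss.backward()
optimizer.step()
```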
I already noticed that both the student and the teacher take a GPU, and the teacher uses quite a lot of GPU memory. Thanks for the extra information. Do you mean the performance gain is limited when applying semi-supervised object detection? In my case, the ratio of labeled to unlabeled data is 1:2.
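If memory pressure on a single card is the issue, one workaround to try (just a sketch, and not something this repository necessarily supports) is to keep the frozen teacher on a second GPU and move its guide features to the student's device; device ids and module shapes below are assumptions.

```python
# Sketch: frozen teacher on cuda:1, student on cuda:0 (assumes two GPUs).
import torch
import torch.nn as nn

student_device = torch.device("cuda:0")
teacher_device = torch.device("cuda:1")

# Stand-in modules; the real teacher/student come from the repository.
teacher = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU()).to(teacher_device).eval()
student = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU()).to(student_device)

images = torch.randn(2, 3, 32, 32)

with torch.no_grad():  # teacher forward pass on its own device
    guide_feat = teacher(images.to(teacher_device))

student_feat = student(images.to(student_device))
# Move the guide features to the student's device before computing the loss.
loss = nn.functional.mse_loss(student_feat, guide_feat.to(student_device))
```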