I couldn't reproduce the results with the examples provided in the README on my 4 GPUs.
With batch size 256 on a single GPU it works.
Each additional GPU lowers precision@5 by ~10%, so with 4 GPUs the results were around 60%!!!
Please fix this bug as soon as possible.
Thanks,
--mike
After a lot of testing, I think this might be a limitation of PyTorch inference. It's therefore better to disable multiple GPUs in `device_ids`, or use it to specify only one GPU; then there is no need for distributed inference.
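A minimal sketch of that workaround, assuming a toy stand-in model rather than this repo's network: keep the model on one device, and if the script insists on `nn.DataParallel`, pin it to a single GPU via `device_ids`.

```python
import torch
import torch.nn as nn

# Toy stand-in for the real network (an assumption, not the repo's code).
model = nn.Linear(8, 4)

if torch.cuda.is_available():
    model = model.to("cuda:0")
    # Pin DataParallel to one device instead of e.g. [0, 1, 2, 3]:
    model = nn.DataParallel(model, device_ids=[0])

model.eval()
with torch.no_grad():
    x = torch.randn(2, 8)
    if torch.cuda.is_available():
        x = x.to("cuda:0")
    out = model(x)

print(tuple(out.shape))  # the forward pass is unchanged; only the device placement differs
```

On a machine without a GPU this falls back to plain CPU inference, which is enough to check that accuracy no longer depends on the device count.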
There is also no need to clear the CUDA cache in the code. It's better to reduce the batch size instead: 256 works well on 11 GB GPUs (e.g. an RTX 2080 Ti).
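For illustration, a single-device evaluation loop at batch size 256, using a toy dataset and model (assumptions, not this repo's code). Wrapping inference in `torch.no_grad()` keeps memory flat, so no `torch.cuda.empty_cache()` calls are needed:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Toy stand-ins for the real model and dataset.
model = nn.Linear(16, 10).to(device)
model.eval()
data = TensorDataset(torch.randn(512, 16), torch.randint(0, 10, (512,)))
loader = DataLoader(data, batch_size=256)  # 256 fits in 11 GB (e.g. RTX 2080 Ti)

correct = 0
with torch.no_grad():  # no autograd buffers, so memory stays bounded without empty_cache()
    for x, y in loader:
        pred = model(x.to(device)).argmax(dim=1)
        correct += (pred == y.to(device)).sum().item()

accuracy = correct / len(data)
```

The accuracy here is meaningless (random data), but the structure shows the single-GPU setup under which the reported batch-256 results were reproduced.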