
multi-gpu results WRONG #15

Open
mikeseven opened this issue Apr 20, 2020 · 1 comment

@mikeseven

I couldn't reproduce the results with the examples provided in the README on my 4 GPUs.
So I used batch size 256 on only 1 GPU, and that works.
Each additional GPU lowers precision@5 by ~10%, so with 4 GPUs the results were around 60%!!!

Please correct this bug asap.

Thanks,
--mike

@mikeseven
Author

After lots of testing, I think this might be a limitation of PyTorch inference. Therefore, it might be better to disable multiple GPUs in device_ids, or use it to specify only one GPU. As such, there is no need to use distributed.

There is no need to clear the CUDA cache in the code. It's better to reduce the batch size: 256 works well for 11 GB GPUs (e.g. RTX 2080 Ti).
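A minimal sketch of the workaround described above (not from this repo; the variable names and batch size are illustrative). Masking devices with `CUDA_VISIBLE_DEVICES` before any CUDA framework initializes leaves the process with exactly one visible GPU, so multi-GPU code paths like `device_ids` become a no-op, and a moderate batch size avoids out-of-memory errors without clearing the CUDA cache:

```python
import os

# Expose only GPU 0 to this process. This must run before importing
# torch (or any library that initializes CUDA), so that frameworks
# enumerate a single device and data-parallel inference is disabled.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# Per the comment above, batch size 256 fits an 11 GB card
# (e.g. RTX 2080 Ti); lower it further if memory runs out.
BATCH_SIZE = 256
```

Equivalently, the mask can be set in the shell (`CUDA_VISIBLE_DEVICES=0 python eval.py`), which avoids ordering concerns with imports entirely.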
