
Question about learning rate in Multi-gpu training #25

Open
jkdxg8837 opened this issue Jul 6, 2022 · 1 comment

@jkdxg8837

Hello @kevinlin311tw, thanks for your awesome work!

I have noticed that in your official multi-GPU training tutorial, for the 2-GPU case you set args.learning_rate = 3e-4 and args.backbone_coef_lr = 0.05, which means the backbone learning rate reaches 1.5e-5 after the warm-up epoch.

Meanwhile, in your official tensorboard_log extracted from msrvtt-table1, the model was trained with 16 GPUs. Yet after the warm-up epoch, TensorBoard shows the learning rate also reached 1.5e-5, the same value as in the 2-GPU case above.

This seems like a problem to me: shouldn't the learning rate be adjusted according to the world size? In my opinion, the learning rate should be larger for a larger world size, but I haven't found any such adjustment in your code.

Looking forward to your reply!

@kevinlin311tw
Member

Thank you for the question. In our experiments, we use 16 gpus for training. If you use a different number of gpus, the parameters should be adjusted manually.
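
Since the reply says the parameters should be adjusted manually, here is a minimal sketch of one common convention for doing so, the linear scaling rule (Goyal et al., 2017), under which the learning rate scales proportionally with the world size relative to a reference setup. This is an illustration, not the repo's actual method; the function scale_learning_rate and the concrete numbers are hypothetical.

```python
import torch.distributed as dist

def scale_learning_rate(base_lr, base_world_size, world_size=None):
    """Linearly rescale base_lr (tuned for base_world_size GPUs)
    to the current world size. Hypothetical helper, not from SwinBERT."""
    if world_size is None:
        # Fall back to 1 when not running under torch.distributed.
        world_size = dist.get_world_size() if dist.is_initialized() else 1
    return base_lr * world_size / base_world_size

# Example: recipe tuned for 16 GPUs, now training on 2 GPUs.
base_lr = 3e-4                       # args.learning_rate in the 16-GPU recipe
lr = scale_learning_rate(base_lr, base_world_size=16, world_size=2)
backbone_lr = lr * 0.05              # args.backbone_coef_lr applies on top
print(lr, backbone_lr)               # 3.75e-05, 1.875e-06
```

Whether linear scaling is appropriate depends on the batch size per GPU and the warm-up schedule, so the rescaled value should still be validated empirically.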
