llama3.1 8b: training a 32k-context model is slow and the loss is abnormally high #348

Open
ARQlalala opened this issue Sep 20, 2024 · 1 comment


@ARQlalala

Hello, I'm training llama3.1 8b with a 32k context length, using the same training configuration as in the README, but each iteration takes a long time: with DeepSpeed in LLaMA-Factory an iteration takes 36 s, while with Pai-Megatron it takes 60 s. The loss is also quite high: with DeepSpeed in LLaMA-Factory the loss is around 1, but with Pai-Megatron it is around 10.

Can TP and PP really account for such a large difference?
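
For a like-for-like comparison, it may help to normalize the per-iteration time by the number of tokens processed. Below is a minimal sketch of that calculation; the global batch size of 8 is only a placeholder assumption, since neither run's batch configuration is stated in this issue.

```python
# Minimal sketch: convert seconds-per-iteration into token throughput so the
# LLaMA-Factory/DeepSpeed and Pai-Megatron runs can be compared directly.
# The global batch size below is a placeholder, not a value from this issue.

SEQ_LEN = 32768  # 32k context length


def tokens_per_sec(global_batch_size: int, seq_len: int, sec_per_iter: float) -> float:
    """Total tokens consumed per second by one training iteration."""
    return global_batch_size * seq_len / sec_per_iter


# Hypothetical example: if both runs used a global batch size of 8,
# the reported 36 s vs. 60 s per iteration would correspond to:
deepspeed_tps = tokens_per_sec(8, SEQ_LEN, 36.0)   # ~7,282 tokens/s
megatron_tps = tokens_per_sec(8, SEQ_LEN, 60.0)    # ~4,369 tokens/s
print(f"DeepSpeed: {deepspeed_tps:,.0f} tokens/s, Pai-Megatron: {megatron_tps:,.0f} tokens/s")
```

If the two frameworks are running with different global batch sizes or different TP/PP/data-parallel layouts, the raw seconds-per-iteration figures are not directly comparable.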

@kkkeepgoing

Are you using batch size 8 and global batch size 1? Mine are 128 and 1, and one iteration takes ten minutes; working it out, that seems to be about the same speed as yours. Have you found any solution so far?
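
(As a rough check, and assuming the 128 above is the global batch size with a 32768-token sequence length, ten minutes per iteration works out to about 128 × 32768 / 600 ≈ 7,000 tokens/s in total; a direct comparison would need the original poster's global batch size as well.)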
