Hello, I'm training llama3.1 8b with a 32k context, using the training configuration from the README. However, each iteration takes a long time: with deepspeed in llamafactory an iteration takes 36 s, while with pai-megatron it takes 60 s. The loss is also much higher: with deepspeed in llamafactory the loss is around 1, but with pai-megatron it is around 10.
Can tp and pp really cause such a large difference?
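For a like-for-like speed comparison, it may help to express both numbers as tokens per second rather than raw iteration time. A minimal sketch, assuming both runs process the same number of samples per iteration and use the 32k sequence length mentioned above (the `global_batch_size` value below is a placeholder, not taken from this issue):

```python
# Rough throughput comparison between the two setups.
# Assumption: both runs see the same global batch size per iteration;
# substitute the actual value from your training config.
SEQ_LEN = 32 * 1024          # 32k context length from the issue
global_batch_size = 8        # placeholder, not taken from the issue

def tokens_per_second(iter_time_s: float) -> float:
    return global_batch_size * SEQ_LEN / iter_time_s

print("deepspeed (llamafactory):", tokens_per_second(36.0))  # 36 s per iteration
print("pai-megatron:            ", tokens_per_second(60.0))  # 60 s per iteration
```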
Is your setup batch size 8 and global batch size 1? Mine is 128 and 1, and one iteration takes about ten minutes; doing the math, that seems to be roughly the same speed as yours. Is there any solution for this yet?
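The rough per-sample arithmetic behind "roughly the same speed", assuming 8 and 128 are the respective global batch sizes and the reported times are 60 s and ~10 min (600 s) per iteration (both the interpretation of the batch-size numbers and the exact times are assumptions):

```python
# Per-sample iteration cost, as a rough comparison of the two reports.
# Assumption: 8 and 128 are global batch sizes; 60 s and 600 s are iteration times.
print(60.0 / 8)     # ~7.5 s per 32k-token sample (original report)
print(600.0 / 128)  # ~4.7 s per 32k-token sample (this comment)
# Same order of magnitude, hence "roughly the same speed".
```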