
After converting llama3.1-8b to Megatron-mcore format, the model size grew from 15G to 71G even though the precision is still bf16. Is this normal? #356

Open
kkkeepgoing opened this issue Sep 26, 2024 · 3 comments

Comments

@kkkeepgoing

No description provided.

@jerryli1981
Collaborator

Hello, could you send us the conversion command so we can reproduce the issue?

@kkkeepgoing
Author

kkkeepgoing commented Sep 27, 2024

Hi, here is the command:
bash hf2mcore_convertor_llama3_1.sh 8B {hf_path} {megatron_path} 8 1 false true false bf16 {hf_path}
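In case it helps while reproducing: a quick way to check whether the saved tensors are actually bf16 and whether optimizer state is being written next to the weights. This is only a diagnostic sketch; the checkpoint path and the mp_rank_00/model_optim_rng.pt layout are assumptions about where hf2mcore_convertor_llama3_1.sh writes its output, so adjust them to the actual directory on disk.

```python
import torch
from pathlib import Path

# Assumed location of one tensor-parallel shard of the converted checkpoint;
# adjust to wherever the conversion script actually wrote its output.
ckpt_path = Path("{megatron_path}/release/mp_rank_00/model_optim_rng.pt")

state = torch.load(ckpt_path, map_location="cpu", weights_only=False)

# If an optimizer entry shows up here, that alone can inflate the on-disk size
# even when the weights themselves are stored in bf16.
print("Top-level keys:", list(state.keys()))

total_bytes = 0
for name, value in state.get("model", {}).items():
    if torch.is_tensor(value):
        total_bytes += value.numel() * value.element_size()
        print(f"{name}: {value.dtype}, shape={tuple(value.shape)}")

print(f"Parameter bytes in this shard: {total_bytes / 1e9:.2f} GB")
```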

@divisionblur

What is the cause of this?
