
Why does the code from Hugging Face keep reporting that it cannot recognize the model type tinyllava? #162

Open
MichealZhangxa opened this issue Dec 23, 2024 · 6 comments

Comments

@MichealZhangxa
I cannot use Zhang199/TinyLLaVA-Qwen2-0.5B-SigLIP, but tinyllava/TinyLLaVA-Phi-2-SigLIP-3.1B works fine. Upgrading the transformers library did not help either.

@ZhangXJ199
Collaborator

Try loading it with the load_pretrained_model function in load_model.py.

@MichealZhangxa
Author

> Try loading it with the load_pretrained_model function in load_model.py.

It still doesn't work.

@MichealZhangxa
Author

MichealZhangxa commented Dec 23, 2024

> Try loading it with the load_pretrained_model function in load_model.py.

What is also strange: I passed the local path of the checkpoint I downloaded from Hugging Face, and called

model = TinyLlavaForConditionalGeneration.from_pretrained(
    model_name_or_path,
    local_files_only=True,  # use local files only
    low_cpu_mem_usage=True,
    resume_download=True,
)

yet it still tries to download from Hugging Face, and my Linux subsystem (WSL) cannot access Hugging Face.
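A possible workaround for the unwanted download (a sketch, not verified against TinyLLaVA itself; the checkpoint path below is hypothetical): huggingface_hub and transformers both honor offline environment variables, and from_pretrained only skips the Hub when the string it receives actually resolves to a local directory. Note also that resume_download is a download-side option, so it does nothing useful alongside local_files_only=True.

```python
import os

# Force Hugging Face libraries into offline mode *before* importing transformers.
# HF_HUB_OFFLINE is read by huggingface_hub, TRANSFORMERS_OFFLINE by transformers;
# with these set, any attempted download fails fast instead of hanging.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

# Sanity-check that the path really is a local directory containing a config.json.
# If from_pretrained receives a string that does not resolve to a local folder,
# it treats it as a Hub repo id and tries to download it.
model_name_or_path = "/path/to/TinyLLaVA-Qwen2-0.5B-SigLIP"  # hypothetical local path
is_local = os.path.isdir(model_name_or_path) and os.path.isfile(
    os.path.join(model_name_or_path, "config.json")
)
print("resolves to a local checkpoint:", is_local)
```

If is_local prints False, the download attempt is explained: the path is wrong (or points at an incomplete snapshot), not the loading code.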

@ZhangXJ199
Collaborator

It is probably an environment problem. Re-run pip install -e . and try again.

@MichealZhangxa
Author

> It is probably an environment problem. Re-run pip install -e . and try again.

Could you please share your model structure (e.g., by printing the model)? I really cannot get it to run.

@MichealZhangxa
Author

> It is probably an environment problem. Re-run pip install -e . and try again.

So far several of us have failed to get this code running. Could the author please provide the parameters you used?
