eagle_llama but Transformers does not recognize this architecture #8
Comments
Hello,
I want to run:

from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("NVEagle/Eagle-X5-13B-Chat")

But I get:

ValueError: The checkpoint you are trying to load has model type `eagle_llama` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.

Transformers version: 4.44.2

Hello! You probably won't be able to achieve this, because the eagle_llama architecture is not included in the Transformers library, and I didn't find custom loading code in the model card on Hugging Face either. What you might want to do instead is load the model through the Eagle repository's own code, along the lines of the sketch below.
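A minimal sketch of that route. The repository URL, module paths, helper names, and the version pin below follow the LLaVA-style layout the Eagle codebase is derived from; they are assumptions, not verified details:

```python
# Sketch: load Eagle through the repository's own code instead of
# AutoModelForCausalLM. Repo URL, module paths, and helper names are
# assumed from the LLaVA-style layout, not verified.
#
#   git clone https://github.com/NVlabs/EAGLE
#   cd EAGLE
#   pip install -e .                     # installs the `eagle` package
#   pip install "transformers==4.37.2"   # assumed pin; check the repo's requirements

from eagle.mm_utils import get_model_name_from_path    # assumed helper
from eagle.model.builder import load_pretrained_model  # assumed module path

model_path = "NVEagle/Eagle-X5-13B-Chat"
model_name = get_model_name_from_path(model_path)

# Mirrors LLaVA's loader: returns the tokenizer, the multimodal model,
# the image processor, and the maximum context length.
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path, None, model_name
)
```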
However, this alone isn't sufficient to get started: Eagle is a family of multimodal large language models, and its weights alone are just fine-tuned versions of LLaVA's. Additionally, the repository downloads other models, such as the vision experts and the CLIP encoder. Here's a simplified version of load_pretrained_model:
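A rough sketch of what such a simplified loader can look like, assuming the LLaVA-style classes the Eagle codebase defines (EagleLlamaForCausalLM, get_vision_tower, and load_model are assumed names, not verified ones):

```python
# Rough sketch of a simplified load_pretrained_model, assuming the
# LLaVA-style classes the Eagle repository defines; class and method
# names here are assumptions.
import torch
from transformers import AutoTokenizer
from eagle.model import EagleLlamaForCausalLM  # assumed class

def load_pretrained_model(model_path, device="cuda", dtype=torch.float16):
    tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
    # The Eagle model class knows how to build its projector and vision
    # experts, which plain AutoModelForCausalLM cannot do (hence the
    # ValueError quoted in the issue above).
    model = EagleLlamaForCausalLM.from_pretrained(
        model_path, low_cpu_mem_usage=True, torch_dtype=dtype
    ).to(device)

    # Load the vision tower (CLIP encoder plus vision experts) and take
    # its image processor for input preprocessing.
    vision_tower = model.get_vision_tower()
    if not vision_tower.is_loaded:
        vision_tower.load_model()
    vision_tower.to(device=device, dtype=dtype)
    image_processor = vision_tower.image_processor

    context_len = getattr(model.config, "max_sequence_length", 2048)
    return tokenizer, model, image_processor, context_len
```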
Given that, if you intend to use it without Gradio, for terminal testing or to create an endpoint, you'll need to determine how you'll receive your images. Once you have the images and their corresponding prompts, you can proceed with the following steps:
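A sketch of those steps, using the simplified load_pretrained_model from the previous snippet. The helper names (process_images, tokenizer_image_token, conv_templates), the "vicuna_v1" template, and the image path are all LLaVA-style assumptions:

```python
# Sketch of terminal inference with LLaVA-style helpers; module paths,
# the "vicuna_v1" template name, and the image path are assumptions.
import torch
from PIL import Image

from eagle.constants import DEFAULT_IMAGE_TOKEN, IMAGE_TOKEN_INDEX  # assumed
from eagle.conversation import conv_templates                       # assumed
from eagle.mm_utils import process_images, tokenizer_image_token    # assumed

# load_pretrained_model here is the simplified loader sketched above.
tokenizer, model, image_processor, context_len = load_pretrained_model(
    "NVEagle/Eagle-X5-13B-Chat"
)

# However you receive images (file, upload, request body), end up with PIL.
image = Image.open("example.jpg").convert("RGB")
image_tensor = process_images([image], image_processor, model.config).to(
    model.device, dtype=torch.float16
)

# Build the prompt with the image placeholder token, then tokenize it so
# the placeholder maps to IMAGE_TOKEN_INDEX.
conv = conv_templates["vicuna_v1"].copy()
conv.append_message(conv.roles[0], DEFAULT_IMAGE_TOKEN + "\nDescribe this image.")
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()
input_ids = (
    tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt")
    .unsqueeze(0)
    .to(model.device)
)

with torch.inference_mode():
    output_ids = model.generate(
        input_ids, images=image_tensor, max_new_tokens=256, do_sample=False
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True).strip())
```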
Finally, use an older version of Transformers, such as the one pinned in the repository's requirements; with more recent versions, you'll likely encounter an exception when loading or running the model.