ENH: optimize MPS on Mac for Qwen2.5-VL #3524
Conversation
@@ -119,6 +119,16 @@ def load(self):
                torch_dtype="float16",
                **kwargs,
            ).eval()
        elif device == "mps":
            # MacOS special, https://github.com/QwenLM/Qwen2.5-VL/issues/777
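A minimal sketch of the device-selection logic the patch implies: CUDA first, then Apple's MPS backend, then CPU, with float16 forced on MPS because the backend does not support float64 (and some float32 paths). The helper names `pick_device` and `load_kwargs_for` are illustrative, not part of the actual PR.

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the preferred torch device string, in priority order:
    CUDA, then MPS (Apple Silicon), then CPU."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"


def load_kwargs_for(device: str) -> dict:
    """Illustrative per-device model-loading options.

    On MPS, half precision is used to avoid unsupported-dtype errors;
    see QwenLM/Qwen2.5-VL#777 for the upstream discussion.
    """
    if device in ("cuda", "mps"):
        return {"torch_dtype": "float16", "device_map": device}
    return {"torch_dtype": "float32"}
```

In real code the availability flags would come from `torch.cuda.is_available()` and `torch.backends.mps.is_available()`.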
The issue described there seems to have been fixed already. I checked the file https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct/blob/main/preprocessor_config.json and it is already correct.
What problem is this PR trying to address? The issue linked in the comment does not appear to be related to MPS at all; did you link the wrong issue?
Running Qwen2.5-VL-7B on macOS may raise the following exception:
To fix it, we need to use these options. This issue is also mentioned at QwenLM/Qwen2.5-VL#760 and may affect all Qwen2.5-VL models.
Sorry, I missed your comment. This model has since been modified and moved into transformers/multimodel/qwen2_vl.py, so the PR now has a conflict; could you rebase and fix it accordingly?
No description provided.