RuntimeError: Unable to run model on ipex-llm[cpp] using intel 1240p #12751
Comments
Hi @1009058470, how did you pull …?
Hi @1009058470, I am not able to reproduce your issue. We have released a new version of ipex-llm Ollama; you may install it in a new conda env via …
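(For reference, the documented setup for ipex-llm's Ollama in a fresh conda environment looks roughly like the sketch below, based on the ipex-llm Ollama quickstart; the environment name llm-cpp is just an example.)

```cmd
:: Create and activate a clean conda environment (the name is arbitrary)
conda create -n llm-cpp python=3.11
conda activate llm-cpp

:: Install the latest ipex-llm with the llama.cpp/Ollama backend
pip install --pre --upgrade ipex-llm[cpp]
```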
Emmm, I have run that, but it seems it does not run on the GPU, only on the CPU.
Could you please provide the detailed Ollama server log?
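(A sketch of one way to capture such a log on Windows: OLLAMA_DEBUG is a standard Ollama environment variable, and the log file name here is arbitrary.)

```cmd
:: Enable verbose logging, then redirect server output to a file
set OLLAMA_DEBUG=1
ollama serve > server.log 2>&1
```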
Thanks, I have fixed that. It seems I forgot to run …, but I want to know: what is that?
Actually, on the Windows platform we don't need to run …
(⊙﹏⊙) But if I don't run that, the GPU cannot be used.
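(For context, the Windows launch sequence in the ipex-llm Ollama quickstart looks roughly like the sketch below; init-ollama.bat links the ipex-llm Ollama binary into the current directory, and the environment variables are the ones the quickstart suggests for running on an Intel GPU. Exact steps may differ by version.)

```cmd
:: Run once inside the conda env to create the ollama.exe symlinks
init-ollama.bat

:: Offload all layers to the Intel GPU and keep the SYCL kernel cache
set OLLAMA_NUM_GPU=999
set SYCL_CACHE_PERSISTENT=1
set ZES_ENABLE_SYSMAN=1
set no_proxy=localhost,127.0.0.1

:: Start the server
ollama serve
```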
Hardware environment
CPU: Intel i5-1240p / AMD
GPU: Intel Iris Xe / AMD Radeon
Memory: 16GB DDR4
OS: Windows 11
Steps to reproduce
I read this doc and tried to run Ollama on my machine,
and also set this value.
I also tried deepseek-r1:1.5b and llama3.2:1b,
and then the error showed up:
debug.txt
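(For anyone reproducing: the models mentioned above can be pulled and run with the standard Ollama CLI, e.g.:)

```cmd
:: Pull and run one of the models mentioned above
ollama run deepseek-r1:1.5b
```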
OS
Windows
GPU
![Image](…) (screenshot of GPU info; expiring attachment link removed)
Intel
CPU
Intel
Ollama version
ollama version is 0.5.1-ipexllm-20250123
Warning: client version is 0.5.7
But when I set the value described in the [doc](https://github.com/intel/ipex-llm/blob/main/docs/mddocs/Quickstart/ollama_quickstart.md#8-save-gpu-memory-by-specify-ollama_num_parallel1),
set OLLAMA_NUM_PARALLEL=1, the model can run, but when I try to say anything to it, it breaks down.
try_to_say_to_model_debug.txt
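(For completeness, the memory-saving suggestion from the linked quickstart section amounts to starting the server with a single parallel slot, roughly as sketched below; combine with the GPU variables above as needed.)

```cmd
:: Limit Ollama to one parallel request to reduce GPU memory usage
set OLLAMA_NUM_PARALLEL=1
ollama serve
```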