@qiuxin2012
iGPU under a Linux system. This is an easy issue to reproduce: use any model. Take llama3.2, chat with it, ask it anything, and switch between models if you're using Open WebUI. The same happens with DeepSeek and all the Qwen models, any model at all. Give me a single model that works with IPEX without garbage output, or without the GPU still being used even after you've ended the chat. It's not an obscure issue; just use it and you'll come across it. I'm not the only one facing this: I've seen multiple people mention the Ollama garbage output and the iGPU staying utilized after the chat has ended.
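To make the reproduction concrete, it looks roughly like this (a sketch; the model tags and the `intel_gpu_top` utility from intel-gpu-tools are just what I have at hand, substitute your own):

```bash
# First chat is usually fine; switch models and chat again, and the
# second response degenerates into garbage tokens on the iGPU.
ollama run llama3.2 "hello, introduce yourself"
ollama run qwen2.5 "hello, introduce yourself"   # garbled output here

# Even after the chat has ended, the model stays loaded and the iGPU
# keeps showing activity:
ollama ps        # model is still listed as loaded
intel_gpu_top    # from intel-gpu-tools; shows the iGPU still busy
```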
I don't know about the development process of the lib, but I think it would be a good idea to have a stable branch and a testing one.
I also tried ollama in the container, and it's worse there: it can barely say hello. The container also doesn't have the default behaviour of launching the start-ollama script automatically, so on every single boot you need to run the docker command to enter it and launch the script manually, which breaks my automated services.
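As a rough workaround sketch for the autostart part (the image name and the in-container script path below are assumptions from my setup; substitute whatever your container actually ships), the container could be run detached with a restart policy so the script launches on every boot without entering it manually:

```bash
# Hypothetical autostart: run detached, restart across reboots, and
# launch start-ollama directly instead of an interactive shell.
# Image name and script path are assumptions; substitute your own.
docker run -d --restart unless-stopped \
  --device /dev/dri \
  -p 11434:11434 \
  --name ipex-ollama \
  intelanalytics/ipex-llm-inference-cpp-xpu:latest \
  bash -c "/llm/scripts/start-ollama.sh; tail -f /dev/null"
```

The trailing `tail -f /dev/null` only matters if the script backgrounds the server; if it blocks in the foreground it is harmless.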
```
ipex-llm[cpp]==2.2.0b20250204
intel-oneapi-basekit 2024.0.0.49564-3
```
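For anyone trying to match this environment, the pip side can be pinned like so (the quoting is for shells that expand brackets; the oneAPI basekit version above appears to come from a system package manager rather than pip):

```bash
# Pin the exact ipex-llm build reported above.
pip install "ipex-llm[cpp]==2.2.0b20250204"
```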
I'm using ollama on an iGPU. Some models chat fine the first time, then they go crazy, as if they were drunk: garbage output, as far as I can tell.