Issues: intel/ipex-llm
Open issues
#12809: [The current device architecture is not supported by sycl_ext_oneapi_device_architecture] (opened Feb 11, 2025 by ammrabet)
#12793: Runtime Configurations for Intel Core™ Ultra 9 Processor 288V (user issue; opened Feb 9, 2025 by morteza89)
#12791: Ollama: the GPU is still in use even after the model has finished text generation (user issue; opened Feb 7, 2025 by gitnohubz)
#12780: Ollama llama3.x models do not work with LangChain chat/tool integration (opened Feb 6, 2025 by tkarna)
#12773: When will support for the multimodal large model deepseek-ai/Janus-Pro-1B be available? (user issue; opened Feb 6, 2025 by szzzh)
#12772: Intel B580: not able to run Ollama serve on the GPU after following the guide (user issue; opened Feb 5, 2025 by Mushtaq-BGA)
#12762: [Windows-MTL-NPU]: OSError: [WinError -529697949] Windows Error 0xe06d7363 (user issue; opened Jan 29, 2025 by raj-ritu17)
#12751: RuntimeError: Unable to run model on ipex-llm[cpp] using Intel 1240P (user issue; opened Jan 25, 2025 by 1009058470)
#12736: vpux-compiler error occurred when using qwen2.5-7B with large content or prompts (opened Jan 22, 2025 by dockerg)