[Bug] Ollama models configured on the server side cannot be used #5265
Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
I found some similar discussions and issues related to Ollama integration with LobeChat that might be helpful:
These threads and solutions might help you troubleshoot the issue with the Ollama model list configuration and the fetching problem.
Version: 1.40.2
Update: This bug occurs when client request mode is turned off; once it is turned on, Ollama works normally. However, if the server is configured with -language_model_settings, users cannot change whether client request mode is enabled. It seems client request mode is disabled by default? After clearing cookies it is disabled.
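For reference, a minimal sketch of the server-side flag mentioned above, assuming it is passed through LobeChat's FEATURE_FLAGS environment variable in Docker (the exact variable name and flag syntax are assumptions and should be checked against the LobeChat docs):

```bash
# Hypothetical docker run showing how the flag might be set (not verified against this deployment).
# FEATURE_FLAGS is assumed to accept a comma-separated list of flags; "-language_model_settings"
# is assumed to hide the language model settings page, which would also hide the
# client request mode toggle described in the update above.
docker run -d --name lobe-chat \
  -p 3210:3210 \
  -e FEATURE_FLAGS="-language_model_settings" \
  lobehub/lobe-chat
```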
📦 Deployment environment
Docker
📌 Software version
1.40.2
💻 System environment
Ubuntu
🌐 Browser
Edge
🐛 Problem description
I self-host Ollama and LobeChat. If the Ollama model list is configured via Docker environment variables, it cannot be used; a configuration sketch is shown below.
If Ollama is configured on the frontend page instead, the model download interaction is strange: every edited character (without pressing Enter) triggers an attempt to pull the model, and models with a specific parameter-size tag cannot be pulled, only the latest tag (I can pull qwen2.5, but not qwen2.5:3b or qwen2.5:7b).
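For context, a minimal sketch of the kind of server-side configuration meant here, assuming the commonly documented LobeChat variables OLLAMA_PROXY_URL and OLLAMA_MODEL_LIST (exact variable names, list syntax, and the URL are assumptions and should be checked against the LobeChat docs):

```bash
# Hypothetical docker run illustrating the server-side Ollama setup described above.
# OLLAMA_PROXY_URL is assumed to point at the self-hosted Ollama instance;
# OLLAMA_MODEL_LIST is assumed to take a comma-separated list of model ids,
# including tagged variants such as qwen2.5:3b.
docker run -d --name lobe-chat \
  -p 3210:3210 \
  -e OLLAMA_PROXY_URL="http://host.docker.internal:11434" \
  -e OLLAMA_MODEL_LIST="qwen2.5,qwen2.5:3b,qwen2.5:7b" \
  lobehub/lobe-chat
```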
📷 Steps to reproduce
🚦 Expected result
Ollama can be used normally for conversations.
📝 Additional information
No response