
[Bug] Ollama models configured on the server side cannot be used #5265

Open
EnTaroYan opened this issue Jan 2, 2025 · 6 comments
Labels
🐛 Bug Something isn't working | 缺陷 ollama Relative to Ollama Provider and ollama models

Comments

EnTaroYan commented Jan 2, 2025

📦 Deployment environment

Docker

📌 Software version

1.40.2

💻 System environment

Ubuntu

🌐 Browser

Edge

🐛 Problem description

I self-host Ollama and lobechat. If the ollama model list is configured through Docker environment variables, the configured models cannot be used.
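For context, a minimal sketch of the kind of docker-compose configuration being described. OLLAMA_PROXY_URL and OLLAMA_MODEL_LIST are documented LobeChat variables, but the exact values and model-list syntax below are assumptions for illustration, not taken from this report:

```yaml
services:
  lobe-chat:
    image: lobehub/lobe-chat
    ports:
      - "3210:3210"
    environment:
      # Point LobeChat at the self-hosted Ollama instance
      - OLLAMA_PROXY_URL=http://ollama:11434
      # Server-side model list; the exact syntax here is an assumption
      - OLLAMA_MODEL_LIST=+qwen2.5,+qwen2.5:3b,+qwen2.5:7b
```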
If ollama is instead configured on the front-end page, the model-download interaction is very strange: every single character edit (without pressing Enter) triggers an attempt to pull the model, and models with a specific parameter-size tag cannot be pulled at all; only the latest tag works (I can pull qwen2.5, but not qwen2.5:3b or qwen2.5:7b).
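(As a possible workaround, an assumption on my part rather than something from this report: the tagged models could be pulled directly on the Ollama host with `ollama pull qwen2.5:3b`, after which they should be available to LobeChat without going through the front-end pull flow.)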

📷 Steps to reproduce

  1. Self-host Ollama and lobechat
  2. Configure the ollama model list in docker-compose.yml (as in the sketch above).
  3. Open lobechat and start a conversation

🚦 Expected result

Conversations with ollama work normally

📝 Additional information

No response

@EnTaroYan EnTaroYan added the 🐛 Bug Something isn't working | 缺陷 label Jan 2, 2025
@github-project-automation github-project-automation bot moved this to Roadmap - Chat 1.x in Lobe Chat Routine Jan 2, 2025
@lobehubbot
Member

👀 @EnTaroYan

Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
Please make sure you have given us as much context as possible.


@dosubot dosubot bot added the ollama Relative to Ollama Provider and ollama models label Jan 2, 2025

dosubot bot commented Jan 2, 2025

I found some similar discussions and issues related to Ollama integration with LobeChat that might be helpful:

  1. There is an open discussion titled "[RFC] 065 - Ollama 实现重构" (Ollama implementation refactor), which might provide insight into ongoing changes to the Ollama integration [1].

  2. Another discussion, "我在云服务器上部署lobechat时,想要和ollama集成时遇到了问题。" (problems integrating Ollama when deploying lobechat on a cloud server), addresses issues encountered while integrating Ollama on a cloud server [2].

  3. A closed issue suggested solving a cross-origin problem with Ollama by setting the environment variables OLLAMA_HOST=0.0.0.0 and OLLAMA_ORIGINS=*, and pointing LobeChat's Ollama URL at the IPv4 address [3] (see the sketch after this list).

  4. Another solution involved running Ollama and LobeChat behind an nginx proxy and making specific configuration changes in the docker-compose files for both Ollama and nginx [4].
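A minimal docker-compose sketch of the cross-origin fix from item 3. OLLAMA_HOST and OLLAMA_ORIGINS are documented Ollama variables; the service layout and port mapping are assumptions for illustration:

```yaml
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    environment:
      # Listen on all interfaces so other containers/hosts can reach it
      - OLLAMA_HOST=0.0.0.0
      # Allow cross-origin requests from the LobeChat front end
      - OLLAMA_ORIGINS=*
```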

These threads and solutions might help you troubleshoot the issue with the Ollama model list configuration and the fetching problem.


@EnTaroYan
Author

Version: 1.40.2

@EnTaroYan EnTaroYan changed the title [Bug] to [Bug] Ollama models configured on the server side cannot be used Jan 2, 2025
@EnTaroYan
Author

Update: This bug occurs when client-side request mode is turned off; once it is turned on, Ollama works normally. However, if the server is configured with -language_model_settings, users cannot change whether client-side request mode is enabled. It seems client-side request mode is disabled by default? After clearing cookies it is disabled.
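For reference, a sketch of the kind of server configuration being described, assuming the -language_model_settings flag refers to LobeChat's documented FEATURE_FLAGS environment variable:

```yaml
services:
  lobe-chat:
    image: lobehub/lobe-chat
    environment:
      # Hide the language-model settings UI from users; with this set,
      # users cannot toggle client-side request mode themselves
      - FEATURE_FLAGS=-language_model_settings
```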

