I have open-webui 0.4.8 with pipelines that were available at the time of the 0.4.8 release. They are all installed with Docker.
Gemini 2.0 Flash, which was just released yesterday, works fine, but trying to use LearnLM produces this error:
Error: Invalid model name format: learnlm-1.5-pro-experimental
The Docker logs don't show anything I find relevant:
open-webui | INFO: 188.25.145.71:0 - "POST /api/v1/memories/query HTTP/1.1" 200 OK
open-webui | INFO [open_webui.apps.openai.main] get_all_models()
pipelines | INFO: 172.18.0.1:42978 - "GET /models HTTP/1.1" 200 OK
open-webui | INFO [open_webui.apps.ollama.main] get_all_models()
pipelines | INFO: 172.18.0.1:42994 - "POST /google_genai.learnlm-1.5-pro-experimental/filter/inlet HTTP/1.1" 200 OK
pipelines | google_genai.learnlm-1.5-pro-experimental
pipelines | google_genai.learnlm-1.5-pro-experimental
pipelines | INFO: 172.18.0.1:43008 - "POST /chat/completions HTTP/1.1" 200 OK
open-webui | INFO: 188.25.145.71:0 - "POST /api/chat/completions HTTP/1.1" 200 OK
open-webui | INFO: 188.25.145.71:0 - "POST /api/v1/chats/d28a614a-1ea5-418f-9564-9fe88496e640 HTTP/1.1" 200 OK
open-webui | INFO: 188.25.145.71:0 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
pipelines | INFO: 172.18.0.1:43022 - "POST /google_genai.learnlm-1.5-pro-experimental/filter/outlet HTTP/1.1" 200 OK
open-webui | INFO: 188.25.145.71:0 - "POST /api/chat/completed HTTP/1.1" 200 OK
open-webui | INFO: 188.25.145.71:0 - "POST /api/v1/chats/d28a614a-1ea5-418f-9564-9fe88496e640 HTTP/1.1" 200 OK
open-webui | INFO: 188.25.145.71:0 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
open-webui | INFO: 188.25.145.71:0 - "POST /api/task/title/completions HTTP/1.1" 200 OK
open-webui | INFO: 188.25.145.71:0 - "POST /api/v1/chats/d28a614a-1ea5-418f-9564-9fe88496e640 HTTP/1.1" 200 OK
open-webui | INFO: 188.25.145.71:0 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
open-webui | INFO: 188.25.145.71:0 - "GET /api/v1/chats/d28a614a-1ea5-418f-9564-9fe88496e640/tags HTTP/1.1" 200 OK
open-webui | Count of chats for tag 'science': 6
open-webui | Count of chats for tag 'technology': 47
open-webui | Count of chats for tag 'general': 12
open-webui | INFO: 188.25.145.71:0 - "DELETE /api/v1/chats/d28a614a-1ea5-418f-9564-9fe88496e640/tags/all HTTP/1.1" 200 OK
open-webui | INFO: 188.25.145.71:0 - "GET /api/v1/chats/all/tags HTTP/1.1" 200 OK
open-webui | INFO: 188.25.145.71:0 - "POST /api/task/tags/completions HTTP/1.1" 200 OK
open-webui | [] physics
open-webui | INFO: 188.25.145.71:0 - "POST /api/v1/chats/d28a614a-1ea5-418f-9564-9fe88496e640/tags HTTP/1.1" 200 OK
open-webui | INFO: 188.25.145.71:0 - "GET /api/v1/chats/d28a614a-1ea5-418f-9564-9fe88496e640 HTTP/1.1" 200 OK
open-webui | INFO: 188.25.145.71:0 - "GET /api/v1/chats/all/tags HTTP/1.1" 200 OK
open-webui | INFO: 188.25.145.71:0 - "GET /api/v1/chats/?page=1 HTTP/1.1" 200 OK
tjbck transferred this issue from open-webui/open-webui on Dec 12, 2024
if not model_id.startswith("gemini-"):
    return f"Error: Invalid model name format: {model_id}"
If the check is commented out, the model works.
I can make a pull request to disable the check, I guess. It's also possible to allow learnlm- as a model ID prefix, but that approach would require manually adding every future Google model prefix as well.
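As a middle ground between disabling the check entirely and maintaining a hardcoded prefix list, the validation could accept anything that merely looks like a well-formed Google model ID. This is a hypothetical sketch, not the pipeline's actual code; the function name and regex are assumptions:

```python
import re

# Assumed shape of a Google GenAI model ID: lowercase letters and digits,
# with dots and hyphens allowed after the first character
# (e.g. "gemini-2.0-flash", "learnlm-1.5-pro-experimental").
MODEL_ID_RE = re.compile(r"^[a-z0-9][a-z0-9.-]*$")

def is_valid_model_id(model_id: str) -> bool:
    """Return True if model_id looks like a plausible Google model name.

    Hypothetical replacement for the hardcoded "gemini-" prefix check:
    it would accept future model families without code changes while
    still rejecting obviously malformed input.
    """
    return bool(MODEL_ID_RE.match(model_id))
```

With this check, both `gemini-2.0-flash` and `learnlm-1.5-pro-experimental` pass, while garbage input is still rejected, so no per-family prefix list needs to be maintained.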