Description
When using the WrenAI launcher on Windows and selecting the custom option for setup, I was able to bring up the Docker network. However, the wrenai-wren-ai-service-1 container logs an error, fails to start the application, and keeps retrying. The Docker container itself remains up.
To Reproduce
Steps to reproduce the behavior:
- Install the WrenAI launcher on Windows 11
- Launch Docker Desktop
- Launch Ollama and make sure you have nomic-embed-text:latest and llama3.1:latest pulled (commands for reference below)
- Update ~/.wrenai/config.yaml and ~/.wrenai/.env
- Launch WrenAI and select the custom LLM provider
- Open Docker Desktop and go to the WrenAI service container logs
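For reference, the two models can be pulled and verified with the standard Ollama CLI (assuming Ollama is installed directly on the Windows host, which is why config.yaml points at host.docker.internal):

# pull the embedding and chat models referenced in config.yaml
ollama pull nomic-embed-text:latest
ollama pull llama3.1:latest
# confirm both show up locally
ollama list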
Expected behavior
No errors should appear in the logs, and the application should start without throwing.
Desktop Environment:
- OS: Windows 11
- Browser: Chrome
Wren AI Information
- Wrenai Launcher (Windows) Version: 0.29.1
- Wrenai service: 0.14.0
Additional context
I also changed PROJECT_DIR in .env while trying to fix the issue, but it did not help:
COMPOSE_PROJECT_NAME=wrenai
PLATFORM=linux/amd64
PROJECT_DIR=C:/Users/pc/.wrenai
...
Relevant log output
The config.yaml as it appears inside the wrenai-wren-ai-service-1 container:
# cat /app/config.yaml
type: llm
provider: litellm_llm
models:
  - api_base: http://host.docker.internal:11434/v1
    model: ollama_chat/llama3.1:latest # ollama_chat/<ollama_model_name>
    alias: default
    timeout: 600
    kwargs:
      n: 1
      temperature: 0
---
type: embedder
provider: litellm_embedder
models:
  - model: openai/nomic-embed-text:latest
    alias: default
    api_base: http://host.docker.internal:11434/v1
    timeout: 600
---
type: document_store
provider: qdrant
location: http://qdrant:6333
embedding_model_dim: 768 # put your embedding model dimension here
timeout: 120
recreate_index: true
---
type: engine
provider: wren_ui
endpoint: http://wren-ui:3000
---
type: pipeline
pipes:
  - name: db_schema_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: historical_question_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: table_description_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: db_schema_retrieval
    llm: litellm_llm.default
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: historical_question_retrieval
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: sql_generation
    llm: litellm_llm.default
    engine: wren_ui
    document_store: qdrant
  - name: sql_correction
    llm: litellm_llm.default
    engine: wren_ui
    document_store: qdrant
  - name: followup_sql_generation
    llm: litellm_llm.default
    engine: wren_ui
    document_store: qdrant
  - name: sql_answer
    llm: litellm_llm.default
  - name: semantics_description
    llm: litellm_llm.default
  - name: relationship_recommendation
    llm: litellm_llm.default
    engine: wren_ui
  - name: question_recommendation
    llm: litellm_llm.default
  - name: question_recommendation_db_schema_retrieval
    llm: litellm_llm.default
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: question_recommendation_sql_generation
    llm: litellm_llm.default
    engine: wren_ui
    document_store: qdrant
  - name: chart_generation
    llm: litellm_llm.default
  - name: chart_adjustment
    llm: litellm_llm.default
  - name: intent_classification
    llm: litellm_llm.default
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: misleading_assistance
    llm: litellm_llm.default
  - name: data_assistance
    llm: litellm_llm.default
  - name: sql_pairs_indexing
    document_store: qdrant
    embedder: litellm_embedder.default
  - name: sql_pairs_retrieval
    document_store: qdrant
    embedder: litellm_embedder.default
    llm: litellm_llm.default
  - name: preprocess_sql_data
    llm: litellm_llm.default
  - name: sql_executor
    engine: wren_ui
  - name: user_guide_assistance
    llm: litellm_llm.default
  - name: sql_question_generation
    llm: litellm_llm.default
  - name: sql_generation_reasoning
    llm: litellm_llm.default
  - name: followup_sql_generation_reasoning
    llm: litellm_llm.default
  - name: sql_regeneration
    llm: litellm_llm.default
    engine: wren_ui
  - name: instructions_indexing
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: instructions_retrieval
    embedder: litellm_embedder.default
    document_store: qdrant
  - name: sql_functions_retrieval
    engine: wren_ibis
    document_store: qdrant
  - name: project_meta_indexing
    document_store: qdrant
  - name: sql_tables_extraction
    llm: litellm_llm.default
  - name: sql_diagnosis
    llm: litellm_llm.default
  - name: sql_knowledge_retrieval
    engine: wren_ibis
    document_store: qdrant
---
settings:
  column_indexing_batch_size: 50
  table_retrieval_size: 10
  table_column_retrieval_size: 100
  allow_using_db_schemas_without_pruning: false
  query_cache_maxsize: 1000
  query_cache_ttl: 3600
  langfuse_host: https://cloud.langfuse.com
  langfuse_enable: true
  logging_level: DEBUG
  development: false
#
Error Logs:
I0110 14:42:44.540 8 wren-ai-service:42] Imported Provider: src.providers.llm.azure_openai
/app/.venv/lib/python3.12/site-packages/pydantic/_internal/_config.py:345: UserWarning: Valid config keys have changed in V2:
* 'fields' has been removed
warnings.warn(message, UserWarning)
I0110 14:42:45.347 8 wren-ai-service:66] Registering provider: litellm_llm
I0110 14:42:45.347 8 wren-ai-service:42] Imported Provider: src.providers.llm.litellm
I0110 14:42:45.349 8 wren-ai-service:66] Registering provider: ollama_llm
I0110 14:42:45.349 8 wren-ai-service:42] Imported Provider: src.providers.llm.ollama
I0110 14:42:45.383 8 wren-ai-service:66] Registering provider: openai_llm
I0110 14:42:45.383 8 wren-ai-service:42] Imported Provider: src.providers.llm.openai
I0110 14:42:45.383 8 wren-ai-service:42] Imported Provider: src.providers.loader
ERROR: Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/starlette/routing.py", line 693, in lifespan
    async with self.lifespan_context(app) as maybe_state:
  File "/usr/local/lib/python3.12/contextlib.py", line 204, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/fastapi/routing.py", line 133, in merged_lifespan
    async with original_context(app) as maybe_original_state:
  File "/usr/local/lib/python3.12/contextlib.py", line 204, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/src/__main__.py", line 29, in lifespan
    pipe_components = generate_components(settings.components)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/src/providers/__init__.py", line 391, in generate_components
    config = transform(configs)
             ^^^^^^^^^^^^^^^^^^
  File "/src/providers/__init__.py", line 294, in transform
    converted = processor(entry)
                ^^^^^^^^^^^^^^^^
  File "/src/providers/__init__.py", line 132, in embedder_processor
    "dimension": model["dimension"],
                 ~~~~~^^^^^^^^^^^^^
KeyError: 'dimension'
ERROR: Application startup failed. Exiting.
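Given the KeyError: 'dimension' raised in embedder_processor (the traceback shows it reading model["dimension"]), my guess is that the embedder model entry in config.yaml is expected to declare its own dimension field, not just the embedding_model_dim under document_store. A minimal sketch of what I assume the embedder section should look like (768 being the dimension of nomic-embed-text; the extra key is an assumption based only on the traceback above, not confirmed against the wren-ai-service source):

type: embedder
provider: litellm_embedder
models:
  - model: openai/nomic-embed-text:latest
    alias: default
    api_base: http://host.docker.internal:11434/v1
    timeout: 600
    dimension: 768  # assumption: embedder_processor appears to require this per-model key

I have not yet verified whether this change resolves the startup failure.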