feat: introducing Ollama embeddings properties. #2690
base: main
Conversation
Someone is attempting to deploy a commit to the Quivr-app Team on Vercel. A member of the Team first needs to authorize it.
Thanks a lot for this PR! Since I'm on holiday (though still looking at PRs), @AmineDiro will review this PR ;)
Hi, could you please provide an example of the changes to the env (the OLLAMA_EMBEDDINGS_* part, please)? Thanks in advance!
Fair! Here are my props:
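A minimal sketch of what such OLLAMA_EMBEDDINGS_* entries could look like in the .env file; the exact variable names depend on this patch and are assumptions here, not confirmed values:

```env
# Hypothetical example only; variable names are assumptions, adjust to what the patch actually reads.
# URL of the local Ollama server (from inside Docker, host.docker.internal is commonly used).
OLLAMA_API_BASE_URL=http://host.docker.internal:11434
# Name of the embeddings model that has already been pulled in Ollama.
OLLAMA_EMBEDDINGS_MODEL=llama3
```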
@filipe-omnix can you confirm if this patch is useful for you?
I applied the above update, but still encountered an error during local testing: {"error": "model 'llama2' not found, try pulling it first"}. The following are the debugging logs:
1. get_embeddings in models/setting.py, model is llama3: here you can see that the model is llama3, indicating that the configuration is valid.
2. similarity_search in vectorstore/supabase.py: the model here has been changed back to llama2, and the previous embeddings have not been used.
3. Error log: backend-core | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@andyzhangwp I'm afraid the patch hasn't been applied fully. First, check that …
What if we want to use multiple llama models?
Description
Ollama embeddings should be properly configured via these props. Currently only base_url is passed to OllamaEmbeddings, which causes the following issues:
- ollama/dolphin-phi is available on Ollama, but /chat/{chat_id}/question throws {"error":"model 'llama2' not found, try pulling it first"} (#2056)
- [Bug]: No model on ollama was used to answer (#2595)
- The default llama2 model yields unreasonably large 4k-dimensional vectors (compared with OpenAI's 1.5k).

This change lets users configure which embeddings model hosted in Ollama is used.
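For illustration, a minimal sketch of how the embeddings could be constructed once the model name is read from settings. The settings class and field names below are assumptions for the example, not necessarily those used by this patch; only the OllamaEmbeddings call reflects the intended change (passing model in addition to base_url):

```python
from langchain_community.embeddings import OllamaEmbeddings
from pydantic_settings import BaseSettings


class BrainSettings(BaseSettings):
    # Assumed field names; the patch may expose these under different keys.
    ollama_api_base_url: str = "http://localhost:11434"
    ollama_embeddings_model: str = "llama2"


def get_embeddings(settings: BrainSettings) -> OllamaEmbeddings:
    # Pass the configured model alongside base_url, so embeddings are no longer
    # hard-wired to the default 'llama2' (the source of the "model not found" errors).
    return OllamaEmbeddings(
        base_url=settings.ollama_api_base_url,
        model=settings.ollama_embeddings_model,
    )
```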
Checklist before requesting a review
Please delete options that are not relevant.
Screenshots (if appropriate):