-
Hi @Pubskiste, sorry for the complex setup. I'm moving away from LocalAI to LM Studio and Ollama; the next release is probably within the week. In the meantime, can you try LM Studio with "OpenAI Proxy Base URL" in the settings? #191 (reply in thread)
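A quick way to sanity-check that setting is to confirm the local server actually answers OpenAI-style requests before pointing the plugin at it. The sketch below is not part of the plugin; the base URL (LM Studio's common default of `http://localhost:1234/v1`) and the `model` value are assumptions you'd adjust to your own setup.

```ts
// Minimal sketch: verify an OpenAI-compatible local server (e.g. LM Studio)
// is reachable before entering its address as the "OpenAI Proxy Base URL".
// BASE_URL and the model name are assumptions -- adjust to your configuration.
const BASE_URL = "http://localhost:1234/v1";

async function checkProxy(): Promise<void> {
  // List the models the server reports (OpenAI-compatible /models endpoint).
  const models = await fetch(`${BASE_URL}/models`).then((r) => r.json());
  console.log("Available models:", models);

  // Send a single chat message to confirm end-to-end inference works.
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local-model", // placeholder; use whatever your server expects
      messages: [{ role: "user", content: "Say hello." }],
    }),
  });
  console.log(await res.json());
}

checkProxy().catch(console.error);
```

If both calls return JSON rather than an error page, the base URL should be usable in the plugin setting.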
-
Hi @logancyang, thanks for your answer. Do you have a hint on how to set up the "QA Settings"? Just selecting "LocalAI" as the provider doesn't work, so do I need an embedding provider like BERT? Thanks for your great work!
-
Hi,
I installed and configured the Copilot plugin on Windows 11, with LocalAI running in Docker on WSL2, following the documentation.
Now I have the following issue: when I set "LocalAI Model" to "llama-2-uncensored-q4ks", as described in the documentation, I get the error "... Unknown model". If I enter "luna-ai-llama2-uncensored.ggmlv3.q4_K_S.bin" instead, it seems to work and I can do a first chat. But already on the second message I get the error:
LangChain error: Error: output values have 1 keys, you must specify an output key or pass only 1 key as output
So which parameter is correct for "LocalAI Model"?
Thanks in advance.
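One way to answer the "which name does LocalAI accept" part is to ask LocalAI itself: it exposes an OpenAI-compatible /v1/models endpoint, and the `id` values it returns are the strings it accepts as the model parameter. The sketch below is only illustrative; the host and port assume a default LocalAI Docker setup and may differ on your machine.

```ts
// Minimal sketch (not part of the plugin): list the model names LocalAI accepts,
// so the value entered under "LocalAI Model" matches exactly.
// LOCALAI_URL is an assumption for a default Docker setup -- adjust as needed.
const LOCALAI_URL = "http://localhost:8080/v1";

async function listLocalAiModels(): Promise<void> {
  const res = await fetch(`${LOCALAI_URL}/models`);
  const data = await res.json();
  // Each entry's "id" is the exact string LocalAI expects as the model name.
  for (const model of data.data ?? []) {
    console.log(model.id);
  }
}

listLocalAiModels().catch(console.error);
```

Whatever names appear in that list are the ones that should not trigger the "Unknown model" error.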