Error during inference: fetch failed #29
Comments
My URL uses a custom CA, but I also have NODE_EXTRA_CA_CERTS=/etc/ssl/certs/ca-certificates.crt set, which to my understanding should address any SSL issues with the custom CA.
I got some tests running locally and verified that this is related to the custom CA. Any chance of getting support for custom CA certs?
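
As a sanity check, it can help to run the same request from a plain Node script outside VSCode, launched with the variable set (NODE_EXTRA_CA_CERTS=/etc/ssl/certs/ca-certificates.crt node check-ca.mjs). A minimal sketch, assuming Node 18+ with built-in fetch (which in recent Node versions should honor NODE_EXTRA_CA_CERTS via the default TLS store) and reusing the placeholder endpoint and model from this issue:

// If this succeeds but the extension still fails, the variable is
// likely not reaching the VSCode extension host process at all.
const res = await fetch("https://notarealurl.io/api/generate", {
  method: "POST",
  body: JSON.stringify({ model: "stable-code:3b-code-q4_0", prompt: "c++" }),
});
console.log(await res.text());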
I get the same warning in VSCode: #3 (comment)
VSCode, as the host, controls all connections that extensions open and use, so this isn't specific to Llama Coder. Have you tried the solution from this Stack Overflow question?
I've tried the NODE_EXTRA_CA_CERTS solution, since disabling SSL verification is a really bad idea, but that didn't help. I know that similar plugins like Continue had to add something in the plugin itself to support extra certs before this worked.
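
For context, the in-plugin approach being described amounts to reading a CA bundle and handing it to the HTTP client explicitly, rather than relying on the environment. A minimal sketch, not Llama Coder's actual code, assuming node-fetch (whose requests accept a custom agent) and a hypothetical caPath value:

import { readFileSync } from "node:fs";
import https from "node:https";
import fetch from "node-fetch";

// Hypothetical configuration value; Llama Coder does not expose this setting.
const caPath = "/etc/ssl/certs/ca-certificates.crt";

// Note: the `ca` option replaces Node's default root store, so the bundle
// must contain every root certificate the endpoint chains to.
const agent = new https.Agent({ ca: readFileSync(caPath) });

const res = await fetch("https://notarealurl.io/api/generate", {
  method: "POST",
  body: JSON.stringify({ model: "stable-code:3b-code-q4_0", prompt: "c++" }),
  agent,
});
console.log(await res.text());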
I have an Ollama container running the stable-code:3b-code-q4_0 model. I'm able to interact with the model via curl:
curl -d '{"model":"stable-code:3b-code-q4_0", "prompt": "c++"}' https://notarealurl.io/api/generate
and get a response in a terminal in WSL, where I'm running VSCode.
However, when I set the Ollama Server Endpoint to https://notarealurl.io/, I just get:
[warning] Error during inference: fetch failed
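
One way to get past the generic message: when Node's built-in fetch rejects, the "fetch failed" TypeError carries the underlying error in its cause property. A minimal diagnostic sketch, assuming built-in fetch is in use (in the extension host or a test script) against the same placeholder URL:

try {
  await fetch("https://notarealurl.io/api/generate", {
    method: "POST",
    body: JSON.stringify({ model: "stable-code:3b-code-q4_0", prompt: "c++" }),
  });
} catch (err) {
  // With an untrusted custom CA this typically prints something like
  // "unable to verify the first certificate" or
  // "self-signed certificate in certificate chain".
  console.error((err as Error).cause ?? err);
}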