Using local Ollama models returns a JSON structure instead of executing it #1577
Comments
Same here using phi, llama3.2, and mistral on macOS 15.2.
The same, but it works OK with …
I found the solution to make it work. Run these before you run the code: …
I ran this code before running interpreter and now I get the following error: …
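(The exact commands referenced in the comments above aren't preserved in this capture. As a hedged sketch only, not the commenter's verified fix: a workaround commonly discussed for raw tool-call JSON leaking from Ollama backends is disabling function calling via Open Interpreter's Python API. The `llm.supports_functions` setting is an assumption here, not confirmed by these comments.)

```python
# Hedged sketch of a possible workaround, not the verified fix from the thread.
# Assumes Open Interpreter's Python API and its llm.supports_functions setting.
from interpreter import interpreter

interpreter.llm.model = "ollama/llama3.2"   # any local Ollama model
interpreter.llm.supports_functions = False  # assumption: avoid raw tool-call JSON in output
interpreter.chat()                          # interactive session, like the `interpreter` CLI
```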
Describe the bug
Today I installed Open Interpreter and wanted to use it locally.
But when I use a local Ollama model, I get a JSON structure as output:
Loading qwen2.5-coder:32b...
Model loaded.
Open Interpreter will require approval before running code.
Use interpreter -y to bypass this.
Press CTRL-C to exit.
{"name": "execute", "arguments":{"language": "shell", "code": "dir %USERPROFILE%\Pictures"}}
But when I use an OpenAI model, I get this output, as expected:
ls ~/Images
Would you like to run this code? (y/n)
If you need further actions or details on any specific file, please let me know!
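For illustration, the structure printed in the Ollama case looks like an unparsed tool call. A minimal sketch of what that payload decodes to, using the shell example from this report (the backslash is escaped here so the string is valid JSON):

```python
import json

# Tool-call payload as printed in the Ollama output above
# (backslash escaped so the string parses as JSON):
raw = '{"name": "execute", "arguments": {"language": "shell", "code": "dir %USERPROFILE%\\\\Pictures"}}'

call = json.loads(raw)
if call.get("name") == "execute":
    args = call["arguments"]
    # Expected behavior: present this code with an approval prompt
    # instead of printing the raw JSON.
    print(f"Would you like to run this {args['language']} code? (y/n)")
    print(args["code"])
```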
Reproduce
interpreter
{"name": "execute", "arguments":{"language": "shell", "code": "dir %USERPROFILE%\Pictures"}}
Expected behavior
I would expect the generated code with an approval prompt (as in the OpenAI example above) instead of:
{"name": "execute", "arguments": {"language": "python", "code": "computer.browser.setup(headless=False)\ncomputer.browser.go_to_url('https://www.cnn.com')"}}
Screenshots
No response
Open Interpreter version
Version: 0.4.3
Python version
Python 3.11.11
Operating System name and version
Windows 11
Additional context
No response