
Using local Ollama models returns a JSON structure instead of executing the code #1577

Open
jwcloud365 opened this issue Jan 4, 2025 · 5 comments

Comments

@jwcloud365

Describe the bug

Today I installed Open Interpreter and want to use it locally.

But when I use a local Ollama model, I get a JSON structure as output:

Loading qwen2.5-coder:32b...

Model loaded.

Open Interpreter will require approval before running code.

Use interpreter -y to bypass this.

Press CTRL-C to exit.

list all files in my Images directory

{"name": "execute", "arguments":{"language": "shell", "code": "dir %USERPROFILE%\Pictures"}}

browse to cnn.com


  {"name": "execute", "arguments":{"language": "python", "code": "computer.browser.setup(headless=False)\ncomputer.browser.go_to_url('https://www.cnn.com')"}}

But when I use an OpenAI model, I get this output, as expected:

list all files in my Images directory

ls ~/Images

Would you like to run this code? (y/n)

If you need further actions or details on any specific file, please let me know!

browse to cnn.com

  computer.browser.go_to_url('https://www.cnn.com')


  Would you like to run this code? (y/n)
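
For context, a typical way to point Open Interpreter at a local Ollama model is sketched below (illustrative only; it uses the documented ollama/ model prefix and may not match the exact setup I used):

  # illustrative sketch: launch Open Interpreter against a local Ollama model
  # assumes Ollama is running on its default port and the model has been pulled
  interpreter --model ollama/qwen2.5-coder:32b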

Reproduce

interpreter

list all files in my Images directory

{"name": "execute", "arguments":{"language": "shell", "code": "dir %USERPROFILE%\Pictures"}}

browse to cnn.com


  {"name": "execute", "arguments":{"language": "python", "code": "computer.browser.setup(headless=False)\ncomputer.browser.go_to_url('https://www.cnn.com')"}}

Expected behavior

  computer.browser.go_to_url('https://www.cnn.com')


  Would you like to run this code? (y/n)

Screenshots

I would expect

[screenshot]

instead of

{"name": "execute", "arguments":{"language": "python", "code": "computer.browser.setup(headless=False)\ncomputer.browser.go_to_url('https://www.cnn.com')"}}

Open Interpreter version

Version: 0.4.3

Python version

Python 3.11.11

Operating System name and version

Windows 11

Additional context

No response

@thomahn3

Same here using phi, llama3.2, and mistral on macOS 15.2.

@nainglinwai1

I found the same error on the first try. I am using Python 3.10. I can run the llama3.2:latest model by following the Open Interpreter local tutorial. It's working now.


@66Ton99

66Ton99 commented Jan 24, 2025

The same, but it works OK with qwen

@nainglinwai1

nainglinwai1 commented Jan 25, 2025

I found the solution to make it work.
First you need to export OLLAMA_MODEL and OLLAMA_API_BASE:
export OLLAMA_MODEL=llama3.2:latest
export OLLAMA_API_BASE=http://localhost:11434/v1

Run these before you run the code.
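
A minimal sketch of that setup, assuming a Bash-like shell, Ollama running on its default port, and llama3.2:latest already pulled (the explicit --model flag is added for illustration and is not part of the instructions above):

  # set the environment before launching (commands from above)
  export OLLAMA_MODEL=llama3.2:latest
  export OLLAMA_API_BASE=http://localhost:11434/v1

  # then start Open Interpreter; selecting the model explicitly is an
  # illustrative alternative to relying on OLLAMA_MODEL alone
  interpreter --model ollama/llama3.2:latest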

@thomahn3

I found the solution to make it work. First you need to export OLLAMA_MODEL and OLLAMA_API_BASE:
export OLLAMA_MODEL=llama3.2:latest
export OLLAMA_API_BASE=http://localhost:11434/v1

Run these before you run the code.

I ran these commands before running interpreter and now I get the following error:

  16:39:34 - LiteLLM:ERROR: utils.py:1873 - Model not found or error in checking vision support. You passed model=llama3.2:latest, custom_llm_provider=ollama. Error: OllamaError: Error getting model info for llama3.2:latest. Set Ollama API Base via 'OLLAMA_API_BASE' environment variable. Error: Client error '404 Not Found' for url 'http://localhost:11434/v1/api/show/api/show'
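
The doubled /api/show in that URL suggests the Ollama provider appends its own API path onto the configured base, so one variant worth trying (an assumption on my part, not confirmed by the docs or this thread) is a base URL without the /v1 suffix:

  # assumption: LiteLLM's ollama provider adds /api/... itself,
  # so point OLLAMA_API_BASE at the bare Ollama port
  export OLLAMA_API_BASE=http://localhost:11434
  interpreter --model ollama/llama3.2:latest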
