
llama3.1:8b #1103

Open
MyraBaba opened this issue Aug 21, 2024 · 6 comments

Comments

@MyraBaba

Hi,

How do I set the model to llama3.1:8b for Local RAG?

I can't find a convenient way to do this.

@WilliamEspegren
Contributor

Hey @MyraBaba, here is how you can run local Ollama models with Phidata :)
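
For anyone landing here, the gist is passing the model tag explicitly instead of relying on the default "llama3". A minimal sketch, assuming the Ollama wrapper class exposed by phi.llm.ollama (the module that shows up in the traceback below) accepts the tag via its model argument:

from phi.assistant import Assistant
from phi.llm.ollama import Ollama  # assumed import path for Phidata's Ollama wrapper

# Point the assistant at the locally served tag instead of the default "llama3".
assistant = Assistant(llm=Ollama(model="llama3.1:8b"))
assistant.print_response("Share a quick healthy breakfast recipe.", markdown=True)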

@MyraBaba
Author

@WilliamEspegren

(venvPhiData) redel@RedElephant:~/Projects/phidata$ python cookbook/llms/ollama/assistant.py
⠋ Working...
Traceback (most recent call last):
  File "/home/redel/Projects/phidata/cookbook/llms/ollama/assistant.py", line 9, in <module>
    assistant.print_response("Share a quick healthy breakfast recipe.", markdown=True)
  File "/home/redel/Projects/phidata/venvPhiData/lib/python3.10/site-packages/phi/assistant/assistant.py", line 1473, in print_response
    for resp in self.run(message=message, messages=messages, stream=True, **kwargs):
  File "/home/redel/Projects/phidata/venvPhiData/lib/python3.10/site-packages/phi/assistant/assistant.py", line 891, in _run
    for response_chunk in self.llm.response_stream(messages=llm_messages):
  File "/home/redel/Projects/phidata/venvPhiData/lib/python3.10/site-packages/phi/llm/ollama/chat.py", line 271, in response_stream
    for response in self.invoke_stream(messages=messages):
  File "/home/redel/Projects/phidata/venvPhiData/lib/python3.10/site-packages/phi/llm/ollama/chat.py", line 96, in invoke_stream
    yield from self.client.chat(
  File "/home/redel/Projects/phidata/venvPhiData/lib/python3.10/site-packages/ollama/_client.py", line 84, in _stream
    raise ResponseError(e.response.text, e.response.status_code) from None
ollama._types.ResponseError: model "llama3" not found, try pulling it first

I am serving Ollama locally with llama3.1:8b, but I get the errors above.

@WilliamEspegren
Contributor

@MyraBaba have you pulled it by running 'ollama run llama3.1'?
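
The "model \"llama3\" not found, try pulling it first" error is exactly what the server returns for a tag that has not been pulled yet. A quick way to check from Python, using the same ollama client package the traceback goes through (the shell equivalents are 'ollama list' and 'ollama pull llama3.1:8b'):

import ollama  # same client package that raised the ResponseError above

# Tags the local server already has; "llama3.1:8b" needs to show up here.
# (ollama-python versions from around this time return plain dicts; adjust the
# access if your version returns a typed response object instead.)
for m in ollama.list()["models"]:
    print(m["name"])

# Pull the tag if it is missing -- same effect as 'ollama pull llama3.1:8b'.
ollama.pull("llama3.1:8b")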

@MyraBaba
Author

MyraBaba commented Aug 21, 2024 via email

@WilliamEspegren
Contributor

@MyraBaba can you share your code?

@reinside

Hi,

How do I set the model to llama3.1:8b for Local RAG?

I can't find a convenient way to do this.

Hey,

Just edit the file, adding "llama3.1" to the models list:
cookbook/llms/ollama/rag/app.py
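
For reference, the sketch below shows the kind of edit meant. The RAG app is a Streamlit script that picks the model from a hard-coded list; the variable and widget names here are illustrative, so match them to what app.py actually uses:

import streamlit as st

# Add the tag you serve locally to the list the sidebar selectbox offers.
# (Plain "llama3.1" resolves to the default tag; "llama3.1:8b" is explicit.)
model_options = ["llama3", "llama3.1", "openhermes", "phi3"]  # illustrative list
llm_model = st.sidebar.selectbox("Select Model", options=model_options)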
