llama3.1:8b #1103
Comments
Hey @MyraBaba, here is how you can run local Ollama models with Phidata :)
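For context, the cookbook script referenced below presumably does something along these lines; this is a minimal sketch assuming the phidata `Assistant` API as it existed at the time (class paths may differ in newer releases):

```python
from phi.assistant import Assistant
from phi.llm.ollama import Ollama

# Point the assistant at a locally served Ollama model.
# The tag passed here must match one that `ollama list` reports as pulled.
assistant = Assistant(llm=Ollama(model="llama3.1:8b"))
assistant.print_response("Say hello.", markdown=True)
```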
(venvPhiData) redel@RedElephant:~/Projects/phidata$ python cookbook/llms/ollama/assistant.py

I am serving Ollama locally with llama3.1:8b, but I get the errors above.
@MyraBaba have you pulled it by running 'ollama run llama3.1'?
ollama run llama3.1:8b is running
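One thing worth checking here: `ollama run llama3.1` and `ollama run llama3.1:8b` pull different tags (`latest` vs. `8b`), so a script that asks Ollama for the bare `llama3.1` name will not find a model that was only pulled as `llama3.1:8b`, even though both resolve to the same 8B weights. Running `ollama list` shows exactly which tags are available locally.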
@MyraBaba can you share your code?
Hey, just edit the file, adding "llama3.1" to the models list, as in the sketch below:
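A minimal sketch of that edit, assuming the Local RAG app (e.g. cookbook/llms/ollama/rag/app.py) picks the model from a hard-coded Streamlit selectbox; the exact file, variable, and option names here are assumptions and may differ:

```python
import streamlit as st

# Hypothetical model picker from the Local RAG cookbook app.
# Adding the tag you pulled makes it selectable in the sidebar.
llm_model = st.sidebar.selectbox(
    "Select Model",
    options=["llama3.1:8b", "llama3", "openhermes", "phi3"],
)
```

Whatever tag you add must also have been pulled locally (e.g. `ollama pull llama3.1:8b`), otherwise Ollama will report the model as not found.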
Hi,
How do I set the model to llama3.1:8b for Local RAG?
I can't find a convenient way to do this.