Ollama pull error: The model you are attempting to pull requires a newer version of Ollama #585
I am using the dustynv/ollama:r36.3.0 image and it works for me.
It worked to pull the new models, thanks. But `ollama -v` shows 0.0.0.0 as the version, which is OK for me as long as the service works and the performance is good :-) Update:
I have seen this error when trying llama3.1 8b instruct; the basic llama3.1 works for me. `ollama -v` gives "ollama version is 0.0.0" for me as well.
@RPHllc @iguelkanat just kicked this off; it is building on my Jetson running L4T R36.2, so if all goes well there will be an updated ollama image pushed to
This issue was previously addressed by commit 6536834; however, perhaps the container image you are running was built prior, and this new one building now will include the versioning patch.
@dusty-nv: Thanks, it works now. Even llama3.1:70b works fine, and I get "ollama version is 7d1c004-dirty" as the version.
OK yes! Looks like it finished uploading. I'm not sure about it reporting the actual version number; IIRC when building from git it extracts the latest commit SHA and uses that for the "version".
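The commit-SHA-as-version behavior mentioned here might be sketched roughly as follows; this is a guess at what a from-source build could do, not the actual jetson-containers build script:

```shell
# Hypothetical sketch: derive a version string from git metadata the way a
# from-source build might. "git describe" yields a tag or short commit SHA
# (with a "-dirty" suffix if there are local changes); fall back to 0.0.0
# when no repository or tags are available.
VERSION=$(git describe --tags --always --dirty 2>/dev/null || echo "0.0.0")
echo "ollama version is ${VERSION}"
```

This would explain reports like "ollama version is 7d1c004-dirty" (a commit SHA with local changes) versus "0.0.0" (no git metadata at build time).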
Perfect! Thank you.
@dusty-nv, we are not there yet. The new version of Ollama seems to verify model compatibility against the reported version, or this was an issue on 36.2.0 and not on 36.3.0. When I attempted to pull new models I got:

```
root@jetson:/# ollama pull llama3.1:8b-instruct-q4_0
The model you are attempting to pull requires a newer version of Ollama. Please download the latest version at:
```
Because I have two Jetsons and I had downloaded this model earlier to one of them, I know the new dustynv/ollama:r36.2.0 fixed the issue that prevented the model from running. But I am unable to download the same model to my second Jetson.
Workaround to download new models until this is resolved:
(This works because we have two images: the older r36.3.0, which can download all models but does not run the llama3.1-type models, and the new build r36.2.0, which runs the new models but cannot download them.)
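The two-image workaround described above might look like this; the shared volume name "ollama-models" and the exact invocations are assumptions, since the thread's actual commands were not shown. The idea is that both containers mount the same model store, so one image downloads and the other runs:

```shell
# Hedged sketch of the two-image workaround: the older image (which can
# still pull) downloads the model into a shared volume, and the newer image
# (which can run it) serves it from the same volume. The || fallbacks just
# report when no docker/Jetson is available.
docker run --rm -v ollama-models:/root/.ollama dustynv/ollama:r36.3.0 \
    ollama pull llama3.1:8b-instruct-q4_0 || echo "pull step skipped: needs docker on a Jetson"
docker run --rm -v ollama-models:/root/.ollama dustynv/ollama:r36.2.0 \
    ollama run llama3.1:8b-instruct-q4_0 || echo "run step skipped: needs docker on a Jetson"
workaround_done=1
```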
@dusty-nv thanks
@iguelkanat oh sure, yes, you can rebuild the ollama container with the latest anytime like this: `jetson-containers build ollama`. Just follow these steps first to get your docker-root on NVMe so that you don't run out of storage:
@RPHllc is this because your other Jetson is still running the previous r36.3 image? In that case, run it with
My r36.3 machine is still occupied doing a training run, but afterwards I will rebuild it with r36.3 for consistency with the tags. Or are you referring to a different issue, and the ollama version still isn't being set correctly in the container?
@dusty-nv yes, it is a different issue. The r36.2 version you previously had and the r36.3 version you currently have are able to load llama3.1, llama3.1:8b-instruct-q4_0, etc. In other words, I was able to execute `ollama pull llama3.1` from inside the container. But those container versions are not able to run llama3.1:8b-instruct-q4_0. Your new r36.2 version is able to run the previously downloaded models; the new version fixed the "not able to run" problem. However, the new version is not able to download llama3.1, and possibly other models.
After the command `jetson-containers build ollama` I ended up with the following: How do I get a single image with the latest version?
Hello, how do I fix this (`ollama -v` gives "ollama version is 0.0.0")? Thank you for your time.
I updated ollama to the git version from AUR but still get the error with `ollama pull llama3.1:8b-text-fp16`.
Is the issue that the builds from source use the git tag as the version like upstream did, whereas in the actual ollama releases they assign a specific version number?
I have no idea.
Ah yes, I changed 0.3.7.g234234234 to 0.3.7 and now it works. Thanks.
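The fix reported here, trimming a git-describe-style string down to a bare X.Y.Z version, can be sketched as a small helper; the function name and the trimming rule are assumptions for illustration, not something from the thread:

```shell
# Hypothetical helper: trim a git-describe-style version such as
# "0.3.7.g234234234" or "0.1.46-0-gbc42e60" down to the leading X.Y.Z
# that the registry's version check appears to expect. Strings with no
# leading numeric version (e.g. "7d1c004-dirty") fall back to "0.0.0".
clean_version() {
    echo "$1" | grep -oE '^[0-9]+\.[0-9]+\.[0-9]+' || echo "0.0.0"
}

clean_version "0.3.7.g234234234"   # prints 0.3.7
clean_version "0.1.46-0-gbc42e60"  # prints 0.1.46
clean_version "7d1c004-dirty"      # prints 0.0.0
```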
Hi, does anyone have a tutorial on how to build Ollama from source on a Jetson Orin AGX? I could not find instructions on how to build it.
@iguelkanat
@dusty-nv I tried twice, and each time it generated successfully, but I could not load new models such as llama3.1, with the same error message as I stated at the top of this post. I get the same result as shown by wilbert-vb; it seems that something is not working. My question was whether there is a tutorial on building the ollama binaries from source on a Jetson Orin AGX without running it in a container, but if the container runs without issues, I am more than happy to use it.
Hi @iguelkanat, @ballerburg9005 said above that they got it to work after changing the version - #585 (comment) - presumably they truncated the commit tag.
I am happy to accept PRs should someone confirm the modifications. You can also run llama-3.1 through llama.cpp easily. I am happy to host the builds and container images for ollama, but I have maintained since the beginning that there's a limit to how far I will personally dive into that project, it being written in Go and essentially a complicated wrapper around llama.cpp. Case in point: llama.cpp has been able to run llama-3.1 GGUF since day 1, but you are stuck messing with ollama versions telling you it can't run.
See here for the suggestion from @mtebenev to revert to just using version
For now, I am just rebuilding against ollama 3.9, since that's out now, and will push it to
FYI it ended up being
Can you elaborate on this? I'm having the same problem, but I don't understand where I should pass the version variable.
@CappyT it is here - change it to 0.0.0 and then rebuild with
Hi,
I just installed the containers and pulled the ollama docker image. So far it runs, but it cannot load the new models, as it has version 0.1.46-0-gbc42e60, and at least version 0.3.x is required to load the new models.
I get the following error message when I try to load llama3.1:
```
root@ubuntu:/# ollama pull llama3.1
pulling manifest
Error: pull model manifest: 412:
The model you are attempting to pull requires a newer version of Ollama.
Please download the latest version at:
```
How can I update ollama inside the container to a newer version?