
Ollama pull error: The model you are attempting to pull requires a newer version of Ollama #585

Open
iguelkanat opened this issue Aug 8, 2024 · 28 comments


@iguelkanat

Hi,

I just installed the containers and pulled the ollama Docker image. So far it runs, but it cannot pull the new models: it reports version 0.1.46-0-gbc42e60, while the new models require at least version 0.3.x.

I get the following error message, when I try to load llama3.1:

root@ubuntu:/# ollama pull llama3.1
pulling manifest
Error: pull model manifest: 412:

The model you are attempting to pull requires a newer version of Ollama.

Please download the latest version at:

https://ollama.com/download

How can I update the ollama within the container to a newer version?

@RPHllc

RPHllc commented Aug 8, 2024

I am using dustynv/ollama:r36.3.0 and it works for me.
From the command prompt, run "docker images" to see which images you have. Remove the old one, then use "docker pull dustynv/ollama:rxx.x.x" to get the new image for your system. The options for xx.x.x are 35.4.1, 36.2.0, and 36.3.0.
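The sequence above can be sketched as a shell session (not run here since it needs a docker daemon on a Jetson; the tag to remove is illustrative, so substitute whatever "docker images" actually lists on your system):

```shell
# List local images to find the stale ollama tag
docker images

# Remove the old image (illustrative tag; substitute the one listed above)
docker rmi dustynv/ollama:r36.2.0

# Pull the image matching your L4T release: 35.4.1, 36.2.0, or 36.3.0
docker pull dustynv/ollama:r36.3.0
```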

@iguelkanat
Author

iguelkanat commented Aug 8, 2024

It worked to pull the new models, thanks.

But ollama -v shows "ollama version is 0.0.0", which is OK for me as long as the service works and the performance is good :-)

Update:
Sorry, I can run "ollama pull llama3.1", but "ollama run llama3.1" now gives the following error:
"Error: llama runner process has terminated: signal: aborted (core dumped)"

@RPHllc

RPHllc commented Aug 8, 2024

I have seen this error when trying llama3.1 8b instruct. The basic llama3.1 works for me.
This error seems to be a problem with the ollama source, see ollama/ollama#6048, which was recently fixed. I believe we need to wait for dustynv to rebuild the container.

ollama -v gives "ollama version is 0.0.0" for me as well.

@dusty-nv
Owner

dusty-nv commented Aug 8, 2024

@RPHllc @iguelkanat just kicked this off, it is building on my Jetson running L4T R36.2, so if all goes well there will be an updated ollama image pushed to dustynv/ollama:r36.2.0

ollama -v gives "ollama version is 0.0.0" for me as well.

This issue was previously addressed by commit 6536834 , however perhaps the container image you are running was built prior, and this new one building now will include the versioning patch.

@iguelkanat
Author

iguelkanat commented Aug 8, 2024

@dusty-nv : Thanks... it works now ... even llama3.1:70b works fine ... and I get "ollama version is 7d1c004-dirty" as version.

@dusty-nv
Owner

dusty-nv commented Aug 8, 2024

OK yes! Looks like it finished uploading dustynv/ollama:r36.2.0 (which should work fine on r36.3).

I'm not sure about it reporting the actual version number, IIRC when building from git it extracts the latest commit SHA and uses that for the "version"
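That explanation matches the `git describe` pipeline in the container's Dockerfile (quoted further down in this thread). Below is a small self-contained demo of the string it produces, using a throwaway repo rather than the real ollama checkout:

```shell
# Demo of how the container build derives ollama's version string; the
# pipeline is copied from the Dockerfile quoted later in this thread.
# In a throwaway repo with one commit tagged v0.3.7, `git describe`
# prints "v0.3.7-0-g<sha>" and sed strips the leading "v".
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m init
git tag v0.3.7
VERSION=$(git describe --tags --first-parent --abbrev=7 --long --dirty --always | sed -e "s/^v//g")
echo "$VERSION"   # e.g. 0.3.7-0-g1a2b3c4
```

On a checkout with no reachable tag, the `--always` flag falls back to the bare commit SHA, which would explain the "7d1c004-dirty" style version reported above.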

@RPHllc

RPHllc commented Aug 8, 2024

Perfect! Thank you.

@RPHllc

RPHllc commented Aug 8, 2024

@dusty-nv, we are not there yet.

The new version of Ollama seems to verify model compatibility against the reported version, or else this was an issue on 36.2.0 and not on 36.3.0.

When I attempted to pull new models I got:

root@jetson:/# ollama pull llama3.1:8b-instruct-q4_0
pulling manifest
Error: pull model manifest: 412:

The model you are attempting to pull requires a newer version of Ollama.

Please download the latest version at:

https://ollama.com/download

Because I have two Jetsons and I had downloaded this model earlier to one of them, I know the new dustynv/ollama:r36.2.0 fixed the issue that prevented the model from running. But I am unable to download the same model to my second Jetson.

@RPHllc

RPHllc commented Aug 8, 2024

Workaround to be able to download new models until this is resolved.

  1. docker pull dustynv/ollama:r36.3.0
  2. jetson-containers run -d --name ollama $(autotag ollama:r36.3.0)
  3. docker exec -it ollama bash
  4. ollama pull [any desired model]
  5. exit
  6. docker stop ollama
  7. jetson-containers run -d --name ollama $(autotag ollama)

(This works because we have two images: the older r36.3.0, which is able to download all models but does not run the llama3.1-type models, and the new build r36.2.0, which runs the new models but cannot download them.)

@iguelkanat
Author

iguelkanat commented Aug 9, 2024

@dusty-nv
As Ollama updates their application quite frequently, would it be an option for you to provide instructions, with the necessary settings, on how to build the Docker image ourselves on the Jetson device?
I tried a few times to build the binaries from source on my own but failed each time. I would appreciate a tutorial.

thanks

@dusty-nv
Owner

dusty-nv commented Aug 9, 2024

@iguelkanat oh sure yes, you can rebuild the ollama container with the latest anytime like this:

jetson-containers build ollama

Just follow these steps first to get your docker-root on NVME so that you don't run out of storage:
https://github.com/dusty-nv/jetson-containers/blob/master/docs/setup.md#docker-default-runtime

@dusty-nv
Owner

dusty-nv commented Aug 9, 2024

Because I have two Jetsons and I had downloaded this llm model earlier to one of them, I know the new dustynv/ollama:r36.2.0 fixed the issue that prevented the llm model to run. But I am unable to download the same model to my second Jetson.

@RPHllc is this because your other Jetson is still running the previous r36.3 image? In that case, run it with dustynv/ollama:r36.2.0 instead of $(autotag ollama) (r36.3 and r36.2 are compatible with each other)

My r36.3 machine is still occupied doing a training run, but afterwards I will rebuild it with r36.3 for consistency with the tags.

Or are you referring to a different issue, and the ollama version still isn't being set correctly in the container?

@RPHllc

RPHllc commented Aug 9, 2024

@dusty-nv yes, it is a different issue.

The r36.2 version you previously had and the r36.3 version you currently have are able to pull llama3.1, llama3.1:8b-instruct-q4_0, etc. In other words, I was able to execute "ollama pull llama3.1" from inside the container. But those container versions are not able to run llama3.1:8b-instruct-q4_0.

Your new r36.2 version is able to run the previously downloaded models; the new version fixed the "not able to run" problem. However, the new version is not able to download llama3.1, and possibly other models.

@wilbert-vb

@iguelkanat oh sure yes, you can rebuild the ollama container with the latest anytime like this:

jetson-containers build ollama

Just follow these steps first to get your docker-root on NVME so that you don't run out of storage: https://github.com/dusty-nv/jetson-containers/blob/master/docs/setup.md#docker-default-runtime

After the command jetson-containers build ollama I ended up with the following:

[screenshot: IMG_0059]

How do I get a single image with the latest version?

@xiaohei2022

@RPHllc @iguelkanat just kicked this off, it is building on my Jetson running L4T R36.2, so if all goes well there will be an updated ollama image pushed to dustynv/ollama:r36.2.0

ollama -v gives "ollama version is 0.0.0" for me as well.

This issue was previously addressed by commit 6536834 , however perhaps the container image you are running was built prior, and this new one building now will include the versioning patch.

Hello, how do I fix ollama -v reporting "ollama version is 0.0.0"? Thank you for your time.

@ballerburg9005

I updated ollama to the git version from the AUR but still get the error when running:

ollama pull llama3.1:8b-text-fp16

@dusty-nv
Owner

dusty-nv commented Aug 25, 2024 via email

@ballerburg9005

I have no idea.

ollama version is 0.3.7.g0f92b19b
# ollama pull  llama3.1:8b-text-fp16

pulling manifest 
Error: pull model manifest: 412: 

The model you are attempting to pull requires a newer version of Ollama.

Please download the latest version at:

	https://ollama.com/download

@ballerburg9005

Ah yes, I changed 0.3.7.g234234234 to 0.3.7 and now it works. Thanks.

@iguelkanat
Author

Hi, does anyone have a tutorial on how to build Ollama from source on a Jetson Orin AGX? I could not find instructions on how to build it.
Thanks

@dusty-nv
Owner

@iguelkanat jetson-containers build ollama

@iguelkanat
Author

iguelkanat commented Aug 28, 2024

@dusty-nv I tried twice, and each time it built successfully, but I could not pull new models such as llama3.1; I got the same error message as I stated at the top of this post.

I get the same result as shown by wilbert-vb; it seems that something is not working.

My question was whether there is a tutorial on creating the ollama binaries from source on a Jetson Orin AGX without running it in a container, but if the container runs without issues, I am more than happy to use it.

@dusty-nv
Owner

Hi @iguelkanat , @ballerburg9005 said above they got it to work after changing the version - #585 (comment)

presumably they truncated the commit tag g234234234 off the version here in the Dockerfile:

RUN cd /opt/ollama && \

I am happy to accept PRs should someone confirm the modifications. You can also run Llama 3.1 through llama.cpp easily. I'm happy to host the builds and container images for ollama, but I have maintained since the beginning that there's a limit to how far I will personally dive into that project, it being written in Go and essentially a complicated wrapper around llama.cpp. Case in point: llama.cpp has been able to run Llama 3.1 GGUF since day one, yet with ollama you are stuck messing with versions telling you it can't run.
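The truncation described in the linked comment (dropping the "g…" commit suffix so only the plain semver remains) can be sketched with sed. The exact suffix pattern here is an assumption based on the "0.3.7.g0f92b19b" version string reported earlier in this thread:

```shell
# Strip a ".g<sha>" commit suffix so only the plain semver remains,
# e.g. "0.3.7.g0f92b19b" -> "0.3.7". The suffix pattern is an assumption
# based on the version strings reported in this thread.
RAW="0.3.7.g0f92b19b"
CLEAN=$(printf '%s\n' "$RAW" | sed -e 's/\.g[0-9a-f]*$//')
echo "$CLEAN"   # prints: 0.3.7
```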

@dusty-nv
Owner

dusty-nv commented Sep 1, 2024

See here for the suggestion from @mtebenev to revert to just using version 0.0.0 to avoid these checks (#592 (comment)). It was previously requested that the correct ollama version be reported in the container when ollama --version is run, but then this llama 3.1 issue was encountered (and presumably others like it will be in the future).

For now, I am just rebuilding against ollama 0.3.9 since that's out now and will push it to dustynv/ollama:r36.3.0; that should avoid the llama 3.1 issue.

@dusty-nv
Owner

dusty-nv commented Sep 2, 2024

FYI, it ended up being dustynv/ollama:r36.2.0 that was pushed with ollama 0.3.9.

@tomcreutz

Hi @dusty-nv ,
for me the same error remained even with your recently pushed update to dustynv/ollama:r36.2.0.
I was able to resolve the issue by setting the ollama version to 0.0.0 in the Dockerfile (export VERSION="0.0.0"), as suggested in #592.

@CappyT

CappyT commented Sep 12, 2024

Hi @dusty-nv , for me the same error remained even with your recently pushed update to dustynv/ollama:r36.2.0. I was able to resolve the issue by setting the ollama version to 0.0.0 in the Dockerfile (export VERSION="0.0.0"), as suggested in #592.

Can you elaborate this? I'm having the same problem but I don't understand where I should pass the version variable

@dusty-nv
Owner

@CappyT change it to 0.0.0 here, then rebuild with jetson-containers build ollama:

export VERSION=$(git describe --tags --first-parent --abbrev=7 --long --dirty --always | sed -e "s/^v//g") && \
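The fix reported by @tomcreutz above replaces that derived value with a fixed one. A minimal sketch of the changed line, per the suggestion in #592 (the rest of the RUN step stays as-is):

```shell
# Pin the reported version instead of deriving it from `git describe`;
# per the suggestion in #592, a 0.0.0 version avoids triggering the
# registry's minimum-version (HTTP 412) check when pulling models.
export VERSION="0.0.0"
```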


8 participants