As title says
If I've already pulled the new (as of 2024-01-30) codellama-70b from Meta (or the Python variant), will Llama Coder use it?
Or does it download the 34b and run that?
Or does it just run whatever I'm running in Ollama, with the list of models you provide being more like "recommendations"?
The instructions seem contradictory, or at least unclear.
On one hand it simply says:
Local Installation
Install Ollama on local machine and then launch the extension in VSCode, everything should work as it is.
But then below that, the list of models doesn't go up to 70b, and presumably doesn't include the new Meta ones (70b, 70b Python, and 70b Instruct)?
Since my machine is capable of running it, I would prefer to.
I'm successfully running (and it's quite fast!) `ollama run codellama:70b` from here: https://ollama.ai/library/codellama:70b
The only reason I haven't simply installed and launched the extension is that I don't want to end up with a 34b download in an unspecified location when I'm already running 70b :)
Thanks
To use it, select 'Custom' in the 'Inference: Model' dropdown in the extension settings and put the model name in the 'Inference > Custom: Model' textbox, for example codellama:70b-code. Also make sure the 'codellama' option is selected in the 'Inference > Custom: Format' dropdown.
You could try the non-FIM model 'codellama:70b' (configured the same way as described above), but I doubt it will produce the output the extension expects.
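For reference, here is roughly what that configuration looks like in VSCode's settings.json. This is a minimal sketch that assumes the setting IDs mirror the dropdown labels above; the exact keys and enum values may differ between extension versions, so double-check them in the settings UI before relying on this:

```json
{
  // Assumed key for the 'Inference: Model' dropdown; 'custom' switches off the preset model list
  "inference.model": "custom",

  // Assumed key for 'Inference > Custom: Model'; any Ollama model tag can go here
  "inference.custom.model": "codellama:70b-code",

  // Assumed key for 'Inference > Custom: Format'; tells the extension to build
  // CodeLlama-style fill-in-the-middle (FIM) prompts for this model
  "inference.custom.format": "codellama"
}
```

Note that codellama:70b-code is a different tag from the codellama:70b you already pulled, so Ollama would fetch it separately (though tags of the same model can share layers). Configuring the exact tag you already have avoids a new download, at the cost of the non-FIM caveat above.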