I've tried running the llama 2 example in the readme in Colab but I've been unable to get any output.
Colab for reference: https://colab.research.google.com/drive/1XEnQmW7RbMh8BGPbFVoWSikKJ-1H_h8F?usp=sharing
Would appreciate any input on this. Thanks!
Hi @cem2ran ,
Thank you for your interest in our work.
I'm not sure. Did it just hang after loading the checkpoint? Did it print any stack trace when you stopped it?
Please note that faiss-gpu version 1.7.4 is required; from your logs it seems that pip installed version 1.7.2.
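To rule out the version mismatch quickly, a small check like the one below can be run in a Colab cell before the example. This is only a sketch: the required version `1.7.4` is taken from the comment above, and it assumes the faiss package exposes a `__version__` attribute.

```python
# Minimal sketch: verify the installed faiss build meets the version
# requirement mentioned above (faiss-gpu 1.7.4) before running the example.
REQUIRED = "1.7.4"

def version_ok(installed: str, required: str = REQUIRED) -> bool:
    """Compare dotted version strings component-wise (e.g. "1.7.2" < "1.7.4")."""
    return tuple(map(int, installed.split("."))) >= tuple(map(int, required.split(".")))

try:
    # faiss-gpu installs as the `faiss` module; guard the import so the
    # check degrades gracefully where faiss is not installed.
    import faiss
    assert version_ok(faiss.__version__), (
        f"faiss {faiss.__version__} is older than required {REQUIRED}; "
        f"try: pip install faiss-gpu=={REQUIRED}"
    )
except ImportError:
    print("faiss is not installed in this environment")
```

If the assertion fires, reinstalling with an explicit pin (`pip install faiss-gpu==1.7.4`) should resolve it, assuming a matching wheel exists for the Colab runtime.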
Can you also try the 7B version of Llama? Maybe 13B is too much for Colab.
See also this issue #25 where users managed to run this on Colab.
Let us know how it goes. Best, Uri