Local LLM as backend for DemoGPT agent #41
Comments
Hi @paluigi, thank you for highlighting this feature request. We truly value your feedback and are always eager to improve DemoGPT based on our community's suggestions. At present, our primary focus is enhancing DemoGPT's capabilities by adding more tools. That said, integrating local models is definitely on our roadmap, and Llama 2 is at the top of our list for such integrations. We appreciate your input and dedication to the growth of DemoGPT. 🙏 Stay tuned for updates!
Hi @paluigi, link: https://github.com/chatchat-space/Langchain-Chatchat/blob/master/server/llm_api.py If necessary, I can submit the code to GitHub. @melih-unsal
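The `llm_api.py` file linked above serves local models behind an OpenAI-compatible HTTP API, which means any client speaking that wire format can simply point its base URL at the local server. Below is a minimal, self-contained sketch of that idea (not code from either repository): a stub server that mimics the `/v1/chat/completions` endpoint and a client that talks to it. All names here are illustrative.

```python
# Illustrative sketch: a local server mimicking the OpenAI chat-completions
# wire format, plus a client that only needs a base URL to switch backends.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen


class FakeLocalLLM(BaseHTTPRequestHandler):
    """Stand-in for a real local-model server such as llm_api.py."""

    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        reply = {
            "choices": [
                {"message": {"role": "assistant",
                             "content": f"echo: {body['messages'][-1]['content']}"}}
            ]
        }
        payload = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # silence per-request logging
        pass


def chat(base_url: str, prompt: str) -> str:
    """Send one chat turn to any OpenAI-compatible endpoint."""
    req = Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps({"messages": [{"role": "user", "content": prompt}]}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]


server = HTTPServer(("127.0.0.1", 0), FakeLocalLLM)
threading.Thread(target=server.serve_forever, daemon=True).start()
answer = chat(f"http://127.0.0.1:{server.server_port}", "hello")
server.shutdown()
print(answer)
```

With a real local server in place of the stub, swapping backends is purely a configuration change: only `base_url` differs between the OpenAI API and the local model.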
Thanks @wusiyaodiudiu, I will have a look at your repo!
Hi @paluigi, newly added files:
Is your feature request related to a problem? Please describe.
Using local LLMs instead of the OpenAI API as the backend.
Describe the solution you'd like
Create a DemoGPT agent from a locally available model (ideally, a quantized Llama 2 model via llama-cpp-python).
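For the request above, one way such an integration could look is a narrow backend interface that the agent calls, with llama-cpp-python plugged in behind it. This is a hypothetical sketch, not DemoGPT's actual API: `LLMBackend`, `run_agent`, and the class names are made up for illustration, and the llama-cpp-python import is lazy so the example runs without the library or a model file.

```python
# Hypothetical sketch: an agent that depends only on a narrow LLM interface,
# so an OpenAI backend and a local llama-cpp-python backend are interchangeable.
from typing import Protocol


class LLMBackend(Protocol):
    def complete(self, prompt: str) -> str: ...


class LlamaCppBackend:
    """Wraps a local quantized Llama 2 model via llama-cpp-python."""

    def __init__(self, model_path: str):
        from llama_cpp import Llama  # lazy import: optional dependency
        self._llm = Llama(model_path=model_path)

    def complete(self, prompt: str) -> str:
        out = self._llm(prompt, max_tokens=256)
        return out["choices"][0]["text"]


class EchoBackend:
    """Stand-in backend for exercising the plumbing without any model."""

    def complete(self, prompt: str) -> str:
        return f"[echo] {prompt}"


def run_agent(backend: LLMBackend, task: str) -> str:
    # The agent only calls backend.complete(), so swapping the OpenAI API
    # for a local model requires no other code changes.
    return backend.complete(f"Plan an app for: {task}")


print(run_agent(EchoBackend(), "a todo list"))
```

Usage with a real model would be `run_agent(LlamaCppBackend("llama-2-7b.Q4_K_M.gguf"), ...)`, where the model filename is just an example of a quantized checkpoint.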
Describe alternatives you've considered
If that's already possible, a guide or some instructions on how to do it would be greatly appreciated!
Additional context
NA