Issues: PromtEngineer/localGPT
Installation with Docker image and Windows 11 is very slow
#830 · opened Oct 26, 2024 by paulusTecnion
If my question is unrelated to the document, the model's response is very poor. What should I do?
#814 · opened Jul 5, 2024 by Suiji12
You are trying to offload the whole model to the disk. Please use the disk_offload function instead.
#811 · opened Jun 21, 2024 by MoSedky
Small PDF file, simple question => inference takes a lot of time!
#805 · opened Jun 4, 2024 by hadiidbouk
Can I reuse the models that I already have running locally via the ollama service?
#795 · opened May 12, 2024 by g1ra
If I want to improve recall by using a reranker model, how can I do it?
#792 · opened May 8, 2024 by Suiji12