Is there support for local LLM? #58
Is there any plan to support local offline models?
Comments
Local models are supported via langchain HuggingFacePipeline: |
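As a minimal sketch of that pattern (the model id and generation settings below are placeholders, not the project's defaults):

```python
from langchain.llms import HuggingFacePipeline

# Downloads the model from the Hugging Face Hub on first use, then reuses the cached copy.
llm = HuggingFacePipeline.from_model_id(
    model_id="bigscience/bloom-1b7",
    task="text-generation",
    model_kwargs={"temperature": 0, "max_length": 64},
)

print(llm("What is a local LLM?"))
```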
The configuration file doesn't have any path to a local LLM; how do I set it up correctly? |
You can see here for more info on langchain HuggingFacePipeline: You need to put the model id in the config file as the 'name'; see here: This is the list of supported models (and their ids): |
Sorry, I meant operating completely offline. The link you provided above is for running a model hosted on Hugging Face locally; what I'm trying to do is first go offline and then run an LLM that already exists locally on my own server. |
The Hugging Face pipeline downloads the model locally (once) and then uses the stored copy. The model name in the config should then refer to the folder with all the model files. |
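A rough sketch of a fully offline setup (the directory path is a placeholder, and the environment variables are standard Hugging Face mechanisms rather than project-specific settings):

```python
import os

# Tell transformers and the HF hub client to use only files already on disk.
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_HUB_OFFLINE"] = "1"

from langchain.llms import HuggingFacePipeline

# Point model_id at the folder that holds config.json, tokenizer files and weights,
# e.g. a model directory copied onto the offline server beforehand.
llm = HuggingFacePipeline.from_model_id(
    model_id="/srv/models/my-local-model",
    task="text-generation",
    model_kwargs={"max_length": 128},
)

print(llm("Hello"))
```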
I'm getting this error:
|
This is an Argilla connection error. |
Could you tell me how to use the GPU when running offline? Does the config file provide a GPU entry?
|
Here are the changes I made to the config:
The run took very long and I did not wait for the results:
Once again the process's RAM usage spiked from 10 GB to 500 GB and counting: |
I added support for inference with a local GPU: #59. In order to use it you need to change the config to either:
Or, better, using accelerate (see the sketch below):
|
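The exact config keys are defined in #59 and are not reproduced here; as a rough sketch of the two underlying patterns in Python (single-GPU placement versus accelerate's automatic device mapping; the model id and dtype are placeholders):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain.llms import HuggingFacePipeline

# Option 1: pin inference to a single GPU (device=0 is the first CUDA device).
llm_gpu = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    device=0,
)

# Option 2: let accelerate place/shard the model across available devices
# with device_map="auto" (requires `pip install accelerate`).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained(
    "gpt2",
    device_map="auto",
    torch_dtype=torch.float16,
)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=64)
llm_accelerate = HuggingFacePipeline(pipeline=pipe)
```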
Hello, how do I solve this problem? Exception: Failed to connect to argilla, check connection details |
Please carefully follow the setup instructions after step 4: |
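In general, that error means the Argilla client cannot reach a running Argilla server. A minimal connectivity check, assuming an Argilla instance running locally and placeholder credentials (adjust both to your own deployment), looks roughly like:

```python
import argilla as rg

# URL and API key are placeholders; use the values your Argilla server was started with.
# The client can also pick these up from the ARGILLA_API_URL / ARGILLA_API_KEY environment variables.
rg.init(
    api_url="http://localhost:6900",
    api_key="admin.apikey",
)
```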