How to write the correct config file for GPT4All? #453
-
As the title says, my configuration with `[components.llm.model]` doesn't work; it fails with a `model` key error at venv/lib/python3.12/site-packages/langchain_community/llms/gpt4all.py", line 141, in validate_environment. Before I dig into the code, I want to make sure I wrote the configuration file correctly. I couldn't find reference docs or an example for using a local LLM with GPT4All. Thanks
-
I guess nobody has used GPT4All?
-
The answer is removing the
-
It turned out that `name` has to be just the model filename; you can't use a full path. The model is looked up under ~/.cache/gpt4all/, so: name = "gemma-2b-it-q4_k_m.gguf". Working config below.
[nlp]
lang = "en"
pipeline = ["llm"]
batch_size = 1024
[components]
[components.llm]
factory = "llm"
[components.llm.task]
@llm_tasks = "spacy.TextCat.v3"
labels = ["COMPLIMENT", "INSULT"]
exclusive_classes = true
allow_none = false
[components.llm.task.label_definitions]
"COMPLIMENT" = "a polite expression of praise or admiration."
"INSULT" = "a disrespectful or scornfully abusive remark or act."
#[components.llm.task.examples]
#@misc = "spacy.FewShotReader.v1"
#path = "example.json"
[components.llm.model]
@llm_models = "langchain.GPT4All.v1"
# name has to be just the filename, not a full path; the model is resolved under ~/.cache/gpt4all/
name = "gemma-2b-it-q4_k_m.gguf"
query = {"@llm_queries": "spacy.CallLangChain.v1"}
…
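Since the error above comes from the LangChain GPT4All wrapper validating the model before loading, it can help to sanity-check where the bare filename resolves before assembling the pipeline. A minimal sketch (the helper name is my own, and the cache location assumes GPT4All's default download directory):

```python
from pathlib import Path

def gpt4all_model_path(name: str) -> Path:
    """Resolve a bare model filename against GPT4All's default cache dir."""
    return Path.home() / ".cache" / "gpt4all" / name

# Check the model file referenced in the config actually exists on disk.
path = gpt4all_model_path("gemma-2b-it-q4_k_m.gguf")
print(path, "exists" if path.exists() else "MISSING - download it first")
```

If the file is missing, a `validate_environment` error like the one in the question is expected. Once it is in place, the pipeline can be built from the config (e.g. with `spacy_llm.util.assemble`, per the spacy-llm docs).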