Exception: Prompt exceeds max length of 16000 characters even without history #754
Comments
Even after setting max_token to a lower limit, it still gives back the 12000 token limit error. I even changed the default in the rails config lib, but it still shows the same error.
Hi @ayoubazaouyat, would you try it out by using
Hi @Pouyanpi, I changed the following:

config.yml:
models:
rails:

prompt.yml:

my output.co looks as follows:
define flow answer pas-x question

error:
ValueError Traceback (most recent call last)
File c:\Users\Ayoub_Azaouyat\AppData\Local\Programs\Python\Python311\Lib\site-packages\nemoguardrails\rails\llm\llmrails.py:211, in LLMRails.__init__(self, config, llm, verbose)
File c:\Users\Ayoub_Azaouyat\AppData\Local\Programs\Python\Python311\Lib\site-packages\nemoguardrails\rails\llm\llmrails.py:272, in LLMRails._validate_config(self)
ValueError: The provided output rail flow fact checking does not exist

What I want to do is use the default fact-checker, or do I have to use something else? How can I fix it? Thanks.
@ayoubazaouyat, I meant just to change the prompt's task:

- task: fact_checking
  max_length: 120000
  content: |-
    You are given a task to identify if the hypothesis is grounded and entailed to the evidence.
    You will only use the contents of the evidence and not rely on external knowledge.
    Answer with yes/no. "evidence": {{ evidence }} "hypothesis": {{ response }} "entails":

Keep the rest as it was:

rails:
  output:
    flows:
      - self check facts

Would you please check it with this config?
@Pouyanpi, I did try that; it gives this error back:

File ~/Downloads/Python/.venv/lib/python3.10/site-packages/nemoguardrails/rails/llm/config.py:856, in RailsConfig.from_path(config_path)
File ~/Downloads/Python/.venv/lib/python3.10/site-packages/nemoguardrails/rails/llm/config.py:926, in RailsConfig.parse_object(cls, obj)
File ~/Downloads/Python/.venv/lib/python3.10/site-packages/pydantic/main.py:1118, in BaseModel.parse_obj(cls, obj)
File ~/Downloads/Python/.venv/lib/python3.10/site-packages/pydantic/main.py:551, in BaseModel.model_validate(cls, obj, strict, from_attributes, context)
ValidationError: 1 validation error for RailsConfig
Thank you very much @ayoubazaouyat. I'll investigate it and update you shortly.
@ayoubazaouyat there seems to be a bug, thanks for reporting it. As a workaround, before we fix it, you can use the following in your prompts:

- task: self_check_facts
  max_length: 120000
  content: |-
    You are given a task to identify if the hypothesis is grounded and entailed to the evidence.
    You will only use the contents of the evidence and not rely on external knowledge.
    Answer with yes/no. "evidence": {{ evidence }} "hypothesis": {{ response }} "entails":

- task: fact_checking
  max_length: 120000
  content: |-
    You are given a task to identify if the hypothesis is grounded and entailed to the evidence.
    You will only use the contents of the evidence and not rely on external knowledge.
    Answer with yes/no. "evidence": {{ evidence }} "hypothesis": {{ response }} "entails":

It worked on my end; I used the head of the develop branch. I hope it temporarily resolves your issue.
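For anyone who wants to sanity-check the workaround without a full rails run, here is a minimal sketch. It assumes RailsConfig.from_content(yaml_content=...) is available (as in the NeMo Guardrails docs) and that the parsed prompt entries expose task and max_length attributes; the embedded YAML simply mirrors the workaround above with an abbreviated models block.

from nemoguardrails import RailsConfig

# Config mirroring the workaround: the output rail plus BOTH prompt task entries
# (self_check_facts and fact_checking) with a larger max_length.
yaml_content = """
models:
  - type: main
    engine: azure
    model: gpt-4o
rails:
  output:
    flows:
      - self check facts
prompts:
  - task: self_check_facts
    max_length: 120000
    content: |-
      You are given a task to identify if the hypothesis is grounded and entailed to the evidence.
      You will only use the contents of the evidence and not rely on external knowledge.
      Answer with yes/no. "evidence": {{ evidence }} "hypothesis": {{ response }} "entails":
  - task: fact_checking
    max_length: 120000
    content: |-
      You are given a task to identify if the hypothesis is grounded and entailed to the evidence.
      You will only use the contents of the evidence and not rely on external knowledge.
      Answer with yes/no. "evidence": {{ evidence }} "hypothesis": {{ response }} "entails":
"""

config = RailsConfig.from_content(yaml_content=yaml_content)
# If the config parses, both overrides should be visible here (attribute names assumed).
print([(p.task, p.max_length) for p in config.prompts])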
Python call:

from nemoguardrails import RailsConfig, LLMRails

# Load the guardrails configuration (path assumed; adjust to your config folder
# containing config.yml, the prompts file, and the .co files).
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

messages = [{
    "role": "user", "content": "what is an mbr ?"
}]
options = {"output_vars": True}
output = rails.generate(messages=messages, options=options)
print(output)
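Since options includes "output_vars": True, the call should return a response object that also carries the context variables. A short follow-up sketch, assuming the returned object exposes response and output_data fields as described in the generation-options docs (adjust if your version differs):

# With options, rails.generate(...) returns a GenerationResponse-like object
# rather than a plain message dict (assumption based on the generation-options docs).
print(output.response)      # the assistant message(s)
print(output.output_data)   # the requested context variables, e.g. relevant_chunks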
config.yml:

models:
  - engine: azure
    model: gpt-4o
    parameters:
      deployment_name: ****
      api_version: 2023-09-01-preview

rails:
  output:
    flows:
      - self check facts

prompts:
  - max_length: 120000
    content: |-
      You are given a task to identify if the hypothesis is grounded and entailed to the evidence.
      You will only use the contents of the evidence and not rely on external knowledge.
      Answer with yes/no. "evidence": {{ evidence }} "hypothesis": {{ response }} "entails":
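The error below comes from the prompt-length check: the rendered self_check_facts prompt (which embeds the retrieved evidence) is compared against the task's max_length, and when the per-task override is not picked up, the default of 16000 characters applies. A rough illustration of that behaviour follows; this is a sketch of the idea, not the library's actual code.

# Rough illustration (not the library's implementation) of why the error appears:
# the rendered prompt, including the evidence chunks, is checked against the
# task's max_length, and the default limit is 16000 characters.
DEFAULT_MAX_LENGTH = 16000

def check_prompt_length(rendered_prompt: str, task_overrides: dict, task: str) -> None:
    max_length = task_overrides.get(task, DEFAULT_MAX_LENGTH)
    if len(rendered_prompt) > max_length:
        raise Exception(
            f"Prompt exceeds max length of {max_length} characters even without history"
        )

# If the override is registered under a different task name (e.g. 'fact_checking'
# instead of 'self_check_facts'), the default 16000 limit is used and a long
# evidence section triggers the exception.
try:
    check_prompt_length("x" * 20000, {"fact_checking": 120000}, "self_check_facts")
except Exception as e:
    print(e)  # Prompt exceeds max length of 16000 characters even without history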
error:

Error while execution 'self_check_facts' with parameters: { *************** }
Prompt exceeds max length of 16000 characters even without history
Traceback (most recent call last):
File "c:\Users\Ayoub_Azaouyat\AppData\Local\Programs\Python\Python311\Lib\site-packages\nemoguardrails\actions\action_dispatcher.py", line 197, in execute_action
result = await result
^^^^^^^^^^^^
File "C:\Users\Ayoub_Azaouyat\AppData\Local\Programs\Python\Python311\Lib\site-packages\nemoguardrails\library\self_check\facts\actions.py", line 45, in self_check_facts
prompt = llm_task_manager.render_task_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\Ayoub_Azaouyat\AppData\Local\Programs\Python\Python311\Lib\site-packages\nemoguardrails\llm\taskmanager.py", line 231, in render_task_prompt
raise Exception(
Exception: Prompt exceeds max length of 16000 characters even without history