bug: parsed LLM output from the configured rails is invalid #908
Labels
bug
status: needs triage
Did you check docs and existing issues?
Python version (python --version)
Python 3.9
Operating system/version
Fedora Linux
NeMo-Guardrails version (if you must use a specific version and not the latest)
0.8.0
Describe the bug
rails_output = rails.generate(prompt=user_query)
rails.generate() calls the LLM backend and, once it gets the output, parses it into True or False.
If the message needs to be blocked, the LLM output should be "No" and rails_output should be True.
If the message needs to be allowed, the LLM output should be "Yes" and rails_output should be False.
But for a few questions it is not working as described above.
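For reference, a minimal sketch of the parsing convention described above, assuming a custom check function; the name and logic here are hypothetical, inferred from this description rather than from the NeMo-Guardrails source:

def parse_llm_verdict(completion: str) -> bool:
    # Hypothetical parser for the convention above: "No" from the LLM
    # means the message must be blocked -> True; "Yes" means allowed -> False.
    return not completion.strip().lower().startswith("yes")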
Steps To Reproduce
user_query = "Explore dataset:ECOM_AZURE_SL for discount and plot scatter plot"
rails_output = rails.generate(prompt=user_query)
Expected rails_output to be True, but it is not.
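A self-contained version of the reproduction, assuming the rails are loaded from a local configuration directory (the ./config path below is a placeholder, not from the original report):

from nemoguardrails import LLMRails, RailsConfig

# Placeholder path: substitute the actual guardrails configuration directory.
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

user_query = "Explore dataset:ECOM_AZURE_SL for discount and plot scatter plot"
rails_output = rails.generate(prompt=user_query)
print(rails_output)  # expected True, actual False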
Expected Behavior
rails_output should be True, but it is not.
If anyone can help with directly parsing the LLM output behind the rails, it would be a great help; see the sketch below.
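One way to inspect the raw completion behind the last generate() call is LLMRails.explain(); a minimal sketch follows, with attribute names as documented for recent NeMo-Guardrails releases (they may differ by version):

info = rails.explain()
info.print_llm_calls_summary()  # one summary line per LLM call made by the rails
for llm_call in info.llm_calls:
    print(llm_call.prompt)      # full prompt sent to the model for this call
    print(llm_call.completion)  # raw "Yes"/"No" text before any parsing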
Actual Behavior
It gives the response False, which is not correct.