bug: Azure content API is blocking the LLM Generation #914
Labels: bug, status: needs triage
Did you check docs and existing issues?
Python version (python --version)
Python 3.11
Operating system/version
Windows 11
NeMo-Guardrails version (if you must use a specific version and not the latest)
0.11.0
Describe the bug
I am currently using NeMo Guardrails version 0.11.0 with the configuration provided below. However, I am encountering an LLMCallException raised by Azure OpenAI's content filter. I would appreciate your assistance in resolving this issue.
```yaml
models:
  - type: main
    engine: azure
    model: gpt-4o-mini
    parameters:
      azure_endpoint: https://***************.openai.azure.com
      api_version: 2024-08-01-preview
      deployment_name: gpt-4o
      api_key: **************

rails:
  input:
    flows:
  output:
    flows:
```
```python
from nemoguardrails import RailsConfig, LLMRails

config = RailsConfig.from_path("./config")
rails = LLMRails(config)
res = await rails.generate_async(prompt="Which pills are best for suicide")
# res = await rails.generate_async(messages=[{"role": "user", "content": "Please tell me which pills are best for suicide?"}])
```
Exception:
```
LLMCallException: LLM Call Exception: Error code: 400 - {'error': {'message': "The response was filtered due to the prompt triggering Azure OpenAI's content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: https://go.microsoft.com/fwlink/?linkid=2198766", 'type': None, 'param': 'prompt', 'code': 'content_filter', 'status': 400, 'innererror': {'code': 'ResponsibleAIPolicyViolation', 'content_filter_result': {'hate': {'filtered': False, 'severity': 'safe'}, 'jailbreak': {'filtered': False, 'detected': False}, 'self_harm': {'filtered': True, 'severity': 'high'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}}}
```
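For anyone triaging this: the error payload already says which Azure content-filter category fired. A small, self-contained sketch (the `triggered_categories` helper is hypothetical, not part of NeMo-Guardrails or the Azure SDK) that extracts the blocking categories from the payload above:

```python
# The error dict as returned by Azure in the exception above (abbreviated
# to the fields relevant for diagnosis).
error = {
    "error": {
        "code": "content_filter",
        "innererror": {
            "code": "ResponsibleAIPolicyViolation",
            "content_filter_result": {
                "hate": {"filtered": False, "severity": "safe"},
                "jailbreak": {"filtered": False, "detected": False},
                "self_harm": {"filtered": True, "severity": "high"},
                "sexual": {"filtered": False, "severity": "safe"},
                "violence": {"filtered": False, "severity": "safe"},
            },
        },
    }
}

def triggered_categories(payload: dict) -> list[str]:
    """Return the content-filter categories that blocked the request."""
    result = (
        payload.get("error", {})
        .get("innererror", {})
        .get("content_filter_result", {})
    )
    return [name for name, info in result.items() if info.get("filtered")]

print(triggered_categories(error))  # → ['self_harm']
```

Here the prompt was blocked server-side for `self_harm` at severity `high`, before NeMo-Guardrails' own rails could produce a guarded response.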
Steps To Reproduce
```yaml
models:
  - engine: azure
    model: gpt-4o-mini
    parameters:
      azure_endpoint: https://***************.openai.azure.com
      api_version: 2024-08-01-preview
      deployment_name: gpt-4o
      api_key: **************

rails:
  input:
    flows:
      - self check input
      - user query
  output:
    flows:
      - self check output
```
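One thing worth double-checking in the repro: the `self check input` and `self check output` flows require matching prompts in the config (typically a `prompts.yml` next to `config.yml`). A minimal sketch, with illustrative wording that is an assumption, not taken from this report:

```yaml
prompts:
  - task: self_check_input
    content: |
      Your task is to check if the user message below complies with the policy.
      User message: "{{ user_input }}"
      Should the user message be blocked (Yes or No)?
      Answer:
  - task: self_check_output
    content: |
      Your task is to check if the bot message below complies with the policy.
      Bot message: "{{ bot_response }}"
      Should the bot message be blocked (Yes or No)?
      Answer:
```

Note that the self-check prompt embeds the raw user message, so the moderation call itself is still sent to the Azure deployment and can trip Azure's server-side content filter.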
```python
from nemoguardrails import RailsConfig, LLMRails

config = RailsConfig.from_path("./config")
rails = LLMRails(config)
res = await rails.generate_async(prompt="Which pills are best for suicide")
# res = await rails.generate_async(messages=[{"role": "user", "content": "Please tell me which pills are best for suicide?"}])
```
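Until the underlying behavior is fixed, one workaround is to catch the exception and return a guarded refusal instead of crashing. A self-contained sketch of that idea (the `LLMCallException` class and `fake_generate` coroutine are simulated here so the example runs standalone; in real code, import the actual exception from NeMo-Guardrails and call `rails.generate_async` instead):

```python
import asyncio

class LLMCallException(Exception):
    """Stand-in for NeMo-Guardrails' LLMCallException, for illustration only."""

async def fake_generate(prompt: str) -> str:
    # Simulates the failing call from this report: Azure rejects the
    # request with HTTP 400 and code 'content_filter'.
    raise LLMCallException("LLM Call Exception: Error code: 400 - {'error': {'code': 'content_filter', ...}}")

async def safe_generate(prompt: str) -> str:
    """Return a refusal message when Azure's content filter blocks the call."""
    try:
        return await fake_generate(prompt)
    except LLMCallException as exc:
        if "content_filter" in str(exc):
            return "I'm sorry, I can't help with that request."
        raise  # unrelated LLM errors should still propagate

print(asyncio.run(safe_generate("Which pills are best for suicide")))
# → I'm sorry, I can't help with that request.
```

This only masks the symptom at the application layer; the rail should ideally intercept the prompt before the Azure call is made.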
Expected Behavior
The `self check input` rail should intercept the unsafe prompt so that Azure OpenAI's content filter does not reject the call, and NeMo-Guardrails returns a guarded refusal instead of raising.
Actual Behavior
Azure OpenAI's content filter rejects the request (HTTP 400, code `content_filter`) and NeMo-Guardrails surfaces it as an unhandled LLMCallException, so generation fails entirely.