Commit 4e8959f (parent: 5123786)

Documentation edits made through Mintlify web editor

File tree: 2 files changed (+44, −16 lines)

docs/documentation/features/ai-guardrails.mdx

Lines changed: 43 additions & 16 deletions
@@ -9,41 +9,68 @@ Both tools make it possible to program guardrails that safeguard conversations w

Key benefits of adding programmable guardrails include:

* *Trustworthiness and Reliability:*

  Guardrails can be used to guide and safeguard conversations between your users and your LLM system. You can choose to define the behavior of your LLM system on specific topics and prevent it from engaging in discussions on unwanted topics.

* *Controllable Dialog:*

  Use guardrails to steer the LLM to follow pre-defined conversational flows, making sure the LLM follows best practices in conversation design and enforces standard procedures, such as authentication.

* *Protection against Vulnerabilities:*

  Guardrails can be specified so that they help increase the security of your LLM application by checking for LLM vulnerabilities, such as secrets in user inputs or LLM responses, or by detecting prompt injections.
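A vulnerability guardrail can be as simple as a pre-flight check on the user input. The sketch below is plain Python and not tied to either toolkit; the regular expression and the injection phrases are illustrative assumptions, flagging API-key-like secrets and two common prompt-injection phrasings:

```python
import re

# Illustrative patterns only; a production system would use broader,
# well-maintained detectors for secrets and prompt injections.
SECRET_PATTERN = re.compile(r"\b(?:sk|api|key)[-_][A-Za-z0-9]{16,}\b", re.IGNORECASE)
INJECTION_PHRASES = ("ignore previous instructions", "disregard your system prompt")

def check_user_input(text: str) -> list[str]:
    """Return a list of violation labels found in the user input."""
    violations = []
    if SECRET_PATTERN.search(text):
        violations.append("secret")
    lowered = text.lower()
    if any(phrase in lowered for phrase in INJECTION_PHRASES):
        violations.append("prompt_injection")
    return violations
```

A caller would block or mask the input whenever the returned list is non-empty, before the text ever reaches the LLM.
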
## Types of Guardrails
In the following, we give a brief overview of the types of guardrails that can be specified with the open-source toolkits [NeMo Guardrails](https://github.com/NVIDIA/NeMo-Guardrails) and [Guardrails AI](https://github.com/guardrails-ai/guardrails). For further technical details, please check out the respective GitHub repositories and documentation.
### NeMo Guardrails
* Technical Documentation: [https://docs.nvidia.com/nemo/guardrails](https://docs.nvidia.com/nemo/guardrails)
* GitHub Repository: [https://github.com/NVIDIA/NeMo-Guardrails](https://github.com/NVIDIA/NeMo-Guardrails)

NeMo Guardrails supports five main types of guardrails (short: rails):

<Frame caption="Type of AI Guardrails with NeMo Guardrails">
  ![Type of AI Guardrails with NeMo Guardrails](/images/guardrails/1.webp)
</Frame>

* *Input Rails:*

  Checking the user input, an input rail can reject, change (e.g., rephrase or mask sensitive data), or stop processing the input.

* *Dialog Rails:*

  Dialog rails influence how the LLM is prompted and determine whether an action should be executed, whether the LLM should be invoked to generate the next step or a response, whether a predefined response should be used instead, and so on.

* *Retrieval Rails:*

  When using a RAG (Retrieval-Augmented Generation) LLM system, retrieval rails check the retrieved documents and can reject, change (e.g., rephrase or mask sensitive data), or stop processing specific chunks.

* *Execution Rails:*

  Execution rails check and verify the inputs and/or outputs of custom actions invoked by the LLM (e.g., the LLM triggering actions in other tools).

* *Output Rails:*

  Checking the response of an LLM, an output rail can reject, change (e.g., remove sensitive data), or remove the LLM's response.

### Guardrails AI
* Technical Documentation: [https://www.guardrailsai.com/docs](https://www.guardrailsai.com/docs)
* GitHub Repository: [https://github.com/guardrails-ai/guardrails](https://github.com/guardrails-ai/guardrails)

Guardrails AI offers Guardrails Hub, a collection of pre-built checks for specific types of LLM risks (called “validators”). Multiple validators can be combined into Guardrails AI’s Input and Output Guards (guardrail objects) that intercept the inputs and outputs of LLMs. Visit [Guardrails Hub](https://hub.guardrailsai.com/) to see the full list of validators and their documentation.

![Examples of AI Guardrails with Guardrails AI (Guardrails AI, 2024)](/guardrails_ai_hub.gif "Examples of AI Guardrails with Guardrails AI (Guardrails AI, 2024)")
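The validator-and-guard pattern described above can be sketched in plain Python. This is not the Guardrails AI API; the two validators below are hypothetical stand-ins for Hub validators, and the `Guard` class only illustrates how several checks compose into one guard object:

```python
# Hypothetical sketch of composing validators into a guard, in the spirit
# of Guardrails AI's Input/Output Guards; NOT the library's real API.
from typing import Callable, Optional

Validator = Callable[[str], Optional[str]]  # returns an error message or None

def no_profanity(text: str) -> Optional[str]:
    return "profanity detected" if "damn" in text.lower() else None

def max_length(limit: int) -> Validator:
    def check(text: str) -> Optional[str]:
        return f"text exceeds {limit} characters" if len(text) > limit else None
    return check

class Guard:
    """Runs every validator against a string and collects failures."""
    def __init__(self, *validators: Validator) -> None:
        self.validators = validators

    def validate(self, text: str) -> list[str]:
        return [err for v in self.validators if (err := v(text)) is not None]

# One guard object intercepting LLM outputs with two stacked checks.
output_guard = Guard(no_profanity, max_length(200))
```

An empty result means the text passed every check; otherwise the caller can retry, fix, or refuse the response, which mirrors how guard objects wrap LLM inputs and outputs.
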

guardrails_ai_hub.gif

Lines changed: 1 addition & 0 deletions
