Merge pull request #1207 from guardrails-ai/docs/other-llms-details
Update using_llms.md
dtam authored Jan 6, 2025
2 parents 932790c + a9c06d3 commit c407daf
Showing 1 changed file with 13 additions and 1 deletion: docs/how_to_guides/using_llms.md
@@ -287,8 +287,20 @@ for chunk in stream_chunk_generator
```

## Other LLMs
As mentioned at the top of this page, over 100 LLMs are supported through our LiteLLM integration, including (but not limited to):
- Anthropic
- AWS Bedrock
- Anyscale
- Hugging Face
- Mistral
- Predibase
- Fireworks


Find your LLM in LiteLLM’s documentation [here](https://docs.litellm.ai/docs/providers). Then follow the same setup steps and set the same environment variables described there, but invoke a `Guard` object instead of calling LiteLLM directly.

Guardrails will pass your arguments through to LiteLLM, run the guarding process, and return a validated outcome.
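
For example, here is a minimal sketch that routes a call to Anthropic through LiteLLM. The `guard(...)` call pattern follows the examples earlier on this page; the model string and the `ANTHROPIC_API_KEY` environment variable are assumptions based on LiteLLM's Anthropic provider guide.

```python
import os

from guardrails import Guard

# LiteLLM's Anthropic provider reads this environment variable.
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."  # placeholder key

guard = Guard()

# The `model` string and any other keyword arguments are passed
# through to LiteLLM unchanged.
result = guard(
    model="claude-3-5-sonnet-20240620",
    messages=[{"role": "user", "content": "How many moons does Jupiter have?"}],
)

print(result.validated_output)
```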

## Custom LLM Wrappers
If you're using an LLM that isn't natively supported by Guardrails and you don't want to use LiteLLM, you can build a custom LLM API wrapper. To do so, create a function that accepts the prompt as a positional string argument, plus any keyword arguments you want to pass to the LLM API, and returns the LLM API's output as a string, as sketched below.
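
A minimal sketch of such a wrapper, assuming the calling convention described above (exact `guard(...)` arguments may vary by Guardrails version; `call_my_llm` is a hypothetical stand-in for your own client library):

```python
from guardrails import Guard


def call_my_llm(prompt: str, **kwargs) -> str:
    # Hypothetical stand-in for your own LLM client; replace with a real call.
    return f"(model output for: {prompt})"


def my_llm_api(prompt: str, **kwargs) -> str:
    # Takes the prompt as a positional string, forwards any keyword
    # arguments, and returns the LLM's output as a string.
    return call_my_llm(prompt, **kwargs)


guard = Guard()

# The wrapper is passed as the first positional argument; Guardrails
# invokes it with the compiled prompt and forwards extra keyword args.
result = guard(my_llm_api, prompt="How many moons does Jupiter have?")

print(result.validated_output)
```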