Structured prompting for LLMs. InstructorLite is a fork and spiritual successor to the instructor_ex library, which is the Elixir member of the great Instructor family.

Instructor is useful for coaxing an LLM to return JSON that maps to an Ecto schema you provide, rather than the default unstructured text output. If you define your own validation logic, Instructor can automatically retry prompts when validation fails, feeding natural-language error messages back to the LLM to guide its corrections.
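For illustration, here is a sketch of such a retry loop. It assumes the optional `validate_changeset/2` callback and the `max_retries` option as the extension points; check the current docs for the exact interface:

```elixir
defmodule NumberSeries do
  use Ecto.Schema
  use InstructorLite.Instruction

  import Ecto.Changeset

  @primary_key false
  embedded_schema do
    field(:series, {:array, :integer})
  end

  @impl true
  def validate_changeset(changeset, _opts) do
    # Reject answers with fewer than ten numbers; on failure the error is
    # turned into a follow-up message so the LLM can correct itself.
    validate_length(changeset, :series, min: 10)
  end
end

# max_retries bounds how many corrective round-trips are attempted.
InstructorLite.instruct(
  %{messages: [%{role: "user", content: "Give me some numbers"}]},
  response_model: NumberSeries,
  max_retries: 1,
  adapter_context: [api_key: Application.fetch_env!(:instructor_lite, :openai_key)]
)
```

Once retries are exhausted, `instruct/2` returns an error tuple instead of a struct, so you can handle failure explicitly.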
InstructorLite is designed to be:
- Lean. It does so little it makes you question if you should just write your own version!
- Composable. Almost everything it does can be overridden or extended.
- Magic-free. It doesn't hide complexity behind one-line function calls, but it does its best to give you enough information to understand what's going on.
InstructorLite comes with four adapters: OpenAI, Anthropic, Gemini, and Llamacpp.
InstructorLite can be boiled down to these features:
- It provides a very simple function for generating a JSON schema from an Ecto schema (see the sketch just below).
- It facilitates prompt generation, LLM calls, and response casting and validation, including retrying prompts when validation fails.
- It holds knowledge of major LLM providers' API interfaces in its adapters.
Any of the features above can be used independently.
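For instance, the schema-generation piece can be used on its own. A sketch, assuming a helper along the lines of `InstructorLite.JSONSchema.from_ecto_schema/1` (the exact module name and output shape may differ, so treat this as illustrative):

```elixir
# Build a JSON schema map for an instruction (UserInfo is defined below).
# The resulting plain map can be passed to any API that accepts JSON schema.
json_schema = InstructorLite.JSONSchema.from_ecto_schema(UserInfo)
```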
Define an instruction, which is a normal Ecto schema with an extra `use InstructorLite.Instruction` call:

```elixir
defmodule UserInfo do
  use Ecto.Schema
  use InstructorLite.Instruction

  @primary_key false
  embedded_schema do
    field(:name, :string)
    field(:age, :integer)
  end
end
```
Now let's use `InstructorLite.instruct/2` to fill the schema from unstructured text (the OpenAI adapter is the default):

```elixir
iex> InstructorLite.instruct(%{
    messages: [
      %{role: "user", content: "John Doe is forty two years old"}
    ]
  },
  response_model: UserInfo,
  adapter_context: [api_key: Application.fetch_env!(:instructor_lite, :openai_key)]
)
{:ok, %UserInfo{name: "John Doe", age: 42}}
```
Other adapters work the same way; pass the adapter module and its options explicitly. With Anthropic:

```elixir
iex> InstructorLite.instruct(%{
    messages: [
      %{role: "user", content: "John Doe is forty two years old"}
    ]
  },
  response_model: UserInfo,
  adapter: InstructorLite.Adapters.Anthropic,
  adapter_context: [api_key: Application.fetch_env!(:instructor_lite, :anthropic_key)]
)
{:ok, %UserInfo{name: "John Doe", age: 42}}
```
With Llamacpp, pointing at your own server:

```elixir
iex> InstructorLite.instruct(%{
    prompt: "John Doe is forty two years old"
  },
  response_model: UserInfo,
  adapter: InstructorLite.Adapters.Llamacpp,
  adapter_context: [url: Application.fetch_env!(:instructor_lite, :llamacpp_url)]
)
{:ok, %UserInfo{name: "John Doe", age: 42}}
```
With Gemini, whose API expects a different request shape:

```elixir
iex> InstructorLite.instruct(%{
    contents: [
      %{
        role: "user",
        parts: [%{text: "John Doe is forty two years old"}]
      }
    ]
  },
  response_model: UserInfo,
  adapter: InstructorLite.Adapters.Gemini,
  adapter_context: [
    api_key: Application.fetch_env!(:instructor_lite, :gemini_key)
  ]
)
{:ok, %UserInfo{name: "John Doe", age: 42}}
```
InstructorLite does not read configuration such as the adapter or API key from the application environment. Instead, these are passed as options with every call. Note that different adapters may require different options, so make sure to check their documentation.
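If you'd rather keep keys in your application config, a thin wrapper of your own is all it takes. A hypothetical sketch (`MyApp.Structured` and the `:my_app` config key are invented for illustration):

```elixir
defmodule MyApp.Structured do
  # Hypothetical wrapper: reads the API key from this app's own config
  # and passes it to InstructorLite explicitly on every call.
  def instruct(params, opts \\ []) do
    defaults = [
      adapter: InstructorLite.Adapters.OpenAI,
      adapter_context: [api_key: Application.fetch_env!(:my_app, :openai_key)]
    ]

    InstructorLite.instruct(params, Keyword.merge(defaults, opts))
  end
end
```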
In your mix.exs, add `:instructor_lite` to your list of dependencies:

```elixir
def deps do
  [
    {:instructor_lite, "~> 0.3.0"}
  ]
end
```
Optionally, include the Req HTTP client, as it's used by default:

```elixir
def deps do
  [
    {:req, "~> 0.5 or ~> 1.0"}
  ]
end
```