Commit

Docs update
ddebowczyk committed Oct 4, 2024
1 parent 25a8fbb commit 8c984a9
Showing 5 changed files with 70 additions and 41 deletions.
60 changes: 60 additions & 0 deletions docs/cookbook/examples/extras/schema_dynamic.mdx
@@ -0,0 +1,60 @@
---
title: 'Generating JSON Schema dynamically'
docname: 'schema_dynamic'
---

## Overview

Instructor has built-in support for generating JSON Schema from
dynamic objects via the `Structure` class.

This is useful when the data model is built at runtime or defined
by your app's users.

`Structure` lets you flexibly define and modify data models that
can change with every request or user input, and generate
JSON Schema for them.

## Example

```php
<?php
$loader = require 'vendor/autoload.php';
$loader->add('Cognesy\\Instructor\\', __DIR__ . '/../../src/');

use Cognesy\Instructor\Enums\Mode;
use Cognesy\Instructor\Extras\LLM\Inference;
use Cognesy\Instructor\Extras\Structure\Field;
use Cognesy\Instructor\Extras\Structure\Structure;

$city = Structure::define('city', [
    Field::string('name', 'City name')->required(),
    Field::int('population', 'City population')->required(),
    Field::int('founded', 'Founding year')->required(),
]);

$data = (new Inference)
    ->withConnection('openai')
    ->create(
        messages: [['role' => 'user', 'content' => 'What is the capital of France? Respond with JSON data.']],
        responseFormat: [
            'type' => 'json_schema',
            'description' => 'City data',
            'json_schema' => [
                'name' => 'city_data',
                'schema' => $city->toJsonSchema(),
                'strict' => true,
            ],
        ],
        options: ['max_tokens' => 64],
        mode: Mode::JsonSchema,
    )
    ->toJson();

echo "USER: What is the capital of France?\n";
echo "ASSISTANT:\n";
dump($data);

?>
```
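
For reference, the schema produced by `$city->toJsonSchema()` for the structure above would look roughly like the fragment below. The exact keys and ordering are illustrative and may vary by library version:

```json
{
    "type": "object",
    "properties": {
        "name": { "type": "string", "description": "City name" },
        "population": { "type": "integer", "description": "City population" },
        "founded": { "type": "integer", "description": "Founding year" }
    },
    "required": ["name", "population", "founded"]
}
```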
@@ -24,7 +24,11 @@ event listeners.
$loader = require 'vendor/autoload.php';
$loader->add('Cognesy\\Instructor\\', __DIR__ . '/../../src/');

use Cognesy\Instructor\Events\Inference\LLMResponseReceived;use Cognesy\Instructor\Events\Inference\PartialLLMResponseReceived;use Cognesy\Instructor\Extras\LLM\Data\LLMResponse;use Cognesy\Instructor\Extras\LLM\Data\PartialLLMResponse;use Cognesy\Instructor\Instructor;
use Cognesy\Instructor\Events\Inference\LLMResponseReceived;
use Cognesy\Instructor\Events\Inference\PartialLLMResponseReceived;
use Cognesy\Instructor\Extras\LLM\Data\LLMResponse;
use Cognesy\Instructor\Extras\LLM\Data\PartialLLMResponse;
use Cognesy\Instructor\Instructor;

class User {
public int $age;
33 changes: 0 additions & 33 deletions docs/internals/component_config.mdx

This file was deleted.

9 changes: 4 additions & 5 deletions docs/internals/lifecycle.mdx
@@ -4,11 +4,10 @@ As Instructor for PHP processes your request, it goes through several stages:

1. Initialize and self-configure (with possible overrides defined by the developer).
2. Analyze classes and properties of the response data model specified by the developer.
3. Encode data model into a schema that can be provided to LLM.
3. Translate the data model into a schema that can be provided to the LLM.
4. Execute the request to the LLM using the specified messages (content) and response model metadata.
5. Receive a response from the LLM, or multiple partial responses if streaming is enabled.
6. Deserialize the response received from the LLM into the originally requested classes and their properties.
7. In case response contained incomplete or corrupted data - if errors are encountered, create feedback message for LLM and requests regeneration of the response.
8. Execute validations defined by developer for the data model - if any of them fail, create feedback message for LLM and requests regeneration of the response.
9. Repeat the steps 4-8, unless specified limit of retries has been reached or response passes validation

7. If the response contains unserializable data, create a feedback message for the LLM and request regeneration of the response.
8. Execute validations defined by the developer on the deserialized data; if any of them fail, create a feedback message for the LLM and request regeneration of the response.
9. Repeat steps 4-8 until the response passes validation or the specified retry limit is reached.
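
The retry behavior in steps 4-9 can be sketched as pseudocode. All names below are illustrative placeholders, not Instructor's actual internal API:

```php
<?php
// Illustrative pseudocode of the request/validation retry loop (steps 4-9).
// respondWithLLM(), deserialize(), validate(), feedbackMessage() are
// hypothetical helpers, not real Instructor internals.
for ($attempt = 0; $attempt < $maxRetries; $attempt++) {
    $response = respondWithLLM($messages, $responseModel);  // steps 4-5
    try {
        $object = deserialize($response, $responseModel);   // step 6
    } catch (\Exception $e) {
        $messages[] = feedbackMessage($e->getMessage());    // step 7
        continue;                                           // retry (step 9)
    }
    $errors = validate($object);                            // step 8
    if (empty($errors)) {
        return $object;                                     // success
    }
    $messages[] = feedbackMessage($errors);                 // step 8 feedback
}
throw new \RuntimeException('Retry limit reached without a valid response');
```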
3 changes: 1 addition & 2 deletions docs/mint.json
Expand Up @@ -101,7 +101,6 @@
"group": "Internals",
"pages": [
"internals/instructor",
"internals/component_config",
"internals/lifecycle",
"internals/response_models",
"internals/debugging",
@@ -202,7 +201,7 @@
"cookbook/examples/extras/llm_md_json",
"cookbook/examples/extras/llm_tools",
"cookbook/examples/extras/schema",
"cookbook/examples/extras/schema",
"cookbook/examples/extras/schema_dynamic",
"cookbook/examples/extras/transcription_to_tasks",
"cookbook/examples/extras/translate_ui_fields",
"cookbook/examples/extras/web_to_objects"
