Commit bdb97b8

wangzuo and bruce-shi authored
zod schema (#4)
* add job zod schema
* refactor: update job parameter types to use specific schemas for chat, embedding, and image jobs
* feat: implement models() function for OpenAI, Ollama, and Anthropic providers
* docs: update README to reflect changes in models listing and add usage example
* refactor: make options field optional in baseJobSchema
* schema based on provider
* simplify chat stream
* fix utils
* add job result schema
* add google provider
* update README to include instructions for requesting support for new AI providers
* add groq example
* add job.remote()
* update schemas and fix dump type
* fmt
* fix test
* fix type
* rename job types
* add schema file
* refactor: reorganize chat job types and introduce ChatTool class
* add builder class
* seems working
* revert example
* fix some imports
* exclude zod in dist
* fix more types
* fix more types
* update types
* fix typo
* add provider
* update job schemas and add TypeScript check in github action
* add deepseek provider
* deepseek models
* fix job type
* zod v4
* fix test, add ChatStream
* fix job schema export
* add jobStream() helper
* rename example
* include response json in http error
* add job output schema
* fix vision chat
* add partial json
* fix readme

Co-authored-by: Bruce Shi <shibocuhk@gmail.com>
1 parent 67a179a commit bdb97b8

File tree

106 files changed: +2023 -1513 lines


.github/workflows/test.yml

Lines changed: 2 additions & 0 deletions
```diff
@@ -31,6 +31,8 @@ jobs:
       name: Install dependencies
       run: |
         bun install
+    # - id: tsc
+    #   run: bunx tsc
     - id: build
       name: Run build
       run: |
```

.gitignore

Lines changed: 1 addition & 1 deletion
```diff
@@ -6,7 +6,7 @@ yarn-debug.log*
 yarn-error.log*
 lerna-debug.log*
 .pnpm-debug.log*
-
+.vscode
 # Diagnostic reports (https://nodejs.org/api/report.html)
 report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json
```

README.md

Lines changed: 48 additions & 73 deletions
````diff
@@ -1,7 +1,7 @@
 # fluent-ai
 
-![NPM Version](https://img.shields.io/npm/v/fluent-ai)
-![GitHub Actions Workflow Status](https://img.shields.io/github/actions/workflow/status/modalityml/fluent-ai/test.yml)
+[![NPM Version](https://img.shields.io/npm/v/fluent-ai)](http://npmjs.com/fluent-ai)
+[![GitHub Actions Workflow Status](https://img.shields.io/github/actions/workflow/status/modalityml/fluent-ai/test.yml)](https://github.com/modalityml/fluent-ai/actions/workflows/test.yml)
 
 > [!WARNING]
 > This project is in beta. The API is subject to changes and may break.
@@ -10,21 +10,24 @@ fluent-ai is a lightweight, type-safe AI toolkit that seamlessly integrates mult
 
 ## Installation
 
+[Zod](https://zod.dev/) is a popular type of validation library for TypeScript and JavaScript that allows developers to define and validate data schemas in a concise and type-safe manner. fluent-ai is built upon zod.
+
 ```sh
-npm install fluent-ai
+npm install fluent-ai zod@next
 ```
 
 ## AI Service provider support
 
 fluent-ai includes support for multiple AI providers and modalities.
 
-| Provider  | chat               | embedding          | image              | listModels         |
+| provider  | chat completion    | embedding          | image generation   | list models        |
 | --------- | ------------------ | ------------------ | ------------------ | ------------------ |
-| anthropic | :white_check_mark: |                    |                    | :white_check_mark: |
-| fal       |                    |                    | :white_check_mark: |                    |
-| ollama    | :white_check_mark: | :white_check_mark: |                    | :white_check_mark: |
-| openai    | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
-| voyageai  |                    | :white_check_mark: |                    |                    |
+| anthropic | :white_check_mark: |                    |                    | :white_check_mark: |
+| fal       |                    |                    | :white_check_mark: |                    |
+| google    | :white_check_mark: |                    |                    |                    |
+| ollama    | :white_check_mark: | :white_check_mark: |                    | :white_check_mark: |
+| openai    | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
+| voyage    |                    | :white_check_mark: |                    |                    |
 
 By default, API keys for providers are read from environment variable (`process.env`) following the format `<PROVIDER>_API_KEY` (e.g., `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`).
@@ -47,40 +50,47 @@ Each request to AI providers is wrapped in a `Job`. which can also serialized an
 ### Method chaining
 
 ```ts
-import { openai, userPrompt } from "fluent-ai";
+import { openai, user } from "fluent-ai";
 
 const job = openai()
   .chat("gpt-4o-mini")
-  .messages([userPrompt("Hi")])
+  .messages([user("Hi")])
   .temperature(0.5)
   .maxTokens(1024);
 ```
 
 ### Declaration
 
-Alternatively, fluent-ai also supports job declaration from json object.
+Alternatively, fluent-ai supports declarative job creation using JSON objects, with full TypeScript autocompletion support.
 
 ```ts
 import { load } from "fluent-ai";
 
 const job = load({
   provider: "openai",
-  chat: {
+  type: "chat",
+  input: {
     model: "gpt-4o-mini",
-    params: {
-      messages: [{ role: "user", content: "hi" }],
-      temperature: 0.5,
-    },
+    messages: [{ role: "user", content: "hi" }],
+    temperature: 0.5,
   },
 });
 ```
 
+fluent-ai provides built-in TypeScript type definitions and schema validation for jobs:
+
+```ts
+import { type Job } from "fluent-ai"; // TypeScript type
+import { JobSchema } from "fluent-ai"; // Zod schema
+import { jobJSONSchema } from "fluent-ai"; // JSON Schema
+```
+
 ### Job serialization and deserialization
 
 To serialize a job to a JSON object, use the `dump` method:
 
 ```ts
-const obj = await job.dump();
+const payload = job.dump();
 ```
 
 This allows you to save the job's state for later use, such as storing it in a queue or database.
@@ -89,7 +99,7 @@ To recreate and execute a job from the JSON object, use the `load` function:
 ```ts
 import { load } from "fluent-ai";
 
-const job = load(obj);
+const job = load(payload);
 await job.run();
 ```
 
@@ -100,42 +110,14 @@ Chat completion, such as ChatGPT, is the most common AI service. It generates re
 ### Text generation
 
 ```ts
-import { openai, systemPrompt, userPrompt } from "fluent-ai";
+import { openai, system, user, text } from "fluent-ai";
 
 const job = openai()
   .chat("gpt-4o-mini")
-  .messages([systemPrompt("You are a helpful assistant"), userPrompt("Hi")]);
-
-const { text } = await job.run();
-```
-
-### Structured output
-
-Structured output from AI chat completions involves formatting the responses based on predefined json schema. This feature is essential when building applications with chat completions.
-
-[Zod](https://zod.dev/) is a popular type of validation library for TypeScript and JavaScript that allows developers to define and validate data schemas in a concise and type-safe manner. fluent-ai provides built-in integration for declare json-schema with zod. To use zod integration, first install `zod` from npm. Any parameter in fluent-ai that accepts a JSON schema will also work with a Zod schema.
-
-```sh
-npm install zod
-```
-
-fluent-ai provides a consistent `jsonSchema()` function for all providers to generate structured output. For more details, refer to the [structured output docs](/docs/chat-structured-outputs.md)
-
-```ts
-import { z } from "zod";
-import { openai, userPrompt } from "fluent-ai";
-
-const personSchema = z.object({
-  name: z.string(),
-  age: z.number(),
-});
-
-const job = openai()
-  .chat("gpt-4o-mini")
-  .messages([userPrompt("generate a person with name and age in json format")])
-  .jsonSchema(personSchema, "person");
-
-const { object } = await job.run();
+  .messages([system("You are a helpful assistant"), user("Hi")]);
+
+const result = await job.run();
+console.log(text(result));
 ```
 
 ### Function calling (tool calling)
@@ -163,9 +145,7 @@ To use the tool, add it to a chat job with a function-calling-enabled model, suc
 ```ts
 const job = openai().chat("gpt-4o-mini").tool(weatherTool);
 
-const { toolCalls } = await job
-  .messages([userPrompt("What is the weather in San Francisco?")])
-  .run();
+await job.messages([user("What is the weather in San Francisco?")]).run();
 ```
 
 ### Streaming support
@@ -175,52 +155,47 @@ Rather than waiting for the complete response, streaming enables the model to re
 ```ts
 const job = openai()
   .chat("gpt-4o-mini")
-  .messages([systemPrompt("You are a helpful assistant"), userPrompt("Hi")])
+  .messages([system("You are a helpful assistant"), user("Hi")])
   .stream();
 
-const { stream } = await job.run();
-for await (const chunk of stream) {
-  console.log(chunk);
+for await (const event of await job.run()) {
+  console.log(text(event));
 }
 ```
 
 fluent-ai supports streaming text, object and tool calls on demand. For more details, see the [streaming docs](/docs/chat-streaming.md).
 
-### Vision support
-
-You can leverage chat models with vision capabilities by including an image URL in your prompt.
+## Embedding
 
 ```ts
-import { openai, systemPrompt, userPrompt } from "fluent-ai";
+import { openai } from "fluent-ai";
 
-openai()
-  .chat("gpt-4o-mini")
-  .messages([
-    userPrompt("Describe the image", { image: { url: "<image_url>" } }),
-  ]);
+const job = openai().embedding("text-embedding-3-small").value("hello");
+const result = await job.run();
 ```
 
-## Embedding
+## Image generation
 
 ```ts
 import { openai } from "fluent-ai";
 
-const job = openai().embedding("text-embedding-3-small").input("hello");
+const job = openai().image("dalle-2").prompt("a cat").n(1).size("512x512");
 const result = await job.run();
 ```
 
-## Image generation
+## List models
+
+fluent-ai provides an easy way to retrieve all available models from supported providers (openai, anthropic, ollama).
 
 ```ts
 import { openai } from "fluent-ai";
 
-const job = openai().image("dalle-2").prompt("a cat").n(1).size("512x512");
-const result = await job.run();
+const models = await openai().models().run();
 ```
 
 ## Support
 
-Feel free to [open an issue](https://github.com/modalityml/fluent-ai/issues) or [start a discussion](https://github.com/modalityml/fluent-ai/discussions) if you have any questions. [Join our Discord community](https://discord.gg/HzGZWbY8Fx)
+Feel free to [open an issue](https://github.com/modalityml/fluent-ai/issues) or [start a discussion](https://github.com/modalityml/fluent-ai/discussions) if you have any questions. If you would like to request support for a new AI provider, please create an issue with details about the provider's API. [Join our Discord community](https://discord.gg/HzGZWbY8Fx) for help and updates.
 
 ## License
````
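The README changes above move job serialization to a plain `dump()`/`load()` round trip over a `provider`/`type`/`input` payload. A minimal self-contained sketch of that round trip, using hypothetical `dumpJob`/`loadJob` helpers that mirror the documented payload shape rather than fluent-ai's actual internals:

```typescript
// Illustrative payload shape, mirroring the README's `load()` example.
interface ChatJobPayload {
  provider: string;
  type: "chat";
  input: {
    model: string;
    messages: { role: string; content: string }[];
    temperature?: number;
  };
}

// dump: serialize the job description so it can sit in a queue or database.
function dumpJob(job: ChatJobPayload): string {
  return JSON.stringify(job);
}

// load: minimally validate the stored JSON before treating it as a chat job.
function loadJob(raw: string): ChatJobPayload {
  const obj = JSON.parse(raw);
  if (obj.type !== "chat" || typeof obj.provider !== "string") {
    throw new Error("not a chat job payload");
  }
  return obj as ChatJobPayload;
}

const job: ChatJobPayload = {
  provider: "openai",
  type: "chat",
  input: {
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "hi" }],
    temperature: 0.5,
  },
};

const restored = loadJob(dumpJob(job));
console.log(restored.input.model); // "gpt-4o-mini"
```

In the real library the validation step is what the exported `JobSchema` (a zod schema) provides; the manual checks here only stand in for it.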

biome.json

Lines changed: 59 additions & 0 deletions
```diff
@@ -0,0 +1,59 @@
+{
+  "$schema": "https://biomejs.dev/schemas/1.9.4/schema.json",
+  "vcs": {
+    "enabled": false,
+    "clientKind": "git",
+    "useIgnoreFile": false
+  },
+  "files": {
+    "ignoreUnknown": false,
+    "ignore": [
+      "**/node_modules/**",
+      "**/dist/**",
+      "**/build/**"
+    ]
+  },
+  "formatter": {
+    "enabled": true,
+    "indentStyle": "space",
+    "indentWidth": 2
+  },
+  "organizeImports": {
+    "enabled": true
+  },
+  "linter": {
+    "enabled": true,
+    "rules": {
+      "recommended": false,
+      "suspicious": {
+        "noExplicitAny": "off"
+      },
+      "style": {
+        "noNonNullAssertion": "off",
+        "useImportType": "off"
+      },
+      "correctness": {
+        "useExhaustiveDependencies": "warn"
+      }
+    }
+  },
+  "javascript": {
+    "formatter": {
+      "indentStyle": "space",
+      "indentWidth": 2,
+      "quoteStyle": "double",
+      "arrowParentheses": "always",
+      "bracketSameLine": false,
+      "bracketSpacing": true,
+      "jsxQuoteStyle": "double",
+      "quoteProperties": "asNeeded",
+      "semicolons": "always",
+      "trailingCommas": "all"
+    }
+  },
+  "json": {
+    "formatter": {
+      "trailingCommas": "none"
+    }
+  }
+}
```

build.ts

Lines changed: 1 addition & 0 deletions
```diff
@@ -4,6 +4,7 @@ import dts from "bun-plugin-dts";
 const defaultBuildConfig: BuildConfig = {
   entrypoints: ["./src/index.ts"],
   outdir: "./dist",
+  external: ["zod"],
 };
 
 await Promise.all([
```

bun.lock

Lines changed: 4 additions & 5 deletions
Some generated files are not rendered by default.

bunfig.toml

Lines changed: 0 additions & 2 deletions
This file was deleted.

docs/chat-streaming.md

Lines changed: 4 additions & 6 deletions
````diff
@@ -13,7 +13,7 @@ export interface StreamOptions {
 ```ts
 const { textStream } = await openai()
   .chat("gpt-4o-mini")
-  .messages([userPrompt("hi")])
+  .messages([user("hi")])
   .stream()
   .run();
 
@@ -28,9 +28,7 @@ for await (const text of textStream) {
 const { toolCallStream } = await openai()
   .chat("gpt-4o-mini")
   .tool(weatherTool)
-  .messages([
-    userPrompt("What's the weather like in Boston, Beijing, Tokyo today?"),
-  ])
+  .messages([user("What's the weather like in Boston, Beijing, Tokyo today?")])
   .stream()
   .run();
 
@@ -44,7 +42,7 @@ for await (const toolCalls of toolCallStream) {
 ```ts
 const { objectStream } = await openai()
   .chat("gpt-4o-mini")
-  .messages([userPrompt("generate a person with name and age in json format")])
+  .messages([user("generate a person with name and age in json format")])
   .responseSchema(personSchema)
   .objectStream()
   .run();
@@ -61,7 +59,7 @@ The original chunk object from providers
 ```ts
 const { stream } = await openai()
   .chat("gpt-4o-mini")
-  .messages([userPrompt("hi")]);
+  .messages([user("hi")]);
 
 for await (const chunk of stream) {
   console.log(chunk);
````
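The streaming docs above rely on `run()` resolving to an async iterable consumed with `for await`. A self-contained sketch of that consumption pattern, with a mock async generator standing in for a real provider stream (the names `mockStream`/`run` are illustrative, not fluent-ai's API):

```typescript
// A mock provider stream: yields incremental text deltas like a chat stream.
async function* mockStream(): AsyncGenerator<{ delta: string }> {
  for (const delta of ["Hel", "lo", "!"]) {
    yield { delta };
  }
}

// Stand-in for job.run(): resolves to the stream once the request starts.
async function run(): Promise<AsyncGenerator<{ delta: string }>> {
  return mockStream();
}

async function main(): Promise<string> {
  let text = "";
  // Same shape as `for await (const event of await job.run())` in the docs.
  for await (const event of await run()) {
    text += event.delta;
  }
  return text;
}

main().then((t) => console.log(t)); // "Hello!"
```

The double `await` matters: the first awaits the promise that starts the request, the second (inside `for await`) pulls chunks as they arrive.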

0 commit comments
