[alpha] Interact with Livepeer AI pipelines
Livepeer AI testnet is live! This unofficial client library makes it easy to integrate Livepeer's decentralized AI pipelines into your app. Usage is currently sponsored by Livepeer while the product is in beta, so it's free to get started!
This AI SDK is written in TypeScript and published for use from either TypeScript or JavaScript, with both CommonJS and ESM support. It is under active development.
More information about Livepeer AI is available on our docs site.
More pipelines and models are coming online regularly. If you are interested in contributing or requesting specific models, please join our Discord! We have an active community and welcome your participation.
```sh
npm install @karbondallas/ai-sdk
```
Compatible with modern browsers and Node-like runtimes!
TODO: add compatibility table
We have designed this library with developer experience in mind. That said, development is still at an early stage, so your feedback is valuable! If you run into any problems or have thoughts on the design, please let us know!
Below are several examples of how you might make use of this library.
Generate an image from a prompt using the warm model with default parameters:
```ts
import { LivepeerAI } from '@karbondallas/ai-sdk'

const ai = new LivepeerAI()
const response = await ai.textToImage({
  prompt: 'grainy photo of a black cat',
})
```
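The response type isn't documented above, so handling the result depends on the SDK's actual shape. As one hedged sketch: if a pipeline hands back the generated image as a base64-encoded string (an assumption — check the SDK's response type), writing it to disk in Node is a one-liner:

```ts
import { writeFileSync } from 'fs'

// ASSUMPTION: the pipeline result arrives as a base64-encoded image string.
// Verify the real response shape before relying on this helper.
function saveBase64Image(base64: string, path: string): number {
  const buffer = Buffer.from(base64, 'base64')
  writeFileSync(path, buffer)
  return buffer.length // bytes written
}

// A tiny base64 PNG stands in for a real pipeline result here.
const sample =
  'iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mNk+A8AAQUBAScY42YAAAAASUVORK5CYII='
const bytesWritten = saveBase64Image(sample, '/tmp/generated-cat.png')
console.log(bytesWritten)
```

If the SDK instead returns a hosted URL for the output, you would fetch it and write the bytes the same way.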
Generate a new image from a text prompt and an existing image using the warm model with default parameters:
```ts
import { readFileSync } from 'fs'
import { LivepeerAI } from '@karbondallas/ai-sdk'

const ai = new LivepeerAI()
const image = readFileSync('./cat.jpg')
const response = await ai.imageToImage({
  prompt: 'retro video game sprite of a black cat',
  image,
})
```
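The examples here read input files with Node's `fs`; in a browser, where `fs` isn't available, you might load the image bytes with `fetch` instead. Whether the SDK's `image` parameter accepts a raw `Uint8Array` (rather than, say, a `Blob`) is an assumption — check the accepted input types:

```ts
// Load image bytes with fetch instead of readFileSync. Works in modern
// browsers and in Node 18+, which ships a global fetch.
async function loadImageBytes(url: string): Promise<Uint8Array> {
  const res = await fetch(url)
  if (!res.ok) throw new Error(`failed to fetch ${url}: ${res.status}`)
  return new Uint8Array(await res.arrayBuffer())
}

// Demonstrated with a data: URL so the snippet is self-contained;
// in practice you would pass a real image URL.
loadImageBytes('data:application/octet-stream;base64,Y2F0').then((bytes) => {
  console.log(bytes.length) // 3 bytes: "cat"
})
```

The resulting bytes could then be passed as the `image` argument shown above, assuming the SDK accepts raw bytes in that position.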
Generate a short video from an image using the warm model with default parameters:
```ts
import { readFileSync } from 'fs'
import { LivepeerAI } from '@karbondallas/ai-sdk'

const ai = new LivepeerAI()
const image = readFileSync('./cat.jpg')
const response = await ai.imageToVideo({
  image,
})
```
Upscale an existing image to a higher resolution using the warm model with default parameters:
```ts
import { readFileSync } from 'fs'
import { LivepeerAI } from '@karbondallas/ai-sdk'

const ai = new LivepeerAI()
const image = readFileSync('./cat.jpg')
const response = await ai.upscale({
  prompt: 'photorealistic 4k image of a black cat',
  image,
})
```
Generate text from an audio sample using the warm model with default parameters:
```ts
import { readFileSync } from 'fs'
import { LivepeerAI } from '@karbondallas/ai-sdk'

const ai = new LivepeerAI()
const audio = readFileSync('./meow.mp3')
const response = await ai.audioToText({
  audio,
})
```