This service handles content transformation for BlogStorm. Each endpoint operates independently, performing specific content enhancement tasks. The service is designed around the principle of single responsibility, where each endpoint focuses on one aspect of content transformation.
The LLM Service takes various forms of input content and enhances it through different stages of AI processing. Each endpoint represents a distinct transformation stage and can be used independently or as part of a larger content pipeline. BlogStorm Publish uses this service to transform content for publication.
- Node.js (v18 or higher)
- OpenAI API key
- PostgreSQL database
- Clone the repository
- Install dependencies: `npm install`
- Copy `.env.example` to `.env` and fill in the required values
- Run migrations: `npx prisma migrate dev`
- Start the server: `npm run dev`
Transforms initial brainstorming into a structured article outline.
Request Body:

```json
{
  "idea": "string",                        // Initial brainstorming text
  "targetLength": "short | medium | long", // Optional, defaults to "medium"
  "tone": "casual | professional"          // Optional, defaults to "casual"
}
```

Response:

```json
{
  "outline": {
    "title": "string",
    "mainThesis": "string",
    "keyPoints": ["string"],
    "sections": [
      {
        "heading": "string",
        "points": ["string"]
      }
    ]
  },
  "metadata": {
    "suggestedTags": ["string"],
    "estimatedReadTime": "number"
  }
}
```
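As a sketch of the request contract above, a caller could build and pre-validate the body before sending it. The builder function below is illustrative, not part of the service; only the field names and the documented defaults (`medium`, `casual`) come from the schema.

```typescript
// Sketch: constructing an outline request that matches the documented schema.
// buildOutlineRequest is a hypothetical client-side helper.
type TargetLength = "short" | "medium" | "long";
type Tone = "casual" | "professional";

interface OutlineRequest {
  idea: string;
  targetLength: TargetLength;
  tone: Tone;
}

function buildOutlineRequest(
  idea: string,
  targetLength: TargetLength = "medium", // documented default
  tone: Tone = "casual"                  // documented default
): OutlineRequest {
  if (!idea.trim()) {
    // An empty idea would presumably come back as an INVALID_INPUT error.
    throw new Error("idea must be a non-empty string");
  }
  return { idea, targetLength, tone };
}
```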
Creates a full article draft from a structured outline.
Request Body:

```json
{
  "outline": {
    "title": "string",
    "mainThesis": "string",
    "keyPoints": ["string"],
    "sections": [
      {
        "heading": "string",
        "points": ["string"]
      }
    ]
  }
}
```

Response:

```json
{
  "content": "string", // Markdown formatted article
  "metadata": {
    "wordCount": "number",
    "readingTime": "number",
    "headings": ["string"]
  }
}
```
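To make the metadata fields concrete, here is one plausible way they could be derived from the markdown content. This is a sketch, not the service's actual implementation; in particular, the 200-words-per-minute reading speed is an assumption.

```typescript
// Sketch: deriving wordCount, readingTime, and headings from a markdown draft.
// The 200 wpm reading speed is an illustrative assumption.
interface DraftMetadata {
  wordCount: number;
  readingTime: number; // minutes, rounded up
  headings: string[];
}

function deriveMetadata(content: string): DraftMetadata {
  // Collect ATX-style headings ("# ...", "## ...", etc.), stripping the hashes.
  const headings = content
    .split("\n")
    .filter((line) => /^#{1,6}\s/.test(line))
    .map((line) => line.replace(/^#{1,6}\s+/, "").trim());
  const wordCount = content.split(/\s+/).filter(Boolean).length;
  return { wordCount, readingTime: Math.ceil(wordCount / 200), headings };
}
```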
Improves the quality and engagement of an existing draft.
Request Body:

```json
{
  "content": "string", // Markdown formatted article
  "focusAreas": ["clarity", "engagement", "tone"]
}
```

Response:

```json
{
  "enhancedContent": "string",
  "changes": [
    {
      "type": "string",
      "description": "string"
    }
  ]
}
```
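The `focusAreas` field accepts only the three documented values, so server-side validation might look roughly like this. The helper is a sketch; only the allowed values come from the schema above.

```typescript
// Sketch: rejecting undocumented focusAreas values before calling the LLM.
const ALLOWED_FOCUS_AREAS = ["clarity", "engagement", "tone"] as const;
type FocusArea = (typeof ALLOWED_FOCUS_AREAS)[number];

function validateFocusAreas(areas: string[]): FocusArea[] {
  const invalid = areas.filter((a) => !ALLOWED_FOCUS_AREAS.includes(a as FocusArea));
  if (invalid.length > 0) {
    // Would surface to the client as an INVALID_INPUT error.
    throw new Error(`invalid focus areas: ${invalid.join(", ")}`);
  }
  return areas as FocusArea[];
}
```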
Prepares content for publishing to Ghost, including metadata generation.
Request Body:

```json
{
  "content": "string" // Markdown formatted article
}
```

Response:

```json
{
  "title": "string",
  "slug": "string",
  "metaDescription": "string",
  "tags": ["string"],
  "content": "string", // Ghost-formatted content
  "excerpt": "string"
}
```
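For the `slug` field, a typical derivation from the generated title might look like the sketch below. The exact slug rules this service applies are not documented here, so treat this as an illustration of the general technique, not the service's behavior.

```typescript
// Sketch: deriving a URL-safe slug from an article title.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .normalize("NFKD")
    .replace(/[\u0300-\u036f]/g, "") // strip combining diacritics
    .replace(/[^a-z0-9\s-]/g, "")    // drop punctuation
    .trim()
    .replace(/[\s-]+/g, "-");        // collapse whitespace/hyphens
}
```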
All endpoints follow a consistent error response format:
```json
{
  "error": {
    "code": "string",
    "message": "string",
    "details": {} // Optional additional information
  }
}
```
Common error codes:

- `INVALID_INPUT`: Request body fails validation
- `LLM_ERROR`: Error communicating with OpenAI
- `PROCESSING_ERROR`: Error during content transformation
- `RATE_LIMIT`: Too many requests to the LLM service
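Since every endpoint shares this envelope, a single helper can produce it. The function below is a sketch of that pattern; only the envelope shape and the error codes come from the documentation above.

```typescript
// Sketch: building the shared error envelope; details is omitted when absent.
interface ErrorResponse {
  error: {
    code: string;
    message: string;
    details?: Record<string, unknown>;
  };
}

function errorResponse(
  code: string,
  message: string,
  details?: Record<string, unknown>
): ErrorResponse {
  return { error: details ? { code, message, details } : { code, message } };
}
```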
Run tests with `npm test`. Generate a coverage report with `npm run test:coverage`.
```env
PORT=3001
DATABASE_URL=postgresql://user:password@localhost:5432/llm_pipeline
OPENAI_API_KEY=your_api_key
NODE_ENV=development
```
The service includes detailed logging of:
- Processing duration for each transformation
- LLM token usage
- Error rates and types
- Request patterns
Access logs are available at `/logs` in development mode.
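The first logged metric, per-transformation processing duration, can be captured with a small timing wrapper. The sketch below is illustrative; the log-entry shape is an assumption, not the service's actual log format.

```typescript
// Sketch: timing a transformation and emitting a duration log entry.
// TimingEntry is a hypothetical shape for illustration.
interface TimingEntry {
  label: string; // e.g. "outline", "draft", "enhance", "publish"
  ms: number;    // processing duration in milliseconds
}

function timed<T>(label: string, fn: () => T, log: (entry: TimingEntry) => void): T {
  const start = Date.now();
  try {
    return fn();
  } finally {
    // Logged even when fn throws, so error cases are timed too.
    log({ label, ms: Date.now() - start });
  }
}
```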
Each endpoint is designed to function independently, following the single responsibility principle. This allows for:
- Independent scaling of different transformation types
- Isolated testing and monitoring
- Flexible pipeline composition
- Easier maintenance and updates
The service uses a queue system for long-running transformations, with results stored in the database for reliability.
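The queue-plus-persistence pattern can be sketched as below, with an in-memory array standing in for the queue and a `Map` standing in for the database. The real service presumably uses a durable queue and PostgreSQL via Prisma; this only illustrates the flow of enqueue, process, and store.

```typescript
// Sketch: long-running transformations queued, then results persisted
// so clients can fetch them later. All names here are illustrative.
type Job = { id: string; run: () => string };

class TransformationQueue {
  private jobs: Job[] = [];

  // `results` stands in for the database table holding transformation output.
  constructor(private results: Map<string, string>) {}

  enqueue(job: Job): void {
    this.jobs.push(job);
  }

  // Drain the queue, storing each result keyed by job id.
  drain(): void {
    while (this.jobs.length > 0) {
      const job = this.jobs.shift()!;
      this.results.set(job.id, job.run());
    }
  }
}
```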