A powerful TypeScript client for the Model Context Protocol (MCP) that uses Ollama as the LLM backend, enabling local, private, AI-powered interactions with MCP servers. Run AI tools and access resources without sending data to the cloud.
- Phase 8 Complete - Full testing suite with comprehensive unit, integration, and end-to-end tests
- Phase 9 In Progress - Documentation and examples
- Privacy-First: All LLM processing happens locally via Ollama - your data never leaves your machine
- Full MCP Support: Connect to any MCP server via stdio, HTTP, or SSE transport
- Smart Tool Orchestration: Automatic tool discovery, validation, and execution
- Resource Management: Access and transform MCP resources with caching
- Conversation Management: Persistent sessions with context-window optimization
- Multi-Server Support: Connect to and orchestrate multiple MCP servers simultaneously
- TypeScript First: Fully typed APIs with comprehensive IntelliSense support
- Performance Optimized: Connection pooling, request queuing, and smart caching
- Resilient: Circuit breakers, exponential backoff, and automatic retry strategies
- Structured Logging: Winston-based logging with multiple output formats
- Plugin System: Extend functionality with custom plugins and transformers
- CLI Interface: Feature-rich command-line interface with interactive mode
- Analytics: Built-in tool usage analytics and performance metrics
- Secure: Input validation, rate limiting, and secure transport options
- Node.js 18+ - Download
- Ollama - Installation Guide
  # macOS
  brew install ollama

  # Linux
  curl -fsSL https://ollama.ai/install.sh | sh

  # Windows
  # Download from https://ollama.ai/download/windows
- TypeScript 5.x (for development)
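With Ollama installed, pull at least one model before starting the client; `llama3.2` below is just an example of a model from the Ollama library:

# Pull a model for the client to use (any Ollama model works)
ollama pull llama3.2

# Verify Ollama is serving and lists the model
ollama list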
# Install as a dependency
npm install ollama-mcp-client

# Or install globally for CLI usage
npm install -g ollama-mcp-client

# Or build from source
git clone https://github.com/mayanksingh09/ollama-mcp-client.git
cd ollama-mcp-client
npm install
npm run build
npm link # For CLI usage
# Initialize configuration
ollama-mcp config init
# Discover and connect to MCP servers
ollama-mcp discover --save
ollama-mcp connect filesystem
# Start interactive chat
ollama-mcp chat
import { OllamaMCPClient } from 'ollama-mcp-client';

// Initialize the client
const client = new OllamaMCPClient({
  ollama: {
    host: 'http://localhost:11434',
    // model: 'llama3.2' // Optional: specify a model; otherwise one is auto-detected
  },
  logging: {
    level: 'info'
  }
});

// Connect to a local MCP server via stdio
const serverId = await client.connectToServer({
  type: 'stdio',
  command: 'node',
  args: ['./mcp-server.js']
});

// Or connect to a remote server via HTTP
const remoteId = await client.connectToServer({
  type: 'http',
  url: 'https://api.example.com/mcp',
  headers: {
    'Authorization': 'Bearer token'
  }
});

// List available tools
const tools = await client.listTools();
console.log('Available tools:', tools);

// Chat with Ollama using MCP tools
const response = await client.chat('Help me calculate 2+2', {
  includeHistory: true,
  temperature: 0.7
});
console.log('Response:', response.message);

// Disconnect when done
await client.disconnectAll();
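Tools can also be invoked directly instead of going through chat. This is a minimal sketch using the callTool method documented in the API reference below; the tool name and arguments are hypothetical and depend on which server is connected:

// Call a tool on a connected server directly.
// 'read_file' and its arguments are hypothetical; pick a real tool from client.listTools().
const result = await client.callTool('read_file', { path: './README.md' });
console.log('Tool result:', result);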
# Install dependencies
npm install
# Build the project
npm run build
# Run tests
npm test
# Development mode with watch
npm run dev
# Lint and format
npm run lint
npm run format
ollama-mcp-client/
├── src/
│   ├── client/      # MCP client implementation
│   ├── ollama/      # Ollama API integration
│   ├── transport/   # Transport layer (stdio, HTTP, SSE)
│   ├── session/     # Session and state management
│   ├── protocol/    # MCP protocol message handlers
│   ├── types/       # TypeScript type definitions
│   └── utils/       # Utility functions
├── tests/           # Test files
├── docs/            # Documentation
└── examples/        # Usage examples
    ├── basic-usage.ts          # Basic client usage
    └── example-mcp-server.js   # Example MCP server
- API Documentation - Complete API reference (auto-generated)
- Tutorials - Step-by-step guides
- Configuration - Configuration reference
- CLI Reference - Command-line interface guide
- Architecture - System design and architecture
- Troubleshooting - Common issues and solutions
- Contributing - Contribution guidelines
const client = new OllamaMCPClient({
  ollama: {
    host: string,              // Ollama server URL (default: http://localhost:11434)
    model?: string,            // Optional model (e.g. 'llama3.2', 'mistral', 'codellama'); otherwise one is auto-detected
    timeout?: number,          // Request timeout in ms (default: 60000)
    headers?: Record<string, string>, // Custom headers for Ollama requests
  },
  mcp: {
    name: string,              // Client name for identification
    version: string,           // Client version
    capabilities: {            // MCP capabilities to advertise
      tools?: {},
      resources?: {},
      prompts?: {},
    }
  },
  session: {
    persist?: boolean,         // Enable session persistence (default: false)
    storagePath?: string,      // Path for session storage
    maxHistory?: number,       // Max conversation history (default: 100)
  },
  logging: {
    level?: string,            // Log level (error, warn, info, debug)
    file?: string,             // Log file path
    console?: boolean,         // Enable console logging (default: true)
  },
  performance: {
    connectionPoolSize?: number, // HTTP connection pool size (default: 10)
    cacheSize?: number,          // Cache size in MB (default: 100)
    requestTimeout?: number,     // Global request timeout in ms (default: 30000)
  }
});
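As a concrete example, a client that persists sessions to disk and logs to a file might be configured like this (the paths and model name are illustrative, not defaults):

import { OllamaMCPClient } from 'ollama-mcp-client';

const client = new OllamaMCPClient({
  ollama: {
    host: 'http://localhost:11434',
    model: 'llama3.2',                        // illustrative model name
  },
  session: {
    persist: true,
    storagePath: './.ollama-mcp/sessions',    // illustrative path
    maxHistory: 50,
  },
  logging: {
    level: 'debug',
    file: './logs/ollama-mcp.log',            // illustrative path
    console: false,
  },
});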
// Connect to servers
await client.connectToServer(options: TransportOptions): Promise<string>
await client.disconnectFromServer(serverId: string): Promise<void>
await client.disconnectAll(): Promise<void>
client.getConnectedServers(): ServerInfo[]
// Tool discovery and execution
await client.listTools(serverId?: string): Promise<Tool[]>
await client.callTool(name: string, args: any, serverId?: string): Promise<ToolResult>
await client.validateToolCall(name: string, args: any): Promise<ValidationResult>
// Resource operations
await client.listResources(serverId?: string): Promise<Resource[]>
await client.readResource(uri: string, serverId?: string): Promise<ResourceContent>
await client.subscribeToResource(uri: string, callback: Function): Promise<Subscription>
// Chat with Ollama using MCP tools
await client.chat(message: string, options?: ChatOptions): Promise<ChatResponse>
await client.streamChat(message: string, options?: StreamOptions): AsyncIterator<ChatChunk>
await client.getConversationHistory(): Promise<Message[]>
await client.clearConversation(): Promise<void>
// Session operations
client.getSession(): SessionInfo | null
await client.saveSession(path?: string): Promise<void>
await client.loadSession(path: string): Promise<void>
await client.exportSession(): Promise<SessionData>
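For example, the streaming API can be consumed like the sketch below; it assumes the iterator returned by streamChat is async-iterable, and the exact shape of each ChatChunk is defined by the library's types:

// Stream a response chunk-by-chunk instead of waiting for the full reply
// (assumes streamChat's result can be consumed with for await; ChatChunk shape per the library's types)
const stream = await client.streamChat('Summarize the tools you can use');
for await (const chunk of stream) {
  console.log(chunk);
}

// Inspect, then clear, the conversation history
const history = await client.getConversationHistory();
console.log(`Messages so far: ${history.length}`);
await client.clearConversation();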
Build a private AI assistant that can interact with your local files, databases, and APIs without sending data to the cloud.
Create AI-powered development tools that can analyze code, run tests, and manage projects using local LLMs.
Process sensitive data with AI while keeping it fully private and meeting compliance requirements.
Automate complex workflows by combining multiple MCP servers with Ollama's reasoning capabilities.
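For instance, the personal AI assistant scenario might look like the sketch below, assuming the @modelcontextprotocol/server-filesystem package (mentioned under Troubleshooting) can be launched via npx; the exact invocation and prompt are illustrative:

import { OllamaMCPClient } from 'ollama-mcp-client';

// Sketch of the personal-assistant use case: a local model plus a filesystem MCP server.
// The npx invocation of @modelcontextprotocol/server-filesystem is illustrative.
const client = new OllamaMCPClient({ ollama: { host: 'http://localhost:11434' } });

await client.connectToServer({
  type: 'stdio',
  command: 'npx',
  args: ['@modelcontextprotocol/server-filesystem', process.cwd()],
});

const reply = await client.chat('List the markdown files in this project and summarize each one');
console.log(reply.message);

await client.disconnectAll();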
| Feature | Ollama MCP Client | Cloud-based Alternatives |
|---|---|---|
| Privacy | ✅ 100% local processing | ❌ Data sent to cloud |
| Cost | ✅ Free after setup | ❌ Per-token pricing |
| Speed | ✅ Low latency (local) | |
| Offline | ✅ Works offline | ❌ Requires internet |
| Customization | ✅ Full control | |
| Models | ✅ Any Ollama model | |
# Run all tests
npm test
# Run specific test suites
npm run test:unit # Unit tests only
npm run test:integration # Integration tests
npm run test:e2e # End-to-end tests
npm run test:coverage # Generate coverage report
# Watch mode for development
npm run test:watch
Ollama Connection Failed
# Ensure Ollama is running
ollama serve
# Check Ollama is accessible
curl http://localhost:11434/api/tags
MCP Server Not Found
# Install MCP server packages
npm install -g @modelcontextprotocol/server-filesystem
# Discover available servers
ollama-mcp discover
Performance Issues
# Enable debug logging
export LOG_LEVEL=debug
# Check system resources
ollama-mcp config set performance.cacheSize 200
For more detailed troubleshooting, see our Troubleshooting Guide.
- Phase 10: Production readiness with Docker support
- Phase 11: Performance optimization and benchmarking
- Phase 12: Security hardening and compliance features
- Browser extension for web-based MCP interactions
- Support for more LLM providers (while maintaining local-first approach)
- Visual workflow builder for complex tool chains
- Advanced context management with RAG support
We welcome contributions! Please see our Contributing Guide for details.
# Clone the repository
git clone https://github.com/mayanksingh09/ollama-mcp-client.git
cd ollama-mcp-client
# Install dependencies
npm install
# Run in development mode
npm run dev
# Run tests
npm test
MIT License - see LICENSE for details.
- Ollama for providing local LLM capabilities
- Anthropic for the Model Context Protocol specification
- All contributors and users of this project
- Report Issues
- Discussions
- Documentation
Built with ❤️ for the local-first AI community