- Supported Models
- Features
- How to Use
- Quick Start Guide
- Getting Started
- How to Connect Apps
- API Documentation
- Build Steps
- Bundling for Release
- Deploy
- FAQ
The goal of this project is to be an all-in-one solution for running local AI that is easy to install, set up, and use. It handles all the basic building blocks of AI: inference, memory retrieval (RAG), storage (vector DB), model file management, and agent/workflow building.
Demo video: obrew-ai.mp4
OpenBrew is a native app with a GUI that can be configured to allow access from other apps you write or from third-party services, making it an ideal engine for AI workloads built on your own tech stack.
The backend runs a web server that acts as the main gateway to the suite of tools. A web UI, OpenBrew: WebUI, is provided to access this server. You can also run in headless mode and access the server programmatically via the API.
- Launch the desktop app and choose an app to start using
- or navigate your browser to any web app that supports the API
- or connect with a service provider or custom stack that supports the OpenBrew API
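For the headless, programmatic route, a request to the local server might look like the sketch below. The port, endpoint path, and payload fields here are illustrative assumptions, not documented OpenBrew API details; consult the API Documentation section for the actual schema.

```python
import json
import urllib.request

# NOTE: the base URL, route, and payload fields are hypothetical
# placeholders for illustration -- check the OpenBrew API docs for
# the real schema exposed by your local server.
BASE_URL = "http://localhost:8008"

def build_completion_request(prompt: str, base_url: str = BASE_URL) -> urllib.request.Request:
    """Construct (but do not send) an HTTP request to a local inference endpoint."""
    payload = json.dumps({"prompt": prompt, "max_tokens": 128}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/v1/completions",  # hypothetical route
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_completion_request("Summarize this document.")
    print(req.full_url)
    # Actually sending the request requires a running OpenBrew
    # server in headless mode:
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp))
```

Because the request is built separately from being sent, the payload can be inspected or logged before any server is running.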
Demo video: query-doc.mp4
- 8 GB disk space
- 4 GB memory
✅ Run locally
✅ Windows OS installer
✅ macOS installer
✅ Save chat history
✅ CPU & GPU support
❌ Linux installer
❌ Production ready: This project is under active development
✅ Inference: Run open-source LLM models locally
✅ Embeddings: Create vector embeddings from a file/website/media to augment memory
✅ Knowledge Base: Search a vector database with Llama Index to retrieve information
✅ Agents: A customized LLM that can choose tools itself or be assigned specific tools
✅ Tool Use: Choose from pre-made tools or write your own
✅ Multi-modal:
- ✅ image
- ✅ text
- ❌ video
- ❌ audio
- ❌ 3D
❌ Observability: Source citations, logging, tracing
❌ Cached Context & Extended Context
❌ Voice-to-Text and Text-to-Speech
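The Embeddings and Knowledge Base features above reduce to nearest-neighbor search over vectors. The following is a minimal pure-Python illustration of that retrieval idea using toy hand-written vectors; it is a conceptual sketch, not OpenBrew's actual LlamaIndex-backed pipeline, where the vectors would come from an embedding model.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, store, top_k=1):
    """Return the top_k (score, text) pairs most similar to query_vec."""
    scored = [(cosine_similarity(query_vec, vec), text) for text, vec in store.items()]
    return sorted(scored, reverse=True)[:top_k]

# Toy "vector DB": in a real RAG pipeline these vectors are produced
# by an embedding model from your files, websites, or media.
store = {
    "cats are mammals": [0.9, 0.1, 0.0],
    "rust is a language": [0.0, 0.2, 0.9],
}
print(retrieve([0.8, 0.2, 0.1], store))  # top match: "cats are mammals"
```

A real vector database adds persistence and approximate-nearest-neighbor indexing on top of this same similarity search so it scales past a handful of documents.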
This is a local-first project. The ultimate goal is to support many providers through one API.
✅ Open-Source (GGUF format)
❌ Google Gemini
❌ OpenAI
❌ Anthropic
❌ Mistral AI
❌ Groq