Nargis is a full-stack demo of a voice-first AI productivity assistant (tasks, habits, journaling) with a real-time pipeline and a dual-mode AI backend (API + local fallback).
This README is purposely concise — detailed operational and design docs live under docs/ and the package READMEs in apps/.
Prerequisites: Bun, a Node-compatible shell, Go (for the gateway), Python (for the AI service), and Docker (optional).
Clone and install:
```sh
git clone https://github.com/divijg19/Nargis.git
cd Nargis
bun install
```

Run the full local dev workflow:
```sh
# Local (all services)
bun run dev

# Hybrid: Docker backend + local frontend
bun run dev:hybrid
```

`bun run dev` runs all services locally with hot-reloading; make sure the Docker services are stopped first (`bun run dev:docker:down`).
| Command | Description |
|---|---|
| `bun run dev` | Starts all services locally with hot-reloading (for backend work). |
| `bun run dev:hybrid` | Starts the Docker backend and local frontend (for frontend work). |
| `bun run dev:docker:up` | Starts the Docker Compose services only. |
| `bun run dev:docker:down` | Stops the Docker Compose services. |
| `bun run build` | Builds all applications in the monorepo. |
| `bun run lint` | Lints the entire monorepo with Biome. |
| `bun run typecheck` | Type-checks all TypeScript packages. |
- Monorepo setup with modern tooling (Bun, Turborepo, Biome).
- Fully functional, polyglot backend (Go, Python).
- Dual-Mode AI Pipeline implemented and working.
- End-to-end, real-time voice-to-response functionality.
- Flexible hybrid and fully-local development workflows configured.
- Implement short-term conversational memory (agent state graph scaffolding).
- Integrate a service layer for backend logic (`apps/api-py/services/*`).
- Define and implement core AI "Tools" (e.g., `create_task`) exposed to the agent runtime (`apps/api-py/agent/tools.py`).
- Implement an agent orchestration layer using LangGraph (`apps/api-py/agent/graph.py`) to bind LLMs to tools and manage reasoning.
- Integrate a PostgreSQL database for long-term persistence (user profiles, goals).
- Implement semantic search over journal entries (RAG).
- Create scheduled, proactive agent triggers (e.g., "morning briefing").
- Develop AI-powered insights and progress reports.
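One roadmap item above, scheduled proactive triggers, can be sketched with the standard library alone. This is an illustrative stand-in, not project code: the 08:00 trigger time, the `briefing()` body, and the use of `sched` are all assumptions.

```python
import sched
import time
from datetime import datetime, timedelta
from typing import Optional

def seconds_until(hour: int, minute: int = 0, now: Optional[datetime] = None) -> float:
    """Seconds from `now` until the next occurrence of hour:minute (local time)."""
    now = now or datetime.now()
    target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now:
        target += timedelta(days=1)  # already past today's slot; fire tomorrow
    return (target - now).total_seconds()

def morning_briefing() -> str:
    # Placeholder: a real trigger would invoke the agent runtime here.
    return "Good morning! Here is your briefing."

if __name__ == "__main__":
    scheduler = sched.scheduler(time.time, time.sleep)
    # Queue the briefing for the next 08:00 local time.
    scheduler.enter(seconds_until(8), 1, morning_briefing)
    # scheduler.run()  # blocking; left commented out in this sketch
```

In production this role is usually played by cron, APScheduler, or a task queue rather than an in-process `sched` loop; the sketch only shows the shape of the trigger.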
The backend has been refactored to a three-tier architecture to support agentic reasoning and safe tool use.
- Router: HTTP/FastAPI routers remain thin. They validate requests, enforce auth, and delegate domain work to the service layer (`apps/api-py/routers/*`).
- Service: Business logic and DB access moved into `apps/api-py/services/*` (for example, `services/tasks.py`). Services accept simple Pydantic-compatible inputs and a SQLAlchemy `Session` to enable unit testing and reuse by agent tools.
- Agent: The LangGraph-based agent lives in `apps/api-py/agent/`. Tools are defined in `agent/tools.py` (strict Pydantic `args_schema`) and bound to an LLM-driven `StateGraph` in `agent/graph.py`. The API invokes the agent for reasoning-heavy flows (e.g., voice-driven task creation).
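The router → service split can be illustrated with a small self-contained sketch. To keep it runnable without FastAPI or SQLAlchemy, `sqlite3` stands in for the SQLAlchemy `Session` and the "router" is a plain function; the `create_task` name and the `tasks` schema are illustrative assumptions, not the actual project code.

```python
import sqlite3

# Service layer: owns DB logic and validation. It takes a connection/session
# argument so unit tests and agent tools can reuse it with whatever session
# they already hold.
def create_task(conn: sqlite3.Connection, title: str) -> dict:
    if not title.strip():
        raise ValueError("title must be non-empty")
    cur = conn.execute("INSERT INTO tasks (title) VALUES (?)", (title,))
    conn.commit()
    return {"id": cur.lastrowid, "title": title}

# Router layer: stays thin -- it unpacks the request payload, delegates all
# domain work to the service, and shapes the HTTP-style response.
def post_task(conn: sqlite3.Connection, payload: dict) -> dict:
    title = payload.get("title", "")
    task = create_task(conn, title)
    return {"status": 201, "body": task}

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, title TEXT)")
    print(post_task(conn, {"title": "Water the plants"}))
```

Because the service never touches the web framework, the same `create_task` can be called from a FastAPI route, a unit test, or an agent tool.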
Key files:
- `apps/api-py/services/` — service modules (DB logic, validation, idempotency helpers).
- `apps/api-py/agent/tools.py` — Pydantic-validated LangChain tools exposed to the agent.
- `apps/api-py/agent/graph.py` — LangGraph `StateGraph` composition and the compiled `agent_app` runtime.
- `apps/api-py/main.py` — the audio pipeline now delegates LLM reasoning to the agent runtime and uses legacy LLM calls as a fallback.
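The validate-then-dispatch pattern behind `agent/tools.py` can be sketched without LangChain. The dict-based `args_schema` below is a dependency-free stand-in for a Pydantic model, and the names (`Tool`, `dispatch`, `create_task`) are illustrative, not the project's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    args_schema: dict        # arg name -> expected type (stand-in for Pydantic)
    fn: Callable[..., object]

    def invoke(self, args: dict) -> object:
        # Reject bad arguments before execution, as a strict args_schema would.
        for key, typ in self.args_schema.items():
            if key not in args or not isinstance(args[key], typ):
                raise TypeError(f"{self.name}: bad or missing argument {key!r}")
        return self.fn(**args)

def create_task(title: str) -> dict:
    return {"created": title}

TOOLS = {"create_task": Tool("create_task", {"title": str}, create_task)}

def dispatch(tool_name: str, args: dict) -> object:
    """What an agent graph node does after the LLM selects a tool and its args."""
    return TOOLS[tool_name].invoke(args)
```

The key property is that the LLM never calls services directly: every tool invocation passes through schema validation first, which is what makes tool use safe to expose to model-generated arguments.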
Environment & dependencies:
- Add `langgraph`, `langchain`, and `langchain-groq` to the Python environment used by `apps/api-py` (see `apps/api-py/pyproject.toml`, and commit `apps/api-py/uv.lock` for reproducible installs).
- Configure AI keys and endpoints as usual (`GROQ_API_KEY`, `DEEPGRAM_API_KEY`, `LLM_URL`, `TTS_URL`, etc.) and add any agent-specific settings (e.g., `AGENT_MODEL`, `AGENT_TEMPERATURE`).
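Reading the agent-specific settings might look like the sketch below. It uses plain `os.environ` rather than whatever the project's settings module actually does, and the `"default-model"` fallback is a placeholder, not a real default:

```python
import os
from dataclasses import dataclass
from typing import Mapping, Optional

@dataclass(frozen=True)
class AgentSettings:
    model: str
    temperature: float

def load_agent_settings(env: Optional[Mapping[str, str]] = None) -> AgentSettings:
    """Build agent settings from the environment, with illustrative defaults."""
    env = os.environ if env is None else env
    return AgentSettings(
        model=env.get("AGENT_MODEL", "default-model"),      # placeholder default
        temperature=float(env.get("AGENT_TEMPERATURE", "0.2")),
    )
```

Accepting an optional mapping keeps the loader unit-testable without mutating the process environment.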
The refactor keeps the HTTP surface unchanged while enabling safe, testable agent tooling.
This is currently a personal project, but suggestions and feedback are welcome! Please feel free to open an issue or a pull request.
Divij Ganjoo
- Portfolio: divijganjoo.me
- LinkedIn: in/divij-ganjoo
- GitHub: @divijg19
Built with ❤️ and ☕ as a demonstration of modern full-stack development with AI/ML integration.