A curated collection of LLM applications and tutorials covering RAG (Retrieval-Augmented Generation), AI agents, multi-agent teams, voice agents, MCP (Model Context Protocol), and more. Each subfolder contains a self-contained example with comprehensive documentation and runnable code.
This repository serves as a learning resource and practical toolkit. Its examples fall into the following categories:
Perfect for beginners to learn AI agent fundamentals:
- Blog to Podcast conversion, Data Analysis, Travel Planning
- Music Generation, Medical Imaging, Meme Creation
- Research automation, Web scraping, Finance analysis
Sophisticated single and multi-agent systems:
- Deep research, Consulting, System architecture
- Investment analysis, Health coaching, Journalism
- Game design teams, Legal services, Real estate
Speech-enabled applications with natural voice interaction:
- Audio tours, Customer support, Voice-activated RAG
Using Model Context Protocol for enhanced integration:
- Browser automation, GitHub management, Notion integration
- Travel planning with standardized tool access
Comprehensive retrieval-augmented generation examples (a bare-bones sketch of the core RAG loop follows this list):
- Basic to advanced RAG patterns, Corrective RAG
- Vision RAG, Hybrid search, Local implementations
- Provider-specific solutions (Gemini, Cohere, Llama)
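The common thread across these tutorials is the same retrieve-then-generate loop. As rough orientation only, here is a bare-bones sketch of that loop; it assumes the OpenAI Python SDK (1.x) with `OPENAI_API_KEY` set and `numpy` installed, and the model names are illustrative — the actual tutorials use a range of providers and vector stores:

```python
# Bare-bones RAG loop: embed documents, retrieve the most similar one, ground the answer.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

docs = [
    "Each example lives in its own folder with a README and requirements.txt.",
    "RAG tutorials range from basic pipelines to corrective and hybrid-search variants.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

doc_vecs = embed(docs)

def answer(question: str) -> str:
    q_vec = embed([question])[0]
    # Cosine similarity between the question and every document vector.
    sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    context = docs[int(np.argmax(sims))]
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Answer using this context:\n{context}\n\nQuestion: {question}"}],
    )
    return reply.choices[0].message.content

print(answer("How are the RAG tutorials organized?"))
```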
Sophisticated applications with persistent memory:
- Memory-enabled agents, Chat with external sources
- Fine-tuning tutorials for Gemma and Llama models
Crash courses for popular agent frameworks:
- Google ADK comprehensive tutorial
- OpenAI Agents SDK mastery course
Each example is designed to be independent and runnable. Here's the general workflow:
- Navigate to an example folder
- Create a virtual environment:
  ```bash
  python -m venv .venv && source .venv/bin/activate   # Windows: .venv\Scripts\activate
  pip install -U pip
  ```
- Install dependencies:
  ```bash
  pip install -r requirements.txt
  ```
- Set API keys (as needed):
  ```bash
  export OPENAI_API_KEY=your_key_here
  export ANTHROPIC_API_KEY=your_key_here   # for Claude
  export GEMINI_API_KEY=your_key_here      # for Gemini
  ```
- Run the application:
- For Streamlit apps (a minimal app skeleton is sketched after these steps):
  ```bash
  streamlit run streamlit_app.py
  ```
- For CLI scripts:
  ```bash
  python app.py    # or: python main.py
  ```
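For orientation, the Streamlit entry points generally boil down to a small script like the sketch below; the file name, widgets, and the missing LLM call are placeholders and vary per example:

```python
# streamlit_app.py — illustrative skeleton only; each example ships its own version.
import os
import streamlit as st

st.title("Example LLM app")

if not os.environ.get("OPENAI_API_KEY"):
    st.warning("Set OPENAI_API_KEY (or whichever provider key the example needs) first.")

question = st.text_input("Ask a question")
if question:
    # A real example would call its LLM or agent here and render the response.
    st.write(f"You asked: {question}")
```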
```
├── .github/
│   ├── copilot-instructions.md              # GitHub Copilot custom instructions
│   └── instructions/                        # Specialized guidance files
├── 🌱 starter_ai_agents/                    # Beginner-friendly AI agents
├── 🚀 advanced_ai_agents/                   # Sophisticated agent implementations
│   ├── single_agent_apps/                   # Advanced standalone agents
│   ├── multi_agent_apps/                    # Multi-agent coordination
│   └── autonomous_game_playing_agent_apps/  # Game-playing agents
├── 🗣️ voice_ai_agents/                      # Speech-enabled AI applications
├── 🌐 mcp_ai_agents/                        # Model Context Protocol agents
├── 📀 rag_tutorials/                        # Retrieval Augmented Generation
├── 💾 advanced_llm_apps/                    # Sophisticated LLM applications
│   ├── llm_apps_with_memory_tutorials/      # Memory-enabled applications
│   ├── chat_with_X_tutorials/               # Chat interfaces for various sources
│   └── llm_finetuning_tutorials/            # Model fine-tuning guides
├── 🧑‍🏫 ai_agent_framework_crash_course/     # Framework learning resources
├── AGENTS.md                                # Agent working procedures
├── CLAUDE.md                                # Claude/Anthropic provider hints
└── GEMINI.md                                # Gemini provider hints
```
- Isolation First: Each example is independent - no cross-folder dependencies
- Clear Documentation: Every example includes a detailed README with setup and usage instructions
- Environment Management: Always use virtual environments per example
- API Key Security: Never commit API keys; use environment variables (see the sketch after this list)
- Provider Flexibility: Examples often support multiple LLM providers
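As a concrete illustration of the API Key Security principle, examples read keys from the environment rather than hard-coding them. The helper below is a hypothetical pattern, not code taken from any specific example:

```python
import os

def require_key(name: str) -> str:
    """Fetch an API key from the environment and fail fast if it is missing."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Set {name} first, e.g. `export {name}=your_key_here`")
    return value

openai_key = require_key("OPENAI_API_KEY")  # never commit keys to the repository
```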
- Python 3.10+
- Virtual Environment (recommended)
- API Keys for your chosen providers:
- OpenAI (`OPENAI_API_KEY`)
- Anthropic Claude (`ANTHROPIC_API_KEY`)
- Google Gemini (`GEMINI_API_KEY` or `GOOGLE_API_KEY`)
- Groq (`GROQ_API_KEY`)
- Together AI (`TOGETHER_API_KEY`)
This repository includes custom GitHub Copilot instructions to help with:
- Understanding the repository structure
- Setting up examples correctly
- Following best practices for LLM applications
- Troubleshooting common issues
- Import Errors: Ensure you've installed `requirements.txt` in the correct folder
- API Issues: Verify your API keys are set and valid
- Port Conflicts: Use `--server.port 8502` for Streamlit if 8501 is busy
- Path Issues: Check that data files are in expected locations per the README
- Create examples in descriptive folders
- Include a comprehensive `README.md` with setup and usage instructions
- Add a `requirements.txt` with pinned versions
- Test your example in a fresh virtual environment
- Document all required environment variables
This project follows the license specified in the LICENSE file.