Awesome LLM AI Agent

A curated collection of LLM applications and tutorials covering RAG (Retrieval-Augmented Generation), AI agents, multi-agent teams, voice agents, MCP (Model Context Protocol), and more. Each subfolder contains a self-contained example with comprehensive documentation and runnable code.

🎯 What You'll Find

This repository serves as a comprehensive learning resource and practical toolkit for:

🌱 Starter AI Agents

Perfect for beginners to learn AI agent fundamentals:

  • Blog to Podcast conversion, Data Analysis, Travel Planning
  • Music Generation, Medical Imaging, Meme Creation
  • Research automation, Web scraping, Finance analysis

🚀 Advanced AI Agents

Sophisticated single and multi-agent systems:

  • Deep research, Consulting, System architecture
  • Investment analysis, Health coaching, Journalism
  • Game design teams, Legal services, Real estate

🗣️ Voice AI Agents

Speech-enabled applications with natural voice interaction:

  • Audio tours, Customer support, Voice-activated RAG

🌐 MCP AI Agents

Using Model Context Protocol for enhanced integration:

  • Browser automation, GitHub management, Notion integration
  • Travel planning with standardized tool access
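"Standardized tool access" here means that MCP clients and servers exchange JSON-RPC 2.0 messages, so every tool invocation has the same request shape regardless of provider. A minimal sketch (the `tools/call` method name follows the MCP specification; the tool name and arguments below are hypothetical):

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP-style JSON-RPC 2.0 request to invoke a server tool."""
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",  # standard MCP method for tool invocation
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)

# Hypothetical browser-automation tool exposed by an MCP server
msg = make_tool_call(1, "browser_navigate", {"url": "https://example.com"})
print(msg)
```

The examples in `mcp_ai_agents/` use MCP client libraries that build and transport these messages for you; the sketch only shows the wire format they share.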

📀 RAG Applications

Comprehensive retrieval-augmented generation examples:

  • Basic to advanced RAG patterns, Corrective RAG
  • Vision RAG, Hybrid search, Local implementations
  • Provider-specific solutions (Gemini, Cohere, Llama)
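All of these variants build on the same retrieve-then-augment loop: rank documents by similarity to the query, then prepend the best match to the prompt. A dependency-free sketch of that core idea (the tutorials themselves use real embedding models and vector stores; the bag-of-words "embedding" here is only illustrative):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "RAG augments the prompt with retrieved context",
    "Voice agents convert speech to text and back",
]
top = retrieve("how does RAG use retrieved context", docs)[0]
prompt = f"Answer using this context:\n{top}\n\nQuestion: how does RAG work?"
```

Every pattern in `rag_tutorials/` (corrective, hybrid, vision, local) swaps in better retrieval or adds a verification step around this same loop.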

💾 Advanced LLM Apps

Sophisticated applications with persistent memory:

  • Memory-enabled agents, Chat with external sources
  • Fine-tuning tutorials for Gemma and Llama models
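Under the hood, "memory-enabled" usually means carrying prior turns into each new prompt. A minimal sketch of that pattern (the actual tutorials use dedicated memory libraries and persistent stores rather than this in-process buffer):

```python
class ConversationMemory:
    """Rolling conversation memory: keep the last N turns and
    prepend them to each new prompt sent to the model."""

    def __init__(self, max_turns: int = 5):
        self.turns: list[tuple[str, str]] = []
        self.max_turns = max_turns

    def add(self, user: str, assistant: str) -> None:
        self.turns.append((user, assistant))
        self.turns = self.turns[-self.max_turns:]  # drop oldest turns

    def build_prompt(self, user_message: str) -> str:
        history = "\n".join(f"User: {u}\nAssistant: {a}" for u, a in self.turns)
        return f"{history}\nUser: {user_message}\nAssistant:"

memory = ConversationMemory(max_turns=2)
memory.add("Hi, I'm Riya.", "Hello Riya!")
prompt = memory.build_prompt("What's my name?")
```

Persistent-memory examples extend this by writing turns to a database or vector store and retrieving only the relevant ones, instead of replaying everything.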

🧑‍🏫 Framework Learning

Crash courses for popular agent frameworks:

  • Google ADK comprehensive tutorial
  • OpenAI Agents SDK mastery course

🚀 Quick Start

Each example is designed to be independent and runnable. Here's the general workflow:

  1. Navigate to an example folder
  2. Create a virtual environment:
    python -m venv .venv && source .venv/bin/activate  # Windows: .venv\Scripts\activate
    pip install -U pip
  3. Install dependencies:
    pip install -r requirements.txt
  4. Set API keys (as needed):
    export OPENAI_API_KEY=your_key_here
    export ANTHROPIC_API_KEY=your_key_here  # for Claude
    export GEMINI_API_KEY=your_key_here     # for Gemini
  5. Run the application:
    • For Streamlit apps: streamlit run streamlit_app.py
    • For CLI scripts: python app.py or python main.py

📁 Repository Structure

├── .github/
│   ├── copilot-instructions.md          # GitHub Copilot custom instructions
│   └── instructions/                    # Specialized guidance files
├── 🌱 starter_ai_agents/                # Beginner-friendly AI agents
├── 🚀 advanced_ai_agents/               # Sophisticated agent implementations
│   ├── single_agent_apps/               # Advanced standalone agents
│   ├── multi_agent_apps/                # Multi-agent coordination
│   └── autonomous_game_playing_agent_apps/  # Game-playing agents
├── 🗣️ voice_ai_agents/                  # Speech-enabled AI applications
├── 🌐 mcp_ai_agents/                    # Model Context Protocol agents
├── 📀 rag_tutorials/                    # Retrieval Augmented Generation
├── 💾 advanced_llm_apps/                # Sophisticated LLM applications
│   ├── llm_apps_with_memory_tutorials/  # Memory-enabled applications
│   ├── chat_with_X_tutorials/           # Chat interfaces for various sources
│   └── llm_finetuning_tutorials/        # Model fine-tuning guides
├── 🧑‍🏫 ai_agent_framework_crash_course/ # Framework learning resources
├── AGENTS.md                            # Agent working procedures
├── CLAUDE.md                            # Claude/Anthropic provider hints
└── GEMINI.md                            # Gemini provider hints

🛠️ Development Guidelines

  • Isolation First: Each example is independent; no cross-folder dependencies
  • Clear Documentation: Every example includes a detailed README with setup and usage instructions
  • Environment Management: Always use virtual environments per example
  • API Key Security: Never commit API keys; use environment variables
  • Provider Flexibility: Examples often support multiple LLM providers

📋 Common Requirements

  • Python 3.10+
  • Virtual Environment (recommended)
  • API Keys for your chosen providers:
    • OpenAI (OPENAI_API_KEY)
    • Anthropic Claude (ANTHROPIC_API_KEY)
    • Google Gemini (GEMINI_API_KEY or GOOGLE_API_KEY)
    • Groq (GROQ_API_KEY)
    • Together AI (TOGETHER_API_KEY)
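Before running an example, it can help to confirm which provider keys are actually set. A small helper (hypothetical, not part of the repo) that checks the environment, including the Gemini/Google fallback:

```python
import os

# Each provider maps to the env var(s) it accepts; Gemini accepts either name.
PROVIDER_KEYS = {
    "OpenAI": ["OPENAI_API_KEY"],
    "Anthropic Claude": ["ANTHROPIC_API_KEY"],
    "Google Gemini": ["GEMINI_API_KEY", "GOOGLE_API_KEY"],
    "Groq": ["GROQ_API_KEY"],
    "Together AI": ["TOGETHER_API_KEY"],
}

def check_keys() -> dict[str, bool]:
    """Report which providers have at least one non-empty key set."""
    return {
        provider: any(os.environ.get(var) for var in env_vars)
        for provider, env_vars in PROVIDER_KEYS.items()
    }

if __name__ == "__main__":
    for provider, ok in check_keys().items():
        print(f"{'OK ' if ok else 'MISSING'} {provider}")
```

Run it once per shell session; a missing key here is the usual cause of the "API Issues" item in Troubleshooting below.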

🤖 GitHub Copilot Integration

This repository includes custom GitHub Copilot instructions to help with:

  • Understanding the repository structure
  • Setting up examples correctly
  • Following best practices for LLM applications
  • Troubleshooting common issues

🆘 Troubleshooting

  • Import Errors: Ensure you've installed the dependencies from requirements.txt in the correct example folder
  • API Issues: Verify your API keys are set and valid
  • Port Conflicts: Use --server.port 8502 for Streamlit if 8501 is busy
  • Path Issues: Check that data files are in expected locations per the README

🤝 Contributing

  1. Create examples in descriptive folders
  2. Include comprehensive README.md with setup and usage
  3. Add requirements.txt with pinned versions
  4. Test your example in a fresh virtual environment
  5. Document all required environment variables

📄 License

This project follows the license specified in the LICENSE file.
