This guide covers detailed installation steps for different types of applications in the Awesome LLM Apps repository.

System Requirements

Python

Version: 3.8 or higher
Recommended: 3.10+

Memory

Minimum: 4GB RAM
Recommended: 8GB+ for local models

Storage

Minimum: 2GB free space
Additional space needed for local models

Internet

Required for cloud APIs
Optional for local-only setups
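You can confirm the Python requirement from the shell before installing anything; this one-liner is a sketch, not part of the repository:

```shell
# Exit 0 when the interpreter meets the 3.8 minimum, then report the result
python3 -c 'import sys; sys.exit(0 if sys.version_info >= (3, 8) else 1)' \
  && echo "Python version OK" \
  || echo "Python 3.8+ required"
```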

Installation Methods

For most users, cloning the repository and installing a single project's requirements is the fastest way to get started:
# Clone the repository
git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
cd awesome-llm-apps

# Navigate to your chosen project
cd starter_ai_agents/ai_travel_agent

# Install dependencies
pip install -r requirements.txt

# Run the application
streamlit run travel_agent.py
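Before running `pip install`, it is worth isolating each project's dependencies in a virtual environment so they don't conflict with system packages. A minimal sketch:

```shell
# Create and activate an isolated environment inside the project directory
python3 -m venv venv
. venv/bin/activate        # on Windows: venv\Scripts\activate
command -v python          # now resolves to venv/bin/python
```

Run `deactivate` when you want to leave the environment.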

Project-Specific Installation

Starter AI Agents

Basic agents with minimal dependencies. Perfect for beginners.
1. Navigate to Agent Directory

cd starter_ai_agents/<agent-name>
Popular starter agents:
  • ai_travel_agent - Travel planning agent
  • ai_data_analysis_agent - Data analysis with AI
  • openai_research_agent - Multi-agent research system
  • ai_music_generator_agent - Music generation
2. Install Dependencies

Most starter agents use these core libraries:
pip install -r requirements.txt
Common dependencies:
streamlit
agno>=2.2.10
openai
python-dotenv
3. Configure API Keys

Set environment variables:
export OPENAI_API_KEY='sk-...'
export SERPAPI_KEY='your-serpapi-key'  # If using search
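A missing key usually surfaces as a confusing error only after the app starts, so a small guard before launching can help. A sketch with a hypothetical `require_env` helper (not part of the repo); the `sk-demo` value is a stand-in:

```shell
# require_env NAME — report whether an environment variable is set
require_env() {
  eval "val=\${$1:-}"
  if [ -n "$val" ]; then echo "ok: $1"; else echo "missing: $1"; fi
}

OPENAI_API_KEY='sk-demo'     # stand-in value for the demo
unset SERPAPI_KEY
require_env OPENAI_API_KEY   # prints "ok: OPENAI_API_KEY"
require_env SERPAPI_KEY      # prints "missing: SERPAPI_KEY"
```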
4. Run the Application

streamlit run <agent-name>.py

RAG (Retrieval Augmented Generation) Applications

RAG applications require additional dependencies for vector databases and embeddings.
1. Navigate to RAG Project

cd rag_tutorials/<project-name>
Examples:
  • ai_blog_search - Agentic RAG with LangGraph
  • corrective_rag - CRAG implementation
  • local_rag_agent - Local RAG with open-source models
2. Install Dependencies

RAG applications typically include:
pip install -r requirements.txt
Common RAG dependencies:
langchain
langgraph
langchain-community
langchain-google-genai
langchain-qdrant
langchain-text-splitters
tiktoken
beautifulsoup4
python-dotenv
3. Set Up Vector Database

Some RAG apps use Qdrant or ChromaDB.

For Qdrant (cloud):
export QDRANT_URL='your-qdrant-url'
export QDRANT_API_KEY='your-qdrant-key'
For ChromaDB (local):
# No additional setup needed - runs locally
4. Run the Application

streamlit run app.py

MCP (Model Context Protocol) Agents

MCP agents require Docker and additional setup for protocol servers.
1. Install Docker

MCP agents use Docker containers for protocol servers.
Download from docker.com
# Verify installation
docker --version
docker ps
Make sure Docker is running before starting MCP agents:
docker ps
2. Navigate to MCP Agent

cd mcp_ai_agents/<agent-name>
Available MCP agents:
  • github_mcp_agent - GitHub repository analysis
  • browser_mcp_agent - Browser automation
  • notion_mcp_agent - Notion integration
3. Install Python Dependencies

pip install -r requirements.txt
MCP-specific dependencies:
openai
streamlit
mcp
anthropic
python-dotenv
4. Configure API Keys

MCP agents typically need multiple API keys.

For GitHub MCP Agent:
export OPENAI_API_KEY='your-openai-key'
export GITHUB_TOKEN='your-github-token'
Create a GitHub Personal Access Token at github.com/settings/tokens with repo scope.
5. Run the Agent

streamlit run github_agent.py
The agent will automatically manage Docker containers for MCP servers.

Advanced Multi-Agent Teams

Multi-agent systems may have additional dependencies for coordination and orchestration.
1. Navigate to Agent Team

cd advanced_ai_agents/multi_agent_apps/agent_teams/<team-name>
Examples:
  • ai_finance_agent_team - Financial analysis team
  • ai_legal_agent_team - Legal research team
  • multimodal_coding_agent_team - Code generation team
2. Install Framework Dependencies

Different teams use different frameworks; check each project's requirements.txt to see which one it needs:
pip install ag2        # AG2 (formerly AutoGen)
pip install pyautogen  # legacy AutoGen package name
3. Install All Dependencies

pip install -r requirements.txt
4. Run the Team

streamlit run <team-app>.py

Voice AI Agents

Voice agents require audio processing libraries.
1. System Audio Libraries

Install system-level audio dependencies:
# macOS
brew install portaudio

# Debian/Ubuntu
sudo apt-get install portaudio19-dev
2. Install Python Packages

cd voice_ai_agents/<agent-name>
pip install -r requirements.txt
Common voice dependencies:
openai
streamlit
sounddevice
soundfile
numpy
3. Run Voice Agent

streamlit run voice_agent.py

Local Model Setup (Ollama)

Many applications support local models via Ollama for privacy and cost savings.
1. Install Ollama

Download and install from ollama.ai, or on macOS via Homebrew:
brew install ollama
2. Pull Your Model

Download models you want to use:
ollama pull llama3.1
ollama pull mistral
ollama pull qwen2.5
Use smaller models for faster responses:
ollama pull llama3.1:8b   # 8B parameter model
ollama pull gemma2:2b     # 2B parameter model
3. Start Ollama Service

ollama serve
Verify it’s running:
curl http://localhost:11434/api/tags
4. Run Local Applications

Applications with local support typically have separate files:
streamlit run local_<app-name>.py
Example:
cd starter_ai_agents/ai_travel_agent
streamlit run local_travel_agent.py
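Local apps fail at startup when Ollama isn't serving, so a small guard can make the failure obvious. A sketch that probes Ollama's standard tags endpoint before launching:

```shell
# Start the app only if the Ollama API answers on its default port
if curl -s http://localhost:11434/api/tags >/dev/null 2>&1; then
  streamlit run local_travel_agent.py
else
  echo "Ollama is not running; start it with: ollama serve"
fi
```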

Environment Variables

Manage your API keys securely using environment variables.

Using .env Files

Many projects include .env.example files:
# Copy the example file
cp .env.example .env

# Edit with your keys
nano .env
Example .env file:
OPENAI_API_KEY=sk-...
GOOGLE_API_KEY=AI...
ANTHROPIC_API_KEY=sk-ant-...
SERPAPI_KEY=...
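The shell does not read .env files automatically; apps typically load them with python-dotenv, but you can also export the contents into your current shell with `set -a` (auto-export). A sketch with stand-in values; note this simple approach assumes values without spaces or quoting:

```shell
# Write a demo .env, then export everything in it into the shell
cat > .env <<'EOF'
OPENAI_API_KEY=sk-demo
SERPAPI_KEY=demo-key
EOF

set -a        # every assignment below is exported automatically
. ./.env      # source the file as shell assignments
set +a

echo "$OPENAI_API_KEY"   # prints sk-demo
```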

System Environment Variables

Add to ~/.bashrc or ~/.zshrc:
export OPENAI_API_KEY='sk-...'
export GOOGLE_API_KEY='AI...'
Reload your shell:
source ~/.bashrc
Security Best Practices:
  • Never commit .env files to Git
  • Use different API keys for development and production
  • Rotate your API keys regularly
  • Add .env to your .gitignore file
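You can verify that Git actually ignores your .env file with `git check-ignore`. A sketch in a throwaway repository (the `ignore-demo` directory and key value are stand-ins):

```shell
# Demonstrate .gitignore protecting .env in a fresh repository
mkdir -p ignore-demo && cd ignore-demo
git init -q
echo ".env" >> .gitignore
echo "OPENAI_API_KEY=sk-demo" > .env
git check-ignore -q .env && echo ".env is ignored"
```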

Framework-Specific Setup

OpenAI Agents SDK

pip install openai-agents
pip install openai
pip install streamlit
pip install pydantic
pip install python-dotenv
Required environment variable:
export OPENAI_API_KEY='sk-...'

Google ADK (Agent Development Kit)

pip install google-adk
pip install google-generativeai
Configuration:
# For Google AI Studio
export GOOGLE_GENAI_USE_VERTEXAI=False
export GOOGLE_API_KEY='AI...'

# For Vertex AI
export GOOGLE_GENAI_USE_VERTEXAI=True

LangChain & LangGraph

pip install langchain
pip install langgraph
pip install langchain-community
pip install langchain-openai
pip install langchain-google-genai
pip install langchain-anthropic

CrewAI

pip install crewai
pip install crewai-tools

Troubleshooting

Common Installation Issues

Problem: pip install fails or modules are missing at runtime.
Solution:
# Upgrade pip
pip install --upgrade pip

# Reinstall requirements
pip install -r requirements.txt --force-reinstall
Problem: permission denied errors during pip install.
Solution:
# Don't use sudo with pip
# Instead, use --user flag
pip install --user -r requirements.txt

# Or use a virtual environment
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
Problem: Docker is not running or unreachable.
Solution:
# Ensure Docker is running
docker ps

# On macOS, start Docker Desktop
# On Linux
sudo systemctl start docker
Problem: streamlit: command not found.
Solution:
# Ensure streamlit is installed
pip install streamlit

# If still not working, run via python
python -m streamlit run app.py
Problem: API rate limit or quota errors.
Solution:
  • Check your API usage and billing
  • Use smaller models or reduce request frequency
  • Consider using local models via Ollama

Verification

Verify your installation:
# Check Python version
python --version

# Check pip
pip --version

# Check installed packages
pip list | grep streamlit
pip list | grep openai

# Test Streamlit
streamlit hello

# Check Docker (if needed)
docker --version
docker ps
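The checks above can be wrapped in a small loop that reports each tool's status at a glance (a sketch; `check_tool` is a hypothetical helper, not part of the repo):

```shell
# check_tool NAME — report whether a command is available on PATH
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found: $1"
  else
    echo "missing: $1"
  fi
}

for t in python3 pip streamlit docker; do
  check_tool "$t"
done
```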

Updating Dependencies

Keep your installations up to date:
# Update all packages
pip install -r requirements.txt --upgrade

# Update specific package
pip install --upgrade streamlit

# Update Ollama
curl -fsSL https://ollama.ai/install.sh | sh

Next Steps

Quick Start

Run your first AI agent in minutes

AI Agents

Explore starter and advanced agents

RAG Tutorials

Build retrieval augmented generation apps

Configuration

Advanced configuration options

Additional Resources

For project-specific installation instructions, always refer to the README.md file in each project directory.
