Getting Started with Support Bot

This guide will walk you through setting up Support Bot on your local machine. You’ll have a fully functional AI-powered incident resolution system running in under 10 minutes.

Prerequisites

Before you begin, ensure you have the following installed:
  • Python 3.11+: Support Bot requires Python 3.11 or higher
  • Docker & Docker Compose: For running PostgreSQL and Qdrant databases
  • Node.js 18+: For the React frontend (with npm or yarn)
  • Git: To clone the repository
Make sure Docker is running before proceeding with the setup steps.
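
If you want a quick sanity check before starting, a short script like the following (an illustrative sketch, not part of the repository) can confirm the tools above are available on your machine:

```python
import shutil
import sys

def check_prerequisites():
    """Return a list of missing prerequisites for Support Bot."""
    missing = []
    # Support Bot requires Python 3.11 or higher.
    if sys.version_info < (3, 11):
        missing.append("Python 3.11+")
    # Docker, Node.js, and Git must be on PATH.
    for tool in ("docker", "node", "git"):
        if shutil.which(tool) is None:
            missing.append(tool)
    return missing

if __name__ == "__main__":
    missing = check_prerequisites()
    if missing:
        print("Missing prerequisites:", ", ".join(missing))
    else:
        print("All prerequisites found.")
```

Note that this only checks the tools are installed; it does not verify that the Docker daemon is actually running.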

Installation Steps

Step 1: Clone the Repository

Start by cloning the Support Bot repository to your local machine:
git clone <repository-url>
cd support-bot

Step 2: Configure Environment Variables

Copy the example environment file and configure your settings:
cp .env.example .env
Edit the .env file and set the required variables:
# LLM Configuration
MODEL=gpt-4  # or claude-3-5-sonnet, gemini-pro, etc.

# API Keys (choose based on your LLM provider)
GEMINI_API_KEY=your-gemini-key
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key

# Database URLs
DATABASE_URL=postgresql+asyncpg://postgres:admin@localhost:5434/bot_db
DATABASE_URL_SYNC=postgresql://postgres:admin@localhost:5434/bot_db
VECTOR_DATABASE_URL=postgresql://postgres:admin@localhost:5433/vector_db
QDRANT_URL=http://localhost:6333

# Security
JWT_SECRET_KEY=your-secret-key-here
JWT_ALGORITHM=HS256
JWT_EXPIRY=30  # days
ENCRYPTION_KEY=your-encryption-key
Generate secure random values for JWT_SECRET_KEY and ENCRYPTION_KEY in production. Never commit these to version control.
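
One quick way to generate such values is with Python's standard secrets module (any cryptographically secure generator works equally well):

```python
import secrets

# 32 random bytes, URL-safe base64 encoded: suitable for JWT_SECRET_KEY.
jwt_secret = secrets.token_urlsafe(32)

# 32 random bytes as hex: a reasonable format for ENCRYPTION_KEY,
# assuming your setup accepts an arbitrary hex string (check your config).
encryption_key = secrets.token_hex(32)

print(f"JWT_SECRET_KEY={jwt_secret}")
print(f"ENCRYPTION_KEY={encryption_key}")
```

Paste the printed lines into your .env file in place of the placeholders.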

Step 3: Start Docker Services

Launch the required databases using Docker Compose:
docker-compose up -d
This starts four services:
  • PostgreSQL (port 5434): Main application database
  • pgvector (port 5433): Vector database for embeddings
  • Qdrant (port 6333): Vector search engine
  • Adminer (port 8080): Database management UI
Verify the services are running:
docker-compose ps
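
Beyond docker-compose ps, you can confirm the ports are actually accepting connections with a short script like this (illustrative only; the ports are the ones listed above):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    services = {"PostgreSQL": 5434, "pgvector": 5433, "Qdrant": 6333, "Adminer": 8080}
    for name, port in services.items():
        status = "up" if port_open("localhost", port) else "DOWN"
        print(f"{name} (port {port}): {status}")
```

A port reported as DOWN usually means the container failed to start or the host port is mapped differently in your docker-compose.yml.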

Step 4: Install Python Dependencies

Install the backend dependencies using pip:
pip install -e .
Or using uv (recommended for faster installs):
uv pip install -e .

Step 5: Run Database Migrations

Initialize the database schema using Alembic:
cd src/api/db
alembic upgrade head
cd ../../..
This creates all necessary tables for users, roles, permissions, incidents, and chat sessions.

Step 6: Create an Admin User

Create your first admin user to access the system:
python scripts/create_admin.py
Follow the prompts to set up your admin credentials.

Step 7: Start the Backend Server

Launch the FastAPI backend:
uvicorn src.api.main:app --reload --port 8000
The API will be available at http://localhost:8000. You can view the API docs at http://localhost:8000/docs.
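
To verify the backend is reachable from a script rather than a browser, a minimal check against the docs page might look like this (a sketch using only the standard library):

```python
import urllib.error
import urllib.request

def backend_is_up(url: str = "http://localhost:8000/docs", timeout: float = 3.0) -> bool:
    """Return True if the URL responds with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print("Backend up:", backend_is_up())
```

If this prints False, check the terminal running uvicorn for startup errors.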

Step 8: Install Frontend Dependencies

In a new terminal, navigate to the frontend directory and install dependencies:
cd frontend
npm install

Step 9: Configure Frontend Environment

Create the frontend environment file:
cp .env.example .env
The default configuration should work:
VITE_BE_URL=http://localhost:8000

Step 10: Start the Frontend Development Server

Launch the React development server:
npm run dev
The frontend will be available at http://localhost:5173.

Your First Chat

Now that everything is running, let’s test the system:
  1. Open your browser to http://localhost:5173
  2. Log in with your admin credentials
  3. Navigate to the chat interface
  4. Try asking: “Show me recent incidents” or “Find issues related to payment gateway”
The AI agent will search your knowledge base and provide relevant information. As you add more incidents to the system, the search becomes more powerful.

Testing the CLI (Optional)

You can also interact with the AI copilot directly from the command line:
python -m src.copilot.main
This launches an interactive chat session in your terminal. Type your questions and press Enter.
The CLI uses the same LangGraph agent as the web interface, so you’ll get consistent responses across both platforms.

Next Steps

Architecture Overview

Learn how Support Bot works under the hood

Ingest Incidents

Import your existing incident data into the knowledge base

Configure LLM Providers

Set up and switch between different AI providers

User Management

Add users, configure roles, and set up OAuth

Troubleshooting

Database Connection Issues

If you see database connection errors, verify that:
  • Docker containers are running: docker-compose ps
  • Ports 5434 and 5433 are not already in use by other services
  • Database credentials in .env match the docker-compose.yml configuration
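
You can also sanity-check that the URL in your .env points where you expect by parsing it, as in this standard-library sketch (a visual check, not a replacement for an actual connection test):

```python
from urllib.parse import urlsplit

def describe_db_url(url: str) -> dict:
    """Break a database URL into its components for a quick visual check."""
    parts = urlsplit(url)
    return {
        "scheme": parts.scheme,
        "user": parts.username,
        "host": parts.hostname,
        "port": parts.port,
        "database": parts.path.lstrip("/"),
    }

if __name__ == "__main__":
    # Same value as DATABASE_URL in the .env example above.
    info = describe_db_url("postgresql+asyncpg://postgres:admin@localhost:5434/bot_db")
    for key, value in info.items():
        print(f"{key}: {value}")
```

Compare the printed host, port, and database against the ports exposed in docker-compose.yml.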

LLM API Errors

If the AI agent isn’t responding:
  • Check that your API key is valid in the .env file
  • Verify the MODEL variable matches your chosen provider
  • Review logs in the terminal running uvicorn

Frontend Not Loading

If the frontend shows errors:
  • Ensure VITE_BE_URL points to your running backend
  • Check browser console for CORS errors
  • Verify the backend is running on port 8000
Need more help? Check the full documentation or open an issue on GitHub.