Getting Started with Support Bot
This guide will walk you through setting up Support Bot on your local machine. You’ll have a fully functional AI-powered incident resolution system running in under 10 minutes.
Prerequisites
Before you begin, ensure you have the following installed:
- Python 3.11+: Support Bot requires Python 3.11 or higher
- Docker & Docker Compose: For running PostgreSQL and Qdrant databases
- Node.js 18+: For the React frontend (with npm or yarn)
- Git: To clone the repository
Make sure Docker is running before proceeding with the setup steps.
Installation Steps
Configure Environment Variables
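Assuming the repository ships a `.env.example` template at its root (an assumption, not confirmed by this guide), this step is typically:

```shell
# Copy the template, then open .env in an editor to fill in your values
cp .env.example .env
```

The exact variable names required are not listed here; the Troubleshooting section references at least MODEL and an LLM API key, so expect those.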
Copy the example environment file and configure your settings. Edit the .env file and set the required variables.
Start Docker Services
Launch the required databases using Docker Compose. This starts four services:
- PostgreSQL (port 5434): Main application database
- pgvector (port 5433): Vector database for embeddings
- Qdrant (port 6333): Vector search engine
- Adminer (port 8080): Database management UI
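Assuming a standard docker-compose.yml at the repository root (service definitions not shown in this guide), the services above are launched with:

```shell
# Start all services in the background
docker-compose up -d

# Confirm all four containers are up
docker-compose ps
```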
Install Python Dependencies
Install the backend dependencies using pip, or with uv (recommended for faster installs).
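The exact install commands are not shown above; assuming a standard requirements.txt (the filename is an assumption), something like:

```shell
pip install -r requirements.txt

# or, with uv (faster resolver and installer):
uv pip install -r requirements.txt
```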
Run Database Migrations
Initialize the database schema using Alembic. This creates all necessary tables for users, roles, permissions, incidents, and chat sessions.
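With Alembic, applying all pending migrations is normally a single command, run from the directory containing alembic.ini:

```shell
# Apply all migrations up to the latest revision
alembic upgrade head
```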
Create an Admin User
Create your first admin user to access the system. Follow the prompts to set up your admin credentials.
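The admin-creation command is not shown in this guide; projects like this usually expose a small script or CLI entry point, for example (the script name below is hypothetical):

```shell
# Hypothetical script name — check the repository for the actual entry point
python scripts/create_admin.py
```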
Start the Backend Server
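The launch command is not shown; for a FastAPI app served by uvicorn it is typically something like the following (the module path `app.main:app` is an assumption):

```shell
# --reload restarts the server on code changes; port 8000 matches the URL below
uvicorn app.main:app --reload --port 8000
```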
Launch the FastAPI backend. The API will be available at http://localhost:8000, and the interactive API docs at http://localhost:8000/docs.
Install Frontend Dependencies
In a new terminal, navigate to the frontend directory and install dependencies:
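Assuming the frontend lives in a `frontend/` directory (the path is not confirmed by this guide):

```shell
cd frontend
npm install   # or: yarn install
```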
Configure Frontend Environment
Create the frontend environment file. The default configuration should work.
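The file contents are not shown here; given the VITE_BE_URL variable and backend port referenced in Troubleshooting, a minimal sketch (the filename and location are assumptions):

```
# frontend/.env
VITE_BE_URL=http://localhost:8000
```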
Your First Chat
Now that everything is running, let’s test the system:
- Open your browser to http://localhost:5173
- Log in with your admin credentials
- Navigate to the chat interface
- Try asking: “Show me recent incidents” or “Find issues related to payment gateway”
Testing the CLI (Optional)
You can also interact with the AI copilot directly from the command line. The CLI uses the same LangGraph agent as the web interface, so you’ll get consistent responses across both platforms.
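The CLI invocation is not shown in this guide; it might look something like this (the module name is hypothetical):

```shell
# Hypothetical entry point — check the repository for the real CLI module
python -m support_bot.cli "Show me recent incidents"
```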
Next Steps
Architecture Overview
Learn how Support Bot works under the hood
Ingest Incidents
Import your existing incident data into the knowledge base
Configure LLM Providers
Set up and switch between different AI providers
User Management
Add users, configure roles, and set up OAuth
Troubleshooting
Database Connection Issues
If you see database connection errors, verify that:
- Docker containers are running: docker-compose ps
- Ports 5434 and 5433 are not in use by other services
- Database credentials in .env match the docker-compose.yml configuration
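A quick diagnostic pass for the checklist above (lsof is one way to inspect port usage on macOS/Linux; the DATABASE variable name in .env is an assumption):

```shell
docker-compose ps          # are the containers up?
lsof -i :5434 -i :5433     # what is holding the database ports?
grep DATABASE .env         # do the credentials match docker-compose.yml?
```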
LLM API Errors
If the AI agent isn’t responding:
- Check that your API key is valid in the .env file
- Verify the MODEL variable matches your chosen provider
- Review logs in the terminal running uvicorn
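One quick way to eyeball the relevant settings (the exact API-key variable name may differ by provider):

```shell
# Show the model and any API-key entries currently configured
grep -E 'MODEL|API_KEY' .env
```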
Frontend Not Loading
If the frontend shows errors:
- Ensure VITE_BE_URL points to your running backend
- Check browser console for CORS errors
- Verify the backend is running on port 8000
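To confirm the backend is reachable on port 8000, request the docs endpoint mentioned earlier and check for a 200 status:

```shell
# Prints only the HTTP status code of the docs page
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000/docs
```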