Quickstart Guide

This guide will help you get Sentinel AI running quickly. For detailed installation instructions, see the Installation page.

Prerequisites

Before you begin, ensure you have:
  • Python 3.10+ installed
  • Node.js 18+ and npm installed
  • API keys for:
    • OpenAI (for GPT-4 models)
    • Pinecone (for vector storage)
    • Cohere (for reranking)
    • LlamaCloud (optional, for document parsing)
You’ll need access to a Linux server via SSH. For testing, you can use localhost with SSH enabled.

Installation

Step 1: Clone the Repository

git clone https://github.com/yourusername/sentinel-ai.git
cd sentinel-ai

Step 2: Set Up the Backend

Create a virtual environment and install dependencies:
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt

Step 3: Configure Environment Variables

Create a .env file in the project root with your API keys:
.env
# AI Services
OPENAI_API_KEY=sk-your-openai-key-here
PINECONE_API_KEY=your-pinecone-key-here
COHERE_API_KEY=your-cohere-key-here
LLAMA_CLOUD_API_KEY=your-llamacloud-key-here

# SSH Configuration
SSH_HOST=localhost
SSH_PORT=22
SSH_USER=your-username
SSH_PASS=your-password-or-leave-empty-for-key-auth

# Server Configuration
PORT=8000
Never commit the .env file to version control. Add it to .gitignore.
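The backend reads these values from the environment at startup. As a rough illustration of how they map onto typed settings, here is a minimal sketch; the names and structure of Sentinel AI's actual config module may differ, and this loader is not taken from its source:

```python
# Illustrative settings loader; Sentinel AI's real config code may differ.
import os
from dataclasses import dataclass

@dataclass
class Settings:
    openai_api_key: str
    ssh_host: str
    ssh_port: int
    port: int

def load_settings() -> Settings:
    # Fall back to the defaults shown in the .env example above.
    return Settings(
        openai_api_key=os.environ.get("OPENAI_API_KEY", ""),
        ssh_host=os.environ.get("SSH_HOST", "localhost"),
        ssh_port=int(os.environ.get("SSH_PORT", "22")),
        port=int(os.environ.get("PORT", "8000")),
    )
```

Note that SSH_PORT and PORT are parsed as integers, so non-numeric values in .env will fail fast at startup rather than later.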

Step 4: Set Up the Frontend

Install frontend dependencies:
cd frontend
npm install

Running Sentinel AI

Step 1: Start the Backend Server

From the project root directory:
python run_server.py
The API server will start on http://localhost:8000. You should see:
INFO:     Started server process
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000
The knowledge base will initialize in the background on first startup. This may take a few moments.

Step 2: Start the Frontend Dashboard

In a new terminal, navigate to the frontend directory:
cd frontend
npm run dev
Open your browser to http://localhost:3000 to access the dashboard.

Step 3: Run Your First Analysis

In the Sentinel AI dashboard:
  1. Click the “Start Agent” button
  2. The agent will begin monitoring configured services
  3. Watch the live terminal for agent activity
  4. If issues are found, you’ll see the diagnosis and remediation plan
  5. Approve or reject critical actions as needed

Testing the Agent

To test Sentinel AI’s capabilities, you can simulate a service failure:
# Stop nginx to trigger detection
sudo systemctl stop nginx

# The agent will detect this and propose a fix
The agent checks services every 30 seconds by default. You can adjust this in src/core/config.py by changing MONITOR_INTERVAL.
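Conceptually, the monitoring cycle is a timed loop around the service checks. The sketch below shows the idea; only MONITOR_INTERVAL and its default of 30 seconds come from the guide, and the loop itself is illustrative rather than Sentinel AI's actual scheduler:

```python
# Illustrative monitor loop; the real agent's scheduler may differ.
import time

MONITOR_INTERVAL = 30  # seconds between checks; default from src/core/config.py

def monitor(check_services, interval=MONITOR_INTERVAL, cycles=None):
    """Run check_services() every `interval` seconds; cycles=None loops forever."""
    n = 0
    while cycles is None or n < cycles:
        check_services()
        n += 1
        if cycles is None or n < cycles:
            time.sleep(interval)
```

Lowering the interval speeds up failure detection at the cost of more frequent SSH checks against the target server.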

API Endpoints

Sentinel AI exposes the following key endpoints:
Endpoint         Method           Description
/                GET              API health check and status
/status          GET              Current status of all monitored services
/agent/run       POST             Start the agent analysis cycle
/agent/stop      POST             Stop the running agent
/agent/approve   POST             Approve or reject pending actions
/agent/state     GET              Get current agent state
/services        GET/POST/DELETE  Manage monitored services
/chat            POST             Chat with the agent about infrastructure
/ws/logs         WebSocket        Real-time agent logs stream
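These endpoints are easy to script against. The sketch below fetches /status and flags anything not running; the JSON shape (a flat mapping of service name to state string) is an assumption for illustration, so check the API Reference for the actual schema:

```python
# Sketch: poll /status and report down services.
# The payload shape used here is assumed, not taken from the API reference.
import json
from urllib.request import urlopen

def down_services(status: dict) -> list:
    """Return names of services whose reported state is not 'running'."""
    return [name for name, state in status.items() if state != "running"]

def fetch_status(base_url="http://localhost:8000") -> dict:
    # Requires the backend from the steps above to be running.
    with urlopen(f"{base_url}/status") as resp:
        return json.loads(resp.read())
```

For example, down_services({"nginx": "stopped", "ssh": "running"}) returns ["nginx"].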

Using the Chat Feature

You can query the agent’s knowledge base through the chat interface:
curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"query": "How do I configure Nginx reverse proxy?"}'
The agent will retrieve relevant information from indexed technical documentation and stream the response.

Managing Services

By default, Sentinel AI monitors:
  • nginx: Web server
  • postgresql: Database server
  • ssh: System access

Add a Custom Service

curl -X POST http://localhost:8000/services \
  -H "Content-Type: application/json" \
  -d '{
    "name": "docker",
    "check_command": "systemctl status docker",
    "running_indicator": "active (running)",
    "type": "container_runtime"
  }'
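If you register services from a script rather than by hand, a small validator can catch malformed definitions before they reach POST /services. The required field names mirror the curl example above; the validation rules themselves are illustrative, not Sentinel AI's own checks:

```python
# Sketch: sanity-check a service definition before POSTing it to /services.
# Field names come from the curl example; the checks are illustrative.
REQUIRED_FIELDS = {"name", "check_command", "running_indicator", "type"}

def validate_service(defn: dict) -> list:
    """Return a list of problems; an empty list means the definition looks OK."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - defn.keys())]
    if not str(defn.get("name", "")).strip():
        problems.append("name must be non-empty")
    return problems
```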

Remove a Service

curl -X DELETE http://localhost:8000/services/docker

Next Steps

Installation Guide

Learn about detailed installation, configuration, and deployment options.

Configuration

Customize monitoring intervals, retry limits, and service definitions.

API Reference

Explore all available API endpoints and WebSocket events.

Agent Workflow

Deep dive into the agent’s decision graph and RAG pipeline.

Troubleshooting

If the backend fails to start, ensure all environment variables are correctly set in .env.

Common Issues

Knowledge base not initializing:
  • Verify Pinecone API key is valid
  • Check that the index sentinel-ai-index exists in your Pinecone project
  • Wait 30-60 seconds for initial vector store connection
SSH connection fails:
  • Verify SSH credentials in .env
  • Ensure the target server has SSH enabled
  • Test SSH connection manually: ssh user@host -p port
Frontend cannot connect to backend:
  • Ensure backend is running on port 8000
  • Check CORS settings in src/api/server.py
  • Verify no firewall is blocking localhost connections
Agent not detecting service failures:
  • Check service names match your system (e.g., nginx vs nginx.service)
  • Verify check commands work manually via SSH
  • Review agent logs in the dashboard terminal
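Several of the issues above reduce to "is anything listening on that port?" A quick TCP probe can distinguish a down backend (port 8000) or disabled SSH (port 22) from an application-level problem. This helper is a generic connectivity check, not part of Sentinel AI:

```python
# Sketch: probe a TCP port to narrow down the connectivity issues above.
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, port_open("localhost", 8000) tells you whether the backend is reachable at all before you start digging into CORS or firewall settings.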
