Sentinel AI uses environment variables for configuration. Create a .env file in the root directory to configure the agent.
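A .env file is a plain key=value file that is loaded into the process environment at startup (typically via a library such as python-dotenv). The parsing it performs can be sketched in plain Python; the `parse_env` helper here is illustrative, not part of Sentinel AI:

```python
import os

def parse_env(text: str) -> dict:
    """Parse key=value lines from .env content, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"').strip("'")
    return env

sample = """
# SSH Configuration
SSH_HOST=192.168.1.100
SSH_PORT=22
"""
config = parse_env(sample)
os.environ.update(config)
print(config["SSH_HOST"])  # → 192.168.1.100
```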

Quick Setup

1. Create .env file
   Create a .env file in the root directory of your Sentinel AI installation.

2. Add required variables
   Configure the required API keys and settings below.

3. Restart Sentinel AI
   Restart the agent to apply the new configuration.

API Keys

OpenAI Configuration

OPENAI_API_KEY
string
required
OpenAI API key for GPT-4o model access. Used for the main reasoning engine and embeddings. Used in: src/core/config.py:10
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
Get your API key from OpenAI Platform.
Sentinel AI uses gpt-4o as the default model with temperature set to 0 for deterministic responses.

Pinecone Configuration

PINECONE_API_KEY
string
required
Pinecone API key for vector database access. Required for the knowledge base and RAG functionality. Used in: src/core/config.py:14
PINECONE_API_KEY = os.getenv("PINECONE_API_KEY")
The default index name is sentinel-ai-index with 1536 dimensions for OpenAI embeddings.
Sentinel AI automatically creates the Pinecone index on first run if it doesn’t exist, using AWS us-east-1 region with cosine similarity metric.

LlamaCloud Configuration

LLAMA_CLOUD_API_KEY
string
required
LlamaCloud API key for PDF parsing with LlamaParse. Required for ingesting technical documentation. Used in: src/core/config.py:18
LLAMA_CLOUD_API_KEY = os.getenv("LLAMA_CLOUD_API_KEY")
Used by the knowledge base to parse PDF manuals into markdown format.

Cohere Configuration

COHERE_API_KEY
string
required
Cohere API key for semantic reranking. Improves retrieval accuracy by reranking top-k results. Used in: src/core/config.py:23
COHERE_API_KEY = os.getenv("COHERE_API_KEY")
The reranker is configured to return top 5 results after reranking.

SSH Configuration

Sentinel AI connects to remote servers via SSH to execute commands and monitor services.
SSH_HOST
string
default:"localhost"
Target SSH server hostname or IP address. Used in: src/core/config.py:25
SSH_HOST = os.getenv("SSH_HOST", "localhost")
SSH_PORT
integer
default:"2222"
SSH server port number. Used in: src/core/config.py:26
SSH_PORT = int(os.getenv("SSH_PORT", 2222))
SSH_USER
string
default:"sentinel"
SSH username for authentication. Used in: src/core/config.py:27
SSH_USER = os.getenv("SSH_USER", "sentinel")
SSH_PASS
string
SSH password for authentication. Optional if using key-based authentication. Used in: src/core/config.py:28
SSH_PASS = os.getenv("SSH_PASS")
For production environments, use key-based authentication instead of passwords. See SSH Setup for details.

System Configuration

These variables control Sentinel AI’s monitoring and execution behavior.
MONITOR_INTERVAL
integer
default:"30"
Interval in seconds between service health checks. Defined in: src/core/config.py:35
MONITOR_INTERVAL = 30
Lower values provide faster detection but increase system load.
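The interval drives a polling loop along these lines; `check_service` stands in for the real health check, and the `max_cycles` escape hatch exists only so the sketch can run once without sleeping forever:

```python
import time

MONITOR_INTERVAL = 30  # seconds between health checks

def monitor(services, check_service, interval=MONITOR_INTERVAL, max_cycles=None):
    """Poll every service, then sleep for the configured interval (illustrative loop)."""
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        statuses = {name: check_service(name) for name in services}
        cycles += 1
        if max_cycles is not None and cycles >= max_cycles:
            return statuses
        time.sleep(interval)

# Single cycle with a stub check, so nothing actually sleeps:
result = monitor(["nginx"], lambda name: "ok", interval=0, max_cycles=1)
print(result)  # → {'nginx': 'ok'}
```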
MAX_RETRIES
integer
default:"5"
Maximum number of retry attempts for failed operations. Defined in: src/core/config.py:36
MAX_RETRIES = 5
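A retry wrapper honoring this limit can be sketched as follows; the linear backoff policy is an assumption, not something this page specifies:

```python
import time

MAX_RETRIES = 5

def with_retries(operation, max_retries=MAX_RETRIES, delay=0.0):
    """Retry a failing operation up to max_retries times (sketch; backoff is assumed)."""
    last_error = None
    for attempt in range(1, max_retries + 1):
        try:
            return operation()
        except Exception as exc:
            last_error = exc
            time.sleep(delay * attempt)  # linear backoff between attempts
    raise last_error

# Succeeds on the third attempt:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(with_retries(flaky))  # → ok
```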

Directory Configuration

Sentinel AI uses these directories for data storage. They are created automatically if they don’t exist.
DATA_DIR = "data"
MANUALS_DIR = os.path.join(DATA_DIR, "manuals")
MEMORY_DIR = os.path.join(DATA_DIR, "memory")
SERVICES_FILE = os.path.join(DATA_DIR, "services.json")
Directory       Purpose                        Path
DATA_DIR        Root data directory            data/
MANUALS_DIR     Technical documentation PDFs   data/manuals/
MEMORY_DIR      Agent memory storage           data/memory/
SERVICES_FILE   Service configuration          data/services.json
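The automatic creation mentioned above amounts to something like the following sketch; a temporary directory stands in for the installation root so the snippet is self-contained:

```python
import os
import tempfile

root = tempfile.mkdtemp()  # stand-in for the Sentinel AI installation root

DATA_DIR = os.path.join(root, "data")
MANUALS_DIR = os.path.join(DATA_DIR, "manuals")
MEMORY_DIR = os.path.join(DATA_DIR, "memory")

for path in (DATA_DIR, MANUALS_DIR, MEMORY_DIR):
    os.makedirs(path, exist_ok=True)  # no-op if the directory already exists
```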

Example Configuration

# OpenAI
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxx

# Pinecone
PINECONE_API_KEY=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

# LlamaCloud
LLAMA_CLOUD_API_KEY=llx_xxxxxxxxxxxxx

# Cohere
COHERE_API_KEY=xxxxxxxxxxxxx

# SSH Configuration
SSH_HOST=192.168.1.100
SSH_PORT=22
SSH_USER=admin
SSH_PASS=your_secure_password

Model Configuration

These are hardcoded in config.py but can be customized if needed:
MODEL_NAME = "gpt-4o"
TEMPERATURE = 0
EMBED_MODEL = "text-embedding-3-small"
EMBEDDING_DIM = 1536
PINECONE_INDEX = "sentinel-ai-index"
  • MODEL_NAME: OpenAI model for reasoning (default: gpt-4o)
  • TEMPERATURE: Sampling temperature for deterministic responses (default: 0)
  • EMBED_MODEL: OpenAI embedding model (default: text-embedding-3-small)
  • EMBEDDING_DIM: Embedding dimensions for Pinecone (default: 1536)
  • PINECONE_INDEX: Pinecone index name (default: sentinel-ai-index)
To customize these values, modify src/core/config.py directly.

Validation

After configuring your environment variables, validate the setup:
# Check if all required API keys are set
python -c "from src.core.config import config; print('OpenAI:', bool(config.OPENAI_API_KEY)); print('Pinecone:', bool(config.PINECONE_API_KEY)); print('Cohere:', bool(config.COHERE_API_KEY))"

# Test SSH connection
python -c "from src.tools.ssh import SSHClient; from src.core.config import config; client = SSHClient(config.SSH_HOST, config.SSH_USER, config.SSH_PASS, port=config.SSH_PORT); client.connect(); print('SSH connection successful')"

Security Best Practices

Never commit your .env file to version control. Add it to .gitignore immediately.

Use Environment-Specific Files

Create separate .env.dev, .env.staging, and .env.prod files for different environments.

Rotate Keys Regularly

Rotate API keys and SSH credentials periodically, especially after team member changes.

Restrict SSH Access

Use SSH keys instead of passwords, and limit SSH user permissions to only what’s needed.

Monitor API Usage

Set up usage alerts for your API keys to detect unauthorized access or unexpected usage.

Next Steps

SSH Setup

Configure SSH authentication and permissions

Services Configuration

Define services to monitor and manage

AI Models

Customize AI model settings and behavior
