Installation
This guide covers installing Aurora in detail: development setup, production deployment with Docker Compose, and deployment to Kubernetes. For a quick local setup, see the Quickstart guide instead.
Prerequisites
Required
Docker 20.10+ and Docker Compose 2.0+
Git for cloning the repository
LLM API Key from OpenRouter, OpenAI, Anthropic, or Google AI
Optional
Kubernetes 1.25+ with Helm 3.0+ (for K8s deployment)
Docker Buildx (for multi-platform builds)
Cloud Provider Accounts (AWS, GCP, Azure - only if using cloud connectors)
System Requirements
RAM: 4GB minimum, 8GB+ recommended
CPU: 2+ cores recommended
Disk: 10GB+ free space for Docker volumes
Ports: 3000, 5080, 5006, 5432, 6379, 8080, 8200 must be available
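The port requirement above can be checked up front. A minimal sketch, assuming lsof is installed:

```shell
# List any required port that is already bound.
# Prints nothing when all ports are free.
used_ports() {
  for port in 3000 5080 5006 5432 6379 8080 8200; do
    lsof -i ":$port" >/dev/null 2>&1 && echo "$port"
  done
  return 0  # the last lsof check may "fail" (port free); that is not an error
}

# Usage: used_ports
```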
Installation Methods
Prebuilt Images: fastest method; pull images from GHCR
Build from Source: for development or custom modifications
Kubernetes: production deployment with Helm
Method 1: Prebuilt Images (Recommended)
Use prebuilt images from GitHub Container Registry for the fastest setup.
Clone and initialize
git clone https://github.com/arvo-ai/aurora.git
cd aurora
make init
Configure environment
Edit .env to add your LLM API key. Required configuration:
# LLM Provider (at least one required)
OPENROUTER_API_KEY=sk-or-v1-your-key-here
LLM_PROVIDER_MODE=openrouter
# Or use OpenAI
# OPENAI_API_KEY=sk-your-key-here
# LLM_PROVIDER_MODE=openai
# Agent configuration (optional)
AGENT_RECURSION_LIMIT=240
RCA_OPTIMIZE_COSTS=false
Start with prebuilt images:
make prod-prebuilt
Configure Vault token
Retrieve the Vault root token:
docker logs vault-init 2>&1 | grep "Root Token:"
Add it to .env:
VAULT_TOKEN=hvs.your-vault-token-here
Restart the services:
make down && make prod-prebuilt
Prebuilt images are production-ready and tested. They’re rebuilt on every commit to the main branch.
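The token step above can also be scripted. A sketch, assuming the "Root Token:" log line format shown earlier (hvs.-prefixed tokens):

```shell
# Extract the first Vault root token (hvs.-prefixed) from log output on stdin.
extract_vault_token() {
  grep -o 'hvs\.[A-Za-z0-9._-]*' | head -n 1
}

# Usage against the real container:
#   token=$(docker logs vault-init 2>&1 | extract_vault_token)
#   echo "VAULT_TOKEN=$token" >> .env
#   make down && make prod-prebuilt
```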
Method 2: Build from Source
Build Aurora locally for development or to test custom modifications.
Development Mode
Production Mode (Local Build)
# Clone repository
git clone https://github.com/arvo-ai/aurora.git
cd aurora
# Initialize environment
make init
# Add LLM API key to .env
nano .env
# Build and start in development mode
make dev
# Configure Vault token (see step above)
docker logs vault-init 2>&1 | grep "Root Token:"
# Add VAULT_TOKEN to .env, then:
make down && make dev
For a production build from source, run make prod-local instead of make dev.
Development vs Production
| Feature          | Development (make dev) | Production (make prod-local) |
| ---------------- | ---------------------- | ---------------------------- |
| Hot reload       | Yes                    | No                           |
| Source maps      | Yes                    | No                           |
| Optimized builds | No                     | Yes                          |
| Volume mounts    | Code mounted           | No mounts                    |
| Use case         | Local development      | Testing production builds    |
Development mode mounts source directories for hot reload. Production mode uses optimized builds without source mounts.
Method 3: Kubernetes Deployment
Deploy Aurora to Kubernetes using Helm charts.
Prepare configuration
Copy the values template:
cd deploy/helm/aurora
cp values.yaml values.generated.yaml
Edit values.generated.yaml to configure:
image:
  registry: your-registry.example.com
  tag: latest
config:
  OPENROUTER_API_KEY: "sk-or-v1-your-key-here"
  LLM_PROVIDER_MODE: "openrouter"
  # Frontend URLs
  FRONTEND_URL: "https://aurora.example.com"
  NEXT_PUBLIC_BACKEND_URL: "https://api.aurora.example.com"
  # Database
  POSTGRES_PASSWORD: "generate-secure-password-here"
  # Optional: Cloud connectors
  # AWS_ACCESS_KEY_ID: "..."
  # AWS_SECRET_ACCESS_KEY: "..."
Build and push images
Build images for your registry:
make deploy-build
This command:
Reads configuration from values.generated.yaml
Builds images for linux/amd64
Tags with current git SHA
Pushes to your configured registry
Updates values.generated.yaml with the new tag
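The tagging scheme above can be sketched as follows; the helper and build invocation are illustrative, not the Makefile's actual recipe:

```shell
# Compose a registry/name:tag image reference the way the build step does.
image_ref() {
  printf '%s/%s:%s' "$1" "$2" "$3"
}

# Usage (sketch):
#   tag=$(git rev-parse --short HEAD)
#   docker buildx build --platform linux/amd64 \
#     -t "$(image_ref your-registry.example.com aurora-server "$tag")" --push .
```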
Deploy with Helm
Run make deploy, or manually:
helm upgrade --install aurora-oss ./deploy/helm/aurora \
--namespace aurora \
--create-namespace \
--reset-values \
-f deploy/helm/aurora/values.generated.yaml
Initialize Vault (first time only)
After deployment, initialize Vault:
kubectl exec -n aurora deployment/vault -- vault operator init
Save the unseal keys and root token securely. Update the Kubernetes secret:
kubectl create secret generic vault-token \
  --from-literal=token=hvs.your-vault-token \
  -n aurora
Restart Aurora services to pick up the token:
kubectl rollout restart deployment/aurora-server -n aurora
kubectl rollout restart deployment/chatbot -n aurora
Verify deployment
kubectl get pods -n aurora
kubectl logs -f deployment/aurora-server -n aurora
Kubernetes deployment requires additional configuration for:
Ingress controllers (nginx, Traefik, etc.)
TLS certificates (cert-manager recommended)
Persistent volume provisioning
Network policies
See the Kubernetes Deployment guide for details.
Environment Configuration
Core Variables
Required environment variables in .env:
# Environment
AURORA_ENV=dev
# Database
POSTGRES_USER=aurora
POSTGRES_PASSWORD=<generated-by-make-init>
POSTGRES_DB=aurora_db
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
# Redis
REDIS_URL=redis://redis:6379/0
# Object Storage (SeaweedFS defaults)
STORAGE_BUCKET=aurora-storage
STORAGE_ENDPOINT_URL=http://seaweedfs-filer:8333
STORAGE_ACCESS_KEY=admin
STORAGE_SECRET_KEY=admin
STORAGE_REGION=us-east-1
# Security
FLASK_SECRET_KEY=<generated-by-make-init>
AUTH_SECRET=<generated-by-make-init>
# URLs
FRONTEND_URL=http://localhost:3000
BACKEND_URL=http://aurora-server:5080
NEXT_PUBLIC_BACKEND_URL=http://localhost:5080
NEXT_PUBLIC_WEBSOCKET_URL=ws://localhost:5006
# Vault
VAULT_ADDR=http://vault:8200
VAULT_TOKEN=<get-from-vault-init-logs>
VAULT_KV_MOUNT=aurora
# LLM Provider (at least one required)
OPENROUTER_API_KEY=<your-key-here>
LLM_PROVIDER_MODE=openrouter
AGENT_RECURSION_LIMIT=240
# Web Search
SEARXNG_URL=http://searxng:8080
SEARXNG_SECRET=<generated-by-make-init>
# Weaviate
WEAVIATE_HOST=weaviate
WEAVIATE_PORT=8080
# Memgraph
MEMGRAPH_HOST=memgraph
MEMGRAPH_PORT=7687
MEMGRAPH_PASSWORD=CHANGE_ME
The make init command automatically generates secure values for POSTGRES_PASSWORD, FLASK_SECRET_KEY, AUTH_SECRET, and SEARXNG_SECRET.
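If you ever need to regenerate one of these secrets by hand, a sketch assuming openssl is available (the exact recipe make init uses may differ):

```shell
# Generate a 32-byte random secret, hex-encoded (64 hex characters).
gen_secret() {
  openssl rand -hex 32
}

# Usage, e.g. to rotate one value:
#   echo "AUTH_SECRET=$(gen_secret)"
```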
Optional Integrations
Add these variables to enable optional features:
# AWS
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
AWS_DEFAULT_REGION=us-east-1
# GCP
CLIENT_ID=your-oauth-client-id
CLIENT_SECRET=your-oauth-client-secret
# GitHub OAuth
GH_OAUTH_CLIENT_ID=your-github-oauth-client-id
GH_OAUTH_CLIENT_SECRET=your-github-oauth-client-secret
# Slack
NEXT_PUBLIC_ENABLE_SLACK=true
SLACK_CLIENT_ID=your-slack-client-id
SLACK_CLIENT_SECRET=your-slack-client-secret
SLACK_SIGNING_SECRET=your-slack-signing-secret
# PagerDuty
NEXT_PUBLIC_ENABLE_PAGERDUTY_OAUTH=true
PAGERDUTY_CLIENT_ID=your-pagerduty-client-id
PAGERDUTY_CLIENT_SECRET=your-pagerduty-client-secret
See the Environment Variables guide for all available options.
Makefile Commands
Aurora provides a comprehensive Makefile for common operations:
Development
make init # First-time setup (generates secrets)
make dev # Build and start dev environment
make down # Stop all containers
make logs # View logs (all services)
make logs <service> # View logs for specific service
make rebuild-server # Rebuild aurora-server only
make clean # Stop containers and remove volumes
make nuke # Full cleanup (containers, volumes, images)
Production
make prod-prebuilt # Pull and start prebuilt images
make prod-local # Build from source and start
make prod-logs # View production logs
make prod-clean # Stop and remove production volumes
make prod-nuke # Full production cleanup
Kubernetes
make deploy-build # Build and push images for K8s
make deploy # Deploy with Helm
The make down command works for both development and production deployments.
Verify Installation
Check Services
Verify all services are running:
docker ps
You should see containers for:
aurora-server (Flask API)
aurora_celery-worker-1 (Background tasks)
aurora_celery-beat-1 (Scheduled tasks)
aurora_chatbot-1 (WebSocket server)
aurora_frontend-1 (Next.js UI)
aurora-postgres (Database)
weaviate (Vector database)
redis (Message queue)
aurora-vault (Secrets management)
aurora-seaweedfs-* (Object storage)
aurora-memgraph (Graph database)
Test Endpoints
# Test backend API
curl http://localhost:5080/health
# Test frontend
curl http://localhost:3000
# Test Vault
curl http://localhost:8200/v1/sys/health
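Right after startup, these endpoints may briefly fail while services finish booting. A small retry wrapper (a sketch) makes the checks above more reliable:

```shell
# Retry a command up to 10 times, one second apart; fail if it never succeeds.
wait_healthy() {
  i=0
  while [ "$i" -lt 10 ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Usage: wait_healthy curl -fsS http://localhost:5080/health
```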
View Logs
# All services
make logs
# Specific service
make logs aurora-server
make logs chatbot
make logs frontend
Troubleshooting
Build fails with dependency errors
Clear the Docker build cache with docker builder prune, then rebuild with make dev.
Port conflicts
Check which ports are in use:
lsof -i :3000 # Frontend
lsof -i :5080 # Backend
lsof -i :5432 # PostgreSQL
Stop the conflicting services or change the ports in .env.
Services crash on startup
Check Docker resource limits, and increase the memory allocation to at least 4GB (8GB recommended).
Reset the database:
make down
docker volume rm aurora_postgres-data
make dev
Weaviate in particular may need more memory; check its logs with make logs weaviate, then increase Docker's memory allocation or disable vector search temporarily.
Next Steps
Configuration Configure LLM providers and adjust agent settings
Cloud Connectors Add AWS, GCP, Azure integrations
Production Deployment Deploy Aurora to production with best practices
Architecture Understand Aurora’s architecture and components
Upgrading
To upgrade to a newer version:
Prebuilt Images
Build from Source
# Stop current version
make down
# Pull latest images
make prod-prebuilt
# Or pin a specific version
make prod-prebuilt VERSION=v1.2.3
If you built from source, pull the latest code with git pull and rerun make prod-local.
Always back up your data before upgrading:
docker exec aurora-postgres pg_dump -U aurora aurora_db > backup.sql
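To keep multiple backups around, a timestamped filename helps. A sketch around the pg_dump command above (the restore command via psql is a standard PostgreSQL pattern, not Aurora-specific):

```shell
# Produce a timestamped backup filename, e.g. aurora-backup-20240101-120000.sql
backup_name() {
  printf 'aurora-backup-%s.sql' "$(date +%Y%m%d-%H%M%S)"
}

# Usage:
#   docker exec aurora-postgres pg_dump -U aurora aurora_db > "$(backup_name)"
# Restore later with:
#   docker exec -i aurora-postgres psql -U aurora aurora_db < backup.sql
```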
Uninstalling
To completely remove Aurora:
# Stop all containers and remove volumes
make nuke
# Remove images
docker rmi $(docker images -q 'aurora_*')
docker rmi $(docker images -q 'ghcr.io/arvo-ai/aurora-*')
# Remove repository
cd ..
rm -rf aurora