Quickstart
This guide will help you get Aurora running locally using prebuilt images from GitHub Container Registry (GHCR). This is the fastest way to evaluate Aurora.

This quickstart uses prebuilt images. For development or to build from source, see the Installation guide.
Prerequisites
Before you begin, ensure you have:

- Docker and Docker Compose installed
- An LLM API key from one of:
  - OpenRouter (recommended - supports multiple models)
  - OpenAI
  - Anthropic
  - Google AI Studio
Aurora works without any cloud provider accounts! The LLM API key is the only external requirement.
Installation Steps
Initialize configuration
Run the initialization script to generate secure secrets automatically. This command:

- Copies `.env.example` to `.env`
- Generates secure random secrets for `POSTGRES_PASSWORD`, `FLASK_SECRET_KEY`, `AUTH_SECRET`, and `SEARXNG_SECRET`
- Prepares your environment for first launch
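The initialization step is the `make init` target referenced just below:

```shell
# Run from the repository root; generates .env with random secrets
make init
```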
The `make init` command is idempotent - it won’t overwrite existing secrets if you run it again.

Add your LLM API key
Edit `.env` and add your LLM API key. Add one of these keys:
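A sketch of the relevant `.env` entries; `LLM_PROVIDER_MODE` is confirmed by this guide, but the exact key variable names are assumptions, so check `.env.example` for the canonical names:

```shell
# .env: set ONE provider key plus the matching provider mode
LLM_PROVIDER_MODE=openrouter        # or: openai, anthropic, google
OPENROUTER_API_KEY=sk-or-...        # variable name assumed; see .env.example
# OPENAI_API_KEY=...
# ANTHROPIC_API_KEY=...
# GOOGLE_API_KEY=...
```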
If using OpenRouter, set `LLM_PROVIDER_MODE=openrouter`. For OpenAI, use `LLM_PROVIDER_MODE=openai`.

Start Aurora with prebuilt images
Pull and start Aurora using prebuilt images from GHCR. This command:

- Pulls the latest Aurora images from GitHub Container Registry
- Tags them for local use
- Starts all services with docker-compose

To pin a specific version instead of using `latest`, see the available versions listed at github.com/orgs/Arvo-AI/packages.
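The start command, matching the restart instructions in Troubleshooting, is the `make prod-prebuilt` target:

```shell
# Pull prebuilt images from GHCR, tag them locally, and start all services
make prod-prebuilt
```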
First launch may take 2-3 minutes as Docker pulls images and initializes services.
Get the Vault root token
After services start, retrieve the Vault root token from the initialization logs. Copy the root token value and add it to your `.env` file.
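A sketch of this step, assuming `make logs` accepts a service name (the `vault-init` service is referenced in Troubleshooting; the token value below is a placeholder):

```shell
# Find the root token in the vault-init logs
make logs vault-init

# Then add the token to .env, e.g.:
# VAULT_TOKEN=<root-token-from-logs>
```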
Access Aurora
That’s it! Aurora is now running. Access the web interface:

Service Endpoints
- Frontend: http://localhost:3000
- Backend API: http://localhost:5080
- Chatbot WebSocket: ws://localhost:5006
- Vault UI: http://localhost:8200
- SeaweedFS File Browser: http://localhost:8888
- Memgraph Lab: http://localhost:3001
Verify Installation
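A hedged way to verify, assuming a standard docker-compose deployment and the endpoints listed above:

```shell
# All services should report an "Up" state
docker compose ps

# Spot-check the HTTP endpoints
curl -sI http://localhost:3000 | head -n 1
curl -sI http://localhost:5080 | head -n 1
```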
Check that all services are running.

Next Steps
Configuration
Configure LLM providers, cloud connectors, and integrations
Cloud Connectors
Add AWS, GCP, Azure, or other cloud provider integrations
Integrations
Set up Slack, PagerDuty, GitHub, and other third-party services
Architecture
Learn about Aurora’s architecture and components
Build from Source (Alternative)
You can build images locally instead of using prebuilt images. This is useful for:

- Testing feature branches
- Local development
- Custom modifications
Stopping Aurora
To stop all services, run `make down`.

Troubleshooting
Services fail to start
Check logs for errors. Common issues:
- Missing LLM API key in `.env`
- Port conflicts (3000, 5080, 5432, 6379, 8080 must be available)
- Insufficient Docker resources (allocate at least 4GB RAM)
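Logs can be inspected with the project's Makefile target; `make logs postgres` appears later in this guide, and `make logs` without a service name is an assumption:

```shell
# Tail logs for all services
make logs

# Or for a single service
make logs postgres
```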
Vault connection errors
Ensure you’ve:
- Retrieved the Vault root token from the `vault-init` logs
- Added `VAULT_TOKEN` to `.env`
- Restarted services with `make down && make prod-prebuilt`
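The full recovery sequence, assuming the targets listed above (the `vault-init` log command is a sketch):

```shell
make logs vault-init              # locate the root token
# add VAULT_TOKEN=<token> to .env, then restart:
make down && make prod-prebuilt
```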
LLM requests fail
Verify:
- Your API key is valid and has credits
- `LLM_PROVIDER_MODE` matches your provider (openrouter, openai, anthropic, google)
- Network connectivity to the LLM provider
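A quick, hedged connectivity check (OpenRouter shown; swap in your provider's API base URL):

```shell
# Prints the HTTP status code; curl prints 000 if the host is unreachable
curl -s -o /dev/null -w "%{http_code}\n" https://openrouter.ai/api/v1/models
```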
Database connection errors
Wait for PostgreSQL to fully initialize (check logs with `make logs postgres`). If issues persist:
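A sketch of a recovery path, reusing the restart targets shown in the Vault section:

```shell
# Clean restart; existing secrets in .env are preserved
make down && make prod-prebuilt
```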