
Quickstart

This guide will help you get Aurora running locally using prebuilt images from GitHub Container Registry (GHCR). This is the fastest way to evaluate Aurora.
This quickstart uses prebuilt images. For development or to build from source, see the Installation guide.

Prerequisites

Before you begin, ensure you have:
  • Docker and Docker Compose
  • Git and make
  • At least 4GB of RAM allocated to Docker
  • An API key for a supported LLM provider (OpenRouter, OpenAI, Anthropic, or Google AI)
Aurora works without any cloud provider accounts! The LLM API key is the only external requirement.

Installation Steps

Step 1: Clone the repository

git clone https://github.com/arvo-ai/aurora.git
cd aurora

Step 2: Initialize configuration

Run the initialization script to generate secure secrets automatically:
make init
This command:
  • Copies .env.example to .env
  • Generates secure random secrets for POSTGRES_PASSWORD, FLASK_SECRET_KEY, AUTH_SECRET, and SEARXNG_SECRET
  • Prepares your environment for first launch
The make init command is idempotent - it won’t overwrite existing secrets if you run it again.
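If make isn't available on your system, the same setup can be done by hand. A minimal sketch, assuming the variable names listed above (the actual init script may do more):

```shell
# Manual alternative to `make init`: copy the template once, then
# generate a 32-byte hex secret for each variable that is still empty.
[ -f .env ] || cp .env.example .env
for var in POSTGRES_PASSWORD FLASK_SECRET_KEY AUTH_SECRET SEARXNG_SECRET; do
  # never overwrite a variable that already has a value (idempotent, like make init)
  grep -q "^${var}=." .env || echo "${var}=$(openssl rand -hex 32)" >> .env
done
```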

Step 3: Add your LLM API key

Edit .env and add your LLM API key:
nano .env  # or use your preferred editor
Add one of these keys:
.env
# OpenRouter (recommended - supports multiple models)
OPENROUTER_API_KEY=sk-or-v1-...

# Or OpenAI
OPENAI_API_KEY=sk-...

# Or Anthropic
ANTHROPIC_API_KEY=sk-ant-...

# Or Google AI
GOOGLE_AI_API_KEY=...
Set LLM_PROVIDER_MODE to match your provider: openrouter, openai, anthropic, or google.
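To confirm a key actually made it into the file, a quick sanity check (assuming the variable names shown above):

```shell
# Succeeds if at least one LLM provider key has a non-empty value in .env
if grep -qE '^(OPENROUTER|OPENAI|ANTHROPIC|GOOGLE_AI)_API_KEY=.+' .env; then
  echo "LLM API key found"
else
  echo "No LLM API key set in .env" >&2
fi
```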

Step 4: Start Aurora with prebuilt images

Pull and start Aurora using prebuilt images from GHCR:
make prod-prebuilt
This command:
  • Pulls the latest Aurora images from GitHub Container Registry
  • Tags them for local use
  • Starts all services with docker-compose
First launch may take 2-3 minutes as Docker pulls images and initializes services.
To pin a specific version instead of using latest:
make prod-prebuilt VERSION=v1.2.3
Available versions are listed at github.com/orgs/Arvo-AI/packages.

Step 5: Get the Vault root token

After services start, retrieve the Vault root token from the initialization logs:
docker logs vault-init 2>&1 | grep "Root Token:"
You’ll see output like:
===================================================
Vault initialization complete!
Root Token: hvs.xxxxxxxxxxxxxxxxxxxxxxxxxxxx
IMPORTANT: Set VAULT_TOKEN=hvs.xxxxxxxxxxxxxxxxxxxxxxxxxxxx in your .env file
           to connect Aurora services to Vault.
===================================================
Copy the root token value and add it to your .env file:
nano .env
Add:
.env
VAULT_TOKEN=hvs.xxxxxxxxxxxxxxxxxxxxxxxxxxxx
The Vault token is required for Aurora to store and retrieve secrets securely. Without it, cloud connector credentials won’t be saved.
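The copy step can also be scripted. A sketch that assumes the vault-init logs keep the exact `Root Token:` line format shown above:

```shell
# Extract the token from the vault-init logs and append it to .env
# (only if VAULT_TOKEN isn't already set there)
VAULT_TOKEN=$(docker logs vault-init 2>&1 | awk '/Root Token:/ {print $NF; exit}')
grep -q '^VAULT_TOKEN=' .env || echo "VAULT_TOKEN=${VAULT_TOKEN}" >> .env
```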

Step 6: Restart Aurora to load the Vault token

Stop and restart Aurora to pick up the Vault token:
make down
make prod-prebuilt
This restart is only needed on first setup. Subsequent restarts aren’t necessary unless you change environment variables.

Access Aurora

That’s it! Aurora is now running. Access the web interface:
http://localhost:3000

Service Endpoints

The web interface is served on port 3000; the remaining services listen on ports 5080, 5432, 6379, and 8080.
Verify Installation

Check that all services are running:
make logs
You should see logs from all services without errors. To view logs for a specific service:
make logs frontend
make logs aurora-server
make logs chatbot

Next Steps

Configuration

Configure LLM providers, cloud connectors, and integrations

Cloud Connectors

Add AWS, GCP, Azure, or other cloud provider integrations

Integrations

Set up Slack, PagerDuty, GitHub, and other third-party services

Architecture

Learn about Aurora’s architecture and components

Build from Source (Alternative)

If you prefer to build images locally instead of using prebuilt images:
make prod-local
This builds all images from source and starts the services. Useful for:
  • Testing feature branches
  • Local development
  • Custom modifications
See the Installation guide for detailed development setup.

Stopping Aurora

To stop all services:
make down
This stops and removes all containers but preserves your data in Docker volumes.

Troubleshooting

Check logs for errors:
make logs
Common causes of startup failures:
  • Missing LLM API key in .env
  • Port conflicts (3000, 5080, 5432, 6379, 8080 must be available)
  • Insufficient Docker resources (allocate at least 4GB RAM)
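Port conflicts can be checked before launch. A sketch using bash's /dev/tcp redirection (the ports are the ones listed above; this trick is bash-specific):

```shell
# Report which of Aurora's ports already have a listener on this host
for port in 3000 5080 5432 6379 8080; do
  if (exec 3<>"/dev/tcp/127.0.0.1/${port}") 2>/dev/null; then
    echo "port ${port}: already in use"
  else
    echo "port ${port}: free"
  fi
done
```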
If cloud connector credentials aren’t being saved, ensure you’ve:
  1. Retrieved the Vault root token from vault-init logs
  2. Added VAULT_TOKEN to .env
  3. Restarted services with make down && make prod-prebuilt
If LLM requests are failing, verify:
  • Your API key is valid and has credits
  • LLM_PROVIDER_MODE matches your provider (openrouter, openai, anthropic, google)
  • Network connectivity to LLM provider
If you hit database errors, wait for PostgreSQL to fully initialize (check logs with make logs postgres). If issues persist, reset the database volume (this deletes all stored data):
make down
docker volume rm aurora_postgres-data
make prod-prebuilt
For production deployments, see the Production Considerations guide. This quickstart is designed for local testing and evaluation only.
