Welcome to Aurora

Aurora is an automated root-cause-analysis tool that uses AI agents to help Site Reliability Engineers (SREs) investigate and resolve incidents. Built on a combination of AI orchestration and cloud integrations, Aurora accelerates incident response by automatically investigating issues across your infrastructure.
Aurora works without any cloud provider accounts! The only external requirement is an LLM API key. Connectors for cloud providers and third-party services are optional and can be enabled later.

Key Features

AI-Powered Root Cause Analysis

Automated investigation using LangGraph agents that analyze logs, metrics, and infrastructure state to identify root causes.

Multi-Cloud Support

Built-in connectors for AWS, GCP, Azure, and more. Optional integrations allow Aurora to investigate across your entire infrastructure.

Self-Hosted & Secure

Run entirely on your infrastructure with HashiCorp Vault for secrets management. No external dependencies except your chosen LLM provider.

Real-Time Investigation

WebSocket-based chatbot interface provides live updates as agents investigate incidents and gather information.
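As a rough illustration of how a client might consume these live updates, here is a minimal Python sketch that formats one incoming agent-status message for display. The message schema used here (`agent`, `status`, `detail` fields) is a hypothetical example, not Aurora's actual wire format:

```python
import json

def parse_agent_update(raw: str) -> str:
    """Format one chatbot WebSocket message for display.

    Assumes a hypothetical JSON schema:
    {"agent": "...", "status": "...", "detail": "..."}
    Aurora's real message format may differ.
    """
    msg = json.loads(raw)
    return f'[{msg["agent"]}] {msg["status"]}: {msg["detail"]}'

# Example: a (made-up) update from a log-analysis agent
print(parse_agent_update(
    '{"agent": "logs", "status": "running", "detail": "scanning sources"}'
))
```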

Architecture Overview

Aurora consists of several key components that work together:
  • Python Backend: Flask REST API and WebSocket chatbot powered by LangGraph agents
  • Next.js Frontend: Modern React interface for incident management and investigation
  • PostgreSQL: Primary database for incident data and configurations
  • Weaviate: Vector database for semantic search and knowledge retrieval
  • Redis: Message queue for Celery background tasks
  • HashiCorp Vault: Secure secrets storage for API keys and credentials
  • SeaweedFS: S3-compatible object storage (Apache 2.0 licensed)
  • Memgraph: Graph database for infrastructure topology
All services run in Docker containers for easy deployment. See the Quickstart guide to get up and running in minutes.
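Once the containers are up, a quick way to sanity-check the stack is to probe each service's TCP port. The sketch below uses the upstream default ports for each service; these are assumptions, so check `docker-compose.yaml` for the actual mappings in your deployment:

```python
import socket

# Upstream default ports (assumptions -- verify against docker-compose.yaml).
SERVICE_PORTS = {
    "postgres": 5432,
    "redis": 6379,
    "weaviate": 8080,
    "vault": 8200,
    "memgraph": 7687,
}

def is_up(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, port in SERVICE_PORTS.items():
    state = "up" if is_up("localhost", port) else "down"
    print(f"{name:10s} :{port}  {state}")
```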

Repository Structure

aurora/
├── server/          # Python API, chatbot, Celery workers
├── client/          # Next.js frontend
├── config/          # Configuration files for services
├── scripts/         # Setup and initialization scripts
├── deploy/          # Kubernetes Helm charts
└── docker-compose.yaml

Get Started

Quickstart

Get Aurora running locally in under 5 minutes with prebuilt images.

Installation

Detailed installation instructions for development and production deployments.

Configuration

Learn how to configure LLM providers, cloud connectors, and integrations.

API Reference

Complete API documentation for building custom integrations.
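As a taste of what a custom integration might look like, here is a Python sketch that builds an authenticated request against the Flask REST API. The endpoint path (`/api/incidents`), base URL, payload fields, and bearer-token scheme are all hypothetical placeholders; consult the API Reference for the real routes and authentication:

```python
import json
import urllib.request

# Hypothetical defaults -- replace with your deployment's values.
AURORA_URL = "http://localhost:5000"

def build_create_incident_request(
    title: str, description: str, api_token: str
) -> urllib.request.Request:
    """Build a POST request for a hypothetical /api/incidents endpoint."""
    payload = json.dumps({"title": title, "description": description}).encode()
    return urllib.request.Request(
        f"{AURORA_URL}/api/incidents",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_token}",
        },
        method="POST",
    )

req = build_create_incident_request("DB latency", "p99 spiking", "token123")
# Send with urllib.request.urlopen(req) once Aurora is running.
```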

What’s Next?

After getting Aurora running, you can:
  1. Add Cloud Connectors: Connect AWS, GCP, Azure, or other providers to enable infrastructure investigation
  2. Configure Integrations: Set up Slack, PagerDuty, GitHub, and other third-party services
  3. Customize Agents: Configure agent behavior and recursion limits for your use case
  4. Deploy to Production: Use Docker Compose or Kubernetes for production deployments
Aurora is designed for SRE teams and requires familiarity with cloud infrastructure concepts. Ensure you understand the security implications of granting Aurora access to your cloud accounts.

Open Source

Aurora is licensed under the Apache License 2.0. We welcome contributions from the community!
Ready to get started? Head to the Quickstart guide to deploy Aurora in minutes.