Architecture Overview
Aurora is built as a microservices architecture using Docker Compose. The system consists of multiple components that work together to provide a natural-language interface for cloud infrastructure management.

System Components
Backend Services
Aurora Server (Flask API)
- Location: `server/`
- Entry Point: `main_compute.py`
- Port: 5080
- Framework: Flask 3.1.3
- Purpose: REST API for all compute operations, user management, and cloud provider integrations
Chatbot WebSocket Server
- Location: `server/`
- Entry Point: `main_chatbot.py`
- Port: 5006
- Protocol: WebSocket
- Purpose: Real-time conversational AI interface powered by LangGraph
Celery Worker
- Location: `server/`
- Purpose: Background task processing for long-running operations
- Broker: Redis
- Tasks: Cloud resource discovery, billing updates, infrastructure provisioning
Celery Beat
- Purpose: Periodic task scheduler
- Tasks: Scheduled billing updates, resource synchronization
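The worker/beat pairing can be sketched as a beat schedule: Beat pushes entries like these onto the Redis broker on a timer, and the worker executes them. The task names and intervals below are illustrative, not Aurora's actual ones.

```python
from datetime import timedelta

# Hypothetical beat schedule; task names and intervals are illustrative.
# Celery Beat enqueues these via the Redis broker; the Celery worker runs them.
CELERY_BEAT_SCHEDULE = {
    "update-billing": {
        "task": "tasks.update_billing",        # scheduled billing updates
        "schedule": timedelta(hours=1),
    },
    "sync-resources": {
        "task": "tasks.sync_cloud_resources",  # resource synchronization
        "schedule": timedelta(minutes=15),
    },
}
```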
Frontend
- Location: `client/`
- Framework: Next.js 15
- Language: TypeScript
- UI Components: shadcn/ui (Radix UI primitives)
- Styling: Tailwind CSS
- Authentication: Auth.js (NextAuth v5 beta)
- Port: 3000
Data Layer
PostgreSQL
- Port: 5432
- Database: `aurora_db`
- Purpose: Primary relational database for users, projects, infrastructure state
- Driver: psycopg2 (backend)
Weaviate
- Port: 8080
- Purpose: Vector database for semantic search and RAG (Retrieval-Augmented Generation)
- Use Cases: Knowledge base search, context retrieval for AI agent
Redis
- Port: 6379
- Purpose: Message broker for Celery, caching layer
Infrastructure Services
HashiCorp Vault
- Port: 8200
- Purpose: Secrets management for cloud provider credentials, API keys
- Storage: File-based with Docker volumes (`vault-data`, `vault-init`)
- Mount: KV v2 engine at `auroramount`
- Auto-initialization: `vault-init` container handles setup and unsealing
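Reading a secret from the KV v2 engine at `auroramount` boils down to one authenticated HTTP call; KV v2 inserts `data/` into the URL and nests the payload under `data.data`. A minimal stdlib sketch, assuming Vault on its default port; the secret path `providers/aws` is hypothetical.

```python
import json
import urllib.request

VAULT_ADDR = "http://localhost:8200"   # Vault port from the compose setup
MOUNT = "auroramount"                  # KV v2 mount used by Aurora

def kv2_read_url(mount: str, path: str, addr: str = VAULT_ADDR) -> str:
    """KV v2 prepends 'data/' to the secret path in its HTTP API."""
    return f"{addr}/v1/{mount}/data/{path}"

def read_secret(path: str, token: str) -> dict:
    req = urllib.request.Request(
        kv2_read_url(MOUNT, path),
        headers={"X-Vault-Token": token},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # KV v2 nests the secret's key/value pairs under data.data
    return body["data"]["data"]

# Example (requires a running, unsealed Vault and a valid token):
# creds = read_secret("providers/aws", token="...")
```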
SeaweedFS
- S3 API Port: 8333
- File Browser: 8888
- Cluster Status: 9333
- Purpose: S3-compatible object storage for Terraform state files, artifacts
- License: Apache 2.0
- Alternatives: AWS S3, Cloudflare R2, MinIO, GCS (S3 interop)
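Because SeaweedFS exposes an S3-compatible API, Terraform can keep its state there with the standard `s3` backend pointed at the local endpoint. A hedged sketch (Terraform ≥ 1.6 syntax); the bucket and key names are hypothetical:

```hcl
terraform {
  backend "s3" {
    bucket = "aurora-terraform-state"             # hypothetical bucket name
    key    = "projects/example/terraform.tfstate" # hypothetical state key
    region = "us-east-1"                          # required by the backend, ignored by SeaweedFS

    endpoints = { s3 = "http://localhost:8333" }  # SeaweedFS S3 API port

    # Skip AWS-specific checks that do not apply to a non-AWS endpoint
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    skip_requesting_account_id  = true
    use_path_style              = true
  }
}
```

Swapping in AWS S3, Cloudflare R2, MinIO, or GCS mostly means changing the endpoint and removing the skip flags.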
Tech Stack
Backend Technologies
Frontend Technologies
Project Structure
Backend Structure (server/)
Frontend Structure (client/)
Data Flow
Chat Workflow
- User Input: User sends message via WebSocket from frontend
- WebSocket Server: `main_chatbot.py` receives the message
- LangGraph Agent: Message flows through the LangGraph workflow
- Tool Execution: Agent calls cloud provider tools, database queries
- LLM Processing: OpenRouter/Anthropic/OpenAI generates response
- Streaming Response: Response streamed back to frontend via WebSocket
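The streaming step can be sketched as follows. This is a simplified stand-in, not Aurora's actual handler: `agent_stream` substitutes for the LangGraph workflow and LLM call, and `send` substitutes for the WebSocket send in `main_chatbot.py`. The point is that each token is forwarded as soon as the agent yields it, rather than after the full reply is ready.

```python
import asyncio

async def agent_stream(message: str):
    # Placeholder for the LangGraph workflow + LLM call.
    for token in ["Provisioning", " plan", " ready."]:
        await asyncio.sleep(0)   # real code awaits LLM/tool calls here
        yield token

async def handle_message(message: str, send) -> str:
    """`send` stands in for the WebSocket send callable."""
    parts = []
    async for token in agent_stream(message):
        await send(token)        # stream each chunk back immediately
        parts.append(token)
    return "".join(parts)        # full reply, e.g. for conversation history

async def demo():
    sent = []
    async def send(t):
        sent.append(t)
    reply = await handle_message("create a vm", send)
    return reply, sent
```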
Infrastructure Provisioning
- User Request: User requests infrastructure via chat or UI
- Agent Analysis: LangGraph agent analyzes requirements
- Terraform Generation: Agent generates Terraform configurations
- State Storage: Terraform state stored in SeaweedFS
- Approval Flow: User confirms changes via WebSocket
- Celery Task: Background task executes Terraform apply
- Result Notification: User notified of completion
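Steps 5-6 above can be sketched as a background job: once the user has approved the plan over WebSocket, the task runs Terraform non-interactively. A minimal stand-in using `subprocess`; in Aurora this body would sit inside a Celery task, and the example path is hypothetical.

```python
import subprocess

def terraform_apply_cmd(workdir: str, auto_approve: bool = True) -> list:
    # The user already approved the plan over WebSocket, so the
    # background task can apply without prompting.
    cmd = ["terraform", f"-chdir={workdir}", "apply"]
    if auto_approve:
        cmd.append("-auto-approve")
    return cmd

def run_apply(workdir: str) -> int:
    # Plain function here so the control flow stays visible; in Aurora
    # this would be decorated as a Celery task.
    proc = subprocess.run(terraform_apply_cmd(workdir),
                          capture_output=True, text=True)
    return proc.returncode

# Example (requires terraform on PATH and an initialized working directory):
# run_apply("/tmp/aurora-plans/project-42")
```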
Authentication Flow
- Login: User submits credentials to Next.js API route
- Auth.js: NextAuth validates credentials against PostgreSQL
- JWT Token: Stateless JWT token issued (flask-jwt-extended)
- Session: Session stored in secure HTTP-only cookie
- API Requests: JWT included in Authorization header
- Validation: Flask middleware validates JWT on each request
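In Aurora the validation step is handled by flask-jwt-extended; the stdlib sketch below shows what HS256 verification amounts to underneath (it deliberately omits expiry and claim checks, which the real library performs).

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign_jwt(payload: dict, secret: str) -> str:
    """Build an HS256 JWT: base64url(header).base64url(payload).signature"""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    sig = hmac.new(secret.encode(), f"{header}.{body}".encode(),
                   hashlib.sha256).digest()
    return f"{header}.{body}.{_b64url(sig)}"

def verify_jwt(token: str, secret: str) -> dict:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.split(".")
    expected = hmac.new(secret.encode(), f"{header}.{body}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(_b64url(expected), sig):
        raise ValueError("bad signature")
    return json.loads(_b64url_decode(body))
```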
Agent Architecture (LangGraph)
Aurora uses LangGraph for orchestrating the AI agent workflow.

Agent State
Agent Tools
The agent has access to various tools:
- Cloud Provider Tools: List resources, create/modify/delete resources
- Database Tools: Query infrastructure state, user data
- Terraform Tools: Generate, validate, apply IaC
- Knowledge Base Tools: Search documentation, retrieve context
- Billing Tools: Get cost estimates, analyze spending
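The tool-calling pattern can be sketched with a stdlib stand-in. In Aurora the tools are LangChain/LangGraph tools bound to the agent; the registry, names, and return values below are illustrative only.

```python
# Hypothetical tool registry; Aurora's actual tools are LangChain tools.
TOOLS = {}

def tool(fn):
    """Register a function as an agent tool, using its docstring as the
    description the LLM sees when choosing which tool to call."""
    TOOLS[fn.__name__] = {"fn": fn, "description": (fn.__doc__ or "").strip()}
    return fn

@tool
def list_resources(provider: str) -> list:
    """List cloud resources for the given provider."""
    return [f"{provider}-vm-1", f"{provider}-vm-2"]  # placeholder data

@tool
def estimate_cost(resource: str) -> float:
    """Return a monthly cost estimate in USD for a resource."""
    return 12.50  # placeholder data

def call_tool(name: str, **kwargs):
    """Dispatch an agent-selected tool call by name."""
    return TOOLS[name]["fn"](**kwargs)
```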
Secrets Management
Aurora uses HashiCorp Vault for secure secrets storage.

Secret References
Secrets stored in the database use a special reference format.

Vault Structure
Storage Architecture
Aurora uses S3-compatible storage via SeaweedFS.

Code Style Guidelines
Python (Backend)
- Naming: `snake_case` for functions, variables, files
- Imports: Group imports (stdlib, third-party, local)
- Error Handling: Use try/except with logging
- Async: Use async/await with langchain/langgraph
- Logging: Use `logging.INFO` level, no emojis in logs
- Database: Use connection pooling via `db_pool`
- Routes: Organize as Flask blueprints in `routes/`
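A short example of the conventions above in one place; the function and environment variable are made up for illustration.

```python
# Standard library imports first; third-party (flask, celery, ...) and
# local imports (e.g. blueprints from routes/) would follow in their own groups.
import logging

logger = logging.getLogger(__name__)

def get_worker_count(env: dict) -> int:
    """snake_case naming, one small focused job, errors logged rather than
    silently swallowed. WORKER_COUNT is an illustrative variable name."""
    try:
        return int(env.get("WORKER_COUNT", "1"))
    except ValueError:
        # Plain-text log message at a standard level, no emojis
        logger.error("invalid WORKER_COUNT=%r, falling back to 1",
                     env.get("WORKER_COUNT"))
        return 1
```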
TypeScript (Frontend)
- Naming: `camelCase` for variables/functions, `PascalCase` for components
- Imports: Use path alias `@/*` for `./src/*`
- Components: Functional components with TypeScript
- Hooks: Follow React hooks best practices
- Error Handling: Use try/catch with user-friendly messages
- Styling: Use Tailwind CSS utility classes
- URLs: Use `kebab-case` for routes
General
- Keep functions small and focused
- Avoid deep nesting
- Write self-documenting code
- Add comments for complex logic only
- No commented-out code in commits
- No emojis in code or logs
Configuration
Docker Compose Files
- docker-compose.yaml: Development environment
- docker-compose.prod-local.yml: Production builds for local testing
Environment Variables
Configuration is managed via a `.env` file. See `.env.example` for all available options.
Key configuration areas:
- Database credentials
- LLM API keys
- Cloud provider credentials (or use Vault)
- Service URLs and ports
- Feature flags
Next Steps
Setup Guide
Set up your development environment
Contributing
Learn how to contribute to Aurora
Testing
Write and run tests