Overview
Aurora’s AI agent is built on LangGraph, a framework for building stateful, multi-step workflows with large language models. The agent can execute cloud operations, search knowledge bases, run commands, and more through a tool-based architecture.

LangGraph Architecture
State Graph
The agent workflow is implemented as a directed graph, with nodes representing operations and edges defining the flow.
Source: server/chat/backend/agent/workflow.py:103-113
State Management
The workflow state stores conversation context and execution metadata.
Source: server/chat/backend/agent/utils/state.py
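A LangGraph state of this kind is commonly a TypedDict; a minimal sketch with illustrative field names (the `Annotated` reducer is the LangGraph convention for appending messages rather than replacing them):

```python
from operator import add
from typing import Annotated, TypedDict

class WorkflowState(TypedDict):
    # Conversation history; the `add` reducer tells LangGraph to append
    # messages returned by nodes instead of overwriting the list.
    messages: Annotated[list, add]
    # Illustrative execution metadata.
    user_id: str
    mode: str        # e.g. "agent" or "ask"
    tool_calls: int

state: WorkflowState = {
    "messages": [],
    "user_id": "user-42",
    "mode": "ask",
    "tool_calls": 0,
}
```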
Memory Persistence
LangGraph uses a memory saver to persist conversation context across requests.
Source: server/chat/backend/agent/workflow.py:60-64
Agent Execution Flow
1. Initialize Connection
User connects via WebSocket with authentication.
Source: server/main_chatbot.py:94-133
2. Receive Query
User sends a question with context.
Source: server/main_chatbot.py:806-839
3. Set User Context
The agent sets thread-local context for tools.
Source: server/chat/backend/agent/agent.py:200-208
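The pattern can be sketched with Python's `contextvars` (the async-friendly analogue of thread-local storage); the function and field names here are illustrative:

```python
import contextvars

# Set once per request before the workflow runs; tools read it without
# having the context threaded through every function signature.
_user_context: contextvars.ContextVar[dict] = contextvars.ContextVar("user_context")

def set_user_context(user_id: str, mode: str, providers: list) -> None:
    _user_context.set({"user_id": user_id, "mode": mode, "providers": providers})

def get_user_context() -> dict:
    # Default to an empty dict if no context was set for this request.
    return _user_context.get({})

set_user_context("user-42", "ask", ["gcp"])
ctx = get_user_context()
```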
4. Build System Prompt
The system prompt is built dynamically based on connected providers and mode.
Source: server/chat/backend/agent/agent.py:244-250
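A hypothetical sketch of this kind of prompt assembly (the actual prompt text lives in Aurora's codebase):

```python
def build_system_prompt(mode: str, providers: list) -> str:
    # Illustrative only: compose the prompt from mode and connections.
    lines = ["You are Aurora, a cloud operations assistant."]
    if providers:
        lines.append(f"Connected providers: {', '.join(providers)}.")
    if mode == "ask":
        lines.append("You are in read-only mode; never modify infrastructure.")
    else:
        lines.append("You may execute approved infrastructure changes.")
    return "\n".join(lines)

prompt = build_system_prompt("ask", ["gcp", "aws"])
```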
5. Load Available Tools
Tools are filtered based on mode and connected providers.
Source: server/chat/backend/agent/agent.py:168
6. Execute LangGraph Workflow
The workflow streams events as the agent thinks and acts.
Source: server/main_chatbot.py:331-470
7. Tool Execution
When the LLM decides to use a tool, the workflow routes to the tool node, executes the tool, and feeds the result back to the LLM.

8. Stream Response
The final answer is streamed token-by-token to the frontend.
Source: server/main_chatbot.py:439-470
Agent Tools
Aurora provides 30+ tools across multiple categories.

Cloud Operations
| Tool | Purpose | File |
|---|---|---|
| list_gke_clusters | List GKE clusters | cloud_tools.py |
| list_gcp_compute_instances | List GCP VMs | cloud_tools.py |
| list_aws_ec2_instances | List AWS EC2 instances | cloud_tools.py |
| list_azure_vms | List Azure VMs | cloud_tools.py |
| get_gcp_logs | Query GCP logs | cloud_tools.py |
| iac_write | Write Terraform | iac_tool.py |
| iac_deploy | Deploy infrastructure | iac_tool.py |
Knowledge & Search
| Tool | Purpose | File |
|---|---|---|
| knowledge_base_search | Semantic search over uploaded docs | knowledge_base_search_tool.py |
| confluence_search | Search Confluence wikis | confluence_search_tool.py |
| web_search | Search the internet | web_search_tool.py |
Command Execution
| Tool | Purpose | File |
|---|---|---|
| run_kubectl_command | Execute kubectl in pods | cloud_tools.py |
| terminal_exec | Run bash commands | terminal_exec_tool.py |
| cloud_exec | Execute cloud CLI commands | cloud_exec_tool.py |
Source Control
| Tool | Purpose | File |
|---|---|---|
| github_search_code | Search GitHub repos | github_rca_tool.py |
| github_get_file | Read file content | github_rca_tool.py |
| github_commit | Commit and push changes | github_commit_tool.py |
| github_apply_fix | Apply code fixes | github_apply_fix_tool.py |
Monitoring & Observability
| Tool | Purpose | File |
|---|---|---|
| splunk_search | Search Splunk logs | splunk_tool.py |
| dynatrace_query | Query Dynatrace metrics | dynatrace_tool.py |
| coroot_analyze | Analyze Coroot data | coroot_tool.py |
| jenkins_get_build_logs | Fetch Jenkins logs | jenkins_rca_tool.py |
File Operations
| Tool | Purpose | File |
|---|---|---|
| extract_zip_file | Extract ZIP archives | zip_file_tool.py |
| read_terraform_file | Read Terraform files | iac_tool.py |
All tool implementations live under server/chat/backend/agent/tools/.
Tool Execution Pattern
Tool Definition
Tools are defined using LangChain’s @tool decorator:
Tool Context Access
Tools access user context via thread-local storage.
Source: server/chat/backend/agent/tools/cloud_tools.py:58-98
Tool Output Streaming
Tools can send real-time updates via WebSocket.
Source: server/main_chatbot.py:176-215
Error Handling
Tools return structured JSON with a status field.

Access Control
Agent vs Ask Mode
Aurora supports two operational modes.

Agent Mode (Full Access):
- Can execute infrastructure changes
- Can commit code to GitHub
- Can deploy Terraform resources
- Can run destructive operations

Ask Mode (Read-Only):
- Can read cloud resources
- Can search knowledge bases
- Cannot modify infrastructure
- Cannot commit code
Source: server/chat/backend/agent/access.py
Tool Filtering
Source: server/main_chatbot.py:867-876
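A minimal sketch of what mode- and provider-based filtering might look like (the registry structure and metadata fields here are hypothetical):

```python
# Hypothetical registry: each tool notes whether it mutates state and
# which provider connection it requires (None = always available).
TOOLS = {
    "list_gcp_compute_instances": {"mutating": False, "provider": "gcp"},
    "iac_deploy":                 {"mutating": True,  "provider": None},
    "web_search":                 {"mutating": False, "provider": None},
    "github_commit":              {"mutating": True,  "provider": "github"},
}

def filter_tools(mode: str, connected: set) -> list:
    allowed = []
    for name, meta in TOOLS.items():
        if mode == "ask" and meta["mutating"]:
            continue  # Ask Mode is read-only: drop mutating tools
        if meta["provider"] and meta["provider"] not in connected:
            continue  # user has not connected this provider
        allowed.append(name)
    return allowed

ask_tools = filter_tools("ask", {"gcp"})
```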
LLM Integration
Multi-Provider Support
Aurora supports multiple LLM providers:
- OpenRouter: Access 100+ models (default)
- OpenAI: GPT-3.5, GPT-4, GPT-4o
- Anthropic: Claude 3.5 Sonnet, Claude 3 Opus
- Google: Gemini Pro, Gemini 2.0 (with thinking)
Source: server/chat/backend/agent/llm.py
Prompt Caching
Aurora uses prefix caching to reduce latency and costs.
Source: server/chat/backend/agent/utils/prefix_cache.py
MCP Integration
Aurora supports Model Context Protocol (MCP) for extending agent capabilities.

MCP Preloader
MCP servers are preloaded on startup for faster response times.
Source: server/main_compute.py:102-106
Dynamic Tool Loading
MCP tools are dynamically loaded based on user connections.
Source: server/chat/backend/agent/tools/mcp_tools.py
Streaming Architecture
Token Streaming
LLM responses are streamed token-by-token to improve perceived speed.
Source: server/main_chatbot.py:341-360
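The pattern can be sketched with an async generator standing in for the LLM stream and a hypothetical `send` callable standing in for the WebSocket:

```python
import asyncio

async def fake_llm_stream(answer: str):
    # Stand-in for the provider's token stream.
    for token in answer.split():
        await asyncio.sleep(0)  # yield control, as a network read would
        yield token + " "

async def stream_to_client(send):
    # Forward each token to the frontend as soon as it arrives.
    async for token in fake_llm_stream("Deployment completed successfully"):
        await send({"type": "token", "content": token})
    await send({"type": "end"})

async def main():
    received = []
    async def send(msg):  # hypothetical WebSocket send
        received.append(msg)
    await stream_to_client(send)
    return received

messages = asyncio.run(main())
```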
Tool Call Streaming
Tool execution status is streamed in real-time.
Source: server/main_chatbot.py:386-414
Cancellation & Cleanup
Workflow Cancellation
Users can cancel in-progress workflows. Cancellation:
- Cancels the asyncio task
- Waits for ongoing tool calls to complete
- Consolidates message chunks
- Saves context for resumption
- Sends END status to frontend
Source: server/main_chatbot.py:709-800
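The core of the cancellation flow above can be sketched with asyncio (the handler names and cleanup steps here are illustrative):

```python
import asyncio

async def run_workflow(chunks: list):
    try:
        for i in range(100):
            await asyncio.sleep(0.01)  # stand-in for streaming LLM/tool work
            chunks.append(f"chunk-{i}")
    except asyncio.CancelledError:
        # Cleanup runs even when the user cancels mid-stream.
        chunks.append("[cancelled]")
        raise

async def main():
    chunks = []
    task = asyncio.create_task(run_workflow(chunks))
    await asyncio.sleep(0.05)  # user clicks "stop" after a few chunks
    task.cancel()              # cancel the asyncio task
    try:
        await task             # wait for ongoing work and cleanup to finish
    except asyncio.CancelledError:
        pass
    # Consolidate the message chunks received before cancellation.
    return "".join(c for c in chunks if not c.startswith("["))

partial = asyncio.run(main())
```

Re-raising `CancelledError` after cleanup is the idiomatic way to let the task finish cancelling while still saving partial context.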
Terraform Cleanup
Terraform state is cleaned up after deployment.
Source: server/chat/backend/agent/agent.py:40-78