Chat Messages

Messages are exchanged with Aurora’s AI agent through WebSocket connections for real-time streaming responses. Use HTTP endpoints for session management and WebSocket for live chat interactions.

Message Structure

Messages in Aurora follow a consistent structure for both user and assistant messages.

User Messages

sender (string, default: "user")
  Always “user” for messages from the user
text (string, required)
  Message text content. Can include:
    • Natural language instructions
    • Questions about infrastructure
    • Commands for cloud operations
    • File attachment context (auto-appended)
images (array, optional)
  Attached images for multimodal messages (supports image analysis)
Example User Message:
{
  "sender": "user",
  "text": "Deploy a load balancer in GCP us-central1"
}
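A minimal sketch of assembling this payload client-side (the `build_user_message` helper is illustrative, not part of Aurora's API):

```python
import json

def build_user_message(text, images=None):
    """Assemble a user message payload; 'images' is only set for multimodal input."""
    msg = {"sender": "user", "text": text}
    if images:
        msg["images"] = images
    return json.dumps(msg)

payload = build_user_message("Deploy a load balancer in GCP us-central1")
# 'payload' is now a JSON string ready to send over the chat WebSocket
```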

Assistant Messages

sender (string)
  Always “assistant” for AI responses
text (string)
  Response text with markdown formatting
tool_calls (array)
  Array of tool invocations made by the agent. Each entry contains:
    tool_name (string): Name of the tool called (e.g., “gcp_compute”, “aws_ec2”)
    input (object): Parameters passed to the tool
    output (string): Tool execution result
Example Assistant Message:
{
  "sender": "assistant",
  "text": "I'll create a load balancer in GCP us-central1 for you.",
  "tool_calls": [
    {
      "tool_name": "gcp_compute",
      "input": {
        "action": "create_load_balancer",
        "region": "us-central1"
      },
      "output": "Load balancer created: lb-prod-001"
    }
  ]
}
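Client code can iterate over tool_calls when rendering a response. A sketch, noting that tool_calls may be absent on plain text replies:

```python
import json

raw = """{
  "sender": "assistant",
  "text": "I'll create a load balancer in GCP us-central1 for you.",
  "tool_calls": [
    {"tool_name": "gcp_compute",
     "input": {"action": "create_load_balancer", "region": "us-central1"},
     "output": "Load balancer created: lb-prod-001"}
  ]
}"""

msg = json.loads(raw)
# tool_calls is absent on plain text responses, so default to an empty list
for call in msg.get("tool_calls", []):
    print(f"{call['tool_name']} -> {call['output']}")
```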

Multimodal Messages

Aurora supports multimodal messages with images for visual analysis.

Sending Images

Images are sent as part of the message content in data URL format:
{
  "query": "What's in this infrastructure diagram?",
  "attachments": [
    {
      "filename": "diagram.png",
      "file_type": "image/png",
      "file_data": "iVBORw0KGgoAAAANSUhEUg..."
    }
  ]
}
attachments (array)
  Array of file attachments. Each entry contains:
    filename (string, required): Original filename
    file_type (string, required): MIME type (image/png, image/jpeg, application/pdf)
    file_data (string, required): Base64-encoded file data (without data URL prefix for images)
    is_server_path (boolean, default: false): Whether file_data is a server path reference
    server_path (string, optional): Server-side file path (for large files like ZIP archives)
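Encoding raw file bytes into this attachment shape might look like the following (`build_attachment` is a hypothetical helper, not an Aurora API):

```python
import base64

def build_attachment(filename, file_type, raw_bytes):
    """Encode raw file bytes into an attachment entry.
    Note: file_data carries bare base64, without a data URL prefix."""
    return {
        "filename": filename,
        "file_type": file_type,
        "file_data": base64.b64encode(raw_bytes).decode("ascii"),
    }

message = {
    "query": "What's in this infrastructure diagram?",
    "attachments": [
        # stand-in bytes; in practice, read them with open(path, "rb").read()
        build_attachment("diagram.png", "image/png", b"\x89PNG\r\n\x1a\n"),
    ],
}
```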

Supported File Types

  • Images: PNG, JPEG, GIF, WebP (for visual analysis)
  • Documents: PDF (text extraction)
  • Archives: ZIP (server-side processing)

Image Processing

When messages with images are retrieved:
  1. Data URLs are parsed and split into components
  2. Images are extracted into images array
  3. Text content is separated from image data
  4. Each image includes:
    • displayData - Full data URL for rendering
    • type - MIME type
    • data - Base64-encoded image data
    • name - Generated filename
Example with Retrieved Images:
{
  "sender": "user",
  "text": "Analyze this diagram",
  "images": [
    {
      "displayData": "data:image/png;base64,iVBORw0KGg...",
      "type": "image/png",
      "data": "iVBORw0KGg...",
      "name": "image_0.png"
    }
  ]
}
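The parsing steps above can be sketched as a small helper (`parse_data_url` is hypothetical; the field names follow the example):

```python
def parse_data_url(data_url, index=0):
    """Split a data URL into the displayData/type/data/name fields."""
    header, b64_data = data_url.split(",", 1)        # "data:image/png;base64" + payload
    mime = header.split(":", 1)[1].split(";", 1)[0]  # -> "image/png"
    ext = mime.split("/", 1)[1]
    return {
        "displayData": data_url,            # full data URL for rendering
        "type": mime,                       # MIME type
        "data": b64_data,                   # bare base64 payload
        "name": f"image_{index}.{ext}",     # generated filename
    }

img = parse_data_url("data:image/png;base64,iVBORw0KGg...")
```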

Message Context

Aurora maintains conversation context across messages:

Context Management

llm_context_history (array)
  Complete conversation history sent to the LLM
    • Includes all user and assistant messages
    • Contains tool call results
    • Used for context-aware responses
ui_messages (array)
  Simplified messages for UI display
    • User-facing message format
    • Excludes internal tool details
    • Shown in chat interface

Token Management

Messages are subject to token limits:
  • Input limit: 20,000 tokens per user message (~80,000 characters)
  • Context window: Managed automatically by Aurora
  • Token counting: Validated before processing
Error Example:
{
  "type": "error",
  "data": {
    "text": "Your message is too long (25000 tokens). Please limit your message to 20,000 tokens.",
    "severity": "error",
    "session_id": "550e8400-e29b-41d4-a716-446655440000"
  }
}
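A client can pre-check message size using the ~4 characters/token heuristic implied by the limits above (the server's token count remains authoritative):

```python
TOKEN_LIMIT = 20_000
CHARS_PER_TOKEN = 4  # rough heuristic: 20,000 tokens ~= 80,000 characters

def estimate_tokens(text):
    return len(text) // CHARS_PER_TOKEN

def check_message(text):
    """Raise before sending if the message is clearly over the limit."""
    estimate = estimate_tokens(text)
    if estimate > TOKEN_LIMIT:
        raise ValueError(f"Message is ~{estimate} tokens; limit is {TOKEN_LIMIT}.")
    return estimate
```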

Message Streaming

Messages are streamed in real-time via WebSocket. See WebSocket Protocol for details on:
  • Sending chat messages
  • Receiving streamed responses
  • Handling tool call events
  • Managing session state

Auto-Generated Titles

Session titles are auto-generated from the first user message:
  1. Extracts first 50 characters of first user message
  2. Trims to last complete word
  3. Appends “…” if truncated
  4. Defaults to “New Chat” if no user message exists
Example:
Input: "Deploy a highly available Kubernetes cluster in GCP with auto-scaling enabled"
Title: "Deploy a highly available Kubernetes cluster..."
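The four rules above can be sketched as follows (an assumed implementation; the exact word-trimming behavior may differ):

```python
def generate_title(first_message, max_len=50):
    """Derive a session title from the first user message."""
    if not first_message:
        return "New Chat"
    if len(first_message) <= max_len:
        return first_message
    # Take the first max_len characters, then trim back to the last complete word
    truncated = first_message[:max_len].rsplit(" ", 1)[0]
    return truncated + "..."
```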

Best Practices

Message Composition

  • Be specific with cloud resource requirements
  • Include region/zone information when relevant
  • Provide context for multi-step operations
  • Reference previous messages naturally

Attachments

  • Optimize images before uploading (< 5MB recommended)
  • Use PNG for diagrams, JPEG for photos
  • Include descriptive filenames
  • Attach relevant context files

Error Handling

  • Check token count for large messages
  • Validate file formats before uploading
  • Handle WebSocket disconnections gracefully
  • Retry failed operations with exponential backoff
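The exponential-backoff recommendation can be sketched as follows (parameter values are illustrative, not prescribed by Aurora):

```python
import random
import time

def retry_with_backoff(operation, max_attempts=5, base_delay=1.0):
    """Call operation(), retrying on failure with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the last error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```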

Message Lifecycle

  1. Compose - User creates message with text/attachments
  2. Validate - Token count and format validation
  3. Send - Transmitted via WebSocket
  4. Process - Aurora agent analyzes and executes
  5. Stream - Response streamed back in chunks
  6. Store - Messages persisted in session
  7. Display - Rendered in chat UI
