Chat API

The Chat API enables streaming conversations with GAIA’s AI assistant. It uses Server-Sent Events (SSE) for real-time message streaming with Redis-backed background execution.

Architecture

The streaming architecture is decoupled from HTTP request lifecycle:
  1. Endpoint starts background task for LangGraph execution
  2. Background task publishes chunks to Redis channel
  3. Endpoint subscribes to channel and forwards to HTTP response
  4. If client disconnects, stream continues in background
  5. Conversation is always saved to MongoDB on completion
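The decoupling above can be sketched in miniature. In this illustrative model an in-process set of callbacks stands in for the Redis channel, a plain object stands in for MongoDB, and LangGraph execution is reduced to emitting a fixed chunk list; none of these names come from GAIA's actual implementation:

```javascript
// A plain subscriber set stands in for the Redis pub/sub channel.
function createChannel() {
  const subscribers = new Set();
  return {
    subscribe(fn) {
      subscribers.add(fn);
      return () => subscribers.delete(fn); // unsubscribe = client disconnect
    },
    publish(chunk) {
      for (const fn of subscribers) fn(chunk);
    },
  };
}

// Background task: publishes every chunk, then saves the full conversation,
// regardless of whether any subscriber is still listening.
async function runBackgroundTask(channel, chunks, store) {
  const saved = [];
  for (const chunk of chunks) {
    await Promise.resolve(); // yield between chunks, as a real async task would
    channel.publish(chunk);
    saved.push(chunk);
  }
  store.conversation = saved.join('');
}

async function main() {
  const channel = createChannel();
  const store = {};
  const received = [];
  const unsubscribe = channel.subscribe((chunk) => received.push(chunk));

  const task = runBackgroundTask(channel, ['I', ' found', ' 3', ' events'], store);
  await Promise.resolve(); // let the first chunk through...
  unsubscribe();           // ...then the client disconnects
  await task;              // the background task still runs to completion
  return { received, store };
}
```

Running `main()` demonstrates the key property: the subscriber misses chunks published after it disconnects, yet the full conversation is still saved on completion.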

Endpoints

Stream Chat Messages

Stream a chat message to the AI assistant and receive the response as Server-Sent Events.
POST /api/v1/chat-stream
curl -X POST https://api.heygaia.io/api/v1/chat-stream \
  -H "Cookie: wos_session=YOUR_SESSION_TOKEN" \
  -H "Content-Type: application/json" \
  -H "x-timezone: America/New_York" \
  -d '{
    "message": "What'\''s on my calendar today?",
    "conversation_id": "conv_123",
    "history": []
  }'

Request Body

message
string
required
The user’s message to send to the AI assistant
conversation_id
string
Conversation ID to continue an existing conversation. If omitted, a new conversation is created.
history
array
Array of previous messages in the conversation, used for context. Each message object contains:
  • type (string) - "user" or "assistant"
  • response (string) - Message content
  • date (string) - ISO 8601 timestamp
fileIds
array
Array of uploaded file IDs to include in the message context
fileData
array
Array of file metadata objects with id, name, type, and url
selectedWorkflow
object
Selected workflow to execute (if applicable). Properties:
  • id (string) - Workflow ID
  • title (string) - Workflow title
replyToMessage
object
Message being replied to. Properties:
  • id (string) - Message ID
  • content (string) - Message content
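The request body can be assembled from these fields with a small helper (hypothetical, not part of any GAIA SDK; the snake_case keys follow the field names above, and the validation is illustrative):

```javascript
// Build a chat-stream request body from the documented fields.
// Only `message` is required; everything else is optional context.
function buildChatRequest({ message, conversationId, history = [], fileIds = [] }) {
  if (typeof message !== 'string' || message.length === 0) {
    throw new Error('message is required');
  }
  const body = { message, history, fileIds };
  // Omit conversation_id entirely to start a new conversation.
  if (conversationId) body.conversation_id = conversationId;
  return body;
}
```

For example, `buildChatRequest({ message: 'Hi', conversationId: 'conv_123' })` produces a body that continues `conv_123`, while omitting `conversationId` yields a body that creates a new conversation.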

Response Headers

Content-Type
string
text/event-stream - Server-Sent Events format
Cache-Control
string
no-cache - Prevents caching of stream
Connection
string
keep-alive - Maintains connection for streaming
X-Stream-Id
string
Unique stream ID for cancellation
Access-Control-Allow-Origin
string
* - CORS header for cross-origin requests
X-Accel-Buffering
string
no - Disables Nginx buffering for real-time streaming
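Because `EventSource` can only issue GET requests, a POST stream like this one is typically consumed with `fetch` and a `ReadableStream` reader. A hedged sketch (`streamChat` is a hypothetical helper; error handling and authentication are minimal, and `fetchImpl` is injectable for testing):

```javascript
// Read the SSE stream with fetch, since EventSource cannot send a POST body.
// Returns the stream ID (from X-Stream-Id) and every parsed `data:` payload.
async function streamChat(url, body, fetchImpl = fetch) {
  const response = await fetchImpl(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  const streamId = response.headers.get('X-Stream-Id');
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  const events = [];
  let buffer = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop(); // keep any partial line for the next chunk
    for (const line of lines) {
      if (!line.startsWith('data: ')) continue; // skip blank separator lines
      const payload = line.slice('data: '.length);
      if (payload === '[DONE]') return { streamId, events };
      if (payload === '[STREAM_ERROR]') throw new Error('stream error');
      events.push(JSON.parse(payload));
    }
  }
  return { streamId, events };
}
```

The returned `streamId` is what you would later pass to the cancel endpoint.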

Stream Events

The response streams data in Server-Sent Events format:
data: {"type": "token", "content": "I"}

data: {"type": "token", "content": " found"}

data: {"type": "token", "content": " 3"}

data: {"type": "tool_call", "tool": "calendar", "status": "started"}

data: {"type": "tool_result", "tool": "calendar", "data": {...}}

data: {"type": "token", "content": " events"}

data: [DONE]
type
string
Event type:
  • token - Text token from AI response
  • tool_call - Tool execution started
  • tool_result - Tool execution completed
  • error - Error occurred
  • metadata - Additional metadata
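These event types can be folded into UI state with a small reducer (a hypothetical `applyEvent` helper; `metadata` and unrecognized types are ignored in this sketch):

```javascript
// Fold one stream event into assistant-message state.
function applyEvent(state, event) {
  switch (event.type) {
    case 'token':
      return { ...state, text: state.text + event.content };
    case 'tool_call':
      return { ...state, activeTools: [...state.activeTools, event.tool] };
    case 'tool_result':
      return { ...state, activeTools: state.activeTools.filter((t) => t !== event.tool) };
    case 'error':
      return { ...state, error: event.error };
    default:
      return state; // metadata and unknown types are ignored here
  }
}

const initialState = { text: '', activeTools: [], error: null };
```

Replaying the example stream above through `events.reduce(applyEvent, initialState)` yields the assembled text "I found 3 events" with no tools left active.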

Cancel Stream

Cancel a running chat stream.
POST /api/v1/cancel-stream/{stream_id}
curl -X POST https://api.heygaia.io/api/v1/cancel-stream/stream_abc123 \
  -H "Cookie: wos_session=YOUR_SESSION_TOKEN"

Path Parameters

stream_id
string
required
The stream ID to cancel (from X-Stream-Id header)

Response

success
boolean
Whether the stream was successfully cancelled
stream_id
string
The cancelled stream ID
error
string
Error message if cancellation failed
Response Example
{
  "success": true,
  "stream_id": "stream_abc123"
}
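A minimal cancellation helper might look like this (hypothetical; the session cookie must be sent separately, e.g. via `credentials: 'include'` in a browser, and `fetchImpl` is injectable for testing):

```javascript
// Cancel a running stream by ID (the ID comes from the X-Stream-Id header).
async function cancelStream(baseUrl, streamId, fetchImpl = fetch) {
  const response = await fetchImpl(`${baseUrl}/api/v1/cancel-stream/${streamId}`, {
    method: 'POST',
  });
  // Resolves to { success, stream_id }, or { success: false, error } on failure.
  return response.json();
}
```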

Stream Format Details

Token Events

Text tokens are streamed as they’re generated:
{
  "type": "token",
  "content": "Hello"
}

Tool Call Events

When the AI uses a tool:
{
  "type": "tool_call",
  "tool": "search_web",
  "status": "started",
  "params": {
    "query": "weather today"
  }
}

Tool Result Events

When a tool completes:
{
  "type": "tool_result",
  "tool": "search_web",
  "status": "completed",
  "data": {
    "results": [...]
  }
}

Error Events

If an error occurs:
{
  "type": "error",
  "error": "Rate limit exceeded",
  "code": "rate_limit_error"
}

Stream Completion

The stream ends with:
data: [DONE]
Or if an error occurred:
data: [STREAM_ERROR]

Rate Limiting

Chat streaming is subject to rate limits:
  • Free: 50 messages/hour
  • Pro: 500 messages/hour
  • Team: 2000 messages/hour
See Rate Limits for details.
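A client can mirror these limits locally to avoid hitting the server-side limiter (illustrative only; the server remains the source of truth, and the clock is injectable here purely for testability):

```javascript
// Sliding-window guard mirroring a per-hour message limit on the client.
function createRateGuard(limitPerHour, now = Date.now) {
  const timestamps = [];
  return function tryConsume() {
    const cutoff = now() - 60 * 60 * 1000;
    // Drop send timestamps older than one hour.
    while (timestamps.length > 0 && timestamps[0] <= cutoff) timestamps.shift();
    if (timestamps.length >= limitPerHour) return false; // over local limit
    timestamps.push(now());
    return true;
  };
}
```

Call `tryConsume()` before each send; a `false` result means the local hourly budget is exhausted.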

Background Execution

The chat stream continues executing in the background even if the client disconnects. This ensures:
  • Conversations are always saved to MongoDB
  • Tool executions complete successfully
  • No data loss from network interruptions
Background tasks are tracked in Redis and automatically cleaned up after completion.

Error Handling

Redis Unavailable

If Redis is unavailable, the stream returns an error:
data: [STREAM_ERROR]

Client Disconnection

If the client disconnects:
Client disconnected, stream {stream_id} continues in background
The conversation is still saved and processing completes.

Cancellation

When a stream is cancelled:
{
  "type": "cancelled",
  "message": "Stream cancelled by user"
}

Best Practices

Implement automatic reconnection with exponential backoff:
let retries = 0;
const maxRetries = 3;

function connectStream() {
  const eventSource = new EventSource(streamUrl);

  eventSource.onopen = () => {
    retries = 0; // reset the backoff once a connection succeeds
  };

  eventSource.onerror = () => {
    eventSource.close();

    if (retries < maxRetries) {
      retries++;
      // Wait 2s, 4s, then 8s between attempts
      setTimeout(connectStream, Math.pow(2, retries) * 1000);
    }
  };
}
Process stream events without blocking the UI:
const messageQueue = [];
let processing = false;

eventSource.onmessage = (event) => {
  messageQueue.push(JSON.parse(event.data));
  processQueue();
};

async function processQueue() {
  if (processing) return; // a drain is already in flight
  processing = true;
  while (messageQueue.length > 0) {
    const event = messageQueue.shift();
    await handleEvent(event);
  }
  processing = false;
}
Maintain stream state for UI updates:
const streamState = {
  streamId: null,
  isStreaming: false,
  currentMessage: '',
  toolCalls: []
};

// X-Stream-Id is a response header, so read it from the fetch Response
// (EventSource does not expose response headers):
const response = await fetch(streamUrl, { method: 'POST', /* headers, body */ });
streamState.streamId = response.headers.get('X-Stream-Id');
streamState.isStreaming = true;
Show visual indicators for different stream states:
function updateUI(event) {
  switch (event.type) {
    case 'token':
      appendToMessage(event.content);
      break;
    case 'tool_call':
      showToolIndicator(event.tool);
      break;
    case 'tool_result':
      hideToolIndicator(event.tool);
      displayToolResult(event.data);
      break;
  }
}

Next Steps

Todos API

Manage tasks and projects

Workflows API

Automate tasks with workflows
