Chat API
The Chat API enables streaming conversations with GAIA’s AI assistant. It uses Server-Sent Events (SSE) for real-time message streaming with Redis-backed background execution.
Architecture
The streaming architecture is decoupled from the HTTP request lifecycle:
- Endpoint starts a background task for LangGraph execution
- Background task publishes chunks to Redis channel
- Endpoint subscribes to channel and forwards to HTTP response
- If client disconnects, stream continues in background
- Conversation is always saved to MongoDB on completion
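The decoupled flow above can be sketched in a few lines. This is a simplified model, not GAIA's implementation: an `asyncio.Queue` stands in for the Redis channel, and the names `run_langgraph` and `stream_to_client` are illustrative.

```python
import asyncio

async def run_langgraph(channel: asyncio.Queue, saved: list) -> None:
    """Background task: generate chunks, publish them, then persist."""
    chunks = ["Hel", "lo", "!"]
    for chunk in chunks:
        await channel.put(chunk)      # publish to the channel (Redis in GAIA)
    await channel.put(None)           # sentinel: stream finished
    saved.append("".join(chunks))     # conversation always saved on completion

async def stream_to_client(channel: asyncio.Queue, client_connected: bool) -> list:
    """Endpoint side: subscribe to the channel and forward chunks."""
    received = []
    while True:
        chunk = await channel.get()
        if chunk is None:
            break
        if client_connected:          # if the client drops, forwarding stops,
            received.append(chunk)    # but the producer keeps running
    return received

async def main() -> tuple:
    channel: asyncio.Queue = asyncio.Queue()
    saved: list = []
    producer = asyncio.create_task(run_langgraph(channel, saved))
    received = await stream_to_client(channel, client_connected=True)
    await producer
    return received, saved

received, saved = asyncio.run(main())
```

Because the producer writes to the channel rather than to the HTTP response, a client disconnect only stops the forwarding loop; generation and persistence still complete.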
Endpoints
Stream Chat Messages
Stream a chat message to the AI assistant.
Request Body
The user’s message to send to the AI assistant
Conversation ID to continue an existing conversation. If omitted, a new conversation is created.
Array of previous messages in the conversation. Used for context.
Each message object contains:
type (string) - “user” or “assistant”
response (string) - Message content
date (string) - ISO 8601 timestamp
Array of uploaded file IDs to include in the message context
Array of file metadata objects with id, name, type, and url
Selected workflow to execute (if applicable). Properties:
id (string) - Workflow ID
title (string) - Workflow title
Message being replied to. Properties:
id (string) - Message ID
content (string) - Message content
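A request body might be assembled as below. The top-level field names (`message`, `conversation_id`, `messages`, `file_ids`) are illustrative guesses; only the per-message keys (`type`, `response`, `date`) are documented above.

```python
import json
from datetime import datetime, timezone

def history_entry(role: str, content: str, when: datetime) -> dict:
    """Build one documented message object: type, response, date."""
    assert role in ("user", "assistant")
    return {
        "type": role,                     # "user" or "assistant"
        "response": content,              # message content
        "date": when.isoformat(),         # ISO 8601 timestamp
    }

# Hypothetical top-level shape; field names are assumptions.
body = {
    "message": "Summarize my open todos",     # the user's new message
    "conversation_id": None,                  # None/omitted -> new conversation
    "messages": [                             # prior context
        history_entry("user", "Hi", datetime(2024, 1, 1, tzinfo=timezone.utc)),
        history_entry("assistant", "Hello! How can I help?",
                      datetime(2024, 1, 1, 0, 0, 5, tzinfo=timezone.utc)),
    ],
    "file_ids": [],                           # uploaded file IDs
}
payload = json.dumps(body)
```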
Response Headers
Content-Type: text/event-stream - Server-Sent Events format
Cache-Control: no-cache - Prevents caching of the stream
Connection: keep-alive - Maintains the connection for streaming
X-Stream-Id - Unique stream ID for cancellation
Access-Control-Allow-Origin: * - CORS header for cross-origin requests
X-Accel-Buffering: no - Disables Nginx buffering for real-time streaming
Stream Events
The response streams data in Server-Sent Events format. Event types:
token - Text token from AI response
tool_call - Tool execution started
tool_result - Tool execution completed
error - Error occurred
metadata - Additional metadata
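A client can decode these frames with a minimal SSE parser like the sketch below. Real clients should prefer an SSE library (for example the browser's `EventSource`); the JSON payload shapes shown here are assumptions.

```python
import json

def parse_sse(raw: str) -> list:
    """Split an SSE text stream into (event, data) pairs."""
    events = []
    for frame in raw.strip().split("\n\n"):   # frames are blank-line separated
        event, data_lines = "message", []
        for line in frame.split("\n"):
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data_lines.append(line[len("data:"):].strip())
        events.append((event, "\n".join(data_lines)))
    return events

raw = (
    "event: token\ndata: {\"text\": \"Hel\"}\n\n"
    "event: token\ndata: {\"text\": \"lo\"}\n\n"
    "event: tool_call\ndata: {\"name\": \"search\"}\n\n"
)
events = parse_sse(raw)
# Concatenate token events to rebuild the assistant's text so far.
text = "".join(json.loads(d)["text"] for e, d in events if e == "token")
```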
Cancel Stream
Cancel a running chat stream.
Path Parameters
The stream ID to cancel (from the X-Stream-Id header)
Response
Whether the stream was successfully cancelled
The cancelled stream ID
Error message if cancellation failed
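A client might interpret the cancel response as sketched below. The JSON keys (`success`, `stream_id`, `error`) are assumed names matching the three fields described above, not confirmed from the API.

```python
import json

def describe_cancel(response_json: str) -> str:
    """Turn an assumed cancel-response payload into a status line."""
    r = json.loads(response_json)
    if r.get("success"):
        return f"Stream {r['stream_id']} cancelled"
    return f"Cancellation failed: {r.get('error', 'unknown error')}"

ok = describe_cancel('{"success": true, "stream_id": "abc123"}')
bad = describe_cancel('{"success": false, "error": "stream not found"}')
```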
Response Example
Stream Format Details
Token Events
Text tokens are streamed as they’re generated:
Tool Call Events
When the AI uses a tool:
Tool Result Events
When a tool completes:
Error Events
If an error occurs:
Stream Completion
The stream ends with:
Rate Limiting
Chat streaming is subject to rate limits:
- Free: 50 messages/hour
- Pro: 500 messages/hour
- Team: 2000 messages/hour
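Clients can stay under these caps with a sliding-window counter like the sketch below. This is client-side courtesy only; the server enforces the real limits.

```python
from collections import deque

class HourlyLimiter:
    """Sliding-window limiter: allow at most `limit` sends per window."""
    def __init__(self, limit: int, window: float = 3600.0):
        self.limit, self.window = limit, window
        self.sent: deque = deque()            # timestamps of recent sends

    def allow(self, now: float) -> bool:
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()               # drop sends older than the window
        if len(self.sent) < self.limit:
            self.sent.append(now)
            return True
        return False

free = HourlyLimiter(limit=50)                # Free tier: 50 messages/hour
# One send per second: the first 50 pass, the rest are held back.
results = [free.allow(now=float(i)) for i in range(60)]
```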
Background Execution
The chat stream continues executing in the background even if the client disconnects. This ensures:
- Conversations are always saved to MongoDB
- Tool executions complete successfully
- No data loss from network interruptions
Background tasks are tracked in Redis and automatically cleaned up after completion.
Error Handling
Redis Unavailable
If Redis is unavailable, the stream returns an error:
Client Disconnection
If the client disconnects, the stream continues in the background and the conversation is still saved to MongoDB on completion.
Cancellation
When a stream is cancelled:
Best Practices
Handle reconnections
Implement automatic reconnection with exponential backoff:
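A common backoff schedule is exponential growth with full jitter, as sketched below. The base delay, cap, and jitter strategy are illustrative choices, not API requirements; the seed is fixed here only for repeatability.

```python
import random

def backoff_delays(attempts: int, base: float = 1.0, cap: float = 30.0,
                   seed: int = 0) -> list:
    """Delay (seconds) before each reconnection attempt."""
    rng = random.Random(seed)
    delays = []
    for attempt in range(attempts):
        delay = min(cap, base * (2 ** attempt))   # 1, 2, 4, ... capped at 30
        delays.append(rng.uniform(0, delay))      # full jitter spreads reconnects
    return delays

delays = backoff_delays(6)
```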
Process events asynchronously
Process stream events without blocking the UI:
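One way to keep event handling off the read path is to hand events to a consumer task through a queue, as in this sketch. The reader/handler split is an illustrative pattern; event names match the stream-event types documented earlier.

```python
import asyncio

async def reader(queue: asyncio.Queue) -> None:
    """Stand-in for the SSE read loop: push events as they arrive."""
    for event in [("token", "Hi"), ("tool_call", "search"), ("token", "!")]:
        await queue.put(event)
    await queue.put(None)                     # end-of-stream sentinel

async def handler(queue: asyncio.Queue) -> str:
    """Consume events without ever blocking the reader."""
    text = ""
    while (event := await queue.get()) is not None:
        kind, data = event
        if kind == "token":
            text += data                      # e.g. append to the UI here
    return text

async def main() -> str:
    queue: asyncio.Queue = asyncio.Queue()
    read_task = asyncio.create_task(reader(queue))
    text = await handler(queue)
    await read_task
    return text

text = asyncio.run(main())
```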
Track stream state
Maintain stream state for UI updates:
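A small state machine keeps UI updates consistent. The states and transitions below are illustrative client-side choices; the API itself only defines the event types listed earlier.

```python
# Legal transitions for a client-side stream lifecycle (assumed states).
VALID = {
    "idle": {"connecting"},
    "connecting": {"streaming", "error"},
    "streaming": {"done", "error", "cancelled"},
    "error": {"connecting"},                  # allow reconnect after errors
    "done": set(),
    "cancelled": set(),
}

class StreamState:
    def __init__(self):
        self.state = "idle"

    def to(self, new: str) -> bool:
        if new in VALID[self.state]:
            self.state = new                  # legal transition
            return True
        return False                          # ignore illegal transitions

s = StreamState()
steps = [s.to(x) for x in ["connecting", "streaming", "done", "streaming"]]
```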
Provide user feedback
Show visual indicators for different stream states:
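For example, each documented event type can map to a user-facing status label; the wording and the mapping itself are illustrative UI choices.

```python
# Status text per stream event type (labels are illustrative).
STATUS = {
    "token": "Assistant is typing…",
    "tool_call": "Running a tool…",
    "tool_result": "Tool finished",
    "error": "Something went wrong",
    "metadata": "",                           # metadata needs no indicator
}

def indicator(event: str) -> str:
    return STATUS.get(event, "")              # unknown events show nothing
```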
Next Steps
Todos API
Manage tasks and projects
Workflows API
Automate tasks with workflows