
Send Message (Non-Streaming)

curl -X POST "https://api.example.com/chat/prompt" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "What are the benefits of microservices architecture?",
    "chat_id": "550e8400-e29b-41d4-a716-446655440000",
    "generate_title": true
  }'
Sends a message to the AI assistant and returns the complete response once generation finishes. For real-time streaming responses, use the streaming endpoint.

Method & Path

POST /chat/prompt

Authentication

Requires bearer token authentication via the Authorization header.

Request Body

message
string
required
The user’s message or question to send to the assistant
chat_id
string
ID of existing conversation to continue. If omitted or null, a new conversation is created automatically.
generate_title
boolean
default:true
Whether to auto-generate a conversation title from the first message. Only applies to new conversations or those with default titles.
provider_id
string
Optional LLM provider ID that overrides the default provider for this message
model_id
string
Optional model ID to use with the specified provider. Required if provider_id is specified.
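
A minimal sketch of building a valid request body for this endpoint, enforcing the rule that model_id must accompany provider_id. The function name is hypothetical; only the field names come from the table above.

```python
def build_prompt_payload(message, chat_id=None, generate_title=True,
                         provider_id=None, model_id=None):
    """Build the JSON body for POST /chat/prompt.

    Enforces the documented constraint: model_id is required
    whenever provider_id is specified.
    """
    if provider_id is not None and model_id is None:
        raise ValueError("model_id is required when provider_id is specified")
    payload = {"message": message, "generate_title": generate_title}
    if chat_id is not None:
        payload["chat_id"] = chat_id  # omit to start a new conversation
    if provider_id is not None:
        payload["provider_id"] = provider_id
        payload["model_id"] = model_id
    return payload
```

Omitting chat_id (rather than sending null) keeps the payload minimal; the server treats both the same way and creates a new conversation.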

Response

answer
string
The AI assistant’s complete response to the user’s message
chat_id
string
The conversation ID (either the provided ID or newly created one)
success
boolean
Only present in error cases. When false, indicates the request failed.
message
string
Error message if the request failed (e.g., validation errors, processing errors)

Success Response Example

{
  "answer": "Microservices architecture offers several key benefits:\n\n1. **Scalability**: Individual services can be scaled independently based on demand.\n2. **Flexibility**: Teams can use different technologies for different services.\n3. **Resilience**: Failure in one service doesn't bring down the entire system.\n4. **Faster deployment**: Services can be deployed independently without affecting others.\n5. **Better organization**: Clear boundaries between services align with business capabilities.",
  "chat_id": "550e8400-e29b-41d4-a716-446655440000"
}

Validation Error Response

{
  "success": false,
  "message": "Your message contains prohibited content. Please rephrase and try again."
}

Processing Error Response

{
  "success": false,
  "message": "An error occurred while processing the prompt: Model API timeout"
}
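
Because error bodies carry success: false while success bodies carry answer and chat_id, a client can branch on the success field. A minimal sketch (the function name is hypothetical; the field names are those shown in the examples above):

```python
def parse_prompt_response(body):
    """Return (answer, chat_id) from a /chat/prompt response body,
    or raise if the body is a validation or processing error."""
    if body.get("success") is False:
        raise RuntimeError(body.get("message", "Unknown error"))
    return body["answer"], body["chat_id"]
```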

Error Codes

  • 401 Unauthorized: Missing or invalid authentication token
  • 404 Not Found: Specified chat_id does not exist or user does not have access
  • 422 Unprocessable Entity: Invalid request body format
  • 500 Internal Server Error: Processing or database error

Features

  • Automatic conversation creation: When chat_id is omitted or null, the system automatically creates a new conversation and returns its ID in the response. This allows seamless conversation initiation.
  • Guardrails: Messages are validated against configurable guardrails before processing. If a message contains prohibited content, the request is rejected with a validation error.
  • Response caching: Frequently asked questions may be served from cache for faster response times. Cached responses are indistinguishable from newly generated ones.
  • Provider override: Override the default LLM provider and model on a per-message basis using the provider_id and model_id parameters.

Retrieve Messages

curl -X GET "https://api.example.com/chat/messages/550e8400-e29b-41d4-a716-446655440000" \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -G \
  -d "limit=50" \
  -d "offset=0"
Retrieves conversation details and paginated message history, sorted chronologically from oldest to newest.

Method & Path

GET /chat/messages/{chat_id}

Authentication

Requires bearer token authentication. Users can only access their own conversations.

Path Parameters

chat_id
string
required
Unique identifier of the conversation

Query Parameters

limit
integer
default:50
Maximum number of messages to return per page
offset
integer
default:0
Number of messages to skip for pagination

Response

error
boolean
Indicates whether an error occurred
id
string
Conversation ID
user_id
string
ID of the user who owns this conversation
title
string
Conversation title
updated_at
string
ISO 8601 timestamp of last conversation update
messages
array
Array of message objects, sorted chronologically (oldest first)
total
integer
Total number of messages in the conversation
limit
integer
Limit used for this request
offset
integer
Offset used for this request
has_more
boolean
Whether more messages are available beyond the current page

Success Response Example

{
  "error": false,
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "user_id": "770e8400-e29b-41d4-a716-446655440002",
  "title": "Microservices architecture discussion",
  "updated_at": "2026-03-01T10:30:00Z",
  "messages": [
    {
      "id": "880e8400-e29b-41d4-a716-446655440003",
      "chat_id": "550e8400-e29b-41d4-a716-446655440000",
      "human": "What are the benefits of microservices architecture?",
      "bot": "Microservices architecture offers several key benefits:\n\n1. **Scalability**: Individual services can be scaled independently...",
      "created_at": "2026-03-01T10:25:00Z"
    },
    {
      "id": "990e8400-e29b-41d4-a716-446655440004",
      "chat_id": "550e8400-e29b-41d4-a716-446655440000",
      "human": "What are the challenges?",
      "bot": "While microservices offer many benefits, they also come with challenges:\n\n1. **Complexity**: Distributed systems are inherently more complex...",
      "created_at": "2026-03-01T10:28:00Z"
    }
  ],
  "total": 4,
  "limit": 50,
  "offset": 0,
  "has_more": false
}

Error Response Example

{
  "error": true,
  "message": "Chat messages could not be retrieved: Chat not found"
}

Error Codes

  • 401 Unauthorized: Missing or invalid authentication token
  • 404 Not Found: Conversation does not exist, is archived, or user does not have access
  • 500 Internal Server Error: Database or server error
Messages are returned in chronological order (oldest first) to facilitate building conversation UIs. Use pagination for conversations with many messages to optimize performance.
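
A sketch of walking all pages with the limit/offset/has_more fields above. The fetch_page callable is a placeholder for whatever HTTP client issues the GET request; only the response fields come from this reference.

```python
def iter_messages(fetch_page, limit=50):
    """Yield every message oldest-first by paging through
    GET /chat/messages/{chat_id}.

    fetch_page(limit, offset) is assumed to return the parsed
    response body documented above (messages, has_more, ...).
    """
    offset = 0
    while True:
        page = fetch_page(limit, offset)
        yield from page["messages"]
        if not page["has_more"]:
            break  # no messages beyond the current page
        offset += limit
```

Advancing offset by limit (rather than by len(messages)) matches the documented pagination model, where each page skips exactly offset messages.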
