Overview
The Messages API enables AI-powered conversations within projects. Send natural language messages and receive AI responses with tool-calling capabilities.
Send Message
Send a message to the AI assistant in a conversation
Request Body
The ID of the conversation to send the message to
The message content from the user
Response
Always true on successful message creation
The Trigger.dev background job run ID for tracking message processing
The ID of the created assistant message (initially empty, populated asynchronously)
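Taken together, the response fields above can be modeled as a TypeScript interface. This is an illustrative sketch based on the field descriptions, not generated types from the API:

```typescript
// Shape of a successful POST /api/messages response, per the fields above.
// Illustrative only; field names follow the documentation.
interface SendMessageResponse {
  success: true;     // always true on successful message creation
  runId: string;     // Trigger.dev run ID for tracking message processing
  messageId: string; // assistant message ID; content is populated asynchronously
}

const example: SendMessageResponse = {
  success: true,
  runId: 'run_1234567890abcdef',
  messageId: 'k27b9c1d2e3f4g5h6i7j8k9',
};
```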
Example Request
curl -X POST https://your-domain.com/api/messages \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_JWT_TOKEN" \
-d '{
"conversationId": "k17a8b9c0d1e2f3g4h5i6j7",
"message": "Create a React component for a todo list"
}'
const response = await fetch('/api/messages', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${token}`
  },
  body: JSON.stringify({
    conversationId: 'k17a8b9c0d1e2f3g4h5i6j7',
    message: 'Create a React component for a todo list'
  })
});

const { messageId, runId } = await response.json();
Example Response
Success (200):
{
  "success": true,
  "runId": "run_1234567890abcdef",
  "messageId": "k27b9c1d2e3f4g5h6i7j8k9"
}
Not found (404):
{
  "error": "Conversation not found"
}
Conflict (409):
{
  "error": "A message is already being processed",
  "messageId": "k27b9c1d2e3f4g5h6i7j8k9"
}
Error Codes
400 - Invalid request body
401 - Unauthorized
404 - Conversation not found
409 - Another message is already being processed
500 - Internal server error
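A client wrapper can map these status codes to a typed result instead of throwing on every failure. The sketch below follows the endpoint and field names from the examples above; the result type and the injectable `fetchFn` parameter are hypothetical conveniences, not part of the API:

```typescript
// Hypothetical typed wrapper around POST /api/messages.
// Maps the documented status codes (200, 409, others) to distinct result kinds.
type SendResult =
  | { kind: 'ok'; runId: string; messageId: string }
  | { kind: 'busy'; messageId: string } // 409: a message is already in flight
  | { kind: 'error'; status: number; message: string };

async function sendMessage(
  token: string,
  conversationId: string,
  message: string,
  fetchFn: typeof fetch = fetch, // injectable for testing
): Promise<SendResult> {
  const res = await fetchFn('/api/messages', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${token}`,
    },
    body: JSON.stringify({ conversationId, message }),
  });
  const body = await res.json();
  if (res.ok) {
    return { kind: 'ok', runId: body.runId, messageId: body.messageId };
  }
  if (res.status === 409) {
    // The 409 body includes the messageId of the in-flight message.
    return { kind: 'busy', messageId: body.messageId };
  }
  return { kind: 'error', status: res.status, message: body.error };
}
```

Handling 409 explicitly lets the UI offer "cancel the current message" rather than surfacing a generic error.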
Messages are processed asynchronously via Trigger.dev background jobs. The response is immediate, but the AI response is populated over time. Use Convex real-time queries to watch for updates.
Cancel Message
Cancel a message that is currently being processed
Request Body
The ID of the message to cancel
Response
Always true on successful cancellation
Example Request
curl -X DELETE https://your-domain.com/api/messages \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_JWT_TOKEN" \
-d '{"messageId": "k27b9c1d2e3f4g5h6i7j8k9"}'
const response = await fetch('/api/messages', {
  method: 'DELETE',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${token}`
  },
  body: JSON.stringify({
    messageId: 'k27b9c1d2e3f4g5h6i7j8k9'
  })
});

const { success } = await response.json();
Example Response
{
  "success": true
}
Error Codes
400 - Invalid request body
401 - Unauthorized
500 - Internal server error
Canceling a message stops the background job and marks the message as “cancelled”. This cannot be undone.
Message Schema
Messages in Polaris IDE follow this structure:
messages: {
  _id: Id<"messages">,
  conversationId: Id<"conversations">,
  projectId: Id<"projects">,
  role: "user" | "assistant",
  content: string,
  status?: "processing" | "completed" | "cancelled" | "failed",
  triggerRunId?: string,
  toolCalls?: Array<{
    id: string,
    name: string,
    args: any,
    result?: any
  }>
}
Message Lifecycle
User sends message via POST /api/messages
User message created with role: "user"
Assistant message created with role: "assistant", content: "", status: "processing"
Background job triggered via Trigger.dev
AI processes message and updates content in real-time
Tool calls executed (if AI requests file operations, etc.)
Message completed with status: "completed"
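The lifecycle above means a client that cannot subscribe to updates has to wait for the assistant message to leave the "processing" state. A minimal polling sketch, where `getStatus` stands in for whatever lookup you actually use (e.g. a Convex query against the message document):

```typescript
type MessageStatus = 'processing' | 'completed' | 'cancelled' | 'failed';

// Polls getStatus until the message reaches a terminal state or the
// polling budget runs out. getStatus is a hypothetical stand-in for
// your real lookup (e.g. a Convex query by message ID).
async function waitForCompletion(
  getStatus: () => Promise<MessageStatus>,
  { intervalMs = 500, maxAttempts = 120 } = {},
): Promise<MessageStatus> {
  for (let i = 0; i < maxAttempts; i++) {
    const status = await getStatus();
    if (status !== 'processing') return status; // completed, cancelled, or failed
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return 'processing'; // still running after the polling budget
}
```

In practice the Convex subscription shown later in this page is preferable; polling is only a fallback for non-reactive clients.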
Message Status
Status | Description
processing | AI is currently generating the response
completed | Response has been fully generated
cancelled | User cancelled the message
failed | An error occurred during processing
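For display purposes the four statuses map naturally onto short UI labels. A small illustrative helper (the label strings are arbitrary choices, not part of the API):

```typescript
type MessageStatus = 'processing' | 'completed' | 'cancelled' | 'failed';

// Maps each documented message status to a short UI label.
// Exhaustive switch: TypeScript flags any status left unhandled.
function statusLabel(status: MessageStatus): string {
  switch (status) {
    case 'processing': return 'Typing...';
    case 'completed': return 'Done';
    case 'cancelled': return 'Cancelled';
    case 'failed': return 'Failed';
  }
}
```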
Tool Calls
AI messages can include tool calls for file operations:
{
  "_id": "k27b9c1d2e3f4g5h6i7j8k9",
  "role": "assistant",
  "content": "I'll create a React component for you.",
  "status": "completed",
  "toolCalls": [
    {
      "id": "call_123",
      "name": "write_file",
      "args": {
        "path": "src/components/TodoList.jsx",
        "content": "import React from 'react';\n\nexport default function TodoList() { ... }"
      },
      "result": {
        "success": true,
        "fileId": "k37c1d2e3f4g5h6i7j8k9l0"
      }
    }
  ]
}
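To surface tool calls in a transcript or log, the `toolCalls` array can be flattened into one line per call. An illustrative formatter; the field names follow the schema above, while the summary format (first string argument plus an ok/failed/pending marker) is an arbitrary choice:

```typescript
interface ToolCall {
  id: string;
  name: string;
  args: Record<string, unknown>;
  result?: { success?: boolean; [key: string]: unknown };
}

// Renders one log line per tool call, e.g. "write_file(src/App.jsx) -> ok".
// Uses the first string-valued arg (typically a path) as a short summary.
function formatToolCalls(calls: ToolCall[]): string[] {
  return calls.map((call) => {
    const argSummary = Object.values(call.args)
      .filter((v): v is string => typeof v === 'string')
      .slice(0, 1)
      .join('');
    const outcome =
      call.result === undefined ? 'pending' : call.result.success ? 'ok' : 'failed';
    return `${call.name}(${argSummary}) -> ${outcome}`;
  });
}
```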
Streaming Responses
While messages are processed asynchronously, the frontend uses Convex real-time subscriptions to receive updates as the AI generates content. This provides a streaming-like experience without server-sent events.
Watch Message Updates
import { useQuery } from 'convex/react';
import { api } from './convex/_generated/api';

function ConversationView({ conversationId }) {
  const messages = useQuery(api.messages.getMessages, { conversationId });

  return (
    <div>
      {messages?.map(msg => (
        <div key={msg._id} className={msg.role}>
          <p>{msg.content}</p>
          {msg.status === 'processing' && <span>Typing...</span>}
          {/* Show tool calls */}
          {msg.toolCalls?.map(tool => (
            <div key={tool.id}>
              <code>{tool.name}({JSON.stringify(tool.args)})</code>
            </div>
          ))}
        </div>
      ))}
    </div>
  );
}
Conversations
Conversations are containers for messages:
conversations : {
_id : Id < "conversations" > ,
projectId : Id < "projects" > ,
title : string ,
updatedAt : number
}
Create Conversation
Use Convex mutation to create a conversation:
import { useMutation } from 'convex/react';
import { api } from './convex/_generated/api';

function MyComponent({ projectId }) {
  const createConversation = useMutation(api.conversations.create);

  const handleNewChat = async () => {
    const conversationId = await createConversation({
      projectId,
      title: 'New Conversation'
    });
  };
}
Get Conversations
import { useQuery } from 'convex/react';
import { api } from './convex/_generated/api';

function ConversationList({ projectId }) {
  const conversations = useQuery(
    api.conversations.getConversations,
    { projectId }
  );

  return (
    <ul>
      {conversations?.map(conv => (
        <li key={conv._id}>{conv.title}</li>
      ))}
    </ul>
  );
}
Complete Example
Complete Conversation Flow
import { useMutation, useQuery } from 'convex/react';
import { api } from './convex/_generated/api';
import { useState } from 'react';

function AIConversation({ projectId }) {
  const [message, setMessage] = useState('');
  const [conversationId, setConversationId] = useState(null);

  const createConversation = useMutation(api.conversations.create);
  const messages = useQuery(
    api.messages.getMessages,
    conversationId ? { conversationId } : 'skip'
  );

  // Send message via API (`token` is the JWT from your auth layer)
  const sendMessage = async () => {
    // Create conversation if needed
    let convId = conversationId;
    if (!convId) {
      convId = await createConversation({
        projectId,
        title: 'New Chat'
      });
      setConversationId(convId);
    }

    // Send message
    const response = await fetch('/api/messages', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${token}`
      },
      body: JSON.stringify({
        conversationId: convId,
        message
      })
    });

    setMessage('');
  };

  return (
    <div>
      {/* Messages */}
      <div className="messages">
        {messages?.map(msg => (
          <div key={msg._id} className={msg.role}>
            <p>{msg.content}</p>
            {msg.status === 'processing' && <span>...</span>}
          </div>
        ))}
      </div>

      {/* Input */}
      <div>
        <input
          value={message}
          onChange={(e) => setMessage(e.target.value)}
          placeholder="Ask AI anything..."
        />
        <button onClick={sendMessage}>Send</button>
      </div>
    </div>
  );
}
AI Providers
Polaris IDE supports multiple AI providers with automatic fallback:
Primary: Moonshot AI Kimi K2.5 via OpenRouter (requires OPENROUTER_API_KEY)
Fallback: Cerebras GLM-4.7 (requires CEREBRAS_API_KEY)
The system automatically falls back to Cerebras if OpenRouter is unavailable.
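The fallback behavior can be pictured as a try-in-order loop over providers. A minimal sketch; the `Provider` interface here is hypothetical and not Polaris IDE's actual internals:

```typescript
// Hypothetical provider abstraction: each entry wraps one AI backend.
interface Provider {
  name: string;
  complete: (prompt: string) => Promise<string>;
}

// Tries each provider in order and returns the first successful completion.
// If every provider fails, rethrows with the last error for diagnostics.
async function completeWithFallback(
  providers: Provider[],
  prompt: string,
): Promise<string> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await provider.complete(prompt);
    } catch (err) {
      // e.g. OpenRouter unavailable -> fall through to Cerebras
      lastError = err;
    }
  }
  throw new Error(`All providers failed: ${String(lastError)}`);
}
```

Ordering the array `[openRouter, cerebras]` reproduces the documented primary/fallback behavior.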
Next Steps
AI Suggestions Learn about code suggestions and quick edit
Projects API Manage projects and files