
Overview

Polaris IDE’s conversation system provides an intelligent AI assistant integrated directly into your IDE. The assistant can understand your code, execute tools to modify files, run commands, and provide contextual help through a real-time streaming interface.

Message history and storage

Conversations are persisted in a Convex database with the following structure:
  • Conversations: Each project has multiple conversations
  • Messages: Each conversation contains user and assistant messages
  • Tool calls: Messages can include tool executions (file operations, searches, etc.)
  • Status tracking: Messages track processing state (pending, processing, completed, failed)
All conversations are project-scoped, meaning each project maintains its own conversation history.
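
The exact Convex schema is not shown in this guide, but a minimal sketch of the structure described above might look like the following (table names, field names, and indexes are all assumptions for illustration):

```typescript
// convex/schema.ts — illustrative sketch only; the real schema is not shown in this doc
import { defineSchema, defineTable } from "convex/server"
import { v } from "convex/values"

export default defineSchema({
  // Each project has multiple conversations
  conversations: defineTable({
    projectId: v.string(),              // scope: the owning project
    title: v.optional(v.string()),
  }).index("by_project", ["projectId"]),

  // Each conversation contains user and assistant messages
  messages: defineTable({
    conversationId: v.id("conversations"),
    role: v.union(v.literal("user"), v.literal("assistant")),
    content: v.string(),
    status: v.optional(v.string()),     // "pending" | "processing" | "completed" | "failed" | "cancelled"
    toolCalls: v.optional(v.any()),     // tool executions attached to the message
    triggerRunId: v.optional(v.string()), // for cancellation support
  }).index("by_conversation", ["conversationId"]),
})
```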

Message flow

1. User sends message: The user types a message in the conversation panel and submits it.
2. Message creation: The API creates both the user message and an empty assistant message with “processing” status.
3. Background job trigger: A Trigger.dev task is launched to process the assistant’s response asynchronously.
4. Streaming response: The AI generates a response with tool calls, streamed back in real time with a 100ms throttle.
5. Message completion: Once complete, the message status updates to “completed” and the final content is saved.
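
The steps above can be simulated end-to-end with a small in-memory sketch (the types and method names here are illustrative, not the real Convex API):

```typescript
// Illustrative in-memory simulation of the message flow; not the actual implementation.
type Status = "pending" | "processing" | "completed" | "failed" | "cancelled"
interface StoredMessage { id: string; role: "user" | "assistant"; content: string; status: Status }

class MessageStore {
  private messages: StoredMessage[] = []
  private nextId = 1

  // Step 2: create the user message plus an empty assistant placeholder in "processing" state.
  createPair(userContent: string): { userId: string; assistantId: string } {
    const userId = `msg_${this.nextId++}`
    const assistantId = `msg_${this.nextId++}`
    this.messages.push({ id: userId, role: "user", content: userContent, status: "completed" })
    this.messages.push({ id: assistantId, role: "assistant", content: "", status: "processing" })
    return { userId, assistantId }
  }

  // Step 4: each streamed update overwrites the assistant message with the full text so far.
  stream(id: string, fullText: string): void {
    const m = this.messages.find((m) => m.id === id)
    if (m) m.content = fullText
  }

  // Step 5: mark the assistant message complete and save the final content.
  complete(id: string, finalText: string): void {
    const m = this.messages.find((m) => m.id === id)
    if (m) { m.content = finalText; m.status = "completed" }
  }

  get(id: string): StoredMessage | undefined {
    return this.messages.find((m) => m.id === id)
  }
}
```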

Streaming responses

The conversation system uses real-time streaming to display AI responses as they’re generated:

Streaming implementation

// Stream throttling (100ms minimum between updates)
const STREAM_THROTTLE_MS = 100
let lastStreamUpdate = 0

onTextChunk: async (chunk: string, fullText: string) => {
  const now = Date.now()
  if (now - lastStreamUpdate >= STREAM_THROTTLE_MS) {
    lastStreamUpdate = now
    await convex.mutation(api.system.streamMessageContent, {
      internalKey,
      messageId,
      content: fullText,
      isComplete: false
    })
  }
}
Streaming updates are throttled to 100ms intervals to prevent database overload while maintaining a smooth user experience.
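
The throttle logic above can be factored into a small reusable gate. This is an illustrative helper, not the actual implementation; the injectable clock exists only to make it testable:

```typescript
// Returns a function that yields true only when at least intervalMs has
// elapsed since the last accepted call (mirrors the 100ms throttle above).
function createThrottleGate(intervalMs: number, now: () => number = Date.now) {
  let last = -Infinity
  return (): boolean => {
    const t = now()
    if (t - last >= intervalMs) {
      last = t
      return true   // caller should persist this update
    }
    return false    // too soon; skip this update
  }
}
```

Note that the completion write (the one with `isComplete: true`) should bypass any such gate, so the final content is never dropped by throttling.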

Context management

The AI assistant receives comprehensive context for each message:

System prompt

The assistant is initialized with a detailed system prompt that explains:
  • Its identity as “Polaris, an AI coding assistant”
  • Available tools and their purposes
  • When to use each tool category
  • Best practices for code generation

Message context

interface MessageContext {
  messageId: string           // Current message being processed
  conversationId: string      // Conversation this message belongs to
  projectId: string           // Project scope
  messages: Message[]         // Full conversation history
}
interface Message {
  role: "user" | "assistant"
  content: string
  toolCalls?: ToolCall[]
  toolResults?: ToolResult[]
  status?: "pending" | "processing" | "completed" | "failed" | "cancelled"
  triggerRunId?: string
}
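
The source does not show how stored messages are prepared for the model, but a plausible sketch maps the stored history into plain role/content pairs, dropping failed or cancelled turns (this filtering policy is an assumption for illustration):

```typescript
// Illustrative only: convert stored history into model-ready messages.
interface HistoryMessage {
  role: "user" | "assistant"
  content: string
  status?: "pending" | "processing" | "completed" | "failed" | "cancelled"
}

function toModelHistory(messages: HistoryMessage[]): { role: string; content: string }[] {
  return messages
    .filter((m) => m.status === undefined || m.status === "completed") // skip failed/cancelled turns
    .map(({ role, content }) => ({ role, content }))
}
```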

Tool calling system

The conversation AI has access to powerful tools for interacting with your project:

Available tool categories

File management

Read, write, delete files, list directories, get project structure

LSP (Language Server)

Find symbols, get references, diagnostics, go to definition

Code search

Regex search, AST-aware search, find files by pattern

Context & relevance

Find relevant files using import analysis and symbol matching

Terminal

Execute safe commands (npm, git, node, tsc, eslint, etc.)
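
A command allowlist like the one described for the terminal tools can be sketched as follows. The actual allowlist and parsing rules in Polaris are not shown in this doc; the binaries below are just the ones the text mentions:

```typescript
// Illustrative allowlist check: only the first token (the binary) is inspected.
const SAFE_BINARIES = new Set(["npm", "git", "node", "tsc", "eslint"])

function isSafeCommand(command: string): boolean {
  const binary = command.trim().split(/\s+/)[0]
  return SAFE_BINARIES.has(binary)
}
```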

Tool execution flow

// Tools are created with project scope
const fileTools = createFileTools(projectId, internalKey)
const lspTools = createLSPTools(projectId, internalKey)
const searchTools = createSearchTools(projectId, internalKey)
const terminalTools = createTerminalTools(projectId, internalKey)
const contextTools = createContextTools(projectId, internalKey)

const tools = {
  ...fileTools,
  ...lspTools,
  ...searchTools,
  ...terminalTools,
  ...contextTools
}

// AI SDK handles tool calling automatically
const response = await streamTextWithToolsPreferCerebras({
  system: SYSTEM_PROMPT,
  messages: context.messages,
  tools,
  maxSteps: 10,
  maxTokens: 2000
})

Trigger.dev background jobs integration

Conversations use Trigger.dev for asynchronous message processing:

Why Trigger.dev?

  • Long-running tasks - AI responses can take several seconds
  • Reliable execution - Automatic retries and error handling
  • Cancellation support - Users can cancel in-progress messages
  • Status tracking - Monitor job progress in real-time

Background job lifecycle

1. Trigger task: The API triggers the “process-message” task with the message ID.
2. Store run ID: The Trigger.dev run ID is saved to the message for cancellation support.
3. Process message: The background task loads context and executes AI generation with tools.
4. Stream updates: The task streams content and tool calls back to the database in real time.
5. Complete or fail: The task updates the final message status and content.

import { tasks } from "@trigger.dev/sdk/v3"

const handle = await tasks.trigger("process-message", {
  messageId: assistantMessageId
})

await convex.mutation(api.system.updateMessageTriggerRunId, {
  internalKey,
  messageId: assistantMessageId,
  triggerRunId: handle.id
})

Message status tracking

Messages progress through these states:
Status        Description                                 User visible
pending       Message created, waiting to be processed    Loading indicator
processing    AI is generating response                   Streaming text appears
completed     Response fully generated                    Full message visible
failed        Error occurred during processing            Error message shown
cancelled     User cancelled the message                  Cancelled indicator
Concurrent message limit: Only one message can be processed per project at a time. New messages return 409 Conflict if a message is already processing.

API implementation

The messages endpoint is located at /api/messages:
POST /api/messages

{
  "conversationId": "kg2h8...",
  "message": "Add a new React component for the header"
}

// Response:
{
  "success": true,
  "runId": "run_abc123",
  "messageId": "kg2h9..."
}
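
A client-side helper for this endpoint might look like the sketch below. The request and response shapes mirror the example above; the helper itself and its injectable fetch are illustrative, not part of the documented API:

```typescript
// Illustrative client for POST /api/messages. fetchImpl is injectable for testing.
interface SendMessageResponse { success: boolean; runId: string; messageId: string }

async function sendMessage(
  conversationId: string,
  message: string,
  fetchImpl: typeof fetch = fetch
): Promise<SendMessageResponse> {
  const res = await fetchImpl("/api/messages", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ conversationId, message }),
  })
  // 409 means another message is already processing for this project.
  if (res.status === 409) throw new Error("A message is already processing for this project")
  if (!res.ok) throw new Error(`Request failed: ${res.status}`)
  return res.json() as Promise<SendMessageResponse>
}
```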

Performance metrics

The conversation system tracks performance metrics:
const metricsStartTime = Date.now()
let timeToFirstToken: number | null = null
let firstChunkReceived = false

onTextChunk: async (chunk: string, fullText: string) => {
  if (!firstChunkReceived) {
    firstChunkReceived = true
    timeToFirstToken = Date.now() - metricsStartTime
    console.log(`[Metrics] Time to first token: ${timeToFirstToken}ms`)
  }
}

// Log final metrics
const totalResponseTime = Date.now() - metricsStartTime
console.log(`[Metrics] Total response time: ${totalResponseTime}ms`)

Error handling

The conversation system handles errors gracefully:
  • Processing errors - Message status set to “failed” with error message
  • Conversation not found - Returns 404 with clear error
  • Unauthorized access - Returns 403 for unauthenticated requests
  • Internal errors - Logged and returned as 500 with safe error message
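
The error cases above can be summarized as a mapping to HTTP responses. This is an illustrative sketch of the behavior described, not the real handler's code:

```typescript
// Illustrative: map the documented error cases to HTTP status + safe body.
type ApiError =
  | { kind: "not_found" }
  | { kind: "unauthorized" }
  | { kind: "internal"; detail: string }

function toHttpResponse(err: ApiError): { status: number; body: { error: string } } {
  switch (err.kind) {
    case "not_found":
      return { status: 404, body: { error: "Conversation not found" } }
    case "unauthorized":
      return { status: 403, body: { error: "Unauthorized" } }
    case "internal":
      console.error(err.detail)  // log the real cause server-side only
      return { status: 500, body: { error: "Internal server error" } }  // safe message to client
  }
}
```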

Source code reference

Implementation details:
  • API route: src/app/api/messages/route.ts:20
  • Background task: trigger/tasks/process-message.ts:53
  • System prompt: trigger/tasks/process-message.ts:17
  • Tool creation: trigger/tasks/process-message.ts:70
  • Streaming logic: trigger/tasks/process-message.ts:91
