OpenCouncil uses Anthropic’s Claude AI to automatically analyze council meeting transcripts, generating summaries, identifying subjects, classifying topics, and providing an interactive chat assistant for deeper exploration.

AI capabilities

Speaker summaries

Concise summaries of what each speaker said during their segments, with substantive vs. procedural classification.

Subject extraction

Automatic identification of agenda items with names, descriptions, speakers, votes, and decisions.

Topic classification

AI assigns topic labels to speaker segments for better organization and filtering.

Chat assistant

Interactive Q&A about meeting content with context-aware responses and citations.

How AI summarization works

The summarization process uses Claude with custom prompts and structured output:
1. Transcript preparation

The system collects the full meeting transcript with speaker identities, party affiliations, and role information. Utterances are grouped into speaker segments.
2. Context building

Context includes:
  • City information and municipality details
  • Person roster with roles and party affiliations
  • Topic taxonomy for classification
  • Administrative body type (council, committee, etc.)
  • Meeting date and metadata
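A minimal sketch of how this context might be assembled into a prompt preamble. The `SummarizationContext` shape, field names, and `renderContext` helper are illustrative assumptions, not OpenCouncil's actual types:

```typescript
// Illustrative context shape; field names are assumptions, not
// the actual OpenCouncil types.
interface SummarizationContext {
  cityName: string;
  bodyType: "council" | "committee";
  meetingDate: string; // ISO date
  people: { name: string; role: string; party: string }[];
  topics: string[];
}

// Render the context as a plain-text preamble for the prompt.
function renderContext(ctx: SummarizationContext): string {
  const roster = ctx.people
    .map((p) => `- ${p.name} (${p.role}, ${p.party})`)
    .join("\n");
  const taxonomy = ctx.topics.map((t) => `- ${t}`).join("\n");
  return [
    `City: ${ctx.cityName}`,
    `Body: ${ctx.bodyType}`,
    `Date: ${ctx.meetingDate}`,
    `People:\n${roster}`,
    `Topics:\n${taxonomy}`,
  ].join("\n");
}
```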
3. AI processing

Claude analyzes the transcript in batches, generating:
  • Speaker segment summaries (substantive vs. procedural)
  • Topic labels for each segment
  • Subject extraction with structured data
  • Discussion status for each utterance
  • Speaker contributions per subject
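Batch processing can be sketched as a simple chunking step. The helper and batch size below are assumptions; the production batching logic lives in the summarize task:

```typescript
// Generic chunking helper: split items into batches of at most `size`.
// The batch size used in production is an assumption.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```

Each batch is then sent to Claude in its own request, keeping individual prompts within the model's context limits.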
4. Data persistence

Results are stored in the database:
  • Summaries linked to speaker segments
  • Subjects with full metadata and relationships
  • Utterance discussion statuses
  • Topic labels for filtering
5. Notification creation

If enabled, notifications are created for users interested in the discussed subjects based on matching rules.

AI configuration

The AI system is configured through environment variables:
# Anthropic API
ANTHROPIC_API_KEY=sk-ant-api03-your-key

# Optional: Customize AI behavior
AI_MAX_TOKENS=8192
AI_TEMPERATURE=0
AI_MODEL=claude-sonnet-4-0
From src/lib/ai.ts:14-35
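These variables could be read into a config object along the following lines. This is a hedged sketch: the defaults mirror the snippet above, but the loader itself and the `AIConfig` field names are assumptions:

```typescript
// Sketch of loading the env vars above; the actual loader in
// src/lib/ai.ts may differ.
interface AIConfig {
  apiKey?: string;
  maxTokens: number;
  temperature: number;
  model: string;
}

function loadAIConfig(env: Record<string, string | undefined>): AIConfig {
  return {
    apiKey: env.ANTHROPIC_API_KEY,
    maxTokens: Number(env.AI_MAX_TOKENS ?? 8192),
    temperature: Number(env.AI_TEMPERATURE ?? 0),
    model: env.AI_MODEL ?? "claude-sonnet-4-0",
  };
}
```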

Speaker segment summaries

Claude generates summaries for each speaker’s contribution:
Speaker segments are classified as substantive or procedural.
Substantive - content related to policy, decisions, or issues:
{
  "speakerSegmentId": "segment-123",
  "summary": "Ο δήμαρχος παρουσίασε την πρόταση για νέο ποδηλατόδρομο...",
  "type": "SUBSTANTIVE",
  "topicLabels": ["Μεταφορές", "Υποδομές"]
}
Procedural - Administrative or process-related content:
{
  "speakerSegmentId": "segment-456",
  "summary": "Η γραμματέας ανακοίνωσε την παρουσία των μελών...",
  "type": "PROCEDURAL",
  "topicLabels": []
}
Procedural summaries help filter out administrative content when users want to focus on substantive discussions.
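For example, a "hide procedural" toggle could be backed by a filter like this. The `SegmentSummary` shape mirrors the JSON examples above; the names are assumptions, not the actual schema:

```typescript
// Summary shape mirroring the JSON examples above (illustrative).
interface SegmentSummary {
  speakerSegmentId: string;
  summary: string;
  type: "SUBSTANTIVE" | "PROCEDURAL";
  topicLabels: string[];
}

// Keep only substantive segments, dropping administrative content.
function substantiveOnly(summaries: SegmentSummary[]): SegmentSummary[] {
  return summaries.filter((s) => s.type === "SUBSTANTIVE");
}
```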

Subject extraction

The AI identifies and structures meeting subjects:
Each subject includes:
interface Subject {
  agendaItemIndex: number;    // Position in agenda
  name: string;               // Short title
  description: string;        // Detailed summary
  topicId: string | null;     // Primary topic classification
  introducedById: string | null;  // Person who introduced it
  context: string | null;     // Additional background
  vote: Vote | null;          // Voting results if applicable
  decision: Decision | null;  // Final decision text
  speakerSegments: string[];  // Linked speaker segment IDs
  contributions: SpeakerContribution[];  // Summary per speaker
}
Subjects maintain relationships to speakers, topics, and decisions for rich querying.
For each subject, Claude generates per-speaker summaries:
interface SpeakerContribution {
  speakerId: string | null;      // Person ID
  speakerName: string | null;    // Display name for unknown speakers
  text: string;                  // Markdown with special reference links
}
References use special syntax:
  • [text](REF:UTTERANCE:id) - Link to specific utterance
  • [text](REF:PERSON:id) - Link to person profile
  • [text](REF:PARTY:id) - Link to party page
From src/lib/apiTypes.ts:131-135
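A minimal parser for this reference syntax could look like the following. The `Reference` type and `extractRefs` helper are illustrative, not the production renderer:

```typescript
// Reference kinds from the REF link syntax above.
type RefKind = "UTTERANCE" | "PERSON" | "PARTY";

interface Reference {
  text: string;
  kind: RefKind;
  id: string;
}

// Extract [text](REF:KIND:id) links from contribution markdown.
function extractRefs(markdown: string): Reference[] {
  const re = /\[([^\]]+)\]\(REF:(UTTERANCE|PERSON|PARTY):([^)]+)\)/g;
  const refs: Reference[] = [];
  for (const m of markdown.matchAll(re)) {
    refs.push({ text: m[1], kind: m[2] as RefKind, id: m[3] });
  }
  return refs;
}
```

A UI layer can then map each reference kind to the corresponding utterance anchor, person profile, or party page.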
Subjects are matched by agendaItemIndex to preserve IDs across re-summarization:
const subjectNameToIdMap = await saveSubjectsForMeeting(
  response.subjects,
  cityId,
  meetingId
);

// Matches existing subjects to avoid orphaning ES documents
// Returns map of API subject ID -> database subject ID
This ensures Elasticsearch documents aren’t orphaned when meetings are re-summarized.
From src/lib/tasks/summarize.ts:116-120

Discussion status tracking

Utterances are tagged with discussion context:
enum DiscussionStatus {
  ATTENDANCE = "ATTENDANCE",           // Roll call
  SUBJECT_DISCUSSION = "SUBJECT_DISCUSSION",  // Discussing a subject
  VOTE = "VOTE",                       // Voting on a subject
  OTHER = "OTHER"                      // General discussion
}
From src/lib/apiTypes.ts:138-143
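As an example, utterances can be tallied by status, e.g. to surface how much of a meeting was roll call versus voting. The `Utterance` shape here is simplified and the helper is an assumption:

```typescript
// Status values from the enum above.
type DiscussionStatus = "ATTENDANCE" | "SUBJECT_DISCUSSION" | "VOTE" | "OTHER";

// Simplified utterance shape for illustration.
interface Utterance {
  id: string;
  discussionStatus: DiscussionStatus;
}

// Count utterances per discussion status.
function countByStatus(utts: Utterance[]): Record<DiscussionStatus, number> {
  const counts: Record<DiscussionStatus, number> = {
    ATTENDANCE: 0,
    SUBJECT_DISCUSSION: 0,
    VOTE: 0,
    OTHER: 0,
  };
  for (const u of utts) counts[u.discussionStatus]++;
  return counts;
}
```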

AI chat assistant

The interactive chat feature provides context-aware answers:
import { aiChatStream } from '@/lib/ai';

const stream = await aiChatStream(
  systemPrompt,
  messages,
  {
    maxTokens: 4096,
    temperature: 0.3,  // Slightly creative for conversation
    enableWebSearch: true  // Optional for factual queries
  }
);

for await (const event of stream) {
  if (event.type === 'content_block_delta') {
    const textDelta = event.delta.text;
    // Stream text to UI
  }
}
From src/lib/ai.ts:358-390
Web search is useful for questions that require current information beyond the meeting transcript (e.g., “What is the current status of this proposal?”).

Prompt logging

Development mode logs all prompts for debugging:
function logPromptToFile(
  systemPrompt: string,
  messages: any[],
  config: AIConfig,
  metadata: Record<string, any> = {}
) {
  if (!config.logPrompts) return;
  
  const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
  const filename = path.join(config.promptsDir!, `prompt-${timestamp}.json`);
  
  fs.writeFileSync(filename, JSON.stringify({
    timestamp,
    systemPrompt,
    messages,
    metadata: {
      ...metadata,
      nodeEnv: process.env.NODE_ENV,
      maxTokens: config.maxTokens,
      model: config.model
    }
  }, null, 2));
  
  console.log(`[Dev] Prompt logged to ${filename}`);
}
From src/lib/ai.ts:40-81
Prompt logs are stored in logs/prompts/ and include full request/response data for debugging AI behavior.

Response continuation

Long responses are automatically continued:
if (response.stop_reason === "max_tokens") {
  console.log(`Claude stopped at max tokens: ${maxTokens}`);
  
  if (continuationAttempt >= maxContinuationAttempts) {
    console.log(`Reached max attempts (${maxContinuationAttempts})`);
    // Return partial result
    return { usage, result: extractAndParseJSON(responseContent) };
  }
  
  // Recursively continue with accumulated content
  const response2 = await aiChat(
    systemPrompt,
    userPrompt,
    (prefillSystemResponse + finalTextBlock.text).trim(),
    (prependToResponse + finalTextBlock.text).trim(),
    config,
    continuationAttempt + 1
  );
  
  // Combine usage stats
  return {
    usage: {
      input_tokens: response.usage.input_tokens + response2.usage.input_tokens,
      output_tokens: response.usage.output_tokens + response2.usage.output_tokens,
      // ...
    },
    result: response2.result
  };
}
From src/lib/ai.ts:285-327
Continuation attempts are limited to 3 by default to prevent infinite loops. Very long meetings may require larger maxTokens values.

JSON extraction and repair

The AI helper includes robust JSON parsing:
function extractAndParseJSON<T>(content: string): T {
  // Try direct parsing first
  try {
    return JSON.parse(content) as T;
  } catch (e) { /* continue to fixes */ }
  
  let fixedContent = content.trim();
  
  // Fix 1: Add missing opening brace
  if (fixedContent.match(/^"[^"]+"\s*:/)) {
    fixedContent = '{' + fixedContent;
  }
  
  // Fix 2: Add missing closing brace
  const openBraces = (fixedContent.match(/\{/g) || []).length;
  const closeBraces = (fixedContent.match(/\}/g) || []).length;
  if (openBraces > closeBraces) {
    fixedContent = fixedContent + '}';
  }
  
  // Fix 3: Remove trailing commas
  fixedContent = fixedContent.replace(/,(\s*[}\]])/g, '$1');
  
  // Fix 4: Extract from markdown code blocks
  const codeBlockMatch = fixedContent.match(/```(?:json)?\s*(\{[\s\S]*?\})\s*```/);
  if (codeBlockMatch) {
    fixedContent = codeBlockMatch[1];
  }
  
  return JSON.parse(fixedContent) as T;
}
From src/lib/ai.ts:135-199

Usage tracking

All AI calls return usage statistics:
interface ResultWithUsage<T> {
  result: T;
  usage: {
    input_tokens: number;
    output_tokens: number;
    cache_creation_input_tokens?: number;
    cache_read_input_tokens?: number;
    server_tool_use?: any;
    service_tier?: string;
  };
}

const { result, usage } = await aiChat<SummarizeResult>(
  systemPrompt,
  userPrompt
);

console.log(`Tokens used: ${usage.input_tokens} in, ${usage.output_tokens} out`);
From src/lib/ai.ts:11
Track usage stats to monitor costs and optimize prompt length for large meetings.
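A rough per-call cost estimate can be derived from the usage object. The per-million-token rates below are placeholders, not actual pricing; substitute current Anthropic rates for the model in use:

```typescript
// Subset of the usage object returned by AI calls.
interface Usage {
  input_tokens: number;
  output_tokens: number;
  cache_read_input_tokens?: number;
}

// Estimate USD cost from token counts. Rates are per million tokens
// and are placeholder values, not real pricing.
function estimateCostUSD(
  usage: Usage,
  ratesPerMTok = { input: 3, output: 15, cacheRead: 0.3 }
): number {
  const cached = usage.cache_read_input_tokens ?? 0;
  return (
    (usage.input_tokens * ratesPerMTok.input +
      usage.output_tokens * ratesPerMTok.output +
      cached * ratesPerMTok.cacheRead) /
    1_000_000
  );
}
```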

Performance optimization

Summarization uses several optimizations:
All database updates are batched into a single transaction:
const operations: any[] = [];

// Collect all operations
operations.push(prisma.summary.upsert({ /* ... */ }));
operations.push(prisma.topicLabel.upsert({ /* ... */ }));

// Execute together
await prisma.$transaction(operations);
This dramatically reduces transaction overhead.
Utterance discussion statuses are updated in parallel:
const updatePromises = validStatuses.map(async (status) => {
  try {
    await prisma.utterance.update({
      where: { id: status.utteranceId },
      data: { discussionStatus: status.status }
    });
  } catch (error) {
    console.error(`Failed to update ${status.utteranceId}:`, error);
    // Don't throw - continue with other updates
  }
});

await Promise.all(updatePromises);
From src/lib/tasks/summarize.ts:163-190

API reference

  • aiChat (async function): main AI chat function with structured output
  • aiChatStream (async function): streaming AI chat for real-time responses

Next steps

Transcription

Learn how transcripts are prepared for AI analysis

Notifications

See how summaries trigger notification creation

Search

Summaries improve search relevance and filtering

Chat assistant

Guide to using the AI chat assistant effectively
