Automatic meeting analysis with speaker summaries, subject extraction, and intelligent chat assistance powered by Claude
OpenCouncil uses Anthropic’s Claude AI to automatically analyze council meeting transcripts, generating summaries, identifying subjects, classifying topics, and providing an interactive chat assistant for deeper exploration.
The summarization process uses Claude with custom prompts and structured output:
**1. Transcript preparation**
The system collects the full meeting transcript with speaker identities, party affiliations, and role information. Utterances are grouped into speaker segments.
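The grouping step can be sketched as a pure function. The shapes below are simplified assumptions; the real transcript types carry more metadata (party, role, timestamps):

```typescript
// Simplified shapes for illustration only
interface Utterance { speakerId: string; text: string; }
interface SpeakerSegment { speakerId: string; utterances: string[]; }

// Merge consecutive utterances by the same speaker into one segment
function groupIntoSegments(utterances: Utterance[]): SpeakerSegment[] {
  const segments: SpeakerSegment[] = [];
  for (const u of utterances) {
    const last = segments[segments.length - 1];
    if (last && last.speakerId === u.speakerId) {
      last.utterances.push(u.text);
    } else {
      segments.push({ speakerId: u.speakerId, utterances: [u.text] });
    }
  }
  return segments;
}

const segments = groupIntoSegments([
  { speakerId: "mayor", text: "Good evening." },
  { speakerId: "mayor", text: "Let us begin." },
  { speakerId: "clerk", text: "Roll call." },
]);
// Two segments: both mayor utterances merged, then the clerk's
```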
**2. Context building**
Context includes:
- City information and municipality details
- Person roster with roles and party affiliations
- Topic taxonomy for classification
- Administrative body type (council, committee, etc.)
- Meeting date and metadata
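The assembled context can be imagined as a structured preamble prepended to the prompt. A minimal sketch, where the field names and the text layout are assumptions, not the real prompt builder:

```typescript
// Hypothetical context shape; the actual schema lives in the codebase
interface SummarizeContext {
  cityName: string;
  administrativeBodyType: "council" | "committee";
  meetingDate: string;
  people: { name: string; role: string; party: string }[];
  topics: string[];
}

// Render the context as a plain-text preamble for the model
function buildContextPreamble(ctx: SummarizeContext): string {
  const roster = ctx.people
    .map(p => `- ${p.name} (${p.role}, ${p.party})`)
    .join("\n");
  return [
    `City: ${ctx.cityName}`,
    `Body: ${ctx.administrativeBodyType}`,
    `Date: ${ctx.meetingDate}`,
    `Topics: ${ctx.topics.join(", ")}`,
    `Roster:\n${roster}`,
  ].join("\n");
}
```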
**3. AI processing**
Claude analyzes the transcript in batches, generating:
- Speaker segment summaries (substantive vs. procedural)
- Topic labels for each segment
- Subject extraction with structured data
- Discussion status for each utterance
- Speaker contributions per subject
**4. Data persistence**
Results are stored in the database:
- Summaries linked to speaker segments
- Subjects with full metadata and relationships
- Utterance discussion statuses
- Topic labels for filtering
**5. Notification creation**
If enabled, notifications are created for users interested in the discussed subjects based on matching rules.
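The matching rules themselves are not shown here. A minimal sketch of topic-based matching, where the rule shape and field names are hypothetical:

```typescript
// Hypothetical notification rule: a user follows a set of topics
interface NotificationRule { userId: string; topicIds: string[]; }

// Collect the users whose rules match the subject's primary topic
function usersToNotify(
  subjectTopicId: string | null,
  rules: NotificationRule[]
): string[] {
  if (!subjectTopicId) return []; // unclassified subjects notify no one
  return rules
    .filter(r => r.topicIds.includes(subjectTopicId))
    .map(r => r.userId);
}
```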
The AI identifies and structures meeting subjects:
**Subject structure**
Each subject includes:
```typescript
interface Subject {
  agendaItemIndex: number;              // Position in agenda
  name: string;                         // Short title
  description: string;                  // Detailed summary
  topicId: string | null;               // Primary topic classification
  introducedById: string | null;        // Person who introduced it
  context: string | null;               // Additional background
  vote: Vote | null;                    // Voting results if applicable
  decision: Decision | null;            // Final decision text
  speakerSegments: string[];            // Linked speaker segment IDs
  contributions: SpeakerContribution[]; // Summary per speaker
}
```
Subjects maintain relationships to speakers, topics, and decisions for rich querying.
**Speaker contributions**
For each subject, Claude generates per-speaker summaries:
```typescript
interface SpeakerContribution {
  speakerId: string | null;   // Person ID
  speakerName: string | null; // Display name for unknown speakers
  text: string;               // Markdown with special reference links
}
```
References use special syntax:
- `[text](REF:UTTERANCE:id)` links to a specific utterance
- `[text](REF:PERSON:id)` links to a person profile
- `[text](REF:PARTY:id)` links to a party page
From `src/lib/apiTypes.ts:131-135`
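A renderer for this syntax can be sketched with a single regex replace. The target URL paths below are hypothetical; the real app routes may differ:

```typescript
// Match [text](REF:KIND:id) links produced by the model
const REF_LINK = /\[([^\]]+)\]\(REF:(UTTERANCE|PERSON|PARTY):([^)]+)\)/g;

// Rewrite REF links into ordinary markdown links (hypothetical URL scheme)
function resolveRefLinks(markdown: string): string {
  const paths: Record<string, string> = {
    UTTERANCE: "utterances",
    PERSON: "people",
    PARTY: "parties",
  };
  return markdown.replace(
    REF_LINK,
    (_m, text, kind, id) => `[${text}](/${paths[kind]}/${id})`
  );
}

resolveRefLinks("See [the mayor](REF:PERSON:p1)'s remarks.");
// → "See [the mayor](/people/p1)'s remarks."
```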
**Subject matching**
Subjects are matched by `agendaItemIndex` to preserve IDs across re-summarization:
```typescript
const subjectNameToIdMap = await saveSubjectsForMeeting(
  response.subjects,
  cityId,
  meetingId
);
// Matches existing subjects to avoid orphaning ES documents
// Returns map of API subject ID -> database subject ID
```
This ensures Elasticsearch documents aren’t orphaned when meetings are re-summarized.

From `src/lib/tasks/summarize.ts:116-120`
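The matching itself can be sketched as a pure function keyed on `agendaItemIndex`. The shapes below are simplified assumptions, not the real database models:

```typescript
// Simplified shapes for illustration
interface DbSubject { id: string; agendaItemIndex: number; }
interface ApiSubject { agendaItemIndex: number; name: string; }

// Reuse the existing database ID when an incoming subject
// occupies the same agenda slot; otherwise mark it as new (null)
function matchSubjects(
  existing: DbSubject[],
  incoming: ApiSubject[]
): Map<number, string | null> {
  const byIndex = new Map<number, string>();
  for (const s of existing) byIndex.set(s.agendaItemIndex, s.id);

  const result = new Map<number, string | null>();
  for (const s of incoming) {
    result.set(s.agendaItemIndex, byIndex.get(s.agendaItemIndex) ?? null);
  }
  return result;
}
```

Subjects mapped to an existing ID keep their Elasticsearch documents; entries mapped to `null` would be created fresh.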
Each utterance is assigned a discussion status:

```typescript
enum DiscussionStatus {
  ATTENDANCE = "ATTENDANCE",                 // Roll call
  SUBJECT_DISCUSSION = "SUBJECT_DISCUSSION", // Discussing a subject
  VOTE = "VOTE",                             // Voting on a subject
  OTHER = "OTHER"                            // General discussion
}
```
From `src/lib/apiTypes.ts:138-143`
Utterances are linked to subjects during discussion:
```typescript
interface UtteranceDiscussionStatus {
  utteranceId: string;
  status: DiscussionStatus;
  subjectId: string | null; // Links to subject being discussed
}
```
This enables features like:
- “Jump to subject discussion” from a subject page
- Timeline visualization of subject discussions
- Filtering the transcript by subject
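For example, filtering a transcript down to one subject is a simple scan over the stored statuses. A sketch with simplified shapes:

```typescript
// Simplified status record, mirroring the stored fields
interface UtteranceStatus { utteranceId: string; subjectId: string | null; }

// Return the IDs of utterances linked to a given subject, in order
function utterancesForSubject(
  statuses: UtteranceStatus[],
  subjectId: string
): string[] {
  return statuses
    .filter(s => s.subjectId === subjectId)
    .map(s => s.utteranceId);
}
```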
Discussion statuses are saved with deduplication and validation:
```typescript
// Deduplicate by utteranceId (keep last entry)
const statusMap = new Map();
for (const status of response.utteranceDiscussionStatuses) {
  if (statusMap.has(status.utteranceId)) {
    console.warn(`Duplicate utterance ID: ${status.utteranceId}`);
  }
  statusMap.set(status.utteranceId, status);
}

// Validate utterances exist
const utteranceIds = [...statusMap.keys()];
const existingUtterances = await prisma.utterance.findMany({
  where: { id: { in: utteranceIds } },
  select: { id: true }
});
const existingIds = new Set(existingUtterances.map(u => u.id));
const validStatuses = [...statusMap.values()]
  .filter(status => existingIds.has(status.utteranceId));

// Update only valid utterances
await Promise.all(
  validStatuses.map(status =>
    prisma.utterance.update({
      where: { id: status.utteranceId },
      data: {
        discussionStatus: status.status,
        discussionSubjectId: dbSubjectId
      }
    })
  )
);
```
The interactive chat feature provides context-aware answers:
```typescript
import { aiChatStream } from '@/lib/ai';

const stream = await aiChatStream(systemPrompt, messages, {
  maxTokens: 4096,
  temperature: 0.3,     // Slightly creative for conversation
  enableWebSearch: true // Optional for factual queries
});

for await (const event of stream) {
  if (event.type === 'content_block_delta') {
    const textDelta = event.delta.text;
    // Stream text to UI
  }
}
```
From `src/lib/ai.ts:358-390`
Web search is useful for questions that require current information beyond the meeting transcript (e.g., “What is the current status of this proposal?”).