Overview
Argument Cartographer’s AI layer is built on Google Genkit, a TypeScript framework for building AI-powered applications. This architecture provides type-safe LLM interactions, tool calling, and observability out of the box.
Current Model: Gemini 2.5 Flash - optimized for speed and cost while maintaining high-quality reasoning.
Genkit Configuration
The core AI instance is configured in src/ai/genkit.ts:
import { genkit } from 'genkit';
import { googleAI } from '@genkit-ai/google-genai';

export const ai = genkit({
  plugins: [googleAI()],
  model: 'googleai/gemini-2.5-flash',
});
Key Configuration:
Plugin: @genkit-ai/google-genai for Gemini integration
Default Model: gemini-2.5-flash (fast, cost-effective)
API Key: Read from process.env.GOOGLE_GENAI_API_KEY
You can override the model per-flow by specifying model: in flow definitions.
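As a sketch of that override, a flow can pass a different model to its `ai.generate` call; the flow name, prompt, and model choice below are illustrative, not taken from the codebase:

```typescript
// Hypothetical sketch: overriding the instance-default model inside one flow.
// 'summarizeFlow' and the prompt text are illustrative examples.
import { ai } from '@/ai/genkit';
import { z } from 'genkit';

export const summarizeFlow = ai.defineFlow(
  {
    name: 'summarizeFlow',
    inputSchema: z.object({ text: z.string() }),
    outputSchema: z.string(),
  },
  async ({ text }) => {
    const { text: summary } = await ai.generate({
      model: 'googleai/gemini-2.5-pro', // overrides the default for this call
      prompt: `Summarize in two sentences:\n${text}`,
    });
    return summary;
  }
);
```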
AI Flows
Genkit Flows are the primary abstraction for AI tasks. Each flow represents a complete AI workflow with inputs, outputs, and intermediate steps.
Flow Architecture
Core Flows
File: src/ai/flows/generate-argument-blueprint.ts
Purpose: Main analysis flow that generates complete argument maps
Input Schema:
z.object({
  input: z.string().describe('Topic query, URL, or document text'),
})
Output Schema:
z.object({
  blueprint: z.array(ArgumentNodeSchema),
  summary: z.string(),
  analysis: z.string(),
  credibilityScore: z.number().min(1).max(10),
  brutalHonestTake: z.string(),
  keyPoints: z.array(z.string()),
  socialPulse: z.string(),
  tweets: z.array(TweetSchema),
  fallacies: z.array(DetectedFallacySchema).optional(),
})
Processing Steps:
Generate search query (Gemini)
Web search (Firecrawl tool)
Scrape articles (Firecrawl tool)
Twitter search (Twitter API tool)
Main analysis (Gemini with full context)
Social pulse summary (Gemini)
Schema validation (Zod)
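The seven steps above form a linear pipeline. The sketch below stubs each step with a placeholder so the control flow is visible end to end; every function name and return value is illustrative, standing in for the real Gemini, Firecrawl, and Twitter calls:

```typescript
// Illustrative skeleton of the blueprint pipeline; all steps are stubs.
type Doc = { url: string; content: string };

async function generateSearchQuery(input: string): Promise<string> {
  return `debate: ${input}`; // stub for a Gemini query-generation call
}
async function webSearch(query: string): Promise<string[]> {
  return ['https://example.com/a']; // stub for Firecrawl search
}
async function scrape(urls: string[]): Promise<Doc[]> {
  return urls.map(url => ({ url, content: 'article text' })); // stub scraper
}
async function twitterSearch(query: string): Promise<string[]> {
  return ['sample tweet']; // stub for the Twitter API tool
}

async function runBlueprintPipeline(input: string) {
  const searchQuery = await generateSearchQuery(input); // 1. query generation
  const urls = await webSearch(searchQuery);            // 2. web search
  const docs = await scrape(urls);                      // 3. scrape articles
  const tweets = await twitterSearch(searchQuery);      // 4. social search
  // 5-6. main analysis + social pulse would call Gemini with docs/tweets;
  //      represented here by a canned result.
  const analysis = { summary: `Analysis of: ${input}`, sources: docs.length };
  // 7. in the real flow, Zod schema validation runs on the model output.
  return { searchQuery, tweets, analysis };
}
```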
File: src/ai/flows/identify-logical-fallacies.ts
Purpose: Standalone fallacy detection for arbitrary text
Input:
z.object({
  argumentText: z.string(),
})
Output:
z.object({
  fallacies: z.array(z.string()),
  explanation: z.string(),
})
Prompt Strategy: Provide examples of each fallacy type, ask the AI to identify and explain
File: src/ai/flows/ask-more.ts
Purpose: Interactive chat for follow-up questions about analysis
Context: Full blueprint + user question
Pattern: Retrieval-augmented generation (RAG)
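In this context, RAG amounts to serializing the existing blueprint into the prompt as grounding material for the follow-up question. A minimal sketch of that assembly; the node shape and helper name are assumptions, not the real code:

```typescript
// Minimal sketch of RAG-style prompt assembly for follow-up questions.
// The blueprint node shape and helper name are illustrative assumptions.
type BlueprintNode = { id: string; type: string; text: string };

function buildAskMorePrompt(blueprint: BlueprintNode[], question: string): string {
  // Flatten the blueprint into a plain-text context block
  const context = blueprint
    .map(node => `[${node.type}] ${node.text}`)
    .join('\n');
  return `You previously produced this argument blueprint:\n${context}\n\n` +
    `Answer the follow-up question using only the blueprint above.\n` +
    `Question: ${question}`;
}
```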
File: src/ai/flows/summarize-source-text.ts
Purpose: Extract key points from lengthy articles
Use Case: Pre-processing scraped content for token efficiency
File: src/ai/flows/explain-logical-fallacy.ts
Purpose: Educational deep-dive on specific fallacy types
Output: Definition, examples, how to avoid, real-world instances
Prompt Engineering
Main Analysis Prompt
The core prompt for argument blueprint generation:
const mainAnalysisPrompt = ai.definePrompt({
  name: 'mainAnalysisPrompt',
  input: {
    schema: z.object({
      input: z.string(),
      searchQuery: z.string(),
      context: z.string()
    })
  },
  output: {
    schema: z.object({
      blueprint: z.array(ArgumentNodeSchema),
      summary: z.string(),
      analysis: z.string(),
      credibilityScore: z.number(),
      brutalHonestTake: z.string(),
      keyPoints: z.array(z.string()),
      fallacies: z.array(DetectedFallacySchema),
    })
  },
  system: `You are an expert AI assistant specializing in rigorous,
balanced argument deconstruction and LOGICAL FALLACY DETECTION.

Core Principles:
1. Objectivity is Paramount - Act as neutral synthesizer
2. Depth and Detail - Identify distinct lines of reasoning
3. Ground Everything in Sources - Tie every node to provided context
4. Detect Logical Fallacies - Actively scan for errors in reasoning

Execution Process:
1. Analyze Context - Read provided sources
2. Identify Thesis - Determine central question
3. Deconstruct Both Sides - Claims, counterclaims, evidence
4. Excavate Evidence - Extract verbatim snippets
5. Detect Fallacies - Identify specific logical errors
6. Build Blueprint - Construct JSON object

You must respond with valid JSON in a \`\`\`json code block.`,
  prompt: `Initial Query: {{{input}}}
Search Query Used: {{{searchQuery}}}

*** RESEARCH CONTEXT (Analysis Sources) ***
{{{context}}}`
});
Prompt Techniques
Few-Shot Learning
Pattern: Provide 2-3 examples before asking for analysis
const prompt = `Here are examples of good argument blueprints:

Example 1:
{
  "blueprint": [...],
  "credibilityScore": 8
}

Example 2: ...

Now analyze this topic:
{{{userInput}}}`
Benefit: Improves consistency and output quality

Chain of Thought
Pattern: Ask the AI to reason step-by-step
system: `Before providing final output, think through:
1. What is the central thesis?
2. What are the main supporting arguments?
3. What are the main objections?
4. What evidence backs each claim?
Then provide structured JSON output.`
Benefit: More thorough, reasoned analysis

Constrained Generation
Pattern: Force specific output structure via JSON mode
output: {
  format: 'json',
  schema: GenerateArgumentBlueprintOutputSchema
}
Benefit: Guaranteed parseable output, no regex hacks

Role Prompting
Pattern: Assign the AI specific expertise
system: `You are an expert in logical fallacies with a PhD in
philosophy and 20 years teaching critical thinking...`
Benefit: Better domain-specific responses
Tool Calling
Genkit’s tool system enables AI to call external functions during generation.
import { ai } from '@/ai/genkit';
import { z } from 'genkit';

export const webSearch = ai.defineTool(
  {
    name: 'webSearch',
    description: 'Searches the web for information using Firecrawl',
    inputSchema: z.object({
      query: z.string().describe('Search query'),
    }),
    outputSchema: z.array(z.object({
      title: z.string(),
      link: z.string(),
      snippet: z.string(),
    })),
  },
  async (input) => {
    // Implementation
    const results = await searchWeb(input.query);
    return results;
  }
);
webSearch - Firecrawl API search
webScraper - Article content extraction
twitterSearch - Social sentiment gathering
Tool Calling Flow:
In the current implementation, tools are called programmatically rather than via AI function calling to maintain deterministic control flow.
Context Management
Token Budget Strategy
Gemini 2.5 Flash Limits:
Input: 1M tokens
Output: 8K tokens
Our Strategy:
Allocate roughly 24K tokens for source context
12K chars per source × 8 sources = ~96K chars ≈ ~24K tokens (at roughly 4 chars per token)
Leaves room for prompt, examples, and safety margin
let context = "";
if (scrapedDocs.length > 0) {
  context = scrapedDocs.map((doc, index) => `
--- SOURCE ${index + 1} ---
URL: ${doc.url}
Extracted Text:
${doc.content.substring(0, 12000)}
`).join("\n\n");
}
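The char-to-token arithmetic behind the 12K-char cap can be sketched with the common ~4 chars/token heuristic; this is an approximation, not Gemini's actual tokenizer:

```typescript
// Rough token budgeting using the ~4 chars/token rule of thumb.
// This is a heuristic; the real tokenizer count will differ somewhat.
const CHARS_PER_TOKEN = 4;
const CHARS_PER_SOURCE = 12_000;
const MAX_SOURCES = 8;

function estimateTokens(chars: number): number {
  return Math.ceil(chars / CHARS_PER_TOKEN);
}

const totalChars = CHARS_PER_SOURCE * MAX_SOURCES; // 96,000 chars
const contextTokens = estimateTokens(totalChars);  // ~24,000 tokens
```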
Context Prioritization
When sources exceed budget:
Prioritize by Domain
Trusted outlets (Reuters, BBC) get full content
Truncate Less Reliable
Reduce token allocation for lower-quality sources
Summarize If Needed
Use summarizeSourceText flow for lengthy articles
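One way to implement this prioritization is a per-source character budget keyed on domain trust; the trusted-domain list and budget numbers below are illustrative assumptions, not the project's actual configuration:

```typescript
// Illustrative per-source budgeting by domain trust.
// The trusted-domain list and character budgets are assumptions.
const TRUSTED_DOMAINS = ['reuters.com', 'bbc.com', 'bbc.co.uk'];

function charBudgetFor(url: string): number {
  const trusted = TRUSTED_DOMAINS.some(d => new URL(url).hostname.endsWith(d));
  return trusted ? 12_000 : 6_000; // full vs reduced allocation
}

function truncateSource(url: string, content: string): string {
  // Cap each source's contribution at its domain-based budget
  return content.substring(0, charBudgetFor(url));
}
```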
Response Parsing
AI responses may wrap JSON in markdown code blocks:
const rawText = mainAnalysisResponse.text;
let jsonString = "";

// Try to extract from code block
const jsonBlockMatch = rawText.match(/```json\n([\s\S]*?)\n```/);
if (jsonBlockMatch && jsonBlockMatch[1]) {
  jsonString = jsonBlockMatch[1];
} else {
  // Fallback: Find braces
  const firstBrace = rawText.indexOf('{');
  const lastBrace = rawText.lastIndexOf('}');
  if (firstBrace !== -1 && lastBrace !== -1) {
    jsonString = rawText.substring(firstBrace, lastBrace + 1);
  }
}

// Use JSON5 for lenient parsing (allows trailing commas)
const parsed = JSON5.parse(jsonString);
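The same extraction logic can be wrapped in a pure helper, which makes both the code-block path and the brace fallback easy to unit-test in isolation; `JSON.parse` stands in for JSON5 here to keep the sketch dependency-free:

```typescript
// Reusable version of the extraction logic; returns the candidate JSON
// string, or null when no JSON-like region is found.
function extractJsonString(rawText: string): string | null {
  const blockMatch = rawText.match(/```json\n([\s\S]*?)\n```/);
  if (blockMatch && blockMatch[1]) return blockMatch[1];

  // Fallback: widest brace-delimited span
  const first = rawText.indexOf('{');
  const last = rawText.lastIndexOf('}');
  if (first !== -1 && last > first) return rawText.substring(first, last + 1);
  return null;
}
```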
Schema Validation
try {
  const validatedOutput = GenerateArgumentBlueprintOutputSchema.parse(parsed);
  return validatedOutput;
} catch (error) {
  if (error instanceof ZodError) {
    console.error("Schema validation failed:", error.errors);
    // Log specific field errors for debugging
    error.errors.forEach(err => {
      console.error(`- ${err.path.join('.')}: ${err.message}`);
    });
  }
  throw new Error("AI output does not match expected schema");
}
Error Handling
Retry Logic
async function callAIWithRetry<T>(
  fn: () => Promise<T>,
  maxRetries = 3
): Promise<T> {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (error) {
      if (i === maxRetries - 1) throw error;
      const delay = Math.pow(2, i) * 1000; // 1s, 2s, 4s
      console.log(`Retry ${i + 1}/${maxRetries} after ${delay}ms`);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
  throw new Error("Max retries exceeded");
}
Fallback Strategies
Trigger: Firecrawl returns 0 results
Fallback:
Try broader search query
Use AI knowledge-only mode
Display disclaimer about missing sources
Trigger: JSON extraction or Zod validation fails
Fallback:
Retry generation with stronger prompt
Use regex to extract partial data
Return error to user with raw text
Trigger: API returns 429 error
Fallback:
Implement exponential backoff
Queue request for later
Show user estimated wait time
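The empty-results fallback above can be expressed as a small decision chain. In this sketch the search function is stubbed, and `broadenQuery` plus the knowledge-only marker are illustrative inventions rather than the project's real helpers:

```typescript
// Sketch of the empty-search-results fallback chain with a stubbed search.
// 'broadenQuery' and the knowledgeOnly flag are illustrative assumptions.
type SearchFn = (query: string) => Promise<string[]>;

function broadenQuery(query: string): string {
  return query.split(' ').slice(0, 2).join(' '); // drop narrowing terms
}

async function searchWithFallback(query: string, search: SearchFn) {
  let results = await search(query);
  if (results.length === 0) {
    results = await search(broadenQuery(query)); // 1. try broader query
  }
  if (results.length === 0) {
    // 2. fall back to AI knowledge-only mode;
    // 3. the UI would show a missing-sources disclaimer.
    return { results: [] as string[], knowledgeOnly: true };
  }
  return { results, knowledgeOnly: false };
}
```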
Parallel Processing
const [searchResults, twitterResults] = await Promise.all([
  searchWeb(searchQuery),
  twitterSearch({ query: searchQuery }),
]);
Impact: Reduces total latency from 20s to 12s
Streaming Responses (Future)
const stream = await ai.generateStream({
  prompt: analysisPrompt,
  model: 'googleai/gemini-2.5-flash',
});

for await (const chunk of stream) {
  // Send partial results to client
  sendSSE({ type: 'chunk', data: chunk });
}
Benefit: User sees progress in real-time
Observability
Genkit Dev UI
Run npm run genkit:dev to access:
Flow Inspector: View all registered flows
Trace Viewer: See execution steps and timing
Input/Output Tester: Test flows with sample data
Model Switcher: Try different Gemini models
Genkit UI runs on http://localhost:4000 separate from the main app.
Logging Strategy
console.log('[Flow] Generated Query:', searchQuery);
console.log(`[Flow] Found ${results.length} sources`);
console.log('[Flow] Context length:', context.length, 'chars');
console.log('[Flow] Analysis complete. Credibility:', score);
Production: Replace with Pino or Winston for structured JSON logs
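Until a real logger is wired in, structured logging can be approximated with JSON lines; this sketch only shows the general shape Pino or Winston would emit, and the field names are illustrative:

```typescript
// Minimal structured-log formatter emitting JSON lines, approximating
// the output shape of Pino or Winston. Field names are illustrative.
function logLine(
  level: 'info' | 'error',
  msg: string,
  fields: Record<string, unknown> = {}
): string {
  const entry = { level, msg, time: new Date().toISOString(), ...fields };
  const line = JSON.stringify(entry);
  console.log(line); // one JSON object per line
  return line;
}
```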
Next Steps
External Integrations: Learn how Firecrawl, Twitter, and Gemini APIs are integrated
Data Layer: Understand data persistence and security
Configuration: Customize AI model, prompts, and behavior
API Reference: Detailed API documentation for all flows