Core Functions
Analysis Execution
runAnalysis
Analyzes a single prompt/response pair to generate comprehensive brand intelligence metrics.

Returns a comprehensive analysis object containing:

geoScore (GEO Performance):

- `overall`: 0-100 composite score
- `verdict`: One-sentence evidence-based summary

Mention and visibility:

- `mentioned`: Boolean - brand appears in response
- `mentionCount`: Number of times brand is referenced
- `visibility`: 0-100 weighted visibility score
- `prominence`: "dominant" | "significant" | "moderate" | "minor" | "passing" | "absent"
- `firstMentionPosition`: "top" | "middle" | "bottom" | "absent"

Ranking:

- `rankPosition`: Absolute rank (1-indexed) or null
- `totalRanked`: Total brands in response
- `isTopPick`: Boolean - explicitly marked as #1 choice
- `isTopThree`: Boolean - ranks in top 3
- `rankingContext`: Description of ranking category

Sentiment:

- `score`: 0-100 sentiment score
- `label`: "very_negative" | "negative" | "neutral" | "positive" | "very_positive"
- `positives`: Array of positive phrases from response
- `negatives`: Array of negative phrases from response

Recommendation:

- `type`: "top_pick" | "strong_alternative" | "conditional" | "mentioned_only" | "discouraged" | "not_mentioned"
- `bestFor`: Array of use cases/audiences
- `caveats`: Array of limitations/conditions

Competitors - array of competitor objects with:

- `name`: Competitor brand name
- `domain`: Competitor domain
- `visibility`: 0-100 visibility score
- `sentiment`: 0-100 sentiment score
- `rankPosition`: Absolute rank
- `isRecommended`: Boolean
- `winsOver`: Areas where competitor beats target brand
- `losesTo`: Areas where target brand wins

Brand perception:

- `coreClaims`: Key statements about the brand
- `differentiators`: What sets brand apart
- `bestKnownFor`: Primary association (single phrase)
- `pricingPerception`: "premium" | "mid_range" | "budget" | "free" | "not_mentioned"

Risks:

- `hasRisks`: Boolean
- `items`: Array of risk objects:
  - `type`: "outdated_info" | "factual_error" | "brand_confusion" | "negative_association" | "missing_from_response"
  - `severity`: "critical" | "warning" | "info"
  - `detail`: Specific description

Actions - array of action objects:

- `priority`: "critical" | "high" | "medium" | "low"
- `recommendation`: Specific, actionable advice

Metadata:

- `brandName`: Brand analyzed
- `brandDomain`: Domain analyzed
- `prompt`: Original prompt
- `prompt_id`: Prompt UUID (if available)
- `analyzedAt`: ISO timestamp
Guarantees enforced by the analysis:

- Zero hallucination policy (every metric must be traceable to response text)
- Anti-inflation scoring rules (resists AI tendency to over-score)
- Five-dimension visibility calculation (Coverage, Placement, Structural Prominence, Frequency, Contextual Framing)
- Absolute ranking (not local category ranks)
- Cross-validation checks (15+ consistency rules)
- Evidence-first approach (defaults to conservative values)
Throws:

- `ExternalServiceError`: If ChatGPT API call fails (502 status)
- `ValidationError`: If response JSON is invalid
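The shape described above can be sketched as a trimmed-down TypeScript model. This is illustrative only: the real `BrandAnalysisResult` type lives in the services package, and any grouping or field names beyond those documented here are assumptions. The conservative "not mentioned" default follows directly from the cross-validation rules (overall = 0, visibility = 0, sentiment = 50).

```typescript
// Hedged sketch: a subset of the documented result fields, not the real type.

type Prominence =
  | "dominant" | "significant" | "moderate" | "minor" | "passing" | "absent";

interface GeoScore {
  overall: number; // 0-100 composite score
  verdict: string; // one-sentence evidence-based summary
}

interface Sentiment {
  score: number; // 0-100
  label: "very_negative" | "negative" | "neutral" | "positive" | "very_positive";
  positives: string[];
  negatives: string[];
}

interface AnalysisResultSketch {
  geoScore: GeoScore;
  mentioned: boolean;
  mentionCount: number;
  visibility: number; // 0-100
  prominence: Prominence;
  sentiment: Sentiment;
}

// Evidence-first default when the brand is absent from the response:
// overall = 0, visibility = 0, sentiment = 50 (neutral).
function notMentionedDefault(verdict: string): AnalysisResultSketch {
  return {
    geoScore: { overall: 0, verdict },
    mentioned: false,
    mentionCount: 0,
    visibility: 0,
    prominence: "absent",
    sentiment: { score: 50, label: "neutral", positives: [], negatives: [] },
  };
}
```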
analysePromptsForWorkspace
Batch-processes unanalyzed prompt responses for a workspace, storing results in ClickHouse.

Returns:

- `analysedCount`: Number of successfully analyzed responses
- `failedCount`: Number of failed analyses
- `errors`: Array of error objects with `responseId`, `modelProvider`, `error`
- `remainingCount`: Number of unanalyzed responses still in queue
Process:

- Queries `analytics.prompt_responses` for rows where `is_analysed = false`
- Fetches workspace details (brand name, domain)
- For each response:
  - Calls `runAnalysis()` with brand and response data
  - Stores the result in `analytics.prompt_analysis`
  - Marks the response as analyzed via `ALTER TABLE UPDATE`
- Uses offset-based pagination to handle ClickHouse async mutations
- Waits 100ms between batches to allow mutations to complete
- Continues until no unanalyzed responses remain (if `analyzeAll: true`)
Error handling:

- Individual analysis failures are logged and collected in the `errors` array
- Does not throw on individual failures
- Continues processing remaining responses
- Returns detailed error information for debugging
Performance characteristics:

- Default batch size: 50 responses
- Sequential analysis (not parallelized, to avoid rate limits)
- Typical speed: ~2-3 seconds per response (GPT-4 latency)
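The batching strategy above can be sketched as follows. This is not the real implementation: `fetchBatch` and `analyse` are hypothetical stand-ins for the ClickHouse query and `runAnalysis()` call, and the ~100ms inter-batch delay is elided so the sketch stays synchronous.

```typescript
// Illustrative sketch of the batch loop: offset-based pagination, sequential
// per-item processing, and error collection without throwing.

interface BatchResult {
  analysedCount: number;
  failedCount: number;
  errors: { responseId: string; error: string }[];
}

function processAllResponses(
  fetchBatch: (offset: number, limit: number) => string[], // response ids
  analyse: (responseId: string) => void,                   // throws on failure
  batchSize = 50,
): BatchResult {
  const result: BatchResult = { analysedCount: 0, failedCount: 0, errors: [] };
  let offset = 0;
  for (;;) {
    const batch = fetchBatch(offset, batchSize);
    if (batch.length === 0) break; // no unanalyzed responses remain
    for (const id of batch) {
      try {
        analyse(id); // sequential on purpose: avoids model-API rate limits
        result.analysedCount++;
      } catch (e) {
        result.failedCount++; // collect the failure; keep processing
        result.errors.push({ responseId: id, error: String(e) });
      }
    }
    offset += batchSize; // real code also waits ~100ms here for mutations
  }
  return result;
}
```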
Analysis Retrieval
fetchAnalysedPrompts
Retrieves all prompt responses with their analysis data (both analyzed and unanalyzed).

Returns an array of analysis records with:

- `id`: Response UUID
- `prompt_id`: Prompt UUID
- `prompt`: Prompt text
- `prompt_run_at`: Execution timestamp
- `user_id`: User ID
- `workspace_id`: Workspace ID
- `model_provider`: Provider name
- `response`: AI response text
- `sources`: Array of source objects
- `brand_analysis`: BrandAnalysisResult object (or undefined if not analyzed)
- `created_at`: Storage timestamp
- `is_analysed`: Boolean flag
Behavior:

- Joins `analytics.prompt_responses` with `analytics.prompt_analysis`
- Uses a LEFT JOIN so unanalyzed responses are included
- Orders by `prompt_run_at DESC` (most recent first)
- Handles empty/malformed JSON gracefully
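The "handles empty/malformed JSON gracefully" point follows from the LEFT JOIN: unanalyzed rows come back with an empty `brand_analysis` column, and a corrupt payload should not crash the whole fetch. A minimal sketch (names are illustrative, not the real helper):

```typescript
// Parse a brand_analysis column value, treating empty or malformed JSON
// the same as "not analyzed yet" rather than throwing.
function parseBrandAnalysis<T>(raw: string | null | undefined): T | undefined {
  if (!raw || raw.trim() === "") return undefined; // unanalyzed row (LEFT JOIN)
  try {
    return JSON.parse(raw) as T;
  } catch {
    return undefined; // malformed JSON: degrade gracefully
  }
}
```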
getLastPromptRunTime
Retrieves the timestamp of the most recent prompt execution for a workspace.

Parameters: workspace ID

Returns: ISO timestamp of the last prompt run, or `null` if no runs exist
Analysis Management
resetWorkspaceAnalysis
Clears all analysis data for a workspace while preserving raw prompt responses.

Parameters: workspace to reset

Behavior:

- Deletes all rows from `analytics.prompt_analysis` for the workspace
- Sets `is_analysed = false` on all rows in `analytics.prompt_responses`
- Does NOT delete prompt responses or user prompts
- Allows re-analysis with updated logic/prompts
Use cases:

- Brand name or domain changed (called automatically by `updateWorkspaceDetails`)
- Analysis logic updated and historical data should be reprocessed
- Manual reset requested by user
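The two steps map onto ClickHouse mutations. A hedged sketch of the statements only; the real implementation goes through the project's ClickHouse client, and the exact SQL and parameter binding here are assumptions:

```typescript
// Build the two ClickHouse mutations a reset would issue. ClickHouse
// expresses row-level changes as ALTER TABLE ... DELETE/UPDATE mutations,
// which apply asynchronously.
function buildResetStatements(workspaceId: string) {
  const params = { workspaceId };
  return {
    params,
    statements: [
      // 1. Drop derived analysis rows for the workspace.
      `ALTER TABLE analytics.prompt_analysis DELETE WHERE workspace_id = {workspaceId:String}`,
      // 2. Flag raw responses for re-analysis; the rows themselves survive.
      `ALTER TABLE analytics.prompt_responses UPDATE is_analysed = false WHERE workspace_id = {workspaceId:String}`,
    ],
  };
}
```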
Analysis Prompt Engineering
The analysis uses a detailed 494-line prompt (see `analysisPrompt.ts:6-493`) that implements:
Core Principles
- Zero Hallucination Policy - Every metric must be traceable to response text
- Quote-or-Default - Before scoring, mentally quote the justifying passage
- Literal Reading - No inference beyond what’s explicitly stated
- Anti-Inflation Mandate - Actively resist over-scoring
- Evidence-First - Positive scores require explicit evidence
Visibility Calculation (5 Dimensions)
A. Coverage (25%) - Space occupied

- 0-5: Name-drop only
- 16-30: Short paragraph
- 51-75: Primary subject
- 76-100: Dominates response

B. Placement - Position of first mention

- 90-100: First sentence
- 70-89: First quarter
- 40-69: Middle
- 15-39: Last quarter

C. Structural Prominence - How the mention is presented

- 80-100: Heading/title/top pick slot
- 60-79: Top 3 list item
- 40-59: Lower list item
- 20-39: Inline prose

D. Frequency - Mention count

- 80-100: 6+ mentions
- 60-79: 4-5 mentions
- 40-59: 2-3 mentions
- 20-39: 1 mention

E. Contextual Framing - How the brand is framed

- 90-100: Direct answer
- 70-89: Actively recommended
- 50-69: Compared with peers
- 30-49: Context/background
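The five dimension scores roll up into the weighted visibility score. Only Coverage's 25% weight is documented here; the other four weights in this sketch are placeholder assumptions (they must sum to 1 with Coverage, but the real split lives in the prompt):

```typescript
// Weighted roll-up of the five visibility dimensions (A-E above).
interface VisibilityDimensions {
  coverage: number;             // 0-100, A (documented weight: 25%)
  placement: number;            // 0-100, B
  structuralProminence: number; // 0-100, C
  frequency: number;            // 0-100, D
  contextualFraming: number;    // 0-100, E
}

const WEIGHTS: Record<keyof VisibilityDimensions, number> = {
  coverage: 0.25,               // documented
  placement: 0.1875,            // assumed
  structuralProminence: 0.1875, // assumed
  frequency: 0.1875,            // assumed
  contextualFraming: 0.1875,    // assumed
};

function visibilityScore(d: VisibilityDimensions): number {
  const total = (Object.keys(WEIGHTS) as (keyof VisibilityDimensions)[])
    .reduce((sum, k) => sum + d[k] * WEIGHTS[k], 0);
  return Math.round(total); // 0-100
}
```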
GEO Score Calculation
Absolute Ranking Rules
Rankings reflect reading order across the entire response, not local category positions. For example, a brand listed first within a "budget picks" subsection, but preceded by two other brands earlier in the response, gets `rankPosition: 3`.

Cross-Validation Checks
The prompt enforces 18 consistency rules before outputting, including:

- If not mentioned → overall = 0, visibility = 0, sentiment = 50
- If sentiment ≥ 60 → positives[] must be non-empty
- If isTopPick = true → rankPosition = 1, overall ≥ 60
- Prominence must match visibility score range
- Target brand must NOT appear in competitors array
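Four of the five listed rules can be expressed as a runnable validator (the prominence/visibility consistency rule is omitted because the exact score bands per prominence level aren't reproduced here). The real checks are enforced by the prompt at generation time; this validator shape is illustrative:

```typescript
// Minimal cross-validation sketch over the fields the rules reference.
interface AnalysisForValidation {
  mentioned: boolean;
  overall: number;
  visibility: number;
  sentiment: number;
  positives: string[];
  isTopPick: boolean;
  rankPosition: number | null;
  brandName: string;
  competitors: { name: string }[];
}

function crossValidate(a: AnalysisForValidation): string[] {
  const violations: string[] = [];
  if (!a.mentioned && (a.overall !== 0 || a.visibility !== 0 || a.sentiment !== 50))
    violations.push("not mentioned => overall = 0, visibility = 0, sentiment = 50");
  if (a.sentiment >= 60 && a.positives.length === 0)
    violations.push("sentiment >= 60 requires non-empty positives[]");
  if (a.isTopPick && (a.rankPosition !== 1 || a.overall < 60))
    violations.push("isTopPick requires rankPosition = 1 and overall >= 60");
  if (a.competitors.some((c) => c.name === a.brandName))
    violations.push("target brand must not appear in competitors");
  return violations;
}
```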
Usage in tRPC Routers
Example from `apps/web/src/server/api/routers/analysis/analysis.ts`:
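The original snippet is not reproduced here. As a hedged sketch of the typical wiring, a router procedure mostly just forwards validated input to the service layer; `Ctx` and the inlined service stub below are hypothetical stand-ins for the app's real tRPC context and the `analysePromptsForWorkspace` import:

```typescript
// Hypothetical stand-ins; the real router uses the app's tRPC helpers.
interface Ctx { workspaceId: string }

interface AnalyseResult {
  analysedCount: number;
  failedCount: number;
  remainingCount: number;
}

// Stand-in for the real service function from packages/services.
function analysePromptsForWorkspaceStub(input: {
  workspaceId: string;
  analyzeAll: boolean;
}): AnalyseResult {
  return { analysedCount: 0, failedCount: 0, remainingCount: 0 };
}

// A mutation resolver is usually just: validated input -> service call.
function runAnalysisMutation(ctx: Ctx, input: { analyzeAll?: boolean }) {
  return analysePromptsForWorkspaceStub({
    workspaceId: ctx.workspaceId,
    analyzeAll: input.analyzeAll ?? false,
  });
}
```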
ClickHouse Schema
analytics.prompt_analysis
The tuple `(prompt_id, prompt_run_at, model_provider, workspace_id)` uniquely identifies a response.
Type Definitions
Source Files
- `packages/services/src/analysis/runAnalysis.ts` - Core analysis engine
- `packages/services/src/analysis/analysisPrompt.ts` - 494-line analysis prompt
- `packages/services/src/analysis/analysePromptsForWorkspace.ts` - Batch processing
- `packages/services/src/analysis/fetchAnalysedPrompts.ts` - Data retrieval
- `packages/services/src/analysis/resetWorkspaceAnalysis.ts` - Reset logic
All of the above are re-exported from `packages/services/src/analysis/index.ts`.