Overview

The analysis types module defines the core data structures for analyzing brand presence in AI-generated responses. These types power OneGlance’s ability to evaluate brand visibility, sentiment, positioning, and competitive landscape across multiple AI models.

AnalysisFilters

Filter parameters for querying analysis data.
interface AnalysisFilters {
  modelFilter?: string;
  timeFilter?: "all" | "7d" | "14d" | "30d";
  promptId?: string;
}
modelFilter
string
Filter by specific AI model provider (e.g., “chatgpt”, “claude”)
timeFilter
'all' | '7d' | '14d' | '30d'
Time range filter: all time, last 7 days, 14 days, or 30 days
promptId
string
Filter to a specific prompt for detail view

Usage

const filters: AnalysisFilters = {
  modelFilter: "claude",
  timeFilter: "7d",
  promptId: "prompt_123"
};
Used in: packages/utils/src/analysis/filterAnalysisRecords.ts:4
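Applying these filters to a record set is straightforward; the sketch below is a hypothetical implementation (the real `filterAnalysisRecords` may differ), assuming records expose `model_provider`, `prompt_id`, and an ISO 8601 `prompt_run_at`:

```typescript
interface AnalysisFilters {
  modelFilter?: string;
  timeFilter?: "all" | "7d" | "14d" | "30d";
  promptId?: string;
}

// Minimal record shape needed for filtering (structural subset of AnalysisRecord).
interface FilterableRecord {
  model_provider: string;
  prompt_id: string;
  prompt_run_at: string; // ISO 8601
}

function applyFilters<T extends FilterableRecord>(
  records: T[],
  filters: AnalysisFilters
): T[] {
  const days = { "7d": 7, "14d": 14, "30d": 30 } as const;
  // "all" (or an unset timeFilter) means no time cutoff.
  const cutoff =
    filters.timeFilter && filters.timeFilter !== "all"
      ? Date.now() - days[filters.timeFilter] * 24 * 60 * 60 * 1000
      : null;

  return records.filter((r) => {
    if (filters.modelFilter && r.model_provider !== filters.modelFilter) return false;
    if (filters.promptId && r.prompt_id !== filters.promptId) return false;
    if (cutoff !== null && Date.parse(r.prompt_run_at) < cutoff) return false;
    return true;
  });
}
```

Each unset filter field matches everything, so an empty `AnalysisFilters` object returns the full record set.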

AnalysisInputSingle

Input parameters for analyzing a single AI response.
interface AnalysisInputSingle {
  brandDomain: string;
  brandName: string;
  response: string;
  prompt: string;
}
brandDomain
string
required
The domain of the brand being analyzed (e.g., “oneglance.ai”)
brandName
string
required
The name of the brand being analyzed
response
string
required
The AI-generated response text to analyze
prompt
string
required
The original prompt that generated the response

Usage

const input: AnalysisInputSingle = {
  brandDomain: "oneglance.ai",
  brandName: "OneGlance",
  response: "OneGlance is a leading AI analytics platform...",
  prompt: "What are the best AI analytics tools?"
};
Used in: packages/services/src/analysis/runAnalysis.ts:2

BrandAnalysisResult

Comprehensive analysis result evaluating a brand’s presence and perception in an AI-generated response.
interface BrandAnalysisResult {
  metadata?: {
    brandName: string;
    brandDomain: string;
    prompt: string | null;
    prompt_id: string | null;
    analyzedAt: string;
  };

  geoScore: {
    overall: number;
    verdict: string;
  };

  presence: {
    mentioned: boolean;
    mentionCount: number;
    visibility: number;
    prominence: "dominant" | "significant" | "moderate" | "minor" | "passing" | "absent";
    firstMentionPosition: "top" | "middle" | "bottom" | "absent";
  };

  position: {
    rankPosition: number | null;
    totalRanked: number | null;
    isTopPick: boolean;
    isTopThree: boolean;
    rankingContext: string | null;
  };

  sentiment: {
    score: number;
    label: "very_negative" | "negative" | "neutral" | "positive" | "very_positive";
    positives: string[];
    negatives: string[];
  };

  recommendation: {
    type: "top_pick" | "strong_alternative" | "conditional" | "mentioned_only" | "discouraged" | "not_mentioned";
    bestFor: string[];
    caveats: string[];
  };

  competitors: {
    name: string;
    domain: string;
    visibility: number;
    sentiment: number;
    rankPosition: number | null;
    isRecommended: boolean;
    winsOver: string[];
    losesTo: string[];
  }[];

  perception: {
    coreClaims: string[];
    differentiators: string[];
    bestKnownFor: string | null;
    pricingPerception: "premium" | "mid_range" | "budget" | "free" | "not_mentioned";
  };

  risks: {
    hasRisks: boolean;
    items: {
      type: "outdated_info" | "factual_error" | "brand_confusion" | "negative_association" | "missing_from_response";
      severity: "critical" | "warning" | "info";
      detail: string;
    }[];
  };

  actions: {
    priority: "critical" | "high" | "medium" | "low";
    recommendation: string;
  }[];
}

Core Sections

Metadata

metadata
object
Optional contextual information populated by application code
metadata.brandName
string
Name of the analyzed brand
metadata.brandDomain
string
Domain of the analyzed brand
metadata.prompt
string | null
The prompt that generated the response
metadata.prompt_id
string | null
Unique identifier for the prompt
metadata.analyzedAt
string
ISO 8601 timestamp of when the analysis was performed

GEO Score

The headline composite score (0-100) indicating overall brand performance.
geoScore.overall
number
required
Composite score from 0-100 representing overall brand performance
geoScore.verdict
string
required
Human-readable verdict explaining the score

Presence

Measures whether and how prominently the brand appears in the response.
presence.mentioned
boolean
required
Whether the brand was mentioned at all
presence.mentionCount
number
required
Number of times the brand was mentioned
presence.visibility
number
required
Visibility score (0-100)
presence.prominence
string
required
Level of prominence: dominant, significant, moderate, minor, passing, or absent
presence.firstMentionPosition
string
required
Where the brand first appears: top, middle, bottom, or absent

Position

Tracks the brand’s ranking within the response.
position.rankPosition
number | null
required
Numerical rank (1 = first place, null if not ranked)
position.totalRanked
number | null
required
Total number of brands ranked in the response
position.isTopPick
boolean
required
Whether the brand is the #1 recommendation
position.isTopThree
boolean
required
Whether the brand is in the top 3 recommendations
position.rankingContext
string | null
required
Additional context about the ranking

Sentiment

Evaluates how favorably the brand is portrayed.
sentiment.score
number
required
Sentiment score (-100 to 100)
sentiment.label
string
required
Categorical label: very_negative, negative, neutral, positive, or very_positive
sentiment.positives
string[]
required
Array of positive mentions or attributes
sentiment.negatives
string[]
required
Array of negative mentions or criticisms

Recommendation

Indicates how strongly the AI recommends the brand.
recommendation.type
string
required
Recommendation strength: top_pick, strong_alternative, conditional, mentioned_only, discouraged, or not_mentioned
recommendation.bestFor
string[]
required
Use cases or scenarios where the brand is recommended
recommendation.caveats
string[]
required
Limitations or conditions mentioned

Competitors

Competitive landscape analysis.
competitors
array
required
Array of competitor analysis objects
competitors[].name
string
required
Competitor brand name
competitors[].domain
string
required
Competitor domain
competitors[].visibility
number
required
Competitor’s visibility score
competitors[].sentiment
number
required
Competitor’s sentiment score
competitors[].rankPosition
number | null
required
Competitor’s rank position (null if not ranked)
competitors[].isRecommended
boolean
required
Whether the competitor is actively recommended
competitors[].winsOver
string[]
required
Brands this competitor is positioned above
competitors[].losesTo
string[]
required
Brands this competitor is positioned below

Perception

How the brand is characterized and positioned.
perception.coreClaims
string[]
required
Main claims made about the brand
perception.differentiators
string[]
required
Unique features or advantages highlighted
perception.bestKnownFor
string | null
required
Primary association or specialty
perception.pricingPerception
string
required
Price positioning: premium, mid_range, budget, free, or not_mentioned

Risks

Issues that need attention.
risks.hasRisks
boolean
required
Whether any risks were identified
risks.items
array
required
Array of risk items
risks.items[].type
string
required
Risk category: outdated_info, factual_error, brand_confusion, negative_association, or missing_from_response
risks.items[].severity
string
required
Risk severity: critical, warning, or info
risks.items[].detail
string
required
Detailed description of the risk

Actions

Recommended next steps for the brand.
actions
array
required
Array of actionable recommendations
actions[].priority
string
required
Priority level: critical, high, medium, or low
actions[].recommendation
string
required
The recommended action to take

Usage Example

const analysis: BrandAnalysisResult = {
  geoScore: {
    overall: 85,
    verdict: "Strong presence with positive sentiment"
  },
  presence: {
    mentioned: true,
    mentionCount: 3,
    visibility: 90,
    prominence: "significant",
    firstMentionPosition: "top"
  },
  position: {
    rankPosition: 2,
    totalRanked: 5,
    isTopPick: false,
    isTopThree: true,
    rankingContext: "Listed as a top alternative"
  },
  sentiment: {
    score: 75,
    label: "positive",
    positives: ["User-friendly", "Comprehensive analytics"],
    negatives: []
  },
  recommendation: {
    type: "strong_alternative",
    bestFor: ["Enterprise teams", "Advanced analytics needs"],
    caveats: ["Premium pricing"]
  },
  competitors: [
    {
      name: "CompetitorA",
      domain: "competitora.com",
      visibility: 95,
      sentiment: 80,
      rankPosition: 1,
      isRecommended: true,
      winsOver: ["OneGlance"],
      losesTo: []
    }
  ],
  perception: {
    coreClaims: ["AI-powered analytics", "Real-time insights"],
    differentiators: ["Multi-model analysis"],
    bestKnownFor: "Brand visibility tracking",
    pricingPerception: "mid_range"
  },
  risks: {
    hasRisks: false,
    items: []
  },
  actions: [
    {
      priority: "high",
      recommendation: "Emphasize unique differentiators in marketing content"
    }
  ]
};
Used in:
  • packages/services/src/analysis/runAnalysis.ts:8 - Analysis generation
  • packages/services/src/analysis/fetchAnalysedPrompts.ts:4 - Data retrieval
  • apps/web/src/app/(auth)/dashboard/_hooks/use-dashboard-data.ts:1 - Dashboard data processing
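Several fields are nullable and `metadata` is optional, so consumers guard before reading. A hedged sketch of a one-line summary helper (`summarize` is hypothetical, not part of the module); it relies only on a structural subset of `BrandAnalysisResult`:

```typescript
// Structural subset of BrandAnalysisResult needed for the summary.
interface ResultSummaryInput {
  geoScore: { overall: number; verdict: string };
  presence: { mentioned: boolean; visibility: number };
  position: { rankPosition: number | null; totalRanked: number | null };
}

function summarize(result: ResultSummaryInput): string {
  // An absent brand short-circuits before any score is reported.
  if (!result.presence.mentioned) return "Brand not mentioned";
  // Rank is only meaningful when both rank fields are non-null.
  const rank =
    result.position.rankPosition !== null && result.position.totalRanked !== null
      ? ` (ranked ${result.position.rankPosition}/${result.position.totalRanked})`
      : "";
  return `GEO ${result.geoScore.overall}/100${rank}: ${result.geoScore.verdict}`;
}
```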

AnalysisModelInput

Input structure for model-specific analysis.
interface AnalysisModelInput {
  model_provider: string;
  response: string;
}
model_provider
string
required
The AI model provider identifier (e.g., “chatgpt”, “claude”)
response
string
required
The response text from the model

Usage

const input: AnalysisModelInput = {
  model_provider: "claude",
  response: "OneGlance provides comprehensive AI analytics..."
};

PromptAnalysis

Database representation of a prompt analysis stored in ClickHouse.
interface PromptAnalysis {
  id: string;
  prompt_id: string;
  workspace_id: string;
  user_id: string;
  model_provider: string;
  prompt: string;
  brand_analysis: string;
  prompt_run_at: string;
  created_at: string;
}
id
string
required
Unique identifier for the analysis record
prompt_id
string
required
References the prompt that was analyzed
workspace_id
string
required
Workspace context identifier
user_id
string
required
User who created the prompt
model_provider
string
required
AI model provider that generated the response
prompt
string
required
The original prompt text (stored for convenience)
brand_analysis
string
required
Complete BrandAnalysisResult serialized as JSON string
prompt_run_at
string
required
ISO 8601 timestamp when the prompt was executed
created_at
string
required
ISO 8601 timestamp when the analysis was created

Usage

This type represents the persisted structure in ClickHouse:
const stored: PromptAnalysis = {
  id: "analysis_123",
  prompt_id: "prompt_456",
  workspace_id: "ws_789",
  user_id: "user_012",
  model_provider: "claude",
  prompt: "What are the best AI tools?",
  brand_analysis: JSON.stringify(brandAnalysisResult),
  prompt_run_at: "2026-03-04T10:00:00Z",
  created_at: "2026-03-04T10:05:00Z"
};
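Because `brand_analysis` holds a serialized `BrandAnalysisResult`, reading a stored row back requires a parse step. A minimal sketch (`parseBrandAnalysis` and its null-on-error behavior are assumptions, not part of the module):

```typescript
// Parse the serialized brand_analysis column; returns null on malformed JSON
// rather than throwing, so callers can treat unparseable rows as unanalyzed.
function parseBrandAnalysis<T = unknown>(raw: string): T | null {
  try {
    return JSON.parse(raw) as T;
  } catch {
    return null;
  }
}
```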

AnalysisRecord

Flattened analysis record structure optimized for filtering and display.
interface AnalysisRecord {
  // Identifiers
  id: string;
  prompt_id: string;
  prompt_run_at: string;
  prompt: string;

  // User context
  user_id: string;
  workspace_id: string;

  // Model info
  model_provider: string;

  // Response data
  response: string;
  sources: Source[];

  // Analysis data
  brand_analysis?: BrandAnalysisResult;

  // Analysis status
  is_analysed?: boolean;

  // Timestamps
  created_at: string;
}
id
string
required
Unique record identifier
prompt_id
string
required
References the prompt
prompt_run_at
string
required
When the prompt was executed
prompt
string
required
The prompt text
user_id
string
required
User identifier
workspace_id
string
required
Workspace identifier
model_provider
string
required
AI model provider name
response
string
required
The AI-generated response text
sources
Source[]
required
Array of sources cited in the response
brand_analysis
BrandAnalysisResult
Parsed analysis object (only present if analyzed)
is_analysed
boolean
True if the record has been analyzed, false if it’s a raw response
created_at
string
required
ISO 8601 timestamp of record creation

Usage

const record: AnalysisRecord = {
  id: "rec_123",
  prompt_id: "prompt_456",
  prompt_run_at: "2026-03-04T10:00:00Z",
  prompt: "What are the best AI analytics platforms?",
  user_id: "user_789",
  workspace_id: "ws_012",
  model_provider: "claude",
  response: "OneGlance is a leading platform...",
  sources: [],
  brand_analysis: { /* ... */ },
  is_analysed: true,
  created_at: "2026-03-04T10:05:00Z"
};
Used in:
  • packages/utils/src/export/buildAnalysisCsvRow.ts:1 - CSV export functionality
  • packages/utils/src/analysis/filterAnalysisRecords.ts:1 - Filtering operations
  • apps/web/src/app/(auth)/prompts/page.tsx:5 - Prompts page display
  • apps/web/src/app/(auth)/dashboard/page.tsx:5 - Dashboard display
  • apps/web/src/app/(auth)/dashboard/_hooks/use-dashboard-data.ts:1 - Data hooks
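Since `brand_analysis` is only present on analyzed records, a narrowing type guard keeps downstream code free of optional-field checks. A hypothetical sketch (`isAnalysed` is not part of the module):

```typescript
// Structural subset of AnalysisRecord relevant to the guard.
interface MaybeAnalysed {
  brand_analysis?: object;
  is_analysed?: boolean;
}

// Narrows to records whose analysis payload is actually present.
function isAnalysed<T extends MaybeAnalysed>(
  r: T
): r is T & { brand_analysis: object } {
  return r.is_analysed === true && r.brand_analysis !== undefined;
}
```

After `records.filter(isAnalysed)`, TypeScript treats `brand_analysis` as non-optional on every element.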

AnalysisMetadata

Metadata about available filters in the analysis dataset.
interface AnalysisMetadata {
  available_brands: Array<{
    name: string;
    website: string;
  }>;
  available_models: string[];
}
available_brands
array
required
List of brands that have analysis data
available_brands[].name
string
required
Brand name
available_brands[].website
string
required
Brand website URL
available_models
string[]
required
List of AI model providers with available data

Usage

const metadata: AnalysisMetadata = {
  available_brands: [
    { name: "OneGlance", website: "https://oneglance.ai" },
    { name: "CompetitorA", website: "https://competitora.com" }
  ],
  available_models: ["chatgpt", "claude", "gemini"]
};
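`available_models` can typically be derived from a record set by deduplicating providers. A hypothetical sketch (`collectModels` is not part of the module):

```typescript
// Deduplicate model providers across records and return them sorted.
function collectModels(records: { model_provider: string }[]): string[] {
  return Array.from(new Set(records.map((r) => r.model_provider))).sort();
}
```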

AnalysisResponse

Complete response object containing analysis records and metadata.
interface AnalysisResponse {
  records: AnalysisRecord[];
  metadata: AnalysisMetadata;
}
records
AnalysisRecord[]
required
Array of analysis records
metadata
AnalysisMetadata
required
Metadata about available filters and options

Usage

const response: AnalysisResponse = {
  records: [
    { /* AnalysisRecord */ }
  ],
  metadata: {
    available_brands: [{ name: "OneGlance", website: "https://oneglance.ai" }],
    available_models: ["chatgpt", "claude"]
  }
};

AnalysisRow

Legacy analysis row structure with brand metrics.
interface AnalysisRow {
  id: string;
  prompt_id: string;
  prompt_run_at: string;
  user_id: string;
  workspace_id: string;
  model_provider: string;
  response: string;
  brand_metrics: string | BrandMetricMap;
  sources: Source[];
  created_at: string;
}
id
string
required
Unique row identifier
prompt_id
string
required
Associated prompt ID
prompt_run_at
string
required
Prompt execution timestamp
user_id
string
required
User identifier
workspace_id
string
required
Workspace identifier
model_provider
string
required
AI model provider
response
string
required
AI response text
brand_metrics
string | BrandMetricMap
required
Brand metrics as JSON string or parsed object. See BrandMetricMap in metrics types.
sources
Source[]
required
Source citations
created_at
string
required
Record creation timestamp
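Because `brand_metrics` may arrive either as a JSON string or as an already-parsed object, consumers usually normalize it before use. A minimal sketch (the `BrandMetricMap` alias below is a placeholder; see the metrics types for the real shape):

```typescript
// Placeholder for the real BrandMetricMap defined in the metrics types.
type BrandMetricMap = Record<string, unknown>;

// Accepts either form of the union and always returns the parsed object.
function normalizeBrandMetrics(value: string | BrandMetricMap): BrandMetricMap {
  return typeof value === "string" ? (JSON.parse(value) as BrandMetricMap) : value;
}
```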