Overview
OneGlance tracks how AI providers mention your brand using multiple metrics. This guide explains what each metric means, how it’s calculated, and how to use it for strategic decisions.

All metrics are derived from actual AI provider responses to your prompts. OneGlance does not simulate or estimate; every score is grounded in real LLM output.
Core Metrics Explained
GEO Score (Generative Engine Optimization)
The GEO Score (0-100) measures your brand’s overall AI visibility. It’s a weighted average of four components:

Visibility (25%)
How prominently your brand appears in responses.

Factors:
- Coverage: How much text discusses your brand
- Placement: Where your brand first appears
- Structural prominence: Headings, lists, emphasis
- Frequency: Number of mentions
- Contextual framing: Role in the response
Rank (25%)
Your position in recommendation lists.

Mapping:
- #1 → 100 points
- #2 → 80 points
- #3 → 65 points
- #4 → 50 points
- #5 → 40 points
- #6+ → 30 points
- Mentioned but unranked → 15 points
- Absent → 0 points
Sentiment (25%)
Tone of mentions (positive/neutral/negative).

Scale:
- 81-100: Enthusiastic superlatives
- 60-80: Favorable with caveats
- 41-59: Neutral/factual
- 21-40: Significant drawbacks
- 0-20: Actively discouraged
Recommendation (25%)
Strength of endorsement.

Types:
- Top pick: 100 points
- Strong alternative: 80 points
- Conditional: 60 points
- Mentioned only: 30 points
- Discouraged: 10 points
- Not mentioned: 0 points
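Putting the four components together, the GEO Score can be sketched as an equal-weight average, with the rank table above mapped to points first. This is an illustrative sketch only; the function and parameter names are hypothetical, not OneGlance’s actual code:

```typescript
// Illustrative sketch: converts an absolute rank to points per the table
// above, then averages the four 0-100 components at 25% each.
function rankPoints(absoluteRank: number | null, mentioned: boolean): number {
  if (!mentioned) return 0;             // absent
  if (absoluteRank === null) return 15; // mentioned but unranked
  const table: Record<number, number> = { 1: 100, 2: 80, 3: 65, 4: 50, 5: 40 };
  return table[absoluteRank] ?? 30;     // #6 and beyond
}

function geoScore(
  visibility: number,
  rankScore: number,
  sentiment: number,
  recommendation: number
): number {
  return 0.25 * (visibility + rankScore + sentiment + recommendation);
}
```

For example, a brand ranked #2 with visibility 70, sentiment 72, and a strong-alternative recommendation (80 points) would score 0.25 × (70 + 80 + 72 + 80) = 75.5.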
Interpreting GEO Score Ranges
80-100: Dominant AI Visibility
What it means: Your brand is a go-to recommendation for this prompt topic.

Typical characteristics:
- Listed in top 3 absolute positions
- Positive sentiment with superlatives (“best”, “excellent”, “standout”)
- Recommended without major caveats
- Multiple mentions throughout response
“For sales pipeline management, HubSpot stands out as the top choice for mid-market teams. Its intuitive interface, robust integrations, and excellent customer support make it the clear leader in this space. HubSpot consistently ranks #1 for ease of use…”

Strategic actions:
- Maintain current positioning (don’t relax)
- Identify what content/sources AI providers cite
- Expand to adjacent prompt categories
- Monitor competitors for shifts
60-79: Strong AI Presence
What it means: Your brand is reliably mentioned and favorably positioned.

Typical characteristics:
- Appears in top 3-5 positions
- Positive sentiment, often with conditions (“great for X use case”)
- Recommended as a strong alternative
- Discussed alongside category leaders
“While Salesforce dominates enterprise CRM, Pipedrive is an excellent choice for small to mid-sized sales teams. It offers streamlined pipeline management at a fraction of the cost, though it lacks some advanced features.”

Strategic actions:
- Push for #1 ranking: Address caveats mentioned by AI
- Increase content volume on differentiators
- Target prompts where you rank #4-5 to break into top 3
40-59: Moderate Visibility
What it means: Your brand appears but isn’t prominently recommended.

Typical characteristics:
- Ranked #5-8 or mentioned without ranking
- Neutral sentiment (factual descriptions)
- Listed among “other options” or “also consider”
- Limited coverage (1-2 sentences)
“Other CRM tools worth considering include Freshsales, Copper, and Close. Each has its strengths, but they’re generally less popular than HubSpot or Salesforce.”

Strategic actions:
- Diagnose why you’re not higher: lack of content? competitor dominance? outdated info?
- Publish authoritative content targeting this prompt topic
- Get featured in high-authority review sites (G2, Capterra) that AI providers cite
- Test prompt variations to see if wording affects ranking
20-39: Low Visibility
What it means: Your brand rarely appears or is positioned negatively.

Typical characteristics:
- Mentioned only in passing or as a negative example
- Appears in “long tail” (rank #10+)
- Contrastive mentions (“unlike [YourBrand], Competitor X…”)
- Low sentiment or factual errors
“Many teams have moved away from legacy tools like SugarCRM in favor of more modern alternatives such as HubSpot and Pipedrive.”

Strategic actions:
- Investigate if AI has outdated information about your brand
- Check for factual errors in AI responses (file corrections if possible)
- Increase PR and content marketing to update AI training data
- Consider rebranding or repositioning if perception is entrenched
0-19: Absent or Harmful
What it means: Your brand is missing from AI recommendations or actively criticized.

Typical characteristics:
- Not mentioned at all in responses
- Appears only with warnings or discouragements
- AI lacks awareness of your brand in this category
- Competitors dominate all rankings
Strategic actions:
- Confirm your brand name and domain are correct in workspace settings
- Audit if AI providers even know your brand exists (test with direct brand queries)
- Launch aggressive content marketing and link-building campaigns
- Get listed on major review platforms
- Consider partnerships or PR to raise awareness
Detailed Metrics Breakdown
Presence Metrics
Found on the Dashboard and Prompts pages.

Presence Rate
Definition: Percentage of AI responses in which your brand was mentioned.

Calculation: (responses mentioning your brand ÷ total responses) × 100.

Interpretation:
- 80-100%: Excellent category awareness
- 60-79%: Strong presence, some gaps
- 40-59%: Moderate presence, significant room for growth
- 20-39%: Low awareness, major visibility issues
- 0-19%: Critical awareness gap
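The calculation itself is a simple ratio; a minimal sketch (function name hypothetical):

```typescript
// Presence rate: the share of AI responses that mention the brand,
// as a rounded percentage. A sketch of the definition above.
function presenceRate(responsesWithMention: number, totalResponses: number): number {
  if (totalResponses === 0) return 0;
  return Math.round((responsesWithMention / totalResponses) * 100);
}
```

For example, a brand mentioned in 12 of 20 responses has a presence rate of 60%, which falls in the “strong presence, some gaps” band.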
Mention Count
Definition: Number of times your brand is referenced in a single response.

Counting rules (from packages/services/src/analysis/analysisPrompt.ts:48):
Counts as a mention:
- Exact brand name: “HubSpot”
- Domain reference: “hubspot.com”
- Sub-products: “HubSpot CRM”, “HubSpot Marketing Hub”
- Well-known abbreviations: “SFDC” for Salesforce
Does NOT count as a mention:
- Brand name only in URLs/citations (not prose)
- Partial string matches (“Hub” ≠ “HubSpot”)
- Brand mentioned in echoed user question (only counts in AI’s own answer)
Interpretation:
- High mention count (4+): Brand is central to the response → good visibility
- Low mention count (1-2): Brand is peripheral → increase content depth on this topic
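In OneGlance the counting is performed by the analysis LLM following the rules above. As a rough, simplified illustration of the whole-word rule (why “Hub” never matches “HubSpot”), a regex-based sketch might look like this; the function name and approach are assumptions, not the actual implementation:

```typescript
// Simplified illustration only: \b enforces word-boundary matches, so
// partial strings never count. Does not handle abbreviations, URLs, or
// sub-products, which the real LLM-driven rules cover.
function countMentions(answerText: string, brand: string): number {
  const escaped = brand.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  const matches = answerText.match(new RegExp(`\\b${escaped}\\b`, "g"));
  return matches ? matches.length : 0;
}
```

Here `countMentions("HubSpot CRM is great, and HubSpot integrates well.", "HubSpot")` counts 2 mentions, while searching for "Hub" in "GitHub" counts nothing.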
Visibility Score (0-100)
A five-dimension calculation of how prominently your brand appears:

- Coverage (25%)
- Placement (25%)
- Structural Prominence (20%)
- Frequency (15%)
- Contextual Framing (15%)
Coverage

Question: How much text discusses your brand?

Scoring:
- 0-5: Name-dropped in a word or fragment
- 6-15: One brief sentence
- 16-30: Short paragraph (2-3 sentences)
- 31-50: Multiple paragraphs
- 51-75: One of the primary subjects
- 76-100: Dominates the response
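The five weighted dimensions listed above combine into the single 0-100 score. A minimal sketch of that weighted sum (field and function names are illustrative, not the actual schema):

```typescript
// Weighted sum of the five visibility dimensions, each scored 0-100.
// Weights mirror the percentages listed above.
interface VisibilityDimensions {
  coverage: number;   // 25%
  placement: number;  // 25%
  structural: number; // 20%
  frequency: number;  // 15%
  framing: number;    // 15%
}

function visibilityScore(d: VisibilityDimensions): number {
  return (
    0.25 * d.coverage +
    0.25 * d.placement +
    0.20 * d.structural +
    0.15 * d.frequency +
    0.15 * d.framing
  );
}
```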
From packages/services/src/analysis/analysisPrompt.ts:190.

Rank and Position
Absolute Rank vs Local Rank
Example:

| Brand | Local Rank | Absolute Rank |
|---|---|---|
| HubSpot | #1 in Small Teams | #1 (first overall) |
| Pipedrive | #2 in Small Teams | #2 |
| Freshsales | #3 in Small Teams | #3 |
| Salesforce | #1 in Enterprise | #4 (fourth brand mentioned) |
| Microsoft Dynamics | #2 in Enterprise | #5 |
| SAP CRM | #3 in Enterprise | #6 |
From packages/services/src/analysis/analysisPrompt.ts:129:
In this example, Salesforce’s LOCAL rank within “Best for Enterprise” is #1, but its ABSOLUTE rank in the full response is #4 (it is the 4th distinct brand listed overall).
Why Absolute Rank Matters
Users read responses sequentially. If your brand appears as “#1 for Enterprise” but is the 7th brand mentioned overall, most users may never reach it.

Strategic implication: Aim for absolute top-3 positioning, not just top rank within a niche category.

isTopPick vs isTopThree
isTopPick:
- Requires absolute rank #1 AND explicit superlative language
- Examples: “best overall”, “top recommendation”, “our #1 pick”
- Being listed first without superlatives → isTopPick = false

isTopThree:
- True if absolute rank is 1, 2, or 3
- No language requirement; purely positional
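The distinction can be expressed as two small predicates. This is a sketch; the field names echo the doc, but the signatures are assumptions:

```typescript
// isTopThree is purely positional; isTopPick additionally requires
// explicit superlative language in the response.
function isTopThree(absoluteRank: number | null): boolean {
  return absoluteRank !== null && absoluteRank >= 1 && absoluteRank <= 3;
}

function isTopPick(absoluteRank: number | null, hasSuperlative: boolean): boolean {
  return absoluteRank === 1 && hasSuperlative;
}
```

A brand listed first without superlatives therefore yields `isTopThree → true` but `isTopPick → false`.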
Sentiment Analysis
Sentiment measures the tone of AI mentions.

Sentiment Score (0-100)
From packages/services/src/analysis/analysisPrompt.ts:162:
| Condition | Score Range |
|---|---|
| Response explicitly warns against or discourages the brand | 0-20 |
| Response notes significant drawbacks or unfavorable comparisons | 21-40 |
| Mention is purely factual/descriptive with zero evaluative language | 41-59 |
| Response uses favorable language WITH noted limitations | 60-80 |
| Response uses enthusiastic superlatives with NO caveats | 81-100 |
- A score of 81+ requires EXPLICIT superlatives (“excellent”, “best”, “standout”)
- “Good”, “solid”, “popular” → 60-75 range, NOT 80+
- Being listed in a recommendation list does NOT automatically = positive sentiment
- If response lists BOTH pros AND cons → sentiment cannot exceed 79
Sentiment Label
Automatic label based on score:

- very_positive (81-100)
- positive (60-80)
- neutral (41-59)
- negative (21-40)
- very_negative (0-20)
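The score-to-label mapping is a straightforward threshold check; a minimal sketch:

```typescript
type SentimentLabel =
  | "very_positive"
  | "positive"
  | "neutral"
  | "negative"
  | "very_negative";

// Thresholds mirror the ranges listed above.
function sentimentLabel(score: number): SentimentLabel {
  if (score >= 81) return "very_positive";
  if (score >= 60) return "positive";
  if (score >= 41) return "neutral";
  if (score >= 21) return "negative";
  return "very_negative";
}
```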
Positives and Negatives Arrays
Text snippets that explain the sentiment score (for example, the specific praise behind an 85-sentiment response).

Recommendation Type
Classifies the strength of AI endorsement. From packages/services/src/analysis/analysisPrompt.ts:334:
- top_pick
- strong_alternative
- conditional
- mentioned_only
- discouraged
- not_mentioned
top_pick

Definition: Brand is explicitly named as the #1 choice with superlative language.

Requirements:
- Absolute rank #1
- Language: “best overall”, “top recommendation”, “our #1 pick”
- Must be positioned as top recommendation of the ENTIRE response (not just a sub-category)
“For CRM tools, HubSpot is our top recommendation. It consistently delivers the best balance of features, usability, and support.”
Competitive Analysis
Competitor Extraction
From packages/services/src/analysis/analysisPrompt.ts:286:
Included as competitors:
- Brands directly compared to yours in the same response
- Brands listed alongside yours in rankings
- Brands mentioned in the same category discussion
NOT included as competitors:
- Brands in a completely different section/topic
- Generic category references (“CRM software” is not a competitor)
- Your own brand (never listed as its own competitor)
- Brands only in the user’s prompt (not AI’s response)
Competitor Deduplication
From packages/services/src/analysis/analysisPrompt.ts:302:
Rule: Sub-products are consolidated under the parent brand.
Examples:
- “Zoho CRM” + “Zoho One” + “Bigin by Zoho” → name: “Zoho”
- “Google Workspace” + “Gmail” + “Google Docs” → name: “Google”
- “Salesforce Sales Cloud” + “Service Cloud” → name: “Salesforce”
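One way to picture the consolidation rule is a parent-brand lookup built from the examples above. The table and function here are illustrative; in OneGlance the analysis LLM decides the parent brand:

```typescript
// Illustrative parent-brand table (from the examples above).
const PARENT_BRAND: Record<string, string> = {
  "Zoho CRM": "Zoho",
  "Zoho One": "Zoho",
  "Bigin by Zoho": "Zoho",
  "Google Workspace": "Google",
  "Gmail": "Google",
  "Google Docs": "Google",
  "Salesforce Sales Cloud": "Salesforce",
  "Service Cloud": "Salesforce",
};

function dedupeCompetitors(names: string[]): string[] {
  // Map each sub-product to its parent, then keep the first occurrence only.
  return [...new Set(names.map((n) => PARENT_BRAND[n] ?? n))];
}
```

For example, `dedupeCompetitors(["Zoho CRM", "Zoho One", "Gmail", "Pipedrive"])` yields `["Zoho", "Google", "Pipedrive"]`.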
Competitor Metrics
Each competitor has:

Visibility
Competitor’s prominence in the response (same 5-dimension formula as your brand).
Sentiment
How positively/negatively the competitor is mentioned.
Rank Position
Competitor’s absolute rank in the response.
isRecommended
Boolean: Was the competitor explicitly recommended?
winsOver
Areas where the competitor beats your brand (per AI response).
losesTo
Areas where your brand beats the competitor.
Competitive Landscape Table
On the Dashboard, the Competitive Landscape card shows:

- Brand: Competitor name (deduplicated)
- Mentions: Number of responses where competitor appeared
- Avg Rank: Average absolute position across all mentions
- Sentiment: Average sentiment score
Advanced Metrics
Brand Perception
Extracted from AI responses about how your brand is perceived.

Core Claims
Key statements AI providers make about your brand.

Differentiators
What AI says sets your brand apart from competitors.

Best Known For
Single phrase summarizing brand perception. Examples:

- “Ease of use and intuitive interface”
- “Enterprise-grade security and compliance”
- “Affordable pricing for small teams”
Pricing Perception
AI’s understanding of your pricing tier. Values:

- premium: High-cost, enterprise-focused
- mid_range: Balanced pricing
- budget: Low-cost option
- free: Freemium or fully free
- not_mentioned: AI didn’t discuss pricing
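Taken together, a brand perception result might look roughly like the following. The field names and example values are illustrative, not the exact OneGlance schema:

```typescript
// Hypothetical shape of a brand perception result; values are made up
// for illustration only.
const perception = {
  coreClaims: [
    "Offers an all-in-one platform for marketing, sales, and service",
  ],
  differentiators: ["Generous free tier compared to enterprise rivals"],
  bestKnownFor: "Ease of use and intuitive interface",
  pricingPerception: "mid_range" as const,
};
```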
Risk Identification
Flags issues in AI responses that could harm your brand. Risk types (from packages/services/src/analysis/analysisPrompt.ts:353):
outdated_info
Definition: AI states something factually outdated about your brand.

Example: “HubSpot does not offer native LinkedIn integration.” (If you launched LinkedIn integration 6 months ago, this is outdated.)

Severity: Usually critical if materially incorrect.

Action: Publish press releases, update review site listings, create content explicitly stating the correction.
factual_error
Definition: AI makes an incorrect claim (not just outdated; factually wrong).

Example: “Pipedrive is owned by Salesforce.” (Incorrect: Pipedrive is independent.)

Severity: critical

Action: File corrections with AI providers if possible (OpenAI, Google have feedback mechanisms). Increase authoritative content stating the fact.
brand_confusion
Definition: AI conflates your brand with another or attributes another brand’s features to yours.

Example: “Zoho CRM, formerly known as HubSpot…”

Severity: critical

Action: Severe brand positioning issue. Increase distinct branding in all content. Consider trademark enforcement if appropriate.
negative_association
Definition: AI associates your brand with a negative category or outcome.

Example: “CRM tools known for poor customer support include [YourBrand] and LegacyCRM.”

Severity: critical to warning

Action: Investigate whether the criticism is valid. If unfair, launch a reputation management campaign.
missing_from_response
Definition: Your brand is absent from a response where it objectively should appear.

Example: Prompt: “What are the best CRM tools?” AI lists 10 competitors but not your brand, despite it being a major player.

Severity: critical (indicates a severe visibility gap)

Action: Increase content marketing, get featured on major review sites, improve SEO for category terms.
Actions and Recommendations
Each analysis provides 3-5 specific, actionable recommendations. Priority levels:

- critical: Brand is actively harmed; immediate action required
- high: Major missed opportunity
- medium: Optimization opportunity
- low: Nice-to-have improvement
Using Metrics for Strategy
Scenario 1: Low Presence Rate (<40%)
Diagnosis: AI providers lack awareness of your brand.

Action plan:
- Content blitz: Publish 10-20 authoritative articles targeting your category
- Review sites: Get listed on G2, Capterra, TrustRadius with customer reviews
- PR campaign: Secure mentions in industry publications AI likely trains on
- Backlink building: Increase domain authority so AI trusts your content
- Re-run prompts monthly: Track presence rate improvement
Scenario 2: High Presence (70%+) but Low GEO Scores (<50)
Diagnosis: AI knows your brand but doesn’t recommend it.

Action plan:
- Analyze sentiment and negatives: What criticisms are recurring?
- Address product gaps: If AI mentions “lacks feature X”, prioritize building it
- Messaging overhaul: If perception is off, update positioning across all channels
- Competitive comparison content: Publish “[YourBrand] vs [Competitor]” emphasizing wins
- Customer proof: Case studies, testimonials, and reviews that counter negative perceptions
Scenario 3: Strong in Some Prompts, Weak in Others
Diagnosis: Visibility is uneven across topics.

Action plan:
- Identify patterns: Group prompts by topic/buyer persona
- Double down on strengths: Expand content for high-performing topics
- Fill gaps: Create content specifically for low-performing topics
- Test prompt wording: Sometimes rephrasing a prompt changes AI behavior
Scenario 4: Competitor Consistently Outranks You
Diagnosis: One competitor dominates AI mentions.

Action plan:
- Study competitor’s sources: What sites does AI cite when mentioning them?
- Get featured on those sources: Aim for G2, Gartner, review sites they dominate
- Direct comparison content: Publish “Why [YourBrand] vs [Competitor]” content
- Monitor competitor differentiators: See what AI says they do better, close those gaps
- Track their GEO scores: Use OneGlance to add prompts mentioning the competitor
Scenario 5: Factual Errors in AI Responses
Diagnosis: AI has incorrect or outdated info.

Action plan:
- Document errors: Screenshot AI responses showing incorrect claims
- Publish corrections: Blog posts, press releases explicitly correcting the info
- Update review sites: Ensure G2, Capterra, etc. have current accurate info
- File feedback with AI providers: OpenAI, Google, Anthropic have feedback forms
- Re-run prompts monthly: Verify corrections propagate to AI models
Exporting and Reporting
Share metrics with stakeholders via exports.

Dashboard Export
From the Dashboard, click Export → JSON or CSV. Contents:

- Aggregate stats (presence rate, avg rank, top competitor)
- Impact metrics (total responses, recommendation rate, sentiment)
- Brand perception (core claims, differentiators, pricing)
- Source intelligence (top domains, citation counts)
- Competitor data (visibility, sentiment, rank for each)
Prompts Export
From the Prompts page, click Export. Contents:

- Prompt-level metrics (GEO score, sentiment, visibility, position)
- Full AI responses for each prompt
- Response-level metrics (per provider)
- Sources and citations
Custom Reporting
For recurring executive reports, consider:

- Export JSON: Richer data for custom analysis
- Process in Python/R: Calculate custom metrics
- Visualize: Use Tableau, Looker, or custom dashboards
- Automate: Use OneGlance API to schedule exports
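As a starting point for custom analysis, a post-processing step over a Prompts export might average GEO scores. The row shape below is an assumption about the export contents, not the documented schema; adjust it to the JSON you actually download:

```typescript
// Assumed export row shape (illustrative); the real export contains
// additional fields such as sentiment, visibility, and position.
interface PromptRow {
  prompt: string;
  geoScore: number;
}

// Average GEO score across all exported prompts.
function averageGeoScore(rows: PromptRow[]): number {
  if (rows.length === 0) return 0;
  return rows.reduce((sum, r) => sum + r.geoScore, 0) / rows.length;
}
```

Run this against each scheduled export to track the workspace-wide average over time.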
Next Steps
Managing Prompts
Refine your prompt strategy based on metric insights
Scheduling
Track metric trends over time with automated runs
Team Collaboration
Share metric reports and coordinate strategy across teams
API Reference
Programmatically access metrics for custom dashboards
Related Concepts
- Setup Guide - Initial workspace configuration
- Analysis Concepts - How OneGlance calculates metrics
- Workspace API - Programmatic access to metrics