Overview

The agent types module defines interfaces for handling agent responses, citations, and content blocks. These types are used throughout the OneGlance system to structure AI agent outputs and manage source citations.

AgentCitation

Represents a citation or reference within agent-generated content.
interface AgentCitation {
  text: string;
  href?: string | null;
  title?: string | null;
  ariaLabel?: string | null;
  type?: "link" | "superscript" | "button";
}
  • text (string, required): The citation text to display.
  • href (string | null, optional): URL the citation links to.
  • title (string | null, optional): Tooltip or title text for the citation.
  • ariaLabel (string | null, optional): Accessibility label for screen readers.
  • type ('link' | 'superscript' | 'button', optional): Visual style of the citation; determines how it is rendered in the UI.

Usage

Citations are used to link agent responses back to their source material:
const citation: AgentCitation = {
  text: "[1]",
  href: "https://example.com/source",
  title: "Example Source",
  type: "superscript"
};
Used in:
  • ContentBlock - as part of content block citations
  • ExtractionResult - as inline citations in extracted content
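
As an illustration, the `type` field can drive how a citation is rendered. The helper below is a hypothetical sketch (not part of the module; a real UI would likely use a component framework rather than string concatenation):

```typescript
interface AgentCitation {
  text: string;
  href?: string | null;
  title?: string | null;
  ariaLabel?: string | null;
  type?: "link" | "superscript" | "button";
}

// Hypothetical helper: renders a citation to an HTML string,
// emitting only the attributes that are actually set.
function renderCitation(c: AgentCitation): string {
  const attrs = [
    c.href ? `href="${c.href}"` : null,
    c.title ? `title="${c.title}"` : null,
    c.ariaLabel ? `aria-label="${c.ariaLabel}"` : null,
  ]
    .filter(Boolean)
    .join(" ");

  switch (c.type) {
    case "superscript":
      return `<sup><a ${attrs}>${c.text}</a></sup>`;
    case "button":
      return `<button ${attrs}>${c.text}</button>`;
    default: // "link" or unspecified
      return `<a ${attrs}>${c.text}</a>`;
  }
}
```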

ContentBlock

Represents a structured block of content with associated metadata and citations.
interface ContentBlock {
  text: string;
  tag: string;
  citations?: AgentCitation[];
}
  • text (string, required): The main text content of the block.
  • tag (string, required): HTML tag or identifier for the content block (e.g., "p", "h1", "ul").
  • citations (AgentCitation[], optional): Citations associated with this content block.

Usage

Content blocks structure agent responses into semantic chunks:
const block: ContentBlock = {
  text: "OneGlance provides AI-powered analytics.",
  tag: "p",
  citations: [
    {
      text: "[1]",
      href: "https://oneglance.ai",
      type: "superscript"
    }
  ]
};
Used in:
  • ExtractionResult - array of content blocks from extraction
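
Because each block carries its own `tag`, blocks can be serialized back to markup directly. A minimal, hypothetical serializer (the function name and citation-marker placement are illustrative):

```typescript
interface AgentCitation {
  text: string;
  href?: string | null;
}

interface ContentBlock {
  text: string;
  tag: string;
  citations?: AgentCitation[];
}

// Hypothetical sketch: turns content blocks into an HTML fragment,
// appending each block's citation markers after its text.
function blocksToHtml(blocks: ContentBlock[]): string {
  return blocks
    .map((b) => {
      const marks = (b.citations ?? []).map((c) => c.text).join("");
      return `<${b.tag}>${b.text}${marks}</${b.tag}>`;
    })
    .join("\n");
}
```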

ExtractionResult

Complete result from an agent extraction operation, including the response text, structured content, citations, and sources.
interface ExtractionResult {
  response: string;
  contentBlocks: ContentBlock[];
  inlineCitations: AgentCitation[];
  sources: Source[];
  hasSourcesButton: boolean;
  extractionErrors: string[];
}
  • response (string, required): The raw text response from the agent.
  • contentBlocks (ContentBlock[], required): Structured content blocks parsed from the response.
  • inlineCitations (AgentCitation[], required): Citations that appear inline within the response text.
  • sources (Source[], required): Source documents or URLs referenced in the response; each source contains a URL, title, and other metadata.
  • hasSourcesButton (boolean, required): Whether to display a "Show Sources" button in the UI.
  • extractionErrors (string[], required): Error messages encountered during extraction.

Usage

This type represents the complete output from agent content extraction:
const result: ExtractionResult = {
  response: "OneGlance is a leading AI analytics platform...",
  contentBlocks: [
    {
      text: "OneGlance is a leading AI analytics platform",
      tag: "p",
      citations: [{ text: "[1]", href: "https://oneglance.ai", type: "superscript" }]
    }
  ],
  inlineCitations: [{ text: "[1]", href: "https://oneglance.ai", type: "superscript" }],
  sources: [
    {
      title: "OneGlance Homepage",
      cited_text: "AI analytics platform",
      url: "https://oneglance.ai",
      domain: "oneglance.ai",
      favicon: "https://oneglance.ai/favicon.ico"
    }
  ],
  hasSourcesButton: true,
  extractionErrors: []
};
Used in:
  • Agent response processing pipelines
  • UI components that display agent-generated content
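
Since `extractionErrors` is always present, consumers can branch on its contents rather than on thrown exceptions. A hedged sketch of such a check (the helper name and "clean" criteria are illustrative, not part of the module):

```typescript
interface ExtractionResultLike {
  response: string;
  extractionErrors: string[];
}

// Hypothetical guard: treats an extraction as usable when it produced
// a non-empty response and recorded no errors.
function isCleanExtraction(r: ExtractionResultLike): boolean {
  return r.extractionErrors.length === 0 && r.response.length > 0;
}
```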

AskPromptResult

Result from executing a prompt against an AI model, including the response and associated metadata.
interface AskPromptResult {
  userId: string;
  workspaceId: string;
  promptId: string;
  prompt: string;
  response: string;
  sources: Source[];
}
  • userId (string, required): Unique identifier of the user who submitted the prompt.
  • workspaceId (string, required): Unique identifier of the workspace context.
  • promptId (string, required): Unique identifier for this specific prompt.
  • prompt (string, required): The original prompt text submitted to the AI model.
  • response (string, required): The AI model's response text.
  • sources (Source[], required): Sources cited in the response.

Usage

const result: AskPromptResult = {
  userId: "user_123",
  workspaceId: "ws_456",
  promptId: "prompt_789",
  prompt: "What are the best AI analytics platforms?",
  response: "OneGlance is a leading AI analytics platform...",
  sources: []
};
Used in:
  • Prompt execution workflows
  • Response storage and retrieval
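
For storage and retrieval, results are naturally keyed by their three identifiers. One hypothetical composite-key scheme (the format below is purely illustrative, not the system's actual storage layout):

```typescript
interface AskPromptResultIds {
  userId: string;
  workspaceId: string;
  promptId: string;
}

// Hypothetical: builds a unique lookup key from the three identifiers.
function resultKey(r: AskPromptResultIds): string {
  return `${r.workspaceId}/${r.userId}/${r.promptId}`;
}
```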

Provider

Supported AI model providers.
const PROVIDER_LIST = [
  "chatgpt",
  "claude",
  "perplexity",
  "gemini",
  "ai-overview",
] as const;

type Provider = (typeof PROVIDER_LIST)[number];
The Provider type is a union of supported AI model provider identifiers:
  • "chatgpt" - OpenAI’s ChatGPT
  • "claude" - Anthropic’s Claude
  • "perplexity" - Perplexity AI
  • "gemini" - Google’s Gemini
  • "ai-overview" - Google AI Overview

Usage

const provider: Provider = "claude";
Used in:
  • Model selection and filtering
  • Analysis record identification
  • Results aggregation by provider
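
Because `PROVIDER_LIST` is a runtime constant (not just a type), it can back a type guard for validating untrusted strings. The guard below is a sketch, not part of the module:

```typescript
const PROVIDER_LIST = [
  "chatgpt",
  "claude",
  "perplexity",
  "gemini",
  "ai-overview",
] as const;

type Provider = (typeof PROVIDER_LIST)[number];

// Narrows an arbitrary string to Provider at runtime.
function isProvider(value: string): value is Provider {
  return (PROVIDER_LIST as readonly string[]).includes(value);
}
```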

AgentResult

Result wrapper for agent operations, indicating success or failure.
type AgentResult = {
  status: "fulfilled" | "rejected";
  data: AskPromptResult[];
};
  • status ('fulfilled' | 'rejected', required): Whether the agent operation succeeded or failed.
  • data (AskPromptResult[], required): Prompt results; may be empty if rejected.

Usage

const result: AgentResult = {
  status: "fulfilled",
  data: [
    {
      userId: "user_123",
      workspaceId: "ws_456",
      promptId: "prompt_789",
      prompt: "What are the best AI platforms?",
      response: "Here are the top platforms...",
      sources: []
    }
  ]
};
Used in:
  • Multi-model prompt execution
  • Error handling in agent workflows
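
The `status` values mirror `Promise.allSettled`, so one plausible way such results arise is by collapsing a settled batch of prompt executions. This is a sketch under that assumption (the `Source` type is simplified, and the actual pipeline may handle partial failure differently):

```typescript
type AskPromptResult = {
  userId: string;
  workspaceId: string;
  promptId: string;
  prompt: string;
  response: string;
  sources: unknown[]; // Source[] simplified for this sketch
};

type AgentResult = {
  status: "fulfilled" | "rejected";
  data: AskPromptResult[];
};

// Sketch: collapse a batch of prompt executions into one AgentResult.
// Here any rejection marks the whole batch rejected while keeping the
// results that did succeed.
async function runAgent(
  tasks: Promise<AskPromptResult>[],
): Promise<AgentResult> {
  const settled = await Promise.allSettled(tasks);
  const fulfilled = settled.filter(
    (s): s is PromiseFulfilledResult<AskPromptResult> =>
      s.status === "fulfilled",
  );
  return {
    status: fulfilled.length === settled.length ? "fulfilled" : "rejected",
    data: fulfilled.map((s) => s.value),
  };
}
```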

ModelResult

Aggregates results from multiple AI model providers.
type ModelResult = Record<Provider, AgentResult>;
A record mapping each AI provider to its corresponding agent result. This allows tracking results across all providers simultaneously.

Usage

const modelResults: ModelResult = {
  chatgpt: {
    status: "fulfilled",
    data: [/* ... */]
  },
  claude: {
    status: "fulfilled",
    data: [/* ... */]
  },
  perplexity: {
    status: "rejected",
    data: []
  },
  gemini: {
    status: "fulfilled",
    data: [/* ... */]
  },
  "ai-overview": {
    status: "fulfilled",
    data: [/* ... */]
  }
};
Used in:
  • Multi-model analysis workflows
  • Comparing responses across different AI providers
  • Aggregating results for dashboard displays
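
As one aggregation sketch, a hypothetical helper (not part of the module) could list the providers whose runs succeeded, e.g. to drive a dashboard filter:

```typescript
type Provider = "chatgpt" | "claude" | "perplexity" | "gemini" | "ai-overview";

type AgentResult = {
  status: "fulfilled" | "rejected";
  data: { response: string }[]; // AskPromptResult simplified for this sketch
};

type ModelResult = Record<Provider, AgentResult>;

// Sketch: returns the providers whose agent operation succeeded.
function fulfilledProviders(results: ModelResult): Provider[] {
  return (Object.keys(results) as Provider[]).filter(
    (p) => results[p].status === "fulfilled",
  );
}
```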