## Overview

OneGlanse tracks brand mentions across multiple AI providers (ChatGPT, Claude, Gemini, Perplexity, AI Overview). Adding a new provider involves:

- Creating a provider configuration
- Implementing response extraction logic
- Implementing source citation extraction
- Registering the provider
- Testing the integration
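For orientation, the files these steps touch (paths as referenced throughout this guide) look like:

```
apps/agent/src/core/providers/
├── index.ts                      # provider registry map
├── types.ts                      # the ProviderConfig interface
└── llama/
    ├── index.ts                  # the provider config
    └── lib/extractSources.ts     # source citation extraction
packages/types/src/types/agent.ts # PROVIDER_LIST and the Provider type
```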
## Provider Architecture

All provider behavior is declared in a single `ProviderConfig` interface located at:

### The ProviderConfig Interface

The interface uses the `Page` type from Playwright and the `Source` type from `@oneglanse/types`.
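As a rough sketch of its shape (the field list is inferred from the examples later in this guide; `Page` and `Source` are simplified stand-ins for the real Playwright and `@oneglanse/types` types — consult `apps/agent/src/core/providers/types.ts` for the actual definition):

```typescript
// Simplified stand-ins for Playwright's Page and @oneglanse/types' Source
type Page = unknown;
interface Source {
  url: string;
  title: string;
}

// Sketch of ProviderConfig, inferred from the fields used in this guide;
// the real definition lives in apps/agent/src/core/providers/types.ts.
interface ProviderConfig {
  url: string;
  warmupDelayMs: number;
  label: string;
  displayName: string;
  requiresWarmup: boolean;
  waitForResponse: (page: Page) => Promise<void>;
  extractResponse: (page: Page) => Promise<string>;
  extractSources: (page: Page) => Promise<Source[]>;
}
```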
## Step-by-Step: Adding a Provider

Let's add a hypothetical provider called "Llama" as an example. Create `apps/agent/src/core/providers/llama/index.ts`:

```typescript
import { extractAssistantMarkdown } from "../../../lib/input/markdown/toMarkdown.js";
import { waitForAssistantToFinish } from "../../../lib/input/response/waitForFinish.js";
import type { ProviderConfig } from "../types.js";
import { extractSourcesFromLlama } from "./lib/extractSources.js";

export const llamaConfig: ProviderConfig = {
  url: "https://llama.ai/chat",
  warmupDelayMs: 5000,
  label: "Llama",
  displayName: "Llama",
  requiresWarmup: true,
  waitForResponse: (page) => waitForAssistantToFinish(page, "llama"),
  extractResponse: (page) => extractAssistantMarkdown(page, "llama"),
  extractSources: async (page) => extractSourcesFromLlama(page),
};
```
- `url` - The provider's chat interface URL
- `warmupDelayMs` - How long to wait after navigation (usually 5000ms)
- `label` & `displayName` - Provider identification
- `requiresWarmup` - Set `true` to clear the editor before first use
- `waitForResponse` - Reuse the shared helper or create custom logic
- `extractResponse` - Reuse the shared markdown extractor or write a custom one
- `extractSources` - Custom source extraction (covered below)

If the provider doesn't have sources, create
`apps/agent/src/core/providers/llama/lib/extractSources.ts`:

```typescript
import type { Source } from "@oneglanse/types";
import type { Page } from "playwright";

export async function extractSourcesFromLlama(
  _page: Page
): Promise<Source[]> {
  // This provider doesn't provide sources
  return [];
}
```
If the provider does expose sources, implement the full extraction logic instead:

```typescript
import type { Source } from "@oneglanse/types";
import { SELECTORS } from "@oneglanse/utils";
import type { Locator, Page } from "playwright";
import {
  type RawSource,
  buildSources,
  clickButtonViaDispatch,
} from "../../_shared/sourceUtils.js";

export async function extractSourcesFromLlama(
  page: Page,
  sourcesButton: Locator
): Promise<Source[]> {
  // Extract raw source data from the page DOM.
  // sels receives SELECTORS.llama; swap the hardcoded selectors below
  // for provider-specific ones as needed.
  const rawSources = (await page.evaluate((sels) => {
    const results: Array<{
      rawHref: string;
      title: string;
      citedText: string;
      imgSrc: string | null;
    }> = [];

    // Find the sources panel/flyout
    const flyout = document.querySelector('[data-testid="sources-panel"]');
    if (!flyout) return results;

    // Query all source links
    const anchors = flyout.querySelectorAll<HTMLAnchorElement>("a[href]");
    for (const a of Array.from(anchors)) {
      let href = a.getAttribute("href");
      if (!href) continue;
      try {
        // Normalize the URL and strip any fragment
        href = new URL(href, location.origin).toString();
        href = href.replace(/#.*$/, "");
      } catch {
        continue;
      }
      const title = a.querySelector(".source-title")?.textContent?.trim() || "";
      const citedText = a.querySelector(".citation-text")?.textContent?.trim() || "";
      const imgSrc = a.querySelector("img")?.getAttribute("src") ?? null;
      results.push({ rawHref: href, title, citedText, imgSrc });
    }
    return results;
  }, SELECTORS.llama)) as RawSource[];

  // Close the sources panel
  if (!(await clickButtonViaDispatch(page, sourcesButton))) return [];
  await page.waitForTimeout(300);

  // Convert raw sources to typed Source objects with deduplication
  return buildSources(rawSources);
}
```
Then wire it into the config, opening the sources panel before extracting:

```typescript
import { openSourcesPanel } from "../../../lib/input/sources/openPanel.js";
import { findSourcesButton } from "../../../lib/input/sources/findButton.js";
import { extractSourcesFromLlama } from "./lib/extractSources.js";

export const llamaConfig: ProviderConfig = {
  // ... other config
  extractSources: async (page) => {
    const btn = await findSourcesButton(page);
    if (!btn) return [];
    await openSourcesPanel(page, btn);
    return extractSourcesFromLlama(page, btn);
  },
};
```
See ChatGPT's implementation at
`apps/agent/src/core/providers/chatgpt/index.ts:16-21` for this pattern.

Next, register the provider. Add it to `PROVIDER_LIST` in `packages/types/src/types/agent.ts`:

```typescript
export const PROVIDER_LIST = [
  "chatgpt",
  "claude",
  "perplexity",
  "gemini",
  "ai-overview",
  "llama", // Add your provider here
] as const;

export type Provider = (typeof PROVIDER_LIST)[number];
```
Then add your config to the provider map in `apps/agent/src/core/providers/index.ts`:

```typescript
import type { Provider } from "@oneglanse/types";
import { aiOverviewConfig } from "./ai-overview/index.js";
import { chatgptConfig } from "./chatgpt/index.js";
import { claudeConfig } from "./claude/index.js";
import { geminiConfig } from "./gemini/index.js";
import { perplexityConfig } from "./perplexity/index.js";
import { llamaConfig } from "./llama/index.js"; // Import your config
import type { ProviderConfig } from "./types.js";

export const PROVIDER_CONFIGS: Record<Provider, ProviderConfig> = {
  gemini: geminiConfig,
  chatgpt: chatgptConfig,
  perplexity: perplexityConfig,
  claude: claudeConfig,
  "ai-overview": aiOverviewConfig,
  llama: llamaConfig, // Add to the map
};
```
## Real Examples from the Codebase

### Example 1: Claude (No Sources)

The simplest provider implementation, at `apps/agent/src/core/providers/claude/index.ts`:

### Example 2: Gemini (With Sources)

A provider with source extraction, at `apps/agent/src/core/providers/gemini/index.ts`:

### Example 3: Perplexity (With Post-Navigation Hook)

A provider with custom navigation behavior, at `apps/agent/src/core/providers/perplexity/index.ts`:
## Understanding Source Extraction

### The Source Type

Defined in `packages/types/src/types/sources.ts`:

### The RawSource Type

Before processing, sources are extracted as `RawSource` from the DOM:
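As a sketch, `RawSource` mirrors the object literal built inside `page.evaluate` in the extraction example earlier (the real type is exported from the shared `sourceUtils` module):

```typescript
// Sketch of RawSource, matching the fields collected during DOM
// extraction; the real type lives in the shared sourceUtils module.
interface RawSource {
  rawHref: string;       // normalized absolute URL
  title: string;         // link title text
  citedText: string;     // snippet the provider cited
  imgSrc: string | null; // favicon/preview image, if any
}
```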
### The buildSources Helper

Converts raw sources to typed `Source[]` with normalization and deduplication.
Defined at `apps/agent/src/lib/extraction/sourceUtils.ts:28-54`:
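Its behavior can be sketched roughly as follows (the `Source` shape here is an assumption for illustration; consult the real implementation for the exact fields and normalization rules):

```typescript
// Stand-in types for this sketch (assumptions, not the real definitions)
interface RawSource {
  rawHref: string;
  title: string;
  citedText: string;
  imgSrc: string | null;
}
interface Source {
  url: string;
  title: string;
  citedText: string;
  imgSrc: string | null;
}

// Sketch of the normalization + deduplication described above; the real
// helper is at apps/agent/src/lib/extraction/sourceUtils.ts:28-54.
function buildSources(raw: RawSource[]): Source[] {
  const seen = new Set<string>();
  const out: Source[] = [];
  for (const r of raw) {
    // Drop empty hrefs and duplicates of URLs we've already kept
    if (!r.rawHref || seen.has(r.rawHref)) continue;
    seen.add(r.rawHref);
    out.push({ url: r.rawHref, title: r.title, citedText: r.citedText, imgSrc: r.imgSrc });
  }
  return out;
}
```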
### The clickButtonViaDispatch Helper

Closes flyouts/panels by dispatching a synthetic click event. Defined at
`apps/agent/src/lib/extraction/sourceUtils.ts:63-84`:
## Testing Your Provider

While developing, add logging to trace the extraction:

```typescript
import { logger } from "@oneglanse/utils";

export async function extractSourcesFromLlama(page: Page): Promise<Source[]> {
  logger.log("[llama] Starting source extraction");

  const rawSources = await page.evaluate(() => {
    const results = [];
    // ... extraction logic
    // Note: console.log inside evaluate prints to the *browser* console
    console.log(`Found ${results.length} sources`);
    return results;
  });

  logger.log(`[llama] Extracted ${rawSources.length} raw sources`);
  return buildSources(rawSources);
}
```
## Advanced Configuration

### Custom response waiting

If the shared `waitForAssistantToFinish` doesn't work for your provider:
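A custom waiter might, for example, treat the disappearance of a "stop generating" button as the end of the response. A minimal sketch (the selector, timeout, and `PageLike` stand-in are all assumptions, not real provider markup):

```typescript
// PageLike is a stand-in for Playwright's Page (assumption for this sketch)
interface PageLike {
  waitForSelector(
    selector: string,
    options: { state: "detached"; timeout: number }
  ): Promise<void>;
}

// Hypothetical custom waiter: the response is finished once the
// stop-generating button leaves the DOM.
async function waitForLlamaToFinish(page: PageLike): Promise<void> {
  await page.waitForSelector('[data-testid="stop-generating"]', {
    state: "detached",
    timeout: 120_000,
  });
}
```

Wire it into the config as `waitForResponse: (page) => waitForLlamaToFinish(page)`.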
### Custom response extraction

If the markdown extractor doesn't work:

### Between-prompts hook

Reset the UI state between consecutive prompts:

### Before-retry hook

Recover from errors before retrying:

## Common Pitfalls

### Bot detection

Some providers detect automation. Mitigate with:

### Dynamic selectors

Avoid brittle CSS class names. Prefer:

- `data-testid` attributes
- `aria-label` attributes
- Semantic HTML elements
- Text content matching
### Timeout errors

If responses take a long time:

### Empty sources

If sources don't extract, check:

- Is the sources button clicked?
- Is the panel fully loaded?
- Are selectors correct?
- Add debug logging:
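For example, a small debug helper can confirm whether the sources panel actually rendered before extraction runs (a sketch; the selector and `PageLike` stand-in are assumptions):

```typescript
// PageLike is a stand-in for Playwright's Page (assumption for this sketch)
interface PageLike {
  locator(selector: string): { count(): Promise<number> };
}

// Hypothetical debug helper: count sources-panel elements so you can
// tell whether the panel is present before extraction.
async function debugSourcesPanel(page: PageLike): Promise<number> {
  const n = await page.locator('[data-testid="sources-panel"]').count();
  console.log(`[llama] sources panel elements found: ${n}`);
  return n;
}
```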
## Submitting Your Provider

Verify every `ProviderConfig` field is implemented (return `[]` where not applicable), then follow the Contributing Guide to:

- Create a feature branch: `feature/add-llama-provider`
- Add the provider to `PROVIDER_LIST` in `packages/types/src/types/agent.ts`
- Implement the config at `apps/agent/src/core/providers/llama/index.ts`
- Register it in `apps/agent/src/core/providers/index.ts`
- Run `pnpm typecheck` and `pnpm lint:fix`

## Need Help?
If you get stuck:

1. Review existing provider implementations:
   - Simple: `apps/agent/src/core/providers/claude/`
   - Moderate: `apps/agent/src/core/providers/gemini/`
   - Complex: `apps/agent/src/core/providers/chatgpt/`
2. Check shared utilities:
   - `apps/agent/src/lib/input/` - Input/editor helpers
   - `apps/agent/src/lib/extraction/` - Source extraction utilities
   - `apps/agent/src/core/steps/` - Prompt execution pipeline
3. Ask for help:
   - Open a GitHub Discussion
   - Comment on related issues
   - Join the community chat