Overview

The TokenAnalyzer class is the main orchestrator of the Tokenizador application. It coordinates the tokenization service, UI controller, and statistics calculator to provide real-time token analysis and visualization.
This is the entry point for the entire application. It initializes all services and manages the application lifecycle.

Constructor

Creates a new TokenAnalyzer instance and initializes all dependent services.
const analyzer = new TokenAnalyzer();
The constructor automatically:
  • Instantiates TokenizationService, UIController, and StatisticsCalculator
  • Calls init() to set up event handlers and wait for tokenization initialization
  • Configures the UI with the selected model
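A minimal sketch of what this wiring might look like. The service classes below are stand-ins for illustration only; the actual implementations live in their own modules:

```javascript
// Stand-in service classes for illustration only.
class TokenizationService { async initialize() { this.ready = true; } }
class UIController { bindHandlers(handlers) { this.handlers = handlers; } }
class StatisticsCalculator {}

class TokenAnalyzer {
  constructor() {
    // Instantiate all dependent services up front.
    this.tokenizationService = new TokenizationService();
    this.uiController = new UIController();
    this.statisticsCalculator = new StatisticsCalculator();
    // Kick off async initialization; the returned promise can be awaited later.
    this.initPromise = this.init();
  }

  async init() {
    // Wire up event handlers, then wait for tiktoken to finish loading.
    this.uiController.bindHandlers({
      onTextChange: () => this.handleTextChange(),
    });
    await this.tokenizationService.initialize();
  }

  async handleTextChange() {}
}
```

Storing the promise from `init()` on the instance is one common way to let callers await a constructor-triggered async setup.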

Methods

init()

Initializes the application and sets up event handlers.
async init()
returns
Promise<void>
Returns a promise that resolves when initialization is complete
What it does:
  1. Configures event handlers for text changes, model changes, and clear actions
  2. Waits for the tokenization service to initialize tiktoken
  3. Triggers initial model change to configure the UI
const analyzer = new TokenAnalyzer();
// The constructor already calls init(); awaiting it again
// guarantees initialization has finished before continuing
await analyzer.init();
console.log('Application ready!');

handleTextChange()

Handles changes in the text input field.
async handleTextChange()
Triggered when the user types or modifies text in the input area. Automatically calls performRealTimeAnalysis() to tokenize and analyze the new text.

handleModelChange()

Handles changes in the model selection dropdown.
async handleModelChange()
returns
Promise<void>
Returns a promise that resolves when the model change is processed
What it does:
  1. Gets the newly selected model ID
  2. Updates model information display (context limit, tokenizer type, pricing)
  3. Re-analyzes the current text with the new model

handleClear()

Handles the clear button click.
handleClear()
Clears the text input and resets all visualizations to their empty state.

performRealTimeAnalysis()

Performs real-time tokenization analysis on the current text.
async performRealTimeAnalysis()
returns
Promise<void>
Returns a promise that resolves when analysis is complete
Process flow:
  1. Gets current text and selected model
  2. Returns early if text is empty (resets displays)
  3. Shows loading state in UI
  4. Tokenizes text using TokenizationService
  5. Calculates statistics using StatisticsCalculator
  6. Updates all UI displays with results
  7. Shows context warnings if applicable
const analyzer = new TokenAnalyzer();

// Manually trigger analysis (normally automatic)
await analyzer.performRealTimeAnalysis();
This method is called automatically on text input and model changes. You typically don’t need to call it manually.
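The process flow above can be sketched as a standalone function. The service method names (`tokenizeText`, `calculateStatistics`, the UI methods) follow the ones this page documents, but the exact signatures here are assumptions:

```javascript
// Illustrative sketch of the analysis pipeline; signatures are assumptions.
async function performRealTimeAnalysis(services, getText, getModel) {
  const text = getText();
  const modelId = getModel();

  // Step 2: empty text short-circuits to a display reset.
  if (!text.trim()) {
    services.ui.resetDisplays();
    return;
  }

  services.ui.showLoading();                                                // step 3
  const tokenResult = await services.tokenizer.tokenizeText(text, modelId); // step 4
  const stats = services.stats.calculateStatistics(tokenResult, modelId);   // step 5
  services.ui.updateDisplays(tokenResult, stats);                           // step 6
  services.ui.showContextWarnings(stats, modelId);                          // step 7
}
```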

getState()

Retrieves the current state of the application.
getState()
returns
Object
Object containing current application state
Return value structure:
currentText
string
The current text in the input field
selectedModel
string
The currently selected model ID (e.g., "gpt-4o")
isInitialized
boolean
Whether the tokenization service has finished initializing
availableModels
string[]
Array of all available model IDs from MODELS_DATA
const analyzer = new TokenAnalyzer();
const state = analyzer.getState();

console.log(state);
// {
//   currentText: "Hello world",
//   selectedModel: "gpt-4o",
//   isInitialized: true,
//   availableModels: ["gpt-4o", "claude-3.5-sonnet", ...]
// }

compareModels()

Compares tokenization results across multiple models for the current text.
async compareModels(modelIds)
modelIds
string[]
Array of model IDs to compare
returns
Promise<Array>
Array of comparison objects sorted by cost (cheapest first)
const analyzer = new TokenAnalyzer();

// Compare three models
const comparison = await analyzer.compareModels([
  'gpt-4o',
  'claude-3.5-sonnet',
  'llama-3.1-70b'
]);

console.log(comparison);
// [
//   { modelId: 'llama-3.1-70b', stats: {...}, formatted: {...} },
//   { modelId: 'gpt-4o', stats: {...}, formatted: {...} },
//   { modelId: 'claude-3.5-sonnet', stats: {...}, formatted: {...} }
// ]
Results are automatically sorted by cost, making it easy to find the most cost-effective model for your text.
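The cost-based ordering amounts to an ascending sort on each result's estimated cost. A sketch, assuming each `stats` object exposes an `estimatedCost` number (a field name not confirmed by this page):

```javascript
// Sketch of cost-based comparison; `stats.estimatedCost` is an assumed field.
function sortByCost(results) {
  // Ascending sort puts the cheapest model first.
  return [...results].sort((a, b) => a.stats.estimatedCost - b.stats.estimatedCost);
}

const results = [
  { modelId: 'gpt-4o', stats: { estimatedCost: 0.005 } },
  { modelId: 'llama-3.1-70b', stats: { estimatedCost: 0.001 } },
  { modelId: 'claude-3.5-sonnet', stats: { estimatedCost: 0.006 } },
];

console.log(sortByCost(results).map(r => r.modelId));
// → [ 'llama-3.1-70b', 'gpt-4o', 'claude-3.5-sonnet' ]
```

Copying the array before sorting (`[...results]`) avoids mutating the caller's data.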

exportResults()

Exports the current analysis results in the specified format.
async exportResults(format)
format
string
default: "json"
Export format: "json", "csv", or "txt"
returns
Promise<string>
Formatted string containing the exported data
Export formats:
JSON: Complete data with timestamp, model info, statistics, and token details
{
  "timestamp": "2026-03-06T17:00:00.000Z",
  "model": "gpt-4o",
  "text": "Hello world",
  "statistics": {...},
  "tokens": [...]
}
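The `csv` and `txt` layouts are not shown on this page. One plausible CSV serialization of per-token data, with quoting per RFC 4180 (the column names and sample token values here are illustrative, not the actual schema):

```javascript
// Hypothetical CSV layout; field names are illustrative, not the actual schema.
function toCsv(tokens) {
  const header = 'index,id,text';
  const rows = tokens.map((t, i) =>
    // Quote token text and escape embedded quotes per RFC 4180.
    `${i},${t.id},"${t.text.replace(/"/g, '""')}"`
  );
  return [header, ...rows].join('\n');
}

const csv = toCsv([
  { id: 1, text: 'Hello' },
  { id: 2, text: ' world' },
]);
console.log(csv);
// index,id,text
// 0,1,"Hello"
// 1,2," world"
```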
const analyzer = new TokenAnalyzer();
const jsonData = await analyzer.exportResults('json');

// Download the data as a file
const blob = new Blob([jsonData], { type: 'application/json' });
const url = URL.createObjectURL(blob);
const link = document.createElement('a');
link.href = url;
link.download = 'token-analysis.json';
link.click();
URL.revokeObjectURL(url);

updateDisplays()

Updates all UI visualization components with analysis results.
updateDisplays(tokenResult, statistics)
tokenResult
Object
required
Result object from tokenizeText() containing tokens array and count
statistics
Object
required
Statistics object from calculateStatistics()
This method updates:
  • Statistics display (token count, character count, word count, cost)
  • Visual token representation with color coding
  • Token list with IDs and types

resetDisplays()

Resets all visualizations to empty state.
resetDisplays()
Sets all statistics to zero and clears the token visualizations.

showContextWarnings()

Displays warnings if the text exceeds or approaches context limits.
showContextWarnings(statistics, modelId)
statistics
Object
required
Calculated statistics object
modelId
string
required
ID of the model to check limits against
Warning thresholds:
  • 100%+: Text exceeds context limit
  • 90-99%: Near context limit
  • 75-89%: High context usage
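The thresholds above translate to a straightforward classification on the usage percentage. A sketch, with warning labels that are assumptions rather than the actual strings the UI displays:

```javascript
// Maps context usage (tokens / context limit) to a warning level.
// The returned label strings are illustrative, not the actual UI text.
function contextWarningLevel(tokenCount, contextLimit) {
  const pct = (tokenCount / contextLimit) * 100;
  if (pct >= 100) return 'exceeds-limit'; // 100%+: text exceeds the context limit
  if (pct >= 90) return 'near-limit';     // 90-99%: near the context limit
  if (pct >= 75) return 'high-usage';     // 75-89%: high context usage
  return null;                            // below 75%: no warning shown
}

console.log(contextWarningLevel(130000, 128000)); // → 'exceeds-limit'
console.log(contextWarningLevel(120000, 128000)); // → 'near-limit'
console.log(contextWarningLevel(100000, 128000)); // → 'high-usage'
```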

Usage Example

// Initialize the application
const analyzer = new TokenAnalyzer();

// Wait for initialization
await analyzer.init();

// Get current state
const state = analyzer.getState();
console.log('Current model:', state.selectedModel);
console.log('Is initialized:', state.isInitialized);

// Compare models
const comparison = await analyzer.compareModels([
  'gpt-4o',
  'claude-3.5-sonnet',
  'llama-3.1-70b'
]);

console.log('Cheapest model:', comparison[0].modelId);

// Export results
const results = await analyzer.exportResults('json');
console.log('Analysis results:', results);

Dependencies

The TokenAnalyzer requires these services to be loaded:

TokenizationService

Handles tiktoken integration and tokenization logic

UIController

Manages DOM manipulation and user interactions

StatisticsCalculator

Calculates token statistics and cost estimates

Error Handling

The TokenAnalyzer includes comprehensive error handling:
try {
  const analyzer = new TokenAnalyzer();
  await analyzer.performRealTimeAnalysis();
} catch (error) {
  console.error('Analysis error:', error);
  // Error is displayed to user via showError() method
}
Errors during initialization and analysis are logged to console and displayed to users through the UI.

See Also

TokenizationService

Learn about the tokenization engine

UIController

Explore UI management methods

StatisticsCalculator

Understand statistics calculations

Architecture

View the complete architecture
