
Overview

The LLMRegistry is the central system that manages all LLM providers in ADK. It uses regex pattern matching to automatically route model names to the correct provider implementation, enabling a seamless multi-provider experience. Source: packages/adk/src/models/llm-registry.ts:23

How It Works

Automatic Registration

Providers are automatically registered when the models module is imported:
// From registry.ts:9-14
export function registerProviders(): void {
  // Register all built-in providers
  LLMRegistry.registerLLM(GoogleLlm);
  LLMRegistry.registerLLM(AnthropicLlm);
  LLMRegistry.registerLLM(OpenAiLlm);
}

// Auto-register all providers
registerProviders();
This happens automatically when you import ADK:
import { AgentBuilder } from '@iqai/adk';
// Providers are already registered!

Pattern Matching

Each provider class defines supported model patterns:
// From openai-llm.ts:27-29
static override supportedModels(): string[] {
  return ["gpt-3.5-.*", "gpt-4.*", "gpt-4o.*", "gpt-5.*", "o1-.*", "o3-.*"];
}
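Pattern strings are plain regular-expression sources. As a rough, self-contained sketch of how they can be compiled and checked (assuming the registry anchors patterns so they match the full model name; `compilePattern` and `isSupported` are hypothetical helpers, not ADK APIs):

```typescript
// Hypothetical helper (not the real ADK implementation): compile a
// supported-model pattern string into an anchored RegExp so that
// "gpt-4.*" matches "gpt-4o" but not "my-gpt-4-wrapper".
function compilePattern(pattern: string): RegExp {
  return new RegExp(`^${pattern}$`);
}

const patterns = ["gpt-3.5-.*", "gpt-4.*", "o1-.*"].map(compilePattern);

// A model name is supported if any compiled pattern matches it.
function isSupported(model: string): boolean {
  return patterns.some((regex) => regex.test(model));
}
```

Anchoring matters: without `^` and `$`, a pattern like `gpt-4.*` would also match names that merely contain `gpt-4` somewhere.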

Model Resolution

When you specify a model, the registry resolves it:
// From llm-registry.ts:38-45
static resolve(model: string): LLMClass | null {
  for (const [regex, llmClass] of LLMRegistry.llmRegistry.entries()) {
    if (regex.test(model)) {
      return llmClass;
    }
  }
  return null;
}
Example:
import { LLMRegistry } from '@iqai/adk';

const LlmClass = LLMRegistry.resolve('gpt-4o');
// Returns: OpenAiLlm (matches "gpt-4.*" pattern)

if (LlmClass) {
  const llm = new LlmClass('gpt-4o');
}
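The loop above is simple to reason about: the registry is a map from compiled patterns to classes, and the first matching entry wins. A self-contained sketch with stand-in classes (none of these names are real ADK exports):

```typescript
// Stand-in provider classes; structurally they only carry a model name.
class FakeGoogleLlm { constructor(public model: string) {} }
class FakeOpenAiLlm { constructor(public model: string) {} }

type LlmCtor = new (model: string) => FakeGoogleLlm | FakeOpenAiLlm;

const llmRegistry = new Map<RegExp, LlmCtor>([
  [/^gemini-.*$/, FakeGoogleLlm],
  [/^gpt-4.*$/, FakeOpenAiLlm],
]);

// Map iteration follows insertion order, so the first registered
// pattern that matches the model name wins.
function resolveSketch(model: string): LlmCtor | null {
  for (const [regex, llmClass] of llmRegistry.entries()) {
    if (regex.test(model)) {
      return llmClass;
    }
  }
  return null;
}
```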

Core API

Creating LLM Instances

import { AgentBuilder } from '@iqai/adk';

// Automatic resolution
const agent = AgentBuilder.withModel('gpt-4o').build();
Best for: most use cases; automatic resolution keeps the code simple and clean

Registry Methods

newLLM

Create a new LLM instance by model name:
import { LLMRegistry } from '@iqai/adk';

const llm = LLMRegistry.newLLM('gpt-4o');
// Returns: instance of OpenAiLlm
// From llm-registry.ts:30-36
static newLLM(model: string): BaseLlm {
  const llmClass = LLMRegistry.resolve(model);
  if (!llmClass) {
    throw new Error(`No LLM class found for model: ${model}`);
  }
  return new llmClass(model);
}

resolve

Get the LLM class for a model name:
const LlmClass = LLMRegistry.resolve('claude-3-5-sonnet-20241022');
// Returns: AnthropicLlm class

if (LlmClass) {
  const llm = new LlmClass('claude-3-5-sonnet-20241022');
}

registerLLM

Register a custom provider:
import { LLMRegistry, BaseLlm } from '@iqai/adk';
import type { LlmRequest } from '@iqai/adk';

class CustomLlm extends BaseLlm {
  static override supportedModels(): string[] {
    return ['custom-.*'];
  }

  protected async *generateContentAsyncImpl(
    llmRequest: LlmRequest,
    stream?: boolean
  ) {
    // Implementation
  }
}

LLMRegistry.registerLLM(CustomLlm);

// Now you can use custom models
const llm = LLMRegistry.newLLM('custom-model-v1');

Named Model Instances

For shared or preconfigured model instances:

Register Named Instance

import { LLMRegistry, OpenAiLlm } from '@iqai/adk';

const premiumGpt = new OpenAiLlm('gpt-4o');
LLMRegistry.registerModel('premium-gpt', premiumGpt);

const budgetGpt = new OpenAiLlm('gpt-4o-mini');
LLMRegistry.registerModel('budget-gpt', budgetGpt);

Retrieve Named Instance

const llm = LLMRegistry.getModel('premium-gpt');

const agent = AgentBuilder.withLlm(llm).build();

Check Named Instance

if (LLMRegistry.hasModel('premium-gpt')) {
  const llm = LLMRegistry.getModel('premium-gpt');
}

Get or Create

Get a named instance or create a new one:
// From llm-registry.ts:78-84
static getModelOrCreate(name: string): LlmModel | BaseLlm {
  if (LLMRegistry.hasModel(name)) {
    return LLMRegistry.getModel(name);
  }

  return LLMRegistry.newLLM(name);
}
Usage:
// Returns registered instance if exists, otherwise creates new
const llm = LLMRegistry.getModelOrCreate('gpt-4o');
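With a plain Map, the same get-or-create logic can be sketched like this (a self-contained illustration; `createLlm` stands in for `LLMRegistry.newLLM`, and all names here are hypothetical):

```typescript
interface LlmLike { model: string; }

// Stand-in for the registry's named-instance storage.
const namedInstances = new Map<string, LlmLike>();

// Stand-in for LLMRegistry.newLLM: create an instance from a model name.
function createLlm(model: string): LlmLike {
  return { model };
}

function getModelOrCreateSketch(name: string): LlmLike {
  const registered = namedInstances.get(name);
  if (registered) {
    return registered; // a registered named instance takes priority
  }
  return createLlm(name); // otherwise treat the name as a model string
}

namedInstances.set('premium-gpt', createLlm('gpt-4o'));
```

The useful property is that call sites can pass either a friendly alias or a raw model name and get a usable instance either way.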

Custom Providers

Create Custom Provider

import { BaseLlm } from '@iqai/adk';
import type { LlmRequest } from '@iqai/adk';
import { LlmResponse } from '@iqai/adk';

export class CustomLlm extends BaseLlm {
  // Define supported model patterns
  static override supportedModels(): string[] {
    return [
      'custom-.*',           // custom-model, custom-large, etc.
      'my-company/.*',      // my-company/model-v1
    ];
  }

  // Implement content generation
  protected async *generateContentAsyncImpl(
    llmRequest: LlmRequest,
    stream = false,
  ): AsyncGenerator<LlmResponse, void, unknown> {
    const model = llmRequest.model || this.model;
    const messages = llmRequest.contents || [];
    
    // Call your custom API
    const response = await fetch('https://api.custom-llm.com/v1/generate', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${process.env.CUSTOM_LLM_API_KEY}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model,
        messages,
        stream,
      }),
    });

    if (stream) {
      // Handle streaming
      if (!response.body) {
        throw new Error('Streaming response has no body');
      }
      const reader = response.body.getReader();
      const decoder = new TextDecoder();

      while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        // { stream: true } keeps multi-byte characters split across chunks intact
        const chunk = decoder.decode(value, { stream: true });
        yield new LlmResponse({
          content: {
            role: 'model',
            parts: [{ text: chunk }],
          },
          partial: true,
        });
      }
    } else {
      // Handle non-streaming
      const data = await response.json();
      yield new LlmResponse({
        content: {
          role: 'model',
          parts: [{ text: data.text }],
        },
      });
    }
  }
}

Register Custom Provider

import { AgentBuilder, LLMRegistry } from '@iqai/adk';
import { CustomLlm } from './custom-llm';

// Register the custom provider
LLMRegistry.registerLLM(CustomLlm);

// Now it works like any other provider
const agent = AgentBuilder.withModel('custom-model-v1').build();

Use Custom Provider

import { AgentBuilder } from '@iqai/adk';

const agent = AgentBuilder
  .withModel('custom-model-v1')
  .withInstruction('You are a helpful assistant')
  .build();

const response = await agent.ask('Hello!');

Pattern Precedence

Registration Order

Providers are matched in registration order:
// From registry.ts:9-14
LLMRegistry.registerLLM(GoogleLlm);    // Registered first
LLMRegistry.registerLLM(AnthropicLlm); // Registered second
LLMRegistry.registerLLM(OpenAiLlm);    // Registered third

Overlapping Patterns

If patterns overlap, the first registered provider wins:
// Example of overlapping patterns
class ProviderA extends BaseLlm {
  static override supportedModels() {
    return ['model-.*'];  // Broad pattern
  }
}

class ProviderB extends BaseLlm {
  static override supportedModels() {
    return ['model-premium.*'];  // Specific pattern
  }
}

// Register specific first for priority
LLMRegistry.registerLLM(ProviderB);  // Gets model-premium-*
LLMRegistry.registerLLM(ProviderA);  // Gets other model-*

const llm1 = LLMRegistry.newLLM('model-premium-v1');  // Uses ProviderB
const llm2 = LLMRegistry.newLLM('model-basic-v1');    // Uses ProviderA
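Because resolution stops at the first match, registration order alone decides which provider a model name lands on. A runnable demonstration with plain regexes (not the actual registry):

```typescript
// First-match semantics: return the source of the first pattern that
// matches, or null if none do.
function firstMatch(patterns: RegExp[], model: string): string | null {
  const hit = patterns.find((regex) => regex.test(model));
  return hit ? hit.source : null;
}

// Same two patterns, opposite orders.
const specificFirst = [/^model-premium.*$/, /^model-.*$/];
const broadFirst = [/^model-.*$/, /^model-premium.*$/];
```

With `broadFirst`, the broad `model-.*` pattern shadows the specific one for every name, which is exactly the situation the ordering advice above is meant to avoid.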

Debugging

Log Registered Models

import { LLMRegistry } from '@iqai/adk';

// Log all registered patterns and instances
LLMRegistry.logRegisteredModels();
Output:
Registered LLM class patterns: [
  "/gemini-.*/",
  "/google\/.*/",
  "/projects\/.+\/locations\/.+\/endpoints\/.+/",
  "/claude-3-.*/",
  "/claude-.*-4.*/",
  "/gpt-3.5-.*/",
  "/gpt-4.*/",
  "/gpt-4o.*/",
  "/o1-.*/"
]
Registered LLM instances: ["premium-gpt", "budget-gpt"]

Test Model Resolution

function testModelResolution(modelName: string) {
  const LlmClass = LLMRegistry.resolve(modelName);
  
  if (LlmClass) {
    console.log(`✓ ${modelName} → ${LlmClass.name}`);
  } else {
    console.log(`✗ ${modelName} → No provider found`);
  }
}

testModelResolution('gpt-4o');                    // ✓ gpt-4o → OpenAiLlm
testModelResolution('claude-3-5-sonnet-20241022'); // ✓ claude-3-5-sonnet-20241022 → AnthropicLlm
testModelResolution('gemini-2.5-flash');          // ✓ gemini-2.5-flash → GoogleLlm
testModelResolution('unknown-model');              // ✗ unknown-model → No provider found

Registry Management

Clear All

import { LLMRegistry } from '@iqai/adk';

// Clear everything
LLMRegistry.clear();

Clear Model Instances Only

// Clear named instances, keep class registrations
LLMRegistry.clearModels();

Clear Class Registrations Only

// Clear class registrations, keep named instances
LLMRegistry.clearClasses();

Unregister Named Instance

LLMRegistry.unregisterModel('premium-gpt');

Best Practices

  • Use specific patterns first (e.g., "gpt-4o.*" before "gpt-4.*")
  • Include version suffixes in patterns (e.g., "model-.*-v[0-9]+")
  • Test patterns thoroughly with regex testers
  • Document supported models in your provider
  • Extend BaseLlm for full ADK integration
  • Implement both streaming and non-streaming
  • Handle errors properly (use RateLimitError for rate limits)
  • Add usage metadata to responses
  • Use for shared configurations (API keys, base URLs)
  • Use for environment-specific models (dev, staging, prod)
  • Use for A/B testing different models
  • Clear instances in tests to avoid state leakage
  • Register custom providers early (at app startup)
  • Register specific patterns before broad patterns
  • Use automatic registration (via registry.ts pattern)
  • Test registration with logRegisteredModels()

Advanced Patterns

Provider Factory

Create providers dynamically:
import { LLMRegistry, BaseLlm } from '@iqai/adk';

function createProvider(config: {
  model: string;
  customConfig?: Record<string, unknown>;
}): BaseLlm {
  // The model name alone determines the provider via pattern matching
  const llm = LLMRegistry.newLLM(config.model);
  
  // Apply custom configuration
  if (config.customConfig) {
    Object.assign(llm, config.customConfig);
  }
  
  return llm;
}

const llm = createProvider({
  model: 'gpt-4o',
  customConfig: { /* custom settings */ }
});

Multi-Provider Fallback

Fallback to alternative providers:
import { LLMRegistry } from '@iqai/adk';

async function askWithFallback(
  prompt: string,
  models: string[]
): Promise<string> {
  for (const modelName of models) {
    try {
      const llm = LLMRegistry.newLLM(modelName);
      const request = {
        contents: [{
          role: 'user',
          parts: [{ text: prompt }]
        }]
      };
      
      for await (const response of llm.generateContentAsync(request)) {
        return response.text || '';
      }
    } catch (error) {
      console.warn(`Model ${modelName} failed, trying next...`);
      continue;
    }
  }
  
  throw new Error('All models failed');
}

// Try GPT-4o, fallback to Claude, then Gemini
const response = await askWithFallback('Hello', [
  'gpt-4o',
  'claude-3-5-sonnet-20241022',
  'gemini-2.5-flash'
]);

Environment-Based Provider

import { AgentBuilder, LLMRegistry } from '@iqai/adk';

// Configure providers based on environment
if (process.env.NODE_ENV === 'development') {
  LLMRegistry.registerModel('default', LLMRegistry.newLLM('gpt-4o-mini'));
} else if (process.env.NODE_ENV === 'production') {
  LLMRegistry.registerModel('default', LLMRegistry.newLLM('gpt-4o'));
}

// Use named instance
const agent = AgentBuilder
  .withLlm(LLMRegistry.getModel('default'))
  .build();

Next Steps

OpenAI Provider

Learn about GPT model configuration

Custom Providers

Build your own LLM provider

AgentBuilder

Use providers with AgentBuilder

Error Handling

Handle provider errors
