Overview

The LLMRegistry is a singleton class that manages the registration and resolution of LLM implementations in ADK-TS. It provides a centralized way to:
  • Register LLM classes with model name patterns
  • Resolve model strings to appropriate LLM instances
  • Manage custom model instances
  • Create LLMs dynamically based on model names

Class Definition

class LLMRegistry {
  private static llmRegistry: Map<RegExp, LLMClass>;
  private static modelInstances: Map<string, LlmModel>;
  private static logger: Logger;

  static newLLM(model: string): BaseLlm;
  static resolve(model: string): LLMClass | null;
  static register(modelNameRegex: string, llmClass: LLMClass): void;
  static registerLLM(llmClass: LLMClass): void;
  static registerModel(name: string, model: LlmModel): void;
  static getModel(name: string): LlmModel;
  static hasModel(name: string): boolean;
  static unregisterModel(name: string): void;
  static getModelOrCreate(name: string): LlmModel | BaseLlm;
  static clear(): void;
  static clearModels(): void;
  static clearClasses(): void;
  static logRegisteredModels(): void;
}

Type Definitions

LLMClass

interface LLMClass {
  new (model: string): BaseLlm;
  supportedModels(): string[];
}
An LLM class constructor that creates BaseLlm instances and declares supported model patterns.

LlmModel

interface LlmModel {
  generateContent(
    options: { prompt: string } & LlmModelConfig
  ): Promise<LlmResponse>;
}
Simplified interface for custom model instances.

LlmModelConfig

interface LlmModelConfig {
  temperature?: number;
  maxOutputTokens?: number;
  topP?: number;
  topK?: number;
}
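To illustrate how this config combines with the prompt, the `{ prompt: string } & LlmModelConfig` argument to `generateContent()` can be built by spreading a config object alongside the prompt (a standalone sketch; the interface is copied from above):

```typescript
// LlmModelConfig fields merge with the prompt to form the generateContent() options.
interface LlmModelConfig {
  temperature?: number;
  maxOutputTokens?: number;
  topP?: number;
  topK?: number;
}

const config: LlmModelConfig = { temperature: 0.7, topK: 40 };
const options = { prompt: "Hello", ...config }; // satisfies { prompt: string } & LlmModelConfig
```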

Static Methods

newLLM()

Creates a new LLM instance for the specified model string by resolving it through the registry.
  • model (string, required): The model identifier (e.g., "gpt-4", "gemini-2.5-flash").
Returns: BaseLlm
Throws: Error if no LLM class matches the model string
import { LLMRegistry } from "@iqai/adk";

const llm = LLMRegistry.newLLM("gpt-4");
// Returns instance of OpenAILlm

const gemini = LLMRegistry.newLLM("gemini-2.5-flash");
// Returns instance of GoogleLlm

resolve()

Resolves a model string to its LLM class without instantiating it.
  • model (string, required): The model identifier to resolve.
Returns: LLMClass | null - The LLM class if found, null otherwise
const llmClass = LLMRegistry.resolve("claude-3-opus");
if (llmClass) {
  const instance = new llmClass("claude-3-opus");
}
Resolution Process:
  • Iterates through registered regex patterns
  • Returns first matching LLM class
  • Returns null if no match found
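The first-match scan described above can be sketched as follows (a simplified stand-in, not the actual ADK-TS source; the `LLMClass` shape is reduced to a name for illustration):

```typescript
// Simplified sketch of resolve(): iterate the pattern map, first match wins.
type LLMClass = { name: string };

const llmRegistry = new Map<RegExp, LLMClass>([
  [/^gpt-.*/, { name: "OpenAILlm" }],
  [/^claude-.*/, { name: "AnthropicLlm" }],
]);

function resolve(model: string): LLMClass | null {
  for (const [pattern, llmClass] of llmRegistry) {
    if (pattern.test(model)) return llmClass; // first matching pattern
  }
  return null; // no pattern matched
}
```

Because the first match wins, registration order matters when patterns overlap.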

register()

Registers an LLM class with a specific model name regex pattern.
  • modelNameRegex (string, required): Regular expression pattern to match model names.
  • llmClass (LLMClass, required): The LLM class to register.
import { MyCustomLlm } from "./my-llm";

LLMRegistry.register("^my-model-.*", MyCustomLlm);

// Now this works:
const llm = LLMRegistry.newLLM("my-model-v1");

registerLLM()

Registers an LLM class using its supportedModels() method to get patterns. This is the preferred registration method.
  • llmClass (LLMClass, required): The LLM class with a supportedModels() static method.
import { MyCustomLlm } from "./my-llm";

// MyCustomLlm.supportedModels() returns ["^my-.*", "^custom-.*"]
LLMRegistry.registerLLM(MyCustomLlm);

// Both work now:
const llm1 = LLMRegistry.newLLM("my-model");
const llm2 = LLMRegistry.newLLM("custom-model");
Built-in Registrations: ADK-TS automatically registers these providers:
  • OpenAILlm - GPT models (^gpt-.*, ^o1-.*)
  • AnthropicLlm - Claude models (^claude-.*)
  • GoogleLlm - Gemini models (^gemini-.*, ^models/gemini-.*)
  • AiSdkLlm - Vercel AI SDK models (^(openai|anthropic|google):.*)
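These built-in patterns can be exercised directly. The regexes below are copied from the list above; the `providerFor` helper is illustrative, not part of the ADK-TS API:

```typescript
// Mirror of the built-in pattern table; providerFor() is a hypothetical helper.
const patterns: Record<string, RegExp[]> = {
  OpenAILlm: [/^gpt-.*/, /^o1-.*/],
  AnthropicLlm: [/^claude-.*/],
  GoogleLlm: [/^gemini-.*/, /^models\/gemini-.*/],
  AiSdkLlm: [/^(openai|anthropic|google):.*/],
};

function providerFor(model: string): string | null {
  for (const [provider, regexes] of Object.entries(patterns)) {
    if (regexes.some((r) => r.test(model))) return provider;
  }
  return null;
}
```

Note the AiSdkLlm pattern: Vercel AI SDK models are addressed with a provider prefix such as "openai:gpt-4o".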

registerModel()

Registers a custom model instance directly by name, bypassing class-based resolution.
  • name (string, required): Unique name for the model instance.
  • model (LlmModel, required): Model instance implementing the LlmModel interface.
import { LLMRegistry, LlmResponse } from "@iqai/adk";

const customModel = {
  async generateContent(options) {
    // Custom implementation
    return new LlmResponse({ text: "Response" });
  },
};

LLMRegistry.registerModel("my-custom-instance", customModel);

const model = LLMRegistry.getModel("my-custom-instance");
Use Cases:
  • Fine-tuned models with specific configurations
  • Mock models for testing
  • Custom inference endpoints
  • Models requiring special initialization

getModel()

Retrieves a previously registered model instance by name.
  • name (string, required): The name used during registration.
Returns: LlmModel
Throws: Error if model not found
const model = LLMRegistry.getModel("my-custom-instance");
const response = await model.generateContent({
  prompt: "Hello",
  temperature: 0.7,
});

hasModel()

Checks if a model instance is registered.
  • name (string, required): The model name to check.
Returns: boolean
if (LLMRegistry.hasModel("my-model")) {
  const model = LLMRegistry.getModel("my-model");
} else {
  console.log("Model not found");
}

unregisterModel()

Removes a registered model instance.
  • name (string, required): The model name to unregister.
LLMRegistry.unregisterModel("my-custom-instance");

getModelOrCreate()

Returns a registered model instance if it exists, otherwise creates a new LLM using class-based resolution.
  • name (string, required): Model name to get or model string to create.
Returns: LlmModel | BaseLlm
// If "my-instance" is registered, returns that
// Otherwise creates new LLM via newLLM("my-instance")
const model = LLMRegistry.getModelOrCreate("my-instance");
Resolution Priority:
  1. Check registered model instances first
  2. Fall back to class-based resolution via newLLM()
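The two-step priority can be sketched as a standalone lookup (simplified; the real method returns an LlmModel or BaseLlm, reduced here to a tagged object, and the fallback stands in for newLLM()):

```typescript
// Sketch of getModelOrCreate(): registered instances win, else create by class.
const modelInstances = new Map<string, { kind: string }>([
  ["my-instance", { kind: "registered" }],
]);

function getModelOrCreate(name: string): { kind: string } {
  const instance = modelInstances.get(name);
  if (instance) return instance; // 1. registered model instances first
  return { kind: `created:${name}` }; // 2. fall back to newLLM(name)
}
```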

clear()

Clears all registered LLM classes and model instances. Useful for testing.
LLMRegistry.clear();
// Both llmRegistry and modelInstances are now empty

clearModels()

Clears only registered model instances, preserving class registrations.
LLMRegistry.clearModels();
// Only modelInstances cleared, class patterns remain

clearClasses()

Clears only LLM class registrations, preserving model instances.
LLMRegistry.clearClasses();
// Only llmRegistry cleared, model instances remain

logRegisteredModels()

Logs all registered patterns and instances for debugging.
LLMRegistry.logRegisteredModels();
// Outputs:
// Registered LLM class patterns: [/^gpt-.*/, /^claude-.*/, ...]
// Registered LLM instances: ["my-instance", "test-model"]

Usage Examples

Basic Usage with Built-in Providers

import { LLMRegistry } from "@iqai/adk";

// Automatically resolves to OpenAILlm
const gpt = LLMRegistry.newLLM("gpt-4");

// Automatically resolves to AnthropicLlm
const claude = LLMRegistry.newLLM("claude-3-opus");

// Automatically resolves to GoogleLlm
const gemini = LLMRegistry.newLLM("gemini-2.5-flash");

Registering a Custom LLM Provider

import { BaseLlm, LLMRegistry, LlmResponse } from "@iqai/adk";
import type { LlmRequest } from "@iqai/adk";

class MyLlm extends BaseLlm {
  static supportedModels(): string[] {
    return ["^myai-.*", "^custom-.*"];
  }

  protected async *generateContentAsyncImpl(
    llmRequest: LlmRequest,
    stream?: boolean
  ): AsyncGenerator<LlmResponse, void, unknown> {
    // Implementation
    yield new LlmResponse({ text: "Response" });
  }
}

// Register the custom LLM
LLMRegistry.registerLLM(MyLlm);

// Now you can use it
const myLlm = LLMRegistry.newLLM("myai-gpt-5");
const customLlm = LLMRegistry.newLLM("custom-model-1");

Using Model Instances

import { LLMRegistry, LlmResponse } from "@iqai/adk";

// Create a mock model for testing
const mockModel = {
  async generateContent(options) {
    return new LlmResponse({
      text: `Mock response to: ${options.prompt}`,
      usageMetadata: {
        promptTokenCount: 10,
        candidatesTokenCount: 20,
        totalTokenCount: 30,
      },
    });
  },
};

LLMRegistry.registerModel("mock-model", mockModel);

if (LLMRegistry.hasModel("mock-model")) {
  const model = LLMRegistry.getModel("mock-model");
  const response = await model.generateContent({
    prompt: "Test",
    temperature: 0.7,
  });
  console.log(response.text);
}

Registry in AgentBuilder

import { AgentBuilder } from "@iqai/adk";

// AgentBuilder uses LLMRegistry internally
const agent = new AgentBuilder()
  .withModel("gpt-4") // Resolved via LLMRegistry
  .withInstruction("You are helpful")
  .buildLlm();

const response = await agent.ask("Hello");

Testing with Custom Registry

import { LLMRegistry, LlmResponse } from "@iqai/adk";
import { describe, it, beforeEach, afterEach, expect } from "vitest";

describe("My Agent Tests", () => {
  beforeEach(() => {
    // Register test model
    LLMRegistry.registerModel("test-model", {
      async generateContent(options) {
        return new LlmResponse({ text: "Test response" });
      },
    });
  });

  afterEach(() => {
    // Clean up
    LLMRegistry.clearModels();
  });

  it("should work with test model", async () => {
    const model = LLMRegistry.getModel("test-model");
    const response = await model.generateContent({ prompt: "Test" });
    expect(response.text).toBe("Test response");
  });
});

Conditional Model Loading

import { BaseLlm, LLMRegistry } from "@iqai/adk";

function getOptimalModel(task: string): BaseLlm {
  if (task === "coding") {
    return LLMRegistry.newLLM("gpt-4");
  } else if (task === "creative") {
    return LLMRegistry.newLLM("claude-3-opus");
  } else {
    return LLMRegistry.newLLM("gemini-2.5-flash");
  }
}

const llm = getOptimalModel("coding");

Handling Resolution Failures

import { BaseLlm, LLMRegistry } from "@iqai/adk";

function createLLMSafely(model: string): BaseLlm | null {
  try {
    return LLMRegistry.newLLM(model);
  } catch (error) {
    console.error(`Failed to create LLM for model: ${model}`);
    console.error(error instanceof Error ? error.message : error);
    return null;
  }
}

const llm = createLLMSafely("unknown-model-xyz");
if (!llm) {
  console.log("Falling back to default model");
  const fallback = LLMRegistry.newLLM("gpt-4");
}

Automatic Registration

ADK-TS automatically registers built-in providers via /packages/adk/src/models/registry.ts:
// Happens automatically on import
import { LLMRegistry } from "@iqai/adk";

// These are already registered:
// - OpenAILlm
// - AnthropicLlm
// - GoogleLlm
// - AiSdkLlm

Best Practices

1. Use registerLLM() for Custom Providers

// Good: Centralized pattern management
class MyLlm extends BaseLlm {
  static supportedModels() {
    return ["^myai-.*"];
  }
}
LLMRegistry.registerLLM(MyLlm);

// Avoid: Manual pattern registration
LLMRegistry.register("^myai-.*", MyLlm);

2. Use Model Instances for Testing

// Good: Deterministic testing
LLMRegistry.registerModel("test", mockModel);
const model = LLMRegistry.getModel("test");

// Avoid: Using real APIs in tests
const llm = LLMRegistry.newLLM("gpt-4"); // Expensive!

3. Check Model Existence

// Good: Defensive programming
if (LLMRegistry.hasModel(name)) {
  return LLMRegistry.getModel(name);
}
return LLMRegistry.newLLM(fallbackModel);

// Avoid: Catching errors
try {
  return LLMRegistry.getModel(name);
} catch {
  return LLMRegistry.newLLM(fallbackModel);
}

4. Clean Up in Tests

afterEach(() => {
  LLMRegistry.clearModels(); // Or clear() for full reset
});

Source Reference

See implementation: /packages/adk/src/models/llm-registry.ts
See auto-registration: /packages/adk/src/models/registry.ts
