## Overview
ADK-TS provides first-class support for multiple LLM providers through a unified, provider-agnostic API. The framework automatically registers providers and handles model selection through intelligent pattern matching.
## Supported Providers

- **OpenAI**: GPT-3.5, GPT-4, GPT-4o, o1, and o3 models with streaming support
- **Anthropic**: Claude 3 and Claude 4 models with prompt caching
- **Google**: Gemini models via the Google AI API or Vertex AI with context caching
- **AI SDK**: Vercel AI SDK integration for any provider
## Quick Start

### Basic Usage
Simply specify the model name - ADK automatically selects the right provider:
```typescript
import { AgentBuilder } from '@iqai/adk';

// OpenAI GPT-4o
const openaiAgent = AgentBuilder.withModel('gpt-4o').build();

// Anthropic Claude
const claudeAgent = AgentBuilder.withModel('claude-3-5-sonnet-20241022').build();

// Google Gemini
const geminiAgent = AgentBuilder.withModel('gemini-2.5-flash').build();
```
### Environment Configuration
Set the appropriate API keys for your providers:
```bash
# OpenAI
OPENAI_API_KEY=sk-...

# Anthropic
ANTHROPIC_API_KEY=sk-ant-...

# Google AI (API key)
GOOGLE_API_KEY=AI...

# Google Vertex AI
GOOGLE_GENAI_USE_VERTEXAI=true
GOOGLE_CLOUD_PROJECT=your-project
GOOGLE_CLOUD_LOCATION=us-central1
```
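A missing key usually only surfaces as an authentication error at request time, so it can be worth failing fast at startup. A minimal sketch of such a check — the `missingKeys` helper is hypothetical, not part of ADK; only the variable names come from the list above:

```typescript
// Hypothetical startup check: report which required API keys are absent.
function missingKeys(
  env: Record<string, string | undefined>,
  required: string[]
): string[] {
  return required.filter((name) => !env[name]);
}

// Example: validate process.env before building agents.
const absent = missingKeys(process.env, ['OPENAI_API_KEY']);
if (absent.length > 0) {
  console.warn(`Missing API keys: ${absent.join(', ')}`);
}
```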
## Model Selection

ADK uses the `LLMRegistry` to automatically match model names to providers using regex patterns:
For example, the OpenAI provider registers the following patterns; the Anthropic and Google providers register analogous patterns for `claude-*` and `gemini-*` model names:
```typescript
// Matches these patterns:
"gpt-3.5-.*"  // gpt-3.5-turbo, gpt-3.5-turbo-16k
"gpt-4.*"     // gpt-4, gpt-4-turbo, gpt-4-32k
"gpt-4o.*"    // gpt-4o, gpt-4o-mini
"gpt-5.*"     // future GPT-5 models
"o1-.*"       // o1-preview, o1-mini
"o3-.*"       // o3 models
```
## Core Features

### Unified API

All providers implement the same `BaseLlm` interface:
```typescript
import { LLMRegistry } from '@iqai/adk';

// Automatic provider selection
const llm = LLMRegistry.newLLM('gpt-4o');

// Generate content
for await (const response of llm.generateContentAsync(request, true)) {
  console.log(response.text);
}
```
### Streaming Support
All providers support streaming responses:
```typescript
const agent = AgentBuilder.withModel('claude-3-5-sonnet-20241022').build();

// Stream responses
for await (const chunk of agent.run('Explain quantum computing', { stream: true })) {
  process.stdout.write(chunk.text || '');
}
```
### Function Calling
All providers support function calling with ADK tools:
```typescript
import { AgentBuilder, BaseTool } from '@iqai/adk';
import { z } from 'zod/v4';

class WeatherTool extends BaseTool {
  name = 'get_weather';
  description = 'Get current weather';

  inputSchema = z.object({
    location: z.string()
  });

  async execute(input: { location: string }) {
    return { temp: 72, condition: 'sunny' };
  }
}

const agent = AgentBuilder.withModel('gpt-4o')
  .withTools(new WeatherTool())
  .build();
```
## Configuration Options
All providers support common configuration parameters:
- `maxOutputTokens`: Maximum number of tokens to generate
- `temperature`: Controls randomness (0.0 to 2.0); higher values are more creative
- `topP`: Nucleus sampling parameter (0.0 to 1.0)
- `topK`: Top-K sampling parameter (Google models only)
```typescript
const agent = AgentBuilder.withModel('gemini-2.5-flash')
  .withConfig({
    maxOutputTokens: 2048,
    temperature: 0.7,
    topP: 0.9
  })
  .build();
```
## Advanced Features

### Context Caching
Reduce costs and latency by caching prompts (Google and Anthropic):
```typescript
const agent = AgentBuilder.withModel('claude-3-5-sonnet-20241022')
  .withInstruction('You are a helpful assistant...')
  .withCacheConfig({ ttlSeconds: 3600 }) // 1 hour TTL
  .build();
```
See the provider-specific pages listed under Next Steps for caching details.
### Rate Limit Handling
ADK automatically detects and throws structured rate limit errors:
```typescript
import { RateLimitError } from '@iqai/adk';

try {
  const response = await agent.ask('Hello');
} catch (error) {
  if (error instanceof RateLimitError) {
    console.log('Rate limited:', error.provider, error.model);
    console.log('Retry after:', error.retryAfter);
  }
}
```
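A structured error makes simple retry loops practical. Below is a hedged sketch of a generic exponential-backoff wrapper; `withBackoff` is an assumption for illustration, not an ADK API:

```typescript
// Hypothetical helper: retry an async call with exponential backoff.
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  initialDelayMs = 1000
): Promise<T> {
  let delayMs = initialDelayMs;
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt >= maxRetries) throw error; // give up after maxRetries retries
      await new Promise((resolve) => setTimeout(resolve, delayMs));
      delayMs *= 2; // double the wait between attempts
    }
  }
}
```

In practice you would check `error instanceof RateLimitError` inside the catch and, when the provider supplies one, prefer `error.retryAfter` over the computed delay.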
### Registry System

The `LLMRegistry` manages all provider registrations:
```typescript
import { LLMRegistry, OpenAiLlm, AnthropicLlm, GoogleLlm } from '@iqai/adk';

// Providers are auto-registered on import.
// Manual registration (if needed):
LLMRegistry.registerLLM(OpenAiLlm);
LLMRegistry.registerLLM(AnthropicLlm);
LLMRegistry.registerLLM(GoogleLlm);

// Resolve the provider class for a model
const LlmClass = LLMRegistry.resolve('gpt-4o'); // Returns OpenAiLlm
const gptLlm = new LlmClass('gpt-4o');

// Or create an instance directly
const claudeLlm = LLMRegistry.newLLM('claude-3-5-sonnet-20241022');
```
For custom providers and registry details, see the Registry documentation.
## Provider Comparison

| Feature | OpenAI | Anthropic | Google | AI SDK |
|---|---|---|---|---|
| Streaming | ✓ | ✓ | ✓ | ✓ |
| Function Calling | ✓ | ✓ | ✓ | ✓ |
| Vision | ✓ | ✓ | ✓ | ✓ |
| Context Caching | ✗ | ✓ | ✓ | Provider-dependent |
| Max Context | 128K | 200K | 2M | Provider-dependent |
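The max-context row can guide model choice for long prompts. An illustrative filter using approximate figures from the table; neither the helper nor the exact token counts are part of ADK:

```typescript
// Approximate context windows from the comparison table, in tokens (illustrative).
const maxContextTokens: Record<string, number> = {
  'gpt-4o': 128_000,
  'claude-3-5-sonnet-20241022': 200_000,
  'gemini-2.5-flash': 2_000_000,
};

// Return the models whose context window can hold a prompt of the given size.
function modelsThatFit(promptTokens: number): string[] {
  return Object.entries(maxContextTokens)
    .filter(([, limit]) => promptTokens <= limit)
    .map(([model]) => model);
}
```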
## Next Steps

- **OpenAI Provider**: Learn about GPT models and configuration
- **Anthropic Provider**: Explore Claude models and prompt caching
- **Google Provider**: Configure Gemini with the API or Vertex AI
- **AI SDK Integration**: Use any provider via the Vercel AI SDK