Overview
The AI SDK integration (AiSdkLlm) enables ADK to work with any language model supported by Vercel’s AI SDK, including providers not directly implemented in ADK. This provides maximum flexibility while maintaining ADK’s unified API.
Source: packages/adk/src/models/ai-sdk.ts:59
Supported Providers
Through the AI SDK, you can use:
- **OpenAI**: GPT-4, GPT-3.5, o1 models
- **Anthropic**: Claude 3 and Claude 4 models
- **Google**: Gemini models via Google AI
- **Mistral**: Mistral Large, Medium, Small
- **Cohere**: Command, Command-Light
- **Azure OpenAI**: Azure-hosted GPT models
- **AWS Bedrock**: Claude, Llama, Titan models
- **Ollama**: Local models (Llama, Mistral, etc.)
- And many more; see AI SDK Providers.
Installation
Install the AI SDK core package plus the package for your chosen provider:

```bash
# OpenAI
npm install ai @ai-sdk/openai

# Anthropic
npm install ai @ai-sdk/anthropic

# Google
npm install ai @ai-sdk/google

# Mistral
npm install ai @ai-sdk/mistral

# Ollama
npm install ai ollama-ai-provider
```
Basic Usage
With AgentBuilder
```typescript
import { AgentBuilder, AiSdkLlm } from '@iqai/adk';
import { anthropic } from '@ai-sdk/anthropic';

// Create AI SDK model instance
const model = anthropic('claude-3-5-sonnet-20241022');

// Wrap it in AiSdkLlm
const llm = new AiSdkLlm(model);

// Use with AgentBuilder
const agent = AgentBuilder.withLlm(llm)
  .withInstruction('You are a helpful assistant')
  .build();

const response = await agent.ask('What is TypeScript?');
console.log(response.text);
```
Direct Usage
```typescript
import { AiSdkLlm } from '@iqai/adk';
import { openai } from '@ai-sdk/openai';

const llm = new AiSdkLlm(openai('gpt-4o'));

const request = {
  contents: [
    {
      role: 'user',
      parts: [{ text: 'Hello!' }]
    }
  ]
};

for await (const response of llm.generateContentAsync(request)) {
  console.log(response.text);
}
```
Provider Examples
OpenAI
```typescript
import { AgentBuilder, AiSdkLlm } from '@iqai/adk';
import { openai } from '@ai-sdk/openai';

const agent = AgentBuilder
  .withLlm(new AiSdkLlm(openai('gpt-4o')))
  .build();
```
Anthropic
```typescript
import { anthropic } from '@ai-sdk/anthropic';

const agent = AgentBuilder
  .withLlm(new AiSdkLlm(anthropic('claude-3-5-sonnet-20241022')))
  .build();
```
Google
```typescript
import { google } from '@ai-sdk/google';

const agent = AgentBuilder
  .withLlm(new AiSdkLlm(google('gemini-2.5-flash')))
  .build();
```
Mistral
```typescript
import { mistral } from '@ai-sdk/mistral';

const agent = AgentBuilder
  .withLlm(new AiSdkLlm(mistral('mistral-large-latest')))
  .build();
```
Ollama (Local Models)
```typescript
import { ollama } from 'ollama-ai-provider';

const agent = AgentBuilder
  .withLlm(new AiSdkLlm(ollama('llama3.1')))
  .build();
```
AWS Bedrock
```typescript
import { bedrock } from '@ai-sdk/amazon-bedrock';

const agent = AgentBuilder
  .withLlm(new AiSdkLlm(
    bedrock('anthropic.claude-3-5-sonnet-20241022-v2:0')
  ))
  .build();
```
Configuration
Environment Variables
Set API keys for your chosen provider:
```bash
# OpenAI
OPENAI_API_KEY=sk-...

# Anthropic
ANTHROPIC_API_KEY=sk-ant-...

# Google
GOOGLE_GENERATIVE_AI_API_KEY=AI...

# Mistral
MISTRAL_API_KEY=...

# AWS (for Bedrock)
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION=us-east-1
```
Model Configuration
```typescript
import { AgentBuilder, AiSdkLlm } from '@iqai/adk';
import { openai } from '@ai-sdk/openai';

const agent = AgentBuilder
  .withLlm(new AiSdkLlm(openai('gpt-4o')))
  .withConfig({
    maxOutputTokens: 2048, // Maximum tokens to generate
    temperature: 0.7,      // Controls randomness (0.0 - 2.0)
    topP: 0.9,             // Nucleus sampling parameter (0.0 - 1.0)
  })
  .build();
```
Provider Detection
The AI SDK provider automatically detects the model’s provider:
```typescript
// From ai-sdk.ts:66-71
private static readonly PROVIDER_PATTERNS: Record<ModelProvider, RegExp[]> = {
  [ModelProvider.GOOGLE]: [/^google\//i, /^gemini/i, /^models\/gemini/i],
  [ModelProvider.ANTHROPIC]: [/^anthropic\//i, /^claude/i],
  [ModelProvider.UNKNOWN]: [],
};
```
This enables provider-specific optimizations like context caching.
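The matching logic can be sketched as a standalone helper. This is an illustrative reimplementation using the patterns shown above; the helper name and string return values are hypothetical, not part of the ADK API:

```typescript
// Pattern table mirroring PROVIDER_PATTERNS above (string keys
// stand in for the ModelProvider enum).
const PROVIDER_PATTERNS: Record<string, RegExp[]> = {
  google: [/^google\//i, /^gemini/i, /^models\/gemini/i],
  anthropic: [/^anthropic\//i, /^claude/i],
};

// Returns the first provider whose patterns match the model ID,
// falling back to 'unknown'.
function detectProvider(modelId: string): string {
  for (const [provider, patterns] of Object.entries(PROVIDER_PATTERNS)) {
    if (patterns.some((pattern) => pattern.test(modelId))) {
      return provider;
    }
  }
  return 'unknown';
}
```

An OpenAI model ID such as `gpt-4o` falls through to `unknown`, which is why provider-specific optimizations only apply to Google and Anthropic models.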
Streaming
Basic Streaming
```typescript
import { AgentBuilder, AiSdkLlm } from '@iqai/adk';
import { anthropic } from '@ai-sdk/anthropic';

const llm = new AiSdkLlm(anthropic('claude-3-5-sonnet-20241022'));
const agent = AgentBuilder.withLlm(llm).build();

for await (const chunk of agent.run('Write a story', { stream: true })) {
  process.stdout.write(chunk.text || '');
}
```
Streaming Implementation
The provider uses AI SDK’s `streamText`:

```typescript
// From ai-sdk.ts:310-372
private async *handleStreamingResponse(
  requestParams: AiSdkRequestParams,
  provider: ModelProvider,
  cacheMetadata: CacheMetadata | null,
): AsyncGenerator<LlmResponse, void, unknown> {
  const result = streamText(requestParams);

  let accumulatedText = "";
  let cacheMetadataEmitted = false;

  for await (const delta of result.textStream) {
    accumulatedText += delta;
    yield new LlmResponse({
      content: { role: "model", parts: [{ text: delta }] },
      partial: true,
      cacheMetadata: !cacheMetadataEmitted ? cacheMetadata : undefined,
    });
    cacheMetadataEmitted = true;
  }

  // ... handle tool calls and final response
}
```
Function Calling
```typescript
import { AgentBuilder, AiSdkLlm, BaseTool } from '@iqai/adk';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod/v4';

class CalculatorTool extends BaseTool {
  name = 'calculate';
  description = 'Perform mathematical calculations';

  inputSchema = z.object({
    expression: z.string().describe('Math expression to evaluate'),
  });

  async execute(input: { expression: string }) {
    // NOTE: eval is unsafe on untrusted input; use a proper expression
    // parser in production. Shown here for brevity only.
    const result = eval(input.expression);
    return { result };
  }
}

const agent = AgentBuilder
  .withLlm(new AiSdkLlm(openai('gpt-4o')))
  .withTools(new CalculatorTool())
  .build();

const response = await agent.ask('What is 25 * 34?');
```
The provider converts ADK tools to AI SDK format:
```typescript
// From ai-sdk.ts:488-506
private convertToAiSdkTools(llmRequest: LlmRequest): Record<string, Tool> {
  const tools: Record<string, Tool> = {};

  if (llmRequest.config?.tools) {
    for (const toolConfig of llmRequest.config.tools) {
      if ("functionDeclarations" in toolConfig) {
        for (const funcDecl of toolConfig.functionDeclarations) {
          tools[funcDecl.name] = {
            description: funcDecl.description,
            inputSchema: jsonSchema(
              this.transformSchemaForAiSdk(funcDecl.parameters || {}),
            ),
          };
        }
      }
    }
  }

  return tools;
}
```
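The core of that conversion is a simple map from function declarations to named tool entries. The sketch below shows the shape of the transformation in isolation; the real implementation additionally wraps `parameters` with AI SDK's `jsonSchema()`, which is omitted here to keep the sketch dependency-free:

```typescript
// Minimal shape of an ADK function declaration (illustrative).
interface FunctionDeclaration {
  name: string;
  description?: string;
  parameters?: Record<string, unknown>;
}

// Builds a name-keyed tool map, as the AI SDK expects.
function toToolMap(declarations: FunctionDeclaration[]) {
  const tools: Record<string, { description?: string; inputSchema: unknown }> = {};
  for (const decl of declarations) {
    tools[decl.name] = {
      description: decl.description,
      inputSchema: decl.parameters ?? {},
    };
  }
  return tools;
}
```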
Context Caching
Google Provider Caching
For Google models, context caching is automatically supported:
```typescript
import { AgentBuilder, AiSdkLlm } from '@iqai/adk';
import { google } from '@ai-sdk/google';

const agent = AgentBuilder
  .withLlm(new AiSdkLlm(google('gemini-2.5-flash')))
  .withInstruction('You are a helpful assistant...')
  .withCacheConfig({ ttlSeconds: 3600 })
  .build();
```
The provider handles cache creation and retrieval:
```typescript
// From ai-sdk.ts:145-171
private async handleGoogleContextCaching(
  llmRequest: LlmRequest,
): Promise<CacheMetadata | null> {
  this.logger.debug("Handling Google context caching");

  // Ensure cache manager is initialized
  this.initializeCacheManager();

  // Normalize model ID for Google API compatibility
  const modelId = this.getModelId(this.modelInstance);
  llmRequest.model = this.normalizeGoogleModelId(modelId);
  this.logger.debug(`Using model for caching: ${llmRequest.model}`);

  // Handle caching through the manager
  const cacheMetadata =
    await this.cacheManager!.handleContextCaching(llmRequest);

  if (cacheMetadata?.cacheName) {
    this.logger.debug(`Using cache: ${cacheMetadata.cacheName}`);
  } else if (cacheMetadata) {
    this.logger.debug("Cache fingerprint only, no active cache");
  }

  return cacheMetadata;
}
```
Anthropic Provider Caching
For Anthropic models, prompt caching is supported:
```typescript
import { anthropic } from '@ai-sdk/anthropic';

const agent = AgentBuilder
  .withLlm(new AiSdkLlm(anthropic('claude-3-5-sonnet-20241022')))
  .withCacheConfig({ ttlSeconds: 3600 })
  .build();
```
Cache control is added via provider options:
```typescript
// From ai-sdk.ts:210-227
if (provider === ModelProvider.ANTHROPIC && llmRequest.cacheConfig) {
  const ttl =
    llmRequest.cacheConfig.ttlSeconds &&
    llmRequest.cacheConfig.ttlSeconds > 1800
      ? "1h"
      : "5m";

  params.providerOptions = {
    ...params.providerOptions,
    anthropic: {
      cacheControl: {
        type: "ephemeral",
        ttl,
      },
    },
  };
}
```
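The TTL selection above can be isolated into a pure function. Anthropic's ephemeral prompt caching only offers two tiers, so any requested `ttlSeconds` above 1800 (30 minutes) maps to the 1-hour tier; everything else, including an unset value, maps to 5 minutes. This is a standalone sketch, not an exported ADK helper:

```typescript
// Mirrors the ternary in the snippet above.
function anthropicCacheTtl(ttlSeconds?: number): '5m' | '1h' {
  return ttlSeconds && ttlSeconds > 1800 ? '1h' : '5m';
}
```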
Error Handling
Rate Limit Errors
```typescript
import { RateLimitError } from '@iqai/adk';

try {
  const response = await agent.ask('Hello');
} catch (error) {
  if (error instanceof RateLimitError) {
    console.log('Rate limited!');
    console.log('Provider:', error.provider); // 'ai-sdk'
    console.log('Model:', error.model);
  }
}
```
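Since rate-limit errors are usually transient, a retry wrapper with exponential backoff is a common pattern around `agent.ask()`. The sketch below is not an ADK utility; in real code you would wire `isRetriable` to `error instanceof RateLimitError`:

```typescript
// Exponential backoff delay: 1s, 2s, 4s, ... capped at 30s.
function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 30_000): number {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

// Retries an async operation when `isRetriable` says the error is
// transient; rethrows otherwise or once attempts are exhausted.
async function withRetry<T>(
  op: () => Promise<T>,
  isRetriable: (error: unknown) => boolean,
  maxAttempts = 3,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await op();
    } catch (error) {
      if (attempt + 1 >= maxAttempts || !isRetriable(error)) throw error;
      await new Promise((r) => setTimeout(r, backoffDelayMs(attempt)));
    }
  }
}
```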
Error Responses
The provider converts errors to LlmResponse:
```typescript
// From ai-sdk.ts:292-305
catch (error: any) {
  if (RateLimitError.isRateLimitError(error)) {
    throw RateLimitError.fromError(error, "ai-sdk", this.model);
  }

  this.logger.error(`AI SDK Error: ${String(error)}`, {
    error,
    llmRequest,
  });

  yield LlmResponse.fromError(error, {
    errorCode: "AI_SDK_ERROR",
    model: this.model,
  });
}
```
Advanced Features
Custom Provider Configuration
```typescript
import { createOpenAI } from '@ai-sdk/openai';

const customOpenAI = createOpenAI({
  apiKey: process.env.CUSTOM_OPENAI_KEY,
  baseURL: 'https://custom-endpoint.com/v1',
});

const agent = AgentBuilder
  .withLlm(new AiSdkLlm(customOpenAI('gpt-4o')))
  .build();
```
Azure OpenAI
```typescript
import { createAzure } from '@ai-sdk/azure';

const azureOpenAI = createAzure({
  resourceName: 'your-resource-name',
  apiKey: process.env.AZURE_API_KEY,
});

const agent = AgentBuilder
  .withLlm(new AiSdkLlm(azureOpenAI('gpt-4o')))
  .build();
```
Local Models with Ollama
```typescript
import { createOllama } from 'ollama-ai-provider';

// Point at a local Ollama server
const ollama = createOllama({
  baseURL: 'http://localhost:11434/api',
});

const agent = AgentBuilder
  .withLlm(new AiSdkLlm(ollama('llama3.1')))
  .build();

const response = await agent.ask('Explain TypeScript');
```
Multiple Providers
```typescript
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';
import { google } from '@ai-sdk/google';

// Different agents with different providers
const gptAgent = AgentBuilder
  .withLlm(new AiSdkLlm(openai('gpt-4o')))
  .build();

const claudeAgent = AgentBuilder
  .withLlm(new AiSdkLlm(anthropic('claude-3-5-sonnet-20241022')))
  .build();

const geminiAgent = AgentBuilder
  .withLlm(new AiSdkLlm(google('gemini-2.5-flash')))
  .build();

// Use different agents for different tasks
const codeReview = await gptAgent.ask('Review this code');
const analysis = await claudeAgent.ask('Analyze this data');
const summary = await geminiAgent.ask('Summarize this document');
```
All providers return usage metadata:
```typescript
const response = await agent.ask('Hello');

if (response.usageMetadata) {
  console.log('Input tokens:', response.usageMetadata.promptTokenCount);
  console.log('Output tokens:', response.usageMetadata.candidatesTokenCount);
  console.log('Total tokens:', response.usageMetadata.totalTokenCount);
}
```
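Usage metadata makes simple cost tracking straightforward. The sketch below accumulates token counts across responses; the per-million-token prices are placeholder parameters you would fill in from your provider's pricing page, not real rates:

```typescript
// Shape follows the usageMetadata fields shown above.
interface UsageMetadata {
  promptTokenCount: number;
  candidatesTokenCount: number;
  totalTokenCount: number;
}

// Sums input/output tokens and applies caller-supplied rates
// (USD per million tokens).
function estimateCostUsd(
  usages: UsageMetadata[],
  inputPricePerMillion: number,  // placeholder rate
  outputPricePerMillion: number, // placeholder rate
): number {
  const inputTokens = usages.reduce((sum, u) => sum + u.promptTokenCount, 0);
  const outputTokens = usages.reduce((sum, u) => sum + u.candidatesTokenCount, 0);
  return (
    (inputTokens / 1_000_000) * inputPricePerMillion +
    (outputTokens / 1_000_000) * outputPricePerMillion
  );
}
```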
Best Practices
When to Use the AI SDK Provider
Use AiSdkLlm when you:
- Need a provider not directly implemented in ADK
- Want to use local models (Ollama)
- Require custom provider configuration
- Are working with Azure OpenAI or AWS Bedrock
When to Use Direct Providers
Prefer ADK's native providers when you:
- Are using OpenAI, Anthropic, or Google (native providers offer better optimization)
- Need provider-specific features not exposed through the AI SDK
- Want maximum performance (native providers are slightly faster)
Model Selection
- Use GPT-4o (via OpenAI) for general tasks
- Use Claude 3.5 Sonnet (via Anthropic) for reasoning
- Use Gemini 2.5 Flash (via Google) for large context
- Use Mistral for European data residency
- Use Ollama for offline/local deployment

Cost Optimization
- Enable context caching for repeated prompts
- Use smaller models (gpt-4o-mini, claude-3-haiku) where appropriate
- Monitor usage metadata to track costs
- Consider local models (Ollama) for development
Limitations
**No Pattern Matching**: The AI SDK provider doesn’t register patterns with LlmRegistry. Use direct instantiation with `new AiSdkLlm(model)` or `AgentBuilder.withLlm()`.
**Provider-Specific Features**: Some features (like Google’s grounding or Anthropic’s extended thinking) may not be fully exposed through the AI SDK’s unified interface.
Next Steps
AI SDK Documentation Explore all AI SDK providers and features
OpenAI Provider Use native OpenAI provider for GPT models
Anthropic Provider Use native Anthropic provider for Claude
Registry System Learn about provider registration