Overview
Tool calling (also known as function calling) enables AI models to interact with external tools, APIs, and functions. The model can decide when to call tools and generate the appropriate arguments based on the conversation context.
Dedalus supports:
Client-side tool execution - Model returns tool calls for you to execute
Server-side tool execution - Dedalus executes tools automatically
MCP (Model Context Protocol) - Connect to MCP servers for extended capabilities
Define tools using the OpenAI function calling format:
import Dedalus from 'dedalus-labs';

const client = new Dedalus({
  apiKey: process.env.DEDALUS_API_KEY,
});
const tools = [
  {
    type: 'function' as const,
    function: {
      name: 'get_weather',
      description: 'Get the current weather for a location',
      parameters: {
        type: 'object',
        properties: {
          location: {
            type: 'string',
            description: 'The city and state, e.g., San Francisco, CA',
          },
          unit: {
            type: 'string',
            enum: ['celsius', 'fahrenheit'],
            description: 'Temperature unit',
          },
        },
        required: ['location'],
      },
    },
  },
];

const completion = await client.chat.completions.create({
  model: 'openai/gpt-4',
  messages: [
    { role: 'user', content: 'What is the weather in Paris?' }
  ],
  tools,
});
type - Tool type. Currently, only 'function' is supported for standard tools.
function (FunctionDefinition, required) - Function definition object containing:
name - Function name. Must be a-z, A-Z, 0-9, underscores, or dashes. Maximum 64 characters.
description - Description of what the function does. Used by the model to decide when to call it.
parameters - JSON Schema describing the function parameters. Defines the structure, types, and validation rules.
strict - Enable strict schema adherence. When true, the model follows the exact schema.
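As a sketch of the strict flag in practice (the `get_stock_price` tool here is hypothetical; strict-mode JSON Schemas conventionally set `additionalProperties: false` and list every property in `required`):

```typescript
// Hypothetical tool definition opting into strict schema adherence.
// With strict mode, the schema conventionally disallows extra keys
// and marks every property as required.
const strictTool = {
  type: 'function' as const,
  function: {
    name: 'get_stock_price',
    description: 'Get the latest price for a ticker symbol',
    strict: true,
    parameters: {
      type: 'object',
      properties: {
        ticker: { type: 'string', description: 'Ticker symbol, e.g., AAPL' },
      },
      required: ['ticker'],
      additionalProperties: false,
    },
  },
};
```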
With client-side execution, you receive tool calls and execute them yourself:
// Step 1: Get the model's tool call
const completion = await client.chat.completions.create({
  model: 'openai/gpt-4',
  messages: [
    { role: 'user', content: 'What is the weather in Paris and London?' }
  ],
  tools,
});

const message = completion.choices[0].message;

// Step 2: Check if model wants to call tools
if (message.tool_calls) {
  // Execute each tool call
  const toolResults = await Promise.all(
    message.tool_calls.map(async (toolCall) => {
      const args = JSON.parse(toolCall.function.arguments);

      // Execute the function
      let result;
      if (toolCall.function.name === 'get_weather') {
        result = await getWeather(args.location, args.unit);
      }

      return {
        role: 'tool' as const,
        tool_call_id: toolCall.id,
        content: JSON.stringify(result),
      };
    })
  );

  // Step 3: Send tool results back to the model
  const finalCompletion = await client.chat.completions.create({
    model: 'openai/gpt-4',
    messages: [
      { role: 'user', content: 'What is the weather in Paris and London?' },
      message,
      ...toolResults,
    ],
    tools,
  });

  console.log(finalCompletion.choices[0].message.content);
}
When the model decides to use a tool, the response contains:
interface ChatCompletionMessageToolCall {
  id: string;          // Unique tool call ID
  type: 'function';    // Tool type
  function: {
    name: string;      // Function name
    arguments: string; // JSON string of arguments
  };
}
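Because `arguments` arrives as a JSON string, models can occasionally emit malformed JSON; parsing defensively avoids an unhandled exception mid-loop. A minimal sketch (`safeParseArgs` is a hypothetical helper, not part of the SDK):

```typescript
// Hypothetical helper: parse a tool call's `arguments` string,
// returning null instead of throwing on malformed input.
function safeParseArgs(raw: string): Record<string, unknown> | null {
  try {
    const parsed = JSON.parse(raw);
    // Tool arguments should always be a JSON object.
    return typeof parsed === 'object' && parsed !== null ? parsed : null;
  } catch {
    return null; // malformed JSON from the model
  }
}
```

A null result can then be sent back as an error tool message instead of crashing the execution loop.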
Tool results must be sent back as tool role messages:
{
  role: 'tool',
  tool_call_id: 'call_abc123', // Must match the tool call ID
  content: JSON.stringify({
    temperature: 72,
    condition: 'sunny'
  }),
}
Enable automatic server-side execution with automatic_tool_execution:
const completion = await client.chat.completions.create({
  model: 'openai/gpt-4',
  messages: [
    { role: 'user', content: 'What is the weather in Paris?' }
  ],
  tools,
  automatic_tool_execution: true, // Dedalus executes tools
});

// Response already includes tool execution results
console.log(completion.choices[0].message.content);

// Check which tools were executed
if (completion.tools_executed) {
  console.log('Executed tools:', completion.tools_executed);
}
With server-side execution, Dedalus handles the full tool execution loop automatically, making multiple API calls as needed until the model produces a final response.
Control how the model uses tools with the tool_choice parameter:
Auto (Default)
const completion = await client.chat.completions.create({
  model: 'openai/gpt-4',
  messages: [...],
  tools,
  tool_choice: 'auto', // Model decides
});
Required
Force the model to use at least one tool:
const completion = await client.chat.completions.create({
  model: 'openai/gpt-4',
  messages: [...],
  tools,
  tool_choice: 'required', // Must call a tool
});
None
Prevent the model from using tools:
const completion = await client.chat.completions.create({
  model: 'openai/gpt-4',
  messages: [...],
  tools,
  tool_choice: 'none', // Don't call tools
});
Force a specific tool to be called:
const completion = await client.chat.completions.create({
  model: 'openai/gpt-4',
  messages: [...],
  tools,
  tool_choice: {
    type: 'tool',
    name: 'get_weather',
  },
});
By default, models can call multiple tools in parallel:
const completion = await client.chat.completions.create({
  model: 'openai/gpt-4',
  messages: [
    { role: 'user', content: 'Get weather for Paris, London, and Tokyo' }
  ],
  tools,
  parallel_tool_calls: true, // Allow parallel calls (default)
});

// message.tool_calls may contain multiple calls
const toolCalls = completion.choices[0].message.tool_calls;
console.log(`Model made ${toolCalls?.length} tool calls`);
Disable parallel calls:
const completion = await client.chat.completions.create({
  model: 'openai/gpt-4',
  messages: [...],
  tools,
  parallel_tool_calls: false, // One call at a time
});
MCP Server Integration
Connect to MCP (Model Context Protocol) servers for extended capabilities:
const completion = await client.chat.completions.create({
  model: 'openai/gpt-4',
  messages: [
    { role: 'user', content: 'Search for recent AI news' }
  ],
  mcp_servers: [
    'github:user/web-search-mcp',
    'https://mcp.example.com/server'
  ],
  automatic_tool_execution: true,
});
MCP server identifiers. Accepts:
GitHub repository: 'github:user/repo'
URL: 'https://mcp.example.com'
Server ID: 'server_123'
MCP Error Handling
Check for MCP server errors in the response:
const completion = await client.chat.completions.create({
  model: 'openai/gpt-4',
  messages: [...],
  mcp_servers: ['github:user/repo'],
  automatic_tool_execution: true,
});

if (completion.mcp_server_errors) {
  console.error('MCP errors:', completion.mcp_server_errors);
}
Dedalus supports custom tools with flexible input formats:
const tools = [
  {
    type: 'custom' as const,
    custom: {
      name: 'regex_extractor',
      description: 'Extract data using regex patterns',
      format: {
        type: 'grammar' as const,
        grammar: {
          syntax: 'regex' as const,
          definition: '[a-zA-Z0-9]+',
        },
      },
    },
  },
];

const completion = await client.chat.completions.create({
  model: 'openai/gpt-4',
  messages: [...],
  tools,
});
Complete Examples
const tools = [
  {
    type: 'function' as const,
    function: {
      name: 'add',
      description: 'Add two numbers',
      parameters: {
        type: 'object',
        properties: {
          a: { type: 'number' },
          b: { type: 'number' },
        },
        required: ['a', 'b'],
      },
    },
  },
  {
    type: 'function' as const,
    function: {
      name: 'multiply',
      description: 'Multiply two numbers',
      parameters: {
        type: 'object',
        properties: {
          a: { type: 'number' },
          b: { type: 'number' },
        },
        required: ['a', 'b'],
      },
    },
  },
];

const completion = await client.chat.completions.create({
  model: 'openai/gpt-4',
  messages: [
    { role: 'user', content: 'What is (5 + 3) * 2?' }
  ],
  tools,
});

const message = completion.choices[0].message;

if (message.tool_calls) {
  const results = message.tool_calls.map((call) => {
    const args = JSON.parse(call.function.arguments);

    let result;
    if (call.function.name === 'add') {
      result = args.a + args.b;
    } else if (call.function.name === 'multiply') {
      result = args.a * args.b;
    }

    return {
      role: 'tool' as const,
      tool_call_id: call.id,
      content: String(result),
    };
  });

  const final = await client.chat.completions.create({
    model: 'openai/gpt-4',
    messages: [
      { role: 'user', content: 'What is (5 + 3) * 2?' },
      message,
      ...results,
    ],
    tools,
  });

  console.log(final.choices[0].message.content);
}
Weather API Integration
import Dedalus from 'dedalus-labs';

const client = new Dedalus({
  apiKey: process.env.DEDALUS_API_KEY,
});

async function getWeather(location: string) {
  // Call external weather API
  const response = await fetch(
    `https://api.weather.com/v1/current?location=${location}`
  );
  return response.json();
}

const tools = [
  {
    type: 'function' as const,
    function: {
      name: 'get_weather',
      description: 'Get current weather for a location',
      parameters: {
        type: 'object',
        properties: {
          location: {
            type: 'string',
            description: 'City name or coordinates',
          },
        },
        required: ['location'],
      },
    },
  },
];

const messages = [
  { role: 'user' as const, content: 'What should I wear in Paris today?' }
];

let completion = await client.chat.completions.create({
  model: 'openai/gpt-4',
  messages,
  tools,
});

let assistantMessage = completion.choices[0].message;
messages.push(assistantMessage);

// Keep executing tools until the model produces a final answer
while (assistantMessage.tool_calls) {
  const toolResults = await Promise.all(
    assistantMessage.tool_calls.map(async (call) => {
      const args = JSON.parse(call.function.arguments);
      const weather = await getWeather(args.location);
      return {
        role: 'tool' as const,
        tool_call_id: call.id,
        content: JSON.stringify(weather),
      };
    })
  );

  messages.push(...toolResults);

  completion = await client.chat.completions.create({
    model: 'openai/gpt-4',
    messages,
    tools,
  });

  assistantMessage = completion.choices[0].message;
  messages.push(assistantMessage);
}

console.log(assistantMessage.content);
Tool calls can be streamed incrementally:
const stream = await client.chat.completions.create({
  model: 'openai/gpt-4',
  messages: [
    { role: 'user', content: 'What is the weather in Paris?' }
  ],
  tools,
  stream: true,
});

const toolCalls: Record<number, any> = {};

for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta;

  if (delta?.tool_calls) {
    for (const toolCall of delta.tool_calls) {
      const index = toolCall.index;

      if (!toolCalls[index]) {
        // First chunk for this call: record id, name, and initial arguments
        toolCalls[index] = {
          id: toolCall.id || '',
          type: 'function',
          function: {
            name: toolCall.function?.name || '',
            arguments: toolCall.function?.arguments || '',
          },
        };
      } else if (toolCall.function?.arguments) {
        // Subsequent chunks append to the arguments string
        toolCalls[index].function.arguments += toolCall.function.arguments;
      }
    }
  }
}

const finalToolCalls = Object.values(toolCalls);
console.log('Tool calls:', finalToolCalls);
Best Practices
Clear descriptions - Write detailed function descriptions to help the model decide when to use them
Validate arguments - Always validate and sanitize tool call arguments before execution
Error handling - Wrap tool execution in try/catch and return errors as tool results
Type safety - Use TypeScript for type-safe tool definitions and argument parsing
Idempotency - Design tools to be idempotent when possible
Rate limiting - Implement rate limiting for external API calls
Logging - Log tool calls and results for debugging and monitoring
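The "validate arguments" point above can be sketched with a runtime type guard for the `get_weather` tool defined earlier (a minimal example; a schema library such as zod would be more robust in production):

```typescript
// Expected shape of get_weather arguments.
interface WeatherArgs {
  location: string;
  unit?: 'celsius' | 'fahrenheit';
}

// Runtime validation before executing the tool: JSON.parse output is
// untrusted model output, so check it rather than casting blindly.
function isWeatherArgs(value: unknown): value is WeatherArgs {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  if (typeof v.location !== 'string') return false;
  if (v.unit !== undefined && v.unit !== 'celsius' && v.unit !== 'fahrenheit') {
    return false;
  }
  return true;
}
```

When the guard fails, return a descriptive error as the tool result so the model can retry with corrected arguments.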
TypeScript Types
Key types for tool calling:
import type {
  ChatCompletionToolParam,
  ChatCompletionMessageToolCall,
  ChatCompletionToolMessageParam,
  ToolChoice,
  FunctionDefinition,
} from 'dedalus-labs';

// Define a tool
const tool: ChatCompletionToolParam = {
  type: 'function',
  function: { /* ... */ },
};

// Handle tool call
function handleToolCall(toolCall: ChatCompletionMessageToolCall) {
  const { id, function: fn } = toolCall;
  const args = JSON.parse(fn.arguments);
  // ...
}
Error Handling
try {
  const completion = await client.chat.completions.create({
    model: 'openai/gpt-4',
    messages: [...],
    tools,
  });

  const message = completion.choices[0].message;

  if (message.tool_calls) {
    const results = await Promise.all(
      message.tool_calls.map(async (call) => {
        try {
          const args = JSON.parse(call.function.arguments);
          const result = await executeFunction(call.function.name, args);
          return {
            role: 'tool' as const,
            tool_call_id: call.id,
            content: JSON.stringify(result),
          };
        } catch (error) {
          // Return error as tool result (catch bindings are `unknown`,
          // so narrow before reading `.message`)
          return {
            role: 'tool' as const,
            tool_call_id: call.id,
            content: JSON.stringify({
              error: error instanceof Error ? error.message : String(error),
            }),
          };
        }
      })
    );
  }
} catch (error) {
  console.error('Tool calling error:', error);
}
Next Steps
Chat Completions Learn about chat completions
Streaming Implement streaming responses