Overview
Function calling allows chat models to call external functions and tools. This enables you to extend the model’s capabilities with custom logic, API calls, database queries, and more.
Basic Function Calling
Define a function using OpenAI::BaseModel and pass it to the tools parameter:
#!/usr/bin/env ruby
# frozen_string_literal: true
# typed: strong
require_relative "../lib/openai"
class GetWeather < OpenAI::BaseModel
required :location, String, doc: "City and country, e.g. Bogotá, Colombia"
end
# gets API Key from environment variable `OPENAI_API_KEY`
client = OpenAI::Client.new
chat_completion = client.chat.completions.create(
model: "gpt-4o-2024-08-06",
messages: [
{
role: :user,
content: "What's the weather like in Paris today?"
}
],
tools: [GetWeather]
)
chat_completion
.choices
.reject { _1.message.refusal }
.flat_map { _1.message.tool_calls.to_a }
.each do |tool_call|
case tool_call
when OpenAI::Chat::ChatCompletionMessageFunctionToolCall
# parsed is an instance of `GetWeather`
pp(tool_call.function.parsed)
else
puts("Unexpected tool call type: #{tool_call.type}")
end
end
Tools are defined as classes that inherit from OpenAI::BaseModel:
class GetWeather < OpenAI::BaseModel
required :location, String, doc: "City and country, e.g. Bogotá, Colombia"
end
Multiple Parameters
Define tools with multiple parameters and types:
class SearchDatabase < OpenAI::BaseModel
required :query, String, doc: "The search query"
required :max_results, Integer, doc: "Maximum number of results to return"
required :include_archived, OpenAI::Boolean, doc: "Whether to include archived items"
required :filters, OpenAI::ArrayOf[String], nil?: true, doc: "Optional filter criteria"
end
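A handler for such a tool typically normalizes its arguments before touching a real database. Here is a minimal sketch with a hypothetical run_search helper; a plain Struct stands in for the parsed object, which in the SDK exposes each required field as a reader method:

```ruby
# Stand-in for the parsed `SearchDatabase` arguments; the real SDK
# returns an object whose readers match the `required` fields.
SearchArgs = Struct.new(:query, :max_results, :include_archived, :filters, keyword_init: true)

# Hypothetical handler: validates and clamps arguments before use.
def run_search(args)
  raise ArgumentError, "query must not be empty" if args.query.to_s.strip.empty?
  limit = args.max_results.clamp(1, 100)  # never trust model-chosen limits
  filters = Array(args.filters)           # fields marked `nil?: true` may be nil
  { query: args.query, limit: limit, archived: !!args.include_archived, filters: filters }
end

pp run_search(SearchArgs.new(query: "ruby", max_results: 500, include_archived: false, filters: nil))
```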
Extract and process tool calls from the response:
chat_completion
.choices
.reject { _1.message.refusal }
.flat_map { _1.message.tool_calls.to_a }
.each do |tool_call|
case tool_call
when OpenAI::Chat::ChatCompletionMessageFunctionToolCall
# Access the parsed arguments
args = tool_call.function.parsed
# Execute your function with the parsed arguments
result = execute_weather_api(args.location)
# Return the result to the model if needed
pp(result)
else
puts("Unexpected tool call type: #{tool_call.type}")
end
end
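When several tools are registered, the same case expression can dispatch on the class of the parsed arguments. A sketch of that routing, using plain Ruby classes as stand-ins for the BaseModel tools and a hypothetical dispatch helper:

```ruby
# Stand-ins for the parsed tool-argument objects.
GetWeather = Struct.new(:location, keyword_init: true)
SearchDatabase = Struct.new(:query, keyword_init: true)

# Hypothetical dispatcher: routes each parsed tool call to its handler.
def dispatch(parsed)
  case parsed
  when GetWeather     then "weather for #{parsed.location}"
  when SearchDatabase then "search results for #{parsed.query}"
  else raise "unknown tool: #{parsed.class}"
  end
end

puts dispatch(GetWeather.new(location: "Paris"))
```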
You can stream tool calls in real-time to see arguments as they’re generated:
#!/usr/bin/env ruby
# frozen_string_literal: true
require_relative "../../lib/openai"
class GetWeather < OpenAI::BaseModel
required :location, String
end
# gets API Key from environment variable `OPENAI_API_KEY`
client = OpenAI::Client.new
stream = client.chat.completions.stream(
model: "gpt-4o-mini",
tools: [GetWeather],
messages: [
{role: :user, content: "Call get_weather with location San Francisco in JSON."}
]
)
stream.each do |event|
case event
when OpenAI::Streaming::ChatFunctionToolCallArgumentsDeltaEvent
puts("delta: #{event.arguments_delta}")
pp(event.parsed)
when OpenAI::Streaming::ChatFunctionToolCallArgumentsDoneEvent
puts("--- Tool call finalized ---")
puts("name: #{event.name}")
puts("args: #{event.arguments}")
pp(event.parsed)
end
end
When streaming tool calls, you'll encounter two event types.
ChatFunctionToolCallArgumentsDeltaEvent contains incremental argument data as it's generated:
when OpenAI::Streaming::ChatFunctionToolCallArgumentsDeltaEvent
puts("delta: #{event.arguments_delta}")
pp(event.parsed) # Partially parsed object
ChatFunctionToolCallArgumentsDoneEvent signals that the tool call is complete:
when OpenAI::Streaming::ChatFunctionToolCallArgumentsDoneEvent
puts("name: #{event.name}")
puts("args: #{event.arguments}")
pp(event.parsed) # Fully parsed object
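The accumulation behind these events can be sketched without the SDK: each delta carries a fragment of the argument JSON, and the fragments concatenate into the complete string that the done event exposes. The fragments below are illustrative:

```ruby
require "json"

deltas = ['{"loca', 'tion": "San ', 'Francisco"}']  # illustrative fragments
buffer = +""
deltas.each { |d| buffer << d }  # what happens per delta event
pp JSON.parse(buffer)            # parseable only once the stream completes
```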
Multi-turn Conversations
Implement multi-turn conversations by adding tool results back to the message history:
Initial request
Send the user's message with available tools:
messages = [
{role: :user, content: "What's the weather in Tokyo?"}
]
response = client.chat.completions.create(
model: "gpt-4o-2024-08-06",
messages: messages,
tools: [GetWeather]
)
Process tool calls
Execute the requested function:
tool_call = response.choices.first.message.tool_calls.first
args = tool_call.function.parsed
result = get_weather(args.location)
Add results to conversation
Return the tool result to the model:
messages << response.choices.first.message.to_h
messages << {
role: :tool,
tool_call_id: tool_call.id,
content: result.to_json
}
Get final response
Request the final answer:
final_response = client.chat.completions.create(
model: "gpt-4o-2024-08-06",
messages: messages
)
puts final_response.choices.first.message.content
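The message-history assembly in these steps can be exercised without calling the API. A minimal sketch with a hypothetical tool_result_message helper; the tool-call id and weather payload are illustrative:

```ruby
require "json"

# Hypothetical helper: wraps a Ruby value as a :tool message that
# answers a specific tool call from the assistant.
def tool_result_message(tool_call_id, result)
  { role: :tool, tool_call_id: tool_call_id, content: result.to_json }
end

messages = [{ role: :user, content: "What's the weather in Tokyo?" }]
# The assistant's turn, including its tool call (id is illustrative).
messages << { role: :assistant, tool_calls: [{ id: "call_123", type: "function" }] }
messages << tool_result_message("call_123", { temp_c: 21, conditions: "sunny" })

pp messages.map { _1[:role] }  # the history the final request replays
```

The final completions request then receives this messages array and answers with the tool result in context.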
Best Practices
Always validate and sanitize tool arguments before executing functions. Even with structured outputs, treat tool call arguments as untrusted input.
Use descriptive tool names
Tool class names should clearly describe what the function does (e.g., GetWeather, not Tool1).
Add comprehensive documentation
Use the doc parameter to help the model understand when and how to use each parameter.
Handle errors gracefully
Implement error handling for tool execution and return meaningful error messages.
Validate tool arguments
Always validate parsed arguments before executing potentially dangerous operations.
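The validation advice above can be sketched as a guard that rejects suspicious input before it reaches anything dangerous. The pattern and length limit here are illustrative, not a complete defense:

```ruby
# Illustrative allow-list: letters, spaces, and common punctuation only.
LOCATION_PATTERN = /\A[[:alpha:][:space:],.'-]{1,100}\z/

# Treat tool arguments as untrusted input: check type and content
# before passing them to any external system.
def validate_location!(location)
  unless location.is_a?(String) && location.match?(LOCATION_PATTERN)
    raise ArgumentError, "invalid location: #{location.inspect}"
  end
  location.strip
end

puts validate_location!("Bogotá, Colombia")
```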
Control when the model should use tools:
Auto (default): let the model decide whether to call a function:
client.chat.completions.create(
model: "gpt-4o-2024-08-06",
messages: messages,
tools: [GetWeather]
# tool_choice defaults to "auto"
)
Required: force the model to call a function:
client.chat.completions.create(
model: "gpt-4o-2024-08-06",
messages: messages,
tools: [GetWeather],
tool_choice: "required"
)
Specific tool: force the model to call a specific function:
client.chat.completions.create(
model: "gpt-4o-2024-08-06",
messages: messages,
tools: [GetWeather, GetForecast],
tool_choice: {type: "function", function: {name: "GetWeather"}}
)
None: prevent the model from calling any functions:
client.chat.completions.create(
model: "gpt-4o-2024-08-06",
messages: messages,
tools: [GetWeather],
tool_choice: "none"
)
Next Steps
Structured Outputs
Learn more about defining complex data structures with BaseModel
Streaming
Stream tool calls and arguments in real-time