
Overview

The Config struct allows you to customize how the AI models behave. You can adjust reasoning effort levels and enable web search capabilities.

Basic Configuration

The simplest way to use configuration:
use t3router::t3::config::Config;

let config = Config::new();

// Use with any request
let response = client.send(
    "claude-4-sonnet",
    Some(Message::new(Type::User, "Hello!".to_string())),
    Some(config),
).await?;

Configuration Options

The Config struct (config.rs:27-47) has two main fields:
pub struct Config {
    pub include_search: bool,
    pub reasoning_effort: ReasoningEffort,
}

Default Values

When you call Config::new(), you get:
  • include_search: false
  • reasoning_effort: ReasoningEffort::Low
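These defaults can be checked in isolation. The sketch below re-declares the two types locally so it runs without the t3router crate (the real definitions live in config.rs, and their derives may differ) and asserts the documented values:

```rust
// Local stand-ins for t3router::t3::config::{Config, ReasoningEffort};
// illustrative only -- the real derives on these types may differ.
#[derive(Clone, Debug, PartialEq)]
#[allow(dead_code)]
pub enum ReasoningEffort {
    Low,
    Medium,
    High,
}

#[derive(Clone, Debug)]
pub struct Config {
    pub include_search: bool,
    pub reasoning_effort: ReasoningEffort,
}

impl Config {
    pub fn new() -> Self {
        Config {
            include_search: false,
            reasoning_effort: ReasoningEffort::Low,
        }
    }
}

fn main() {
    let config = Config::new();
    // Matches the documented defaults: no search, low effort.
    assert!(!config.include_search);
    assert_eq!(config.reasoning_effort, ReasoningEffort::Low);
    println!("defaults verified");
}
```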

Reasoning Effort Levels

The ReasoningEffort enum (config.rs:1-24) controls how much computational effort the model uses:
pub enum ReasoningEffort {
    Low,
    Medium,
    High,
}

When to Use Each Level

Low

Best for:
  • Quick responses
  • Simple questions
  • Casual conversations
  • High-volume requests
Characteristics:
  • Fastest response time
  • Lower computational cost
  • Suitable for most use cases

Medium

Best for:
  • Moderate complexity tasks
  • Technical questions
  • Code explanations
  • Balanced speed/quality
Characteristics:
  • Balanced performance
  • More thorough reasoning
  • Slightly longer response time

High

Best for:
  • Complex problem-solving
  • Research questions
  • Critical analysis
  • Maximum accuracy needed
Characteristics:
  • Slowest response time
  • Most thorough reasoning
  • Highest quality output
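The guidance above can be condensed into a small helper. This is an illustrative sketch only: the task labels and the local ReasoningEffort stand-in are invented for the example and are not part of t3router.

```rust
// Stand-in for t3router::t3::config::ReasoningEffort, for illustration.
#[derive(Debug, PartialEq)]
enum ReasoningEffort {
    Low,
    Medium,
    High,
}

/// Maps a rough task category to the effort level suggested above.
/// The category names here are invented for this example.
fn suggested_effort(task: &str) -> ReasoningEffort {
    match task {
        "casual-chat" | "simple-question" => ReasoningEffort::Low,
        "technical-question" | "code-explanation" => ReasoningEffort::Medium,
        "research" | "critical-analysis" => ReasoningEffort::High,
        _ => ReasoningEffort::Low, // default to the cheapest, fastest level
    }
}

fn main() {
    assert_eq!(suggested_effort("casual-chat"), ReasoningEffort::Low);
    assert_eq!(suggested_effort("research"), ReasoningEffort::High);
    println!("mapping ok");
}
```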

Setting Reasoning Effort

use t3router::t3::config::{Config, ReasoningEffort};

let mut config = Config::new();
// reasoning_effort is ReasoningEffort::Low by default
config.reasoning_effort = ReasoningEffort::Medium;
Enabling Web Search

When include_search is enabled, the model can search the web for current information:
use t3router::t3::config::Config;

let mut config = Config::new();
config.include_search = true;

let response = client
    .send(
        "gemini-2.5-flash",
        Some(Message::new(
            Type::User,
            "What is the current price of Bitcoin?".to_string(),
        )),
        Some(config),
    )
    .await?;
Web search is useful for questions about current events, prices, weather, or any information that changes over time. However, it may increase response time.

Complete Configuration Examples

For complex research questions:
use t3router::t3::config::{Config, ReasoningEffort};

let mut config = Config::new();
config.reasoning_effort = ReasoningEffort::High;
config.include_search = true;

let response = client
    .send(
        "claude-4-sonnet",
        Some(Message::new(
            Type::User,
            "Analyze the latest developments in quantum computing and their practical applications".to_string(),
        )),
        Some(config),
    )
    .await?;
For quick, simple responses:
use t3router::t3::config::Config;

let config = Config::new(); // Default: low effort, no search

let response = client
    .send(
        "gemini-2.5-flash-lite",
        Some(Message::new(
            Type::User,
            "What is 2 + 2?".to_string(),
        )),
        Some(config),
    )
    .await?;
Balanced approach for technical questions:
use t3router::t3::config::{Config, ReasoningEffort};

let mut config = Config::new();
config.reasoning_effort = ReasoningEffort::Medium;
config.include_search = true;

let response = client
    .send(
        "gpt-4o",
        Some(Message::new(
            Type::User,
            "Explain the latest features in Rust 1.75".to_string(),
        )),
        Some(config),
    )
    .await?;

How Configuration Works

When you send a request, the configuration is serialized into the API payload (client.rs:397-400):
"modelParams": {
    "reasoningEffort": resolved_config.reasoning_effort.as_str(),
    "includeSearch": resolved_config.include_search
}
The as_str() method (config.rs:17-23) converts the enum to the API format:
pub fn as_str(&self) -> &'static str {
    match self {
        ReasoningEffort::Low => "low",
        ReasoningEffort::Medium => "medium",
        ReasoningEffort::High => "high",
    }
}
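Put together, the payload fragment can be reproduced outside the crate. The sketch below re-implements as_str() on a local stand-in enum and formats the modelParams object by hand; the real client serializes it in client.rs, and its exact output may differ in whitespace or field order.

```rust
// Local stand-in for t3router::t3::config::ReasoningEffort.
#[allow(dead_code)]
enum ReasoningEffort {
    Low,
    Medium,
    High,
}

impl ReasoningEffort {
    // Mirrors the as_str() method shown above.
    fn as_str(&self) -> &'static str {
        match self {
            ReasoningEffort::Low => "low",
            ReasoningEffort::Medium => "medium",
            ReasoningEffort::High => "high",
        }
    }
}

// Hand-rolled JSON for illustration; a real client would use a serializer.
fn model_params(effort: &ReasoningEffort, include_search: bool) -> String {
    format!(
        r#"{{"reasoningEffort":"{}","includeSearch":{}}}"#,
        effort.as_str(),
        include_search
    )
}

fn main() {
    let json = model_params(&ReasoningEffort::High, true);
    assert_eq!(json, r#"{"reasoningEffort":"high","includeSearch":true}"#);
    println!("{}", json);
}
```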

Reusing Configurations

You can clone and reuse configurations:
use t3router::t3::config::{Config, ReasoningEffort};

// Create a base configuration
let mut base_config = Config::new();
base_config.reasoning_effort = ReasoningEffort::Medium;

// Clone for different requests
let search_config = Config {
    include_search: true,
    ..base_config.clone()
};

let no_search_config = Config {
    include_search: false,
    ..base_config.clone()
};

// Use them independently
let response1 = client.send("gpt-4o", Some(msg1), Some(search_config)).await?;
let response2 = client.send("gpt-4o", Some(msg2), Some(no_search_config)).await?;

Configuration Presets

You can create helper functions for common configurations:
use t3router::t3::config::{Config, ReasoningEffort};

pub fn fast_config() -> Config {
    Config::new() // Low effort, no search
}

pub fn research_config() -> Config {
    let mut config = Config::new();
    config.reasoning_effort = ReasoningEffort::High;
    config.include_search = true;
    config
}

pub fn balanced_config() -> Config {
    let mut config = Config::new();
    config.reasoning_effort = ReasoningEffort::Medium;
    config
}

// Usage
let response = client
    .send("claude-4-sonnet", Some(message), Some(research_config()))
    .await?;

Optional Configuration

The send() method accepts Option<Config>. If you pass None, it uses default values (client.rs:365):
// These are equivalent:
let response1 = client.send("gpt-4o", Some(message), None).await?;
let response2 = client.send("gpt-4o", Some(message), Some(Config::new())).await?;

Configuration with Image Generation

Configuration also works with image generation:
use std::path::Path;
use t3router::t3::config::{Config, ReasoningEffort};

let mut config = Config::new();
config.reasoning_effort = ReasoningEffort::High;

let response = client
    .send_with_image_download(
        "gpt-image-1",
        Some(Message::new(
            Type::User,
            "Create a detailed illustration of a cyberpunk city".to_string(),
        )),
        Some(config),
        Some(Path::new("output/cyberpunk.png")),
    )
    .await?;
For image generation, higher reasoning effort may result in more accurate interpretation of complex prompts.

Performance Considerations

Response Time Impact

Reasoning Effort    Typical Response Time    Use Case
Low                 1-3 seconds              Simple questions
Medium              3-8 seconds              Technical discussions
High                8-20+ seconds            Complex analysis

Search Impact

Enabling include_search typically adds 2-5 seconds to response time as the model:
  1. Formulates search queries
  2. Retrieves web results
  3. Synthesizes information

Best Practices

  1. Start with defaults - Config::new() works well for most cases
  2. Use high effort sparingly - Reserve it for truly complex tasks
  3. Enable search when needed - Only for questions requiring current information
  4. Match effort to model - Reasoning models benefit more from high effort
  5. Create presets - Define reusable configurations for common patterns
Higher reasoning effort increases response time and may consume more API resources. Use it judiciously, especially in high-volume applications.

Debugging Configuration

Print configuration values to verify settings:
let config = Config {
    reasoning_effort: ReasoningEffort::High,
    include_search: true,
};

println!("Reasoning effort: {}", config.reasoning_effort.as_str());
println!("Include search: {}", config.include_search);

Next Steps

Basic Chat

Apply configuration to basic chat requests

Multi-turn Conversations

Use configuration across conversation turns
