
Overview

T3Router provides access to multiple AI models from different providers through the t3.chat platform. Each model has different capabilities, speeds, and use cases.

Model Discovery

Use the ModelsClient to discover available models:
use t3router::t3::models::ModelsClient;

let cookies = std::env::var("COOKIES")?;
let session_id = std::env::var("CONVEX_SESSION_ID")?;

let models_client = ModelsClient::new(cookies, session_id);
let models = models_client.get_model_statuses().await?;

for model in models {
    println!("{}: {}", model.name, model.description);
}
Method Signatures:
pub fn new(cookies: String, convex_session_id: String) -> Self
pub async fn get_model_statuses(&self) -> Result<Vec<ModelStatus>, Box<dyn std::error::Error>>

Model Types

The ModelsClient returns two types of model information:

ModelStatus

Basic model information:
pub struct ModelStatus {
    pub name: String,
    pub indicator: String,
    pub description: String,
}

ModelInfo

Detailed model information:
pub struct ModelInfo {
    pub id: String,
    pub name: String,
    pub provider: String,
    pub developer: String,
    pub short_description: String,
    pub full_description: String,
    pub requires_pro: bool,
    pub premium: bool,
}
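The requires_pro and premium flags make it easy to filter for models usable on a free plan. A minimal sketch, using a locally defined, trimmed-down mirror of ModelInfo so it runs standalone (the real struct lives in the t3router crate):

```rust
// Trimmed local mirror of ModelInfo (only the fields this sketch needs).
#[derive(Debug, Clone)]
struct ModelInfo {
    id: String,
    requires_pro: bool,
    premium: bool,
}

/// Keep only models that need neither a Pro plan nor premium credits.
fn free_models(models: &[ModelInfo]) -> Vec<&ModelInfo> {
    models
        .iter()
        .filter(|m| !m.requires_pro && !m.premium)
        .collect()
}

fn main() {
    let models = vec![
        ModelInfo { id: "gemini-2.5-flash-lite".into(), requires_pro: false, premium: false },
        ModelInfo { id: "claude-4-sonnet".into(), requires_pro: true, premium: true },
    ];
    for m in free_models(&models) {
        println!("{}", m.id);
    }
}
```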

Available Models

T3Router supports models from multiple providers:

Text Generation Models

Google Gemini

gemini-2.5-flash

Google’s state-of-the-art fast model. Best balance of speed and quality.

gemini-2.5-flash-lite

Google’s most cost-efficient model. Fastest response times.
let response = client.send(
    "gemini-2.5-flash-lite",
    Some(Message::new(Type::User, "Hello".to_string())),
    Some(config),
).await?;

Anthropic Claude

claude-3.7

Anthropic’s Claude 3.7 Sonnet. Excellent reasoning and coding.

claude-4-sonnet

Anthropic’s Claude 4 Sonnet. Latest generation model.
let response = client.send(
    "claude-4-sonnet",
    Some(Message::new(Type::User, "Explain quantum computing".to_string())),
    Some(config),
).await?;

OpenAI

gpt-o4-mini

OpenAI’s latest small reasoning model. Good for complex problem-solving.
let response = client.send(
    "gpt-o4-mini",
    Some(Message::new(Type::User, "Solve this logic puzzle...".to_string())),
    Some(config),
).await?;

DeepSeek

deepseek-r1-groq

DeepSeek R1 distilled on Llama. Optimized for reasoning tasks.
let response = client.send(
    "deepseek-r1-groq",
    Some(Message::new(Type::User, "Analyze this data...".to_string())),
    Some(config),
).await?;

Image Generation Models

gpt-image-1

OpenAI’s DALL-E based image generation.

gemini-imagen-4

Google’s Imagen 4 model for high-quality image generation.
use std::path::Path;

let save_path = Path::new("output/robot.png");
let response = client.send_with_image_download(
    "gpt-image-1",
    Some(Message::new(Type::User, "Create a happy robot".to_string())),
    Some(config),
    Some(save_path),
).await?;

Model Selection Guide

Use Case Based Selection

Use gemini-2.5-flash-lite for fastest responses with good quality.
client.send("gemini-2.5-flash-lite", message, config).await?
Use claude-4-sonnet or gpt-o4-mini for deeper reasoning tasks.
client.send("claude-4-sonnet", message, config).await?
Use claude-3.7 or claude-4-sonnet for excellent coding capabilities.
client.send("claude-3.7", message, config).await?
Use gpt-image-1 or gemini-imagen-4 depending on style preferences.
client.send_with_image_download("gemini-imagen-4", message, config, save_path).await?
Use gemini-2.5-flash-lite for the most efficient token usage.
client.send("gemini-2.5-flash-lite", message, config).await?

Dynamic Model Discovery

The ModelsClient dynamically fetches model information from t3.chat:
let models_client = ModelsClient::new(cookies, session_id);

// Attempts dynamic discovery from t3.chat chunks
let models = models_client.get_model_statuses().await?;

for model in &models {
    println!("Name: {}", model.name);
    println!("Status: {}", model.indicator);
    println!("Description: {}", model.description);
    println!();
}

Fallback Models

If dynamic discovery fails, the client returns a curated list of known models:
  • gemini-2.5-flash
  • gemini-2.5-flash-lite
  • claude-3.7
  • claude-4-sonnet
  • gpt-o4-mini
  • deepseek-r1-groq

Model Naming Conventions

Model names in T3Router follow these patterns:
  • Provider prefix: gemini-, claude-, gpt-, deepseek-
  • Version/generation: 2.5, 3.7, 4, r1
  • Variant: -flash, -lite, -sonnet, -mini
  • Special suffixes: -groq (for Groq-hosted models), -image-1 (for image models)
Examples:
  • gemini-2.5-flash-lite → Google Gemini 2.5, Flash variant, Lite version
  • claude-4-sonnet → Anthropic Claude 4, Sonnet variant
  • deepseek-r1-groq → DeepSeek R1, hosted on Groq
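Because the conventions are regular, the provider can be read off a model ID's first dash-separated segment. A small sketch (the prefix-to-provider mapping below is just the prefixes listed above, not an official API):

```rust
/// Guess the provider from a model ID's prefix, per the naming
/// conventions above. Returns None for unrecognized prefixes.
fn provider_of(model_id: &str) -> Option<&'static str> {
    let prefix = model_id.split('-').next()?;
    match prefix {
        "gemini" => Some("Google"),
        "claude" => Some("Anthropic"),
        "gpt" => Some("OpenAI"),
        "deepseek" => Some("DeepSeek"),
        _ => None,
    }
}

fn main() {
    for id in ["gemini-2.5-flash-lite", "claude-4-sonnet", "deepseek-r1-groq"] {
        println!("{} -> {:?}", id, provider_of(id));
    }
}
```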

Switching Models in Conversations

You can use different models within the same conversation:
use t3router::t3::message::ContentType;

client.new_conversation();

// Use Claude for text explanation
let response1 = client.send(
    "claude-4-sonnet",
    Some(Message::new(Type::User, "Explain impressionist art".to_string())),
    Some(config.clone()),
).await?;

// Use Imagen for image generation in same thread
let response2 = client.send(
    "gemini-imagen-4",
    Some(Message::new(Type::User, "Create an impressionist landscape".to_string())),
    Some(config.clone()),
).await?;

if matches!(response2.content_type, ContentType::Image) {
    println!("Generated image based on explanation!");
}

Model Configuration

Some models support additional configuration through the Config struct:
use t3router::t3::config::{Config, ReasoningEffort};

let mut config = Config::new();
config.reasoning_effort = ReasoningEffort::High;
config.include_search = true;

let response = client.send(
    "claude-4-sonnet",
    Some(message),
    Some(config),
).await?;
See the Configuration guide for more details.

Best Practices

Different models excel at different tasks. Benchmark a few models with your specific use case to find the best fit.
Start with gemini-2.5-flash-lite during development for faster iteration, then switch to more capable models for production.
Use text models for text, image models for images. Don’t attempt image generation with text-only models.
If a specific model fails or is unavailable, have a fallback model ready in your error handling logic.
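One way to follow the dev/production advice is to resolve the model ID from an environment variable with a fast default. A sketch, assuming a hypothetical T3_MODEL variable (not part of t3router itself):

```rust
use std::env;

/// Resolve a model ID from an optional override, falling back to the
/// fast, cheap model for local development. Passing the env lookup in
/// as a parameter keeps the function easy to test.
fn select_model(override_id: Option<String>) -> String {
    override_id.unwrap_or_else(|| "gemini-2.5-flash-lite".to_string())
}

fn main() {
    // T3_MODEL is a hypothetical variable name chosen for this sketch.
    let chosen = select_model(env::var("T3_MODEL").ok());
    println!("Using model: {}", chosen);
}
```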

Error Handling

let model = "claude-4-sonnet";

match client.send(model, message.clone(), config.clone()).await {
    Ok(response) => {
        println!("Success with {}: {}", model, response.content);
    },
    Err(e) => {
        eprintln!("Error with {}: {}", model, e);
        // Retry with a fallback model; the clones above leave
        // `message` and `config` available for this second call.
        let fallback = "gemini-2.5-flash-lite";
        let response = client.send(fallback, message, config).await?;
        println!("Fallback to {}: {}", fallback, response.content);
    }
}
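The single fallback above generalizes to an ordered preference list: try the most capable model first and walk down until one is available. A hedged sketch using plain string sets (in practice the availability set might be built from get_model_statuses()):

```rust
use std::collections::HashSet;

/// Return the first model from `preferred` that appears in `available`.
/// The order of `preferred` encodes capability preference.
fn first_available<'a>(preferred: &[&'a str], available: &HashSet<&str>) -> Option<&'a str> {
    preferred.iter().copied().find(|m| available.contains(m))
}

fn main() {
    let available: HashSet<&str> =
        ["gemini-2.5-flash-lite", "deepseek-r1-groq"].into_iter().collect();
    let preferred = ["claude-4-sonnet", "gpt-o4-mini", "gemini-2.5-flash-lite"];
    println!("{:?}", first_available(&preferred, &available));
}
```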
