T3Router provides access to 50+ AI models through t3.chat. All models require a paid t3.chat subscription.

Language Models

Claude Models (Anthropic)

Anthropic’s Claude models excel at reasoning, analysis, and natural conversation.
  • claude-3.5 (string): Claude 3.5 Sonnet - Balanced performance and speed
  • claude-3.7 (string): Claude 3.7 Sonnet - Enhanced reasoning capabilities
  • claude-4-opus (string): Claude 4 Opus - Most capable Claude model
  • claude-4-sonnet (string): Claude 4 Sonnet - Latest generation with improved performance

GPT Models (OpenAI)

OpenAI’s GPT models offer versatile capabilities for various tasks.
  • gpt-4o (string): GPT-4o (omni) - OpenAI’s flagship model
  • gpt-4o-mini (string): GPT-4o mini - Faster, cost-effective version
  • gpt-o3-mini (string): o3-mini - Small reasoning model
  • gpt-o4-mini (string): o4-mini - Latest small reasoning model
  • o3-full (string): o3 - Complete reasoning model
  • o3-pro (string): o3-pro - Professional-grade reasoning model

Gemini Models (Google)

Google’s Gemini models provide fast and efficient AI capabilities.
  • gemini-2.0-flash (string): Gemini 2.0 Flash - Fast response times
  • gemini-2.5-pro (string): Gemini 2.5 Pro - Advanced capabilities
  • gemini-2.5-flash (string): Gemini 2.5 Flash - Google’s state-of-the-art fast model
  • gemini-2.5-flash-lite (string): Gemini 2.5 Flash Lite - Most cost-efficient model

DeepSeek Models

DeepSeek’s models offer specialized reasoning capabilities.
  • deepseek-v3 (string): DeepSeek V3 - Advanced general-purpose model
  • deepseek-r1 (string): DeepSeek R1 - Reasoning-focused model
  • deepseek-r1-groq (string): DeepSeek R1 Groq - DeepSeek R1 distilled into Llama, served on Groq for speed

Open Models

Open-source and alternative models for various use cases.
  • llama-3.3-70b (string): Meta’s Llama 3.3 with 70B parameters
  • qwen3-32b (string): Alibaba’s Qwen3 with 32B parameters
  • grok-v3 (string): xAI’s Grok V3 model
  • grok-v4 (string): xAI’s Grok V4 model - Latest generation

Image Generation Models

Image Models (OpenAI)

  • gpt-image-1 (string): OpenAI’s image generation model (successor to DALL-E)

Usage in API Calls

Basic Chat Request

Use the model ID as the first parameter in client.send():
use t3router::t3::{client::Client, message::{Message, Type}, config::Config};

let mut client = Client::new(cookies, session_id);
client.init().await?;

let config = Config::new();
let response = client.send(
    "claude-3.7",  // Model ID
    Some(Message::new(Type::User, "Explain quantum computing".to_string())),
    Some(config)
).await?;

println!("{}", response.content);

Image Generation Request

Use the image model ID with send_with_image_download():
use std::path::Path;

let save_path = Path::new("output/quantum.png");
let response = client.send_with_image_download(
    "gpt-image-1",  // Image model ID
    Some(Message::new(Type::User, "Quantum computer visualization".to_string())),
    Some(config),
    Some(save_path)
).await?;
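If the save path points into a directory that may not exist yet, create it before the download; a minimal sketch using only the standard library (the helper name is ours, not part of T3Router):

```rust
use std::fs;
use std::path::Path;

/// Create the parent directory of `path` if it does not exist yet,
/// so a subsequent image download has somewhere to write.
/// (Illustrative helper; not part of T3Router.)
fn ensure_parent_dir(path: &Path) -> std::io::Result<()> {
    if let Some(parent) = path.parent() {
        fs::create_dir_all(parent)?; // no-op if the directory already exists
    }
    Ok(())
}
```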

Switching Between Models

Each request can use a different model:
// Use Claude for reasoning
let analysis = client.send(
    "claude-4-opus",
    Some(Message::new(Type::User, "Analyze this code".to_string())),
    Some(config.clone())
).await?;

// Use GPT for creative writing
let story = client.send(
    "gpt-4o",
    Some(Message::new(Type::User, "Write a short story".to_string())),
    Some(config.clone())
).await?;

// Use Gemini for fast responses
let quick = client.send(
    "gemini-2.5-flash",
    Some(Message::new(Type::User, "Quick fact check".to_string())),
    Some(config)
).await?;

Model Discovery

Instead of hardcoding model IDs, you can dynamically discover available models:
use t3router::t3::models::ModelsClient;

let models_client = ModelsClient::new(cookies.clone(), session_id.clone());
let models = models_client.get_model_statuses().await?;

println!("Available models:");
for model in models {
    println!("  {} - {}", model.name, model.description);
}
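A common follow-up is narrowing the discovered list to one model family; a small sketch over plain ID strings, kept independent of the ModelsClient response types:

```rust
/// Pick the model IDs belonging to one family (e.g. all Claude variants)
/// out of a discovered list, matched by ID prefix.
fn models_with_prefix<'a>(ids: &[&'a str], prefix: &str) -> Vec<&'a str> {
    ids.iter()
        .copied()
        .filter(|id| id.starts_with(prefix))
        .collect()
}
```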

Model Selection Tips

  • Complex analysis, mathematical reasoning, and logical tasks: claude-4-opus, claude-4-sonnet, or o3-pro
  • Quick responses and cost-efficiency: gemini-2.5-flash-lite, gemini-2.5-flash, or gpt-4o-mini
  • Code generation, debugging, and technical explanations: claude-3.7, gpt-4o, or deepseek-v3
  • Storytelling, creative content, and natural dialogue: gpt-4o, claude-4-sonnet, or claude-3.7
  • Open-source or alternative models: llama-3.3-70b, qwen3-32b, or grok-v4
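The tips above can be encoded as a small helper that picks a default model per task category. This is an illustrative sketch, not part of T3Router; the choice of a single "default" per category is ours, and any of the listed alternatives would serve equally well:

```rust
/// Task categories mirroring the selection tips above.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Task {
    Reasoning,
    QuickResponse,
    Coding,
    Creative,
}

/// Map a task category to one recommended model ID.
fn pick_model(task: Task) -> &'static str {
    match task {
        Task::Reasoning => "claude-4-opus",
        Task::QuickResponse => "gemini-2.5-flash-lite",
        Task::Coding => "claude-3.7",
        Task::Creative => "gpt-4o",
    }
}
```

The returned ID can then be passed straight to client.send() as in the examples above.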

Notes

  • All models require a paid t3.chat subscription
  • Model availability may change as t3.chat adds or removes models
  • Use ModelsClient::get_model_statuses() to check current availability
  • Some models may have rate limits or usage restrictions
  • Image generation is only available with gpt-image-1
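For the rate limits mentioned above, a generic retry-with-backoff wrapper is one way to cope. This is a standalone sketch (not part of T3Router) that works with any fallible operation:

```rust
use std::thread::sleep;
use std::time::Duration;

/// Retry a fallible operation with exponential backoff:
/// up to `max_attempts` tries, doubling the delay after each failure.
fn retry_with_backoff<T, E>(
    max_attempts: u32,
    base_delay: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut delay = base_delay;
    let mut last_err = None;
    for attempt in 1..=max_attempts {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) => {
                last_err = Some(e);
                if attempt < max_attempts {
                    sleep(delay); // wait before the next attempt
                    delay *= 2;   // double the delay each time
                }
            }
        }
    }
    Err(last_err.expect("max_attempts must be at least 1"))
}
```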
