
Overview

T3Router provides a ModelsClient that dynamically discovers all available models from t3.chat. This ensures you always have access to the latest models without hardcoding model lists.

Why Model Discovery?

t3.chat frequently adds new models. Instead of maintaining a static list, the ModelsClient parses t3.chat’s JavaScript bundles to extract model information in real time.

Quick Start

1. Import the ModelsClient

use t3router::t3::models::ModelsClient;
use dotenv::dotenv;

2. Create a ModelsClient instance

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenv().ok();

    let cookies = std::env::var("COOKIES").expect("COOKIES not set");
    let convex_session_id = std::env::var("CONVEX_SESSION_ID")
        .expect("CONVEX_SESSION_ID not set");

    let models_client = ModelsClient::new(cookies, convex_session_id);
    
    Ok(())
}

3. Fetch available models

let models = models_client.get_model_statuses().await?;

println!("Found {} models:", models.len());
for model in &models {
    println!("  {} - {}", model.name, model.description);
}

Complete Example

use dotenv::dotenv;
use t3router::t3::models::ModelsClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenv().ok();

    let cookies = std::env::var("COOKIES").expect("COOKIES not set");
    let convex_session_id = std::env::var("CONVEX_SESSION_ID")
        .expect("CONVEX_SESSION_ID not set");

    let models_client = ModelsClient::new(cookies, convex_session_id);
    
    println!("=== Discovering Available Models ===");
    
    let models = models_client.get_model_statuses().await?;

    println!("\nFound {} models:\n", models.len());
    
    for (i, model) in models.iter().enumerate() {
        println!("{}. {}", i + 1, model.name);
        println!("   Status: {}", model.indicator);
        println!("   Description: {}", model.description);
        println!();
    }

    Ok(())
}

Expected Output

=== Discovering Available Models ===

Found 47 models:

1. gemini-2.5-flash
   Status: operational
   Description: Google's state of the art fast model

2. claude-4-sonnet
   Status: operational
   Description: Anthropic's Claude 4 Sonnet

3. gpt-4o
   Status: operational
   Description: OpenAI's flagship model

4. deepseek-r1
   Status: operational
   Description: DeepSeek's reasoning model

...

Model Data Structures

ModelStatus

The get_model_statuses() method returns ModelStatus structs (models.rs:3-8):
pub struct ModelStatus {
    pub name: String,
    pub indicator: String,
    pub description: String,
}
Fields:
  • name - The model ID to use with client.send() (e.g., "claude-4-sonnet")
  • indicator - Status indicator (typically "operational")
  • description - Short description of the model’s purpose

ModelInfo

The internal ModelInfo struct (models.rs:11-20) contains more detailed information:
pub struct ModelInfo {
    pub id: String,
    pub name: String,
    pub provider: String,
    pub developer: String,
    pub short_description: String,
    pub full_description: String,
    pub requires_pro: bool,
    pub premium: bool,
}
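ModelInfo is internal to the crate, but its `requires_pro` and `premium` flags suggest an obvious filter. The sketch below re-declares the struct locally so it runs standalone (an illustration only; the real type lives in `t3router::t3::models`):

```rust
// Local re-declaration of ModelInfo for a standalone sketch; the real
// struct lives in t3router::t3::models.
#[derive(Debug, Clone)]
pub struct ModelInfo {
    pub id: String,
    pub name: String,
    pub provider: String,
    pub developer: String,
    pub short_description: String,
    pub full_description: String,
    pub requires_pro: bool,
    pub premium: bool,
}

/// Keep only models usable without a Pro subscription.
fn free_tier(models: &[ModelInfo]) -> Vec<&ModelInfo> {
    models
        .iter()
        .filter(|m| !m.requires_pro && !m.premium)
        .collect()
}

fn main() {
    let models = vec![
        ModelInfo {
            id: "gemini-2.5-flash".into(),
            name: "Gemini 2.5 Flash".into(),
            provider: "google".into(),
            developer: "Google".into(),
            short_description: "Fast model".into(),
            full_description: String::new(),
            requires_pro: false,
            premium: false,
        },
        ModelInfo {
            id: "claude-4-sonnet".into(),
            name: "Claude 4 Sonnet".into(),
            provider: "anthropic".into(),
            developer: "Anthropic".into(),
            short_description: "Reasoning model".into(),
            full_description: String::new(),
            requires_pro: true,
            premium: false,
        },
    ];
    for m in free_tier(&models) {
        println!("{} ({})", m.id, m.developer);
    }
}
```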

How Discovery Works

The ModelsClient uses a multi-stage approach (models.rs:175-193):
1. Try known chunks first

The client first tries known JavaScript chunk URLs that typically contain model definitions:
let known_chunks = vec!["https://t3.chat/_next/static/chunks/3af0bf4d01fe7216.js"];

2. Parse the homepage if needed

If known chunks don’t work, it fetches the t3.chat homepage and extracts all JavaScript chunk URLs:
let chunk_urls = self.get_chunk_urls_from_homepage().await?;

3. Parse each chunk for model data

It downloads each chunk and uses regex to extract model definitions:
for chunk_url in chunk_urls {
    if let Ok(models) = self.parse_models_from_chunk(&chunk_url).await {
        if models.len() > 10 {
            return Ok(models);
        }
    }
}

4. Fallback to hardcoded list

If all else fails, it returns a fallback list of known models (models.rs:199-233):
fn get_fallback_models(&self) -> Result<Vec<ModelStatus>, Box<dyn std::error::Error>> {
    let model_statuses = vec![
        ModelStatus {
            name: "gemini-2.5-flash".to_string(),
            indicator: "operational".to_string(),
            description: "Google's state of the art fast model".to_string(),
        },
        // ... more fallback models
    ];
    Ok(model_statuses)
}
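The four stages reduce to "take the first source that yields a plausibly complete list, else fall back." A dependency-free sketch of that control flow, using plain `Vec<String>` as a hypothetical stand-in for the parsed model lists and reusing the `> 10` sanity threshold shown above:

```rust
/// Return the first candidate list with more than 10 entries (the same
/// sanity threshold the real parser uses), otherwise the fallback.
fn first_plausible(candidates: Vec<Vec<String>>, fallback: Vec<String>) -> Vec<String> {
    for models in candidates {
        if models.len() > 10 {
            return models;
        }
    }
    fallback
}

fn main() {
    // An empty chunk result, then a chunk that parsed successfully.
    let empty_chunk: Vec<String> = vec![];
    let good_chunk: Vec<String> = (0..12).map(|i| format!("model-{i}")).collect();

    let picked = first_plausible(
        vec![empty_chunk, good_chunk],
        vec!["gemini-2.5-flash".to_string()],
    );
    println!("picked {} models", picked.len());
}
```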

Filtering Models

You can filter models by category:
let models = models_client.get_model_statuses().await?;

// Language models only (exclude image models)
let language_models: Vec<_> = models
    .iter()
    .filter(|m| !m.name.contains("image") && !m.name.contains("imagen"))
    .collect();

println!("Language models: {}", language_models.len());

// Image generation models
let image_models: Vec<_> = models
    .iter()
    .filter(|m| m.name.contains("image") || m.name.contains("imagen"))
    .collect();

println!("Image models: {}", image_models.len());

Searching for Specific Models

let models = models_client.get_model_statuses().await?;

// Find all Claude models
let claude_models: Vec<_> = models
    .iter()
    .filter(|m| m.name.contains("claude"))
    .collect();

println!("Claude models:");
for model in claude_models {
    println!("  {} - {}", model.name, model.description);
}

// Find a specific model
if let Some(gpt4) = models.iter().find(|m| m.name == "gpt-4o") {
    println!("Found GPT-4o: {}", gpt4.description);
}

Building a Model Selector

You can use model discovery to build interactive model selectors:
use std::io::{self, Write};

let models = models_client.get_model_statuses().await?;

println!("Select a model:");
for (i, model) in models.iter().enumerate() {
    println!("{}. {} - {}", i + 1, model.name, model.description);
}

print!("\nEnter number: ");
io::stdout().flush()?;

let mut input = String::new();
io::stdin().read_line(&mut input)?;

let choice: usize = input.trim().parse()?;
if choice > 0 && choice <= models.len() {
    let selected_model = &models[choice - 1];
    println!("\nYou selected: {}", selected_model.name);

    // Use it with the T3Router client (`client` and `config` come from
    // your earlier setup)
    let response = client
        .send(
            &selected_model.name,
            Some(Message::new(Type::User, "Hello!".to_string())),
            Some(config),
        )
        .await?;
} else {
    eprintln!("Invalid selection");
}

Caching Model Lists

To avoid fetching models repeatedly, you can cache the results:
use std::sync::Arc;
use tokio::sync::RwLock;

struct CachedModelsClient {
    client: ModelsClient,
    cache: Arc<RwLock<Option<Vec<ModelStatus>>>>,
}

impl CachedModelsClient {
    fn new(cookies: String, session_id: String) -> Self {
        Self {
            client: ModelsClient::new(cookies, session_id),
            cache: Arc::new(RwLock::new(None)),
        }
    }
    
    async fn get_models(&self) -> Result<Vec<ModelStatus>, Box<dyn std::error::Error>> {
        // Check cache first
        let cache_read = self.cache.read().await;
        if let Some(cached) = cache_read.as_ref() {
            return Ok(cached.clone());
        }
        drop(cache_read);
        
        // Fetch and cache
        let models = self.client.get_model_statuses().await?;
        let mut cache_write = self.cache.write().await;
        *cache_write = Some(models.clone());
        
        Ok(models)
    }
}
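The cache above never expires. To refresh periodically, pair each cached value with a timestamp and re-fetch once a TTL has elapsed. A synchronous sketch for clarity (the real client is async, so you would wrap `get_model_statuses()` the same way behind a `tokio::sync::RwLock`):

```rust
use std::time::{Duration, Instant};

/// A minimal TTL cache around any fetch function.
struct TtlCache<T> {
    ttl: Duration,
    entry: Option<(Instant, T)>,
}

impl<T: Clone> TtlCache<T> {
    fn new(ttl: Duration) -> Self {
        Self { ttl, entry: None }
    }

    /// Return the cached value, re-fetching when it is older than `ttl`.
    fn get(&mut self, fetch: impl FnOnce() -> T) -> T {
        match &self.entry {
            Some((at, v)) if at.elapsed() < self.ttl => v.clone(),
            _ => {
                let v = fetch();
                self.entry = Some((Instant::now(), v.clone()));
                v
            }
        }
    }
}

fn main() {
    // Refresh the model list at most once per day.
    let mut cache = TtlCache::new(Duration::from_secs(24 * 60 * 60));
    let models = cache.get(|| vec!["gemini-2.5-flash".to_string()]);
    println!("{} models (fetched)", models.len());
    let models = cache.get(|| unreachable!("served from cache"));
    println!("{} models (cached)", models.len());
}
```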

Parsing Implementation Details

The chunk parser (models.rs:84-148) uses regex to extract model information:
  1. Find model ID list: Searches for arrays of model IDs
    let model_list_regex = Regex::new(r#"let\s+\w+\s*=\s*\[((?:"[^"]+",?\s*)+)\]"#)?;
    
  2. Extract model details: For each ID, finds corresponding metadata
    let pattern = format!(
        r#"(?s)"{}":\s*\{{.*?id:\s*"([^"]+)"(?s).*?name:\s*"([^"]+)"..."#,
        regex::escape(model_id)
    );
    
  3. Parse fields: Extracts name, provider, developer, and descriptions
If t3.chat’s JavaScript structure changes and parsing fails, the client gracefully falls back to a known-good model list.
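The regexes above target minified JS such as `let e=["gpt-4o","claude-4-sonnet"]`. A dependency-free sketch of the first step (ID-list extraction) using plain string slicing instead of the `regex` crate; the sample input is synthetic:

```rust
/// Extract quoted model IDs from the first `=[ ... ]` array in a JS
/// chunk. A simplified stand-in for the regex-based parser.
fn extract_model_ids(chunk: &str) -> Vec<String> {
    let Some(start) = chunk.find("=[") else { return vec![] };
    let Some(end) = chunk[start..].find(']') else { return vec![] };
    chunk[start + 2..start + end]
        .split(',')
        .filter_map(|s| {
            let s = s.trim().trim_matches('"');
            (!s.is_empty()).then(|| s.to_string())
        })
        .collect()
}

fn main() {
    let js = r#"let e=["gpt-4o","claude-4-sonnet","deepseek-r1"];"#;
    println!("{:?}", extract_model_ids(js));
}
```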

Error Handling

match models_client.get_model_statuses().await {
    Ok(models) => {
        println!("Successfully fetched {} models", models.len());
        // Use models
    }
    Err(e) => {
        eprintln!("Failed to fetch models: {}", e);
        // get_model_statuses() already falls back to a hardcoded list
        // internally, so an Err here means even that path failed.
    }
}

Best Practices

  1. Fetch once at startup - Model lists don’t change frequently
  2. Cache the results - Avoid repeated network requests
  3. Handle fallback gracefully - The fallback list is always available
  4. Filter for your use case - Not all models may be suitable for all tasks
  5. Update periodically - Fetch fresh model data daily or weekly
The model discovery process takes a few seconds. If you’re building a CLI tool, consider fetching models in the background while showing a loading indicator.
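The background-fetch advice can be sketched with `std::thread`; a real implementation would use `tokio::spawn` alongside the async client, and the slow fetch here is simulated:

```rust
use std::io::{self, Write};
use std::thread;
use std::time::Duration;

/// Simulated slow model fetch; the real call would be
/// models_client.get_model_statuses().await.
fn fetch_models() -> Vec<String> {
    thread::sleep(Duration::from_millis(200));
    vec!["gemini-2.5-flash".to_string(), "gpt-4o".to_string()]
}

fn main() {
    // Fetch in the background while the main thread shows a spinner.
    let handle = thread::spawn(fetch_models);
    while !handle.is_finished() {
        print!(".");
        io::stdout().flush().unwrap();
        thread::sleep(Duration::from_millis(50));
    }
    let models = handle.join().expect("fetch thread panicked");
    println!("\nFound {} models", models.len());
}
```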

Next Steps

Basic Chat

Use discovered models to send messages

Configuration

Configure model parameters and settings
