T3Router provides a ModelsClient that dynamically discovers all available models from t3.chat. This ensures you always have access to the latest models without hardcoding model lists.
t3.chat frequently adds new models. Instead of maintaining a static list, the ModelsClient parses t3.chat's JavaScript bundles to extract model information in real time.
```rust
use t3router::t3::models::ModelsClient;
use dotenv::dotenv;
```
2. Create a ModelsClient instance
```rust
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenv().ok();
    let cookies = std::env::var("COOKIES").expect("COOKIES not set");
    let convex_session_id = std::env::var("CONVEX_SESSION_ID")
        .expect("CONVEX_SESSION_ID not set");
    let models_client = ModelsClient::new(cookies, convex_session_id);
    Ok(())
}
```
3. Fetch available models
```rust
let models = models_client.get_model_statuses().await?;
println!("Found {} models:", models.len());
for model in &models {
    println!("  {} - {}", model.name, model.description);
}
```
The ModelsClient uses a multi-stage approach (models.rs:175-193):
1. Try known chunks first
The client first tries known JavaScript chunk URLs that typically contain model definitions:
```rust
let known_chunks = vec!["https://t3.chat/_next/static/chunks/3af0bf4d01fe7216.js"];
```
2. Parse the homepage if needed
If known chunks don’t work, it fetches the t3.chat homepage and extracts all JavaScript chunk URLs:
```rust
let chunk_urls = self.get_chunk_urls_from_homepage().await?;
```
3. Parse each chunk for model data
It downloads each chunk and uses regex to extract model definitions:
```rust
for chunk_url in chunk_urls {
    if let Ok(models) = self.parse_models_from_chunk(&chunk_url).await {
        // Sanity check: only accept a chunk that yields a plausible number of models.
        if models.len() > 10 {
            return Ok(models);
        }
    }
}
```
4. Fallback to hardcoded list
If all else fails, it returns a fallback list of known models (models.rs:199-233):
```rust
fn get_fallback_models(&self) -> Result<Vec<ModelStatus>, Box<dyn std::error::Error>> {
    let model_statuses = vec![
        ModelStatus {
            name: "gemini-2.5-flash".to_string(),
            indicator: "operational".to_string(),
            description: "Google's state of the art fast model".to_string(),
        },
        // ... more fallback models
    ];
    Ok(model_statuses)
}
```
- **Fetch once at startup** - model lists don't change frequently
- **Cache the results** - avoid repeated network requests
- **Handle fallback gracefully** - the fallback list is always available
- **Filter for your use case** - not all models may be suitable for all tasks
- **Update periodically** - fetch fresh model data daily or weekly
The model discovery process takes a few seconds. If you’re building a CLI tool, consider fetching models in the background while showing a loading indicator.