Overview
T3Router provides access to multiple AI models from different providers through the t3.chat platform. Each model has different capabilities, speeds, and use cases.
Model Discovery
Use the ModelsClient to discover available models:
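The exact method names on the ModelsClient are not shown in this excerpt, so the following is a minimal sketch assuming a ListModels-style method; stubModelsClient stands in for the real client purely for illustration.

```go
package main

import "fmt"

// Hypothetical shape of the ModelsClient API -- the real method names
// and signatures may differ; consult the package documentation.
type ModelsClient interface {
	ListModels() ([]string, error)
}

// stubModelsClient stands in for a real client in this sketch.
type stubModelsClient struct{}

func (stubModelsClient) ListModels() ([]string, error) {
	return []string{"gemini-2.5-flash", "claude-4-sonnet"}, nil
}

func main() {
	var client ModelsClient = stubModelsClient{}
	models, err := client.ListModels()
	if err != nil {
		panic(err)
	}
	for _, m := range models {
		fmt.Println(m)
	}
}
```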
Model Types
The ModelsClient returns two types of model information:
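As a rough illustration of the two shapes, the structs below use assumed field names; the package's actual definitions may differ, so treat this as a sketch rather than the real API.

```go
package main

import "fmt"

// Illustrative field layouts only -- the real ModelStatus and ModelInfo
// definitions may differ; check the package's godoc.
type ModelStatus struct {
	Name      string // model identifier, e.g. "gemini-2.5-flash"
	Available bool   // whether the model currently accepts requests
}

type ModelInfo struct {
	Name         string
	Provider     string   // e.g. "google", "anthropic", "openai"
	Description  string
	Capabilities []string // e.g. "text", "image"
}

func main() {
	s := ModelStatus{Name: "claude-4-sonnet", Available: true}
	fmt.Printf("%s available=%v\n", s.Name, s.Available)
}
```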
ModelStatus
Basic model information.
ModelInfo
Detailed model information.
Available Models
T3Router supports models from multiple providers.
Text Generation Models
Google Gemini
gemini-2.5-flash
Google’s state-of-the-art fast model. Best balance of speed and quality.
gemini-2.5-flash-lite
Google’s most cost-efficient model. Fastest response times.
Anthropic Claude
claude-3.7
Anthropic’s Claude 3.7 Sonnet. Excellent reasoning and coding.
claude-4-sonnet
Anthropic’s Claude 4 Sonnet. Latest generation model.
OpenAI
gpt-o4-mini
OpenAI’s latest small reasoning model. Good for complex problem-solving.
DeepSeek
deepseek-r1-groq
DeepSeek R1 distilled on Llama. Optimized for reasoning tasks.
Image Generation Models
gpt-image-1
OpenAI’s image generation model (successor to its DALL·E models).
gemini-imagen-4
Google’s Imagen 4 model for high-quality image generation.
Model Selection Guide
Use Case Based Selection
Quick responses and general queries
Use gemini-2.5-flash-lite for the fastest responses with good quality.
Complex reasoning and analysis
Use claude-4-sonnet or gpt-o4-mini for deeper reasoning tasks.
Code generation and debugging
Use claude-3.7 or claude-4-sonnet for excellent coding capabilities.
Image generation
Use gpt-image-1 or gemini-imagen-4 depending on style preferences.
Cost-sensitive applications
Use gemini-2.5-flash-lite for the most efficient token usage.
Dynamic Model Discovery
The ModelsClient dynamically fetches model information from t3.chat.
Fallback Models
If dynamic discovery fails, the client returns a curated list of known models:
- gemini-2.5-flash
- gemini-2.5-flash-lite
- claude-3.7
- claude-4-sonnet
- gpt-o4-mini
- deepseek-r1-groq
Model Naming Conventions
Model names in T3Router follow these patterns:
- Provider prefix: gemini-, claude-, gpt-, deepseek-
- Version/generation: 2.5, 3.7, 4, r1
- Variant: -flash, -lite, -sonnet, -mini
- Special suffixes: -groq (for Groq-hosted models), -image-1 (for image models)
Examples:
- gemini-2.5-flash-lite → Google Gemini 2.5, Flash variant, Lite version
- claude-4-sonnet → Anthropic Claude 4, Sonnet variant
- deepseek-r1-groq → DeepSeek R1, hosted on Groq
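The prefix convention above makes it easy to derive a model's provider from its name. The helper below is a convenience sketch, not part of the T3Router API.

```go
package main

import (
	"fmt"
	"strings"
)

// providerOf derives the provider from a model name's prefix, following
// the naming convention documented above.
func providerOf(model string) string {
	switch {
	case strings.HasPrefix(model, "gemini-"):
		return "google"
	case strings.HasPrefix(model, "claude-"):
		return "anthropic"
	case strings.HasPrefix(model, "gpt-"):
		return "openai"
	case strings.HasPrefix(model, "deepseek-"):
		return "deepseek"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(providerOf("claude-4-sonnet"))  // anthropic
	fmt.Println(providerOf("deepseek-r1-groq")) // deepseek
}
```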
Switching Models in Conversations
You can use different models within the same conversation.
Model Configuration
Some models support additional configuration through the Config struct:
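The Config struct's actual fields are not listed in this excerpt; the sketch below assumes common generation knobs such as temperature and a token cap, which may or may not match the real struct.

```go
package main

import "fmt"

// Hypothetical Config fields -- the real Config struct may define
// different fields; check the package documentation before relying
// on these names.
type Config struct {
	Model       string
	Temperature float64 // sampling temperature, if the model supports it
	MaxTokens   int     // response length cap, if supported
}

func main() {
	cfg := Config{Model: "claude-4-sonnet", Temperature: 0.2, MaxTokens: 1024}
	fmt.Printf("%+v\n", cfg)
}
```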
Best Practices
Test different models for your use case
Different models excel at different tasks. Benchmark a few models with your specific use case to find the best fit.
Use lite/mini models for prototyping
Start with gemini-2.5-flash-lite during development for faster iteration, then switch to more capable models for production.
Match model to content type
Use text models for text, image models for images. Don’t attempt image generation with text-only models.
Keep fallback models in mind
If a specific model fails or is unavailable, have a fallback model ready in your error handling logic.
Error Handling
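The examples for this section are not shown in this excerpt. As a sketch of the fallback pattern recommended under Best Practices: try the preferred model first and retry once with a fallback. Here errModelUnavailable and the ask function are hypothetical stand-ins, not the client's real error value or API.

```go
package main

import (
	"errors"
	"fmt"
)

// errModelUnavailable stands in for whatever error the client returns
// when a model cannot be reached; the real error value may differ.
var errModelUnavailable = errors.New("model unavailable")

// askWithFallback tries the preferred model first and retries once with
// a fallback model if the preferred one is unavailable.
func askWithFallback(ask func(model string) (string, error), preferred, fallback string) (string, error) {
	reply, err := ask(preferred)
	if errors.Is(err, errModelUnavailable) {
		return ask(fallback)
	}
	return reply, err
}

func main() {
	ask := func(model string) (string, error) {
		if model == "claude-4-sonnet" {
			return "", errModelUnavailable
		}
		return "ok from " + model, nil
	}
	reply, err := askWithFallback(ask, "claude-4-sonnet", "gemini-2.5-flash")
	if err != nil {
		panic(err)
	}
	fmt.Println(reply) // ok from gemini-2.5-flash
}
```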
Related
- Client - Learn how to use the Client
- Messages - Understanding message types
- Configuration - Configure model behavior