T3Router supports image generation through multiple models, including OpenAI’s DALL-E and Google’s Gemini Imagen. You can generate images from text prompts and either receive the image URL or download the file directly.

The simplest way to generate an image is to call `send()` with an image model:
```rust
use t3router::t3::client::Client;
use t3router::t3::config::Config;
use t3router::t3::message::{ContentType, Message, Type};

let mut client = Client::new(cookies, convex_session_id);
client.init().await?;

let config = Config::new();
let response = client
    .send(
        "gpt-image-1",
        Some(Message::new(
            Type::User,
            "Create an image of a futuristic city at sunset".to_string(),
        )),
        Some(config),
    )
    .await?;

// Check whether the response is an image
match response.content_type {
    ContentType::Image => {
        if let Some(url) = response.image_url {
            println!("Generated image at URL: {}", url);
        }
    }
    ContentType::Text => {
        println!("Got text response: {}", response.content);
    }
}
```
A complete example that generates one image without saving it, then starts a new conversation and downloads a second image to disk:

```rust
use dotenv::dotenv;
use std::path::Path;
use t3router::t3::client::Client;
use t3router::t3::config::Config;
use t3router::t3::message::{ContentType, Message, Type};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenv().ok();

    let cookies = std::env::var("COOKIES").expect("COOKIES not set");
    let convex_session_id = format!(
        "\"{}\"",
        std::env::var("CONVEX_SESSION_ID").expect("CONVEX_SESSION_ID not set")
    );

    let mut client = Client::new(cookies, convex_session_id);
    if client.init().await? {
        println!("Client initialized successfully\n");
    }

    let config = Config::new();

    println!("=== Example 1: Generate Image (No Save) ===");
    let response = client
        .send(
            "gpt-image-1",
            Some(Message::new(
                Type::User,
                "Create an image of a futuristic city at sunset with flying cars".to_string(),
            )),
            Some(config.clone()),
        )
        .await?;

    println!("User: Create an image of a futuristic city at sunset with flying cars");
    match response.content_type {
        ContentType::Image => {
            if let Some(url) = response.image_url {
                println!("Assistant: Generated image at URL: {}", url);
            }
        }
        ContentType::Text => {
            println!("Assistant: {}", response.content);
        }
    }

    println!("\n=== Example 2: Generate and Download Image ===");
    client.new_conversation();

    let save_path = Path::new("output/pokemon.png");
    let response2 = client
        .send_with_image_download(
            "gpt-image-1",
            Some(Message::new(
                Type::User,
                "Make an image of a pokemon".to_string(),
            )),
            Some(config.clone()),
            Some(save_path),
        )
        .await?;

    println!("User: Make an image of a pokemon");
    match response2.content_type {
        ContentType::Image => {
            if let Some(url) = response2.image_url {
                println!("Assistant: Generated image at URL: {}", url);
            }
            println!("Image saved to: {:?}", save_path);
            if let Some(b64) = response2.base64_data.as_ref() {
                println!("Base64 data length: {} characters", b64.len());
            }
        }
        ContentType::Text => {
            println!("Assistant: {}", response2.content);
        }
    }

    Ok(())
}
```
To save a generated image directly to disk, use `send_with_image_download()` with a target path:

```rust
let response = client
    .send_with_image_download(
        "gpt-image-1",
        Some(Message::new(
            Type::User,
            "A photograph of a modern apartment interior".to_string(),
        )),
        Some(config),
        Some(Path::new("output/apartment.png")),
    )
    .await?;
```
You can mix text and image generation in the same conversation:
```rust
client.new_conversation();

// Start with a text question
let response1 = client
    .send(
        "gemini-2.5-flash-lite",
        Some(Message::new(
            Type::User,
            "What makes a good landscape photo?".to_string(),
        )),
        Some(config.clone()),
    )
    .await?;
println!("Assistant: {}", response1.content);

// Now generate an image based on the advice
let response2 = client
    .send_with_image_download(
        "gemini-imagen-4",
        Some(Message::new(
            Type::User,
            "Create an example based on what you just described".to_string(),
        )),
        Some(config),
        Some(Path::new("output/example.png")),
    )
    .await?;
```
When switching from a text model to an image model in the same conversation, the image model will have access to the previous conversation context.
- **Be specific in prompts** - Detailed prompts produce better images.
- **Use appropriate models** - DALL-E and Imagen have different strengths.
- **Handle both content types** - Always check whether you got an image or a text response.
- **Save images immediately** - Use `send_with_image_download()` to avoid losing temporary URLs.
- **Create output directories** - The library creates parent directories automatically, so you can pass a nested path.
Some prompts may result in a text response instead of an image if the model refuses or cannot generate the requested image. Always check the `content_type` field.
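To make the branching concrete, here is a self-contained illustration; the `ContentType` and `Response` types below are simplified stand-ins for the library's actual definitions, not the real API:

```rust
// Simplified stand-ins for the library's response types, for illustration only.
enum ContentType {
    Image,
    Text,
}

struct Response {
    content_type: ContentType,
    content: String,
    image_url: Option<String>,
}

// Summarize a response, covering both the image and the text (refusal) cases.
fn describe(response: &Response) -> String {
    match response.content_type {
        ContentType::Image => match &response.image_url {
            Some(url) => format!("image at {}", url),
            None => "image with no URL".to_string(),
        },
        // A refusal or clarification comes back as plain text.
        ContentType::Text => format!("text: {}", response.content),
    }
}

fn main() {
    let refused = Response {
        content_type: ContentType::Text,
        content: "I can't generate that image.".to_string(),
        image_url: None,
    };
    // Prints: text: I can't generate that image.
    println!("{}", describe(&refused));
}
```

Code that only reads `image_url` without checking `content_type` first will silently miss refusals like this one.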