Overview

T3Router supports image generation through multiple models, including OpenAI's DALL-E and Google's Gemini Imagen. You can generate images from text prompts and either retrieve the hosted URL or download the file directly.

Available Image Models

  • gpt-image-1 - OpenAI’s DALL-E model
  • gemini-imagen-4 - Google’s Gemini Imagen model

Basic Image Generation

The simplest way to generate an image is to use send() with an image model:
use t3router::t3::client::Client;
use t3router::t3::message::{Message, Type, ContentType};
use t3router::t3::config::Config;

let mut client = Client::new(cookies, convex_session_id);
client.init().await?;

let config = Config::new();

let response = client
    .send(
        "gpt-image-1",
        Some(Message::new(
            Type::User,
            "Create an image of a futuristic city at sunset".to_string(),
        )),
        Some(config),
    )
    .await?;

// Check if the response is an image
match response.content_type {
    ContentType::Image => {
        if let Some(url) = response.image_url {
            println!("Generated image at URL: {}", url);
        }
    }
    ContentType::Text => {
        println!("Got text response: {}", response.content);
    }
}

Downloading Images

Use send_with_image_download() (client.rs:453-471) to automatically download generated images:
1. Import Path for file operations

use std::path::Path;
use t3router::t3::client::Client;
use t3router::t3::message::{Message, Type, ContentType};
use t3router::t3::config::Config;
2. Specify the save path and generate the image

let save_path = Path::new("output/generated_image.png");

let response = client
    .send_with_image_download(
        "gpt-image-1",
        Some(Message::new(
            Type::User,
            "Create a simple drawing of a happy robot".to_string(),
        )),
        Some(config),
        Some(save_path),
    )
    .await?;
The directory will be created automatically if it doesn’t exist.
3. Access the downloaded image data

match response.content_type {
    ContentType::Image => {
        if let Some(url) = response.image_url {
            println!("Image URL: {}", url);
        }
        if save_path.exists() {
            println!("Image saved to: {:?}", save_path);
        }
        if let Some(b64) = response.base64_data {
            println!("Base64 data available ({} characters)", b64.len());
        }
    }
    ContentType::Text => {
        println!("Assistant: {}", response.content);
    }
}
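The note above says output directories are created automatically. As a rough illustration of what that likely involves internally (the helper name ensure_parent_dir is invented here, not part of the t3router API), std::fs::create_dir_all can create the whole parent chain of a save path:

```rust
use std::fs;
use std::path::Path;

// Hypothetical helper mirroring what send_with_image_download() presumably
// does before writing the file: make sure the parent directory exists.
fn ensure_parent_dir(save_path: &Path) -> std::io::Result<()> {
    if let Some(parent) = save_path.parent() {
        // create_dir_all creates every missing component and is a no-op
        // if the directory already exists
        fs::create_dir_all(parent)?;
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    let path = Path::new("output/generated_image.png");
    ensure_parent_dir(path)?;
    assert!(path.parent().unwrap().exists());
    Ok(())
}
```

Because create_dir_all succeeds when the directory already exists, calling this before every download is safe.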

Complete Example

From examples/image_generation.rs:27-86:
use dotenv::dotenv;
use std::path::Path;
use t3router::t3::client::Client;
use t3router::t3::config::Config;
use t3router::t3::message::{ContentType, Message, Type};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenv().ok();

    let cookies = std::env::var("COOKIES").expect("COOKIES not set");
    let convex_session_id = format!(
        "\"{}\"",
        std::env::var("CONVEX_SESSION_ID").expect("CONVEX_SESSION_ID not set")
    );

    let mut client = Client::new(cookies, convex_session_id);

    if client.init().await? {
        println!("Client initialized successfully\n");
    }

    let config = Config::new();

    println!("=== Example 1: Generate Image (No Save) ===");
    let response = client
        .send(
            "gpt-image-1",
            Some(Message::new(
                Type::User,
                "Create an image of a futuristic city at sunset with flying cars".to_string(),
            )),
            Some(config.clone()),
        )
        .await?;

    println!("User: Create an image of a futuristic city at sunset with flying cars");
    match response.content_type {
        ContentType::Image => {
            if let Some(url) = response.image_url {
                println!("Assistant: Generated image at URL: {}", url);
            }
        }
        ContentType::Text => {
            println!("Assistant: {}", response.content);
        }
    }

    println!("\n=== Example 2: Generate and Download Image ===");
    client.new_conversation();

    let save_path = Path::new("output/pokemon.png");
    let response2 = client
        .send_with_image_download(
            "gpt-image-1",
            Some(Message::new(
                Type::User,
                "Make a image of a pokemon".to_string(),
            )),
            Some(config.clone()),
            Some(save_path),
        )
        .await?;

    println!("User: Make a image of a pokemon");
    match response2.content_type {
        ContentType::Image => {
            if let Some(url) = response2.image_url {
                println!("Assistant: Generated image at URL: {}", url);
            }
            println!("Image saved to: {:?}", save_path);
            if let Some(b64) = response2.base64_data.as_ref() {
                println!("Base64 data length: {} characters", b64.len());
            }
        }
        ContentType::Text => {
            println!("Assistant: {}", response2.content);
        }
    }

    Ok(())
}

Using Different Image Models

DALL-E (gpt-image-1)

let response = client
    .send_with_image_download(
        "gpt-image-1",
        Some(Message::new(
            Type::User,
            "A photograph of a modern apartment interior".to_string(),
        )),
        Some(config),
        Some(Path::new("output/apartment.png")),
    )
    .await?;

Gemini Imagen (gemini-imagen-4)

let response = client
    .send_with_image_download(
        "gemini-imagen-4",
        Some(Message::new(
            Type::User,
            "Create a beautiful mountain landscape with a lake".to_string(),
        )),
        Some(config),
        Some(Path::new("output/landscape.png")),
    )
    .await?;

Manually Downloading Images

You can also download images manually using download_image() (client.rs:302-321):
// Generate image without downloading
let response = client
    .send(
        "gpt-image-1",
        Some(Message::new(Type::User, "A red sports car".to_string())),
        Some(config),
    )
    .await?;

// Download the image manually
if let Some(url) = response.image_url {
    let save_path = Path::new("output/car.png");
    let base64_data = client.download_image(&url, Some(save_path)).await?;
    
    println!("Downloaded image to {:?}", save_path);
    println!("Base64 data: {} bytes", base64_data.len());
}
The returned base64_data is the same payload that send_with_image_download() would have placed in the response.

Mixed Text and Image Conversations

You can mix text and image generation in the same conversation:
client.new_conversation();

// Start with a text question
let response1 = client
    .send(
        "gemini-2.5-flash-lite",
        Some(Message::new(
            Type::User,
            "What makes a good landscape photo?".to_string(),
        )),
        Some(config.clone()),
    )
    .await?;

println!("Assistant: {}", response1.content);

// Now generate an image based on the advice
let response2 = client
    .send_with_image_download(
        "gemini-imagen-4",
        Some(Message::new(
            Type::User,
            "Create an example based on what you just described".to_string(),
        )),
        Some(config),
        Some(Path::new("output/example.png")),
    )
    .await?;
When switching from a text model to an image model in the same conversation, the image model will have access to the previous conversation context.

Understanding the Image Response

The Message struct has specific fields for image content:
pub struct Message {
    pub role: Type,
    pub content: String,
    pub content_type: ContentType,
    pub image_url: Option<String>,
    pub base64_data: Option<String>,
    // ...
}
Fields:
  • content_type - Either ContentType::Text or ContentType::Image
  • image_url - The URL where the image is hosted
  • base64_data - Base64-encoded image data (only populated when using send_with_image_download())
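Since base64_data arrives as a string, you will usually want to decode it back to raw bytes before writing it somewhere yourself. In a real project you would reach for the base64 crate; the minimal decoder below (standard alphabet, '=' padding) is purely illustrative and not part of the library:

```rust
// Illustrative base64 decoder - prefer the `base64` crate in real code.
fn decode_base64(input: &str) -> Option<Vec<u8>> {
    const ALPHABET: &[u8] =
        b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    let mut out = Vec::new();
    let mut buf: u32 = 0;
    let mut bits = 0;
    for &byte in input.as_bytes() {
        if byte == b'=' || byte == b'\n' || byte == b'\r' {
            continue; // skip padding and line breaks
        }
        // Map the character to its 6-bit value; bail on invalid input
        let val = ALPHABET.iter().position(|&c| c == byte)? as u32;
        buf = (buf << 6) | val;
        bits += 6;
        if bits >= 8 {
            bits -= 8;
            out.push((buf >> bits) as u8); // emit the next full byte
        }
    }
    Some(out)
}

fn main() {
    // "aGVsbG8=" is "hello" in base64
    let bytes = decode_base64("aGVsbG8=").unwrap();
    assert_eq!(bytes, b"hello");
}
```

The decoded Vec<u8> is the raw image file and can be written with std::fs::write.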

Response Parsing

The client automatically detects image responses from the EventStream format (client.rs:132-244):
// The parser looks for image generation events
if type_str == Some("image-gen") {
    image_url = value.get("url")
        .and_then(Value::as_str)
        .map(|s| s.to_string());
}
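The actual parser works on serde_json values, as shown above. The same detection logic can be sketched with plain string handling; the function extract_image_url below is a std-only illustration (not the library's implementation) that assumes one compact JSON object per stream line:

```rust
// Naive sketch of the image-event detection above, without serde_json.
// Assumes the event is a compact JSON object on a single line.
fn extract_image_url(line: &str) -> Option<String> {
    // Only "image-gen" events carry a generated-image URL
    if !line.contains("\"type\":\"image-gen\"") {
        return None;
    }
    // Pull out the value of the "url" field
    let start = line.find("\"url\":\"")? + "\"url\":\"".len();
    let end = line[start..].find('"')? + start;
    Some(line[start..end].to_string())
}

fn main() {
    let line = r#"{"type":"image-gen","url":"https://example.com/img.png"}"#;
    assert_eq!(
        extract_image_url(line).as_deref(),
        Some("https://example.com/img.png")
    );
    // Non-image events yield no URL
    assert_eq!(extract_image_url(r#"{"type":"text","delta":"hi"}"#), None);
}
```

Real code should use a proper JSON parser, since field order and escaping make string matching fragile.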

Error Handling

let response = client
    .send_with_image_download(
        "gpt-image-1",
        Some(Message::new(Type::User, "Create an image".to_string())),
        Some(config),
        Some(Path::new("output/image.png")),
    )
    .await;

match response {
    Ok(msg) => match msg.content_type {
        ContentType::Image => println!("Image generated successfully"),
        ContentType::Text => println!("Got text instead of image: {}", msg.content),
    },
    Err(e) => eprintln!("Error generating image: {}", e),
}

Best Practices

  1. Be specific in prompts - Detailed prompts produce better images
  2. Use appropriate models - DALL-E and Imagen have different strengths
  3. Handle both content types - Always check if you got an image or text response
  4. Save images immediately - Use send_with_image_download() to avoid losing temporary URLs
  5. Don't pre-create output directories - The library creates parent directories for the save path automatically
Some prompts may result in a text response instead of an image if the model refuses or cannot generate the requested image. Always check the content_type field.

Next Steps

Multi-turn Conversations

Learn how to maintain context across messages

Model Discovery

Discover all available models dynamically
