
Overview

Multi-turn conversations allow the AI to remember previous messages and maintain context throughout a discussion. This is essential for building chatbots, assistants, and any application that requires ongoing dialogue.

How Conversations Work

The T3Router client maintains:
  • A conversation history of all messages (accessible via get_messages())
  • A thread ID that groups related messages together
  • Both user and assistant messages in chronological order

Basic Multi-turn Conversation

Step 1: Send the first message

use t3router::t3::client::Client;
use t3router::t3::message::{Message, Type};
use t3router::t3::config::Config;

// `cookies` and `convex_session_id` are your t3.chat credentials
let mut client = Client::new(cookies, convex_session_id);
client.init().await?;

let config = Config::new();

// First message in the conversation
let response1 = client
    .send(
        "gemini-2.5-flash-lite",
        Some(Message::new(
            Type::User,
            "I'm planning a trip to Paris. What are the top 3 attractions?".to_string(),
        )),
        Some(config.clone()),
    )
    .await?;

println!("User: I'm planning a trip to Paris. What are the top 3 attractions?");
println!("Assistant: {}", response1.content);

Step 2: Continue the conversation

// The client remembers the previous context
let response2 = client
    .send(
        "gemini-2.5-flash-lite",
        Some(Message::new(
            Type::User,
            "Tell me more about the first one.".to_string(),
        )),
        Some(config.clone()),
    )
    .await?;

println!("\nUser: Tell me more about the first one.");
println!("Assistant: {}", response2.content);
The AI understands “the first one” refers to the first attraction mentioned earlier.

Step 3: Ask follow-up questions

let response3 = client
    .send(
        "gemini-2.5-flash-lite",
        Some(Message::new(
            Type::User,
            "What's the best time to visit?".to_string(),
        )),
        Some(config),
    )
    .await?;

println!("\nUser: What's the best time to visit?");
println!("Assistant: {}", response3.content);
The AI maintains full context about Paris and the attractions.

Pre-populating Conversations

You can build conversation history before sending a message using append_message() (client.rs:264-266):
client.new_conversation();

// Add context to the conversation
client.append_message(Message::new(
    Type::User,
    "Let's play a word association game.".to_string(),
));
client.append_message(Message::new(
    Type::Assistant,
    "Great! I'm ready to play. Go ahead!".to_string(),
));
client.append_message(Message::new(Type::User, "Ocean".to_string()));
client.append_message(Message::new(Type::Assistant, "Waves".to_string()));
client.append_message(Message::new(Type::User, "Beach".to_string()));

// Now send without a new message - it will respond to "Beach"
let response = client
    .send("gemini-2.5-flash-lite", None, Some(config))
    .await?;
When you pass None as the second parameter to send(), it sends the existing conversation history without adding a new message. This is useful when you’ve pre-populated messages with append_message().

Managing Conversations

View Conversation History

println!("Conversation history:");
for msg in client.get_messages() {
    let role = match msg.role {
        Type::User => "User",
        Type::Assistant => "Assistant",
    };
    println!("{}: {}", role, msg.content);
}

Get Thread Information

// Get the current thread ID
if let Some(thread_id) = client.get_thread_id() {
    println!("Thread ID: {}", thread_id);
}

// Get message count
let message_count = client.get_messages().len();
println!("Total messages: {}", message_count);

Start a New Conversation

The new_conversation() method (client.rs:252-255) resets the thread and clears all messages:
// Clear all history and start fresh
client.new_conversation();

// This will start a completely new conversation thread
let response = client
    .send(
        "claude-3.7",
        Some(Message::new(Type::User, "Hello!".to_string())),
        Some(config),
    )
    .await?;

Clear Messages Without Resetting Thread

// Clear messages but keep the same thread ID
client.clear_messages();

Complete Example

From examples/multi_message.rs:38-74:
use dotenv::dotenv;
use t3router::t3::{
    client::Client,
    config::Config,
    message::{Message, Type},
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenv().ok();

    let cookies = std::env::var("COOKIES").expect("COOKIES not set");
    let convex_session_id = std::env::var("CONVEX_SESSION_ID")
        .expect("CONVEX_SESSION_ID not set");

    let mut client = Client::new(cookies, convex_session_id);
    client.init().await?;

    let config = Config::new();

    println!("=== Multi-turn Conversation ===");
    client.new_conversation();
    
    client.append_message(Message::new(
        Type::User,
        "I'm planning a trip to Paris. What are the top 3 attractions?".to_string(),
    ));
    
    let response1 = client
        .send("gemini-2.5-flash-lite", None, Some(config.clone()))
        .await?;
    
    println!("User: I'm planning a trip to Paris. What are the top 3 attractions?");
    println!("Assistant: {}", response1.content);

    let response2 = client
        .send(
            "gemini-2.5-flash-lite",
            Some(Message::new(
                Type::User,
                "Tell me more about the first one.".to_string(),
            )),
            Some(config.clone()),
        )
        .await?;
    
    println!("\nUser: Tell me more about the first one.");
    println!("Assistant: {}", response2.content);

    let response3 = client
        .send(
            "gemini-2.5-flash-lite",
            Some(Message::new(
                Type::User,
                "What's the best time to visit?".to_string(),
            )),
            Some(config),
        )
        .await?;
    
    println!("\nUser: What's the best time to visit?");
    println!("Assistant: {}", response3.content);

    println!("\n=== Conversation Summary ===");
    println!("Total messages: {}", client.get_messages().len());
    println!("Thread ID: {:?}", client.get_thread_id());

    Ok(())
}

Expected Output

=== Multi-turn Conversation ===
User: I'm planning a trip to Paris. What are the top 3 attractions?
Assistant: The top 3 attractions in Paris are:
1. The Eiffel Tower
2. The Louvre Museum
3. Notre-Dame Cathedral

User: Tell me more about the first one.
Assistant: The Eiffel Tower is an iconic iron lattice tower...

User: What's the best time to visit?
Assistant: The best time to visit Paris is during spring (April-June)...

=== Conversation Summary ===
Total messages: 6
Thread ID: Some("a1b2c3d4-e5f6-7890-abcd-ef1234567890")

Understanding Thread IDs

Thread IDs are automatically managed:
  • First call to send() generates a new UUID thread ID
  • Subsequent calls use the same thread ID
  • Calling new_conversation() resets the thread ID to None
  • The thread ID is used by t3.chat to group related messages
From client.rs:366-369:
let thread_id = match &self.thread_id {
    Some(id) => id.clone(),
    None => Uuid::new_v4().to_string(),
};

Two Approaches for Multi-turn Conversations

Approach 1: Send with Each Message

// Simple and straightforward: each call appends the new message and sends
let response1 = client
    .send("model", Some(Message::new(Type::User, "Hello".into())), Some(config.clone()))
    .await?;
let response2 = client
    .send("model", Some(Message::new(Type::User, "How are you?".into())), Some(config.clone()))
    .await?;

Approach 2: Pre-populate Then Send

// More control over conversation structure
client.append_message(Message::new(Type::User, "Hello".into()));
client.append_message(Message::new(Type::Assistant, "Hi there!".into()));
client.append_message(Message::new(Type::User, "How are you?".into()));

// Send all at once
let response = client.send("model", None, Some(config)).await?;
Use Approach 2 when you need to inject specific conversation history or simulate a conversation state. This is useful for testing or implementing conversation templates.

Best Practices

  1. Call new_conversation() when starting a new topic - Prevents context bleed between unrelated conversations
  2. Check message count periodically - Very long conversations may hit token limits
  3. Store thread IDs - Useful for logging and debugging conversation flows
  4. Use the same model throughout a conversation - Switching models mid-conversation can cause inconsistencies

Next Steps

Configuration

Learn how to adjust reasoning effort and enable search

Image Generation

Generate images within conversations
