Overview

Learn how to send a simple message to any AI model and receive a response. This is the foundation for all interactions with the T3Router library.

Prerequisites

  • A paid t3.chat subscription
  • Your cookies and session ID set up in .env
  • T3Router added to your Cargo.toml
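The two environment variables read in the Quick Start can live in a .env file at your project root. A minimal sketch (the placeholder values are yours to fill in from an authenticated t3.chat browser session):

```env
COOKIES="<your t3.chat cookies>"
CONVEX_SESSION_ID="<your convex session id>"
```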

Quick Start

Step 1: Import the required modules

use t3router::t3::client::Client;
use t3router::t3::message::{Message, Type};
use t3router::t3::config::Config;
use dotenv::dotenv;

Step 2: Load credentials and initialize the client

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenv().ok();
    
    let cookies = std::env::var("COOKIES").expect("COOKIES not set");
    let convex_session_id = std::env::var("CONVEX_SESSION_ID")
        .expect("CONVEX_SESSION_ID not set");
    
    let mut client = Client::new(cookies, convex_session_id);
    
    // Initialize the client by connecting to t3.chat
    if client.init().await? {
        println!("Client initialized successfully");
    }
    
    Ok(())
}
The init() method establishes a connection with t3.chat and validates your credentials.
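The expect() calls above panic with a terse message if a variable is missing. If you prefer a friendlier startup failure, a small helper can surface a descriptive error instead. This is an illustrative sketch using only the standard library; require_var is not part of the T3Router API:

```rust
use std::env;

// Hypothetical helper (not from T3Router): read a required credential,
// returning a descriptive error instead of panicking via expect().
fn require_var(name: &str) -> Result<String, String> {
    env::var(name).map_err(|_| format!("{name} not set; add it to your .env file"))
}

fn main() {
    match require_var("COOKIES") {
        Ok(v) => println!("COOKIES loaded ({} bytes)", v.len()),
        Err(e) => eprintln!("{e}"),
    }
}
```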

Step 3: Send a message and get a response

let config = Config::new();

let response = client
    .send(
        "gemini-2.5-flash-lite",
        Some(Message::new(
            Type::User,
            "What is the capital of France?".to_string(),
        )),
        Some(config),
    )
    .await?;

println!("User: What is the capital of France?");
println!("Assistant: {}", response.content);

Complete Example

Here’s a complete working example from examples/basic_usage.rs:23-36:
use dotenv::dotenv;
use t3router::t3::{
    client::Client,
    config::Config,
    message::{Message, Type},
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenv().ok();

    let cookies = std::env::var("COOKIES").expect("COOKIES not set");
    let convex_session_id = std::env::var("CONVEX_SESSION_ID")
        .expect("CONVEX_SESSION_ID not set");

    let mut client = Client::new(cookies, convex_session_id);

    if client.init().await? {
        println!("Client initialized successfully\n");
    }

    let config = Config::new();

    println!("=== Example 1: Single Message ===");
    let response = client
        .send(
            "gemini-2.5-flash-lite",
            Some(Message::new(
                Type::User,
                "What is the capital of France?".to_string(),
            )),
            Some(config.clone()),
        )
        .await?;

    println!("User: What is the capital of France?");
    println!("Assistant: {}\n", response.content);

    Ok(())
}

Expected Output

Client initialized successfully

=== Example 1: Single Message ===
User: What is the capital of France?
Assistant: The capital of France is Paris.

Understanding the Code

The send() Method

The send() method (client.rs:349-438) is the core function for sending messages:
pub async fn send(
    &mut self,
    model: &str,
    new_message: Option<Message>,
    config: Option<Config>,
) -> Result<Message, reqwest::Error>
Parameters:
  • model: The model ID (e.g., "gemini-2.5-flash-lite", "claude-3.7", "gpt-4o")
  • new_message: Optional message to send. If provided, it’s appended to the conversation
  • config: Optional configuration for reasoning effort and search options
Returns: A Message containing the assistant’s response
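To see why new_message is an Option, consider the behavior the docs describe: a provided message is appended to the conversation before the request, while None sends the existing history unchanged. A self-contained toy sketch of that pattern (ToyClient is illustrative only, not the library's client):

```rust
// Toy mirror of send()'s Option-based API, to illustrate why
// new_message is optional (None = re-use the existing history).
struct ToyClient {
    history: Vec<String>,
}

impl ToyClient {
    fn send(&mut self, new_message: Option<String>) -> String {
        if let Some(m) = new_message {
            // Appended to the conversation, as the docs describe.
            self.history.push(m);
        }
        format!("replying to {} message(s)", self.history.len())
    }
}

fn main() {
    let mut client = ToyClient { history: vec![] };
    println!("{}", client.send(Some("hello".to_string())));
    println!("{}", client.send(None)); // history unchanged
}
```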

Message Types

Messages have a Type indicating who sent them:
  • Type::User - Messages from the user
  • Type::Assistant - Responses from the AI model
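To make the two roles concrete, here is a self-contained sketch that mirrors the shape of Message and Type. The real definitions live in t3router::t3::message; the field names below are assumptions for illustration:

```rust
// Illustrative mirror of the library's message types, not the actual
// definitions from t3router::t3::message.
#[derive(Debug, Clone, PartialEq)]
enum Type {
    User,
    Assistant,
}

#[derive(Debug, Clone)]
struct Message {
    role: Type,
    content: String,
}

impl Message {
    fn new(role: Type, content: String) -> Self {
        Message { role, content }
    }
}

fn main() {
    let question = Message::new(Type::User, "What is the capital of France?".to_string());
    let answer = Message::new(Type::Assistant, "Paris.".to_string());
    assert_eq!(question.role, Type::User);
    println!("{} -> {}", question.content, answer.content);
}
```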

Choosing a Model

You can use any available model by passing its ID to send(). Popular options:
  • Fast models: gemini-2.5-flash-lite, gpt-4o-mini
  • Powerful models: claude-4-sonnet, gemini-2.5-pro, gpt-4o
  • Reasoning models: deepseek-r1, gpt-o3-mini
  • Open models: llama-3.3-70b, qwen3-32b
See the Model Discovery guide to list all available models programmatically.
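Since send() just takes a model ID string, you can centralize model choice in one place. A purely illustrative helper (not part of T3Router) that maps a rough requirement onto IDs from the list above:

```rust
// Purely illustrative: map a rough requirement onto one of the
// model IDs listed above. Not part of the T3Router API.
fn pick_model(need_reasoning: bool, need_speed: bool) -> &'static str {
    match (need_reasoning, need_speed) {
        (true, _) => "deepseek-r1",               // reasoning model
        (false, true) => "gemini-2.5-flash-lite", // fast model
        (false, false) => "claude-4-sonnet",      // powerful model
    }
}

fn main() {
    println!("{}", pick_model(false, true));
}
```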
The client automatically refreshes your session before each request, so you don’t need to worry about expired credentials during normal use.

Next Steps

Multi-turn Conversations

Learn how to maintain context across multiple messages

Configuration

Customize reasoning effort and enable search
