Overview

ChatterboxTTS is Resemble AI's flagship English text-to-speech model, offering fine-grained control over voice characteristics and generation quality. It supports classifier-free guidance and multiple sampling strategies for high-quality, expressive speech synthesis.

Class Signature

class ChatterboxTTS:
    def __init__(
        self,
        t3: T3,
        s3gen: S3Gen,
        ve: VoiceEncoder,
        tokenizer: EnTokenizer,
        device: str,
        conds: Conditionals = None,
    )

Parameters

t3
T3
required
The T3 model instance, which converts text tokens into speech tokens
s3gen
S3Gen
required
The S3Gen vocoder model instance for token-to-audio conversion
ve
VoiceEncoder
required
Voice encoder for extracting speaker embeddings from reference audio
tokenizer
EnTokenizer
required
English text tokenizer instance
device
str
required
Device to run inference on ("cuda", "cpu", or "mps")
conds
Conditionals
Optional pre-computed conditionals for voice and style. See Conditionals reference

Class Methods

from_pretrained()

Load the pre-trained ChatterboxTTS model from Hugging Face.
@classmethod
def from_pretrained(cls, device: str) -> 'ChatterboxTTS'

Parameters

device
str
required
Device to load the model on ("cuda", "cpu", or "mps"). Automatically falls back to "cpu" if MPS is not available

Returns

model
ChatterboxTTS
Initialized ChatterboxTTS model with pre-trained weights from ResembleAI/chatterbox

Example

from chatterbox import ChatterboxTTS
import torch

# Load on GPU
device = "cuda" if torch.cuda.is_available() else "cpu"
model = ChatterboxTTS.from_pretrained(device)

from_local()

Load the model from a local checkpoint directory.
@classmethod
def from_local(cls, ckpt_dir: str, device: str) -> 'ChatterboxTTS'

Parameters

ckpt_dir
str
required
Path to the directory containing model checkpoint files
device
str
required
Device to load the model on ("cuda", "cpu", or "mps")

Returns

model
ChatterboxTTS
Initialized ChatterboxTTS model with weights loaded from local directory

Instance Methods

prepare_conditionals()

Prepare voice conditionals from an audio prompt for subsequent generation calls.
def prepare_conditionals(
    self,
    wav_fpath: str,
    exaggeration: float = 0.5
)

Parameters

wav_fpath
str
required
Path to the audio file to use as voice reference
exaggeration
float
default:"0.5"
Voice exaggeration level (0.0 to 1.0). Higher values produce more expressive speech

Example

# Prepare voice from reference audio
model.prepare_conditionals(
    wav_fpath="voice_sample.wav",
    exaggeration=0.5
)

generate()

Generate speech from text using the prepared voice conditionals.
def generate(
    self,
    text: str,
    repetition_penalty: float = 1.2,
    min_p: float = 0.05,
    top_p: float = 1.0,
    audio_prompt_path: str = None,
    exaggeration: float = 0.5,
    cfg_weight: float = 0.5,
    temperature: float = 0.8,
) -> torch.Tensor

Parameters

text
str
required
The text to convert to speech
repetition_penalty
float
default:"1.2"
Penalty for repeating tokens (1.0 = no penalty, higher values discourage repetition)
min_p
float
default:"0.05"
Minimum probability threshold for sampling. Filters out tokens below this probability
top_p
float
default:"1.0"
Nucleus sampling threshold (0.0 to 1.0). Only tokens with cumulative probability up to top_p are considered
audio_prompt_path
str
Optional path to an audio file for voice cloning. If provided, it overrides any previously prepared conditionals
exaggeration
float
default:"0.5"
Voice exaggeration level (0.0 to 1.0). Higher values produce more expressive and animated speech
cfg_weight
float
default:"0.5"
Classifier-free guidance weight (0.0 to 1.0+). Higher values increase adherence to conditioning
temperature
float
default:"0.8"
Sampling temperature (higher = more random, lower = more deterministic)

Returns

audio
torch.Tensor
Generated audio waveform as a PyTorch tensor with shape [1, samples]. Sample rate is 44100 Hz (accessible via model.sr). Audio includes perceptual watermarking

Example

import torchaudio

# Generate speech with prepared conditionals
audio = model.generate(
    text="Hello, this is a demonstration of Chatterbox TTS.",
    temperature=0.8,
    cfg_weight=0.5,
    exaggeration=0.5,
    repetition_penalty=1.2,
    min_p=0.05
)

# Save to file
torchaudio.save("output.wav", audio, model.sr)

# Or generate with a new voice in one call
audio = model.generate(
    text="Hello world!",
    audio_prompt_path="new_voice.wav",
    exaggeration=0.7
)

Attributes

sr
int
Sample rate of generated audio (44100 Hz)
device
str
Device the model is running on
conds
Conditionals
Current voice conditionals used for generation
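
Example

Assuming a model loaded as in the examples above, these attributes can be used, for instance, to report the duration of a generated clip. clip_seconds is an illustrative helper, not part of the API.

```python
def clip_seconds(n_samples: int, sr: int) -> float:
    # Duration in seconds of a clip with n_samples frames at sample rate sr
    return n_samples / sr

# With a loaded model and a generated tensor of shape [1, samples]:
# audio = model.generate(text="Hello!")
# print(model.device)                             # device the model runs on
# print(clip_seconds(audio.shape[-1], model.sr))  # clip length in seconds
```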

Notes

  • This model supports classifier-free guidance (CFG) for improved quality control
  • Generated audio is automatically watermarked using the Perth implicit watermarker
  • Text is automatically normalized (capitalization, punctuation) before generation
  • The exaggeration parameter can be updated on-the-fly without re-preparing conditionals
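
For example, because exaggeration can change between calls without re-preparing conditionals, a sweep over expressiveness levels needs only one prepare_conditionals() call. The render_sweep helper and the file name below are illustrative, not part of the API.

```python
def render_sweep(model, text, levels=(0.3, 0.5, 0.7)):
    # Conditionals are prepared once and reused; only exaggeration varies per call
    return [(level, model.generate(text=text, exaggeration=level)) for level in levels]

# Prepare the voice once, then sweep expressiveness without re-preparing:
# model.prepare_conditionals(wav_fpath="voice_sample.wav")
# clips = render_sweep(model, "How exciting is that!")
```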
