Overview

ChatterboxMultilingualTTS extends Chatterbox’s capabilities to 23 languages with high-quality voice cloning and synthesis. It supports cross-lingual voice cloning, allowing you to clone a voice in one language and synthesize speech in another.

Class Signature

class ChatterboxMultilingualTTS:
    def __init__(
        self,
        t3: T3,
        s3gen: S3Gen,
        ve: VoiceEncoder,
        tokenizer: MTLTokenizer,
        device: str,
        conds: Conditionals = None,
    )

Parameters

t3
T3
required
The T3 text-to-speech tokens model instance configured for multilingual support
s3gen
S3Gen
required
The S3Gen vocoder model instance for token-to-audio conversion
ve
VoiceEncoder
required
Voice encoder for extracting speaker embeddings from reference audio
tokenizer
MTLTokenizer
required
Multilingual text tokenizer instance
device
str
required
Device to run inference on ("cuda", "cpu", or "mps")
conds
Conditionals
Optional pre-computed conditionals for voice and style. See Conditionals reference

Class Methods

from_pretrained()

Load the pre-trained ChatterboxMultilingualTTS model from Hugging Face.
@classmethod
def from_pretrained(cls, device: str) -> 'ChatterboxMultilingualTTS'

Parameters

device
str
required
Device to load the model on ("cuda", "cpu", or "mps")

Returns

model
ChatterboxMultilingualTTS
Initialized ChatterboxMultilingualTTS model with pre-trained weights from ResembleAI/chatterbox

Example

from chatterbox import ChatterboxMultilingualTTS
import torch

# Load on GPU
device = "cuda" if torch.cuda.is_available() else "cpu"
model = ChatterboxMultilingualTTS.from_pretrained(device)

from_local()

Load the model from a local checkpoint directory.
@classmethod
def from_local(cls, ckpt_dir: str, device: str) -> 'ChatterboxMultilingualTTS'

Parameters

ckpt_dir
str
required
Path to the directory containing model checkpoint files
device
str
required
Device to load the model on ("cuda", "cpu", or "mps")

Returns

model
ChatterboxMultilingualTTS
Initialized ChatterboxMultilingualTTS model with weights loaded from local directory
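
Example

A minimal sketch of loading from a local checkpoint. The directory path is illustrative, and the `checkpoint_ready` helper is not part of the library; it is just a cheap pre-flight check so the (potentially slow) model load is only attempted when the directory exists.

```python
from pathlib import Path

def checkpoint_ready(ckpt_dir: str) -> bool:
    """Pre-flight check: does the checkpoint directory exist?"""
    return Path(ckpt_dir).is_dir()

# Illustrative path: substitute your own checkpoint location
ckpt_dir = "./checkpoints/chatterbox-multilingual"
if checkpoint_ready(ckpt_dir):
    from chatterbox import ChatterboxMultilingualTTS
    model = ChatterboxMultilingualTTS.from_local(ckpt_dir, device="cpu")
```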

get_supported_languages()

Return a dictionary of all supported language codes and their names.
@classmethod
def get_supported_languages(cls) -> dict

Returns

languages
dict
Dictionary mapping language codes to language names. See Supported Languages for the full list

Example

languages = ChatterboxMultilingualTTS.get_supported_languages()
print(languages)
# {'ar': 'Arabic', 'da': 'Danish', 'de': 'German', ...}

Instance Methods

prepare_conditionals()

Prepare voice conditionals from an audio prompt for subsequent generation calls.
def prepare_conditionals(
    self,
    wav_fpath: str,
    exaggeration: float = 0.5
)

Parameters

wav_fpath
str
required
Path to the audio file to use as voice reference
exaggeration
float
default:"0.5"
Voice exaggeration level (0.0 to 1.0). Higher values produce more expressive speech

Example

# Prepare voice from reference audio
model.prepare_conditionals(
    wav_fpath="voice_sample.wav",
    exaggeration=0.5
)

generate()

Generate speech from text in the specified language using the prepared voice conditionals.
def generate(
    self,
    text: str,
    language_id: str,
    audio_prompt_path: str = None,
    exaggeration: float = 0.5,
    cfg_weight: float = 0.5,
    temperature: float = 0.8,
    repetition_penalty: float = 2.0,
    min_p: float = 0.05,
    top_p: float = 1.0,
) -> torch.Tensor

Parameters

text
str
required
The text to convert to speech
language_id
str
required
Language code for the text (e.g., "en", "es", "fr"). See Supported Languages for valid codes
audio_prompt_path
str
Optional path to an audio file for voice cloning. If provided, it overrides any previously prepared conditionals
exaggeration
float
default:"0.5"
Voice exaggeration level (0.0 to 1.0). Higher values produce more expressive and animated speech
cfg_weight
float
default:"0.5"
Classifier-free guidance weight (0.0 to 1.0+). Higher values increase adherence to conditioning
temperature
float
default:"0.8"
Sampling temperature (higher = more random, lower = more deterministic)
repetition_penalty
float
default:"2.0"
Penalty for repeating tokens (1.0 = no penalty, higher values discourage repetition)
min_p
float
default:"0.05"
Minimum probability threshold for sampling. Filters out tokens below this probability
top_p
float
default:"1.0"
Nucleus sampling threshold (0.0 to 1.0). Only tokens with cumulative probability up to top_p are considered

Returns

audio
torch.Tensor
Generated audio waveform as a PyTorch tensor with shape [1, samples]. Sample rate is 44100 Hz (accessible via model.sr). Audio includes perceptual watermarking

Example

import torch
import torchaudio
from chatterbox import ChatterboxMultilingualTTS

device = "cuda" if torch.cuda.is_available() else "cpu"
model = ChatterboxMultilingualTTS.from_pretrained(device)

# Generate French speech
audio = model.generate(
    text="Bonjour, comment allez-vous?",
    language_id="fr",
    audio_prompt_path="french_voice.wav",
    exaggeration=0.6
)

torchaudio.save("french_output.wav", audio, model.sr)

# Cross-lingual voice cloning: English voice speaking Spanish
model.prepare_conditionals("english_voice.wav")
audio = model.generate(
    text="Hola, ¿cómo estás?",
    language_id="es",
    exaggeration=0.5
)

torchaudio.save("spanish_output.wav", audio, model.sr)

Attributes

sr
int
Sample rate of generated audio (44100 Hz)
device
str
Device the model is running on
conds
Conditionals
Current voice conditionals used for generation

Notes

  • Supports 23 languages with cross-lingual voice cloning capabilities
  • Language code validation is performed automatically; invalid codes raise a ValueError
  • Generated audio is automatically watermarked using the Perth implicit watermarker
  • Text is automatically normalized for the target language (capitalization, punctuation)
  • The exaggeration parameter can be updated on-the-fly without re-preparing conditionals
  • Higher repetition_penalty (default 2.0) helps prevent repetition in multilingual synthesis
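
The language-code validation noted above can also be approximated client-side before a potentially expensive generate() call. The sketch below hardcodes a small subset of codes purely for illustration; the authoritative mapping comes from ChatterboxMultilingualTTS.get_supported_languages(), and the helper function is not part of the library.

```python
# Illustrative subset only; use get_supported_languages() for the real list
SUPPORTED_SUBSET = {
    "ar": "Arabic", "de": "German", "en": "English",
    "es": "Spanish", "fr": "French",
}

def check_language(code: str) -> str:
    """Mimic the ValueError the model raises for invalid codes."""
    if code not in SUPPORTED_SUBSET:
        raise ValueError(f"Unsupported language code: {code!r}")
    return SUPPORTED_SUBSET[code]

print(check_language("fr"))  # French
```

Failing fast this way avoids loading audio and running conditioning just to hit a ValueError inside generate().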
