This is a streaming-optimized fork of Chatterbox Multilingual, Resemble AI's production-grade open source TTS model supporting 23 languages. Licensed under MIT, Chatterbox has been benchmarked against leading closed-source systems like ElevenLabs, and is consistently preferred in side-by-side evaluations.
This fork adds a streaming implementation that achieves a real-time factor (RTF) of 0.499 (target < 1) on a 4090 GPU, with a latency to first audio chunk of around 0.472s.
Whether you're working on memes, videos, games, or AI agents, Chatterbox brings your content to life across languages. It's the first open source TTS model to support emotion exaggeration control with robust multilingual zero-shot voice cloning. Try the English-only version now on our English Hugging Face Gradio app, or try the multilingual version on our Multilingual Hugging Face Gradio app.
If you like the model but need to scale or tune it for higher accuracy, check out our competitively priced TTS service (link). It delivers reliable performance with ultra-low latency of sub-200ms, ideal for production use in agents, applications, or interactive media.
- Multilingual, zero-shot TTS supporting 23 languages
- SoTA zero-shot English TTS
- 0.5B Llama backbone
- Unique exaggeration/intensity control
- Ultra-stable with alignment-informed inference
- Trained on 0.5M hours of cleaned data
- Easy voice conversion script
- Real-time streaming generation (0.499 RTF on 4090)
- Outperforms ElevenLabs
Arabic (ar) • Danish (da) • German (de) • Greek (el) • English (en) • Spanish (es) • Finnish (fi) • French (fr) • Hebrew (he) • Hindi (hi) • Italian (it) • Japanese (ja) • Korean (ko) • Malay (ms) • Dutch (nl) • Norwegian (no) • Polish (pl) • Portuguese (pt) • Russian (ru) • Swedish (sv) • Swahili (sw) • Turkish (tr) • Chinese (zh)
General Use (TTS and Voice Agents):
- Ensure that the reference clip matches the specified language tag. Otherwise, language transfer outputs may inherit the accent of the reference clip's language. To mitigate this, set `cfg_weight` to `0`.
- The default settings (`exaggeration=0.5`, `cfg_weight=0.5`) work well for most prompts across all languages.
- If the reference speaker has a fast speaking style, lowering `cfg_weight` to around `0.3` can improve pacing.

Expressive or Dramatic Speech:
- Try lower `cfg_weight` values (e.g. `~0.3`) and increase `exaggeration` to around `0.7` or higher.
- Higher `exaggeration` tends to speed up speech; reducing `cfg_weight` helps compensate with slower, more deliberate pacing.
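As a quick reference, the tips above can be captured as keyword-argument presets. The parameter names (`exaggeration`, `cfg_weight`) come from the defaults quoted above, but treat the exact `generate()` signature as an assumption to verify against the code:

```python
# Hypothetical presets summarizing the tuning guidance above. The keyword
# names (exaggeration, cfg_weight) follow the defaults quoted in the tips;
# verify them against the model's generate() signature before relying on them.
PRESETS = {
    "general":      {"exaggeration": 0.5, "cfg_weight": 0.5},  # default settings
    "fast_speaker": {"exaggeration": 0.5, "cfg_weight": 0.3},  # fast reference clip
    "expressive":   {"exaggeration": 0.7, "cfg_weight": 0.3},  # dramatic speech
}

def preset_kwargs(name: str) -> dict:
    """Return a copy of a preset, ready to splat into generate(text, **kw)."""
    return dict(PRESETS[name])

print(preset_kwargs("expressive"))  # {'exaggeration': 0.7, 'cfg_weight': 0.3}
```

A call would then look like `model.generate(text, **preset_kwargs("expressive"))`, tweaking values per voice as described above.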
git clone https://github.com/rossturner/chatterbox-streaming.git
cd chatterbox-streaming
python3.10 -m venv .venv
source .venv/bin/activate
pip install -e .

Alternatively, install from PyPI:

pip install chatterbox-tts

We developed and tested this streaming fork on Python 3.10.12 on Debian 11; the dependency versions are pinned in pyproject.toml to ensure consistency. Installing in editable mode (-e) lets you modify the code or dependencies.
import torchaudio as ta
from chatterbox.tts import ChatterboxTTS
from chatterbox.mtl_tts import ChatterboxMultilingualTTS
# English example
model = ChatterboxTTS.from_pretrained(device="cuda")
text = "Ezreal and Jinx teamed up with Ahri, Yasuo, and Teemo to take down the enemy's Nexus in an epic late-game pentakill."
wav = model.generate(text)
ta.save("test-english.wav", wav, model.sr)
# Multilingual examples
multilingual_model = ChatterboxMultilingualTTS.from_pretrained(device="cuda")
french_text = "Bonjour, comment ça va? Ceci est le modèle de synthèse vocale multilingue Chatterbox, il prend en charge 23 langues."
french_wav = multilingual_model.generate(french_text, language='fr')
ta.save("test-french.wav", french_wav, multilingual_model.sr)
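For the streaming path this fork adds, a consumer typically receives audio in chunks rather than one tensor. The sketch below shows how streamed chunks could be accumulated into a full waveform; the generator method name (e.g. `generate_stream`) and chunk shape `(1, n_samples)` are assumptions to check against this fork's source, so simulated tensors stand in for model output here:

```python
import torch

# Sketch of consuming streamed audio. Assumes the fork yields waveform
# chunks of shape (1, n_samples) from a generator method (the method name,
# e.g. generate_stream(text), is an assumption -- check the fork's code).
def collect_chunks(chunk_iter):
    """Concatenate streamed (1, n_samples) chunks along the time axis."""
    return torch.cat(list(chunk_iter), dim=-1)

# Simulated stand-in for streamed model output: three 0.1s chunks at 24 kHz.
fake_chunks = (torch.zeros(1, 2400) for _ in range(3))
wav = collect_chunks(fake_chunks)
print(tuple(wav.shape))  # (1, 7200)
```

In a real pipeline each chunk would be played back (or sent over a socket) as it arrives, which is what keeps latency to first audio around the 0.472s figure quoted above, while the concatenated tensor can still be saved with `ta.save` as in the examples.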
