OpenVoice

Paper | Website

Join Our Community

Join our Discord community and select the Developer role upon joining to gain exclusive access to our developer-only channel! Don't miss out on valuable discussions and collaboration opportunities.

Introduction

As we detailed in our paper and website, the advantages of OpenVoice are three-fold:

1. Accurate Tone Color Cloning. OpenVoice can accurately clone the reference tone color and generate speech in multiple languages and accents.

2. Flexible Voice Style Control. OpenVoice enables granular control over voice styles, such as emotion and accent, as well as other style parameters including rhythm, pauses, and intonation.

3. Zero-shot Cross-lingual Voice Cloning. Neither the language of the generated speech nor the language of the reference speech needs to be present in the massive-speaker multi-lingual training dataset.

Demo video: openvoice.mp4

OpenVoice has been powering the instant voice cloning capability of myshell.ai since May 2023. As of November 2023, the voice cloning model has been used tens of millions of times by users worldwide and has accompanied explosive user growth on the platform.

Main Contributors

Live Demo

Common Issues

Please see QnA for common questions and answers. We will regularly update the question and answer list.

Installation

Clone this repo, and run

conda create -n openvoice python=3.9
conda activate openvoice
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.7 -c pytorch -c nvidia
pip install -r requirements.txt
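
Optionally, run a quick sanity check (a minimal sketch, assuming a CUDA 11.7-capable machine) to confirm that the pinned PyTorch build can see the GPU before downloading the checkpoints:

import torch
print(torch.__version__)          # expected: 1.13.1
print(torch.cuda.is_available())  # should print True if the CUDA build installed correctly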

Download the checkpoint from here and extract it to the checkpoints folder.

Usage

1. Flexible Voice Style Control. Please see demo_part1.ipynb for an example of how OpenVoice enables flexible style control over the cloned voice.

2. Cross-Lingual Voice Cloning. Please see demo_part2.ipynb for an example covering languages seen or unseen in the MSML training set.

3. Gradio Demo. We provide a minimalist local Gradio demo; launch it with python -m openvoice_app --share. We strongly suggest that users consult demo_part1.ipynb, demo_part2.ipynb, and the QnA if they run into issues with the Gradio demo.

4. Advanced Usage. The base speaker model can be replaced with any model (in any language and style) that the user prefers. Please use the se_extractor.get_se function, as demonstrated in the demo and in the sketch after this list, to extract the tone color embedding for the new base speaker.

5. Tips to Generate Natural Speech. There are many single- or multi-speaker TTS methods that can generate natural speech and are readily available. By simply replacing the base speaker model with the model you prefer, you can push the speech naturalness to the level you desire.
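
For reference, the end-to-end cloning flow in points 1 and 4 looks roughly like the sketch below, condensed from demo_part1.ipynb. The class names, function signatures, and checkpoint paths follow that notebook but should be treated as assumptions; consult the notebook in your checkout for the authoritative version.

import os
import torch
from api import BaseSpeakerTTS, ToneColorConverter   # modules at the repo root
import se_extractor

ckpt_base = 'checkpoints/base_speakers/EN'            # assumed checkpoint layout
ckpt_converter = 'checkpoints/converter'
device = 'cuda:0'
output_dir = 'outputs'
os.makedirs(output_dir, exist_ok=True)

# Load the base speaker TTS model and the tone color converter.
base_speaker_tts = BaseSpeakerTTS(f'{ckpt_base}/config.json', device=device)
base_speaker_tts.load_ckpt(f'{ckpt_base}/checkpoint.pth')
tone_color_converter = ToneColorConverter(f'{ckpt_converter}/config.json', device=device)
tone_color_converter.load_ckpt(f'{ckpt_converter}/checkpoint.pth')

# Tone color embedding of the base speaker (shipped with the checkpoints) ...
source_se = torch.load(f'{ckpt_base}/en_default_se.pth').to(device)
# ... and of the reference speaker, extracted with se_extractor.get_se.
reference_speaker = 'resources/example_reference.mp3'  # hypothetical reference clip
target_se, audio_name = se_extractor.get_se(reference_speaker, tone_color_converter, target_dir='processed', vad=True)

# Generate speech with the base speaker, then convert its tone color to the reference.
src_path = f'{output_dir}/tmp.wav'
base_speaker_tts.tts('This audio is generated by OpenVoice.', src_path, speaker='default', language='English', speed=1.0)
tone_color_converter.convert(audio_src_path=src_path, src_se=source_se, tgt_se=target_se,
                             output_path=f'{output_dir}/output_en_default.wav', message='@MyShell')

Swapping in a different base speaker model (point 4 above) amounts to replacing base_speaker_tts and extracting a new source_se for that speaker with se_extractor.get_se.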

Roadmap

  • Inference code
  • Tone color converter model
  • Multi-style base speaker model
  • Multi-style and multi-lingual demo
  • Base speaker model in other languages
  • EN base speaker model with better naturalness

Citation

@article{qin2023openvoice,
  title={OpenVoice: Versatile Instant Voice Cloning},
  author={Qin, Zengyi and Zhao, Wenliang and Yu, Xumin and Sun, Xin},
  journal={arXiv preprint arXiv:2312.01479},
  year={2023}
}

License

This repository is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which prohibits commercial usage. This will be changed to a license that allows free commercial usage in the near future. Stay tuned. For social responsibility and anti-misuse considerations, MyShell reserves the ability to detect whether an audio clip was generated by OpenVoice, regardless of whether a watermark is added.

Acknowledgements

This implementation is based on several excellent projects: TTS, VITS, and VITS2. Thanks for their awesome work!
