
XTTS streaming server

Running the server

To run a pre-built container (CUDA 11.8):

$ docker run --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest

To run the CUDA 12.1 version (for newer cards):

$ docker run --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest-cuda121

If you have already downloaded the XTTS v2 model and want to reuse it with this server (on Ubuntu), mount the model directory into the container, replacing /home/YOUR_USER_NAME with your home directory:

$ docker run -v /home/YOUR_USER_NAME/.local/share/tts/tts_models--multilingual--multi-dataset--xtts_v2:/root/.local/share/tts/tts_models--multilingual--multi-dataset--xtts_v2 --env NVIDIA_DISABLE_REQUIRE=1 --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest

Setting the COQUI_TOS_AGREED environment variable to 1 indicates you have read and agreed to the terms of the CPML license.
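Once the container is running, you can sanity-check it from Python before moving on to the test script. The snippet below is a minimal sketch, assuming the server exposes a GET /languages endpoint on the mapped port 8000; the endpoint name is an assumption, so adjust it to whatever the server's API docs show (with a FastAPI-based server, interactive docs are typically served at http://localhost:8000/docs).

import requests  # pip install requests

# Hypothetical readiness check: the /languages endpoint is an assumption,
# verify the actual path against the server's API documentation.
resp = requests.get("http://localhost:8000/languages", timeout=10)
resp.raise_for_status()
print("Server is up, supported languages:", resp.json())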

Testing the server

  1. Generate audio with the test script:
$ cd test
$ python -m pip install -r requirements.txt
$ python test_streaming.py
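The test script is essentially a client for the server's streaming endpoint. As a rough illustration of what such a client might look like, here is a minimal sketch; the endpoint paths (/studio_speakers, /tts_stream) and the payload fields are assumptions and may need adjusting to match the actual server API and test script.

import requests

SERVER = "http://localhost:8000"

# Fetch a pre-computed studio speaker (endpoint name and response shape are assumptions).
speakers = requests.get(f"{SERVER}/studio_speakers", timeout=30).json()
name, speaker = next(iter(speakers.items()))

# Request streamed audio and write raw chunks to disk as they arrive.
payload = {
    "text": "Hello from the XTTS streaming server.",
    "language": "en",
    "speaker_embedding": speaker["speaker_embedding"],
    "gpt_cond_latent": speaker["gpt_cond_latent"],
    "add_wav_header": True,
}
with requests.post(f"{SERVER}/tts_stream", json=payload, stream=True) as resp:
    resp.raise_for_status()
    with open("output.wav", "wb") as f:
        for chunk in resp.iter_content(chunk_size=4096):
            f.write(chunk)
print(f"Wrote output.wav using studio speaker '{name}'")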

Building the container

  1. To build the Docker container with PyTorch 2.1 and CUDA 11.8:
$ cd server
$ docker build -t xtts-stream .

For PyTorch 2.1 and CUDA 12.1:

$ cd server
$ docker build -t xtts-stream . -f Dockerfile.cuda121
  2. Run the server container:
$ docker run --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 xtts-stream

Setting the COQUI_TOS_AGREED environment variable to 1 indicates you have read and agreed to the terms of the CPML license.
