Inference does not work in webui #768
Comments
@Picus303 Could you have a look at it?
I'll try to reproduce it. I think it happened once while I was doing tests but never happened again.
You're running without CUDA. Bandwidth and inference speed are too slow.
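A quick way to confirm whether the installed torch build can actually see a GPU (this check is my own addition, not part of the thread):

import torch

# Reports whether this torch build can use a CUDA GPU.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
else:
    print("Running on CPU; webui inference will be very slow.")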
Thank you for your reply.
@yuzifu Use
I can't reproduce this issue for the moment. I guess I'll end up getting it again and then be able to investigate it, but for now I'm not really making progress.
Self Checks
Cloud or Self Hosted
Self Hosted (Source)
Environment Details
Ubuntu 24.04, torch 2.4.1, gradio 5.9.1, Python 3.10
Steps to Reproduce
$git clone https://github.com/fishaudio/fish-speech.git
$cd fish-speech
$huggingface-cli download fishaudio/fish-speech-1.5 --local-dir checkpoints/fish-speech-1.5/
$GRADIO_SERVER_NAME=0.0.0.0 GRADIO_SHARE=True python tools/run_webui.py --llama-checkpoint-path "checkpoints/fish-speech-1.5" --decoder-checkpoint-path "checkpoints/fish-speech-1.5/firefly-gan-vq-fsq-8x1024-21hz-generator.pth" --decoder-config-name firefly_gan_vq
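Before launching the webui, a small sanity check (my own sketch, using the paths from the commands above) can confirm that the download produced files and that the decoder weights can at least be deserialized:

import os
import torch

ckpt_dir = "checkpoints/fish-speech-1.5"  # local dir used in the huggingface-cli step
decoder_path = os.path.join(ckpt_dir, "firefly-gan-vq-fsq-8x1024-21hz-generator.pth")

# Confirm the checkpoint directory is populated.
print(os.listdir(ckpt_dir))

# Confirm the decoder checkpoint loads on CPU without error.
state = torch.load(decoder_path, map_location="cpu")
print("Decoder checkpoint loaded:", type(state))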
✔️ Expected Behavior
Inference works in the webui.
❌ Actual Behavior
Inference does not work in the webui, and no error is reported.