Self Checks

- This template is only for bug reports. For questions, please visit Discussions.
- I have thoroughly reviewed the project documentation (installation, training, inference) but couldn't find information to solve my problem.
- I have searched for existing issues, including closed ones.
- I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [FOR CHINESE USERS] Please be sure to submit issues in English, otherwise they will be closed. Thanks! :)
- Please do not modify this template and fill in all required fields.

Cloud or Self Hosted

Self Hosted (Source)

Environment Details

Ubuntu 22.04.4 LTS

Steps to Reproduce

1. Follow the steps at 'https://speech.fish.audio/zh/#linux' to set up the Python environment.
2. Follow the steps in 'https://github.com/fishaudio/fish-speech/blob/main/inference.ipynb' to download the model files.
3. Run the script below with uvicorn:

uvicorn fish_speech_test:app

fish_speech_test.py content:
import asyncio
import io
import logging
import os
import re
import time
import uuid

# Set these before importing torch so they are picked up at initialization.
os.environ["EINX_FILTER_TRACEBACK"] = "false"
# os.environ["CUDA_VISIBLE_DEVICES"] = "6"
os.environ["TORCH_LOGS"] = "inductor"
os.environ["TORCH_DUMP_GRAPH"] = "1"
os.environ["TORCH_CUDNN_SDPA_ENABLED"] = "1"

import librosa
import numpy as np
import soundfile as sf
import torch
from fastapi import FastAPI, HTTPException
from fastapi.responses import HTMLResponse, StreamingResponse
from loguru import logger
from pydantic import BaseModel

from fish_speech.inference_engine import TTSInferenceEngine
from fish_speech.inference_engine.utils import normalize_text
from fish_speech.models.text2semantic.inference import launch_thread_safe_queue
from fish_speech.models.vqgan.inference import load_model as load_decoder_model
from fish_speech.utils.schema import ServeTTSRequest
from tools.webui.inference import get_inference_wrapper
device = "cuda"
# Check if MPS or CUDA is available
if torch.backends.mps.is_available():
device = "mps"
logger.info("mps is available, running on mps.")
elif not torch.cuda.is_available():
logger.info("CUDA is not available, running on CPU.")
device = "cpu"
logger.info("Loading Llama model...")
llama_queue = launch_thread_safe_queue(
checkpoint_path="checkpoints/fish-speech-1.5",
device=device,
precision=torch.bfloat16,
compile=True,
)
logger.info("Loading VQ-GAN model...")
decoder_model = load_decoder_model(
config_name="firefly_gan_vq",
checkpoint_path="checkpoints/fish-speech-1.5/firefly-gan-vq-fsq-8x1024-21hz-generator.pth",
device=device,
)
logger.info("Decoder model loaded, warming up...")
# Create the inference engine
inference_engine = TTSInferenceEngine(
llama_queue=llama_queue,
decoder_model=decoder_model,
compile=True,
precision=torch.bfloat16,
)
# Dry run to check that the model is loaded correctly and to avoid first-time latency
list(
    inference_engine.inference(
        ServeTTSRequest(
            text="Hello world.",
            references=[],
            reference_id=None,
            max_new_tokens=1024,
            chunk_length=200,
            top_p=0.7,
            repetition_penalty=1.5,
            temperature=0.7,
            format="wav",
        )
    )
)
app = FastAPI()
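Note that the script stops at creating the FastAPI app and defines no routes; the hang is reproduced by the module-level warm-up call above, before uvicorn ever serves a request. Purely for illustration, a hypothetical endpoint draining the engine the same way might look like the sketch below; the /tts path, the plain query parameter, and the response shape are assumptions, not part of the repro:

# Hypothetical endpoint, for illustration only; not needed to reproduce the hang.
# It drains the engine's generator exactly like the warm-up call above.
@app.post("/tts")
def tts(text: str):
    results = list(
        inference_engine.inference(
            ServeTTSRequest(
                text=text,
                references=[],
                reference_id=None,
                max_new_tokens=1024,
                chunk_length=200,
                top_p=0.7,
                repetition_penalty=1.5,
                temperature=0.7,
                format="wav",
            )
        )
    )
    # Result handling is elided; in this report the process never gets this far.
    return {"segments": len(results)}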
✔️ Expected Behavior

Inference completes successfully.

❌ Actual Behavior

Inference blocks forever.