When the 'compile' option is enabled, starting the Python script with uvicorn causes inference to block. #834

Open
steven8274 opened this issue Jan 17, 2025 · 0 comments
Labels: bug
Self Checks

  • This template is only for bug reports. For questions, please visit Discussions.
  • I have thoroughly reviewed the project documentation (installation, training, inference) but couldn't find information to solve my problem.
  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [FOR CHINESE USERS] Please be sure to submit the issue in English, otherwise it will be closed. Thank you! :)
  • Please do not modify this template and fill in all required fields.

Cloud or Self Hosted

Self Hosted (Source)

Environment Details

Ubuntu 22.04.4 LTS

Steps to Reproduce

Follow the steps at https://speech.fish.audio/zh/#linux to set up the Python environment.
Follow the steps in https://github.com/fishaudio/fish-speech/blob/main/inference.ipynb to download the model files.
Then start the following script with uvicorn:
uvicorn fish_speech_test:app

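For comparison, the same app can also be started without the uvicorn CLI by appending a __main__ guard to the script below; this is a minimal sketch (the host and port values are placeholders, not part of the original report):

if __name__ == "__main__":
    import uvicorn

    # Programmatic equivalent of `uvicorn fish_speech_test:app`; running
    # `python fish_speech_test.py` then goes through this launch path instead.
    uvicorn.run(app, host="127.0.0.1", port=8000)
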
fish_speech_test.py content:

import os
import torch
from loguru import logger

from fish_speech.inference_engine import TTSInferenceEngine
from fish_speech.models.text2semantic.inference import launch_thread_safe_queue
from fish_speech.models.vqgan.inference import load_model as load_decoder_model
from fish_speech.utils.schema import ServeTTSRequest
from tools.webui.inference import get_inference_wrapper

from fish_speech.inference_engine.utils import normalize_text

import soundfile as sf

os.environ["EINX_FILTER_TRACEBACK"] = "false"
#os.environ["CUDA_VISIBLE_DEVICES"] = "6"

os.environ["TORCH_LOGS"] = "inductor"
os.environ["TORCH_DUMP_GRAPH"] = "1"
os.environ["TORCH_CUDNN_SDPA_ENABLED"]="1"

from fastapi import FastAPI, HTTPException
from fastapi.responses import StreamingResponse, HTMLResponse
from pydantic import BaseModel
import librosa
import numpy as np
import io
import time
import re
import uuid
import logging
import asyncio

device = "cuda"
# Check if MPS or CUDA is available
if torch.backends.mps.is_available():
    device = "mps"
    logger.info("mps is available, running on mps.")
elif not torch.cuda.is_available():
    logger.info("CUDA is not available, running on CPU.")
    device = "cpu"

logger.info("Loading Llama model...")
llama_queue = launch_thread_safe_queue(
    checkpoint_path="checkpoints/fish-speech-1.5",
    device=device,
    precision=torch.bfloat16,
    compile=True,  # enabling the 'compile' option is what triggers the reported hang under uvicorn
)

logger.info("Loading VQ-GAN model...")
decoder_model = load_decoder_model(
    config_name="firefly_gan_vq",
    checkpoint_path="checkpoints/fish-speech-1.5/firefly-gan-vq-fsq-8x1024-21hz-generator.pth",
    device=device,
)

logger.info("Decoder model loaded, warming up...")

# Create the inference engine
inference_engine = TTSInferenceEngine(
    llama_queue=llama_queue,
    decoder_model=decoder_model,
    compile=True,
    precision=torch.bfloat16,
)

# Dry run to check if the model is loaded correctly and avoid the first-time latency
list(
    inference_engine.inference(
        ServeTTSRequest(
            text="Hello world.",
            references=[],
            reference_id=None,
            max_new_tokens=1024,
            chunk_length=200,
            top_p=0.7,
            repetition_penalty=1.5,
            temperature=0.7,
            format="wav",
        )
    )
)

app = FastAPI()
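
The script stops at app = FastAPI() without defining any routes, so the inference shown above only runs as a module-level warm-up when uvicorn imports the file. For completeness, a minimal route that exercises the engine per request could look like the sketch below; the /tts path, its query parameter, and the way the yielded results are handled are assumptions, not part of the original report.

# Hypothetical endpoint (not in the original script): runs one inference per
# request with the same parameters as the warm-up call above.
@app.get("/tts")
def tts(text: str = "Hello world."):
    results = list(
        inference_engine.inference(
            ServeTTSRequest(
                text=text,
                references=[],
                reference_id=None,
                max_new_tokens=1024,
                chunk_length=200,
                top_p=0.7,
                repetition_penalty=1.5,
                temperature=0.7,
                format="wav",
            )
        )
    )
    # The structure of the yielded items is not inspected here; this only
    # reports how many results came back.
    return {"results": len(results)}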

✔️ Expected Behavior

Inference completes successfully.

❌ Actual Behavior

Inference blocks forever.
