LatentSync: Audio Conditioned Latent Diffusion Models for Lip Sync

📖 Introduction

We present LatentSync, an end-to-end lip-sync framework based on audio-conditioned latent diffusion models, without any intermediate motion representation, diverging from previous diffusion-based lip-sync methods that rely on pixel-space diffusion or two-stage generation. Our framework leverages the powerful capabilities of Stable Diffusion to directly model complex audio-visual correlations.

🎬 Demo

Original video       Lip-synced video
demo1_video.mp4      demo1_output.mp4
demo2_video.mp4      demo2_output.mp4
demo3_video.mp4      demo3_output.mp4
demo4_video.mp4      demo4_output.mp4
demo5_video.mp4      demo5_output.mp4

(Photorealistic videos were filmed with contracted models, and anime videos are from VASA-1 and EMO.)

📑 Open-source Plan

  • Inference code and checkpoints
  • Data processing pipeline
  • Training code

🔧 Setting up the Environment

Install the required packages and download the checkpoints via:

source setup_env.sh

If the download is successful, the checkpoints should appear as follows:

./checkpoints/
|-- latentsync_unet.pt
|-- latentsync_syncnet.pt
|-- whisper
|   `-- tiny.pt
|-- auxiliary
|   |-- 2DFAN4-cd938726ad.zip
|   |-- i3d_torchscript.pt
|   |-- koniq_pretrained.pkl
|   |-- s3fd-619a316812.pth
|   |-- sfd_face.pth
|   |-- syncnet_v2.model
|   |-- vgg16-397923af.pth
|   `-- vit_g_hybrid_pt_1200e_ssv2_ft.pth
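
To confirm the download, you can run a quick sanity check from the repository root. This is a minimal sketch that only verifies the files listed above exist:

import os

# Expected checkpoint files, relative to ./checkpoints/ (taken from the layout above).
expected = [
    "latentsync_unet.pt",
    "latentsync_syncnet.pt",
    "whisper/tiny.pt",
    "auxiliary/2DFAN4-cd938726ad.zip",
    "auxiliary/i3d_torchscript.pt",
    "auxiliary/koniq_pretrained.pkl",
    "auxiliary/s3fd-619a316812.pth",
    "auxiliary/sfd_face.pth",
    "auxiliary/syncnet_v2.model",
    "auxiliary/vgg16-397923af.pth",
    "auxiliary/vit_g_hybrid_pt_1200e_ssv2_ft.pth",
]
missing = [p for p in expected if not os.path.isfile(os.path.join("checkpoints", p))]
print("All checkpoints present." if not missing else f"Missing: {missing}")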

These include all the checkpoints required for LatentSync training and inference. If you only want to try inference, you just need to download latentsync_unet.pt and whisper/tiny.pt from our HuggingFace repo.
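
For the inference-only route, those two files can also be fetched with the huggingface_hub library. A minimal sketch follows; the repo id below is a placeholder for the HuggingFace repo mentioned above:

from huggingface_hub import hf_hub_download

# Placeholder: substitute the actual HuggingFace repo id referred to above.
repo_id = "<org>/<repo>"
hf_hub_download(repo_id, "latentsync_unet.pt", local_dir="checkpoints")
hf_hub_download(repo_id, "whisper/tiny.pt", local_dir="checkpoints")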

🚀 Inference

Run the inference script, which requires about 6.5 GB of GPU memory.

./inference.sh
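
If you are unsure whether your GPU clears the roughly 6.5 GB bar mentioned above, a small PyTorch check (a sketch, assuming a single CUDA device) is:

import torch

# Reports the total memory of the first CUDA device against the ~6.5 GB figure stated above.
props = torch.cuda.get_device_properties(0)
total_gb = props.total_memory / 1024**3
print(f"{props.name}: {total_gb:.1f} GB", "(should be enough)" if total_gb >= 6.5 else "(may be too little)")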

🔄 Data Processing Pipeline

The complete data processing pipeline includes the following steps:

  1. Remove the broken video files.
  2. Resample the video FPS to 25, and resample the audio to 16000 Hz.
  3. Detect scene cuts via PySceneDetect (a sketch of steps 2 and 3 follows this list).
  4. Split each video into 5-10 second segments.
  5. Remove videos where the face is smaller than 256 × 256, as well as videos with more than one face.
  6. Affine transform the faces according to the landmarks detected by face-alignment, then resize to 256 × 256.
  7. Remove videos with sync confidence score lower than 3, and adjust the audio-visual offset to 0.
  8. Calculate hyperIQA score, and remove videos with scores lower than 40.
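
As a concrete illustration of steps 2 and 3, here is a minimal sketch using ffmpeg and PySceneDetect. The file names and detector settings are illustrative; the pipeline script below remains the authoritative implementation:

import subprocess
from scenedetect import detect, ContentDetector

# Step 2 (sketch): resample the video to 25 fps and the audio to 16000 Hz with ffmpeg.
subprocess.run(
    ["ffmpeg", "-i", "input.mp4", "-r", "25", "-ar", "16000", "resampled.mp4"],
    check=True,
)

# Step 3 (sketch): detect scene cuts with PySceneDetect's content detector.
for start, end in detect("resampled.mp4", ContentDetector()):
    print(start.get_timecode(), "->", end.get_timecode())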

Run the script to execute the data processing pipeline:

./data_processing_pipeline.sh

You can change the input_dir parameter in the script to specify the data directory to be processed. The processed data will be saved in the same directory. Each step writes its results to a new directory, so the entire pipeline does not have to be redone if the process is interrupted by an unexpected error.

πŸ‹οΈβ€β™‚οΈ Training U-Net

Before training, you must process the data as described above and download all the checkpoints. We released a pretrained SyncNet with 94% accuracy on the VoxCeleb2 dataset to supervise U-Net training. Note that this SyncNet is trained on affine-transformed videos, so when using or evaluating it, you need to affine-transform the videos first (the affine transformation code is included in the data processing pipeline).
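
For intuition, SyncNet-style supervision is typically a cosine-similarity loss between audio and visual embeddings. The sketch below shows that generic formulation; it is not necessarily the exact loss used in this repository:

import torch
import torch.nn.functional as F

def sync_loss(visual_emb: torch.Tensor, audio_emb: torch.Tensor) -> torch.Tensor:
    # Generic SyncNet-style supervision: push the cosine similarity between
    # visual and audio embeddings of in-sync pairs toward 1.
    sim = F.cosine_similarity(visual_emb, audio_emb, dim=-1)  # values in [-1, 1]
    prob = (sim + 1.0) / 2.0                                  # map to [0, 1]
    return F.binary_cross_entropy(prob, torch.ones_like(prob))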

If all the preparations are complete, you can train the U-Net with the following script:

./train_unet.sh

You should change the parameters in the U-Net config file to specify the data directory, checkpoint save path, and other training hyperparameters.

πŸ‹οΈβ€β™‚οΈ Training SyncNet

If you want to train SyncNet on your own datasets, run the following script:

./train_syncnet.sh
