[Colab Demo 😀]
This repository contains the source code for our paper:
eMotion-GAN: A Motion-based GAN for Photorealistic and Facial Expression Preserving Frontal View Synthesis
Check out our Supplementary Video for more details & results.
Animation:
- ...
Create and activate the conda environment:
conda create -n eMotionGAN python=3.10
conda activate eMotionGAN
Install all dependencies:
pip install -r requirements.txt
Install JupyterLab to visualize the demo:
conda install -c conda-forge jupyterlab
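Then launch it from the repository root:
jupyter lab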
Download the pre-trained weights file from Google Drive and move it to the ./weights folder.
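As a quick sanity check that the download is intact, you can load the checkpoint in Python (a minimal sketch, assuming a standard PyTorch checkpoint; the filename is taken from the demo commands below):

```python
import torch

# Load the checkpoint on CPU just to verify it deserializes correctly.
state = torch.load("./weights/eMotionGAN_model.pth", map_location="cpu")
print(type(state))  # typically a dict of weights or a full model object
```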
Run our interactive visualization demo using the Colab notebook (no GPU needed).
By default, the training datasets are structured as follows:
datasets
├── CK+
│   └── sub_id
│       └── seq_id
│           ├── img_001.png
│           ├── img_002.png
│           ├── ...
│           └── img_010.png
├── ADFES
│   └── sub_id
│       └── seq_id
│           ├── img_001.png
│           ├── img_002.png
│           ├── ...
│           └── img_010.png
└── ...
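For reference, here is a short sketch of how this layout can be traversed (illustrative only; `sub_id` and `seq_id` stand for the subject and sequence folders shown above):

```python
from pathlib import Path

# Walk datasets/<dataset>/<sub_id>/<seq_id>/img_*.png and count frames.
root = Path("./datasets")
for dataset in sorted(p for p in root.iterdir() if p.is_dir()):
    for subject in sorted(p for p in dataset.iterdir() if p.is_dir()):
        for sequence in sorted(p for p in subject.iterdir() if p.is_dir()):
            frames = sorted(sequence.glob("img_*.png"))
            print(dataset.name, subject.name, sequence.name, f"{len(frames)} frames")
```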
Generate the optical flow data files:
python generate_data_files.py --dataset_name SNAP \
    --data_dir_F ./datasets/SNAPcam1/ \
    --data_dir_NF ./datasets/SNAPcam2/ \
    --save_dir_F ./datasets/optical_flows/ \
    --save_dir_NF ./datasets/optical_flows/ \
    --emotions_file_path ./datasets/emotions/SNAP_emotions.csv \
    --flow_algo Farneback
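For context, `--flow_algo Farneback` selects dense Farnebäck optical flow. A minimal OpenCV sketch of what is computed per frame pair (the parameter values below are common OpenCV defaults, not necessarily those used by the script):

```python
import cv2

# Dense Farneback optical flow between two consecutive grayscale frames.
prev = cv2.imread("img_001.png", cv2.IMREAD_GRAYSCALE)
nxt = cv2.imread("img_002.png", cv2.IMREAD_GRAYSCALE)
flow = cv2.calcOpticalFlowFarneback(
    prev, nxt, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
)
print(flow.shape)  # (H, W, 2): per-pixel (dx, dy) displacement
```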
Download DMUE and its pre-trained model.
Train eMotion-GAN:
python train.py --data_path ./datasets/data_file.txt \
    --proto_path ./datasets/protocols/SNAP_proto.csv \
    --fold 1
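If your protocol file defines several cross-validation folds, the runs can be scripted (a sketch; the fold count of 5 is an assumption, adjust it to your protocol file):

```python
import subprocess

# Launch one training run per cross-validation fold (fold count assumed).
for fold in range(1, 6):
    subprocess.run(
        ["python", "train.py",
         "--data_path", "./datasets/data_file.txt",
         "--proto_path", "./datasets/protocols/SNAP_proto.csv",
         "--fold", str(fold)],
        check=True,
    )
```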
Run the frontal view synthesis (FVS) demo:
python FVS_demo.py --img1 images/images_1.png \
    --img2 images/images_2.png \
    --model_path weights/eMotionGAN_model.pth
Run the motion transfer demo:
python motion_transfer_demo.py --img1 images/images_1.png \
    --img2 images/images_2.png \
    --neutral_img images/neutral_img.png \
    --model_path weights/eMotionGAN_model.pth
If you find this repo useful, please consider citing our paper:
@article{ikne2024emotion,
  title={eMotion-GAN: A Motion-based GAN for Photorealistic and Facial Expression Preserving Frontal View Synthesis},
  author={Ikne, Omar and Allaert, Benjamin and Bilasco, Ioan Marius and Wannous, Hazem},
  journal={arXiv preprint arXiv:2404.09940},
  year={2024}
}