Yufei Zhang, Jeffrey O. Kephart, Zijun Cui, Qiang Ji
CVPR 2024, arXiv
This repository provides the PhysPT demo code for estimating human dynamics from a monocular video.
conda create -n physpt python=3.7
conda activate physpt
pip install -r requirements.txt
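As a quick sanity check of the environment, the following minimal sketch assumes PyTorch is among the pinned requirements (typical for SMPL-based pipelines such as this one):

```python
# Quick environment sanity check (assumes PyTorch is installed via requirements.txt).
import torch

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True if a CUDA-capable GPU is visible
```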
Please download the required data and trained model assets and overwrite the ./assets folder in the current directory with them. Please also download the CLIFF checkpoint and place it under ./models/cliff_hr48; it is used to generate the kinematics-based motion estimates.
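For reference, a rough sketch of the expected layout after downloading (exact file names depend on the released archives and are not assumed here):

```
.
├── assets/                  # downloaded data and trained PhysPT model (overwrites the shipped folder)
└── models/
    └── cliff_hr48/          # CLIFF checkpoint goes here
        └── <checkpoint file>
```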
python video_preprocessing.py --vid_path './demo/jumpingjacks'
python video_inference.py --vid_processed_path './demo/jumpingjacks_CLIFF.json'
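To inspect the kinematics-based estimates produced by the preprocessing step, here is a minimal sketch; the JSON structure is not documented here, so the script only lists whatever top-level keys it finds:

```python
# Minimal sketch: peek at the preprocessed CLIFF estimates.
# The JSON structure is not documented here, so we only list top-level keys.
import json

with open('./demo/jumpingjacks_CLIFF.json') as f:
    data = json.load(f)

if isinstance(data, dict):
    for key, value in data.items():
        print(key, type(value).__name__)
else:
    print(type(data).__name__, len(data))
```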
python vis_motion_force_withvideo.py --vid_output_path './demo/jumpingjacks_output.npz'
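Similarly, the inference output consumed by the visualization script is a NumPy archive. A minimal sketch for listing its contents (the array names are whatever video_inference.py saved; none are assumed here):

```python
# Minimal sketch: list the arrays stored in the inference output.
import numpy as np

out = np.load('./demo/jumpingjacks_output.npz', allow_pickle=True)
for name in out.files:
    arr = out[name]
    print(name, getattr(arr, 'shape', None), getattr(arr, 'dtype', None))
```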
If you find our work useful, please consider citing the paper:
@InProceedings{Zhang_2024_CVPR,
author = {Zhang, Yufei and Kephart, Jeffrey O. and Cui, Zijun and Ji, Qiang},
title = {PhysPT: Physics-aware Pretrained Transformer for Estimating Human Dynamics from Monocular Videos},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2024},
pages = {2305-2317}
}
If you have questions or encounter any issues when running the code, feel free to open an issue or contact me directly at [email protected].
The SMPL model data is obtained from the SMPL-X project. Our adaptation of the CLIFF model is based on the official CLIFF implementation. We thank the authors for generously sharing their outstanding work.