Meta Reality Labs Research
Roman Fedotov, Brian Budge, Ladislav Kavan
- Install the required Python libraries:

  ```sh
  pip3 install libigl torch numpy scipy
  ```
- Download the models located in the `in` folder of the repository https://github.com/facebookresearch/compskin into the subfolder `models`.
- To run the script, edit the `runModel()` function:
  - Change the list of models in the main loop:

    ```python
    for model in ["jupiter", "aura"]:  # here we optimize the jupiter and aura models
    ```

  - Change the optimization parameters:

    ```python
    p = Params(
        model=model,
        numIterations=20_000,
        numColumnB=200,
        numNz=posterParams[model],
        power=2,
        alpha=modelAlpas[model],
        lr=1e-2,
        seed=1,
        initB=1e-3,
        initC=1e-3,
        numBits=16,  # quantize with 16 bits
    )
    ```

  - Run the script:

    ```sh
    $ python3 sparseFactorization.py
    ```
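The `numBits` parameter controls how the optimized factor matrices are quantized. The exact scheme is implemented in `sparseFactorization.py`; purely as an illustration of what n-bit quantization does to the data, here is a minimal min-max uniform quantizer (an assumption for exposition, not the repository's code):

```python
import numpy as np

def quantize(x: np.ndarray, numBits: int):
    # Map x linearly onto the integer range [0, 2**numBits - 1].
    # Illustrative sketch only; not the scheme used in sparseFactorization.py.
    lo, hi = float(x.min()), float(x.max())
    levels = 2**numBits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((x - lo) / scale).astype(np.uint32)
    return q, lo, scale

def dequantize(q: np.ndarray, lo: float, scale: float) -> np.ndarray:
    # Invert the mapping; the round-trip error is at most scale / 2.
    return q.astype(np.float64) * scale + lo

x = np.linspace(-1.0, 1.0, 1000)
q, lo, scale = quantize(x, 16)
maxErr = np.abs(dequantize(q, lo, scale) - x).max()
```

With 16 bits the step size `scale` is tiny relative to the data range, which is why the `_Q16` outputs are visually close to the floating-point ones.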
- The script will create the `out` folder and use it as the output folder. It creates subfolders and file names automatically based on the optimization parameters. For example, one optimization of a model will generate quantized and floating-point output data:
  - `data` - compressed data file in numpy `npz` format; quantized files carry the suffix `_Q<numBits>`
  - `objBS` - all blendshapes in obj format
  - `objAn` - testing animation in obj format

  The name of the folder encodes several optimization parameters:
  - `Nnz110000` - 110'000 non-zero values
  - `CM` - wrinkle map was on
  - `Nb200` - 200 columns in matrix B (rows in C)
  - `a25` - alpha 25.0

  Example:

  ```
  📁aura
  └── 📁Nnz35000_CM_Nb200_a9
      ├── 📁objAn
      ├── 📁objAn_Q111110
      ├── 📁objAn_Q16
      ├── 📁objAn_Q8
      ├── 📁objBS
      ├── 📁objBS_Q111110
      ├── 📁objBS_Q16
      ├── 📁objBS_Q8
      ├── data.npz
      ├── data_Q111110.npz
      ├── data_Q16.npz
      ├── data_Q8.npz
      └── numNz.npz
  ```
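The naming convention can be mirrored with a small helper. This function is hypothetical (the script composes the folder name internally); it only reproduces the encoding described above:

```python
def folderName(numNz: int, wrinkleMap: bool, numColumnB: int, alpha: float) -> str:
    # Hypothetical helper mirroring the output folder naming convention;
    # the actual name is composed inside sparseFactorization.py.
    parts = [f"Nnz{numNz}"]          # number of non-zero values
    if wrinkleMap:
        parts.append("CM")           # wrinkle map enabled
    parts.append(f"Nb{numColumnB}")  # columns in B (rows in C)
    parts.append(f"a{alpha:g}")      # alpha, printed without trailing zeros
    return "_".join(parts)

folderName(35000, True, 200, 9)  # -> "Nnz35000_CM_Nb200_a9"
```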
- To compare with blendshapes without compression, uncomment these lines in the `main()` function:

  ```python
  # for model in ("aura", "bowen", "jupiter", "proteus"):
  #     outputBlendshapesObj(model, calcGeo(model))
  ```

  This will generate a folder with the original, uncompressed blendshapes:

  ```
  📁bowen
  └── objBS_lossless
  ```
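The generated `data*.npz` archives can be inspected with NumPy. A minimal sketch follows; note that the array names `B` and `C` are placeholders for this illustration, and the keys actually written by the script may differ:

```python
import io
import numpy as np

# List the arrays stored in an .npz archive. An in-memory archive stands
# in for a real file here; in practice replace `buf` with a path such as
# "out/aura/Nnz35000_CM_Nb200_a9/data.npz". The keys "B" and "C" are
# placeholders; the actual keys written by the script may differ.
buf = io.BytesIO()
np.savez(buf, B=np.zeros((10, 3)), C=np.ones((3, 5)))
buf.seek(0)
with np.load(buf) as data:
    shapes = {name: data[name].shape for name in data.files}
print(shapes)
```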
This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).