QMF-Blend: Quantized Matrix Factorization for Efficient Blendshape Compression

Meta Reality Labs Research

Roman Fedotov, Brian Budge, Ladislav Kavan

How to use the script

  1. Install the required Python libraries:

    pip3 install libigl torch numpy scipy
  2. Download the models located in the in folder of the repository https://github.com/facebookresearch/compskin into the models subfolder.

  3. To run the script, edit the runModel() function: change the list of models in the main loop:

    for model in ["jupiter", "aura"]: # here we optimize the jupiter and aura models

    change the optimization parameters:

        p = Params(
            model=model,
            numIterations=20_000,       # optimizer iterations
            numColumnB=200,             # columns in matrix B (rows in C)
            numNz=posterParams[model],  # target number of non-zero values
            power=2,
            alpha=modelAlpas[model],
            lr=1e-2,                    # learning rate
            seed=1,                     # random seed
            initB=1e-3,
            initC=1e-3,
            numBits=16,                 # quantize with 16 bits
        )

    and run the script:

    $ python3 sparseFactorization.py
    
  4. The script will create an out folder and use it as the output folder. It creates subfolders and file names automatically, based on the optimization parameters. For example, one optimization of a model generates quantized and floating-point output data (see the sketch after this list for a quick way to inspect the compressed files):

    data - compressed data file in numpy npz format; if quantized, it has the suffix _Q<numBits>
    objBS - all blendshapes in obj format
    objAn - testing animation in obj format
    the name of the folder encodes several optimization parameters:
        Nnz110000 - 110,000 non-zero values
        CM - the wrinkle map was enabled
        Nb200 - 200 columns in matrix B (rows in C)
        a25 - alpha 25.0
    
    Example:
    
    📁aura
    └── 📁Nnz35000_CM_Nb200_a9
        ├── 📁objAn
        ├── 📁objAn_Q111110
        ├── 📁objAn_Q16
        ├── 📁objAn_Q8
        ├── 📁objBS
        ├── 📁objBS_Q111110
        ├── 📁objBS_Q16
        ├── 📁objBS_Q8
        ├── data.npz
        ├── data_Q111110.npz
        ├── data_Q16.npz
        ├── data_Q8.npz
        └── numNz.npz
    
    
  5. To compare against the blendshapes without compression, uncomment these lines in the main() function:

    # for model in ("aura", "bowen", "jupiter", "proteus"):
    #     outputBlendshapesObj(model, calcGeo(model))

    This will generate a folder with the original blendshapes (see the comparison sketch after this list for measuring per-vertex error against a quantized run):

    📁bowen
    └── objBS_lossless
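
To take a first look at the compressed output, a minimal sketch along these lines lists the arrays stored in one of the generated npz files. The path and the key names inside the archive are assumptions here; they depend on your run parameters and on what sparseFactorization.py actually writes.

    import os
    import numpy as np

    # Hypothetical output path; adjust it to the folder your run produced.
    path = "out/aura/Nnz35000_CM_Nb200_a9/data_Q16.npz"

    with np.load(path) as archive:
        # Key names depend on what sparseFactorization.py stores; just list them.
        for name in archive.files:
            arr = archive[name]
            print(f"{name}: shape={arr.shape}, dtype={arr.dtype}")

    print(f"compressed size: {os.path.getsize(path) / 1024:.1f} KiB")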
    
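To put a number on the reconstruction error, one option is to compare the quantized blendshape obj files against the lossless ones vertex by vertex with libigl. This is only a sketch under the assumption that both folders contain identically named obj files with the same vertex order; the paths are hypothetical and should be adjusted to your output.

    import glob
    import os

    import igl
    import numpy as np

    # Hypothetical paths; adjust to the folders produced on your machine.
    lossless_dir = "out/bowen/objBS_lossless"
    quantized_dir = "out/bowen/Nnz35000_CM_Nb200_a9/objBS_Q16"

    for ref_path in sorted(glob.glob(os.path.join(lossless_dir, "*.obj"))):
        test_path = os.path.join(quantized_dir, os.path.basename(ref_path))
        v_ref, _ = igl.read_triangle_mesh(ref_path)
        v_test, _ = igl.read_triangle_mesh(test_path)
        # RMS per-vertex position error, assuming identical vertex order.
        rms = np.sqrt(np.mean(np.sum((v_ref - v_test) ** 2, axis=1)))
        print(f"{os.path.basename(ref_path)}: RMS error {rms:.6f}")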

License

This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
