
VOGUE: Try-On by StyleGAN Interpolation Optimization

Kathleen M. Lewis^1,2    Srivatsan Varadharajan^1    Ira Kemelmacher-Shlizerman^1,3

^1 Google Research    ^2 MIT CSAIL    ^3 University of Washington

Figure 1: VOGUE is a StyleGAN interpolation optimization algorithm for photo-realistic try-on. Top: shirt try-on automatically synthesized by our method in two different examples. Bottom: pants try-on synthesized by our method. Note how our method preserves the identity of the person while allowing high-detail garment try-on.

Abstract

Given an image of a target person and an image of another person wearing a garment, we automatically generate the target person in the given garment. At the core of our method is a pose-conditioned StyleGAN2 latent space interpolation, which seamlessly combines the areas of interest from each image, i.e., body shape, hair, and skin color are derived from the target person, while the garment with its folds, material properties, and shape comes from the garment image. By automatically optimizing for interpolation coefficients per layer in the latent space, we can perform a seamless, yet true to source, merging of the garment and target person. Our algorithm allows for garments to deform according to the given body shape, while preserving pattern and material details. Experiments demonstrate state-of-the-art photo-realistic results at high resolution (512 × 512).

VOGUE Method

We train a pose-conditioned StyleGAN2 network that outputs RGB images and segmentations.

After training our modified StyleGAN2 network, we run an optimization method to learn interpolation coefficients for each style block. These interpolation coefficients are used to combine the style codes of two different images and semantically transfer a region of interest from one image to the other. This method can be applied to generated StyleGAN2 images, or to real images by first projecting them into the latent space.
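To make the per-layer blending concrete, here is a minimal sketch of what combining two style codes with learned interpolation coefficients could look like. The tensor shapes and the function name are illustrative assumptions, not the repository's actual API:

```python
import torch

def interpolate_styles(w_person, w_garment, q):
    """Blend per-layer style codes from two images.

    w_person, w_garment: style codes of shape (num_layers, style_dim),
        e.g. the extended latent codes produced per style block.
    q: interpolation coefficients of shape (num_layers,) in [0, 1];
        q[l] = 1 keeps the person's style at layer l,
        q[l] = 0 takes the garment image's style instead.
    """
    q = q.clamp(0.0, 1.0).unsqueeze(-1)          # (num_layers, 1) for broadcasting
    return q * w_person + (1.0 - q) * w_garment  # per-layer convex combination
```

The blended code is then fed through the pose-conditioned generator to produce the try-on image and its segmentation.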

Figure 2: The try-on optimization setup illustrated here takes two latent codes z+_1 and z+_2 (representing two input images) and a pose heatmap as input to a pose-conditioned StyleGAN2 generator (gray). The generator produces the try-on image and its corresponding segmentation by interpolating between the latent codes using the interpolation coefficients q. By minimizing the loss function over the space of interpolation coefficients, we are able to transfer garment(s) g from a garment image I_g to the person image I_p.
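The sketch below shows, under stated assumptions, how such an optimization over the interpolation coefficients might be set up. It assumes a differentiable generator `G` that maps a blended latent code and a pose heatmap to an (image, segmentation) pair, and a `try_on_loss` that scores garment transfer and identity preservation; both names are placeholders, not the paper's or repository's actual interfaces:

```python
import torch

def optimize_interpolation(G, w_person, w_garment, pose, try_on_loss,
                           num_layers=16, steps=500, lr=0.05):
    # Unconstrained parameters, squashed to [0, 1] with a sigmoid so the
    # coefficients stay valid interpolation weights.
    q_logits = torch.zeros(num_layers, requires_grad=True)
    opt = torch.optim.Adam([q_logits], lr=lr)

    for _ in range(steps):
        q = torch.sigmoid(q_logits)
        w_mix = interpolate_styles(w_person, w_garment, q)   # per-layer blend (sketch above)
        image, segmentation = G(w_mix, pose)                 # pose-conditioned synthesis
        loss = try_on_loss(image, segmentation)              # garment + identity terms
        opt.zero_grad()
        loss.backward()
        opt.step()

    return torch.sigmoid(q_logits).detach()
```

Only the per-layer coefficients are optimized; the generator weights stay frozen, which is what lets the result remain on the learned image manifold.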

Generated Image Try-On

VOGUE can transfer garments between different poses and body shapes. It preserves garment details (shape, pattern, color, texture) and person identity (hair, skin color, pose).

Shirt Try-On

With VOGUE, the same person can try on shirts of different styles (above). The identity of the person is preserved. When transferring a shorter garment or a different neckline, VOGUE is able to synthesize skin that is realistic and consistent with identity (below).


Different people can also try on the same shirt (below). The characteristics of the shirt are preserved across different poses and people.

Pants Try-On

Projected Image Try-On

Virtual try-on between two real images is possible by first projecting the two images into the StyleGAN Z+ latent space. Improving projection is an active area of research.
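As a rough illustration of that projection step, the sketch below optimizes a per-layer latent code so the generator reconstructs a real target image. It is a simplified placeholder: real projectors typically add a perceptual (e.g. LPIPS) term and noise regularization, and the generator interface `G(code, pose) -> (image, segmentation)` is an assumption consistent with the method description above, not the repository's API:

```python
import torch
import torch.nn.functional as F

def project_to_latent(G, target_image, pose, num_layers=16, z_dim=512,
                      steps=1000, lr=0.01, perceptual_loss=None):
    """Optimize a per-layer latent code so G reproduces target_image."""
    z_plus = torch.randn(num_layers, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z_plus], lr=lr)

    for _ in range(steps):
        image, _ = G(z_plus, pose)                      # synthesize from current code
        loss = F.mse_loss(image, target_image)          # pixel reconstruction term
        if perceptual_loss is not None:
            loss = loss + perceptual_loss(image, target_image)
        opt.zero_grad()
        loss.backward()
        opt.step()

    return z_plus.detach()
```

Once both real images are projected, the try-on optimization proceeds exactly as with generated images.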

Shirt Try-On

Comparison with SOTA

- Wang, Bochao, et al. "Toward Characteristic-Preserving Image-Based Virtual Try-On Network." Proceedings of the European Conference on Computer Vision (ECCV), 2018.
- Men, Yifang, et al. "Controllable Person Image Synthesis with Attribute-Decomposed GAN." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.

Video

<iframe width="642" height="360" src="https://www.youtube.com/embed/r4NgwLeJWvY" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>

Acknowledgements

We thank Edo Collins, Hao Peng, Jiaming Liu, Daniel Bauman, and Blake Farmer for their support of this work.

Paper list



Feel free to ask any questions, or open a PR if you feel something could be done differently!

🌟Star this repository🌟

Created by Charmve & maiwei.ai Community | Deployed on Kaggle
