The folder AFHQ/prepare_data contains the code to prepare the data for the AFHQ-Cats experiments.
We use the pre-trained CLIP model to annotate the AFHQ-Cats data with prompts that describe the controlling attributes, so first install CLIP as a Python package:
pip install git+https://github.com/openai/CLIP.git
Generate 10k images with StyleGAN2-ADA, along with the corresponding latent variables (both z and w):
bash scripts/run_gen_batch.sh
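The generation step pairs each sampled z with its mapped w latent and the rendered image. A minimal sketch of that bookkeeping is below; the mapping network here is a random stand-in (the real script uses the pre-trained StyleGAN2-ADA generator's `G.mapping`), and the output file name `latents.npz` is hypothetical:

```python
import numpy as np

Z_DIM = 512          # StyleGAN2 latent dimensionality
NUM_SAMPLES = 10_000

rng = np.random.default_rng(0)

# Sample z from a standard normal prior, as StyleGAN2 does.
z = rng.standard_normal((NUM_SAMPLES, Z_DIM)).astype(np.float32)

# Stand-in for the StyleGAN2-ADA mapping network z -> w.
# In the real pipeline this would be the pre-trained generator's mapping network.
def fake_mapping(z):
    proj = rng.standard_normal((Z_DIM, Z_DIM)).astype(np.float32)
    return z @ proj

w = fake_mapping(z)

# Save both latent spaces so they can later be paired with CLIP labels.
np.savez("latents.npz", z=z, w=w)
```

Keeping z and w side by side matters because the latent classifiers are trained directly on these saved latents, not on the images.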
Use the pre-trained CLIP model to annotate the generated images:
bash scripts/run_clip_labeling.sh
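The core of the CLIP labeling is zero-shot classification: each generated image is assigned the attribute prompt whose CLIP text embedding is most similar to the image embedding. A minimal sketch of that scoring step, with hand-built toy features standing in for real CLIP embeddings:

```python
import numpy as np

def zero_shot_label(image_feats, text_feats):
    """Assign each image the prompt with the highest cosine similarity.

    image_feats: (N, D) image embeddings (e.g. from CLIP's image encoder)
    text_feats:  (K, D) prompt embeddings, one per attribute value
    Returns an (N,) array of label indices in [0, K).
    """
    img = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    txt = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    sims = img @ txt.T          # (N, K) cosine similarities
    return sims.argmax(axis=1)

# Toy check: image 0 aligns with prompt 1, image 1 with prompt 0.
image_feats = np.array([[0.0, 1.0], [1.0, 0.0]])
text_feats = np.array([[1.0, 0.0], [0.0, 1.0]])
print(zero_shot_label(image_feats, text_feats))  # → [1 0]
```

In the real script the embeddings come from CLIP's image and text encoders, with one prompt per value of the controlling attribute.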
The resulting pairs of latent variables and labels will be used to train latent classifiers.
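To illustrate what "training a latent classifier" amounts to, here is a minimal sketch: a logistic regression fit on (latent, label) pairs. The data is synthetic and linearly separable by construction; in the actual pipeline the inputs would be the saved w (or z) latents and the CLIP-derived labels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for (w latent, CLIP label) pairs.
N, D = 1000, 16
w = rng.standard_normal((N, D)).astype(np.float32)
true_dir = rng.standard_normal(D).astype(np.float32)
labels = (w @ true_dir > 0).astype(np.float32)   # separable toy labels

# Logistic regression trained by plain gradient descent.
weights = np.zeros(D, dtype=np.float32)
bias = 0.0
lr = 0.1
for _ in range(500):
    logits = w @ weights + bias
    probs = 1.0 / (1.0 + np.exp(-logits))
    grad = probs - labels                  # dL/dlogits for cross-entropy
    weights -= lr * (w.T @ grad) / N
    bias -= lr * grad.mean()

preds = (w @ weights + bias > 0).astype(np.float32)
accuracy = (preds == labels).mean()
print(f"train accuracy: {accuracy:.3f}")
```

A linear classifier in latent space is a natural choice here, since its decision boundary defines a direction along which the attribute can later be manipulated.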