AdaAttN: Revisit Attention Mechanism in Arbitrary Neural Style Transfer

[Paper] [PyTorch Implementation] [Paddle Implementation]

Overview

This repository contains the officially unofficial PyTorch re-implementation of the paper:

AdaAttN: Revisit Attention Mechanism in Arbitrary Neural Style Transfer,

Songhua Liu, Tianwei Lin, Dongliang He, Fu Li, Meiling Wang, Xin Li, Zhengxing Sun, Qian Li, Errui Ding

ICCV 2021

Updates

  • [2022-12-07] Uploaded the user-control script. Please see user_specify_demo.py.
  • [2022-12-07] Uploaded the inference code for video style transfer. Please see inference_frame.py. Please download the checkpoints from here and extract the package into the main directory of this repo before running.
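
Both scripts are run directly with Python. The invocations below are a minimal sketch: their command-line arguments are not documented here, so check each file for the paths and options it actually expects.

    # Minimal sketch; see each script for its actual arguments and
    # configurable paths (assumes the checkpoints are already extracted).
    python user_specify_demo.py
    python inference_frame.py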

Prerequisites

  • Linux or macOS

  • Python 3

  • PyTorch 1.7+ and other dependencies (torchvision, visdom, dominate, and other common Python libraries)
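
A minimal environment setup might look like the following; the repo does not pin exact versions, so the constraint below is illustrative:

    # Illustrative install; adjust the torch version to your CUDA setup.
    pip install "torch>=1.7" torchvision visdom dominate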

Getting Started

    • Clone this repository:

      git clone https://github.com/Huage001/AdaAttN
      cd AdaAttN
    • Inference:

      • Create a checkpoints directory if it does not exist:

        mkdir checkpoints
      • Download the pretrained model from Google Drive, move it to the checkpoints directory, and unzip it:

        mv [Download Directory]/AdaAttN_model.zip checkpoints/
        # Extract into checkpoints/ so the test script can find the weights.
        unzip checkpoints/AdaAttN_model.zip -d checkpoints/
        rm checkpoints/AdaAttN_model.zip
      • First, configure content_path and style_path in test_adaattn.sh, pointing them to the folders of testing content images and testing style images respectively (a sketch of the resulting invocation follows this list).

      • Then, simply run:

        bash test_adaattn.sh
      • Check the results under the results/AdaAttN folder.
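
      For orientation, a configured test_adaattn.sh boils down to an invocation like the sketch below. The names content_path and style_path come from the step above; the flag form, the --name option, and the example paths are assumptions, so edit the script itself rather than copying this verbatim.

        # Hedged sketch of what test_adaattn.sh runs; paths are placeholders
        # and --name is assumed to select the results/AdaAttN output folder.
        python test.py \
            --content_path datasets/contents \
            --style_path datasets/styles \
            --name AdaAttN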

    • Train:

      • Download 'vgg_normalised.pth' from here.

      • Download the COCO dataset and the WikiArt dataset, then extract them.

      • Configure content_path, style_path, and image_encoder_path in train_adaattn.sh, pointing them to the folders of training content images, training style images, and 'vgg_normalised.pth' respectively (a sketch follows this list).

      • Before training, start the visdom server:

        python -m visdom.server
      • Then, simply run:

        bash train_adaattn.sh
      • You can monitor the training status at http://localhost:8097/; models are saved under the checkpoints/AdaAttN folder.

      • Feel free to try the other training options listed in train_adaattn.sh.
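
      As with inference, a configured train_adaattn.sh reduces to something like the sketch below. The option names come from the step above; the flag form, the --name option, and the dataset paths are assumptions, so treat this as orientation only.

        # Hedged sketch of what train_adaattn.sh runs; paths are placeholders
        # and --name is assumed to select the checkpoints/AdaAttN save folder.
        python train.py \
            --content_path /data/coco/train2014 \
            --style_path /data/wikiart/train \
            --image_encoder_path checkpoints/vgg_normalised.pth \
            --name AdaAttN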

Citation

If you find the ideas or code useful for your research, please cite:

      @inproceedings{liu2021adaattn,
        title={AdaAttN: Revisit Attention Mechanism in Arbitrary Neural Style Transfer},
        author={Liu, Songhua and Lin, Tianwei and He, Dongliang and Li, Fu and Wang, Meiling and Li, Xin and Sun, Zhengxing and Li, Qian and Ding, Errui},
        booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
        year={2021}
      }
      

Acknowledgments
