Comparing the effect of supervised and unsupervised methods on adversarial examples.

Experiments on Adversarial Robustness and Self-Supervised Learning

  • The loss function is supposed to be a proxy for the accuracy measure used to analyse a model, but this proxy is not always perfect.

  • I explore how the definition of the loss function and the specific learning method used (self-supervised or supervised) affect the robustness of a model to adversarial examples (see the attack sketch after this list).

  • A synthetic dataset such as 3DIdent could help us isolate the factors that contribute to adversarial susceptibility.

  • Refer to environment_setup_newpytorch.yml for the required libraries (e.g., conda env create -f environment_setup_newpytorch.yml).
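
To make the robustness measurement concrete, here is a minimal sketch of a one-step l_infinity (FGSM) attack and the robust-accuracy metric it induces. It assumes a PyTorch classifier and inputs scaled to [0, 1]; epsilon = 8/255 is just a common CIFAR-10 choice, not necessarily the setting used in the thesis, and this is not the thesis code itself.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step l_infinity attack: move x by epsilon in the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()   # stay in the valid pixel range

def robust_accuracy(model, loader, epsilon=8 / 255):
    """Fraction of adversarially perturbed inputs the model still classifies correctly."""
    correct = total = 0
    for x, y in loader:
        x_adv = fgsm_attack(model, x, y, epsilon)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```

Calling robust_accuracy(model, test_loader) then gives the adversarial counterpart of clean test accuracy.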

Training models

The code to train each SSL model is in the corresponding model_name_module.py file.

  • The supervised model: ./huy_Supervised_models_training_CIFAR10/train.py

  • SimCLR and SimSiam: ./bolt_self_supervised_training/[simclr or simsiam]

  • Barlow Twins: ./home/kfarivar/adversarial-components/barlow_twins_yao_training
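
Since the comparison hinges on the loss definition, a minimal sketch of the NT-Xent contrastive loss optimized by SimCLR-style training may help contrast it with supervised cross-entropy. This is a generic textbook formulation, not the code under bolt_self_supervised_training:

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss for two batches of projections z1, z2 of shape (N, D).

    z1[i] and z2[i] come from two augmentations of the same image (the positive
    pair); every other sample in the combined batch serves as a negative.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / temperature                        # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # a sample is not its own pair
    targets = (torch.arange(2 * n, device=z.device) + n) % (2 * n)
    return F.cross_entropy(sim, targets)                 # pick the positive among the rest
```

Note that no labels appear anywhere: the two projection-head outputs z1 and z2 alone define the objective, which is exactly the difference between the supervised and self-supervised pipelines compared here.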

  • I also created simple toy datasets and showed that it is possible to train a 100% robust model using the standard supervised training pipeline (a minimal sketch follows this list).

  • For more information, see the report.
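
As a hedged illustration of the toy-dataset claim above: if the classes are separated by an l_infinity margin larger than 2 * epsilon, a standard supervised pipeline can reach 100% robust accuracy. The 1-D dataset, epsilon, and training setup below are illustrative choices, not the thesis code:

```python
import torch
import torch.nn.functional as F

# Hypothetical 1-D toy dataset: class 0 lies in [0.0, 0.2], class 1 in [0.8, 1.0].
# The 0.6-wide gap between the classes exceeds 2 * epsilon, so no l_infinity
# perturbation of size epsilon can push a point across a mid-gap boundary.
def make_toy_data(n=256):
    x0 = torch.rand(n, 1) * 0.2          # class 0 samples
    x1 = torch.rand(n, 1) * 0.2 + 0.8    # class 1 samples
    x = torch.cat([x0, x1])
    y = torch.cat([torch.zeros(n), torch.ones(n)]).long()
    return x, y

x, y = make_toy_data()
model = torch.nn.Linear(1, 2)            # plain supervised pipeline, no defenses
opt = torch.optim.SGD(model.parameters(), lr=1.0)
for _ in range(300):
    opt.zero_grad()
    F.cross_entropy(model(x), y).backward()
    opt.step()

# Worst case in 1-D: move every point by epsilon toward the opposite class.
eps = 0.1
direction = (1 - 2 * y).float().unsqueeze(1)   # +1 for class 0, -1 for class 1
x_adv = x + eps * direction
robust_acc = (model(x_adv).argmax(dim=1) == y).float().mean().item()
print(f"robust accuracy at eps={eps}: {robust_acc:.2f}")   # typically prints 1.00
```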

Ideas to explore further

  • In connection with the footnote on page 40: can we attack a human brain using a black-box attack? Would we be able to fool a human with an l_infinity-norm perturbation and a small epsilon?
