Data augmentation for NLP
-
Updated Jun 24, 2024 - Jupyter Notebook
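The topic heading names data augmentation for NLP. As a minimal, library-agnostic illustration of one common augmentation technique, here is a random word-swap helper (a hypothetical sketch, not the API of any repository listed below):

```python
import random

def random_swap(words, n, seed=None):
    """Toy NLP augmentation sketch (hypothetical helper, not a library API):
    swap n random pairs of words to produce a label-preserving variant of the
    input sentence."""
    rng = random.Random(seed)
    words = list(words)
    for _ in range(n):
        # pick two positions (possibly equal) and exchange their words
        i, j = rng.randrange(len(words)), rng.randrange(len(words))
        words[i], words[j] = words[j], words[i]
    return words
```

Because swapping only permutes tokens, the augmented sentence keeps exactly the original vocabulary, which is why this style of augmentation is usually assumed to preserve the label.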
Advbox is a toolbox for generating adversarial examples that fool neural networks built with PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, or TensorFlow, and it can benchmark the robustness of machine learning models. Advbox also provides a command-line tool to generate adversarial examples with zero coding.
A Toolbox for Adversarial Robustness Research
An Open-Source Package for Textual Adversarial Attack.
A Harder ImageNet Test Set (CVPR 2021)
PyTorch implementation of convolutional neural network adversarial attack techniques
Simple PyTorch implementation of FGSM and I-FGSM
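FGSM perturbs an input one step along the sign of the loss gradient, and I-FGSM repeats small such steps while projecting back into an ε-ball around the original input. A minimal NumPy sketch on a toy logistic-regression model (the model, function names, and closed-form gradient are illustrative assumptions, not the repository's code):

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """One-step FGSM against a toy logistic-regression model (hypothetical
    setup). Loss is binary cross-entropy; for sigmoid + BCE the input
    gradient has the closed form (p - y) * w."""
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid prediction
    grad_x = (p - y) * w              # d(loss)/dx
    return x + eps * np.sign(grad_x)  # step along the gradient sign

def i_fgsm(x, y, w, b, eps, alpha, steps):
    """Iterative FGSM: repeated small FGSM steps of size alpha, clipped back
    into the eps-ball around the original x after every step."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = fgsm(x_adv, y, w, b, alpha)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the ball
    return x_adv
```

Because I-FGSM takes several small projected steps instead of one large one, it typically finds stronger adversarial examples inside the same ε-ball.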
A non-targeted adversarial attack method that won first place in the NIPS 2017 non-targeted adversarial attacks competition
Tensorflow Implementation of Adversarial Attack to Capsule Networks
Official TensorFlow implementation of "Adversarial Training for Free!", which trains robust models at no extra cost compared to natural training
PyTorch library for adversarial attack and training
Generative Adversarial Perturbations (CVPR 2018)
Code for the CVPR 2019 article "Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses"
A targeted adversarial attack method that won the NIPS 2017 targeted adversarial attacks competition
A paper list of adversarial attacks on object detection
List of state of the art papers, code, and other resources
Spatially Transformed Adversarial Examples with TensorFlow
My entry for ICLR 2018 Reproducibility Challenge for paper Synthesizing robust adversarial examples https://openreview.net/pdf?id=BJDH5M-AW
Deflecting Adversarial Attacks with Pixel Deflection
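Pixel deflection, as that paper's title suggests, defends a classifier by replacing randomly chosen pixels with random nearby pixels, which tends to corrupt an adversarial perturbation while leaving the image recognizable. A rough NumPy sketch of that idea for a grayscale image (the function name and parameters are illustrative, not the authors' code):

```python
import numpy as np

def pixel_deflection(img, n_deflections, window, seed=None):
    """Sketch of pixel deflection (assumed from the paper's description):
    for each of n_deflections rounds, pick a random pixel and overwrite it
    with a randomly chosen pixel from a surrounding local window.
    `img` is an H x W grayscale array."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    h, w = img.shape
    for _ in range(n_deflections):
        r = int(rng.integers(0, h))
        c = int(rng.integers(0, w))
        # random offset within the window, clipped to stay inside the image
        nr = int(np.clip(r + rng.integers(-window, window + 1), 0, h - 1))
        nc = int(np.clip(c + rng.integers(-window, window + 1), 0, w - 1))
        out[r, c] = img[nr, nc]
    return out
```

Every output pixel is copied from somewhere in the original image, so local statistics are roughly preserved even though the adversarial noise pattern is disrupted.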
Project page for our paper: Interpreting Adversarially Trained Convolutional Neural Networks