- MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation. Sanjay Kariyappa, Atul Prakash, Moinuddin K. Qureshi. CVPR 2021, eprint
- Data-Free Model Extraction. Jean-Baptiste Truong, Pratyush Maini, Robert J. Walls, Nicolas Papernot. CVPR 2021, eprint
- Model Inversion Attack by Integration of Deep Generative Models: Privacy-Sensitive Face Generation From a Face Recognition System. Mahdi Khosravy, Kazuaki Nakamura, Yuki Hirose, Naoko Nitta, Noboru Babaguchi. TIFS 2022, eprint
- Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks. Lukas Struppek, Dominik Hintersdorf, Antonio De Almeida Correia, Antonia Adler, Kristian Kersting. ICML 2022, eprint
- See Through Gradients: Image Batch Recovery via GradInversion. Hongxu Yin, Arun Mallya, Arash Vahdat, Jose M. Alvarez, Jan Kautz, Pavlo Molchanov. CVPR 2021, eprint
- ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning. Jingtao Li, Adnan Siraj Rakin, Xing Chen, Zhezhi He, Deliang Fan, Chaitali Chakrabarti. CVPR 2022, eprint
- Soteria: Provable Defense Against Privacy Leakage in Federated Learning From Representation Perspective. Jingwei Sun, Ang Li, Binghui Wang, Huanrui Yang, Hai Li, Yiran Chen. CVPR 2021, eprint
- Label-Only Membership Inference Attacks. Christopher A. Choquette-Choo, Florian Tramèr, Nicholas Carlini, Nicolas Papernot. ICML 2021, eprint
- Bilateral Dependency Optimization: Defending Against Model-Inversion Attacks. Xiong Peng, Feng Liu, Jingfeng Zhang, Long Lan, Junjie Ye, Tongliang Liu, Bo Han. KDD 2022, eprint
- Feature Inference Attack on Model Predictions in Vertical Federated Learning. Xinjian Luo, Yuncheng Wu, Xiaokui Xiao, Beng Chin Ooi. ICDE 2021, eprint