# Membership Inference Attacks against Language Models via Neighbourhood Comparison

This repository contains the code for the paper *Membership Inference Attacks against Language Models via Neighbourhood Comparison*.

## Prerequisites

To run our code, you need a model to attack (at `path_to_attack_model`) as well as a dataset consisting of training members and non-members. In `attack.py`, examples for news, Twitter, and Wikipedia data are provided. The code assumes that the first n lines of the text file are training members and the remaining n lines are non-members.
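The assumed data layout can be sketched as follows. This is a hypothetical helper (the function name, file path, and `n` are placeholders, not taken from `attack.py`) that splits a line-per-example text file into members and non-members:

```python
# Hypothetical helper illustrating the assumed data layout:
# the first n lines are training members, the rest are non-members.

def load_members_and_nonmembers(path, n):
    """Split a line-per-example text file into (members, non_members)."""
    with open(path, encoding="utf-8") as f:
        lines = [line.strip() for line in f if line.strip()]
    return lines[:n], lines[n:]
```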

## How it works

The code uses a BERT-based model to generate neighbours, then computes the likelihoods of both the neighbours and the original texts under the probability distribution of the provided GPT-2-based attack model. It writes these scores to a pickle file.

To parallelize the workload, provide a `--proc-id` argument.
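One common way such a flag is used is to give each process an interleaved shard of the examples; the exact split performed in `attack.py` may differ, so the function below is only an illustrative sketch:

```python
# Hypothetical sharding pattern for a --proc-id flag: each process
# handles every num_procs-th example, offset by its proc_id.

def shard(examples, proc_id, num_procs):
    """Return the subset of examples assigned to process `proc_id`."""
    return examples[proc_id::num_procs]
```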