[DAC'24] EmMark: Robust Watermarks for IP Protection of Embedded Quantized Large Language Models
INT4 Quantization Watermarking
For environment setup, please refer to AWQ.
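As a rough starting point, a typical setup looks like the following; the repo URL and install steps are assumptions and should be verified against AWQ's own README:

# Hypothetical AWQ setup (verify against the upstream AWQ README)
$ git clone https://github.com/mit-han-lab/llm-awq
$ cd llm-awq
$ conda create -n awq python=3.10 -y && conda activate awq
$ pip install -e .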
Step 1: Preprocess to acquire the quantized model along with the activations
$ cd int4_wm
$ bash scripts/opt_watermark.sh opt-2.7b ours /path/save/llm
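As far as we can infer from the call above, the script's three positional arguments are the model variant, the watermarking method, and the model save path; check scripts/opt_watermark.sh for their authoritative meaning:

$ bash scripts/opt_watermark.sh <model> <method> <save_path>
# <model>     : OPT variant, e.g. opt-2.7b
# <method>    : watermarking scheme; "ours" presumably selects EmMark (assumption)
# <save_path> : directory where the quantized model and activations are saved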
Step 2: Watermark the model
Change the `status` variable from `save` to `watermark`, then rerun the script.
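For example, assuming `status` is set as a plain shell assignment inside scripts/opt_watermark.sh (an assumption; inspect the script first), the edit can be scripted:

# Hypothetical one-liner; assumes the script contains a literal status=save line
$ sed -i 's/^status=save$/status=watermark/' scripts/opt_watermark.sh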
$ bash scripts/opt_watermark.sh opt-2.7b ours /path/save/llm
INT8 Quantization Watermarking
For environment setup, please refer to LLM.int8() and SmoothQuant.
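A typical setup might look like this; the package name and repo URL are assumptions to verify against the LLM.int8() (bitsandbytes) and SmoothQuant READMEs:

# Hypothetical INT8 environment setup (verify against the upstream READMEs)
$ pip install bitsandbytes        # LLM.int8() kernels are provided by bitsandbytes
$ git clone https://github.com/mit-han-lab/smoothquant
$ cd smoothquant
$ pip install -e .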
Step 1: Preprocess to acquire the quantized model along with the activations
$ cd int8_wm
$ bash scripts/opt_watermark.sh opt-2.7b ours /path/save/llm
Step 2: Watermark the model
Change the `status` variable from `save` to `watermark` (the same edit as in the INT4 flow), then rerun the script.
$ bash scripts/opt_watermark.sh opt-2.7b ours /path/save/llm
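After either flow, a quick sanity check is to confirm the watermarked checkpoint still loads; the sketch below assumes the scripts write a Hugging Face transformers-compatible checkpoint to the save path, which may not match the actual output format:

# Hypothetical sanity check: load the watermarked checkpoint from the save path
$ python -c "from transformers import AutoModelForCausalLM; AutoModelForCausalLM.from_pretrained('/path/save/llm')"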
Our code builds heavily upon AWQ, LLM.int8(), and SmoothQuant. We thank the authors for open-sourcing their code.
If you find our code or paper helpful, please cite:
@inproceedings{zhang2024emmark,
title={{EmMark}: Robust watermarks for {IP} protection of embedded quantized large language models},
author={Zhang, Ruisi and Koushanfar, Farinaz},
booktitle={Proceedings of the 61st ACM/IEEE Design Automation Conference},
pages={1--6},
year={2024}
}