- Modified Hugging Face PEFT library (original source).
  - The only file relevant to our work is `peft/src/peft/tuners/lora.py`, which contains our method. It also contains additional implementations that were tested but not included in the paper. A usage sketch follows this list.
  - The rest of the library is unchanged.
- Modified code for running experiments on the GLUE benchmark.
  - Adapted from the LoRA source code.
- Code for fine-tuning the Llama 2 model on the instruction-following dataset.
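
Since the modified tuner replaces the stock PEFT package (see the setup steps below), it should be usable through the standard PEFT API. A minimal sketch, assuming the method in `lora.py` is configured through the usual `LoraConfig` fields; any method-specific options are the ones added in that file and are not shown here:

```python
# Minimal sketch: wrap a base model with the (modified) LoRA tuner via the
# standard PEFT API. The base model and hyperparameters are illustrative.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

base = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2
)
config = LoraConfig(
    task_type="SEQ_CLS",
    r=8,                                # LoRA rank
    lora_alpha=16,                      # scaling factor
    lora_dropout=0.1,
    target_modules=["query", "value"],  # RoBERTa attention projections
)
model = get_peft_model(base, config)
model.print_trainable_parameters()      # only adapter weights are trainable
```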
Both `glue` and `instruct` directories contain scripts for reproducing the results reported in the paper. For different GLUE tasks, change `task_name`, `classifier_lr`, `learning_rate`, and `num_train_epochs` to the corresponding values from the table in the Appendix.
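
A minimal sketch of a per-task launch, assuming the `glue` scripts expose these hyperparameters as command-line flags in the style of Hugging Face's `run_glue.py`; the script name and all numeric values are placeholders, not the values from the paper:

```python
# Hypothetical launcher; flag names mirror the hyperparameters above.
# Replace the placeholder values with the ones from the Appendix table.
import subprocess

task_name = "rte"                 # any GLUE task name
hparams = {
    "classifier_lr": "5e-4",      # placeholder
    "learning_rate": "4e-4",      # placeholder
    "num_train_epochs": "20",     # placeholder
}

cmd = ["python", "run_glue.py", "--task_name", task_name]
for flag, value in hparams.items():
    cmd += [f"--{flag}", value]
subprocess.run(cmd, check=True)
```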
- Python 3.10
- Run `pip install -r requirements.txt` to install the necessary packages.
- Run `pip install -U ./peft` to install the modified PEFT library.
- Fine-tuning the Llama 2 model requires access to the model weights on Hugging Face. Make sure you have been granted access before running the code; a short authentication sketch follows this list.
- For evaluation, use the Vicuna eval code.
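
A minimal sketch of authenticating to the Hugging Face Hub before the gated weights are first downloaded; the checkpoint name is an assumption (use whichever Llama 2 size the experiments target):

```python
# Log in to the Hugging Face Hub so gated meta-llama checkpoints can be
# downloaded. login() prompts for an access token if one is not cached.
from huggingface_hub import login
from transformers import AutoModelForCausalLM, AutoTokenizer

login()

model_name = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```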