
The code for Consistent In-Context Editing, an approach for tuning language models through contextual distributions, overcoming the limitations of traditional fine-tuning methods that learn towards one-hot targets.


bigai-ai/ICE


If you like our project, please give us a star ⭐ on GitHub.


About

This project is developed based on EasyEdit. Please refer to the original repository for more details on other methods and an overview of knowledge editing. The following is a list of related repositories:

  • EasyEdit: an open-source knowledge editing framework.
  • ROME: a related locate-and-edit method.
  • MEMIT: a related locate-and-edit method.

Table of Contents

  • 🔔News
  • 🌟Overview
  • 🤗Dataset
  • 😮Highlights
  • 🛠️Requirements and Installation
  • 🤖Evaluation
  • 💥Training
  • 🚀Main Results
  • ✏️Citation

🌟Overview

(a) In-Context Learning: Utilizes context prompts without modifying the model's parameters.

(b) Traditional Fine-Tuning: Relies on minimizing a distance metric between the model's predictions and one-hot target distributions, often leading to overfitting and unnatural language generation.

(c) Consistent In-Context Editing (ICE): Leverages context prompts related to the target knowledge to guide the model towards a new distribution that aligns with the desired knowledge, while maintaining similarity to its original distribution.
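To make the idea concrete, here is a minimal, hypothetical sketch of an ICE-style update step. The function name, the kl_weight parameter, and the sequence alignment are illustrative assumptions, not the repository's implementation; see the paper and the code in this repo for the actual objective.

# Conceptual sketch of an ICE-style objective (illustrative only, not the repo's code).
# The model is pulled towards the distribution it induces *with* the context prompt,
# rather than towards a one-hot target alone.
import torch
import torch.nn.functional as F

def ice_style_step(model, tok, context, query, target, optimizer, kl_weight=1.0):
    # The sequence without the context, and the same sequence with the context prepended.
    plain = tok(query + " " + target, return_tensors="pt")
    with_ctx = tok(context + " " + query + " " + target, return_tensors="pt")

    # Dynamic target: the model's own distribution given the context (no gradient).
    # Keep only the positions that roughly correspond to the plain sequence, assuming
    # the shared suffix tokenizes identically -- a simplification for this sketch.
    n = plain.input_ids.shape[1]
    with torch.no_grad():
        ctx_probs = F.softmax(model(**with_ctx).logits[:, -n:, :], dim=-1)

    # Prediction without the context.
    logits = model(**plain).logits
    log_probs = F.log_softmax(logits, dim=-1)

    # Standard LM loss on the sequence (a full implementation would mask to the target span) ...
    lm_loss = F.cross_entropy(
        logits[:, :-1, :].reshape(-1, logits.size(-1)),
        plain.input_ids[:, 1:].reshape(-1),
    )
    # ... plus a KL term pulling the no-context distribution towards the in-context one.
    kl_loss = F.kl_div(log_probs, ctx_probs, reduction="batchmean")

    loss = lm_loss + kl_weight * kl_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

With a Hugging Face causal LM, tok and model would come from AutoTokenizer / AutoModelForCausalLM and optimizer from torch.optim; the point is only that the target is a distribution induced by the context, not a fixed one-hot vector.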

🤗Dataset

We evaluate our method on four datasets: WikiData_recent, ZsRE, WikiBio, and WikiData_counterfact. Together they cover the two knowledge-editing tasks, knowledge insertion and knowledge modification, used to test the generalization of our method.

|         | WikiData_recent     | ZsRE                   | WikiBio                | WikiData_counterfact   |
|---------|---------------------|------------------------|------------------------|------------------------|
| Task    | Knowledge Insertion | Knowledge Modification | Knowledge Modification | Knowledge Modification |
| Type    | Fact                | Question Answering     | Hallucination          | Counterfact            |
| # Train | 570                 | 10,000                 | 592                    | 1,455                  |
| # Test  | 1,266               | 1,230                  | 1,392                  | 885                    |

You can download the data from the 🤗 Huggingface Dataset. The expected file structure is:

ICE
|-- data
|   |-- wikibio.json
|   |-- wikidata_counterfact.json
|   |-- wikidata_recent.json
|   |-- zsre.json
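To sanity-check a downloaded file, you can inspect it with a few lines of Python (a generic sketch that assumes each file is a JSON list of records):

import json

# Inspect one of the downloaded files (path taken from the layout above).
with open("./data/zsre.json", "r", encoding="utf-8") as f:
    records = json.load(f)

print(f"{len(records)} records")
print("fields of the first record:", sorted(records[0].keys()))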

😮Highlights

🔥 Learning towards a distribution rather than a one-hot target

In-Context Editing (ICE) is a novel approach that overcomes the brittleness of traditional fine-tuning in knowledge editing, which learns towards a one-hot target.

Comparison of ICE with static and dynamic targets on an example, where the query is "The name of the country which Academy Award for Best Picture is associated with is?" and the target is "Wassoulou Empire".

The line plots on the left show the loss trajectories over optimization steps for static (top) and dynamic (bottom) targets under temperatures from 0.1 to 100. The figures on the right show how the probabilities of the top-6 predicted tokens for $x_2$, the second token following the target, change over the iteration steps.

The tokens are arranged from left to right in descending order of probability without context. At early steps, the token "Wass" appears because it is the initial token of the target $x^*$. At later steps, the probability of "Wass" under dynamic targets declines significantly, indicating successful adaptation and suppression of repetitive token predictions. In contrast, under static targets the probability of "Wass" remains relatively high throughout optimization.

💡 High continual editing performance

Our results confirm the effectiveness of ICE and demonstrate its potential for continual editing, ensuring that updated information is seamlessly incorporated while preserving the integrity of existing knowledge.

Continual editing with Llama2-7b-chat on WikiData_recent. Each edit builds on the previously edited model, risking deterioration over time. The model is assessed immediately after each edit without re-evaluating previous edits, testing its ability to update continuously. While most methods deteriorate, sometimes performing worse than the unedited model, our method, ICE, maintains integrity and achieves promising performance.

🛠️Requirements and Installation

# clone ICE
git clone https://github.com/Yofuria/ICE.git
cd ICE

# create conda env
conda create -n ICE python=3.10
conda activate ICE

# install package
pip install -r requirements.txt

Lines 32 and 33 of examples/run_knowedit_llama2.py download the NLTK punkt package:

  • If your Internet speed is fast enough, you can run the code directly from the command line.
if __name__ == "__main__":
    # If you have a slow Internet connection and can't download nltk quickly, comment these two lines and use the second method of Requirements and Installation in README.md
    import nltk
    nltk.download('punkt')
  • If your Internet speed is slow, comment out lines 32 and 33 and download punkt manually from 🤗 punkt. Then, in the ICE conda environment directory you created, create an nltk_data/tokenizers folder and unpack punkt into it (a quick way to verify this is shown below).
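If you go the manual route, a short check like the following confirms that NLTK can find punkt (the path is a placeholder for wherever you created the nltk_data folder):

import nltk

# Point NLTK at the directory that contains your manually created nltk_data folder
# (placeholder path -- replace with your ICE environment directory).
nltk.data.path.append("/path/to/envs/ICE/nltk_data")

# Raises LookupError if punkt still cannot be found.
print(nltk.data.find("tokenizers/punkt"))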

🤖Evaluation

You can get the evaluation results using eval.py.

The input used for PPL_r is the file of model-generated sentences that is saved during the edit run, for example: ICE_zsre_Llama-2-7b-chat-hf_gen_sentence.json

# --model_name_or_path: path to the pre-trained model
# --output_file: file of generated sentences (xxx.json)
# --result_file: file to write the results to (xxx.json)
python eval.py \
    --model_name_or_path='' \
    --output_file='./FT-M_counterfact_gpt2-xl_gen_sentence.json' \
    --result_file='./FT-M_counterfact_gpt2-xl_results.json'

You will get the following metrics:

Edit_Succ: 30.262626262626263
Portability: 7.3802393354053
Portability (Subject_Aliasing_acc): 6.939620928384972
Portability (reasoning_acc): 3.511697773992855
Portability (Logical_Generalization_acc): 9.11111111111111
Locality: 33.95236461069794
Fluency: 557.8193009507412
ppl_r:  tensor(9.9633, device='cuda:0')
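For intuition only, the snippet below sketches an average perplexity over the generated sentences using Hugging Face transformers. It is not the repository's PPL_r implementation, and it assumes the gen_sentence file is a flat JSON list of strings; adapt it to the actual file structure.

import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2-xl"  # or the path to your (edited) model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

# Assumption for this sketch: a flat list of generated sentences.
with open("./FT-M_counterfact_gpt2-xl_gen_sentence.json") as f:
    sentences = json.load(f)

ppls = []
for s in sentences:
    ids = tok(s, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token-level negative log-likelihood
    ppls.append(torch.exp(loss).item())

print("mean perplexity:", sum(ppls) / len(ppls))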

💥Training

We provide the training hyperparameters for five methods in ./hparams.

For ICE, we update GPT2-xl using layers 13 to 17 and Llama2-7b-chat using layers 4 to 8.

Both FT-L and FT-M use the same hparams located in ./hparams/FT.

For FT-L, set objective_optimization to prompt_last; for FT-M, set it to target_new (see the sketch after the command list below). For details on other methods, please refer to EasyEdit. You can execute the following commands to obtain results:

For ICE:

python examples/run_knowedit_llama2.py \
    --editing_method=ICE \
    --hparams_dir=./hparams/ICE/gpt2-xl.yaml \
    --data_dir=./data/zsre.json \
    --datatype='zsre' \
    --metrics_save_dir=./results/gpt2-xl/ICE

For FT-L:

python examples/run_knowedit_llama2.py \
    --editing_method=FT-L \
    --hparams_dir=./hparams/FT/gpt2-xl.yaml \
    --data_dir=./data/zsre.json \
    --datatype='zsre' \
    --metrics_save_dir=./results/gpt2-xl/FT-L

For FT-M:

python examples/run_knowedit_llama2.py \
    --editing_method=FT-M \
    --hparams_dir=./hparams/FT/gpt2-xl.yaml \
    --data_dir=./data/zsre.json \
    --datatype='zsre' \
    --metrics_save_dir=./results/gpt2-xl/FT-M

For MEMIT:

python examples/run_knowedit_llama2.py \
    --editing_method=MEMIT \
    --hparams_dir=./hparams/MEMIT/gpt2-xl.yaml \
    --data_dir=./data/zsre.json \
    --datatype='zsre' \
    --metrics_save_dir=./results/gpt2-xl/MEMIT

For ROME:

python examples/run_knowedit_llama2.py \
    --editing_method=ROME \
    --hparams_dir=./hparams/ROME/gpt2-xl.yaml \
    --data_dir=./data/zsre.json \
    --datatype='zsre' \
    --metrics_save_dir=./results/gpt2-xl/ROME

The valid options for datatype are ['zsre', 'recent', 'counterfact', 'wikibio'].
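As a convenience, the FT-L/FT-M switch mentioned above can also be made programmatically. The sketch below assumes PyYAML is available and that the shared hparams file exposes the objective_optimization field named in this README; editing the YAML by hand works just as well.

import yaml

path = "./hparams/FT/gpt2-xl.yaml"
with open(path) as f:
    hparams = yaml.safe_load(f)

# 'prompt_last' selects FT-L; use 'target_new' for FT-M (field name taken from this README).
hparams["objective_optimization"] = "prompt_last"

with open(path, "w") as f:
    yaml.safe_dump(hparams, f, sort_keys=False)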

🚀Main Results

  • Main results on knowledge insertion and question-answering datasets of Llama2-7b-chat

  • Main results on knowledge modification datasets of Llama2-7b-chat

  • Continual editing results of Llama2-7b-chat

⚡️More qualitative results

✏️Citation

If you find our paper and code useful in your research, please consider giving a star ⭐ and citation 📝.

@article{qi2024ice,
      title={In-Context Editing: Learning Knowledge from Self-Induced Distributions}, 
      author={Siyuan Qi and Bangcheng Yang and Kailin Jiang and Xiaobo Wang and Jiaqi Li and Yifan Zhong and Yaodong Yang and Zilong Zheng},
      year={2024},
      eprint={2406.11194},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2406.11194}, 
}

✨Star History

Star History Chart

🎉Contributors

