
Self-ICL: Zero-Shot In-Context Learning with Self-Generated Demonstrations

This is the official repository of our paper Self-ICL: Zero-Shot In-Context Learning with Self-Generated Demonstrations, EMNLP 2023.

TL;DR: This work presents Self-ICL, a prompting framework that bootstraps LLMs' intrinsic task-understanding ability to perform in-context learning via self-generated pseudo-demonstrations.


Figure: The Self-ICL prompting framework.

Given the [task_description] and a corresponding [test_input], Self-ICL consists of three steps:

  1. Construction of pseudo-inputs.

    • Prompt template (the same prompt is used in both direct prompting and chain-of-thought prompting).
    • Prompt the LLM to generate [num_shot] pseudo-inputs, conditioned on the [task_description] and [test_input].
  2. Construction of pseudo-labels.

    • Prompt the LLM to predict a label for each pseudo-input via zero-shot prompting, yielding [num_shot] (pseudo-input, pseudo-label) pairs as pseudo-demonstrations.
  3. In-context learning with pseudo-demonstrations.

    • Prepend the pseudo-demonstrations to the [test_input] and prompt the LLM for the final prediction (a minimal sketch of the full flow follows this list).
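To make the three steps concrete, here is a minimal, self-contained Python sketch of the flow. The call_llm function, the exact prompt wording, and the line-based parsing are illustrative assumptions, not the repository's actual implementation.

```python
# Minimal sketch of the Self-ICL flow. `call_llm` is a placeholder for any
# text-in/text-out LLM API; prompt wording and parsing are illustrative only.
from typing import Callable, List, Tuple

def self_icl(
    call_llm: Callable[[str], str],  # assumed text-in/text-out LLM call
    task_description: str,
    test_input: str,
    num_shot: int = 3,
) -> str:
    # Step 1: construct pseudo-inputs conditioned on the task and test input.
    gen_prompt = (
        f"{task_description}\n"
        f"Example input: {test_input}\n"
        f"Generate {num_shot} new, diverse inputs for this task, one per line."
    )
    pseudo_inputs: List[str] = [
        line.strip() for line in call_llm(gen_prompt).splitlines() if line.strip()
    ][:num_shot]

    # Step 2: construct pseudo-labels by zero-shot prompting on each pseudo-input.
    pseudo_demos: List[Tuple[str, str]] = []
    for p_input in pseudo_inputs:
        label = call_llm(f"{task_description}\nInput: {p_input}\nAnswer:").strip()
        pseudo_demos.append((p_input, label))

    # Step 3: in-context learning with the pseudo-demonstrations.
    demo_block = "\n\n".join(f"Input: {i}\nAnswer: {l}" for i, l in pseudo_demos)
    final_prompt = f"{task_description}\n\n{demo_block}\n\nInput: {test_input}\nAnswer:"
    return call_llm(final_prompt).strip()
```

In the paper's pipeline, the actual prompt templates, model endpoints, and output parsing are handled by experiment.py and the YAML configs described below.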

Steps to Reproduce Paper Experiments

Follow the steps below to reproduce the experiments. As a running example, they reproduce the text-bison-001 column in Table 3 of the paper.

Setup API Keys

Set the corresponding environment variables to your API keys. For example, use GOOGLE_API_KEY for PaLM-2 and OPENAI_API_KEY for GPT models.

export GOOGLE_API_KEY=<your_api_key>   # for PaLM-2 endpoints
export OPENAI_API_KEY=<your_api_key>   # for GPT endpoints

Configure the Experiment Settings

Edit a configuration file in ./configs to match your experiment settings. For example, the ./configs/config_template_standard.yml file records the settings for running the ZS-Direct prompting method against the text-bison-001 API endpoint.
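As a quick sanity check before launching a run, you can load the YAML and confirm it points where you expect. Only log_path is referenced elsewhere in this README; any other field names are left unstated here, so consult config_template_standard.yml itself for the authoritative keys. A minimal sketch:

```python
# Sanity-check a config before running experiment.py.
# Only `log_path` is documented in this README; inspect the file for other keys.
import yaml  # pip install pyyaml

with open("./configs/config_template_standard.yml") as f:
    config = yaml.safe_load(f)

print("log_path:", config.get("log_path"))   # where outputs and eval results are written
print("all keys:", sorted(config.keys()))    # inspect the remaining settings
```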

Run the Prompting Script

Run the following command to execute a prompting method and log its outputs to the log_path specified in the YAML file:

python experiment.py \
  --config_path ./configs/config_template_standard.yml \
  --label_type "class"

Note that other experiments may require additional command-line arguments (see experiment.py for details).

Run the Evaluation Script

After finishing the previous step, you may evaluate the performance by running:

python experiment.py \
  --config_path ./configs/config_template_standard.yml \
  --label_type "class" \
  --eval

The evaluation results are stored in log_path alongside the model outputs from the previous step.
