This is the official PyTorch implementation for the paper:
Xiaolei Wang, Kun Zhou, Xinyu Tang, Wayne Xin Zhao, Fan Pan, Zhao Cao, Ji-Rong Wen. Improving Conversational Recommendation Systems via Counterfactual Data Simulation. KDD 2023.
Conversational recommender systems (CRSs) aim to provide recommendation services via natural language conversations. Although a number of approaches have been proposed for developing capable CRSs, they typically require sufficient training data. Since it is difficult to annotate recommendation-oriented dialogue datasets, existing CRS approaches often suffer from insufficient training due to the scarcity of training data.
To address this issue, in this paper, we propose a CounterFactual data simulation approach for CRS, named CFCRS, to alleviate the issue of data scarcity in CRSs. Our approach is developed based on the framework of counterfactual data augmentation, which gradually incorporates rewriting of the user preference from a real dialogue without interfering with the entire conversation flow. To develop our approach, we characterize user preference and organize the conversation flow by the entities involved in the dialogue, and design a multi-stage recommendation dialogue simulator based on a conversation flow language model. Under the guidance of the learned user preference and dialogue schema, the flow language model can produce reasonable, coherent conversation flows, which can be further realized into complete dialogues. Based on the simulator, we perform the intervention at the representations of the interacted entities of target users, and design an adversarial training method with a curriculum schedule that can gradually optimize the data augmentation strategy.
- python == 3.8
- pytorch == 1.8.1
- cudatoolkit == 11.1.1
- transformers == 4.21.3
- pyg == 2.0.1
- accelerate == 0.12
- nltk == 3.6
We only list the versions of the key packages here; see `requirements.txt` for the full list.
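The packages above can be installed, for example, with conda and pip. The environment name below and the exact install channels are our assumptions for illustration, not part of the repository:

```shell
# Sketch of an environment setup matching the version list above.
conda create -n cfcrs python=3.8 -y
conda activate cfcrs
# PyTorch 1.8.1 with CUDA 11.1 (historical conda channels for this release)
conda install pytorch==1.8.1 cudatoolkit=11.1 -c pytorch -c nvidia -y
# Remaining key packages via pip (pyg is published as torch-geometric)
pip install transformers==4.21.3 torch-geometric==2.0.1 accelerate==0.12.0 nltk==3.6
```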
We run all experiments and tune hyperparameters on a GPU with 24GB memory. You can adjust `per_device_train_batch_size` and `per_device_eval_batch_size` in the scripts according to your GPU; the optimization hyperparameters (e.g., `learning_rate`) may then also need to be tuned.
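As a starting point for retuning, a common heuristic (our suggestion, not prescribed by the paper) is to scale the learning rate linearly with the effective batch size:

```python
def scale_learning_rate(base_lr: float, base_batch_size: int, new_batch_size: int) -> float:
    """Linear scaling rule: scale the learning rate in proportion to the
    batch size. This is only a heuristic starting point; the best value
    may still require manual tuning."""
    return base_lr * new_batch_size / base_batch_size

# Example: halving the batch size to fit a smaller GPU
# (1e-4 and 64 are illustrative values, not the repository's defaults)
print(scale_learning_rate(1e-4, 64, 32))
```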
The number after each command sets `CUDA_VISIBLE_DEVICES`. You can change `save_dir_prefix` in the script to set your own saving directory.
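For example, to run a training script on GPU 1 instead of GPU 0 (the concrete script path here is just one of the commands listed below):

```shell
# The trailing argument is used as CUDA_VISIBLE_DEVICES inside the script
bash script/simualtor/redial/train_FLM.sh 1
```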
- dataset: [redial, inspired]

```shell
bash script/simualtor/{dataset}/train_FLM.sh 0
bash script/simualtor/{dataset}/train_schema.sh 0
```
- model: [KBRD, BARCOR, UniCRS]
- dataset: [redial, inspired]

```shell
bash script/{model}/{dataset}/train_pre.sh 0  # only for UniCRS
bash script/{model}/{dataset}/train_rec.sh 0
bash script/{model}/{dataset}/train_cf.sh 0
bash script/{model}/{dataset}/train_conv.sh 0
```
If you have any questions about our paper or code, please send an email to [email protected].