- TL;DR : We find that LLM agents generally exhibit trust behavior in Trust Games, and that GPT-4 agents show high behavioral alignment with humans in terms of trust behavior, indicating the potential to simulate human trust behavior with LLM agents.
- Authors : Chengxing Xie*, Canyu Chen*, Feiran Jia, Ziyu Ye, Shiyang Lai, Kai Shu, Jindong Gu, Adel Bibi, Ziniu Hu, David Jurgens, James Evans, Philip Torr, Bernard Ghanem, Guohao Li. (*equal contributions)
- Correspondence to: Chengxing Xie <[email protected]>, Canyu Chen <[email protected]>, Guohao Li <[email protected]>.
- Paper : Read our paper
- Project Website: https://agent-trust.camel-ai.org
- Online Demo: Trust Game Demo & Repeated Trust Game Demo
Our research investigates the simulation of human trust behaviors through the use of large language model agents. We leverage the foundational work of the Camel Project, acknowledging its significant contributions to our research. For further information about the Camel Project, please visit Camel AI.
Our Framework for Investigating Agent Trust and its Behavioral Alignment with Human Trust. The figure first shows the major components for studying the trust behaviors of LLM agents with Trust Games and Belief-Desire-Intention (BDI) modeling. Our study then centers on examining the behavioral alignment between LLM agents and humans in terms of trust behaviors.
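For readers unfamiliar with the game, the payoff structure of the canonical one-shot Trust Game can be sketched as follows. The multiplier of 3 is the standard choice in the behavioral-economics literature, and the function and variable names below are illustrative, not taken from our code:

```python
def trust_game_payoffs(endowment, sent, returned_fraction, multiplier=3):
    """Payoffs for a one-shot Trust Game.

    The trustor sends `sent` (between 0 and `endowment`) to the trustee;
    the amount is multiplied by `multiplier` before reaching the trustee,
    who then returns a fraction of what was received.
    """
    received = sent * multiplier
    returned = received * returned_fraction
    trustor_payoff = endowment - sent + returned
    trustee_payoff = received - returned
    return trustor_payoff, trustee_payoff

# Example: the trustor sends 5 of a 10-unit endowment, the trustee returns half.
print(trust_game_payoffs(10, 5, 0.5))  # (12.5, 7.5)
```

Sending more exposes the trustor to risk but grows the total pie, which is why the amount sent is commonly read as a measure of trust.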
All experiment results are recorded for verification:
- The prompts for the games in the paper are stored in `agent_trust/prompt`.
- The experiment results for non-repeated games are stored in `agent_trust/No repeated res`.
- The experiment results for repeated games are stored in `agent_trust/repeated res`.
To prepare the environment for conducting experiments with Conda, create a new environment with all required dependencies as specified in the `environment.yaml` file:
conda env create -f environment.yaml
Alternatively, you can set up the environment manually as follows:
conda create -n agent-trust python=3.10
conda activate agent-trust
pip install -r requirements.txt
This guide provides instructions on how to run the trust games demos on your local machine. We offer two types of trust games: non-repeated and repeated. Follow the steps below to execute each demo accordingly.
To run the non-repeated trust game demo, use the following command in your terminal:
python agent_trust/no_repeated_demo.py
For the repeated trust game demo, execute this command:
python agent_trust/repeated_demo.py
Running this command will start the demo where the trust game is played repeatedly, illustrating how trust can evolve over repeated interactions.
Ensure you have the required environment set up and dependencies installed before running these commands. Enjoy exploring the trust dynamics in both scenarios!
The experiment code is primarily located in `agent_trust/all_game_person.py`, which contains the implementations for running the trust behavior experiments with large language models.
We use the FastChat framework to interact with open-source models. For comprehensive documentation, refer to the FastChat GitHub repository.
Game prompts are central to our experiments and are stored as JSON files in `agent_trust/prompt`; they provide the prompts used throughout the experiments, ensuring transparency and reproducibility.
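As a minimal illustration of working with JSON prompt files, the standard `json` module suffices. The field names in `sample` below are hypothetical; inspect the actual files in `agent_trust/prompt` for the real schema:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical prompt schema -- the real prompts live in agent_trust/prompt.
sample = {"role": "trustor", "prompt": "You are given $10. Decide how much to send."}

# Round-trip through a temporary file, as one would with a stored prompt.
with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "example_prompt.json"
    path.write_text(json.dumps(sample, indent=2))
    loaded = json.loads(path.read_text())

print(loaded["role"])  # trustor
```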
For scenarios where the trust game is not repeated, run the `run_exp` function in the `all_game_person.py` file. Make sure to adjust `model_list` and the other parameters to match your experiment's specifics.
For experiments involving repeated trust games, use the `multi_round_exp` function in the `all_game_person.py` file. This function is specifically designed for use with GPT-3.5-16k and GPT-4 models.
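To build intuition for what the repeated setting adds, here is a minimal self-contained simulation of a repeated Trust Game with simple reactive strategies. The strategies, parameters, and function name are illustrative only; the agents driven by `multi_round_exp` are LLM-based, not rule-based:

```python
def repeated_trust_game(rounds=5, endowment=10, multiplier=3):
    """Repeated Trust Game with a reactive rule-based trustor.

    The trustee always returns half of what it receives; the trustor
    sends more after a profitable round and less after a losing one.
    """
    history = []
    sent = endowment // 2  # start by sending half the endowment
    for _ in range(rounds):
        received = sent * multiplier
        returned = received * 0.5
        history.append((sent, returned))
        # Reactive update: raise the stake if the return beat the amount sent.
        if returned > sent:
            sent = min(endowment, sent + 1)
        else:
            sent = max(0, sent - 1)
    return history

for sent, returned in repeated_trust_game():
    print(sent, returned)
```

With a cooperative trustee, the amount sent ratchets upward round by round; against a stingy trustee it would decay toward zero, which is the trust-evolution dynamic the repeated games probe.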
To access a web interface for running the experiments (demo), execute `agent_trust/no_repeated_demo.py` or `agent_trust/repeated_demo.py`. This provides a user-friendly interface to interact with the experiment setup. You can also visit our online demo websites: Trust Game Demo & Repeated Trust Game Demo.
If you find our paper or code useful, we would greatly appreciate it if you could consider citing our paper:
@article{xie2024can,
title={Can Large Language Model Agents Simulate Human Trust Behaviors?},
author={Xie, Chengxing and Chen, Canyu and Jia, Feiran and Ye, Ziyu and Shu, Kai and Bibi, Adel and Hu, Ziniu and Torr, Philip and Ghanem, Bernard and Li, Guohao},
journal={arXiv preprint arXiv:2402.04559},
year={2024}
}