Can large language model agents simulate human trust behaviors?

Large Language Model (LLM) agents have been increasingly adopted as simulation tools to model humans in applications such as social science. However, one fundamental question remains: can LLM agents really simulate human behaviors? In this paper, we focus on one of the most critical behaviors in human interactions, trust, and aim to investigate whether LLM agents can simulate human trust behaviors. We first find that LLM agents generally exhibit trust behaviors, referred to as agent trust, under the framework of Trust Games, which are widely recognized in behavioral economics. Then, we discover that LLM agents can have high behavioral alignment with humans regarding trust behaviors, particularly for GPT-4, indicating the feasibility of simulating human trust behaviors with LLM agents. In addition, we probe into the biases in agent trust and the differences in agent trust towards agents and humans. We also explore the intrinsic properties of agent trust under conditions including advanced reasoning strategies and external manipulations. We further offer important implications of our discoveries for various scenarios where trust is paramount. Our study provides new insights into the behaviors of LLM agents and the fundamental analogy between LLMs and humans.
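The abstract frames the study around the Trust Game from behavioral economics. As a rough illustration of the payoff structure such a game typically has (a Berg-style one-shot game in which the trustor's transfer is amplified, commonly tripled, before the trustee decides how much to return), here is a minimal Python sketch. It is not the authors' protocol: the 10-unit endowment, the 3x multiplier, and the rule-based placeholder agents are assumptions for illustration only; in an LLM-agent study the two decision functions would instead prompt a model and parse its reply.

```python
# Minimal sketch of a one-shot Trust Game payoff structure (Berg-style).
# NOT the paper's exact protocol or prompts: the endowment, multiplier,
# and rule-based "agents" below are illustrative assumptions.

ENDOWMENT = 10   # assumed trustor endowment
MULTIPLIER = 3   # sent money is typically tripled before reaching the trustee


def trustor_decision(endowment: int) -> int:
    """Stand-in for an LLM trustor agent: decide how much to send.

    In an LLM-agent study, this would prompt a model with a persona and
    the game rules, then parse the transfer amount from its reply.
    """
    return endowment // 2  # placeholder policy: send half


def trustee_decision(received: int) -> int:
    """Stand-in for an LLM trustee agent: decide how much to return."""
    return received // 2  # placeholder policy: return half of what arrived


def play_trust_game() -> tuple[int, int]:
    """Play one round and return (trustor_payoff, trustee_payoff)."""
    sent = trustor_decision(ENDOWMENT)
    assert 0 <= sent <= ENDOWMENT, "trustor cannot send more than the endowment"

    received = sent * MULTIPLIER           # transfer is amplified
    returned = trustee_decision(received)
    assert 0 <= returned <= received, "trustee cannot return more than received"

    trustor_payoff = ENDOWMENT - sent + returned
    trustee_payoff = received - returned
    return trustor_payoff, trustee_payoff


if __name__ == "__main__":
    print(play_trust_game())  # e.g. (12, 8) with the placeholder policies
```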

Bibliographic details

Main authors: Xie, C; Chen, C; Jia, F; Ye, Z; Shu, K; Bibi, A; Hu, Z; Torr, P; Ghanem, B; Li, G
Format: Conference item
Language: English
Published: Neural Information Processing Systems Foundation, 2024
Collection: OXFORD
Record ID: oxford-uuid:d3f02bef-9dfc-45a3-8c38-253c6af8fc1d
Institution: University of Oxford