Can large language model agents simulate human trust behaviors?
Large Language Model (LLM) agents have been increasingly adopted as simulation tools to model humans in applications such as social science. However, one fundamental question remains: can LLM agents really simulate human behaviors? In this paper, we focus on one of the most critical behaviors in hum...
Main authors: Xie, C, Chen, C, Jia, F, Ye, Z, Shu, K, Bibi, A, Hu, Z, Torr, P, Ghanem, B, Li, G
Material type: Conference item
Language: English
Published: Neural Information Processing Systems Foundation, 2024
Similar works
- CRAB: cross-environment agent benchmark for multimodal language model agents
  by: Xu, T, et al. Published: (2024)
- Large Language Models: Trust and Regulation
  by: David Banks, et al. Published: (2024-08-01)
- CAT: enhancing multimodal large language model to answer questions in dynamic audio-visual scenarios
  by: Ye, Q, et al. Published: (2024)
- Select to perfect: imitating desired behavior from large multi-agent data
  by: Franzmeyer, T, et al. Published: (2024)
- Language model tokenizers introduce unfairness between languages
  by: Petrov, A, et al. Published: (2024)