Can large language model agents simulate human trust behaviors?
Large Language Model (LLM) agents have been increasingly adopted as simulation tools to model humans in applications such as social science. However, one fundamental question remains: can LLM agents really simulate human behaviors? In this paper, we focus on one of the most critical behaviors in hum...
Main authors: Xie, C; Chen, C; Jia, F; Ye, Z; Shu, K; Bibi, A; Hu, Z; Torr, P; Ghanem, B; Li, G
Format: Conference item
Language: English
Published: Neural Information Processing Systems Foundation, 2024
Similar items
- CRAB: cross-environment agent benchmark for multimodal language model agents
  by: Xu, T, et al.
  Published: (2024)
- Large Language Models: Trust and Regulation
  by: David Banks, et al.
  Published: (2024-08-01)
- CAT: enhancing multimodal large language model to answer questions in dynamic audio-visual scenarios
  by: Ye, Q, et al.
  Published: (2024)
- Select to perfect: imitating desired behavior from large multi-agent data
  by: Franzmeyer, T, et al.
  Published: (2024)
- Language model tokenizers introduce unfairness between languages
  by: Petrov, A, et al.
  Published: (2024)