Can large language model agents simulate human trust behaviors?
Large Language Model (LLM) agents have been increasingly adopted as simulation tools to model humans in applications such as social science. However, one fundamental question remains: can LLM agents really simulate human behaviors? In this paper, we focus on one of the most critical behaviors in human interactions, trust, and investigate whether LLM agents can simulate human trust behaviors.
Main authors: Xie, C; Chen, C; Jia, F; Ye, Z; Shu, K; Bibi, A; Hu, Z; Torr, P; Ghanem, B; Li, G
Format: Conference item
Language: English
Published: Neural Information Processing Systems Foundation, 2024
Similar documents
- CRAB: cross-environment agent benchmark for multimodal language model agents
  by: Xu, T, et al.
  Published: (2024)
- Large Language Models: Trust and Regulation
  by: David Banks, et al.
  Published: (2024-08-01)
- CAT: enhancing multimodal large language model to answer questions in dynamic audio-visual scenarios
  by: Ye, Q, et al.
  Published: (2024)
- Select to perfect: imitating desired behavior from large multi-agent data
  by: Franzmeyer, T, et al.
  Published: (2024)
- Language model tokenizers introduce unfairness between languages
  by: Petrov, A, et al.
  Published: (2024)