Can large language model agents simulate human trust behaviors?
Large Language Model (LLM) agents have been increasingly adopted as simulation tools to model humans in applications such as social science. However, one fundamental question remains: can LLM agents really simulate human behaviors? In this paper, we focus on one of the most critical behaviors in hum...
Main authors: Xie, C, Chen, C, Jia, F, Ye, Z, Shu, K, Bibi, A, Hu, Z, Torr, P, Ghanem, B, Li, G
Format: Conference item
Language: English
Published / Created: Neural Information Processing Systems Foundation, 2024
Similar items
- CRAB: cross-environment agent benchmark for multimodal language model agents
  by: Xu, T, et al.
  Published / Created: (2024)
- Large Language Models: Trust and Regulation
  by: David Banks, et al.
  Published / Created: (2024-08-01)
- CAT: enhancing multimodal large language model to answer questions in dynamic audio-visual scenarios
  by: Ye, Q, et al.
  Published / Created: (2024)
- Select to perfect: imitating desired behavior from large multi-agent data
  by: Franzmeyer, T, et al.
  Published / Created: (2024)
- Language model tokenizers introduce unfairness between languages
  by: Petrov, A, et al.
  Published / Created: (2024)