Intelligent autonomous agents and trust in virtual reality

Intelligent autonomous agents (IAA) are proliferating and rapidly evolving due to the exponential growth in computational power and recent advances, for instance, in artificial intelligence research. Ranging from chatbots and personal virtual assistants to medical decision-aiding and self-driving or self-piloting systems, IAA are increasingly integrated into many aspects of daily life, whether or not users are aware of them. Despite this technological development, many people remain skeptical of such agents, while others place excessive confidence in them. Establishing an appropriate level of trust is therefore crucial to the successful deployment of IAA in everyday contexts. Virtual Reality (VR) is another domain where IAA play a significant role; its experiential and immersive character, in particular, allows for new ways of interaction and of tackling trust-related issues. In this article, we provide an overview of the numerous factors involved in establishing trust between users and IAA, spanning scientific disciplines as diverse as psychology, philosophy, sociology, computer science, and economics. Focusing on VR, we discuss the different types and definitions of trust and identify foundational factors classified into three interrelated dimensions: Human-Technology, Human-System, and Interpersonal. Based on this taxonomy, we identify open issues and a research agenda towards facilitating the study of trustful interaction and collaboration between users and IAA in VR settings.

Bibliographic Details
Main Authors: Ningyuan Sun, Jean Botev
Format: Article
Language: English
Published: Elsevier 2021-08-01
Series: Computers in Human Behavior Reports
Subjects: Trust modeling; Virtual reality; Agent interaction; Collaboration
Online Access: http://www.sciencedirect.com/science/article/pii/S2451958821000944
author Ningyuan Sun
Jean Botev
collection DOAJ
description Intelligent autonomous agents (IAA) are proliferating and rapidly evolving due to the exponential growth in computational power and recent advances, for instance, in artificial intelligence research. Ranging from chatbots and personal virtual assistants to medical decision-aiding and self-driving or self-piloting systems, IAA are increasingly integrated into many aspects of daily life, whether or not users are aware of them. Despite this technological development, many people remain skeptical of such agents, while others place excessive confidence in them. Establishing an appropriate level of trust is therefore crucial to the successful deployment of IAA in everyday contexts. Virtual Reality (VR) is another domain where IAA play a significant role; its experiential and immersive character, in particular, allows for new ways of interaction and of tackling trust-related issues. In this article, we provide an overview of the numerous factors involved in establishing trust between users and IAA, spanning scientific disciplines as diverse as psychology, philosophy, sociology, computer science, and economics. Focusing on VR, we discuss the different types and definitions of trust and identify foundational factors classified into three interrelated dimensions: Human-Technology, Human-System, and Interpersonal. Based on this taxonomy, we identify open issues and a research agenda towards facilitating the study of trustful interaction and collaboration between users and IAA in VR settings.
first_indexed 2024-12-24T00:23:06Z
format Article
id doaj.art-a5ae8f0338454d4f96a615c53d002bc4
institution Directory Open Access Journal
issn 2451-9588
language English
last_indexed 2024-12-24T00:23:06Z
publishDate 2021-08-01
publisher Elsevier
record_format Article
series Computers in Human Behavior Reports
spelling Ningyuan Sun (Department of Computer Science, University of Luxembourg, Avenue de la Fonte 6, 4346 Esch-sur-Alzette, Luxembourg); Jean Botev (corresponding author; Department of Computer Science, University of Luxembourg, Avenue de la Fonte 6, 4346 Esch-sur-Alzette, Luxembourg). Intelligent autonomous agents and trust in virtual reality. Computers in Human Behavior Reports, Elsevier, ISSN 2451-9588, vol. 4, 2021-08-01, article 100146. Online access: http://www.sciencedirect.com/science/article/pii/S2451958821000944. Subjects: Trust modeling; Virtual reality; Agent interaction; Collaboration.
title Intelligent autonomous agents and trust in virtual reality
topic Trust modeling
Virtual reality
Agent interaction
Collaboration
url http://www.sciencedirect.com/science/article/pii/S2451958821000944