Reliable and transparent in-vehicle agents lead to higher behavioral trust in conditionally automated driving systems
Trust is critical for human-automation collaboration, especially in safety-critical tasks such as driving. Providing explainable information on how the automation system reaches decisions and predictions can improve system transparency, which is believed to further facilitate driver trust and user evaluation of automated vehicles...
Main Authors: | Skye Taylor, Manhua Wang, Myounghoon Jeon |
---|---|
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2023-05-01 |
Series: | Frontiers in Psychology |
Subjects: | trust; transparency; automated vehicles; in-vehicle agents; reliability |
Online Access: | https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1121622/full |
_version_ | 1797824137276686336 |
author | Skye Taylor Skye Taylor Manhua Wang Myounghoon Jeon |
author_facet | Skye Taylor Skye Taylor Manhua Wang Myounghoon Jeon |
author_sort | Skye Taylor |
collection | DOAJ |
description | Trust is critical for human-automation collaboration, especially in safety-critical tasks such as driving. Providing explainable information on how the automation system reaches decisions and predictions can improve system transparency, which is believed to further facilitate driver trust and user evaluation of automated vehicles. However, the optimal level of transparency, and how the system should communicate it to calibrate drivers’ trust and improve their driving performance, remain uncertain. This uncertainty is compounded by system reliability that remains dynamic due to current technological limitations. To address this issue in conditionally automated vehicles, 30 participants were recruited for a driving simulator study and assigned to either a low or a high system reliability condition. They experienced two driving scenarios accompanied by two types of in-vehicle agents delivering information with different transparency types: “what”-then-wait (on-demand) and “what + why” (proactive). The on-demand agent provided some information about the upcoming event and delivered more information if prompted by the driver, whereas the proactive agent provided all information at once. Results indicated that the on-demand agent felt more habitable, or naturalistic, to drivers and was perceived as having a faster system response than the proactive agent. Drivers in the high-reliability condition complied with the takeover request (TOR) more often (if the agent was on-demand) and had shorter takeover times (in both agent conditions) than those in the low-reliability condition. These findings suggest how the automation system can deliver information to improve system transparency while adapting to system reliability and user evaluation, which further contributes to driver trust calibration and performance correction in future automated vehicles. |
first_indexed | 2024-03-13T10:34:32Z |
format | Article |
id | doaj.art-ed72e9d1ce44427485c90fdfa26bbaf8 |
institution | Directory Open Access Journal |
issn | 1664-1078 |
language | English |
last_indexed | 2024-03-13T10:34:32Z |
publishDate | 2023-05-01 |
publisher | Frontiers Media S.A. |
record_format | Article |
series | Frontiers in Psychology |
spelling | doaj.art-ed72e9d1ce44427485c90fdfa26bbaf8 | 2023-05-18T07:51:22Z | eng | Frontiers Media S.A. | Frontiers in Psychology | 1664-1078 | 2023-05-01 | vol. 14 | 10.3389/fpsyg.2023.1121622 | 1121622 | Reliable and transparent in-vehicle agents lead to higher behavioral trust in conditionally automated driving systems | Skye Taylor (Mind Music Machine Lab, Grado Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA, United States); Skye Taylor (Link Lab, Department of Systems and Information Engineering, University of Virginia, Charlottesville, VA, United States); Manhua Wang (Mind Music Machine Lab, Grado Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA, United States); Myounghoon Jeon (Mind Music Machine Lab, Grado Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA, United States) | https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1121622/full | trust; transparency; automated vehicles; in-vehicle agents; reliability |
spellingShingle | Skye Taylor; Skye Taylor; Manhua Wang; Myounghoon Jeon; Reliable and transparent in-vehicle agents lead to higher behavioral trust in conditionally automated driving systems; Frontiers in Psychology; trust; transparency; automated vehicles; in-vehicle agents; reliability |
title | Reliable and transparent in-vehicle agents lead to higher behavioral trust in conditionally automated driving systems |
title_full | Reliable and transparent in-vehicle agents lead to higher behavioral trust in conditionally automated driving systems |
title_fullStr | Reliable and transparent in-vehicle agents lead to higher behavioral trust in conditionally automated driving systems |
title_full_unstemmed | Reliable and transparent in-vehicle agents lead to higher behavioral trust in conditionally automated driving systems |
title_short | Reliable and transparent in-vehicle agents lead to higher behavioral trust in conditionally automated driving systems |
title_sort | reliable and transparent in vehicle agents lead to higher behavioral trust in conditionally automated driving systems |
topic | trust; transparency; automated vehicles; in-vehicle agents; reliability |
url | https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1121622/full |
work_keys_str_mv | AT skyetaylor reliableandtransparentinvehicleagentsleadtohigherbehavioraltrustinconditionallyautomateddrivingsystems AT skyetaylor reliableandtransparentinvehicleagentsleadtohigherbehavioraltrustinconditionallyautomateddrivingsystems AT manhuawang reliableandtransparentinvehicleagentsleadtohigherbehavioraltrustinconditionallyautomateddrivingsystems AT myounghoonjeon reliableandtransparentinvehicleagentsleadtohigherbehavioraltrustinconditionallyautomateddrivingsystems |