Trust Miscalibration Is Sometimes Necessary: An Empirical Study and a Computational Model
The literature on trust seems to have reached a consensus that appropriately calibrated trust in humans or machines is highly desirable; miscalibrated (i.e., over- or under-) trust has been thought to have only negative consequences (i.e., over-reliance or under-utilization). While not invalidating...
Main Authors: Michael G. Collins, Ion Juvina
Format: Article
Language: English
Published: Frontiers Media S.A., 2021-08-01
Series: Frontiers in Psychology
Online Access: https://www.frontiersin.org/articles/10.3389/fpsyg.2021.690089/full
Similar Items
- Dynamic Selection of Reliance Calibration Cues With AI Reliance Model
  by: Yosuke Fukuchi, et al.
  Published: (2023-01-01)
- Exploring the effects of human-centered AI explanations on trust and reliance
  by: Nicolas Scharowski, et al.
  Published: (2023-07-01)
- Impacts of transformational leadership on organizational change capability: a two-path mediating role of trust in leadership
  by: Thanh Thi Cao, et al.
  Published: (2024-04-01)
- Automation Use and Dis-Use in Golf: The Impact of Distance Measuring Devices on Trust in Technology and Confidence in Determining Distance
  by: Lori Dithurbide, et al.
  Published: (2021-07-01)
- Empirical Evaluations of Framework for Adaptive Trust Calibration in Human-AI Cooperation
  by: Kazuo Okamura, et al.
  Published: (2020-01-01)