Toward a more nuanced understanding of probability estimation biases


Bibliographic Details
Main Authors: Fallon Branch, Jay Hegdé
Format: Article
Language: English
Published: Frontiers Media S.A., 2023-03-01
Series: Frontiers in Psychology
Subjects: base rate neglect; cognitive rules of thumb; individuating information; inverse fallacy; judgement and decision-making under uncertainty; miss rate neglect
Online Access: https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1132168/full
collection DOAJ
description In real life, we often have to make judgements under uncertainty. One such judgement task is estimating the probability of a given event based on uncertain evidence for the event, such as estimating the chances of an actual fire when the fire alarm goes off. On the one hand, previous studies have shown that human subjects often significantly misestimate the probability in such cases. On the other hand, these studies have offered divergent explanations as to the exact causes of these judgement errors (or, synonymously, biases). For instance, different studies have attributed the errors to the neglect (or underweighting) of the prevalence (or base rate) of the given event, or to the overweighting of the evidence for the individual event (‘individuating information’), among other causes. However, whether, or to what extent, any such explanation can fully account for the observed errors remains unclear. To help fill this gap, we studied the probability estimation performance of non-professional subjects under four different real-world problem scenarios: (i) estimating the probability of cancer in a mammogram given the relevant evidence from a computer-aided cancer detection system, (ii) estimating the probability of drunkenness based on breathalyzer evidence, and (iii & iv) estimating the probability of an enemy sniper based on two different sets of evidence from a drone reconnaissance system. In each case, we quantitatively characterized the contributions of the various potential explanatory variables to the subjects’ probability judgements. We found that while the explanatory variables together accounted for about 30 to 45% of the overall variance of the subjects’ responses, depending on the problem scenario, no single factor was sufficient to account for more than 53% of the explainable variance (or about 16 to 24% of the overall variance), let alone all of it. Further analyses of the explained variance revealed the surprising fact that no single factor accounted for significantly more than its ‘fair share’ of the variance. Taken together, our results demonstrate quantitatively that it is statistically untenable to attribute the errors of probabilistic judgement to any single cause, including base rate neglect. A more nuanced and unifying explanation would be that the actual biases reflect a weighted combination of multiple contributing factors, the exact mix of which depends on the particular problem scenario.
id doaj.art-8068d4e6d96b468a98f386ae862150b6
issn 1664-1078
topic base rate neglect
cognitive rules of thumb
individuating information
inverse fallacy
judgement and decision-making under uncertainty
miss rate neglect