Where do hypotheses come from?
Why are human inferences sometimes remarkably close to the Bayesian ideal and other times systematically biased? One notable instance of this discrepancy is that tasks in which the candidate hypotheses are explicitly available yield close to rational inference over the hypothesis space, whereas tasks requiring the self-generation of hypotheses produce systematic deviations from rational inference. We propose that these deviations arise from algorithmic processes approximating Bayes' rule. Specifically, in our account, hypotheses are generated stochastically by a sampling process, such that the sampled hypotheses form a Monte Carlo approximation of the posterior. While this approximation converges to the true posterior in the limit of infinitely many samples, we assume that only a small number of samples are taken, since time pressure and cognitive resource constraints limit how many samples humans can draw. We show that this model recreates several well-documented experimental findings, including anchoring and adjustment, subadditivity, superadditivity, the crowd within, the self-generation effect, the weak evidence effect, and the dud alternative effect. Additionally, in two experiments we confirm the model's prediction that superadditivity and subadditivity can be induced within the same paradigm by manipulating the unpacking and typicality of hypotheses.
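The abstract describes a resource-bounded Monte Carlo approximation of Bayes' rule: a reasoner draws only a handful of hypotheses and treats their relative frequencies as the posterior. The Python sketch below illustrates only that single idea and is not the memo's actual model; the four-hypothesis space, prior, and likelihood values are invented for the example.

```python
# Minimal sketch of the idea in the abstract: approximating a Bayesian
# posterior over hypotheses with a small number of samples. The hypothesis
# space, prior, and likelihoods are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Toy hypothesis space with an invented prior and likelihood of one observation d.
hypotheses = ["h1", "h2", "h3", "h4"]
prior = np.array([0.4, 0.3, 0.2, 0.1])
likelihood = np.array([0.1, 0.5, 0.7, 0.9])  # P(d | h) for each hypothesis

# Exact Bayesian posterior: P(h | d) is proportional to P(d | h) * P(h).
posterior = prior * likelihood
posterior /= posterior.sum()

def monte_carlo_posterior(n_samples: int) -> np.ndarray:
    """Estimate the posterior from n_samples hypotheses drawn from it.

    Relative frequencies converge to the true posterior as n_samples grows;
    with the small sample counts assumed for resource-bounded reasoners,
    the estimate is noisy and can omit hypotheses entirely.
    """
    draws = rng.choice(len(hypotheses), size=n_samples, p=posterior)
    counts = np.bincount(draws, minlength=len(hypotheses))
    return counts / n_samples

print("exact posterior:            ", np.round(posterior, 3))
print("estimate from 5 samples:    ", monte_carlo_posterior(5))
print("estimate from 10000 samples:", np.round(monte_carlo_posterior(10_000), 3))
```

With five samples the frequency estimate typically misstates, or omits entirely, some hypotheses; with many samples it matches the exact posterior, in line with the convergence claim in the abstract.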
Main Authors: | Dasgupta, Ishita; Schulz, Eric; Gershman, Samuel J. |
---|---|
Format: | Technical Report |
Language: | en_US |
Published: | Center for Brains, Minds and Machines (CBMM), 2016 |
Subjects: | Bayes' Rule; Monte Carlo approximation; superadditivity; subadditivity |
Online Access: | http://hdl.handle.net/1721.1/105158 |
---|---|
author | Dasgupta, Ishita; Schulz, Eric; Gershman, Samuel J. |
collection | MIT |
description | Why are human inferences sometimes remarkably close to the Bayesian ideal and other times systematically biased? One notable instance of this discrepancy is that tasks in which the candidate hypotheses are explicitly available yield close to rational inference over the hypothesis space, whereas tasks requiring the self-generation of hypotheses produce systematic deviations from rational inference. We propose that these deviations arise from algorithmic processes approximating Bayes' rule. Specifically, in our account, hypotheses are generated stochastically by a sampling process, such that the sampled hypotheses form a Monte Carlo approximation of the posterior. While this approximation converges to the true posterior in the limit of infinitely many samples, we assume that only a small number of samples are taken, since time pressure and cognitive resource constraints limit how many samples humans can draw. We show that this model recreates several well-documented experimental findings, including anchoring and adjustment, subadditivity, superadditivity, the crowd within, the self-generation effect, the weak evidence effect, and the dud alternative effect. Additionally, in two experiments we confirm the model's prediction that superadditivity and subadditivity can be induced within the same paradigm by manipulating the unpacking and typicality of hypotheses. |
format | Technical Report |
id | mit-1721.1/105158 |
institution | Massachusetts Institute of Technology |
language | en_US |
publishDate | 2016 |
publisher | Center for Brains, Minds and Machines (CBMM) |
record_format | dspace |
funding | This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. |
date issued | 2016-10-24 |
date deposited | 2016-10-31 |
type | Technical Report; Working Paper; Other |
series | CBMM Memo Series;056 |
rights | Attribution-NonCommercial-ShareAlike 3.0 United States (http://creativecommons.org/licenses/by-nc-sa/3.0/us/) |
file format | application/pdf |
title | Where do hypotheses come from? |
topic | Bayes' Rule; Monte Carlo approximation; superadditivity; subadditivity |
url | http://hdl.handle.net/1721.1/105158 |