Methodological notes on model comparisons and strategy classification: A falsificationist proposition
Taking a falsificationist perspective, the present paper identifies two major shortcomings of existing approaches to comparative model evaluations in general and strategy classifications in particular. These are (1) failure to consider systematic error and (2) neglect of global model fit. Using adherence measures to evaluate competing models implicitly makes the unrealistic assumption that the error associated with the model predictions is entirely random. By means of simple schematic examples, we show that failure to discriminate between systematic and random error seriously undermines this approach to model evaluation. Second, approaches that treat random versus systematic error appropriately usually rely on relative model fit to infer which model or strategy most likely generated the data. However, the model comparatively yielding the best fit may still be invalid. We demonstrate that taking for granted the vital requirement that a model by itself should adequately describe the data can easily lead to flawed conclusions. Thus, prior to considering the relative discrepancy of competing models, it is necessary to assess their absolute fit and thus, again, attempt falsification. Finally, the scientific value of model fit is discussed from a broader perspective.
Main Authors: | Morten Moshagen, Benjamin E. Hilbig, Andreas Glöckner |
---|---|
Format: | Article |
Language: | English |
Published: | Cambridge University Press, 2011-12-01 |
Series: | Judgment and Decision Making |
Subjects: | falsification; error; model testing; model fit |
Online Access: | https://www.cambridge.org/core/product/identifier/S193029750000423X/type/journal_article |
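The abstract's central point — that the candidate model winning a relative comparison may still misdescribe the data, so absolute fit must be checked first — can be sketched with a small simulation. This is a hypothetical illustration, not code from the article; the choice probabilities, sample size, and model names are invented for the example:

```python
import math
import random

random.seed(1)

# True data-generating process: "adherent" choices occur with probability 0.8
n = 1000
k = sum(random.random() < 0.8 for _ in range(n))  # observed adherent choices

def log_lik(p, k, n):
    """Binomial log-likelihood of k successes in n trials under probability p."""
    return k * math.log(p) + (n - k) * math.log(1 - p)

# Two candidate models, BOTH wrong about the true probability
models = {"A (p=0.5)": 0.5, "B (p=0.6)": 0.6}

# Relative comparison: pick the model with the higher likelihood
best = max(models, key=lambda name: log_lik(models[name], k, n))

# Absolute fit of the winner: likelihood-ratio (G^2) statistic against the
# saturated model; approximately chi-square with 1 df if the model is valid
p_hat = k / n
g2 = 2 * (log_lik(p_hat, k, n) - log_lik(models[best], k, n))

# Model B wins the relative comparison, yet its G^2 far exceeds the
# chi-square(1) critical value of about 3.84 — absolute fit rejects it too
print(best, round(g2, 1))
```

The point of the sketch: selecting "B" by relative fit alone would suggest it generated the data, while the absolute goodness-of-fit test immediately falsifies it — exactly the two-step procedure (absolute fit before relative comparison) the abstract argues for.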
author | Morten Moshagen Benjamin E. Hilbig Andreas Glöckner |
author_facet | Morten Moshagen Benjamin E. Hilbig Andreas Glöckner |
author_sort | Morten Moshagen |
collection | DOAJ |
description | Taking a falsificationist perspective, the present paper identifies two major shortcomings of existing approaches to comparative model evaluations in general and strategy classifications in particular. These are (1) failure to consider systematic error and (2) neglect of global model fit. Using adherence measures to evaluate competing models implicitly makes the unrealistic assumption that the error associated with the model predictions is entirely random. By means of simple schematic examples, we show that failure to discriminate between systematic and random error seriously undermines this approach to model evaluation. Second, approaches that treat random versus systematic error appropriately usually rely on relative model fit to infer which model or strategy most likely generated the data. However, the model comparatively yielding the best fit may still be invalid. We demonstrate that taking for granted the vital requirement that a model by itself should adequately describe the data can easily lead to flawed conclusions. Thus, prior to considering the relative discrepancy of competing models, it is necessary to assess their absolute fit and thus, again, attempt falsification. Finally, the scientific value of model fit is discussed from a broader perspective. |
format | Article |
id | doaj.art-fb214d180f4c4c379953d352d7dca3ba |
institution | Directory Open Access Journal |
issn | 1930-2975 |
language | English |
publishDate | 2011-12-01 |
publisher | Cambridge University Press |
record_format | Article |
series | Judgment and Decision Making |
spelling | doaj.art-fb214d180f4c4c379953d352d7dca3ba 2023-09-03T10:05:07Z |
doi | 10.1017/S193029750000423X |
citation | Judgment and Decision Making, vol. 6, no. 8 (2011-12-01), pp. 814–820, Cambridge University Press |
affiliation | Morten Moshagen: University of Mannheim, Schloss, EO 254, 68133 Mannheim, Germany; Benjamin E. Hilbig: University of Mannheim, Germany, and Max-Planck Institute for Research on Collective Goods, Germany |
title | Methodological notes on model comparisons and strategy classification: A falsificationist proposition |
topic | falsification error model testing model fit |
url | https://www.cambridge.org/core/product/identifier/S193029750000423X/type/journal_article |