Using inferred probabilities to measure the accuracy of imprecise forecasts
Research on forecasting is effectively limited to forecasts that are expressed with clarity: the forecasted event must be sufficiently well defined that it can be clearly resolved whether or not the event occurred, and forecast certainties must be expressed as quantitative probabilities. When forecasts are expressed with clarity, quantitative measures (scoring rules, calibration, discrimination, etc.) can be used to measure forecast accuracy, which in turn can be used to compare the accuracy of different forecasting methods. Unfortunately, most real-world forecasts are not expressed clearly. This lack of clarity extends both to the description of the forecast event and to the use of vague language to express forecast certainty. It is therefore difficult to assess the accuracy of most real-world forecasts, and consequently the accuracy of the methods used to generate them. This paper addresses this deficiency by presenting an approach to measuring the accuracy of imprecise real-world forecasts using the same quantitative metrics routinely used to measure the accuracy of well-defined forecasts. To demonstrate applicability, the Inferred Probability Method is applied to measure the accuracy of forecasts in fourteen documents examining complex political domains. Key words: inferred probability, imputed probability, judgment-based forecasting, forecast accuracy, imprecise forecasts, political forecasting, verbal probability, probability calibration.
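For readers unfamiliar with the quantitative metrics the abstract mentions, here is a minimal sketch, not the authors' Inferred Probability Method: it maps verbal certainty expressions to inferred numeric probabilities and then applies a Brier score and a simple calibration table to resolved forecasts. The phrase-to-probability mapping and the example data are hypothetical, for illustration only.

```python
# Illustrative sketch only -- not the authors' Inferred Probability Method.
# It shows the general idea the abstract describes: once imprecise verbal
# forecasts are mapped to numeric probabilities, standard accuracy metrics
# (here a Brier score and a crude calibration table) can be applied.
# The phrase-to-probability mapping and the example forecasts are hypothetical.

from collections import defaultdict

# Hypothetical mapping from vague certainty language to inferred probabilities.
VERBAL_TO_PROB = {
    "almost certainly": 0.93,
    "very likely": 0.85,
    "likely": 0.70,
    "even chance": 0.50,
    "unlikely": 0.30,
    "very unlikely": 0.10,
}

# Hypothetical resolved forecasts: (verbal certainty, event occurred?).
forecasts = [
    ("very likely", True),
    ("likely", True),
    ("unlikely", False),
    ("even chance", True),
    ("almost certainly", True),
    ("very unlikely", False),
]

def brier_score(pairs):
    """Mean squared difference between inferred probability and outcome (0/1)."""
    return sum((VERBAL_TO_PROB[w] - int(o)) ** 2 for w, o in pairs) / len(pairs)

def calibration_table(pairs):
    """Observed frequency of occurrence at each inferred probability level."""
    hits, totals = defaultdict(int), defaultdict(int)
    for w, occurred in pairs:
        p = VERBAL_TO_PROB[w]
        totals[p] += 1
        hits[p] += int(occurred)
    return {p: hits[p] / totals[p] for p in sorted(totals)}

if __name__ == "__main__":
    print(f"Brier score: {brier_score(forecasts):.3f}")
    for p, freq in calibration_table(forecasts).items():
        print(f"inferred p = {p:.2f} -> observed frequency = {freq:.2f}")
```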
Main Authors: | Paul Lehner, Avra Michelson, Leonard Adelman, Anna Goodman |
Format: | Article |
Language: | English |
Published: | Cambridge University Press, 2012-11-01, vol. 7, no. 6, pp. 728–740 |
Series: | Judgment and Decision Making |
ISSN: | 1930-2975 |
Subjects: | inferred probability, imputed probability, judgment-based forecasting, forecast accuracy, imprecise forecasts, political forecasting, verbal probability, probability calibration |
Online Access: | http://journal.sjdm.org/10/101116/jdm101116.pdf |