Should We Gain Confidence from the Similarity of Results between Methods?

Bibliographic Details
Main Authors: Pascal Pernot, Andreas Savin
Format: Article
Language: English
Published: MDPI AG 2022-02-01
Series: Computation
Online Access: https://www.mdpi.com/2079-3197/10/2/27
Description
Summary: Confirming the result of a calculation by a calculation with a different method is often seen as a validity check. However, when the methods considered are all subject to the same (systematic) errors, this practice fails. Using a statistical approach, we define measures for reliability and similarity, and we explore the extent to which the similarity of results can help improve our judgment of the validity of data. This method is illustrated on synthetic data and applied to two benchmark datasets extracted from the literature: band gaps of solids estimated by various density functional approximations, and effective atomization energies estimated by ab initio and machine-learning methods. Depending on the levels of bias and correlation of the datasets, we found that similarity may provide a null-to-marginal improvement in reliability and was mostly effective in eliminating large errors.
ISSN: 2079-3197