On Neyman-Pearson Theory: Information Content of an Experiment and a Fancy Paradox
Main Author: | |
---|---|
Format: | Article |
Language: | English |
Published: | University of Bologna, 2007-10-01 |
Series: | Statistica |
Online Access: | http://rivista-statistica.unibo.it/article/view/38 |
Summary: | Two topics connected with the Neyman-Pearson theory of testing hypotheses are treated in this article. The first topic concerns the information content of an experiment: after a short outline of the ordinal comparability of experiments, the two most popular information measures, due to Fisher and to Kullback-Leibler, are considered. As long as only two experiments are compared at a time, the superiority of the pair (α, β) of error probabilities in the Neyman-Pearson approach is easily established, owing to their clear operational meaning. The second topic deals with the so-called Jeffreys (or Lindley) paradox: it can be shown that, if we attach a positive probability to a point null hypothesis, some «paradoxical» posterior probabilities in a Bayesian approach stand in sharp contrast with the error probabilities of the Neyman-Pearson approach. It is argued that such results are simply the outcome of absurd assumptions, and it is shown that sensible assumptions about interval, rather than point, hypotheses can yield posterior probabilities perfectly compatible with the Neyman-Pearson approach (although one must be very careful in making such comparisons, as the two approaches are radically different both in assumptions and in purposes). |
ISSN: | 0390-590X; 1973-2201 |
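
The Jeffreys-Lindley paradox described in the summary can be illustrated with a small numerical sketch. The code below is not taken from the article; it is a minimal illustration assuming a normal model with known variance, a point null H0: θ = 0 receiving prior probability 1/2, and a hypothetical N(0, τ²) prior on θ under the alternative. The observed sample mean is held at exactly 1.96 standard errors from zero, so a Neyman-Pearson test at α ≈ 0.05 just rejects, while the posterior probability of H0 grows with the sample size n.

```python
import math

def posterior_h0(n, sigma=1.0, tau=1.0, z=1.96, prior_h0=0.5):
    """Posterior P(H0 | xbar) for H0: theta = 0 vs H1: theta ~ N(0, tau^2),
    with the observed mean fixed at z standard errors from zero
    (i.e. on the rejection boundary of the two-sided 5% test)."""
    se2 = sigma**2 / n                       # Var(xbar) given theta
    xbar = z * math.sqrt(se2)                # observed mean on the rejection boundary
    # Marginal density of xbar under H0: N(0, sigma^2 / n)
    m0 = math.exp(-xbar**2 / (2 * se2)) / math.sqrt(2 * math.pi * se2)
    # Marginal density of xbar under H1: N(0, tau^2 + sigma^2 / n)
    v1 = tau**2 + se2
    m1 = math.exp(-xbar**2 / (2 * v1)) / math.sqrt(2 * math.pi * v1)
    bf01 = m0 / m1                           # Bayes factor in favour of H0
    return prior_h0 * bf01 / (prior_h0 * bf01 + (1 - prior_h0))

for n in (10, 100, 10_000, 1_000_000):
    print(f"n = {n:>9}: P(H0 | data) = {posterior_h0(n):.3f}")
```

With these assumed settings the posterior probability of H0 rises from roughly 0.4 at n = 10 to about 0.99 at n = 1 000 000, even though every one of these samples is "significant" at the 5% level; this is the contrast between posterior probabilities and Neyman-Pearson error probabilities that the article examines.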