Central limit theorem
Suppose an archer is shooting at a vertical line drawn on a target; the question of how many shots land in the various vertical bands on either side of it is a question for probability theory. Now, is it not self-evident that the hits must be assumed to be thicker and more numerous on any given band the nearer this is to the mark? If all the places on the vertical plane, whatever their distance from the mark, were equally liable to be hit, the most skilful shot would have no advantage over a blind man. That, however, is the tacit assertion of those who use the common rule (the arithmetic mean) in estimating the value of various discrepant observations, when they treat them all indiscriminately. In this way, therefore, the degree of probability of any given deviation could be determined to some extent a posteriori, since there is no doubt that, for a large number of shots, the probability is proportional to the number of shots which hit a band situated at a given distance from the mark.

Bernoulli (1777), who saw the distinction between probability and frequency, nevertheless failed completely to understand the basis for taking the arithmetic mean of the observations as an estimate of the true 'mark'. He takes it for granted (although a short calculation, which he was easily capable of doing, would have taught him otherwise) that if the observations are given equal weight in calculating the average, then one must be assigning equal probability to all errors, however great. Presumably others made intuitive guesses like this, unchecked by calculation, and it became part of the folklore of the time. One can then appreciate how astonishing it was when Gauss, 32 years later, proved that the condition
Maximum likelihood estimate = arithmetic mean
uniquely determines the Gaussian error law, not the uniform one.
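A brief sketch of why this is so may help; this is the standard reconstruction of Gauss's argument in modern notation (the symbols below are chosen for illustration and are not taken from the original):

\[
\begin{aligned}
&\text{Let the observations be } x_i = \theta + e_i,\ i = 1,\dots,n, \text{ with i.i.d. errors of density } f,\\
&\text{and write } g = (\log f)'. \text{ The maximum-likelihood estimate } \hat\theta \text{ solves } \sum_{i=1}^{n} g(x_i - \hat\theta) = 0.\\
&\text{Requiring } \hat\theta = \bar{x} \text{ for every possible sample forces, in particular:}\\
&\qquad g(0) = 0 \quad (\text{all } x_i \text{ equal}),\\
&\qquad g(u) + g(-u) = 0 \quad (n = 2:\ x_1 = \theta + u,\ x_2 = \theta - u),\\
&\qquad g\big((n-1)u\big) = (n-1)\,g(u) \quad (x_1 = \theta + (n-1)u,\ x_2 = \dots = x_n = \theta - u).\\
&\text{For continuous } g \text{ these conditions leave only the linear solution } g(u) = -k u, \text{ and integrating}\\
&(\log f)'(u) = -k u \text{ gives } f(u) \propto e^{-k u^2/2}, \text{ with } k > 0 \text{ for normalizability: the Gaussian law.}
\end{aligned}
\]

For a uniform error law, by contrast, the likelihood is flat over a whole interval of candidate values of the 'mark' and does not single out the arithmetic mean at all; presumably this is the kind of short calculation Bernoulli never made.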
In the meantime, Laplace (1783) had investigated this law as a limiting form of the binomial distribution, derived its main properties, and suggested that it was so important that it ought to be tabulated; yet, lacking the above property demonstrated by Gauss, he still failed to see that it was the natural error law. Laplace persisted in trying to use the form f(x) ∝ exp{-a|x|}, which caused no end of analytical difficulties. But he did understand the qualitative principle that combining observations improves the accuracy of estimates.

Twenty-two years later, when Laplace saw the Gauss derivation, he understood it all in a flash and hastened to give the central limit theorem and the full solution to the general problem of the reduction of observations, which is still how we analyze it today. Not until the time of Einstein did such a simple mathematical argument again have such a great effect on scientific practice.
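For reference, the result Laplace arrived at, stated in its modern classical i.i.d. form (modern notation, not Laplace's): if the X_i are independent observations with common mean μ and finite variance σ², then

\[
\frac{\bar{X}_n - \mu}{\sigma / \sqrt{n}}
\;=\;
\frac{1}{\sigma\sqrt{n}} \sum_{i=1}^{n} \left(X_i - \mu\right)
\;\xrightarrow{\ d\ }\;
\mathcal{N}(0, 1)
\qquad \text{as } n \to \infty .
\]

In other words, whatever the error law of a single observation (so long as its variance is finite), the average of many observations is approximately Gaussian, with a standard error that shrinks like 1/√n; this is exactly the sense in which combining observations improves the accuracy of estimates.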