The Most Dangerous Equation


Howard Wainer, “The Most Dangerous Equation,” American Scientist, vol. 95, no. 3, 2007, p. 249. Publisher: Sigma Xi. DOI: 10.1511/2007.65.249


@article{wainer2007,
  title = {The Most Dangerous Equation},
  volume = {95},
  issn = {1545-2786},
  url = {},
  doi = {10.1511/2007.65.249},
  number = {3},
  journal = {American Scientist},
  publisher = {Sigma Xi},
  author = {Wainer, Howard},
  year = {2007},
  pages = {249},
  custom-url-pdf = {}
}

Quotes (1)

Three Dangerous Statistical Equations

Supporting ignorance is not, however, the direction I wish to pursue—indeed it is quite the antithesis of my message. Instead I am interested in equations that unleash their danger not when we know about them, but rather when we do not. Kept close at hand, these equations allow us to understand things clearly, but their absence leaves us dangerously ignorant.

There are many plausible candidates, and I have identified three prime examples: Kelley’s equation, which indicates that the truth is estimated best when its observed value is regressed toward the mean of the group that it came from; the standard linear regression equation; and the equation that provides us with the standard deviation of the sampling distribution of the mean—what might be called de Moivre’s equation: $\sigma_{\bar{x}} = \sigma/\sqrt{n}$, where $\sigma_{\bar{x}}$ is the standard error of the mean, $\sigma$ is the standard deviation of the sample and $n$ is the size of the sample. (Note the square root symbol, which will be a key to at least one of the misunderstandings of variation.) De Moivre’s equation was derived by the French mathematician Abraham de Moivre, who described it in his 1730 exploration of the binomial distribution, Miscellanea Analytica.
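De Moivre’s equation is easy to check empirically: if we repeatedly draw samples of size $n$ from a population with standard deviation $\sigma$, the spread of the resulting sample means should be close to $\sigma/\sqrt{n}$. A minimal Monte Carlo sketch (the function name and parameter values are illustrative, not from the article):

```python
import math
import random

def standard_error_demo(sigma=2.0, n=100, trials=5000, seed=42):
    """Empirically check de Moivre's equation: the standard deviation
    of the sampling distribution of the mean is sigma / sqrt(n)."""
    rng = random.Random(seed)
    means = []
    for _ in range(trials):
        # Draw one sample of size n from N(0, sigma^2) and record its mean.
        sample = [rng.gauss(0.0, sigma) for _ in range(n)]
        means.append(sum(sample) / n)
    # Standard deviation of the observed sample means.
    grand_mean = sum(means) / trials
    observed = math.sqrt(sum((m - grand_mean) ** 2 for m in means) / trials)
    # De Moivre's prediction.
    predicted = sigma / math.sqrt(n)
    return observed, predicted
```

With $\sigma = 2$ and $n = 100$ the predicted standard error is $0.2$, and the simulated value lands very close to it; quadrupling $n$ only halves the spread, which is exactly the square-root effect the quote warns is so easily overlooked.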