By Gina Putt • Updated Aug 30, 2022
When naturally occurring data, such as height, IQ, or blood pressure measurements, are plotted on a histogram, the frequencies of the scores typically form a symmetric, bell-shaped curve known as a normal (or Gaussian) distribution. This shape allows statisticians to make powerful predictions about the likelihood of observing a particular score.
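To see this shape emerge, here is a minimal sketch in Python (assuming NumPy and Matplotlib are available) that draws a large sample of hypothetical height measurements and plots their histogram. The mean and standard deviation used are illustrative values, not real population figures.

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulate 10,000 hypothetical adult heights (mean 170 cm, SD 10 cm are illustrative values)
rng = np.random.default_rng(seed=0)
heights = rng.normal(loc=170, scale=10, size=10_000)

# The histogram of frequencies traces out the familiar bell shape
plt.hist(heights, bins=40, edgecolor="black")
plt.xlabel("Height (cm)")
plt.ylabel("Frequency")
plt.title("Simulated heights: a normal (Gaussian) distribution")
plt.show()
```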
The arithmetic mean of a normal distribution sits at the curve's center and corresponds to the 50th percentile: half of all observations fall above it and half below. Because the curve is perfectly symmetric, the median and the mode coincide with the mean, so the center of the curve is also the point of greatest frequency.
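A quick way to check this is to simulate normally distributed scores and compare the mean, median, and 50th percentile. This is a small sketch assuming NumPy, with illustrative IQ-style parameters:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
scores = rng.normal(loc=100, scale=15, size=100_000)  # illustrative IQ-like mean and SD

print(round(np.mean(scores), 2))            # close to 100
print(round(np.median(scores), 2))          # essentially the same value
print(round(np.percentile(scores, 50), 2))  # the 50th percentile matches the median
```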
The standard deviation quantifies how far, on average, individual scores lie from the mean. A larger standard deviation produces a flatter, more spread-out curve, while a smaller one yields a steep, narrow shape. Each additional standard deviation away from the mean marks scores that are progressively less likely to occur.
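To illustrate (a sketch assuming SciPy is available): a larger standard deviation lowers the curve's peak and spreads probability over a wider range, so a fixed interval around the mean captures less of it.

```python
from scipy.stats import norm

# Height of the bell curve at the mean for two standard deviations
print(norm.pdf(0, loc=0, scale=1))  # ~0.399: narrow, steep curve
print(norm.pdf(0, loc=0, scale=3))  # ~0.133: flatter, more spread out

# Probability that a score lands within 5 units of the mean shrinks as SD grows
for sd in (1, 3):
    p = norm.cdf(5, loc=0, scale=sd) - norm.cdf(-5, loc=0, scale=sd)
    print(f"SD = {sd}: P(within 5 of mean) = {p:.3f}")
```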
In a normal distribution, the empirical rule gives the following landmark probabilities:
- About 68% of scores fall within one standard deviation of the mean.
- About 95% fall within two standard deviations of the mean.
- About 99.7% fall within three standard deviations of the mean.
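These landmark figures can be verified directly from the standard normal curve; a short check, assuming SciPy is available:

```python
from scipy.stats import norm

# Probability mass within k standard deviations of the mean
for k in (1, 2, 3):
    p = norm.cdf(k) - norm.cdf(-k)
    print(f"Within ±{k} SD: {p:.2%}")
# Within ±1 SD: 68.27%
# Within ±2 SD: 95.45%
# Within ±3 SD: 99.73%
```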
These percentages form the backbone of statistical inference. For example, if a clinical trial finds that patients taking a new cholesterol-lowering drug have average levels two standard deviations below the population mean, the result is unlikely to be due to chance alone, because fewer than 5% of observations fall that far from the mean in either direction.
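As a rough illustration of why such a result is considered unlikely under chance (a sketch assuming SciPy, treating the drug-group average as a single standard-normal score rather than running a full hypothesis test):

```python
from scipy.stats import norm

# Chance of a score falling 2 or more standard deviations below the mean
p_low = norm.cdf(-2)
print(f"P(Z <= -2) = {p_low:.4f}")       # about 0.0228, i.e. roughly 2.3%

# Chance of being that extreme in either direction (two-sided)
print(f"P(|Z| >= 2) = {2 * p_low:.4f}")  # about 0.0455, under 5%
```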