By Michael Judge — Updated Aug 30, 2022
Statisticians describe a data set that follows a bell‑shaped, symmetric curve as “normal.” In a normal distribution, the spread of the data is measured by the standard deviation. Any observation can be transformed into a Z‑score, which tells you how many standard deviations the value lies from the mean. Once you have a Z‑score, you can determine the proportion of observations that fall above or below the corresponding value.
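As a quick reference, the Z‑score of an observation is simply the difference from the mean divided by the standard deviation. The short Python sketch below spells that out; the function and variable names are ours, for illustration only:

```python
def z_score(value, mean, std_dev):
    """Return how many standard deviations `value` lies from the mean."""
    return (value - mean) / std_dev
```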
Decide whether you want the proportion of observations above or below the value represented by your Z‑score. For example, if you have a perfectly normal distribution of SAT scores and you’re interested in the percentage of students scoring above 2,000 (a Z‑score of 2.85), the upper‑tail proportion is what you’re looking for.
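For context, the article does not state the mean and standard deviation behind the 2.85 figure. Assuming, purely for illustration, a mean of 1,500 and a standard deviation of 175 on the old 2,400‑point SAT scale, the arithmetic lands close to that Z‑score:

```python
mean, std_dev = 1500, 175     # hypothetical SAT parameters, not stated in the article
z = (2000 - mean) / std_dev   # Z-score of an SAT score of 2,000
print(round(z, 2))            # 2.86, essentially the 2.85 used here
```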
Open a standard normal (Z) table. Scan the leftmost column for the first two digits of your Z‑score, that is, the ones digit and the first decimal place. In the SAT example, “2.8” appears in the 29th row.
Look across the top row of the table for the second decimal place of the Z‑score. For 2.85, the second decimal place is 5, which corresponds to the column headed “0.05,” the sixth column.
At the intersection of the 29th row and the sixth column you’ll find 0.4978. In this style of table, that entry is the proportion of observations falling between the mean and the value corresponding to a Z‑score of 2.85; the full cumulative probability of being at or below that value is 0.5 + 0.4978 = 0.9978.
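If you want to double‑check the table entry in software, SciPy’s standard normal CDF gives the area to the left of a Z‑score, and subtracting 0.5 recovers the mean‑to‑Z figure this style of table prints. A minimal sketch, assuming SciPy is available:

```python
from scipy.stats import norm

z = 2.85
left_area = norm.cdf(z)       # P(Z <= 2.85), roughly 0.9978
mean_to_z = left_area - 0.5   # area between the mean and Z, roughly 0.4978 (the table entry)
print(round(left_area, 4), round(mean_to_z, 4))
```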
Subtract the table entry from 0.5 (the total area in the upper half of the distribution): 0.5 − 0.4978 = 0.0022, the probability of being above the value.
Multiply by 100: 0.0022 × 100 = 0.22%. Thus, only 0.22% of students score above 2,000.
Subtract the upper‑tail percentage from 100%: 100 − 0.22 = 99.78%. Therefore, 99.78% of students score below 2,000.
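The last three steps can be checked in one pass. This sketch mirrors the arithmetic above using SciPy’s survival function, which returns the upper‑tail probability directly:

```python
from scipy.stats import norm

above = norm.sf(2.85)                                 # upper tail: 1 - norm.cdf(2.85), roughly 0.0022
print(f"{above * 100:.2f}% score above 2,000")        # 0.22%
print(f"{(1 - above) * 100:.2f}% score below 2,000")  # 99.78%
```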
If your sample size is small (and the population standard deviation is unknown), use a t‑score instead of a Z‑score. A t‑table, read with the appropriate degrees of freedom, is required for interpreting that statistic.
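The same lookup for a t‑score needs the degrees of freedom (sample size minus one). The sketch below uses SciPy’s t‑distribution with a sample of 15, a number we’ve chosen purely for illustration:

```python
from scipy.stats import t

df = 15 - 1                   # degrees of freedom for a hypothetical sample of 15
above = t.sf(2.85, df)        # upper-tail probability under the t-distribution
print(f"{above * 100:.2f}%")  # noticeably larger than the normal tail of 0.22%
```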