By Matthew Schieltz – Updated Aug 30, 2022
When you collect data or run an experiment, you often need to determine whether a change in one variable is linked to a change in another. T‑tests are the standard statistical tools for testing whether the difference between two groups is significant, beyond what might be expected by random chance.
Create a summary‑statistics table for each group. Calculate and record the sum, sample size (n), and mean. Label each row as sum, n, and mean.
Compute the degrees of freedom for each group: df = n – 1. Write this value beside the corresponding summary statistics.
Determine the variance and standard deviation for each group and add these to the table.
Sum the degrees of freedom from both groups and record this as df‑total.
Calculate the pooled variance: pooled variance = [(df1 × variance1) + (df2 × variance2)] ÷ df‑total.
Compute the standard error of the difference: SE = √[pooled variance × (1/n1 + 1/n2)].
Find the t‑value: t = (mean1 – mean2) ÷ SE.
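The independent-samples steps above can be sketched in Python. This is a minimal illustration using only the standard library; the function name and the sample lists are hypothetical.

```python
import math

def pooled_t_test(group1, group2):
    """Two-sample t-test with pooled variance, following the steps above."""
    n1, n2 = len(group1), len(group2)
    mean1 = sum(group1) / n1
    mean2 = sum(group2) / n2
    # Sample variance: sum of squared deviations divided by (n - 1)
    var1 = sum((x - mean1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - mean2) ** 2 for x in group2) / (n2 - 1)
    df_total = (n1 - 1) + (n2 - 1)
    # Pooled variance weights each group's variance by its degrees of freedom
    pooled_var = ((n1 - 1) * var1 + (n2 - 1) * var2) / df_total
    # Standard error of the difference between the two means
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    t = (mean1 - mean2) / se
    return t, df_total

t, df = pooled_t_test([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
```

The function returns both the t-value and df-total, since both are needed to look up the critical value in a t-distribution table.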
For each paired observation, subtract the second score from the first and place the result in a column titled Difference. Sum all differences to obtain D.
Square each difference, store in a column D‑squared, and sum these to get ΣD².
Compute the divisor: divisor = √{[(n × ΣD²) – D²] ÷ (n – 1)}, where n is the number of pairs and D is the sum of the differences.
Divide D by the divisor to obtain the t‑value for the paired‑samples t‑test.
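The paired-samples calculation above can be sketched the same way. Again, the function name and the before/after lists are hypothetical; the scores are paired element by element.

```python
import math

def paired_t_test(first_scores, second_scores):
    """Paired-samples t-test via the direct-difference steps above."""
    n = len(first_scores)  # number of pairs
    # Subtract the second score from the first for each pair
    diffs = [a - b for a, b in zip(first_scores, second_scores)]
    d_sum = sum(diffs)                     # D: sum of differences
    d_sq_sum = sum(d * d for d in diffs)   # sum of the squared differences
    divisor = math.sqrt((n * d_sq_sum - d_sum ** 2) / (n - 1))
    t = d_sum / divisor
    return t, n - 1  # df for a paired test is n - 1

t, df = paired_t_test([5, 6, 7, 8], [4, 4, 6, 5])
```

This direct-difference form is algebraically the same as t = (mean difference) ÷ (standard deviation of the differences ÷ √n), just arranged for hand calculation.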
Compare the calculated t‑value with the critical value from a t‑distribution table, using your chosen significance level (for example, 0.05) and the appropriate degrees of freedom. If the absolute value of the calculated t exceeds the critical value, reject the null hypothesis; otherwise, do not reject it.
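The decision rule is simple enough to state as a one-line check. The function name is hypothetical, and the critical value in the example (about 2.306 for a two-tailed test at the 0.05 level with 8 degrees of freedom) is the kind of number you would read off a t-distribution table.

```python
def two_tailed_decision(t_value, critical_value):
    # Reject the null hypothesis when |t| exceeds the table's critical value
    return abs(t_value) > critical_value

# Example: df = 8, alpha = 0.05 (two-tailed), critical value from a t-table
reject = two_tailed_decision(3.10, 2.306)
```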
For further reading, see Wikipedia – T‑test.