By Isaiah David — Updated Aug 30, 2022
Modern computers rely on binary—base‑2—because electronic circuits can reliably represent only two states: on (1) and off (0). This simplicity translates into faster, more reliable arithmetic operations.
To illustrate, the decimal number 9 converts to binary as 1001. Each binary digit represents a power of two: 1×8 + 0×4 + 0×2 + 1×1 = 9.
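This conversion can be sketched in Python; `to_binary` is a hypothetical helper name, using repeated division by two as described:

```python
# Convert a non-negative decimal integer to a binary string by
# repeatedly dividing by 2 and collecting remainders, least
# significant bit first.
def to_binary(n):
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits  # prepend the next remainder
        n //= 2
    return bits or "0"

print(to_binary(9))  # 1001
```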
Adding numbers in binary follows the same logic as decimal addition but with a base of two. When two 1s are added, the result is 0 with a carry of 1. For example, adding 5 (0101) and 4 (0100) proceeds as follows:
  0101
+ 0100
------
  1001  (9)
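The carry logic above can be mimicked with bitwise operations. A minimal sketch, assuming non-negative inputs (`binary_add` is a hypothetical name):

```python
# Add two non-negative integers using only bitwise operations:
# XOR gives the sum without carries, AND finds the carry positions,
# and the loop repeats until no carries remain.
def binary_add(a, b):
    while b:
        carry = a & b   # positions where both bits are 1
        a = a ^ b       # bitwise sum, ignoring carries
        b = carry << 1  # carries move one place to the left
    return a

print(format(binary_add(0b0101, 0b0100), "04b"))  # 1001
```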
The operation is efficient and forms the backbone of all higher‑level arithmetic.
Multiplication is implemented via repeated binary addition, often using shift‑and‑add algorithms. While it may require more steps than decimal multiplication, the underlying operations remain simple binary bit manipulations.
For instance, multiplying 8 (1000) by 9 (1001) in binary involves forming a partial product for each 1 bit of the multiplier (1000 for the ones bit, 1000000 for the eights bit) and summing them, resulting in 1001000 (72). This process mirrors long multiplication in base 10 but operates on binary digits.
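A shift-and-add multiplier can be sketched as follows; `shift_and_add` is a hypothetical name for illustration:

```python
# Multiply two non-negative integers by shift-and-add: for each
# set bit of the multiplier, add the suitably shifted multiplicand
# (a partial product) into the running total.
def shift_and_add(a, b):
    product = 0
    while b:
        if b & 1:         # is the lowest bit of the multiplier set?
            product += a  # add the current partial product
        a <<= 1           # shift the multiplicand one place left
        b >>= 1           # move to the next multiplier bit
    return product

print(shift_and_add(8, 9))  # 72
```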
Subtraction is performed by adding the two’s complement of the subtrahend. The two’s complement flips all bits of the number and adds one. For example:
 7 → 0111
-4 → 1100  (two's complement of 0100: flip to 1011, then add 1)

Adding these yields 10011. Dropping the overflow bit leaves 0011, which is 3.
These fundamental techniques—addition, multiplication, and subtraction—are the building blocks of all arithmetic operations executed by processors.