By Sean Mann
Updated Aug 30, 2022
Signed‑magnitude notation is one of the simplest ways to represent both positive and negative integers in binary, and converting it to decimal is a foundational skill in computer science.
Ignore the leftmost sign bit and any leading zeros between the sign bit and the first ‘1’. Starting from the rightmost data bit, assign successive powers of two (2^0, 2^1, 2^2, …) to each position. For example, in the signed‑magnitude number 10000101, the relevant data bits are the three rightmost ones, whose place values are 2^2 = 4, 2^1 = 2, and 2^0 = 1.
Add the powers of two that correspond to the positions where the bit is 1. In the example above, 4 + 1 = 5.
Attach a negative sign if the sign bit (the leftmost bit) is 1; otherwise, the number is positive. Since 10000101 has a sign bit of 1, it converts to -5 in decimal.
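The three steps above can be sketched in Python (the function name is illustrative, not from any standard library):

```python
def signed_magnitude_to_decimal(bits: str) -> int:
    """Convert a signed-magnitude binary string to a decimal integer.

    The leftmost bit is the sign (1 = negative); the remaining bits
    are the magnitude, read as an ordinary unsigned binary number.
    """
    sign_bit, magnitude_bits = bits[0], bits[1:]
    # Step 1 and 2: sum the power of two for every position holding a '1'.
    magnitude = sum(
        2 ** power
        for power, bit in enumerate(reversed(magnitude_bits))
        if bit == "1"
    )
    # Step 3: apply the sign indicated by the leftmost bit.
    return -magnitude if sign_bit == "1" else magnitude

print(signed_magnitude_to_decimal("10000101"))  # -5
print(signed_magnitude_to_decimal("00000101"))  # 5
```

Note that leading zeros in the magnitude need no special handling in code: they simply contribute nothing to the sum.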