Why are floating point calculations so inaccurate?

People are often very surprised by results like this:

>>> 1.2 - 1.0
0.19999999999999996

and think it is a bug in Python. It’s not. It’s a problem caused by the internal representation of floating point numbers, which uses a fixed number of binary digits to represent a decimal number. Some decimal numbers can’t be represented exactly in binary, resulting in small roundoff errors.
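You can see the stored approximation directly by asking for more digits than the float normally shows. On a platform with IEEE-754 double precision (virtually all current machines), 1.2 is actually stored as a value slightly below 1.2:

>>> '%.20f' % 1.2
'1.19999999999999995559'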

In decimal math, there are many numbers that can’t be represented with a fixed number of decimal digits, e.g. 1/3 = 0.333333…

In base 2, 1/2 = 0.1, 1/4 = 0.01, 1/8 = 0.001, and so on. But 0.2 equals 2/10, i.e. 1/5, which has no finite binary expansion: it becomes the infinitely repeating binary fraction 0.001100110011…
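To see where that repeating pattern comes from, here is a small sketch of the repeated-doubling method, using the fractions module so the arithmetic stays exact: each doubling shifts the binary point one place to the right, and the integer part that falls out is the next binary digit.

>>> from fractions import Fraction
>>> x = Fraction(1, 5)          # exactly 0.2
>>> digits = []
>>> for _ in range(12):
...     x *= 2                  # shift the binary point one place
...     digits.append(int(x))   # the integer part is the next binary digit
...     x -= int(x)             # keep only the fractional part
...
>>> ''.join(str(d) for d in digits)
'001100110011'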

Floating point numbers are stored in only 32 or 64 bits, which leaves 24 or 53 bits for the significand, so the repeating expansion is cut off at some point and the stored value is only an approximation of 0.2. In the example above, the rounding error in the stored 1.2 carries through the subtraction, and the result prints as 0.19999999999999996 in decimal, not 0.2.
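If you want to check the precision on your own machine, sys.float_info (available since Python 2.6) reports how many significand bits the platform’s C double carries; on IEEE-754 systems it is 53:

>>> import sys
>>> sys.float_info.mant_dig
53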

A float’s repr() function prints as many digits as are necessary to make eval(repr(f)) == f true for any float f, while str() prints fewer digits, which often results in the more sensible number that was probably intended. The transcript below is from an older Python (before 2.7 and 3.1); since then, repr() itself computes the shortest string that round-trips, so both lines would print 0.2 on a current interpreter:

>>> 0.2
0.20000000000000001
>>> print 0.2
0.2

Again, this has nothing to do with Python, but with the way the underlying C platform handles floating point numbers, and ultimately with the inaccuracy you’ll always have when writing down numbers as a string with a fixed number of digits.

One of the consequences of this is that it is dangerous to compare the result of some computation to a float with ==. Tiny inaccuracies may mean that == fails. Instead, you have to check that the difference between the two numbers is less than a certain threshold:

epsilon = 1e-13  # tiny allowed error
expected_result = 0.4

if expected_result - epsilon <= computation() <= expected_result + epsilon:
    ...
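The same check reads a little more directly with abs() on the difference; almost_equal below is a hypothetical helper name, not something from the standard library:

def almost_equal(a, b, epsilon=1e-13):
    # True when a and b differ by no more than the allowed error
    return abs(a - b) <= epsilon

if almost_equal(computation(), expected_result):
    ...

Note that a fixed absolute epsilon only makes sense when you know the rough magnitude of the values involved; for numbers of very different scales a relative tolerance works better, and Python 3.5 added math.isclose() for exactly this purpose.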