Floating Point Arithmetic Effect: Why 0.2 + 0.2 + 0.2 ≠ 0.6 in Most Programming Languages?

Shriram Sivanandhan
Aug 18, 2023


The Floating Point Precision Effect occurs in most programming languages. For example, if we calculate:

0.1 + 0.2 ≠ 0.3,

0.1 + 0.1 + 0.1 ≠ 0.3,

0.3 - 0.1 ≠ 0.2, etc.

This article discusses the factors that cause these results and shows how programming languages store and operate on floating point numbers.

In Python, we can execute the comparisons and see the results:

>>> print(0.1 + 0.2 == 0.3)
False
>>> print(0.1 + 0.1 + 0.1 == 0.3)
False
>>> print(0.3 - 0.1 == 0.2)
False
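Printing the sums themselves (rather than the comparisons) shows what is really happening:

```python
# The results carry tiny representation errors, so exact equality fails.
print(0.1 + 0.2)        # 0.30000000000000004
print(0.1 + 0.1 + 0.1)  # 0.30000000000000004
print(0.3 - 0.1)        # 0.19999999999999998
```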

In Java, the same comparisons also fail:

0.1 + 0.2 ≠ 0.3 in Java
0.1 + 0.1 + 0.1 ≠ 0.3 in Java
0.3 - 0.1 ≠ 0.2 in Java

Humans deal with numbers in Base 10 representation. Some fractions can be represented with a finite number of digits, like 1/4 = 0.25 and 1/2 = 0.5. Others lead to repeating (recurring) decimals, like 1/3 = 0.3333…

Hence in Base 10, fractions like 1/3 lead to repeating decimals, and a truncated decimal like 0.3333… is not exactly equal to 1/3; it is only approximately equal to 1/3.

Example:

When we divide 1 by 7, we get approximately,

1/7 ≈ 0.142857…

A closer approximation is,

1/7 ≈ 0.142857142857…

And closer still,

1/7 ≈ 0.142857142857142857…

Therefore, some fractions like 1/3 and 1/7 lead to repeating (recurring) decimals, and no finite decimal can be exactly equal to such fractions.
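This can be sketched in Python with the standard-library decimal module, which lets us view 1/7 at increasing precision:

```python
from decimal import Decimal, getcontext

# Viewing 1/7 at increasing precision shows the repeating block 142857.
for digits in (6, 12, 24):
    getcontext().prec = digits
    print(Decimal(1) / Decimal(7))
```

No matter how high the precision, the result is still only an approximation of 1/7.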

Computers deal with numbers in Base 2 representation (binary representation). Numbers are stored and calculated using only 0's and 1's, up to a maximum number of bits allowed by the format.

The Floating Point Precision Effect in most programming languages occurs due to representation error: most decimal fractions cannot be exactly represented as binary fractions. The decimal floating point numbers we enter are approximated to binary floating point values and stored up to the maximum precision allowed by the format.

Since around 2000, most machines have used IEEE 754 binary floating-point arithmetic, and almost all platforms map the floating point types of most programming languages to IEEE 754 binary64 "double precision" values.

IEEE 754 binary64 Standard:

The IEEE 754 binary64 standard contains:

Sign bit: 1 bit

Exponent: 11 bits

Significand precision: 53 bits (52 explicitly stored)

Figure: Bits arrangement in IEEE 754 binary64 (1 sign bit, 11 exponent bits, 52 stored significand bits).

So, when a decimal floating point number is converted to binary and stored, it is actually approximated to the nearest representable binary floating point value within the limits of the format.

Conversion of Decimal Floating Point numbers to Binary Floating Point:

The conversion is done separately for the integral and fractional parts of the decimal floating point number.

Example:

Conversion of 0.1 to Binary Floating Point:

Integral Part — 0

Binary conversion of Integral Part — 0.

Fractional Part — 0.1

Binary conversion of Fractional Part is given below,

0.1 x 2 = 0.2 // Take 0 and move 0.2 to next step

0.2 x 2 = 0.4 // Take 0 and move 0.4 to next step

0.4 x 2 = 0.8 // Take 0 and move 0.8 to next step

0.8 x 2 = 1.6 // Take 1 and move 0.6 to next step

0.6 x 2 = 1.2 // Take 1 and move 0.2 to next step

0.2 x 2 = 0.4 // Take 0 and move 0.4 to next step

0.4 x 2 = 0.8 // Take 0 and move 0.8 to next step

0.8 x 2 = 1.6 // Take 1 and move 0.6 to next step

0.6 x 2 = 1.2 // Take 1 and move 0.2 to next step

0.2 x 2 = 0.4 // Take 0 and move 0.4 to next step

0.4 x 2 = 0.8 // Take 0 and move 0.8 to next step

0.8 x 2 = 1.6 // Take 1 and move 0.6 to next step

0.6 x 2 = 1.2 // Take 1 and move 0.2 to next step

…

The same steps repeat forever; the expansion never terminates.

Hence, 0.1 in Decimal Representation will be converted to 0.0001100110011… in Binary Representation.

Example:

Conversion of 0.2 to Binary Floating Point:

Integral Part — 0

Binary conversion of Integral Part — 0.

Fractional Part — 0.2

Binary conversion of Fractional Part is given below,

0.2 x 2 = 0.4 // Take 0 and move 0.4 to next step

0.4 x 2 = 0.8 // Take 0 and move 0.8 to next step

0.8 x 2 = 1.6 // Take 1 and move 0.6 to next step

0.6 x 2 = 1.2 // Take 1 and move 0.2 to next step

0.2 x 2 = 0.4 // Take 0 and move 0.4 to next step

0.4 x 2 = 0.8 // Take 0 and move 0.8 to next step

0.8 x 2 = 1.6 // Take 1 and move 0.6 to next step

0.6 x 2 = 1.2 // Take 1 and move 0.2 to next step

0.2 x 2 = 0.4 // Take 0 and move 0.4 to next step

0.4 x 2 = 0.8 // Take 0 and move 0.8 to next step

0.8 x 2 = 1.6 // Take 1 and move 0.6 to next step

0.6 x 2 = 1.2 // Take 1 and move 0.2 to next step

…

The same steps repeat forever; the expansion never terminates.

Hence, 0.2 in Decimal Representation will be converted to 0.001100110011… in Binary Representation.

Machines store numbers in binary, so to store 0.2 exactly a computer would have to store the infinite expansion 0.001100110011001100110011…, which is impossible with a finite number of bits.
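The multiply-by-2 procedure walked through above can be sketched as a short Python function (frac_to_binary is a name chosen here for illustration; the standard-library fractions module keeps the intermediate arithmetic exact):

```python
from fractions import Fraction

def frac_to_binary(value, bits):
    """Return the first `bits` binary digits of a fraction in [0, 1)."""
    digits = []
    for _ in range(bits):
        value *= 2                  # shift one binary place to the left
        if value >= 1:
            digits.append('1')      # take the integer part as the next bit
            value -= 1              # keep only the fractional part
        else:
            digits.append('0')
    return '0.' + ''.join(digits)

print(frac_to_binary(Fraction(1, 10), 12))  # 0.000110011001
print(frac_to_binary(Fraction(1, 5), 12))   # 0.001100110011
```

However many bits we ask for, the repeating block 0011 continues past the end.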

Issue with Floating Point operations:

So in most programming languages (for example Python and Java), when we do floating point comparisons like,

Python:

>>> print(0.1 + 0.1 + 0.1 == 0.3)
False
>>> print(0.2 + 0.2 + 0.2 == 0.6)
False
>>> print(0.1 + 0.2 == 0.3)
False

Java:

0.1 + 0.1 + 0.1 ≠ 0.3 in Java
0.2 + 0.2 + 0.2 ≠ 0.6 in Java
0.1 + 0.2 ≠ 0.3 in Java

These comparisons fail on most machines because floating point numbers are approximated using a binary fraction: the numerator holds the first 53 bits starting with the most significant bit, and the denominator is a power of two.

The fraction 1/10, which is 0.1, cannot be exactly represented in binary; its binary expansion is the infinitely recurring fraction shown below,

0.00110011001100110011001100110011001100110011001100110011…


(Floating Point Number) ~= J / (2**N)

IEEE 754 binary64 provides 53 bits of precision, so when 1/10 is given as input, the computer approximates it by a binary fraction J / (2**N) where J is an integer containing exactly 53 bits.

In case of 1/10,

Using the binary fraction J / (2**N),

1/10 ~= J / (2**N)

J ~= (2**N) / 10

We choose N = 56 because it is the value that leaves J with exactly 53 bits.

J ~= (2**56) / 10

J ~= 7205759403792794

Hence the approximation for 1/10 done in IEEE 754 double precision is,

1/10 ~= 7205759403792794 / (2**56)

1/10 ~= 7205759403792794 / 72057594037927936

1/10 ~= 0.10000000000000000555111512312578270211815834045410156250

Hence, 1/10 is not stored as exactly 0.1; it is stored as the approximation 0.1000000000000000055511151231257827021181583404541015625, and the same is true of most decimal fractions.

In Python, we can inspect both the value actually stored for 1/10 and the fraction J / (2**N) used to approximate it.
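A minimal check using the standard-library fractions and decimal modules (note that Fraction reduces 7205759403792794 / 2**56 to lowest terms, 3602879701896397 / 2**55):

```python
from decimal import Decimal
from fractions import Fraction

# The exact binary fraction stored for the double 0.1.
print(Fraction(0.1))  # 3602879701896397/36028797018963968
print(Fraction(0.1) == Fraction(7205759403792794, 2**56))  # True

# The full decimal expansion of the stored value.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```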

Due to this representation limit, in most programming languages,

0.1 + 0.1 + 0.1 ≠ 0.3

0.2 + 0.2 + 0.2 ≠ 0.6

0.1 + 0.2 ≠ 0.3
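Because of this, floating point values should be compared with a tolerance rather than with exact equality; in Python, one common approach is math.isclose (shown here with its default relative tolerance):

```python
import math

# Exact equality fails, but a tolerance-based comparison succeeds.
print(0.1 + 0.2 == 0.3)              # False
print(math.isclose(0.1 + 0.2, 0.3))  # True
```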

Thank you for reading this blog on the Floating Point Arithmetic Effect: why 0.2 + 0.2 + 0.2 ≠ 0.6 in most programming languages!

Reference: https://docs.python.org/3/tutorial/floatingpoint.html
