How to work with floating point numbers in Python

Ahead of the start of our Python full-stack development course, we are sharing solutions to the classic floating-point inaccuracy problem for beginners. In this article you will find examples of working with functions and classes designed specifically for handling floating point numbers.

Floating point numbers are a fast and efficient way to store and work with numbers, but they come with a number of pitfalls for beginners and experienced programmers alike! Here’s a classic example:

>>> 0.1 + 0.2 == 0.3
False

The first time you see this, you may be confused. But this behavior is correct! Let’s talk about why floating point errors are so common, why they occur, and how to deal with them in Python.

The computer is deceiving you

You saw that 0.1 + 0.2 is not equal to 0.3, but the madness doesn’t end there. Here are a couple more confusing examples:

>>> 0.2 + 0.2 + 0.2 == 0.6
False

>>> 1.3 + 2.0 == 3.3
False

>>> 1.2 + 2.4 + 3.6 == 7.2
False

The problem also applies to comparison:

>>> 0.1 + 0.2 <= 0.3
False

>>> 10.4 + 20.8 > 31.2
True

>>> 0.8 - 0.1 > 0.7
True

What’s happening? When you type the number 0.1 into the Python interpreter, it is stored in memory as a floating point number, and a conversion takes place. 0.1 is a decimal number in base 10, but floating point numbers are stored in binary notation. That is, 0.1 is converted from base 10 to base 2.

The resulting binary number may not accurately represent the original base-10 number, and 0.1 is one such example: its binary representation is 0.0(0011). That is, 0.1 is an infinitely repeating fraction when written in base 2. The same thing happens when ⅓ is written as a decimal in base 10: you get the infinitely repeating decimal 0.(3).

Computer memory is finite, so the infinitely repeating binary representation of 0.1 is rounded to a finite fraction. The exact value depends on the architecture of the computer (32-bit or 64-bit).

You can see the floating point value stored for 0.1 using the .as_integer_ratio() method. The floating point representation consists of a numerator and a denominator:

>>> numerator, denominator = (0.1).as_integer_ratio()
>>> f"0.1 ≈ {numerator} / {denominator}"
'0.1 ≈ 3602879701896397 / 36028797018963968'
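By contrast, a number with a finite binary representation is stored exactly, and .as_integer_ratio() returns the fraction you would expect. A quick check in plain Python:

```python
# 0.5 = 1/2 and 0.125 = 1/8 have finite binary representations,
# so they are stored exactly
print((0.5).as_integer_ratio())    # → (1, 2)
print((0.125).as_integer_ratio())  # → (1, 8)

# 0.1 is not stored exactly: the ratio describes the nearest representable value
print((0.1).as_integer_ratio())    # → (3602879701896397, 36028797018963968)
```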

To display this fraction to 55 decimal places, use format():

>>> format(numerator / denominator, ".55f")
'0.1000000000000000055511151231257827021181583404541015625'

So 0.1 is rounded up to a number slightly larger than its true value.

Learn more about numeric methods like .as_integer_ratio() in my article 3 Things You Might Not Know About Numbers in Python.

This float representation error is more common than you might think.

Number representation error is very common

There are three reasons why a number gets rounded when represented as a floating point number:

  1. The number has more significant digits than floating point allows.

  2. It is an irrational number.

  3. It is rational, but has no finite binary representation.

64-bit floating point numbers have 16 or 17 significant digits. Any number with more significant digits gets rounded. Irrational numbers such as π and e cannot be represented by any ratio of integers, let alone a finite fraction, so they too are always rounded when stored as floating point numbers.

These two situations create an infinite set of numbers that cannot be exactly represented as floating point numbers. But unless you’re a chemist dealing with minuscule numbers or a physicist dealing with astronomically large ones, you’re unlikely to run into these problems.
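All three causes of rounding can be seen directly in the interpreter. This is a small sketch in plain Python (the 20-digit literal is my own illustrative value):

```python
import math

# 1. Too many significant digits: a 64-bit float keeps only ~16-17 of them,
#    so the trailing digits of this 20-digit number are rounded away
print(float("0.12345678901234567890"))

# 2. An irrational number: math.pi is just the closest 64-bit double to pi
print(format(math.pi, ".17g"))  # → 3.1415926535897931

# 3. A rational number with no finite binary representation
print(format(0.1, ".17g"))      # → 0.10000000000000001
```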

What about rational numbers with an infinite representation, like 0.1 in base 2? This is where you’ll run into most floating point trouble, and a little math, which lets you determine whether a fraction is finite in a given base, shows that representation errors occur more often than you might think.

In base 10, a fraction has a finite representation if its denominator is a product of powers of the prime factors of 10. The two prime factors of 10 are 2 and 5, so ½, ¼, ⅕, ⅛, and ⅒ are all finite, while ⅓, ⅐, and ⅑ are not. Base 2, on the other hand, has only one prime factor: 2.

So in base 2 the only finite fractions are those whose denominator is a power of 2. As a result, the fractions ⅓, ⅕, ⅙, ⅐, ⅑, and ⅒ are all infinite when written in binary notation.
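This rule is easy to check in code. Here is a small sketch (the function name is my own) that strips the base's prime factors from the denominator of a reduced fraction; the expansion terminates exactly when nothing else is left:

```python
from math import gcd

def has_finite_expansion(numerator, denominator, base):
    """Return True if numerator/denominator is a finite fraction in `base`."""
    # Reduce the fraction first
    denominator //= gcd(numerator, denominator)
    # Repeatedly remove every factor the denominator shares with the base
    g = gcd(denominator, base)
    while g > 1:
        while denominator % g == 0:
            denominator //= g
        g = gcd(denominator, base)
    # Only finite expansions reduce the denominator all the way to 1
    return denominator == 1

print(has_finite_expansion(1, 10, 10))  # 1/10 is finite in base 10 → True
print(has_finite_expansion(1, 10, 2))   # but infinite in base 2 → False
print(has_finite_expansion(1, 8, 2))    # 1/8 = 0.001 in binary → True
```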

Now our first example should make more sense: 0.1, 0.2, and 0.3 are all rounded when converted to floating point numbers:

>>> # -----------vvvv  Display with 17 significant digits
>>> format(0.1, ".17g")
'0.10000000000000001'

>>> format(0.2, ".17g")
'0.20000000000000001'

>>> format(0.3, ".17g")
'0.29999999999999999'

Adding 0.1 and 0.2 results in a number just over 0.3:

>>> 0.1 + 0.2
0.30000000000000004

And since 0.1 + 0.2 is rounded to a number slightly greater than 0.3, while 0.3 itself is represented by a number slightly smaller than its true value, the expression 0.1 + 0.2 == 0.3 evaluates to False.

Every programmer in any language should know about floating point representation error and be able to deal with it: it is not unique to Python. You can see the result of evaluating 0.1 + 0.2 in many different languages on the aptly named site 0.30000000000000004.com.

How to compare floating point numbers in Python

So how do you deal with representation errors when comparing floating point numbers in Python? The trick is to avoid checking for equality. Instead of ==, >=, or <=, use the math.isclose() function:

>>> import math
>>> math.isclose(0.1 + 0.2, 0.3)
True

math.isclose() checks whether the first argument is close enough to the second. That is, it checks the distance between the two arguments, which is the absolute value of their difference:

>>> a = 0.1 + 0.2
>>> b = 0.3
>>> abs(a - b)
5.551115123125783e-17

If abs(a - b) is less than some percentage of the larger of a and b, then a is considered close enough to b to be "equal" to b. This percentage is called the relative tolerance and is specified by the rel_tol keyword argument to math.isclose(), which defaults to 1e-9.

That is, if abs(a - b) is less than 0.000000001 * max(abs(a), abs(b)), then a and b are considered close to each other. This ensures that a and b are equal to approximately nine decimal places.

If necessary, you can change the relative tolerance:

>>> math.isclose(0.1 + 0.2, 0.3, rel_tol=1e-20)
False

Of course, the right relative tolerance depends on the constraints of the problem, but for most everyday applications the default is sufficient. A problem arises, however, if a or b is zero and rel_tol is less than one: then, no matter how close the non-zero value is to zero, the relative tolerance guarantees that the check will fail. Here an absolute tolerance is used as a fallback:

>>> # Relative check fails!
>>> # ---------------vvvv  Relative tolerance
>>> # ----------------------vvvvv  max(0, 1e-10)
>>> abs(0 - 1e-10) < 1e-9 * 1e-10
False

>>> # Absolute check works!
>>> # ---------------vvvv  Absolute tolerance
>>> abs(0 - 1e-10) < 1e-9
True

math.isclose() performs this check automatically. The absolute tolerance is set with the abs_tol keyword argument, but abs_tol defaults to 0.0, so you have to set it manually if you want to check how close a value is to zero.
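For example (the tolerance values here are purely illustrative):

```python
import math

# The relative check alone fails near zero...
print(math.isclose(1e-10, 0.0))                # → False

# ...but an absolute tolerance makes the comparison behave as expected
print(math.isclose(1e-10, 0.0, abs_tol=1e-9))  # → True
```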

Putting it all together, math.isclose() returns the result of the following comparison, which combines the relative and absolute checks in a single expression:

abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

math.isclose() was introduced in PEP 485 and has been available since Python 3.5.
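For finite values, the formula above is simple enough to re-implement and check against the real function. A sketch (the name is_close and its defaults mirror math.isclose(), but this toy version skips the special handling of infinities and NaN):

```python
import math

def is_close(a, b, rel_tol=1e-9, abs_tol=0.0):
    # The combined relative/absolute check used by math.isclose()
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

# Agrees with the built-in on the running example
print(is_close(0.1 + 0.2, 0.3))      # → True
print(math.isclose(0.1 + 0.2, 0.3))  # → True
```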

When should you use math.isclose()?

In general, math.isclose() should be used when comparing floating point values. Let’s replace == with math.isclose():

>>> # Don't do this:
>>> 0.1 + 0.2 == 0.3
False

>>> # Do this instead:
>>> math.isclose(0.1 + 0.2, 0.3)
True

You also have to be careful with >= and <= comparisons. Handle the equality separately with math.isclose(), then check the strict comparison:

>>> a, b, c = 0.1, 0.2, 0.3

>>> # Don't do this:
>>> a + b <= c
False

>>> # Do this instead:
>>> math.isclose(a + b, c) or (a + b < c)
True
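If such comparisons come up often, the pattern can be wrapped in small helpers (the function names below are my own, not part of the standard library):

```python
import math

def less_or_close(a, b, **tolerances):
    # Tolerant "a <= b": equal within tolerance, or strictly less
    return math.isclose(a, b, **tolerances) or a < b

def greater_or_close(a, b, **tolerances):
    # Tolerant "a >= b": equal within tolerance, or strictly greater
    return math.isclose(a, b, **tolerances) or a > b

print(less_or_close(0.1 + 0.2, 0.3))     # → True ("equal" counts)
print(greater_or_close(0.1 + 0.2, 0.3))  # → True
print(less_or_close(0.4, 0.3))           # → False
```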

There are alternatives to math.isclose(). If you are working with NumPy, you can use numpy.allclose() and numpy.isclose():

>>> import numpy as np

>>> # Use numpy.allclose() to check if two arrays are equal
>>> # to each other within a tolerance.
>>> np.allclose([1e10, 1e-7], [1.00001e10, 1e-8])
False

>>> np.allclose([1e10, 1e-8], [1.00001e10, 1e-9])
True

>>> # Use numpy.isclose() to check if the elements of two arrays
>>> # are equal to each other within a tolerance
>>> np.isclose([1e10, 1e-7], [1.00001e10, 1e-8])
array([ True, False])

>>> np.isclose([1e10, 1e-8], [1.00001e10, 1e-9])
array([ True, True])

Keep in mind that the default relative and absolute tolerances are not the same as in math.isclose(). For numpy.allclose() and numpy.isclose(), the default relative tolerance (rtol) is 1e-05 and the default absolute tolerance (atol) is 1e-08.

math.isclose() is especially handy for unit tests, although there are alternatives. Python’s built-in unittest module has the unittest.TestCase.assertAlmostEqual() method.

But that method uses only an absolute difference test. It is also an assertion: when it fails, an AssertionError is raised, which makes it unsuitable for comparisons in business logic.
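To illustrate, a minimal unittest sketch (the test class and values are my own): the first check passes because the rounded difference is zero, while the second shows how the purely absolute test rejects two large numbers whose relative difference is tiny:

```python
import unittest

class AssertAlmostEqualExample(unittest.TestCase):
    def test_small_numbers(self):
        # Passes: round(0.1 + 0.2 - 0.3, 7) == 0
        self.assertAlmostEqual(0.1 + 0.2, 0.3)

    def test_large_numbers(self):
        # Fails the absolute test: the difference is 1.0, even though the
        # relative error is only 1e-10 (math.isclose() would accept these)
        with self.assertRaises(AssertionError):
            self.assertAlmostEqual(1e10, 1e10 + 1.0)

suite = unittest.TestLoader().loadTestsFromTestCase(AssertAlmostEqualExample)
result = unittest.TextTestRunner().run(suite)
print(result.wasSuccessful())
```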

A great alternative to math.isclose() for unit testing is the pytest.approx() function from the pytest package. Unlike math.isclose(), it takes a single value, the expected one, and you compare against it with the == operator, which returns whether the two sides are equal within some tolerance:

>>> import pytest
>>> 0.1 + 0.2 == pytest.approx(0.3)
True

Like math.isclose(), pytest.approx() lets you set the relative and absolute tolerances, via the rel and abs keyword arguments. The default values differ, though: rel defaults to 1e-6 and abs to 1e-12.
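These defaults can be overridden; note that pytest.approx() spells its keyword arguments rel and abs. A quick sketch with illustrative tolerances:

```python
import pytest

# Tighten the relative tolerance and the comparison fails
print(0.1 + 0.2 == pytest.approx(0.3, rel=1e-20, abs=0.0))  # → False

# Loosen the absolute tolerance and comparisons near zero succeed
print(1e-10 == pytest.approx(0.0, abs=1e-9))                # → True
```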

If the value passed to pytest.approx() is array-like (a Python iterable such as a list or tuple, or even a NumPy array), then pytest.approx() behaves like numpy.allclose() and returns whether the two arrays are equal within the tolerance:

>>> import numpy as np
>>> np.array([0.1, 0.2]) + np.array([0.2, 0.4]) == pytest.approx(np.array([0.3, 0.6]))
True

pytest.approx() even works with dictionary values:

>>> {'a': 0.1 + 0.2, 'b': 0.2 + 0.4} == pytest.approx({'a': 0.3, 'b': 0.6})
True

Floating point numbers are great for working with numbers where absolute precision is not required. They are fast and efficient in terms of memory consumption. But, if precision is needed, there are a number of alternatives to floats to consider.

Exact floating point alternatives

Python has two built-in numeric types that provide full precision in situations where floating point numbers are inappropriate: Decimal and Fraction.

Decimal type

The Decimal type can store decimal values with exactly the precision you need. By default it keeps 28 significant digits, though this can be changed to suit the task at hand:

>>> # Import the Decimal type from the decimal module
>>> from decimal import Decimal

>>> # Values are represented exactly so no rounding error occurs
>>> Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
True

>>> # By default 28 significant figures are preserved
>>> Decimal(1) / Decimal(7)
Decimal('0.1428571428571428571428571429')

>>> # You can change the significant figures if needed
>>> from decimal import getcontext
>>> getcontext().prec = 6  # Use 6 significant figures
>>> Decimal(1) / Decimal(7)
Decimal('0.142857')

To learn more about the Decimal type, see the Python documentation.

Fraction type

Another alternative to floating point numbers is the Fraction type, which can store rational numbers exactly. This fixes the representation errors that occur with floating point numbers:

>>> # import the Fraction type from the fractions module
>>> from fractions import Fraction

>>> # Instantiate a Fraction with a numerator and denominator
>>> Fraction(1, 10)
Fraction(1, 10)

>>> # Values are represented exactly so no rounding error occurs
>>> Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)
True
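One pitfall worth noting (my own addition, not from the text above): constructing a Fraction from a float captures the float's representation error, so build Fractions from integers or strings instead:

```python
from fractions import Fraction

# Built from a float, the Fraction inherits the rounding error of 0.1
print(Fraction(0.1))    # → 3602879701896397/36028797018963968

# Built from a string or from integers, it is exact
print(Fraction("0.1"))  # → 1/10
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # → True
```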

The Fraction and Decimal types have many advantages over standard floating point values. But there are also disadvantages: lower speed and increased memory consumption.

If you don’t need absolute precision, it’s best to stick with floating point numbers. But in financial and other mission-critical applications, the disadvantages of the Fraction and Decimal types can be an acceptable trade-off.
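The trade-off is easy to measure. A rough sketch with timeit and sys.getsizeof; the exact figures depend on your machine and Python version, so treat them as illustrative only:

```python
import sys
import timeit
from decimal import Decimal
from fractions import Fraction

# Memory: Decimal and Fraction objects are larger than a plain 64-bit float
print(sys.getsizeof(0.1))              # float
print(sys.getsizeof(Decimal("0.1")))   # Decimal
print(sys.getsizeof(Fraction(1, 10)))  # Fraction

# Speed: time 100,000 additions of each type
t_float = timeit.timeit("a + b", setup="a, b = 0.1, 0.2", number=100_000)
t_decimal = timeit.timeit(
    "a + b",
    setup="from decimal import Decimal; a, b = Decimal('0.1'), Decimal('0.2')",
    number=100_000,
)
t_fraction = timeit.timeit(
    "a + b",
    setup="from fractions import Fraction; a, b = Fraction(1, 10), Fraction(2, 10)",
    number=100_000,
)
print(f"float: {t_float:.4f}s, Decimal: {t_decimal:.4f}s, Fraction: {t_fraction:.4f}s")
```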


Floating point values are both a boon and a curse. They provide fast arithmetic operations and efficient memory use at the cost of imprecise representation. From this article you learned:

  • Why floating point numbers are imprecise.

  • Why floating point representation errors are common.

  • How to correctly compare floating point values.

  • How to represent numbers exactly using the Fraction and Decimal types.

Want to learn more about numbers in Python? For example, did you know that int is not the only integer type in Python? Find out what else there is, along with other little-known facts about numbers, in my article.
