This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in the Python Developer's Guide.

classification
Title: Division error
Type: behavior
Stage: resolved
Components: Interpreter Core
Versions: Python 3.8, Python 3.7

process
Status: closed
Resolution: not a bug
Dependencies:
Superseder:
Assigned To:
Nosy List: Fenn Ehk, mark.dickinson, steven.daprano
Priority: normal
Keywords:

Created on 2020-06-20 17:45 by Fenn Ehk, last changed 2022-04-11 14:59 by admin. This issue is now closed.

Messages (3)
msg371948 - Author: Fenn Ehk (Fenn Ehk) Date: 2020-06-20 17:45
Some basic calculations give the wrong result:
0.4 + 8/100
Out[43]: 0.48000000000000004

0.3 + 8/100
Out[44]: 0.38

I thought it could be processor-related and tried the same operation in R, where the result was correct. So I also tried it on some online REPLs:
https://repl.it/languages/python3
https://www.learnpython.org/en/Basic_Operators

The bug is there too; it seems to exist in 3.7.6 and 3.8.3 (and probably all versions in between).
Other examples of the error:
0.3 + 8/100
Out[50]: 0.38

0.4 + 8/100
Out[51]: 0.48000000000000004

0.4 + a
Out[52]: 0.48000000000000004

0.4 + 9/100
Out[53]: 0.49

0.7 + 9/100
Out[54]: 0.7899999999999999

0.7 + 10/100
Out[55]: 0.7999999999999999

0.7 + 10/100
Out[56]: 0.7999999999999999

0.7 + 11/100
Out[57]: 0.8099999999999999

0.7 + 12/100
Out[58]: 0.82

0.8 + 8/100
Out[59]: 0.88

0.8 + 9/100
Out[60]: 0.89

0.6 + 9/100
Out[61]: 0.69

0.7 + 9/100
Out[62]: 0.7899999999999999
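
An illustrative aside (a sketch, not part of the original report): the pattern above is not random. Whether the extra digits show up depends only on which of the two neighbouring binary doubles each sum happens to round to:

>>> 0.3 + 8/100            # rounds to the same double as the literal 0.38 ...
0.38
>>> (0.3 + 8/100) == 0.38
True
>>> 0.4 + 8/100            # ... while this rounds to a neighbour of 0.48
0.48000000000000004
>>> (0.4 + 8/100) == 0.48
False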
msg371952 - Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2020-06-20 18:00
This isn't a bug in Python; it's a consequence of the what-you-see-is-not-what-you-get nature of binary floating-point.

The behaviour is explained in the tutorial, here: https://docs.python.org/3/tutorial/floatingpoint.html
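
A minimal sketch of what the tutorial describes (using only the standard library): neither 0.4 nor 8/100 is stored exactly, and the fractions module can show the exact binary values that are stored instead:

>>> from fractions import Fraction
>>> Fraction(0.4)                        # the exact value stored for the literal 0.4
Fraction(3602879701896397, 9007199254740992)
>>> Fraction(0.4) == Fraction(2, 5)      # ... which is not exactly 2/5
False
>>> Fraction(8/100) == Fraction(8, 100)  # 8/100 is not exactly 0.08 either
False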
msg371954 - Author: Steven D'Aprano (steven.daprano) * (Python committer) Date: 2020-06-20 18:14
Further to what Mark said, I'm afraid you are mistaken in thinking that "the result was correct" in R. R cheats by not printing the full precision of the number; it simply stops printing digits, giving a false impression of accuracy. You can prove this for yourself:

> 0.4 + 8/100
[1] 0.48
> (0.4 + 8/100) == 0.48
[1] FALSE

So even though the printed result *looks* like 0.48, it actually isn't. If you investigate carefully, you will probably find that the number R calculates is the same as the one Python calculates.

And the same as Javascript:

js> 0.4 + 8/100
0.48000000000000004

and pretty much every programming language that uses 64-bit floats.


BTW, this is a FAQ:

https://docs.python.org/3/faq/design.html#why-are-floating-point-calculations-so-inaccurate

There are a ton of other resources on the web explaining this, since it occurs virtually everywhere, in every language with fixed-precision floating-point numbers. For example:

https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

https://randomascii.wordpress.com/2012/05/20/thats-not-normalthe-performance-of-odd-floats/

https://randomascii.wordpress.com/2012/04/05/floating-point-complexities/
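
To round out the point with a Python sketch (standard library only, and assuming R's default of seven significant digits of display): the same short display can be reproduced with string formatting, and the decimal module provides exact decimal arithmetic when that is what you actually need:

>>> x = 0.4 + 8/100
>>> print(f"{x:.7g}")      # an R-style short display hides the error
0.48
>>> x                      # Python's repr shows the shortest string that round-trips
0.48000000000000004
>>> from decimal import Decimal
>>> Decimal('0.4') + Decimal('8') / Decimal('100')
Decimal('0.48')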
History
Date                 User            Action  Args
2022-04-11 14:59:32  admin           set     github: 85229
2020-06-20 18:14:59  steven.daprano  set     nosy: + steven.daprano; messages: + msg371954
2020-06-20 18:00:07  mark.dickinson  set     status: open -> closed; nosy: + mark.dickinson; messages: + msg371952; resolution: not a bug; stage: resolved
2020-06-20 17:45:23  Fenn Ehk        create