This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

Title: Incorrect type casting of float into int
Type: compile error Stage: resolved
Components: Windows Versions: Python 3.11, Python 3.10, Python 3.9
Status: closed Resolution: not a bug
Dependencies: Superseder:
Assigned To: Nosy List: hbutt4319, paul.moore, steve.dower, tim.golden, tim.peters, zach.ware
Priority: normal Keywords:

Created on 2021-05-04 16:58 by hbutt4319, last changed 2022-04-11 14:59 by admin. This issue is now closed.

Messages (2)
msg392919 - (view) Author: Backbench Family (hbutt4319) Date: 2021-05-04 16:58
>>> int(1.999999999999999)   # fifteen 9s after the decimal point
1
>>> int(1.9999999999999999)  # sixteen 9s after the decimal point
2

It shows 1 when we type fifteen 9s after the decimal point, whereas when we add a sixteenth 9 the output becomes 2.
msg392928 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2021-05-04 17:36
Please study the docs first:

That will give you the background to understand why `int()` has nothing to do with this.

>>> 1.9999999999999999
2.0

That is, `int()` was passed 2.0 to begin with, because the binary float closest to the decimal value 1.9999999999999999 is in fact 2.0.
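The rounding can be made visible directly, for example with the standard `float.hex` method, which shows the exact binary value the parser produced (a quick illustrative sketch, not part of the original message):

```python
x = 1.9999999999999999      # sixteen 9s: parses to exactly 2.0
print(x == 2.0)             # True -- the rounding happened at parse time, before int() ran
print(x.hex())              # 0x1.0000000000000p+1, i.e. exactly 2.0

y = 1.999999999999999       # fifteen 9s: the nearest double is just below 2.0
print(y < 2.0)              # True
print(int(y))               # 1 -- int() truncates toward zero
```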

If you can't live with that, use the `decimal` module instead:

>>> import decimal
>>> int(decimal.Decimal("1.9999999999999999"))
1
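A short sketch of the difference this makes: constructing `Decimal` from a string preserves the decimal digits exactly, while constructing it from the float literal inherits a value already rounded to 2.0:

```python
from decimal import Decimal

print(int(Decimal("1.9999999999999999")))  # 1: the string is kept exact, int() truncates
print(int(Decimal(1.9999999999999999)))    # 2: the float literal was already rounded to 2.0
print(Decimal(1.9999999999999999))         # 2: Decimal(float) shows the exact binary value
```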
Date                 User            Action  Args
2022-04-11 14:59:45  admin           set     github: 88200
2021-05-04 17:36:05  tim.peters      set     status: open -> closed
                                             nosy: + tim.peters
                                             messages: + msg392928
                                             resolution: not a bug
                                             stage: resolved
2021-05-04 17:27:29  shreyanavigyan  set     versions: + Python 3.9, Python 3.10, Python 3.11, - Python 3.6
2021-05-04 16:58:33  hbutt4319       create