
Author larry
Recipients larry, lemburg, mark.dickinson, rhettinger, serhiy.storchaka, stutzbach, vstinner, vxgmichel
Date 2020-02-10.01:54:48
Message-id <1581299688.7.0.0951990507208.issue39484@roundup.psfhosted.org>
In-reply-to
Content
Aha!  The crucial distinction is that IEEE 754 doubles have 52 bits of storage for the mantissa, but folks (e.g. Wikipedia, Mark Dickinson) describe this as "53 bits of precision" because that's easier than saying "52 bits, but you don't have to store the leading 1 bit".
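(A quick check, not part of the original message: Python's `sys.float_info` reports the significand width directly, and it counts the implicit leading bit.)

```python
import sys

# mant_dig counts significand bits *including* the implicit leading 1,
# so an IEEE 754 double reports 53 even though only 52 bits are stored.
print(sys.float_info.mant_dig)  # 53
```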

To round the bases: the actual physical storage of a double is 1 sign bit + 52 mantissa bits + 11 exponent bits = 64 bits.  The current time in seconds is 31 bits, but we get the leading 1 for free so it only takes up 30 bits of the mantissa.  Therefore we only have 22 bits of precision left for the fractional second, therefore we're 8 bits short of being able to represent every billionth of a second.  We can represent approximately 0.4% of all distinct billionths of a second, which is just sliiightly more than 1/256 (0.39%).
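(The 22-bit budget above can also be read off from the spacing of doubles near that timestamp; a sketch using `math.ulp`, available in Python 3.9+ — my illustration, not the author's code.)

```python
import math

t = 1581261916.0      # seconds since the epoch, circa Feb 2020
ulp = math.ulp(t)     # gap between t and the next representable double

# t lies between 2**30 and 2**31, so the gap is 2**(30 - 52) = 2**-22:
assert ulp == 2.0 ** -22

# Distinct doubles per second, and the fraction of nanoseconds covered:
print(int(1 / ulp))         # 4194304, i.e. 2**22
print(2 ** 22 / 10 ** 9)    # 0.004194304, the "approximately 0.4%"
```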

Just to totally prove it to myself, I wrote a brute-force Python program.  It starts with 1581261916, then for i in range(one_billion) it adds i / one_billion to that number and checks whether the result differs from the previous result.  The result changed 4194304 times, which is exactly 2**22.  QED.
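(The original script isn't included in the message; here is a reconstruction that reaches the same count by walking the representable doubles directly with `math.nextafter` (Python 3.9+) rather than doing a billion additions — my code, not the author's.)

```python
import math

base = 1581261916.0
count = 0
x = base
# Step through every representable double in [base, base + 1):
while x < base + 1.0:
    x = math.nextafter(x, math.inf)
    count += 1

print(count)  # 4194304 == 2**22
```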


p.s. I knew in my heart that I would never *actually* correct Mark Dickinson on something regarding floating point numbers.
History
Date User Action Args
2020-02-10 01:54:48  larry  set  recipients: + larry, lemburg, rhettinger, mark.dickinson, vstinner, stutzbach, serhiy.storchaka, vxgmichel
2020-02-10 01:54:48  larry  set  messageid: <1581299688.7.0.0951990507208.issue39484@roundup.psfhosted.org>
2020-02-10 01:54:48  larry  link  issue39484 messages
2020-02-10 01:54:48  larry  create