
Author tim.peters
Recipients mark.dickinson, rhettinger, serhiy.storchaka, tim.peters
Date 2020-08-19.15:10:54
Content
Here's an amusing cautionary tale:  when looking at correct-rounding failures for the fsum approach, I was baffled until I realized it was actually the _decimal_ method that was failing.  The simplest example I have is 9 instances of b=4.999999999999999, which is 1 ulp less than 5.  Since sqrt(9*b**2) is exactly 3*b, the best-possible hypot is 3*b correctly rounded, but that's not what default decimal precision computes (when rounded back to float).  In fact, it bounces between "off by 1" and "correct" until the decimal precision exceeds 96:

>>> import decimal
>>> b = 4.999999999999999   # 1 ulp less than 5
>>> pfailed = []
>>> for p in range(28, 300):
...     with decimal.localcontext() as c:
...         c.prec = p
...         if float(sum([decimal.Decimal(b) ** 2] * 9).sqrt()) != 3*b:
...             pfailed.append(p)
>>> pfailed
[28, 29, 30, 31, 34, 36, 37, 39, 41, 42, 43, 44, 45, 57, 77, 96]

In a nutshell, it takes in the ballpark of 50 decimal digits to represent b exactly, twice as many to represent its square exactly, and about one more to avoid losing a digit when summing 9 of those.  So if the decimal precision is less than about 100, it can lose info.
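
A rough way to see where the digits go (just a sketch using decimal's exact float conversion; the digit counts are approximate, not exact figures from the failure analysis):

import decimal

b = 4.999999999999999             # 1 ulp below 5
d = decimal.Decimal(b)            # exact conversion of the double
print(len(d.as_tuple().digits))   # ~50 significant digits just to write b exactly

with decimal.localcontext() as c:
    c.prec = 200                  # far more than needed, so the square is exact
    sq = d * d
print(len(sq.as_tuple().digits))  # ~100 digits for b**2; summing 9 of them can need one more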

So the fsum method is actually more reliable (it doesn't lose any info until the fsum step).
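
For concreteness, this is the kind of thing I mean by "doesn't lose any info until the fsum step" - a sketch, not the actual patch: each square is split into an exact hi/lo pair (Dekker multiplication), every piece goes to math.fsum (one correctly-rounded sum), and the final sqrt adds at most one more rounding.

import math

def two_product(x, y):
    # Dekker/Veltkamp splitting: hi + lo == x*y exactly
    # (ignoring overflow/underflow of the intermediates).
    hi = x * y
    c = 134217729.0 * x           # 2**27 + 1
    xh = c - (c - x); xl = x - xh
    c = 134217729.0 * y
    yh = c - (c - y); yl = y - yh
    lo = ((xh * yh - hi) + xh * yl + xl * yh) + xl * yl
    return hi, lo

def hypot_fsum(*coords):
    # Each x*x is represented exactly as hi + lo; fsum adds all the
    # pieces with a single rounding; sqrt rounds once more.
    pieces = []
    for x in coords:
        hi, lo = two_product(x, x)
        pieces += [hi, lo]
    return math.sqrt(math.fsum(pieces))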