
Author tim.peters
Recipients mark.dickinson, rhettinger, serhiy.storchaka, terry.reedy, tim.peters
Date 2020-08-30.05:22:03
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1598764924.08.0.445407505655.issue41513@roundup.psfhosted.org>
In-reply-to
Content
About test_frac.py, I changed the main loop like so:

            got = [float(expected)] # NEW
            for hypot in hypots:
                actual = hypot(*coords)
                got.append(float(actual)) # NEW
                err = (actual - expected) / expected
                bits = round(1 / err).bit_length()
                errs[hypot][bits] += 1
            if len(set(got)) > 1: # NEW
                print(got) # NEW

That is, to display every case where the four float results weren't identical.
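For context, the surrounding harness isn't shown. A minimal, self-contained sketch of the kind of loop being described might look like this — the names `frac_hypot`, `hypots`, `errs`, and `coords` are reconstructions, not the actual test_frac.py. The idea is that `expected` is computed to well beyond double precision with exact `Fraction` arithmetic, and each candidate's relative error is bucketed by roughly how many "good bits" it has:

```python
import math
from fractions import Fraction
from collections import Counter

def frac_hypot(*coords, extra_bits=200):
    """Hypot correct to ~extra_bits beyond the inputs, via exact Fractions.

    sqrt(p/q) == sqrt(p*q)/q, so scaling p*q by 4**extra_bits before
    math.isqrt yields about extra_bits additional good bits.
    """
    s = sum(Fraction(c) ** 2 for c in coords)  # exact sum of squares
    p, q = s.numerator, s.denominator
    return Fraction(math.isqrt(p * q << (2 * extra_bits)), q << extra_bits)

# Candidate implementations under test; here just math.hypot as a
# stand-in (the original compared four variants).
hypots = [math.hypot]
errs = {h: Counter() for h in hypots}

coords = (3.0, 4.0)
expected = frac_hypot(*coords)

got = [float(expected)]
for hypot in hypots:
    actual = hypot(*coords)
    got.append(float(actual))
    err = (Fraction(actual) - expected) / expected
    if err:  # guard: 1/err is undefined when the result is exact (e.g. 3-4-5)
        bits = round(1 / err).bit_length()
        errs[hypot][bits] += 1
if len(set(got)) > 1:
    print(got)
```

Note the `if err:` guard is needed only in this sketch, where Pythagorean-triple inputs make the error exactly zero; in the real harness `expected` is an irrational value carried to hundreds of bits, so the error is tiny but never zero.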

Result: nothing was displayed (although it's still running the n=1000 chunk, there's no sign that will change).  None of these variations made any difference to results users actually get.

Even the "worst" of these reliably develops dozens of "good bits" beyond IEEE double precision, but invisibly (under the covers, with no visible effect on delivered results).

So if there's something else that speeds the code, perhaps it's worth pursuing, but we're already long beyond the point of getting any payback for pursuing accuracy.