Message376098
About test_frac.py, I changed the main loop like so:
got = [float(expected)]            # NEW
for hypot in hypots:
    actual = hypot(*coords)
    got.append(float(actual))      # NEW
    err = (actual - expected) / expected
    bits = round(1 / err).bit_length()
    errs[hypot][bits] += 1
if len(set(got)) > 1:              # NEW
    print(got)                     # NEW
That is, it displays every case where the four float results weren't identical.
Result: nothing was displayed (it's still running the n=1000 chunk, but there's no sign that will change). None of these variations made any difference to the results users actually get.
Even the "worst" of these reliably develops dozens of "good bits" beyond IEEE double precision, but invisibly (under the covers, with no visible effect on delivered results).
So if there's something else that speeds the code, perhaps it's worth pursuing, but we're already long beyond the point of getting any payback for pursuing accuracy.
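For readers who want to reproduce this kind of check, here is a minimal self-contained sketch of such a harness. The names `hypots`, `coords`, `expected`, and `errs` are reconstructed assumptions (the message doesn't show how they were built); the real test compared four hypot implementations, while this sketch tests only `math.hypot`, and it computes the exact reference via a high-precision `Decimal` square root converted to a `Fraction`:

```python
import math
from collections import defaultdict
from decimal import Decimal, getcontext
from fractions import Fraction
from random import randrange

getcontext().prec = 50  # well beyond IEEE double's ~16 significant digits

def exact_hypot(*coords):
    """Reference value: sqrt of the exact sum of squares, as a Fraction."""
    return Fraction(sum((Decimal(c) ** 2 for c in coords), Decimal(0)).sqrt())

def good_bits(actual, expected):
    """Bits of agreement between a computed result and the exact reference."""
    err = (Fraction(actual) - expected) / expected
    if err == 0:
        return math.inf  # exact match
    return abs(round(1 / err)).bit_length()

hypots = [math.hypot]  # stand-in: the real harness compared four variants
errs = defaultdict(lambda: defaultdict(int))

for _ in range(100):
    coords = [randrange(1, 1 << 30) for _ in range(3)]
    expected = exact_hypot(*coords)
    got = [float(expected)]            # NEW
    for hypot in hypots:
        actual = hypot(*coords)
        got.append(float(actual))      # NEW
        errs[hypot][good_bits(actual, expected)] += 1
    if len(set(got)) > 1:              # NEW
        print(got)                     # NEW
```

Note the error measure: `1/err` is large when the relative error is tiny, so its `bit_length()` counts roughly how many leading bits of the result agree with the exact value.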
Date                | User       | Action | Args
2020-08-30 05:22:04 | tim.peters | set    | recipients: + tim.peters, rhettinger, terry.reedy, mark.dickinson, serhiy.storchaka
2020-08-30 05:22:04 | tim.peters | set    | messageid: <1598764924.08.0.445407505655.issue41513@roundup.psfhosted.org>
2020-08-30 05:22:04 | tim.peters | link   | issue41513 messages
2020-08-30 05:22:03 | tim.peters | create |