Message 375721
My apologies if nobody cares about this ;-) I just think it's helpful if we all understand what really helps here.
> Pretty much the whole trick relies on computing
> "sumsq - result**2" to greater than basic machine
> precision.
But why is that necessary at all? Because sqrt never saw the true sum of squares to begin with: its input was a _rounded_-to-double approximation to the sum of squares. The correction step tries to account for the sum-of-squares bits sqrt() didn't see.
So, e.g., instead of doing
result = sqrt(to_float(parts))   # first approximation to sqrt(sumsq)
a, b = split(result)             # result == a + b exactly; a*a, b*b, a*b are exact
parts = add_on(-a*a, parts)      # subtract result**2 exactly, since
parts = add_on(-b*b, parts)      #   result**2 == a*a + b*b + 2*a*b
parts = add_on(-2.0*a*b, parts)
x = to_float(parts)              # sumsq - result**2, beyond native precision
result += x / (2.0 * result)     # one Newton step on r**2 - sumsq == 0
it works just as well to do just
result = float((D(parts[0] - 1.0) + D(parts[1])).sqrt())
where D is the default-precision decimal.Decimal constructor. Then sqrt sees all the sum-of-squares bits we have (although the "+" rounds away those following the 28th decimal digit).
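To make the two approaches concrete, here is a self-contained sketch of both. It is not the code from this issue: the helper names (two_product, hypot_corrected, hypot_decimal) are illustrative, and math.fsum stands in for the thread's add_on/parts accumulator. The squares are made exact with Dekker's two_product (built on a Veltkamp split), no overflow/underflow scaling is attempted, and D is the default-precision Decimal constructor as above.

```python
import math
from decimal import Decimal as D

def split(x):
    # Veltkamp split: x == hi + lo, and hi*hi, lo*lo, hi*lo are all
    # exact in double precision (assumes no overflow; 2**27 + 1 factor).
    t = x * 134217729.0
    hi = t - (t - x)
    return hi, x - hi

def two_product(x, y):
    # Dekker: return (p, e) with p + e == x*y exactly.
    p = x * y
    xh, xl = split(x)
    yh, yl = split(y)
    e = ((xh * yh - p) + xh * yl + xl * yh) + xl * yl
    return p, e

def hypot_corrected(xs):
    # The correction-step scheme: fsum plays the role of the "parts"
    # accumulator, holding the sum of squares to high precision.
    terms = []
    for x in xs:
        terms.extend(two_product(x, x))
    result = math.sqrt(math.fsum(terms))
    # Subtract result**2 exactly via a split, then take one Newton step:
    # result += (sumsq - result**2) / (2*result).
    a, b = split(result)
    diff = math.fsum(terms + [-a * a, -b * b, -2.0 * a * b])
    return result + diff / (2.0 * result)

def hypot_decimal(xs):
    # The Decimal alternative: feed sqrt (nearly) all the sum-of-squares
    # bits as a two-double sum, s + r, converted exactly to Decimal.
    terms = []
    for x in xs:
        terms.extend(two_product(x, x))
    s = math.fsum(terms)             # correctly rounded double sum
    r = math.fsum(terms + [-s])      # residual bits beyond s
    return float((D(s) + D(r)).sqrt())
```

Both functions then agree on easy cases (hypot_corrected([3.0, 4.0]) and hypot_decimal([3.0, 4.0]) both give 5.0), with the Decimal version's "+" rounding the combined sum to the context's 28 significant digits before the correctly rounded Decimal sqrt.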
Date                | User       | Action | Args
2020-08-20 16:26:51 | tim.peters | set    | recipients: + tim.peters, rhettinger, mark.dickinson, serhiy.storchaka
2020-08-20 16:26:50 | tim.peters | set    | messageid: <1597940810.98.0.187117810009.issue41513@roundup.psfhosted.org>
2020-08-20 16:26:50 | tim.peters | link   | issue41513 messages
2020-08-20 16:26:50 | tim.peters | create |