Author tim.peters
Recipients mark.dickinson, rhettinger, serhiy.storchaka, tim.peters
Date 2020-08-20.16:26:50
My apologies if nobody cares about this ;-) I just think it's helpful if we all understand what really helps here.

> Pretty much the whole trick relies on computing
> "sumsq - result**2" to greater than basic machine
> precision.

But why is that necessary at all?  Because we didn't compute the sqrt of the exact sum of the squares to begin with - the input to sqrt was a _rounded_-to-double approximation to that sum. The correction step tries to account for the sum-of-squares bits sqrt() never saw.
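To see that concretely, here's a tiny standalone sketch (plain Python, nothing from mathmodule.c; the inputs and the reference_hypot name are just made up for illustration). It checks that the double x*x + y*y handed to sqrt() has already thrown away low-order bits of the exact sum of squares, and compares the naive sqrt against a high-precision reference:

    import math
    from fractions import Fraction
    from decimal import Decimal, getcontext

    getcontext().prec = 60   # generous working precision for the reference value

    def reference_hypot(x, y):
        # exact sum of squares as a rational, then one high-precision sqrt
        s = Fraction(x) ** 2 + Fraction(y) ** 2
        return float((Decimal(s.numerator) / Decimal(s.denominator)).sqrt())

    x, y = 0.8477, 0.5303                     # arbitrary made-up inputs
    rounded = x * x + y * y                   # the double sqrt() actually sees
    exact = Fraction(x) ** 2 + Fraction(y) ** 2
    print(Fraction(rounded) == exact)         # typically False: low-order bits are gone
    print(math.sqrt(rounded), reference_hypot(x, y))   # may differ in the last bit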

So, e.g., instead of doing

    result = sqrt(to_float(parts))   # sqrt of the rounded-to-double sum of squares
    a, b = split(result)             # split so a*a, b*b, and 2*a*b are exact doubles
    parts = add_on(-a*a, parts)      # subtract result**2 from the extended sum,
    parts = add_on(-b*b, parts)      # piece by piece, without losing the low bits
    parts = add_on(-2.0*a*b, parts)
    x = to_float(parts)              # x ~= sumsq - result**2, beyond basic double precision
    result += x / (2.0 * result)     # one Newton/Heron correction step

it works just as well to do just

    result = float((D(parts[0] - 1.0) + D(parts[1])).sqrt())   # parts[0] was seeded with 1.0, hence the subtraction

where D is the default-precision decimal.Decimal constructor. Then sqrt sees all the sum-of-squares bits we have (although the "+" rounds away those following the 28th decimal digit).
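
For anyone who wants to play with the idea, here's a rough standalone sketch of that approach (again not the real C code: parts is modeled here as a plain (hi, lo) pair from error-tracking summation, without the 1.0 bias the snippet above subtracts out of parts[0], and the rounding error of each x*x itself isn't tracked):

    import math
    from decimal import Decimal as D

    def two_sum(a, b):
        # Knuth's error-free transformation: a + b == s + e exactly
        s = a + b
        z = s - a
        e = (a - (s - z)) + (b - z)
        return s, e

    def sum_of_squares(coords):
        # accumulate the squares as a (hi, lo) pair; lo collects the
        # rounding errors the running sum would otherwise lose
        hi = lo = 0.0
        for x in coords:
            hi, e = two_sum(hi, x * x)
            lo += e
        return hi, lo

    def hypot_via_decimal(coords):
        hi, lo = sum_of_squares(coords)
        # default 28-digit context: Decimal.sqrt() sees essentially all
        # the sum-of-squares bits we have, so no correction step is needed
        return float((D(hi) + D(lo)).sqrt())

    print(hypot_via_decimal([3.0, 4.0]))                              # 5.0
    print(hypot_via_decimal([0.1, 0.2, 0.3]), math.hypot(0.1, 0.2, 0.3))

The default 28-digit context gives Decimal.sqrt() roughly 93 bits to work with, comfortably more than the 53 the final double result needs.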