
Author tim.peters
Recipients mark.dickinson, martin.panter, ned.deily, rhettinger, serhiy.storchaka, steven.daprano, tim.peters, vstinner
Date 2016-09-17.04:51:08
Content
Let me give complete code for the last idea, also forcing the scaling multiplication to use the correct context:

    import decimal
    c = decimal.DefaultContext.copy()
    c.prec = 25
    c.Emax = decimal.MAX_EMAX
    c.Emin = decimal.MIN_EMIN

    def erootn(x, e, n,
               D=decimal.Decimal,
               D2=decimal.Decimal(2),
               pow=c.power,
               sub=c.subtract,
               mul=c.multiply,
               div=c.divide):
        # nth root of x * 2**e as a float:  a crude float starting guess,
        # then one Newton correction carried out in 25-digit decimal.
        g = x**(1.0/n) * 2.0**(e/n)
        Dg = D(g)
        return g - float(sub(Dg, div(mul(D(x), pow(D2, e)),
                                     pow(Dg, n-1)))) / n

    del decimal, c

Both pow() instances use an integer exponent, so the C implementation of decimal still avoids expensive exp() and ln() computations - it's all just basic -, *, and / under the covers.
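
For instance (my own quick check, not part of the original message), an integer-exponent power at this scale stays cheap under the same kind of context (rebuilt here, since the snippet above deletes c):

>>> import decimal
>>> c = decimal.DefaultContext.copy()
>>> c.prec = 25; c.Emax = decimal.MAX_EMAX; c.Emin = decimal.MIN_EMIN
>>> c.power(decimal.Decimal(2), 10**9).adjusted()
301029995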

On a 32-bit box, the scaled input (x*2**e) must be <= 9.999...999E425000000, which is about 2**1411819443 (an exponent of about 1.4 billion).  On the close-to-0 side, the usable range bottoms out at approximately the reciprocals of those giant numbers.

The range is much wider on a 64-bit box (because Emax and Emin have much larger absolute values).
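
For reference (my own check, not part of the original message), those limits are easy to inspect:

>>> import decimal
>>> decimal.MAX_EMAX, decimal.MIN_EMIN   # 64-bit build; 425000000 / -425000000 on a 32-bit one
(999999999999999999, -999999999999999999)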

Then, e.g.,

>>> erootn(1, 1411819443, 1411819443)
2.0
>>> erootn(1, -1411819443, 1411819443)
0.5

So it's obviously perfect ;-)  Except that, because it reduces to the previous algorithm when e=0, Mark's counterexample (to correct rounding) still holds:

>>> erootn(1 + 2**-52, 0, 2)
1.0000000000000002

Boosting precision to 28 (from 25) is enough to repair that, but it doesn't bother me: there was never a claim that the result is always correctly rounded.  At prec=25 it went through hours & hours of testing on randomish inputs and `n` values without finding a less-than-perfect case - but those tests effectively always used e=0 too.
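
Here's a minimal sketch of that repair (my own restatement, not part of the original message; the name erootn28 is made up) - the same function with only the precision changed:

    import decimal
    c = decimal.DefaultContext.copy()
    c.prec = 28                # the only change from the prec=25 setup above
    c.Emax = decimal.MAX_EMAX
    c.Emin = decimal.MIN_EMIN

    def erootn28(x, e, n,
                 D=decimal.Decimal,
                 D2=decimal.Decimal(2),
                 pow=c.power,
                 sub=c.subtract,
                 mul=c.multiply,
                 div=c.divide):
        g = x**(1.0/n) * 2.0**(e/n)
        Dg = D(g)
        return g - float(sub(Dg, div(mul(D(x), pow(D2, e)),
                                     pow(Dg, n-1)))) / n

    del decimal, c

With that, Mark's case comes back correctly rounded:

>>> erootn28(1 + 2**-52, 0, 2)
1.0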

Basic error analysis for Newton's method usually goes like this:  suppose the true root is `t` and the current guess is `g = t*(1+e)` for some small `|e|`.  Then plug `t*(1+e)` into the Newton update and do a Taylor expansion around e=0.  In this case, dropping terms at or above cubic in `e` shows that the new guess is about

t*(1 + (n-1)/2 * e**2)

That the relative error goes from `e` to (about) `e**2` (which is typical of Newton's method) is the source of the claim that the number of good bits approximately doubles on each iteration.  In this specific case, it's not quite that good:  the (n-1)/2 factor means convergence becomes slower the larger `n` is.  But `e` in this case is perhaps on the order of 2**-50, and `(n-1)/2` times 2**-100 (e**2) remains tiny for any plausible value of `n`.
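
Here's a quick numerical check of that model (my own sketch, not part of the original message), using 60-digit Decimal arithmetic as the reference and a deliberately sloppy starting guess:

    from decimal import Decimal, getcontext

    getcontext().prec = 60
    n, A = 1000, Decimal(7)
    t = A ** (Decimal(1) / n)               # reference n-th root, good to 60 digits
    e = Decimal("1e-15")                    # relative error of the starting guess
    g = t * (1 + e)
    g_new = g - (g - A / g**(n - 1)) / n    # one Newton step for y**n = A
    print((g_new - t) / t)                  # actual new relative error
    print((n - 1) / Decimal(2) * e**2)      # predicted (n-1)/2 * e**2; both come out near 5e-28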

Good enough for me ;-)