
Author tim.peters
Recipients mark.dickinson, martin.panter, ned.deily, rhettinger, steven.daprano, tim.peters, vstinner
Date 2016-08-27.02:49:24
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <>
I don't care about correct rounding here, but it is, e.g., a bit embarrassing that

>>> 64**(1/3)
3.9999999999999996

That's a result you may or may not see on your box, depending on your platform pow(), but it's what you "should" see:  1/3 is not a third, it's a binary approximation that's a little less than 1/3, so 64 to that power should be less than 4.
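That claim about 1/3 is easy to check directly, since converting a float to a Fraction is exact (a quick illustration, not part of the original message):

```python
from fractions import Fraction

# Converting a float to Fraction is exact, so this exposes the true value
# of the binary approximation you get by typing 1/3.
approx = Fraction(1 / 3)
exact = Fraction(1, 3)

# The approximation is strictly less than a third, so a correctly-rounded
# pow() must return something strictly less than 4 for 64**(1/3).
assert approx < exact
print(approx - exact)  # a tiny negative gap
```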

There are two practical problems with `nroot`, both already noted:  (1) Its very cheap initial guess comes at the cost of requiring multiple iterations to get a guess good to 54 bits.  That can be repaired, and I already showed how.  (2) The larger `n` is, the more bits it requires.  As Mark noted, at n=50000 we're doing multi-million-bit arithmetic.  That's inherent - and slow.
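A back-of-envelope for that bit growth (my own rough arithmetic, not from the message): to end up with a result good to ~54 bits, exact integer n-th-root extraction has to carry roughly 54*n bits through the intermediate arithmetic:

```python
# Rough cost model (an assumption for illustration): extracting ~54 good
# bits from an exact integer n-th root means working with ~54*n-bit values.
for n in (1000, 5000, 50000):
    print(n, 54 * n)  # n=50000 -> 2,700,000 bits: multi-million-bit integers
```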

My simple fractions.Fraction function suffers the second problem too, but is even slower for large n.
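I don't have the exact Fraction function quoted here; a hypothetical sketch of that style of approach (Newton's method in exact rational arithmetic, with names invented for illustration) shows where the cost comes from:

```python
from fractions import Fraction

def frac_nroot(x, n, steps=3):
    """Hypothetical sketch: n-th root via Newton's method in exact
    rational arithmetic, seeded with the float guess x**(1/n)."""
    g = Fraction(x ** (1.0 / n))  # float-to-Fraction conversion is exact
    for _ in range(steps):
        # Each step roughly squares the error, but g**(n-1) makes the
        # numerator and denominator explode for large n - hence the slowness.
        g = ((n - 1) * g + Fraction(x) / g ** (n - 1)) / n
    return float(g)
```

For example, `frac_nroot(64.0, 3)` comes back as exactly 4.0, but the intermediate rationals grow enormous as n does.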

But if you give up on guaranteed correct rounding, this is really quite easy:  as I noted before, a single Newton iteration approximately doubles the number of "good bits", so _any_ way of getting the effect of extra precision in a single iteration will deliver a result good enough for all practical purposes.  Note that an error bound of strictly less than 1 ulp is good enough to guarantee that the exact result is delivered whenever the exact result is representable.

Here's one way, exploiting that the decimal module has adjustable precision:

    import decimal
    def rootn(x, n):
        c = decimal.getcontext()
        oldprec = c.prec
        try:
            c.prec = 40
            g = decimal.Decimal(x**(1.0/n))
            g = ((n-1)*g + decimal.Decimal(x)/g**(n-1)) / n
            return float(g)
        finally:
            c.prec = oldprec

Now memory use remains trivial regardless of n.  I haven't yet found a case where the result differs from the correctly-rounded `nroot()`, but didn't try very hard.  And, e.g., even at the relatively modest n=5000, it's over 500 times faster than `nroot` (as modified to use a scaled x**(1/n) as an excellent starting guess, so that it too always gets out after its first Newton step).  At n=50000, it's over 15000 times faster.

For n less than about 70, though, `nroot` is faster.  It's plenty fast for me regardless.

Note:  just in case there's confusion about this, `rootn(64.0, 3)` returns exactly 4.0 _not_ necessarily because it's "more accurate" than pow() on this box, but because it sees the exact 3 instead of the approximation to 1/3 pow() sees.  That approximation is perfectly usable to get a starting guess (64**(1/3)) good to nearly 53 bits.  The real improvement comes from the Newton step using _exactly_ 3.
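To make that concrete, here's the same single Newton step spelled out inline for x=64, n=3 (the printed value assumes a typical IEEE-754 double platform):

```python
import decimal

c = decimal.getcontext()
oldprec = c.prec
try:
    c.prec = 40
    x, n = 64.0, 3
    g = decimal.Decimal(x ** (1.0 / n))  # starting guess; may be a hair under 4
    # One Newton step using the *exact* integer exponent 3:
    g = ((n - 1) * g + decimal.Decimal(x) / g ** (n - 1)) / n
finally:
    c.prec = oldprec

print(float(g))  # 4.0 - the step lands on the exactly representable root
```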

That said, I have seen rare cases where this (and `nroot`) give a better result than the platform pow() even for n=2 (where 1/n _is_ exactly representable).
Date User Action Args
2016-08-27 02:49:26tim.peterslinkissue27761 messages
2016-08-27 02:49:24tim.peterscreate