
Author tim.peters
Recipients mark.dickinson, martin.panter, ned.deily, rhettinger, serhiy.storchaka, steven.daprano, tim.peters, vstinner
Date 2016-08-27.05:44:20
Serhiy, I don't know what you're thinking there, and the code doesn't make much sense to me.  For example, consider n=2.  Then m == n, so you accept the initial `g = x**(1.0/n)` guess.  But, as I said, there are cases where that doesn't give the best result, while the other algorithms do.  For example, on this box:

>>> serhiy(7.073208563506701e+46, 2)
>>> pow(7.073208563506701e+46, 0.5)  # exactly the same as your code

>>> nroot(7.073208563506701e+46, 2)  # best possible result
>>> import math
>>> math.sqrt(7.073208563506701e+46) # also achieved by sqrt()
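
One way to check which of the competing results is closer to the true root is to compare against a high-precision reference value.  A minimal sketch (a verification aid, not part of the code under discussion) using the decimal module:

import math
from decimal import Decimal, getcontext

getcontext().prec = 50                 # far more than a double's 53 bits
x = 7.073208563506701e+46
ref = Decimal(x).sqrt()                # reference square root to 50 digits
for name, g in (("pow ", pow(x, 0.5)), ("sqrt", math.sqrt(x))):
    print(name, g, abs(Decimal(g) - ref))

Whichever candidate shows the smaller error is the closer of the two to the true root.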

On general principle, you can't expect to do better than plain pow() unless you do _something_ that gets the effect of using more than 53 mantissa bits - unless the platform pow() is truly horrible.  Using pow() multiple times is useless; doing Newton steps _in_ native float (C double) precision is useless; even some form of "binary search" is useless because just computing g**n (in native precision) suffers rounding errors worse than pow() suffered to begin with.
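To make that concrete, here is a minimal sketch of the kind of thing that _does_ get the effect of extra bits (an illustration only, not the nroot() used above): start from the pow() guess, then do a single Newton step for g**n == x carried out in exact Fraction arithmetic, rounding back to float only at the very end.

from fractions import Fraction

def nroot_extra_precision(x, n):
    g = Fraction(x ** (1.0 / n))       # platform pow() guess, good to ~1 ulp
    X = Fraction(x)
    # One Newton step for f(g) = g**n - X, done exactly: no rounding
    # happens until the final conversion back to float.
    g -= (g ** n - X) / (n * g ** (n - 1))
    return float(g)                    # rounds once, to nearest/even

Because the pow() guess is already good to a few ulps, the exact quadratic step leaves roughly twice as many correct bits as a double can hold, so the final rounding usually lands on the best possible float - though this sketch doesn't prove correct rounding in every case.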

So long as you stick to native precision, you're fighting rounding errors at least as bad as the initial rounding errors you're trying to correct.  There's no a priori reason to even hope iterating will converge.
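A tiny demonstration of why (just an illustration, with made-up numbers): even for a decent root g, raising it back to the n-th power with native multiplications rounds at every step, so the value a binary search would compare against x is already noisy.

from fractions import Fraction

x = 12345.6789
n = 7
g = x ** (1.0 / n)

native = 1.0
for _ in range(n):
    native *= g                    # every multiply rounds to 53 bits
exact = float(Fraction(g) ** n)    # the same power, rounded exactly once

# If these differ, a "binary search" that tests the native power against x
# is steering by a value that is itself misrounded.
print(native, exact, native == exact)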