This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

Author: tim.peters
Recipients: mark.dickinson, martin.panter, ned.deily, rhettinger, serhiy.storchaka, steven.daprano, tim.peters, vstinner
Date: 2016-08-28 18:08:10
Message-id: <1472407690.88.0.101608161536.issue27761@psf.upfronthosting.co.za>
Content
Steven, you certainly _can_ ;-) check first whether `r**n == x`, but can you prove `r` is the best possible result when it's true?  Offhand, I can't.  I question it because it rarely seems to _be_ true (in well under 1% of the random-ish test cases I tried).  An expensive test that's rarely true tends to make things slower overall rather than faster.

As to whether a Newton step can make things worse, that requires seeing the exact code you're using.  There are many mathematically equivalent ways to code "a Newton step" that have different numeric behavior.  If for some unfathomable (to me) reason you're determined to stick to native precision, then - generally speaking - the numerically best way to code "a correction step" is in the form:

```
r += correction  # where `correction` is small compared to `r`
```

Coding a Newton step here as, e.g., `r = ((n-1)*r + x/r**(n-1))/n` in native precision would be essentially useless: multiple rounding errors show up in the result's last few bits, but the last few bits are the only ones we're _trying_ to correct.

When `correction` is small compared to `r` in `r += correction`, the rounding errors in the computation of `correction` show up in `correction`'s last few bits, which are far removed from `r`'s last few bits (_because_ `correction` is small compared to `r`).  So that way of writing it _may_ be helpful.
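To make the distinction concrete, here is a minimal sketch (not code from this thread; the function names are illustrative) of one native-precision Newton correction step for the n-th root written in the `r += correction` form, together with an exact rational check of the `r**n == x` test via `fractions.Fraction`:

```python
from fractions import Fraction

def nth_root_corrected(x, n):
    """One native-precision Newton correction step for the n-th root of x.

    Newton's method for f(r) = r**n - x gives
        r_new = r - (r**n - x) / (n * r**(n-1))
              = r + (x / r**(n-1) - r) / n
    Writing it as `r += correction` keeps the rounding errors confined to
    `correction`'s last few bits, far below `r`'s last few bits.
    """
    r = x ** (1.0 / n)  # initial guess; its last few bits may be off
    correction = (x / r ** (n - 1) - r) / n
    return r + correction

def is_exact(r, x, n):
    """Check whether r**n == x holds in exact rational arithmetic.

    Floats are dyadic rationals, so Fraction(r) is exact; as the message
    notes, this test is rarely true for random inputs.
    """
    return Fraction(r) ** n == Fraction(x)

# Example: cube root of 2**30; the true root 2**10 = 1024.0 is exactly
# representable, so one correction step should land very close to it.
r = nth_root_corrected(2.0 ** 30, 3)
```

Note that `correction` here is the full Newton update `-f(r)/f'(r)` evaluated directly; the cancellation in `x / r**(n-1) - r` is benign precisely because the two terms nearly agree, leaving a small `correction` whose own rounding noise sits far below `r`'s precision.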
History
Date: 2016-08-28 18:08:10
User: tim.peters
Action: set
Args: recipients: + tim.peters, rhettinger, mark.dickinson, vstinner, ned.deily, steven.daprano, martin.panter, serhiy.storchaka