
Author tim.peters
Recipients mark.dickinson, martin.panter, ned.deily, rhettinger, serhiy.storchaka, steven.daprano, tim.peters, vstinner
Date 2016-09-16.21:51:09
Message-id <1474062669.98.0.223876627108.issue27761@psf.upfronthosting.co.za>
Content
Mark, thanks for the counterexample!  I think I can fairly accuse you of thinking ;-)

I expect the same approach would be zippy for scaling x by 2**e, provided that the scaled value doesn't exceed the dynamic range of the decimal context.  Like so:

import decimal

# Assumed setup (from earlier in this thread): an extended-precision
# context; any precision comfortably beyond float's ~16 digits will do.
c = decimal.getcontext()
c.prec = 40

def erootn(x, e, n,
           D=decimal.Decimal,
           D2=decimal.Decimal(2),
           pow=c.power,
           sub=c.subtract,
           div=c.divide):
    # One Newton step for g**n == x * 2**e, carried out in extended
    # decimal precision:  g  <-  g - (g - (x * 2**e)/g**(n-1)) / n
    g = x**(1.0/n) * 2.0**(e/n)  # force e to float for Python 2?
    Dg = D(g)
    return g - float(sub(Dg, div(D(x)*D2**e, pow(Dg, n-1)))) / n
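
For a quick sanity check, here's a hypothetical comparison against a straight 50-digit Decimal computation of (x * 2**e)**(1/n) via exp(ln(X)/n):

# Hypothetical smoke test: erootn vs. a 50-digit Decimal reference.
def ref_root(x, e, n):
    ctx = decimal.Context(prec=50)
    X = decimal.Decimal(x) * ctx.power(decimal.Decimal(2), e)
    return ctx.exp(ctx.divide(ctx.ln(X), decimal.Decimal(n)))

x, e, n = 1.7, 90, 3
print(erootn(x, e, n))           # corrected float result
print(float(ref_root(x, e, n))) # reference, rounded back to float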

The multiple errors in the native-float-precision starting guess shouldn't hurt.  In this case D(x)*D2**e may not exactly equal the scaled input either, but the error(s) are "way at the end" of the extended-precision decimal format.
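
To illustrate: float-to-Decimal conversion is always exact, while D2**e is rounded once 2**e outgrows the context's precision - but only in the trailing digits. For example:

import decimal
D = decimal.Decimal
with decimal.localcontext(decimal.Context(prec=28)):
    print(D(1.7))               # the exact binary value of the float 1.7
    print(D(2)**100)            # 2**100 has 31 digits; rounded to 28 here
    print(D(2)**100 == 2**100)  # False - the error is in the last digits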

I don't know the range of `e` you have to worry about.  On my box, decimal.MAX_EMAX == 999999999999999999 ... but the docs say that on a 32-bit box it's "only" 425000000.  If forcing the context Emax to that isn't enough, then your code is still graceful but this other approach would need to get uglier.
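
For concreteness, a sketch of forcing the context to its widest exponent range (decimal.MAX_EMAX and decimal.MIN_EMIN are the module's own limits; the precision here is arbitrary) - since 2**e needs a decimal exponent of about 0.302*e, this covers |e| up to roughly Emax/log10(2):

import decimal

# Widen the exponent range so D(x) * D2**e stays representable for huge
# |e|; Overflow is trapped by default, so a too-large e still fails loudly.
c = decimal.Context(prec=40,
                    Emax=decimal.MAX_EMAX,  # 999999999999999999 on 64-bit
                    Emin=decimal.MIN_EMIN)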