msg276754
Mark, thanks for the counterexample! I think I can fairly accuse you of thinking ;-)
I expect the same approach would also be zippy when x is scaled by 2**e, provided the scaled value doesn't exceed the dynamic range of the decimal context. Like so:
import decimal

c = decimal.getcontext()  # assumed: the extended-precision context set up earlier in the thread

def erootn(x, e, n,
           D=decimal.Decimal,
           D2=decimal.Decimal(2),
           pow=c.power,
           sub=c.subtract,
           div=c.divide):
    # native-float starting guess for the n'th root of x*2**e
    g = x**(1.0/n) * 2.0**(e/n)  # force e to float for Python 2?
    Dg = D(g)
    # one Newton correction step carried out in decimal arithmetic
    return g - float(sub(Dg, div(D(x)*D2**e, pow(Dg, n-1)))) / n
The multiple errors in the native-float-precision starting guess shouldn't hurt. In this case D(x)*D2**e may not exactly equal the scaled input either, but the error(s) are "way at the end" of the extended-precision decimal format.
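As a quick sanity check (not part of the original message), here is a hypothetical snippet that bumps the working precision and feeds erootn a case whose exact answer is known; the 50-digit precision and the test values are illustrative assumptions:

import decimal

decimal.getcontext().prec = 50   # illustrative extra precision (assumption)

# 1.0 * 2**90 has the exact cube root 2**30, so the Newton step should not move the guess.
root = erootn(1.0, 90, 3)
print(root, root == 2.0**30)     # 1073741824.0 True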
I don't know the range of `e` you have to worry about. On my box, decimal.MAX_EMAX == 999999999999999999 ... but the docs say that on a 32-bit box it's "only" 425000000. If forcing the context Emax to that isn't enough, then your code is still graceful but this other approach would need to get uglier.
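For completeness, a minimal sketch of widening the context's exponent range as discussed above; using getcontext() and also widening Emin are my assumptions, not something the message prescribes:

import decimal

c = decimal.getcontext()
c.Emax = decimal.MAX_EMAX   # 999999999999999999 on a 64-bit build, 425000000 on 32-bit
c.Emin = decimal.MIN_EMIN   # widen the lower end too, so heavily down-scaled inputs don't underflow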
Date: 2016-09-16 21:51:09
Author: tim.peters
Issue: issue27761