msg272667 (issue27761)
On Sun, Aug 14, 2016 at 10:52:42AM +0000, Mark Dickinson wrote:
> Same deal here: those aren't the actual errors; they're approximations
> to the errors, since the computation of the epsilons depends on (a)
> the usual floating-point rounding, and more significantly (b) the
> accuracy of the `y**n` computation. It's entirely possible that the
> value giving the smaller epsilon is actually the worse of the two
> approximations.
Let's call the two calculated roots y and w, where y is generated by
pow() and w is the "improved" version, and the true mathematical root
is r (which might fall between floats).
You're saying that it might turn out that abs(y**n - x) < abs(w**n - x)
even though abs(w - r) < abs(y - r), in which case picking the root with
the smaller epsilon returns the wrong calculated root.
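For concreteness, here's a minimal sketch of the residual-based
selection I mean (the name best_root is mine, not anything from the
actual patch):

def best_root(x, n, candidates):
    # Pick the candidate c whose n-th power lands closest to x.
    # The residual abs(c**n - x) is only a proxy for the true error
    # abs(c - r), which is exactly how this selection can be fooled.
    return min(candidates, key=lambda c: abs(c**n - x))

# e.g. root = best_root(x, n, [y, w]), with y from pow() and w the
# "improved" value.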
I can see how that could happen, but not what to do about it. I'm
tempted to say "that platform's pow() is weird, and the best I can do
is return whatever it returns". That way you're no worse than if I just
used pow() and didn't try to improve the result at all.
I think this would be a problem if I wanted to make nth_root() a public
function with a guarantee that it will be better than pow(), but at
worst here it just slows the geometric mean down a bit compared to a
naive pow(product, 1/n).
What else can I do? Since I'm only dealing with integer powers, should I
try using my own ipow(y, n) for testing?
def ipow(y, n):
    # Compute y**n for a non-negative integer n by binary
    # exponentiation (repeated squaring).
    x = 1.0
    while n > 0:
        if n % 2 == 1:
            x *= y   # fold in this bit of the exponent
        n //= 2
        y *= y       # square for the next bit
    return x
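As a quick sanity check (again mine, not part of the original message),
ipow should agree with the ** operator to within a few rounding errors:

import math

for y in (1.5, 2.0, 3.7):
    for n in range(10):
        assert math.isclose(ipow(y, n), y**n, rel_tol=1e-12), (y, n)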