Message323392
Not that it matters: "ulp" is a measure of absolute error, but the script is computing some notion of relative error and _calling_ that "ulp". It can understate the true ulp error by up to a factor of 2 (the "wobble" of base 2 fp).
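As an illustration of that wobble (a sketch assuming 64-bit IEEE 754 doubles, as CPython uses; the names `lo`, `hi`, `rel_lo`, `rel_hi` are mine): two values at opposite ends of the same binade can each be off by the same one-ulp absolute error, yet their relative errors differ by nearly a factor of 2:

```python
import sys

eps = sys.float_info.epsilon  # 2**-52, one ulp for any x in [1.0, 2.0)

# Near the bottom of the binade [1.0, 2.0): relative error == one ulp.
lo = 1.0
rel_lo = ((lo + eps) - lo) / lo

# Near the top of the same binade: relative error is about half an ulp.
hi = 2.0 - eps
rel_hi = ((hi + eps) - hi) / hi

print(rel_lo / eps)  # 1.0
print(rel_hi / eps)  # about 0.5
```

So dividing an absolute error by `x` (relative error) shrinks the measured "ulp count" by anywhere from nothing to almost half, depending on where `x` sits in its binade.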
Staying away from denorms, this is an easy way to get one ulp with respect to a specific 754 double:
def ulp(x):
    import math
    mant, exp = math.frexp(x)
    return math.ldexp(0.5, exp - 52)
Then, e.g.,
>>> x
1.9999999999999991
>>> y
1.9999999999999996
>>> y - x
4.440892098500626e-16
>>> oneulp = ulp(x)
>>> oneulp # the same as sys.float_info.epsilon for this x
2.220446049250313e-16
>>> (y - x) / oneulp
2.0
which is the true absolute error of y wrt x, measured in ulps.
>>> x + 2 * oneulp == y
True
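For what it's worth, Python 3.9 added `math.ulp`, which agrees with the hand-rolled version for normal finite doubles. A quick cross-check (a sketch assuming Python 3.9+, re-using the example values above):

```python
import math
import sys

def ulp(x):
    # Same definition as above: one ulp of x, for normal doubles.
    mant, exp = math.frexp(x)
    return math.ldexp(0.5, exp - 52)

x = 1.9999999999999991
y = 1.9999999999999996

assert ulp(x) == math.ulp(x) == sys.float_info.epsilon
assert (y - x) / ulp(x) == 2.0   # true absolute error: 2 ulps
assert x + 2 * ulp(x) == y
```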
But:
>>> (y - x) / x
2.220446049250314e-16
>>> _ / oneulp
1.0000000000000004
understates the true ulp error by nearly a factor of 2, while the mathematically (but not numerically) equivalent spelling used in the script:
>>> (y/x - 1.0) / oneulp
1.0
understates it by exactly a factor of 2.
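The whole comparison packages up into a short script (a sketch using the same two doubles as above; `true_ulps`, `rel1`, and `rel2` are my names): both relative-error spellings report roughly 1 ulp, while the true error is 2 ulps.

```python
import math

def ulp(x):
    mant, exp = math.frexp(x)
    return math.ldexp(0.5, exp - 52)

x = 1.9999999999999991
y = 1.9999999999999996
oneulp = ulp(x)

true_ulps = (y - x) / oneulp     # 2.0: the real absolute error in ulps
rel1 = ((y - x) / x) / oneulp    # just over 1: understates by nearly 2x
rel2 = (y / x - 1.0) / oneulp    # 1.0: understates by exactly 2x here

print(true_ulps, rel1, rel2)
```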
Date | User | Action | Args
2018-08-10 21:18:48 | tim.peters | set | recipients: + tim.peters, rhettinger, mark.dickinson, serhiy.storchaka
2018-08-10 21:18:48 | tim.peters | set | messageid: <1533935928.49.0.56676864532.issue34376@psf.upfronthosting.co.za>
2018-08-10 21:18:48 | tim.peters | link | issue34376 messages
2018-08-10 21:18:48 | tim.peters | create |