Improve accuracy of stdev functions in statistics #90034
Comments
The standard deviation computation in the statistics module is still subject to error even though the mean and sum of square differences are computed exactly using fractions. The problem is that the exact fraction gets rounded to a float before going into math.sqrt(), which also rounds. It would be better to compute a high-precision square root directly from the exact fraction rather than from a fraction rounded to a float. Here are two ways to do it. With CPython, the second way is twice as fast. With PyPy3, the speed is about the same.

```python
import math
from decimal import Decimal
from fractions import Fraction

def frac_sqrt(x: Fraction) -> float:
    'Return sqrt to greater precision than math.sqrt()'
    # Needed because a correctly rounded square root of
    # a correctly rounded input can still be off by 1 ulp.
    # Here we avoid the initial rounding and work directly
    # with the exact fractional input.  The square root
    # is first approximated with math.sqrt() and then
    # refined with a divide-and-average step.  Since the
    # Newton-Raphson algorithm has quadratic convergence,
    # one refinement step gives excellent accuracy.
    a = Fraction(math.sqrt(x))
    return float((x / a + a) / 2)

def deci_sqrt(x: Fraction) -> float:
    ratio = Decimal(x.numerator) / Decimal(x.denominator)
    return float(ratio.sqrt())
```
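As a rough, out-of-thread illustration of the problem being described: assuming that a 60-digit Decimal square root converted to float is a trustworthy correctly-rounded reference, the sketch below counts how often math.sqrt() applied to the pre-rounded float disagrees with that reference. The helper name `ref_sqrt` is invented for this sketch.

```python
import math
import random
from decimal import Decimal, getcontext
from fractions import Fraction

getcontext().prec = 60  # ample guard digits for a binary64 reference

def ref_sqrt(f: Fraction) -> float:
    # Reference: high-precision decimal square root, then a single
    # final rounding to float.
    return float((Decimal(f.numerator) / Decimal(f.denominator)).sqrt())

random.seed(12345)
mismatches = 0
for _ in range(10_000):
    f = Fraction(random.randrange(1, 10**25), random.randrange(1, 10**25))
    naive = math.sqrt(f)  # rounds f to a float first, then sqrt rounds again
    if naive != ref_sqrt(f):
        mismatches += 1
        # Theory says the two results can differ by at most about 1 ulp.
        assert abs(naive - ref_sqrt(f)) <= 2 * math.ulp(ref_sqrt(f))
print(f"{mismatches} of 10000 naive results differed from the reference")
```

The mismatches are never large — the existing code is faithfully rounded — but they show the final double rounding at work.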
I'm not sure this is worth worrying about. We already have a very tight error bound on the result.
One thought: would the deci_sqrt approach help with value ranges where the values are well within float limits, but the squares of the values are not? E.g., on my machine, I currently get errors for both of the following:
It's hard to imagine that there are too many use-cases for values of this size, but it still feels a bit odd to be constrained to only half of the dynamic range of float.
Instead of writing simple, mostly accurate code with math.fsum(), these functions have already applied labor-intensive measures to get an exact mean and exact sum of square differences expressed in fractions. Then at the final step before the square root, they prematurely round to a float. This is easy to fix, and it has a single-step cost comparable to that already paid for each input datum. In a sports analogy, we've run the ball almost the full length of the field and then failed to put the ball over the goal line.

Part of the module's value proposition is that it strives to be more accurate than obvious implementations. The mean, variance, and pvariance functions are correctly rounded. In this regard, the stdev and pstdev functions are deficient, but they could easily be made almost always correctly rounded.
Yes, the Emin and Emax for the default context are already almost big enough:

```python
Context(prec=28, rounding=ROUND_HALF_EVEN, Emin=-999999,
        Emax=999999, capitals=1, clamp=0, flags=[],
        traps=[InvalidOperation, DivisionByZero, Overflow])
```

We could bump that up to fully envelop operations on the sum of squares:

```python
Context(Emin=-9_999_999, Emax=9_999_999, prec=50)
```
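A minimal sketch of how such a widened context might be applied — the context values are the ones from the comment above, while the helper name `deci_sqrt_widened` is invented for illustration:

```python
from decimal import Context, Decimal
from fractions import Fraction

# Widened context: exponent range big enough to cover operations on
# sums of squares of large floats, with extra precision to spare.
_sqrt_ctx = Context(Emin=-9_999_999, Emax=9_999_999, prec=50)

def deci_sqrt_widened(x: Fraction) -> float:
    # Hypothetical variant of deci_sqrt() that performs all Decimal
    # arithmetic in the widened context rather than the default one.
    ratio = _sqrt_ctx.divide(Decimal(x.numerator), Decimal(x.denominator))
    return float(_sqrt_ctx.sqrt(ratio))
```

With the widened exponent range, inputs like Fraction(10**616) — whose value exceeds what a default-context computation of squares could span — still produce a finite, accurate float square root.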
But if we only go from "faithfully rounded" to "almost always correctly rounded", it seems to me that we're still a couple of millimetres away from that goal line. It wouldn't be hard to go for _always_ correctly rounded and actually get it over.
I'm confused: big enough for what? I was thinking of the use-case where the inputs are all floats, in which case an Emax of 999 and an Emin of -999 would already be more than big enough.
Yes, that would be the right thing to do. Does the technique you had in mind involve testing 1 ulp up or down to see whether its square is closer to the input?
Right. In haste, I confused max_exp=1024 with max_10_exp=308. I'm still thinking that the precision needs to be bumped up a bit from the 28-place default.
Kinda sorta. Below is some code: it's essentially just pure integer operations, with a last-minute conversion to float (implicit in the division in the case of the second branch). And it would need to be better tested, documented, and double-checked to be viable.

```python
import math

def isqrt_rto(n):
    """
    Square root of n, rounded to the nearest integer using round-to-odd.
    """
    a = math.isqrt(n)
    return a | (a*a != n)

def isqrt_frac_rto(n, m):
    """
    Square root of n/m, rounded to the nearest integer using round-to-odd.
    """
    quotient, remainder = divmod(isqrt_rto(4*n*m), 2*m)
    return quotient | bool(remainder)

def sqrt_frac(n, m):
    """
    Square root of n/m as a float, correctly rounded.
    """
    quantum = (n.bit_length() - m.bit_length() - 1) // 2 - 54
    if quantum >= 0:
        return float(isqrt_frac_rto(n, m << 2 * quantum) << quantum)
    else:
        return isqrt_frac_rto(n << -2 * quantum, m) / (1 << -quantum)
```
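To make the round-to-odd behaviour concrete, here is a tiny usage sketch of the isqrt_rto() helper above (redefined so the snippet is self-contained): exact squares keep their exact root, while every inexact root is nudged to the nearest odd integer, which later serves as a "sticky" record of inexactness.

```python
import math

def isqrt_rto(n):
    # Square root of n, rounded to the nearest integer using round-to-odd
    # (copied from the comment above).
    a = math.isqrt(n)
    return a | (a * a != n)

print(isqrt_rto(16))  # 4: exact square, result left alone
print(isqrt_rto(17))  # 5: sqrt(17) ~ 4.12, mapped to the nearest odd integer
print(isqrt_rto(25))  # 5: exact square
print(isqrt_rto(26))  # 5: sqrt(26) ~ 5.10, nearest odd integer is 5
```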
Here's the float-and-Fraction-based code that I'm using to compare the integer-based code against:

```python
import math
import fractions

def sqrt_frac2(n, m):
    """
    Square root of n/m as a float, correctly rounded.
    """
    f = fractions.Fraction(n, m)
    # First approximation.
    x = math.sqrt(n / m)
    # Use the approximation to find a pair of floats bracketing the actual sqrt.
    if fractions.Fraction(x)**2 >= f:
        x_lo, x_hi = math.nextafter(x, 0.0), x
    else:
        x_lo, x_hi = x, math.nextafter(x, math.inf)
    # Compare the true square root with the value halfway between the two floats.
    fx_lo, fx_hi = fractions.Fraction(x_lo), fractions.Fraction(x_hi)
    mid = (fx_lo + fx_hi) / 2
    if mid**2 < f:
        return x_hi
    elif mid**2 > f:
        return x_lo
    else:
        # Tricky case: mid**2 == f, so we need to choose the "even" endpoint.
        # Cheap trick: the addition in 0.5 * (x_lo + x_hi) will round to even.
        return 0.5 * (x_lo + x_hi)
```
Should the last line of sqrt_frac() be wrapped with float()?
It's already a float - it's the result of an int / int division.
Hmm. isqrt_frac_rto is unnecessarily complicated. Here's a more streamlined version of the code.

```python
import math

def isqrt_frac_rto(n, m):
    """
    Square root of n/m, rounded to the nearest integer using round-to-odd.
    """
    a = math.isqrt(n*m) // m
    return a | (a*a*m != n)

def sqrt_frac(n, m):
    """
    Square root of n/m as a float, correctly rounded.
    """
    q = (n.bit_length() - m.bit_length() - 109) // 2
    if q >= 0:
        return float(isqrt_frac_rto(n, m << 2 * q) << q)
    else:
        return isqrt_frac_rto(n << -2 * q, m) / (1 << -q)
```
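One way to check the claim of correct rounding (not from the thread; the two functions are copied from the comment above so the snippet stands alone) is to verify, with exact Fraction arithmetic, that sqrt(n/m) lies between the midpoints separating the returned float from its two neighbouring floats. Exact halfway cases would additionally need a round-half-even check, which this sketch deliberately skips.

```python
import math
import random
from fractions import Fraction

def isqrt_frac_rto(n, m):
    # Square root of n/m, rounded to the nearest integer using round-to-odd.
    a = math.isqrt(n * m) // m
    return a | (a * a * m != n)

def sqrt_frac(n, m):
    # Square root of n/m as a float, correctly rounded.
    q = (n.bit_length() - m.bit_length() - 109) // 2
    if q >= 0:
        return float(isqrt_frac_rto(n, m << 2 * q) << q)
    return isqrt_frac_rto(n << -2 * q, m) / (1 << -q)

def check(n, m):
    f = Fraction(n, m)
    y = sqrt_frac(n, m)
    lo = Fraction(math.nextafter(y, 0.0))
    hi = Fraction(math.nextafter(y, math.inf))
    r = Fraction(y)
    # sqrt(n/m) must lie between the midpoints on either side of y,
    # i.e. ((lo+r)/2)**2 <= n/m <= ((r+hi)/2)**2, all computed exactly.
    assert ((lo + r) / 2) ** 2 <= f <= ((r + hi) / 2) ** 2, (n, m, y)

random.seed(0)
for _ in range(1_000):
    check(random.randrange(10 ** random.randrange(1, 40)) + 1,
          random.randrange(10 ** random.randrange(1, 40)) + 1)
print("all checks passed")
```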
I've opened a PR to make this easy to experiment with. It also worked with my frac_sqrt() and deci_sqrt(), but having all-integer arithmetic and always-correct rounding are nice wins. The only downside is that I completely understood the first two variants but am still studying the new one. Perhaps Tim will have a look as well.
As a side effect of inlining the variance code, we also get to fix the error messages, which were variance-specific.
Sorry - I should have added at least _some_ comments to it. Here's the underlying principle, which I think of as "The magic of round-to-odd": Suppose x is a positive real number and e is an integer satisfying 2^e <= x. Then assuming IEEE 754 binary64 floating-point format, the quantity

    2^(e-54) * to-odd(x / 2^(e-54))

rounds to the same value as x does under _any_ of the standard IEEE 754 rounding modes, including the round-ties-to-even rounding mode that we care about here. Here, to-odd is the function R -> Z that maps each integer to itself, but maps each non-integral real number x to the *odd* integer nearest x. (So for example all of 2.1, 2.3, 2.9, 3.0, 3.1, 3.9 map to 3.) This works because to-odd(x / 2^(e-54)) gives us an integer with at least 55 bits: the 53 bits we'll need in the eventual significand, a rounding bit, and then the to-odd supplies a "sticky" bit that records inexactness. Note that the principle works in the subnormal range too - no additional tricks are needed for that. In that case we just end up wastefully computing a few more bits than we actually _need_ to determine the correctly-rounded value.

The code applies this principle to the case x = sqrt(n/m). We have

    2^(n.bit_length() - 1) <= n and m < 2^(m.bit_length())

so

    2^(n.bit_length() - 1 - m.bit_length()) <= n/m

and taking square roots gives

    2^((n.bit_length() - 1 - m.bit_length()) / 2) <= sqrt(n/m)

so taking e = (n.bit_length() - 1 - m.bit_length()) // 2, we have 2^e <= sqrt(n/m). Now putting q = e - 54, we need to compute

    2^q * to-odd(sqrt(n/m) / 2^q)

rounded to a float. The two branches both do this computation, by moving 2^q into either the numerator or denominator of the fraction as appropriate depending on the sign of q.
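A small numeric sketch of the principle itself (not from the thread): build y = 2^(e-54) * to-odd(x / 2^(e-54)) with exact Fraction arithmetic and confirm that y rounds to the same float as x. The helper names `to_odd` and `round_via_odd` are invented for this sketch, and it assumes x >= 1 to keep the choice of e simple.

```python
import math
from fractions import Fraction

def to_odd(x: Fraction) -> int:
    # Maps each integer to itself, and any non-integer to the
    # nearest odd integer.
    if x.denominator == 1:
        return x.numerator
    n = math.floor(x)
    return n if n % 2 else n + 1

def round_via_odd(x: Fraction) -> float:
    # Choose e so that 2**e <= x (valid for x >= 1), as in the
    # comment above, then apply the round-to-odd construction.
    e = math.floor(x).bit_length() - 1
    q = e - 54
    y = Fraction(to_odd(x / Fraction(2) ** q)) * Fraction(2) ** q
    return float(y)

for x in (Fraction(2), Fraction(12345, 17), Fraction(10**40, 7)):
    assert round_via_odd(x) == float(x)
```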
Here's a reference for this use of round-to-odd: https://www.lri.fr/~melquion/doc/05-imacs17_1-expose.pdf I'm happy to provide any proofs that anyone feels are needed.
Thanks Mark. It looks like I'll be getting a little education over the Thanksgiving holiday :-) Shown below is the code that I'm thinking of using to test for correct rounding. Is this the right way to do it?

```python
from random import randrange
from fractions import Fraction

# Verify correct rounding.  Find exact values for half the distance
for i in range(10_000_000):
    numerator: int = randrange(10 ** randrange(40)) + 1
    denominator: int = randrange(10 ** randrange(40)) + 1
    x: Fraction = Fraction(numerator, denominator)
```
Mark, would it be preferable to use ldexp() to build the float?

```python
return math.ldexp(isqrt_frac_rto(n << -2 * q, m), q)
```
Note that, on Windows, ldexp() in the presence of denorms can truncate. Division rounds, so replacing the division with ldexp() can fail.

```python
>>> import math
>>> d = math.nextafter(0.0, 1.0)
>>> d
5e-324
>>> d3 = 7 * d  # .0000...0111
>>> d3
3.5e-323
>>> d3 / 4.0  # rounds
1e-323
>>> math.ldexp(d3, -2)  # truncates
5e-324
```

or, perhaps more dramatically,

```python
>>> d3 / 8.0, math.ldexp(d3, -3)
(5e-324, 0.0)
```
There's also a potential double-rounding issue with ldexp, when the first argument is an int: ldexp(n, e) will first round n to a float, and then (again for results in the subnormal range) potentially also need to round the result.

```python
>>> n = 2**53 + 1
>>> e = -1128
>>> math.ldexp(n, e)
0.0
>>> n / (1 << -e)
5e-324
```

I'm a bit (but only a bit) surprised and disappointed by the Windows issue; thanks, Tim. It seems to be okay on Mac (Intel, macOS 11.6.1):

```python
>>> import math
>>> d = math.nextafter(0.0, 1.0)
>>> d
5e-324
>>> d3 = 7 * d
>>> d3
3.5e-323
>>> d3 / 4.0
1e-323
>>> math.ldexp(d3, -2)
1e-323
```
Instead of calling float(), perhaps do an int/int division to match the other code path, so that the function depends on only one mechanism for building the float result.
[Raymond]

Sure, works for me.
Since I've failed to find a coherent statement and proof of the general principle I articulated anywhere online, I've included one below. (To be absolutely clear, the principle is not new - it's very well known, but oddly hard to find written down anywhere.)

----------------------

Setup: introducing jots

*Notation.* R is the set of real numbers, Z is the set of integers, and the arithmetic operators have their usual exact meanings. Fix a precision p > 0 and an IEEE 754 binary floating-point format with precision p, and let rnd : R -> F be the rounding function corresponding to any of the standard IEEE 754 rounding modes.

*Definition.* For the given fixed precision p, a *jot* is a subinterval of the positive reals of the form (n 2^q, (n+1) 2^q) for some integers n and q with 2^p <= n < 2^(p+1).

This is a definition-of-convenience, invented purely for the purposes of this note. We've chosen the size of a jot so that between each consecutive pair a and b of positive representable values there are exactly two jots, separated by the halfway point (a + b)/2.

Now here's the key point: for values that aren't exactly representable and aren't exactly halfway between two representable values, the rounded result depends only on which jot the value lies in.

*Observation.* If x and y belong to the same jot, then rnd(x) = rnd(y).

This is the point of jots: they represent the wiggle-room that we have to perturb a value without changing the way it rounds.

*Note.* Between any two consecutive *subnormal* values, we have 4 or more jots, so the Observation applies in the subnormal range too.

Here's a lemma that we'll need shortly.

*Lemma.* Suppose that I is an open interval of the form (m 2^e, (m+1) 2^e) with m >= 2^p. Then I is contained in a jot.

*Proof.* If m < 2^(p+1) then this is immediate from the definition. In the general case, let q be the integer satisfying 2^q <= m < 2^(q+1) and put n = floor(m / 2^(q-p)), so that 2^p <= n < 2^(p+1). Then

    n * 2^(e+q-p) <= m * 2^e and (m + 1) * 2^e <= (n + 1) * 2^(e+q-p)

So I is a subinterval of (n * 2^(e+q-p), (n+1) * 2^(e+q-p)), which is a jot.

The magic of round-to-odd

*Definition.* The function to-odd : R -> Z is defined by: to-odd(x) = x if x is an integer, and otherwise to-odd(x) is the odd integer nearest to x.

*Properties.* Some easy monotonicity properties of to-odd, with proofs left to the reader: to-odd is weakly increasing, and for any integer m it maps the open interval (2m, 2m + 2) into itself.

Here's a restatement of the main principle.

*Proposition.* With p and rnd as in the previous section, suppose that x is a positive real number and e is an integer satisfying 2^e <= x. Define

    y = 2^(e-p-1) to-odd(x / 2^(e-p-1))

Then rnd(y) = rnd(x).

Proof of the principle

In a nutshell, we show that either y = x, or x and y belong to the same jot. Either way, since rnd is constant on jots, we get rnd(y) = rnd(x).

Case 1: x = m * 2^(e-p) for some integer m. Then x / 2^(e-p-1) = 2m is an integer, so to-odd leaves it unchanged and y = x.

Case 2: m * 2^(e-p) < x < (m + 1) * 2^(e-p) for some integer m. Then rearranging, 2m < x / 2^(e-p-1) < 2(m+1). So from the monotonicity properties,

    2m < to-odd(x / 2^(e-p-1)) < 2(m+1)

And multiplying through by 2^(e-p-1) we get m * 2^(e-p) < y < (m+1) * 2^(e-p). So both x and y belong to the interval I = (m * 2^(e-p), (m+1) * 2^(e-p)). Furthermore, 2^e <= x < (m + 1) * 2^(e-p), so 2^p < m + 1, and 2^p <= m. So by the Lemma, I is contained in a jot, and rnd(y) = rnd(x).
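A quick numeric illustration of the jot idea for binary64 (p = 53; not part of the proof above): near 1.0, consecutive floats are 2^-52 apart, so the gap between 1.0 and the next float splits into two jots at the halfway point 1 + 2^-53. Everything strictly inside the lower jot rounds down, everything strictly inside the upper jot rounds up, and the jot boundary itself is the halfway case that ties to even.

```python
from fractions import Fraction

ONE_PLUS_ULP = 1.0000000000000002   # math.nextafter(1.0, 2.0) == 1 + 2**-52

# Strictly inside the lower jot (1, 1 + 2**-53): rounds down to 1.0.
assert float(Fraction(2**54 + 1, 2**54)) == 1.0

# Strictly inside the upper jot (1 + 2**-53, 1 + 2**-52): rounds up.
assert float(Fraction(2**54 + 3, 2**54)) == ONE_PLUS_ULP

# The jot boundary is the exact halfway case: ties-to-even picks 1.0.
assert float(Fraction(2**53 + 1, 2**53)) == 1.0
```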
Mark, ya, MS's Visual Studio's ldexp() has, far as I know, always worked this way. The code I showed was run under the 2019 edition, which we use to build the Windows CPython.

Raymond, x = float(i) is screamingly obvious at first glance. x = i/1 looks like a coding error at first. The "reason" for different spellings in different branches looked obvious in the original: one branch needs to divide, and the other doesn't. So the original code was materially clearer to me.

Both, not sure it helps, but this use of round-to-odd appears akin to the decimal module's ROUND_05UP, which rounds an operation result in such a way that, if it's rounded again - under any rounding mode - to a narrower precision, you get the same narrower result as if you had used that rounding mode on the original operation to that narrower precision to begin with. Decimal only needs to adjust the value of the last retained digit to, effectively, "encode" all possibilities, but binary needs two trailing bits. "Round" and "sticky" are great names for them :-)
[Tim]

Objects/longobject.c::long_true_divide() uses ldexp() internally. Will it suffer the same issues with subnormals on Windows? Is CPython int/int true division guaranteed to be correctly rounded?
No, it should be fine. All the rounding has already happened at the point where ldexp is called, and the result of the ldexp call is exact.
Funny you should ask. :-) There's certainly no documented guarantee, and there _is_ a case (documented in comments) where the current code may not return correctly rounded results on machines that use x87: there's a fast path where both numerator and denominator fit into an IEEE 754 double without rounding, and we then do a floating-point division. But we can't hit that case with the proposed code, since the numerator will always have at least 55 bits, so the fast path is never taken.
Doesn't look like it will. In context, looks like it's ensuring that ldexp can only lose trailing 0 bits, so that _whatever_ ldexp does in the way of rounding is irrelevant. But it's not doing this because of Windows - it's to prevent "double-rounding" errors regardless of platform.
If there's some promise of that in the docs, I don't know where it is. But the code clearly intends to strive for correct rounding. Ironically, PEP-238 guarantees that if it is correctly rounded, that's purely by accident ;-). But i/j is emphatically not implemented via float(i)/float(j).
Sketch of proof: Here we have:

```c
shift = Py_MAX(diff, DBL_MIN_EXP) - DBL_MANT_DIG - 2;
```

from which (assuming IEEE 754 as usual) shift >= -1076. (DBL_MIN_EXP = -1021, DBL_MANT_DIG = 53.) Here we round away the last two or three bits of x, after which x is guaranteed to be a multiple of 4. Then after converting the PyLong x to a double dx with exactly the same value, here we make the ldexp call:

```c
result = ldexp(dx, (int)shift);
```

At this point dx is a multiple of 4 and shift >= -1076, so the result of the ldexp scaling is a multiple of 2**-1074, and in the case of a subnormal result, it's already exactly representable.

For the int/int division possibly not being correctly rounded on x87, see here. It won't affect _this_ application, but possibly we should fix this anyway. Though the progression of time is already effectively fixing it for us, as x87 becomes less and less relevant.
Concrete example of int/int not being correctly rounded on systems using x87 instructions: on those systems, I'd expect to see 1/2731 return a result of 0.00036616623947272064 (0x1.7ff4005ffd002p-12), as a result of first rounding to 64-bit precision and then to 53-bit. The correctly-rounded result is 0.0003661662394727206 (0x1.7ff4005ffd001p-12).
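On an SSE2-based machine (i.e. one not using x87 extended precision for the division), the 1/2731 example can be checked directly; the hex constants are the ones quoted above.

```python
# Values from the comment above: what a correctly rounded division gives,
# and what an x87 double-rounded division would give instead.
correctly_rounded = float.fromhex('0x1.7ff4005ffd001p-12')
double_rounded = float.fromhex('0x1.7ff4005ffd002p-12')

result = 1 / 2731
assert result == correctly_rounded   # holds on SSE2-based builds
assert result != double_rounded
print(result)  # 0.0003661662394727206
```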
I would like to see this fixed. It affects our ability to
But I would like to leave it alone. Extended precision simply is not an issue on any current platform I'm aware of ("not even Windows"), and I would, e.g., hate trying to explain to users why

(assuming we're not also proposing to take float division away from the HW). It's A Feature that

whenever I and J are both representable as floats. If extended precision is an issue on some platform, fine, let them speak up. On x87 we could document that CPython assumes the FPU's "precision control" is set to 53 bits.
Raymond, Mark, Tim, I have been reading this whole thread. Thank you all. I am in awe and a little bit intimidated by how much I still have to learn about floating point arithmetic. |
Thank you all for looking at this. It's unlikely that anyone will ever notice the improvement, but I'm happy with it and that's all that matters ;-)