sequence index bug in random.choice #68734
Comments
It seems there is a minor bug in random.choice. I've got a traceback from my server with an IndexError raised by random.choice, but the sequence wasn't empty (the seq value was u'\u0411\u0413\u0414\u0416\u0418\u041b\u0426\u042b\u042d\ …). Maybe I am mistaken, but the only explanation I have for this exception is rounding by int() of a random value that was very close to 1. TL;DR:
>>> int(0.99999999999999995)
1
>>> seq = 'test'
>>> seq[int(0.99999999999999995 * len(seq))] # logic from random.choice
IndexError: string index out of range

Is this a plausible explanation for the exception, or am I wrong? |
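For context, Python 2's random.choice is essentially the one-liner sketched below (a simplified rendering of the stdlib logic, not the exact source). The IndexError the reporter saw can only happen if the scaled product rounds up to len(seq):

```python
import random

def choice_sketch(seq):
    # Simplified logic of Python 2's random.choice(): scale a float in
    # [0.0, 1.0) by len(seq) and truncate. If the product ever rounds
    # up to len(seq), the index is out of range and IndexError is raised.
    return seq[int(random.random() * len(seq))]

print(choice_sketch('test'))
```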
Your example of int(0.99999999999999995) returning 1 is misleading, because 0.999...95 is already 1.0. (1.0 - 1/2**53) = 0.9999999999999999 is the nearest float distinguishable from 1.0. It seems to me that either random() may return 1.0 exactly (although I've never seen it) or that 0.9999999999999999*len(s) rounds up to len(s), which I guess is more likely. Sure enough, that first happens with a string of length 2049:

py> x = 0.9999999999999999

However your string has length 35, and it certainly doesn't happen there:

py> int(x*len(s)) |
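On a machine with correctly rounded IEEE-754 double arithmetic, int(x * i) stays strictly below i for every positive int i, so a scan like the one below (an illustrative check, not taken from the thread) finds nothing; Steven's 32-bit build reported its first failure at i = 2049:

```python
x = 1.0 - 2.0**-53  # the nearest double below 1.0, i.e. 0.9999999999999999

# Look for lengths i where int(x * i) reaches i. With correct single
# rounding there are none; a double-rounding x87 build first fails
# at i == 2049.
failures = [i for i in range(1, 10000) if int(x * i) >= i]
print(failures)
```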
That shouldn't be possible. Although the code does assume C doubles have at least 53 bits of mantissa precision (in which case it does arithmetic that's exact in at least 53 bits - cannot round up to 1.0; but _could_ round up if the platform C double has less than 53 bits of precision).
Very surprising! Which platform & Python is that? The loop runs to completion on my box: Python 2.7.9 (default, Dec 10 2014, 12:28:03) [MSC v.1500 64 bit (AMD64)] on win32 |
FYI, where x = 1.0 - 2.**-53, I believe it's easy to show this under IEEE double-precision arithmetic: for every finite, normal double y > 0.0,

x*y < y

under the default (nearest/even) rounding mode. That implies

int(x*i) < i

for every int i > 0 representable as an IEEE double (including monstrous ints like 123456789 << 300). Which makes your output _extremely_ surprising, Steven ;-) |
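Tim's inequality can be spot-checked empirically with randomly generated normal doubles, kept well away from the overflow and subnormal boundaries where ties behave differently (an empirical check under nearest/even rounding, not a proof):

```python
import math
import random

x = 1.0 - 2.0**-53  # largest double below 1.0

# Spot-check x*y < y for randomly chosen normal doubles: a random
# mantissa in [1.0, 2.0) scaled by a random binary exponent.
for _ in range(100000):
    y = math.ldexp(1.0 + random.random(), random.randint(-1000, 1000))
    assert x * y < y, y
print('no counterexample found')
```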
On Thu, Jul 02, 2015 at 04:05:45AM +0000, Tim Peters wrote:

Python 2.7.2 (default, May 18 2012, 18:25:10)

I get the same result on Python 2.6 and 3.3, but *not* using Jython or … Don't you love floating point mysteries? I cannot imagine how confusing … |
I'm unable to reproduce this using Linux-amd64, Python 2.7.6, gcc 4.8.2. |
On my setup, this number is rounded to 1.0:

>>> float('0.99999999999999995').hex()
'0x1.0000000000000p+0'

CPU: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz |
Steven, there's something wrong with the arithmetic on your machine, but I can't guess what from here (perhaps you have a non-standard rounding mode enabled, perhaps your CPU is broken, ...). In binary, (2**53-1)/2**53 is:

0.11111111111111111111111111111111111111111111111111111

and multiplying that by 2049 is exactly:

100000000000.11111111111111111111111111111111111111111 011111111111

I inserted a blank after the 53rd bit. Because the tail (011111111111) is "less than half", under any rounding mode _except_ "to +inf" that should be rounded to the 53-bit result:

100000000000.11111111111111111111111111111111111111111

That's strictly less than 2049, so int() of that should deliver 2048. For a start, is it the multiplication that's broken on your machine, or the int() part? These are the expected hex values (same as the binary values shown above, but easier to get Python to show - and these should be identical across all 754-conforming boxes using the default rounding mode):

>>> x = 1.-2.**-53
>>> y = 2049.0
>>> x.hex()
'0x1.fffffffffffffp-1'
>>> y.hex()
'0x1.0020000000000p+11'
>>> (x*y).hex()
'0x1.001ffffffffffp+11' |
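Those hex values can be cross-checked with exact rational arithmetic (fractions.Fraction converts a float with no loss, so this snippet assumes only that the single multiply x*y is correctly rounded, i.e. a non-buggy build):

```python
from fractions import Fraction

x = 1.0 - 2.0**-53
y = 2049.0

exact = Fraction(x) * Fraction(y)   # the true product, no rounding at all
assert exact < 2049                 # so a correct int() must give 2048
assert (x * y).hex() == '0x1.001ffffffffffp+11'
print(int(x * y))                   # 2048 on a correctly rounding build
```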
"perhaps you have a non-standard rounding mode enabled" I think Mark added code to python3 at least to handle non-standard rounding modes being set, but I'm not sure. |
On Thu, Jul 02, 2015 at 05:35:53PM +0000, Tim Peters wrote:

It's not just me. Others have confirmed the same behaviour, but only … Thread begins here:

https://mail.python.org/pipermail/python-list/2015-July/693481.html
Looks like multiplication:

>>> x = 1.-2.**-53
>>> assert x.hex() == '0x1.fffffffffffffp-1'
>>> assert (2049.0).hex() == '0x1.0020000000000p+11'
>>> (x*2049.0).hex() # Should be '0x1.001ffffffffffp+11'
'0x1.0020000000000p+11'

I'd like to see what result the OP gets if he runs this. Serge is using … |
Thanks for the legwork, Steven! So far it looks like a gcc bug when using -m32 (whether ints, longs and/or pointers are 4 or 8 bytes _should_ make no difference to anything in Jason Swails's C example). But it may be a red herring anyway: there's only one chance in 2**53 of getting a random.random() result equal to 1.-2.**-53 to begin with. |
I'm guessing this is a "double rounding" problem due to gcc not restricting an Intel FPU to using 53 bits of precision:
The internal Intel "extended precision" format has 64 bits in the mantissa. The last line there has 65 bits in all (53 to the left of the blank, and 12 to the right). Rounding _that_ to fit in 64 bits throws away the rightmost "1" bit, which is "exactly half", and so nearest/even rounding bumps what's left up by 1, leaving the 64-bit:

100000000000.11111111111111111111111111111111111111111 10000000000

in the extended-precision register. Rounding that _again_ to fit in 53 bits then yields the observed 100000000001.0 result. No int i with 0 < i < 2049 produces the same kind of double-rounding surprise. And with that I have to bow out - people have spent many years arguing with gcc developers about their floating-point decisions, and they rarely budge. Why does -m32 have something to do with it? "Just because" and "tough luck" are typically the only responses you'll get ;-) |
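The two-step rounding described here can be simulated exactly with rationals. round_sig below is a nearest/even rounding helper written for this illustration (it is not a stdlib function); rounding the exact product once to 53 bits gives 2048, while rounding to 64 bits and then to 53 reproduces the buggy 2049:

```python
from fractions import Fraction

def round_sig(v, p):
    # Round a positive rational v to p significant bits, ties to even.
    e = v.numerator.bit_length() - v.denominator.bit_length()
    if Fraction(2) ** e <= v:
        e += 1                      # now 2**(e-1) <= v < 2**e
    scaled = v * Fraction(2) ** (p - e)
    n = int(scaled)
    rem = scaled - n
    if rem > Fraction(1, 2) or (rem == Fraction(1, 2) and n % 2):
        n += 1                      # round up, or break the tie to even
    return n * Fraction(2) ** (e - p)

x = Fraction(2**53 - 1, 2**53)      # 1.0 - 2.**-53, exactly
exact = x * 2049                    # the unrounded product

once = round_sig(exact, 53)                   # correct single rounding (SSE2)
twice = round_sig(round_sig(exact, 64), 53)   # x87: 64 bits, then 53 bits
print(int(once), int(twice))        # 2048 vs 2049 -- the double-rounding bump
```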
Should also note that double rounding cannot account for the _original_ symptom here. Double rounding surprises on Intel chips require an exact product at least 65 bits wide, but the OP's sequence is far too short to create such a product. (Steven's 2049 just barely requires 12 bits, which when multiplied by a 53-bit value can require 12+53 = 65 bits to hold the full product.) |
Just to defend gcc. :) I still cannot reproduce Steven's problem. Steven's gcc [GCC 4.1.2 20080704 (Red Hat 4.1.2-52)] is very old; I'd try a modern vanilla version. |
OK, so we don't know what caused the OP's original problem, and at the moment we don't have enough info to figure it out. Serge, you'll have to find some way to get more information on exactly what is failing in order for this issue to make progress. |
People have posted the asm now:

https://mail.python.org/pipermail/python-list/2015-July/693509.html

The fmull result appears to be the same. The good gcc versions use

fmull -0x10(%ebp)

while the bad versions dump the result of fmull to memory, using

fmull 0x18(%esp)

I suspect that there are passages in various standards that allow … |
Agreed with Tim Peters about this not being possible with fully compliant IEEE 754 arithmetic (see http://stackoverflow.com/a/3041071/270986 for a sketch of a proof), but it's certainly a possibility with double rounding, as Steven's result demonstrates. And 32-bit Linux is currently the worst offender for double rounding: IIRC, OS X uses the SSE2 instructions exclusively for floating-point arithmetic, and 64-bit Linux tends to do the same. Windows uses the x87 FPU, but sets the FPU precision to 53 bits, so you don't see double rounding within the normal range (though it's still possible with computations having subnormal results). So I'd say that yes, this *is* a bug in random.choice, though it's one that should show up very rarely indeed. |
Mark, note that the sequence in the OP's original report only contains 35 elements. That, alas, makes "double rounding" irrelevant to this bug report. That is, while random.choice() can suffer double-rounding surprises in _some_ cases, it cannot in the case actually reported here: in the 64-bit extended-precision format, there are at least 64 - (53 + (35).bit_length()) = 5 trailing zeroes in any possible random.random() * 35 result. IOW, all such results are exact in 64-bit arithmetic, so the first "cut back to 64 bits" rounding is a no-op. |
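Tim's trailing-zero count is easy to confirm: every random.random() result is m/2**53 for some integer m with at most 53 bits, so the significand of random() * 35 is m * 35, which always fits in the x87's 64-bit extended mantissa:

```python
# Worst-case significand width of random.random() * 35: the numerator is
# m * 35 with m < 2**53, so it needs at most 53 + 6 = 59 bits, leaving
# 64 - 59 = 5 spare bits in the extended-precision format. The product
# is therefore exact in 64-bit arithmetic, and the first rounding is a no-op.
worst = ((2**53 - 1) * 35).bit_length()
print(worst, 64 - worst)  # 59 bits used, 5 to spare
```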
Tim: yes, I agree that this shouldn't happen for the string posted. |
I would like to close this as "won't fix". And given the infinitesimal probability of occurrence (and long-standing code), I don't think there needs to be a change to the documentation either. |
I've been running this snippet for almost 72 hours now:

import random

s = u"БГДЖИЛЦЫЭ\u042eЯАВЕКМРТ\u042312456789"
while True:
    state = random.getstate()
    try:
        a = random.choice(s)
    except IndexError:
        break

with no results yet. I cannot replicate the original error. |
Raymond, there are (at least) two bugs here:
Nobody knows how rare it is. I suspect, but have not proved, that 1. - 2.**-53 is the only random.random() result for which it's possible that double-rounding can cause int(i * random.random()) == i. Given that unlikely - but possible - value, there are at least half a million ints 0 < i < 1000000000 for which equality holds (but only on platforms where the double rounding bug is possible, which doesn't include any platform I use ;-) ). That probably should get a report of its own. It's unintended and unanticipated behavior, causing simple code to raise a wholly unexpected & unpredictable exception. That's "a bug" to me. |
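Tim's suspicion can be explored with an exact simulation of the two roundings (round_sig is a nearest/even rounding helper written for this sketch, not a stdlib function). Scanning small i confirms that 2049 is the first value where simulated x87 double rounding of (1 - 2**-53) * i reaches i:

```python
from fractions import Fraction

def round_sig(v, p):
    # Round a positive rational v to p significant bits, ties to even.
    e = v.numerator.bit_length() - v.denominator.bit_length()
    if Fraction(2) ** e <= v:
        e += 1
    scaled = v * Fraction(2) ** (p - e)
    n = int(scaled)
    rem = scaled - n
    if rem > Fraction(1, 2) or (rem == Fraction(1, 2) and n % 2):
        n += 1
    return n * Fraction(2) ** (e - p)

x = Fraction(2**53 - 1, 2**53)  # the suspect random() value, exactly

# Simulate x87 behaviour: round the exact product to 64 bits, then to 53,
# and flag every i that the doubly rounded product reaches.
bad = [i for i in range(1, 5000)
       if round_sig(round_sig(x * i, 64), 53) >= i]
print(bad[0])  # 2049 is the smallest such i
```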
I've created a new bpo-24567 for the double-rounding bug. I have taken the liberty of copying the nosy list from this issue to the new one, apologies if that is inappropriate. |
[Tim]
I'm sure this is true. Any other random value is at most 1 - 2**-52, and we're always going to have (1 - 2**-52) * i <= next_below(i), (where * represents multiplication in the rationals, with unrounded result), and since next_below(i) is representable both in the extended 64-bit precision and the target 53-bit precision, neither of the roundings will change that inequality. |
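Mark's inequality can be verified mechanically over a range of i. next_below here is a bit-twiddling helper written for this check (it steps a positive double down by one ulp via its IEEE-754 bit pattern):

```python
import struct
from fractions import Fraction

def next_below(f):
    # The largest double strictly less than the positive double f,
    # obtained by decrementing its IEEE-754 bit pattern.
    bits = struct.unpack('<Q', struct.pack('<d', f))[0]
    return struct.unpack('<d', struct.pack('<Q', bits - 1))[0]

x = Fraction(2**52 - 1, 2**52)  # 1 - 2**-52 as an exact rational
for i in range(1, 100000):
    # The unrounded product never exceeds next_below(i), so neither a
    # 64-bit nor a 53-bit rounding of it can reach i.
    assert x * i <= Fraction(next_below(float(i))), i
print('inequality holds for all i checked')
```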
This exception occurred only once and I can't reproduce it. Additional system specs in attachment. |
Any chance that another program (C-extension) had set the FPU control word? |
Thanks, Mark! That's convincing. Just as a sanity check, I tried all ints in 1 through 4 billion (inclusive) against 1. - 2.**-52, with no failures. Although that was with ad hoc Python code simulating various rounding methods using scaled integers, so may have its own bugs. |
Marking this as closed -- the original bug doesn't seem to be reproducible. See additional reasons and discussion in Issue bpo-24567. |