Message54220
Logged In: YES
user_id=31435
It looks like there are two bugs here. One is that the "integer
addition" detail doesn't make sense, since the user isn't doing
any integer addition here (sorry, no, repr() is irrelevant to
this).
Second, it shouldn't be complaining in the last two cases at
all. If the numbers truly were out of range, then
rangeobject.c's range_new() would have raised a "too many
items" exception. Note:
>>> from sys import maxint as m
>>> xrange(0, m, 2)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
OverflowError: integer addition
>>> xrange(-m, m, 2)
xrange(-2147483647, 2147483647, 2)
>>>
The second xrange() there contains twice as many items as
the first one, but doesn't complain. It's code in
PyRange_New() that's making the bogus complaint, and I
can't figure out what it thinks it's doing.
The code in get_len_of_range() is correct.
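For reference, the length computation that get_len_of_range() performs can be sketched in Python. This is a simplified illustration assuming step > 0 (the real C code does the subtraction and division in unsigned longs precisely so the arithmetic cannot overflow):

```python
def get_len_of_range(lo, hi, step):
    # Number of items in range(lo, hi, step) for step > 0:
    # ((hi - lo - 1) // step) + 1, or 0 for an empty range.
    # The C version does this in unsigned arithmetic so the
    # intermediate subtraction cannot overflow a signed long.
    assert step > 0
    if lo < hi:
        return (hi - lo - 1) // step + 1
    return 0

m = 2**31 - 1  # sys.maxint on a 32-bit build
print(get_len_of_range(0, m, 2))    # 1073741824
print(get_len_of_range(-m, m, 2))   # 2147483647, roughly twice as many
```

Note that both lengths fit comfortably in a C long, which is why neither xrange() call above should be rejected.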
The code in PyRange_New() is both overly permissive (e.g., it
silently lets "(len - 1) * step" overflow) and overly restrictive
(e.g., I can't see why it should matter whether
"last > (PyInt_GetMax() - step)" -- the only thing in that
specific branch that *should* matter is whether the integer
addition "start + (len - 1) * step" overflowed (which it isn't
checking for correctly, even assuming the multiplication didn't
overflow).
The obvious fix for xrange() is to speed range_new() by
throwing away its call to the broken PyRange_New().
range_new() is already doing a precise job of checking
for "too big", and already knows everything it needs to
construct the right rangeobject.
That would leave the PyRange_New() API call with broken
overflow checking, but it's not called from anywhere else in
the core.
Date                | User  | Action | Args
2007-08-23 16:08:18 | admin | link   | issue1003935 messages
2007-08-23 16:08:18 | admin | create |