Author: connelly
Date: 2004-12-06.07:04:14

Yes, I run into this bug all the time.  For example, I may 
want to generate all strings of length n:

BINARY_CHARSET = ''.join([chr(i) for i in range(256)])

def all_strs(n, charset=BINARY_CHARSET):
    # Count from 0 to m**n - 1 and read each counter value as n
    # base-m digits, one character per digit.
    m = len(charset)
    for i in xrange(m**n):  # raises OverflowError once m**n exceeds sys.maxint
        yield ''.join([charset[i // m**j % m] for j in range(n)])

This is algorithmically correct, but it fails because xrange() 
cannot handle long integers: with the default 256-character 
charset, m**n already overflows a C long at n = 4 on a 32-bit 
build.  So I end up writing my own irange() function, along the 
lines of the sketch below.
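
A minimal sketch of what I mean by irange() (the name and 
signature are just illustrative); a plain generator counts with 
ordinary Python integer arithmetic, so longs cost nothing extra:

def irange(start, stop=None, step=1):
    # Illustrative xrange() replacement: a pure-Python generator,
    # so the loop variable can grow into a long with no size limit.
    # Assumes a positive step, which is all I need here.
    if stop is None:
        start, stop = 0, start
    i = start
    while i < stop:
        yield i
        i += step

With irange() in place of xrange(), all_strs() works for any n.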

Other cases where I've run into this problem: sieving for 
primes starting at a given large prime, checking for integer 
solutions to an equation starting from a large integer, and 
searching very large integer sets.

itertools.count and itertools.repeat are similarly annoying.  
Often I want to search all positive integers starting from 1, 
and I have to use an awkward while loop to do this, or write my 
own replacement for itertools.count, like the sketch below.
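
A sketch of such a replacement (the name is illustrative); 
again, a generator side-steps the C-long limit:

def icount(start=1):
    # Illustrative stand-in for itertools.count(): keeps yielding
    # past sys.maxint, silently promoting to a Python long.
    n = start
    while True:
        yield n
        n += 1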

There should be little performance hit for apps that use 
xrange(n) where n fits in the system's integer size.  xrange() 
can just return an iterator that yields system ints, and the 
only penalty is one extra if statement in the call to xrange() 
itself (there is no penalty in the iterator's next() method).
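
A rough sketch of that dispatch in pure Python (the real change 
would be in C, and the names here are illustrative, not a 
proposed API):

import sys

def xrange_any(n):
    # One extra comparison at call time chooses the implementation;
    # the common case still returns the existing xrange() iterator,
    # so its next() is exactly as fast as today.
    if n <= sys.maxint:
        return iter(xrange(n))   # fast path: system ints
    return _long_range(n)        # fallback: Python longs

def _long_range(n):
    # Same counting-generator idea as irange() above.
    i = 0
    while i < n:
        yield i
        i += 1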

Finally, this change appears consistent with Python's values, 
i.e. simplicity, long ints supported with no extra effort, no 
gotchas for newbies, no special cases, etc.
History
Date                 User   Action  Args
2007-08-23 16:08:18  admin  link    issue1003935 messages
2007-08-23 16:08:18  admin  create