Message174551
Regarding the notion of "reasonable": note that this behaviour affects code that works with reasonably sized sequences, even though the length parameter itself is large.
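To make the point concrete, here is a minimal demonstration (assuming an interpreter affected by this issue): the slice itself is tiny; only the length argument is large.

import sys

s = slice(2, 10)  # an ordinary, small slice
try:
    # On affected interpreters this raises OverflowError, even though
    # the resulting (start, stop, step) triple would be small.
    print(s.indices(sys.maxsize + 1))
except OverflowError as exc:
    print("OverflowError:", exc)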
Consider an extremely large array. To work with such an array one would typically break it into small segments, but to simplify the code and reduce bugs it makes sense to use a consistent indexing method for every segment. The size of the length parameter says nothing about the size of a segment. Consider a class which implements virtual arrays:
def __getitem__(self, key):
    # key is the slice passed by the caller; normalise it against the
    # virtual array's length.
    start, stop, step = key.indices(12600000000)
    while True:
        if step > 0 and start >= stop:
            break
        if step < 0 and start <= stop:
            break
        p = pageid(start)        # which page holds this index
        make_page_resident(p)    # fault that page in
        # ... do work on the element at index start ...
        start = start + step
As you can see, slice.indices should not be limited to sys.maxsize. If Python can perform the arithmetic sys.maxsize + 1, then slice.indices(sys.maxsize + 1) should work as well. The purpose of slice.indices is to ensure consistent behaviour of the slicing operator. Another workaround for this bug:
5. write your own implementation of slice.indices
I consider this a workaround. The correct way to handle the index parameter to __getitem__ and __setitem__ is to use slice.indices. That way, if the semantics of slicing change in future versions of Python, your class will behave consistently. It seems to me that this is the main reason why slice.indices exists at all: to prevent inconsistent behaviour when people implement __getitem__ and __setitem__.
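For completeness, here is a minimal sketch of what workaround 5 could look like: a pure-Python routine that mirrors the documented behaviour of slice.indices for ordinary lengths but accepts lengths beyond sys.maxsize. The name slice_indices is purely illustrative, not an existing API.

def slice_indices(s, length):
    # Pure-Python stand-in for s.indices(length), not limited by sys.maxsize.
    step = 1 if s.step is None else s.step
    if step == 0:
        raise ValueError("slice step cannot be zero")

    # Clamping bounds depend on the direction of iteration.
    if step > 0:
        lower, upper = 0, length
    else:
        lower, upper = -1, length - 1

    if s.start is None:
        start = lower if step > 0 else upper
    elif s.start < 0:
        start = max(s.start + length, lower)
    else:
        start = min(s.start, upper)

    if s.stop is None:
        stop = upper if step > 0 else lower
    elif s.stop < 0:
        stop = max(s.stop + length, lower)
    else:
        stop = min(s.stop, upper)

    return start, stop, step

# Matches the built-in for lengths it can handle ...
assert slice_indices(slice(2, 10), 5) == slice(2, 10).indices(5)
# ... and also accepts lengths beyond sys.maxsize.
print(slice_indices(slice(None, None, -1), 12600000000))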
Date: 2012-11-02 18:23:37 | Author: Paul.Upchurch | Issue: issue14794