
Author josh.r
Recipients josh.r, swanson
Date 2015-07-24.00:34:06
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1437698048.27.0.491986326612.issue24700@psf.upfronthosting.co.za>
In-reply-to
Content
You're correct about what is going on; aside from bypassing a bounds check (when not compiled with asserts enabled), the function it uses to fetch each index is the same one used to implement indexing at the Python layer. It looks up the getitem function appropriate to the type code over and over, calls it to box each element into a PyLongObject, and then performs a rich compare on the boxed values.
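A rough Python sketch of what that C loop effectively does (the helper name is hypothetical; the real loop lives in the C implementation of the array module):

```python
from array import array

def slow_eq(a, b):
    # Hypothetical mirror of the C-level comparison loop: for every index,
    # fetch the element via indexing (boxing it into a Python int) and
    # rich-compare the boxed values one pair at a time.
    if len(a) != len(b):
        return False
    return all(a[i] == b[i] for i in range(len(a)))
```

Every iteration pays the cost of boxing both elements and dispatching a full rich comparison, which is why the current behavior is so slow.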

The existing behavior is probably necessary to work with array subclasses, but it's also incredibly slow, as you noticed. The main question is whether to keep the slow path for subclasses, or (effectively) require that array subclasses overriding __getitem__ also override the rich comparison operators to make them work as expected.

For cases where the signedness and element size are identical, it's trivial to acquire read-only buffers for both arrays and compare the memory directly (memcmp for EQ/NE or size-1 elements, wmemcmp for appropriately sized wider elements, and simple loops for anything else).
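As a rough illustration (not the actual patch, which would be C), the matching-type-code fast path can be sketched in Python via raw-buffer comparison, which is a single memcmp-style pass underneath:

```python
from array import array

def fast_eq(a, b):
    # Sketch of the proposed fast path, assuming both operands are exact
    # array instances (no subclass with an overridden __getitem__).
    # Restricted here to integer type codes: for floats, byte equality
    # disagrees with numeric equality (NaN, -0.0), so those would still
    # need an element-wise comparison.
    if (type(a) is array is type(b) and a.typecode == b.typecode
            and a.typecode in 'bBhHiIlLqQ'):
        return a.tobytes() == b.tobytes()  # one flat memory comparison
    # Slow path: per-element rich comparison, as the current code does.
    return len(a) == len(b) and all(x == y for x, y in zip(a, b))
```

Mixed type codes (and anything involving a subclass) fall through to the element-wise loop, preserving the existing semantics.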