
Author serhiy.storchaka
Recipients serhiy.storchaka
Date 2015-02-24.08:08:30
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1424765312.55.0.269687406134.issue23507@psf.upfronthosting.co.za>
In-reply-to
Content
Currently tuple creation uses free lists, but this optimization is not enough. By reusing a cached tuple for the arguments of a repeatedly called function, the performance of some builtins can be significantly increased.

$ ./python -m timeit -s "f = lambda x: x" -s "s = list(range(1000))" -- "list(filter(f, s))"
Unpatched: 1000 loops, best of 3: 773 usec per loop
Patched:   1000 loops, best of 3: 558 usec per loop

$ ./python -m timeit -s "f = lambda x: x" -s "s = list(range(1000))" -- "list(map(f, s))"
Unpatched: 1000 loops, best of 3: 689 usec per loop
Patched:   1000 loops, best of 3: 556 usec per loop

$ ./python -m timeit -s "f = lambda x: x" -s "s = list(range(1000))" -- "sorted(s, key=f)"
Unpatched: 1000 loops, best of 3: 758 usec per loop
Patched:   1000 loops, best of 3: 550 usec per loop

The same effect can be achieved for itertools functions.
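Candidates would be the itertools functions that invoke a Python callable once per element, since each call packs its arguments into a fresh tuple. A sketch of analogous benchmarks (these statements are illustrative choices, not from the original report):

```python
# Illustrative micro-benchmarks for itertools functions that call a
# Python callable once per element; numbers are not from the report.
import itertools
import timeit

f = lambda x: x
s = list(range(1000))

statements = (
    "list(itertools.filterfalse(f, s))",   # calls f(x) for every element
    "list(itertools.starmap(f, zip(s)))",  # unpacks a 1-tuple per call
)
for stmt in statements:
    t = min(timeit.repeat(stmt,
                          globals={"f": f, "s": s, "itertools": itertools},
                          number=200, repeat=3))
    print(f"{stmt}: {t / 200 * 1e6:.1f} usec per loop")
```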

I don't propose to commit this complicated patch, but these results can serve as a guide for optimizing tuple creation. It is surprising to me that this patch has any effect at all.
History
Date User Action Args
2015-02-24 08:08:32  serhiy.storchaka  set  recipients: + serhiy.storchaka
2015-02-24 08:08:32  serhiy.storchaka  set  messageid: <1424765312.55.0.269687406134.issue23507@psf.upfronthosting.co.za>
2015-02-24 08:08:32  serhiy.storchaka  link  issue23507 messages
2015-02-24 08:08:31  serhiy.storchaka  create