Author vstinner
Recipients larry, serhiy.storchaka, vstinner
Date 2019-01-11.10:33:35
Content
$ ./python -m timeit "format('abc')"
Unpatched:  5000000 loops, best of 5: 65 nsec per loop
Patched:    5000000 loops, best of 5: 42.4 nsec per loop

-23 ns on 65 ns: this is very significant! I spent something like six months implementing "FASTCALL" just to avoid allocating a single tuple for passing positional arguments, and that made calls only about 20 ns faster. An additional 23 ns makes the code way faster than Python without FASTCALL: I estimate something like 80 ns => 42 ns, i.e. 2x faster!
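
(For readers who don't know what FASTCALL refers to, here is a minimal sketch, not taken from the CPython sources; the module and function names "demo", "demo_varargs" and "demo_fastcall" are made up for illustration. A METH_VARARGS builtin receives its positional arguments packed into a freshly allocated tuple, while a METH_FASTCALL builtin receives them as a plain C array plus a count, so no per-call tuple is needed.)

#include <Python.h>

/* METH_VARARGS: the caller packs the positional arguments into a tuple,
   and the callee unpacks it again with PyArg_ParseTuple(). */
static PyObject *
demo_varargs(PyObject *module, PyObject *args)
{
    const char *text;
    if (!PyArg_ParseTuple(args, "s", &text)) {
        return NULL;
    }
    return PyUnicode_FromString(text);
}

/* METH_FASTCALL: the positional arguments arrive as a C array plus a
   count, so no temporary tuple has to be allocated for the call. */
static PyObject *
demo_fastcall(PyObject *module, PyObject *const *args, Py_ssize_t nargs)
{
    if (nargs != 1) {
        PyErr_SetString(PyExc_TypeError, "demo_fastcall() takes exactly one argument");
        return NULL;
    }
    return PyObject_Str(args[0]);
}

static PyMethodDef demo_methods[] = {
    {"demo_varargs", demo_varargs, METH_VARARGS, NULL},
    {"demo_fastcall", (PyCFunction)(void (*)(void))demo_fastcall, METH_FASTCALL, NULL},
    {NULL, NULL, 0, NULL},
};

static struct PyModuleDef demo_module = {
    PyModuleDef_HEAD_INIT, "demo", NULL, -1, demo_methods,
};

PyMODINIT_FUNC
PyInit_demo(void)
{
    return PyModule_Create(&demo_module);
}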

$ ./python -m timeit "'abc'.replace('x', 'y')"
Unpatched:  5000000 loops, best of 5: 101 nsec per loop
Patched:    5000000 loops, best of 5: 63.8 nsec per loop

-37 ns on 101 ns: that's even more significant! Wow, that's very impressive!

Please merge your PR, I want it now :-D

Could you also add a short sentence to the Optimizations section of What's New in Python 3.8? Something like: "Parsing positional arguments in builtin functions has been made more efficient." I'm not sure "builtin" is the proper term here, though; maybe "functions that use Argument Clinic to parse their arguments"?
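
(For readers who haven't touched Argument Clinic: below is a rough sketch of what an Argument Clinic input block looks like in a CPython C file, loosely modelled on str.replace in Objects/unicodeobject.c; the real block may be worded slightly differently. Running Tools/clinic/clinic.py over the file regenerates the argument-parsing wrapper from this declaration.)

/*[clinic input]
str.replace as unicode_replace

    old: unicode
    new: unicode
    count: Py_ssize_t = -1
        Maximum number of occurrences to replace.
        -1 (the default value) means replace all occurrences.
    /

Return a copy with all occurrences of substring old replaced by new.
[clinic start generated code]*/

/* The tool emits a generated unicode_replace() wrapper that parses the
   positional arguments and then calls the hand-written implementation: */
static PyObject *
unicode_replace_impl(PyObject *self, PyObject *old, PyObject *new,
                     Py_ssize_t count);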