
Author gregory.p.smith
Recipients gregory.p.smith, methane, nascheme, pablogsal, tianon
Date 2019-07-10.07:40:41
Content
In my experience, people overthink what needs to go into a CPython profiling run.  Sure, our default PROFILE_TASK is rather unfortunate because it takes a very long time, largely by including runs of super-slow tests that won't meaningfully contribute profile data (multiprocessing, subprocess, concurrent_futures, heavy I/O, etc.).
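
If you want to experiment with trimming the profiling run yourself, PROFILE_TASK is just a make variable, so something along these lines should work (a rough sketch; the exact default value and which test names are worth excluding vary by CPython version):

    ./configure --enable-optimizations
    # Override the profiling workload; regrtest's -x excludes the named tests.
    # The excluded test names here are only illustrative.
    make PROFILE_TASK='-m test --pgo -x test_multiprocessing_spawn test_concurrent_futures'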

But despite somewhat popular belief, it is not actually a problem that the suite exercises other sides of branches.  By and large, just executing Python code at all and exercising C extension module code makes the run behave like most any Python program: most of the time is spent on the common critical paths, regardless of tests that trigger a specific number of executions of other paths.  Those executions pale in comparison to the ordinary ones anywhere critical.

I don't recommend making any claim about something "harming" the profile without reliable data to prove it.

Feel free to tune what test.regrtest --pgo enables or disables by default.  But try to do it in a scientific, data-driven manner rather than by guessing.  Decreasing the total wall time of a default --enable-optimizations build would be a good thing for everyone, provided the resulting interpreter remains "effectively similar" in speed.  If you somehow manage to find something that actually speeds up the resulting interpreter, amazing!
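
One way to keep that data-based (a sketch, assuming a Linux box with nproc and illustrative /tmp install prefixes; adjust paths for your setup): time the full optimized build before and after your change to the --pgo test selection, and install both interpreters so you can benchmark them against each other.

    # Baseline: default PGO task.
    ./configure --enable-optimizations --prefix=/tmp/py-baseline
    time make -j"$(nproc)"
    make install
    # Tweaked: after changing the --pgo test selection.
    make distclean
    ./configure --enable-optimizations --prefix=/tmp/py-tweaked
    time make -j"$(nproc)"
    make install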

https://github.com/python/pyperformance is your friend for real world performance measurement.  Patience is key.  The builds and benchmark runs are slow.
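
A rough outline of such a comparison (assuming the two interpreters from the sketch above landed in /tmp/py-baseline and /tmp/py-tweaked; double-check the subcommand names against the pyperformance version you install):

    python3 -m pip install pyperformance
    pyperformance run --python=/tmp/py-baseline/bin/python3 -o baseline.json
    pyperformance run --python=/tmp/py-tweaked/bin/python3 -o tweaked.json
    pyperformance compare baseline.json tweaked.json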

One thing --enable-optimizations does _not_ do today is enable link time optimization by default.  Various toolchain+platform versions were having problems successfully generating a working interpreter with LTO enabled.  If you want a potentially large speedup in the interpreter, figuring out how to get link time optimization working reliably on top of the existing PGO, detecting which toolchain+platform combos it will be reliable on, and enabling it by default on those systems is _likely_ the largest possible gain still to be had.
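
For anyone who wants to experiment with that combination, configure already has a knob for it; something like this is the starting point (whether you get a working, faster interpreter out the other end depends entirely on the toolchain and platform):

    ./configure --enable-optimizations --with-lto
    make -j"$(nproc)"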