
Author: neonene
Date: 2021-10-16 15:26:05

msg402954:
> https://github.com/faster-cpython/tools

Based on the stats from the suggested tools and the pgomgr.exe output, I experimentally moved the LOAD_FAST and LOAD_CONST cases out of the switch, as shown below.

        if (opcode == LOAD_FAST) {
            ...
            DISPATCH();
        }

        if (opcode == LOAD_CONST) {
            ...
            DISPATCH();
        }

        switch (opcode) {
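
To make the pattern concrete, here is a minimal, self-contained sketch of the same hoisting technique on a toy stack machine. This is not CPython's ceval.c: the opcodes, the interpreter, and the use of continue as a stand-in for DISPATCH() are invented for illustration; only the idea of testing the hottest opcodes with plain if checks before the switch is taken from the snippet above.

    /* Toy interpreter sketch: hot opcodes are handled before the switch so
     * the compiler can keep them on the straight-line fast path. */
    #include <stdio.h>

    enum { OP_LOAD_CONST, OP_LOAD_FAST, OP_ADD, OP_PRINT, OP_HALT };

    static void run(const int *code, const int *consts, int *fast)
    {
        int stack[32];
        int sp = 0;
        const int *ip = code;

        for (;;) {
            int opcode = *ip++;
            int oparg = *ip++;

            /* Hot opcodes hoisted out of the switch, as in the patch above. */
            if (opcode == OP_LOAD_FAST) {
                stack[sp++] = fast[oparg];
                continue;               /* stands in for DISPATCH() */
            }
            if (opcode == OP_LOAD_CONST) {
                stack[sp++] = consts[oparg];
                continue;
            }

            /* Everything else stays in the switch. */
            switch (opcode) {
            case OP_ADD:
                sp--;
                stack[sp - 1] += stack[sp];
                break;
            case OP_PRINT:
                printf("%d\n", stack[--sp]);
                break;
            case OP_HALT:
                return;
            }
        }
    }

    int main(void)
    {
        /* Computes fast[0] + consts[0] and prints 42. */
        const int code[] = {
            OP_LOAD_FAST, 0,
            OP_LOAD_CONST, 0,
            OP_ADD, 0,
            OP_PRINT, 0,
            OP_HALT, 0,
        };
        const int consts[] = { 2 };
        int fast[] = { 40 };
        run(code, consts, fast);
        return 0;
    }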


x64 performance results after the patch (MSVC 2019):

Good-inliner versions:
    3.10.0+    1.03x faster than before the patch
    28d28e0~1  1.04x faster
    3.8.12     1.03x faster

Bad-inliner versions (the eval function is too big to inline well; has MSVC 2022 increased the capacity?):
    3.10.0/rc2 1.00x faster
    3.11a1+    1.02x faster


It seems to me that, for quite a while now, the optimizer has been stopping at some point after successful inlining. As a result, performance may be sensitive to code changes, and it might be possible to detect where the optimization is aborted.

(Benchmarks: switch-case_unarranged_bench.txt)