
implement per-opcode cache in ceval #70407

Closed
1st1 opened this issue Jan 27, 2016 · 42 comments
Labels
3.8 only security fixes interpreter-core (Objects, Python, Grammar, and Parser dirs) performance Performance or resource usage

Comments

@1st1
Member

1st1 commented Jan 27, 2016

BPO 26219
Nosy @gvanrossum, @brettcannon, @ncoghlan, @vstinner, @ned-deily, @methane, @markshannon, @ericsnowcurrently, @1st1, @jstasiak, @MojoVampire, @DimitrisJim, @dopplershift, @rowillia
PRs
  • bpo-26219: per opcode cache for LOAD_GLOBAL #12884
  • bpo-26219: remove unused code #13775
  • bpo-26219: Fix compiler warning #13809
  • bpo-26219: Fix compiler warning in _PyCode_InitOpcache() #13997
  • [3.8] bpo-26219: Fix compiler warning in _PyCode_InitOpcache() (GH-13997) #14000
Dependencies
  • bpo-26058: PEP 509: Add ma_version to PyDictObject
  • bpo-26110: Speedup method calls 1.2x
Files
  • opcache1.patch
  • opcache2.patch
  • opcache3.patch
Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = 'https://github.com/1st1'
    closed_at = <Date 2019-06-04.15:15:23.302>
    created_at = <Date 2016-01-27.17:11:27.004>
    labels = ['interpreter-core', '3.8', 'performance']
    title = 'implement per-opcode cache in ceval'
    updated_at = <Date 2019-12-27.19:07:14.427>
    user = 'https://github.com/1st1'

    bugs.python.org fields:

    activity = <Date 2019-12-27.19:07:14.427>
    actor = 'eric.snow'
    assignee = 'yselivanov'
    closed = True
    closed_date = <Date 2019-06-04.15:15:23.302>
    closer = 'vstinner'
    components = ['Interpreter Core']
    creation = <Date 2016-01-27.17:11:27.004>
    creator = 'yselivanov'
    dependencies = ['26058', '26110']
    files = ['41729', '41776', '42658']
    hgrepos = ['333']
    issue_num = 26219
    keywords = ['patch']
    message_count = 42.0
    messages = ['259040', '259042', '259051', '259052', '259060', '259064', '259065', '259344', '259371', '259410', '259522', '259523', '262623', '264528', '264532', '264533', '264534', '264658', '264660', '264664', '264671', '264674', '264676', '310845', '310956', '310960', '310961', '310965', '311028', '344418', '344419', '344421', '344470', '344587', '344591', '345300', '345315', '354089', '354090', '354091', '354122', '354181']
    nosy_count = 15.0
    nosy_names = ['gvanrossum', 'brett.cannon', 'ncoghlan', 'vstinner', 'ned.deily', 'methane', 'Mark.Shannon', 'eric.snow', 'francismb', 'yselivanov', 'jstasiak', 'josh.r', 'Jim Fasarakis-Hilliard', 'Ryan May', 'Roy Williams']
    pr_nums = ['12884', '13775', '13809', '13997', '14000']
    priority = 'normal'
    resolution = 'fixed'
    stage = 'resolved'
    status = 'closed'
    superseder = None
    type = 'performance'
    url = 'https://bugs.python.org/issue26219'
    versions = ['Python 3.8']

    @1st1 1st1 self-assigned this Jan 27, 2016
    @1st1 1st1 added interpreter-core (Objects, Python, Grammar, and Parser dirs) performance Performance or resource usage labels Jan 27, 2016
    @brettcannon
    Member

    I assume there's going to be a patch or more of a description of what your idea is? :)

    @1st1
    Member Author

    1st1 commented Jan 27, 2016

    Yeah, I needed a URL of the issue for my email to python-dev ;)

    Here's a link to the email that explains a lot about this patch: https://mail.python.org/pipermail/python-dev/2016-January/142945.html

    The patch is also attached (opcache1.patch).

    @1st1
    Member Author

    1st1 commented Jan 27, 2016

    BTW, there are a couple of unit-tests that fail. Both can be easily fixed.

    To really move this thing forward, we need to profile the memory usage. First, it would be interesting to see how much additional memory is consumed if we optimize every code object. That will give us an idea of the overhead, and if it is significant, we can experiment with various heuristics to minimize it.
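
    As a quick, purely illustrative sketch (not part of the patch), sys.getsizeof reports the size of a code object itself, which is the baseline that any per-opcode cache overhead would be added on top of:

        import sys

        def sample():
            pass

        # Prints the code object's own size in bytes; the per-opcode cache would
        # add extra memory on top of this for each optimized code object.
        print(sys.getsizeof(sample.__code__))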

    @brettcannon
    Member

    If you run hg.python.org/benchmarks on Linux it has a flag to measure memory.

    @1st1
    Member Author

    1st1 commented Jan 27, 2016

    If you run hg.python.org/benchmarks on Linux it has a flag to measure memory.

    Great. I'll re-run the benchmarks. BTW, the latest results are here: https://gist.github.com/1st1/aed69d63a2ff4de4c7be

    @vstinner
    Member

    If I understand correctly, this change requires waiting until PEP-509 is accepted, right? Well, I'm not really surprised, since a global cache is mentioned as a use case of PEP-509, and other global cache patches are mentioned in its Prior Art section.

    @1st1
    Member Author

    1st1 commented Jan 27, 2016

    Yes, this patch depends on PEP-509.

    @1st1
    Member Author

    1st1 commented Feb 1, 2016

    Attaching a new version of the patch. Issues bpo-26058 and bpo-26110 need to be merged before we can start reviewing it.

    @vstinner
    Member

    vstinner commented Feb 2, 2016

    I'm concerned by the test_descr failure.

    ======================================================================
    ERROR: test_vicious_descriptor_nonsense (test.test_descr.ClassPropertiesAndMethods)
    ----------------------------------------------------------------------

    Traceback (most recent call last):
      File "/home/haypo/prog/python/default/Lib/test/test_descr.py", line 4176, in test_vicious_descriptor_nonsense
        self.assertEqual(c.attr, 1)
    AttributeError: 'C' object has no attribute 'attr'
    
    ----------------------------------------------------------------------
    
    
    

    ======================================================================
    FAIL: test_objecttypes (test.test_sys.SizeofTest)
    ----------------------------------------------------------------------

    Traceback (most recent call last):
      File "/home/haypo/prog/python/default/Lib/test/test_sys.py", line 895, in test_objecttypes
        check(get_cell().__code__, size('5i9Pi3P'))
      File "/home/haypo/prog/python/default/Lib/test/support/__init__.py", line 1475, in check_sizeof
        test.assertEqual(result, size, msg)
    AssertionError: 192 != 160 : wrong size for <class 'code'>: got 192, expected 160
    
    ----------------------------------------------------------------------
    
    
    

    ======================================================================
    FAIL: test_pycfunction (test.test_gdb.PyBtTests)
    Verify that "py-bt" displays invocations of PyCFunction instances
    ----------------------------------------------------------------------

    Traceback (most recent call last):
      File "/home/haypo/prog/python/default/Lib/test/test_gdb.py", line 857, in test_pycfunction
        self.assertIn('#0 <built-in method gmtime', gdb_output)
    AssertionError: '#0 <built-in method gmtime' not found in 'Breakpoint 1 at 0x6518e3: file ./Modules/timemodule.c, line 338.\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library "/lib64/libthread_db.so.1".\n\nBreakpoint 1, time_gmtime (self=<module at remote 0x7ffff7e6ff58>, args=(1,)) at ./Modules/timemodule.c:338\n338\t    if (!parse_time_t_args(args, "|O:gmtime", &when))\n#1 <built-in method gmtime of module object at remote 0x7ffff7e6ff58>\n#4 Frame 0x7ffff7fb27b8, for file <string>, line 3, in foo ()\n#7 Frame 0x7ffff7f423f8, for file <string>, line 5, in bar ()\n#10 Frame 0x7ffff7fb25e0, for file <string>, line 6, in <module> ()\n'

    @1st1
    Member Author

    1st1 commented Feb 2, 2016

    Victor,

    Thanks for the initial review. I'll work on the patch sometime later next week.

    As for test_vicious_descriptor_nonsense -- yeah, I saw that too. I know what's going on there and I know how to fix it. FWIW that test exercises a very counter-intuitive and quirky CPython implementation detail.

    @francismb
    Mannequin

    francismb mannequin commented Feb 3, 2016

    From the two checks in Python/compile.c:

    + expr_ty meth = e->v.Call.func;
    [...]
    + /* Check [...] that
    + the call doesn't have keyword parameters. */
    [...]
    + /* Check that there are no *varargs types of arguments. */
    [...]

    I just wonder how many times those kinds of checks/guards are done
    across the whole CPython code base (the second one seems expensive).

    (Naive idea:)
    Wouldn't some kind of fast function description (e.g. bit flags
    or 'e->v.Call.func.desc') be generally helpful? The function description
    could have flags like 'has_keywords' or 'has_varargs', ...

    Thanks in advance!

    @1st1
    Member Author

    1st1 commented Feb 4, 2016

    From the two checks on Python/compile.c:

    Stuff done in compile.c is cached in *.pyc files.

    The for-loop you saw shouldn't take a lot of time: it iterates through function parameters, and an average function doesn't have more than 3-6 of those.

    @vstinner
    Member

    See the issue bpo-26647, which may make the implementation of this cache simpler.

    See also my message about inline caching:
    https://bugs.python.org/issue26647#msg262622

    @gvanrossum
    Member

    Victor brought this patch to my attention as the motivation for PEP-509. Unfortunately the patch doesn't apply cleanly and I don't have time to try and piece together all the different parts. From the description on python-dev you linked to there are actually a few different patches that all work together and result in a significant speedup. Is there any chance that you could get all this up to date so it applies cleanly to the CPython 3.6 repo (the hg version!), and report some realistic benchmarks? The speedups reported sound great, but I worry that there's a catch (e.g. memory cost that doesn't become apparent until you have a large program, or minimal speedup on realistic code). This is currently holding up approval of PEP-509.

    @1st1
    Member Author

    1st1 commented Apr 29, 2016

    Hi Guido, I'll try to update the patch soon.

    but I worry that there's a catch (e.g. memory cost that doesn't become apparent until you have a large program, or minimal speedup on realistic code).

    Here's an excerpt from my email [1] to Python-dev on memory footprint:

    To measure the max/average memory impact, I tuned my code to optimize
    *every* code object on *first* run. Then I ran the entire Python test
    suite. Python test suite + standard library both contain around 72395
    code objects, which required 20Mb of memory for caches. The test
    process consumed around 400Mb of memory. Thus, in the absolute worst-case
    scenario, the overhead is about 5%.

    [1] https://mail.python.org/pipermail/python-dev/2016-February/143025.html
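
    A quick sanity check of those figures (using only the numbers quoted above):

        # Back-of-the-envelope arithmetic on the quoted numbers.
        code_objects = 72395
        cache_bytes = 20 * 1024 * 1024      # ~20 Mb of opcode caches
        process_bytes = 400 * 1024 * 1024   # ~400 Mb test process

        print(cache_bytes / code_objects)   # roughly 290 bytes of cache per code object
        print(cache_bytes / process_bytes)  # 0.05, i.e. the ~5% worst-case overhead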

    @gvanrossum
    Member

    Thanks, that's a cool stat. Please do update the patch.

    @1st1
    Member Author

    1st1 commented Apr 29, 2016

    Alright, attaching a rebased patch (opcache3.patch).

    Some caveats:

    1. The patch embeds a fairly outdated PEP-509 implementation.

    2. It contains a PoC implementation of the LOAD_METHOD opcode that really should be cleaned up (and some parts of it rewritten).

    3. I was going to spend more time on the ceval.c code to make it lighter and, perhaps, break large chunks of code into small functions.

    All in all, this patch can be used for benchmarking, but it's not yet ready for a thorough code review.

    One last thing: if you flip OPCODE_CACHE_STATS to 1 in ceval.c, Python will print debug stats on the opcode cache.

    @gvanrossum
    Member

    I'm confused by the relationship between this and bpo-26110. That seems to be a much simpler patch (which also doesn't apply cleanly). If 26110 really increases method calls by 20%, what does this add? (By which I mean (a) what additional optimizations does it have, and (b) what additional speedup does it have?)

    I'm asking because I'm still trying to look for reasons why I should accept PEP-509, and this is brought up as a reason.

    @gvanrossum
    Member

    I'm also looking for some example code that would show clearly the kind of speedup we're talking about.

    @1st1
    Member Author

    1st1 commented May 2, 2016

    I'm confused by the relationship between this and bpo-26110.

    This patch embeds the implementation of 26110. I'm no longer sure it was a good idea to have two issues instead of one; everybody seems to be confused about that ;)

    That seems to be a much simpler patch (which also doesn't apply cleanly). If 26110 really increases method calls by 20%, what does this add? (By which I mean (a) what additional optimizations does it have, and (b) what additional speedup does it have?)

    I'm sorry for the long response, please bare with me. This issue is complex, and it's very hard to explain it all in a short message.

    The patch from 26110 implements the LOAD_METHOD/CALL_METHOD pair of opcodes. The idea is that we can avoid instantiation of BoundMethod objects for code that looks like "something.method(...)". I wanted to first get the patch from 26110 into shape, commit it, and then use the patch from this issue to add additional speedups.
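
    (A tiny illustrative Python example of what LOAD_METHOD/CALL_METHOD avoid; this is not code from the patch.)

        class Greeter:
            def hello(self, name):
                return "hi " + name

        g = Greeter()
        bound = g.hello                    # a temporary bound-method object is created here...
        print(bound("world"))              # ...and only then called
        print(Greeter.hello(g, "world"))   # roughly what the optimized opcodes do instead:
                                           # look the function up once on the type and call it
                                           # with g as the first argument, skipping the temporary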

    This patch implements a generic per-opcode cache. Using that cache, it speeds up the LOAD_GLOBAL, LOAD_ATTR, and LOAD_METHOD (from 26110) opcodes. The cache works on a per-code-object basis, only optimizing code objects that have been run more than 1,000 times.

    • LOAD_GLOBAL uses the cache to store pointers for requested names. Since the cache is per-opcode, the name is always the same for a given LOAD_GLOBAL. The cache logic uses PEP-509 to invalidate the cache (although the cache is almost never invalidated for real code).

    This optimization makes micro-optimizations like "def smth(len=len)" obsolete. LOAD_GLOBAL becomes much faster, almost as fast as LOAD_FAST (a conceptual sketch of the caching scheme follows this list).

    • LOAD_ATTR uses Armin Rigo's clever types cache and a modified PyDict_GetItem (PyDict_GetItemHint), which accepts a suggested position of the value in the hash table. Basically, LOAD_ATTR stores in its cache a pointer to the type of the object it works with, that type's tp_version_tag, and a hint for PyDict_GetItemHint. When we have a cache hit, LOAD_ATTR becomes super fast, since it only needs to look up the key/value in the type's dict at a known offset (the real code is a bit more complex, to handle all the edge cases of the descriptor protocol, etc.).

    Python programs have a lot of LOAD_ATTRs. It also seems that dicts of classes are usually stable, and that objects rarely shadow class attributes.

    • LOAD_METHOD is optimized very similarly to LOAD_ATTR. The idea is that if the opcode is optimized, we simply store a pointer to the function we want to call with CALL_METHOD. Since 'obj.method' usually implies that 'method' is implemented on 'obj.__class__', this optimization makes LOAD_METHOD even faster. That's how s.startswith('abc') becomes as fast as s[:3] == 'abc'.
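
    Here is a conceptual sketch of the LOAD_GLOBAL caching scheme described above, written in Python with invented names (the real implementation is C code in ceval.c and reads PEP-509's version tag directly):

        class VersionedDict(dict):
            """Toy stand-in for a dict with PEP-509's ma_version_tag."""
            def __init__(self, *args, **kwargs):
                super().__init__(*args, **kwargs)
                self.version = 0
            def __setitem__(self, key, value):
                super().__setitem__(key, value)
                self.version += 1

        class GlobalCacheEntry:
            """Toy per-opcode cache entry for one LOAD_GLOBAL instruction."""
            def __init__(self):
                self.globals_version = -1
                self.builtins_version = -1
                self.value = None

        def load_global(entry, name, globals_d, builtins_d):
            # Cache hit: neither namespace has changed since the value was cached,
            # so no dict lookup is needed at all.
            if (entry.globals_version == globals_d.version
                    and entry.builtins_version == builtins_d.version):
                return entry.value
            # Slow path: do the normal lookup and refill the cache.
            value = globals_d[name] if name in globals_d else builtins_d[name]
            entry.globals_version = globals_d.version
            entry.builtins_version = builtins_d.version
            entry.value = value
            return value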

    If 26110 really increases method calls by 20%, what does this add?

    Speeds up method calls another 15%, speeds up global name lookups and attribute lookups.

    (b) what additional speedup does it have

    Here're some benchmark results: https://gist.github.com/1st1/b1978e17ee8b82cc6432

    • call_method micro benchmark is 35% faster
    • 2to3 is 7-8% faster
    • richards - 18% faster
    • many other benchmarks are 10-15% faster. Those that appear slower aren't stable, i.e. one run they are slower, another they are faster.

    I'd say each of the above optimizations speeds up macro-benchmarks by 2-4%. Combined, they speed up CPython 7-15%.

    I'm asking because I'm still trying to look for reasons why I should accept PEP-509, and this is brought up as a reason.

    Re PEP-509 and these patches:

    1. I want to first find time to finish up and commit 26110.
    2. Then I'll do some careful benchmarking for this patch and write another update with results.
    3. This patch adds to the complexity of ceval, but if it really speeds up CPython it's well worth it. PEP-509 will need to be approved if we decide to move forward with this patch.

    I'd say that if you aren't sure about PEP-509 right now, then we can wait a couple of months and decide later.

    @gvanrossum
    Member

    OK, I get it. I think it would be really helpful if bpo-26110 was updated, reviewed and committed -- it sounds like a good idea on its own, and it needs some burn-in time due to the introduction of two new opcodes. (That's especially important since there's also a patch introducing wordcode, i.e. bpo-26647, and undoubtedly the two patches will conflict.) It also needs to show benchmark results on its own (but I think you've got that).

    I am also happy to see the LOAD_GLOBAL optimization, and it alone may be sufficient to save PEP-509 (though I would recommend editing the text of the PEP dramatically to focus on a brief but crisp description of the change itself and the use that LOAD_GLOBAL would make of it and the microbench results; it currently is a pain to read the whole thing).

    I have to read up on what you're doing to LOAD_ATTR/LOAD_METHOD. In the mean time I wonder how that would fare in a world where most attr lookups are in namedtuples.

    As a general recommendation I would actually prefer more separate patches (even though it's harder on you), just with very clearly stated relationships between them.

    A question about the strategy of only optimizing code objects that are called a lot. Doesn't this mean that a "main" function containing a major inner loop doesn't get the optimization it might deserve?

    PS. I like you a lot but there's no way I'm going to "bare" with you. :-)

    @1st1
    Member Author

    1st1 commented May 2, 2016

    OK, I get it. I think it would be really helpful if bpo-26110 was updated, reviewed and committed -- it sounds like a good idea on its own, and it needs some burn-in time due to the introduction of two new opcodes. (That's especially important since there's also a patch introducing wordcode, i.e. bpo-26647, and undoubtedly the two patches will conflict.) It also needs to show benchmark results on its own (but I think you've got that).

    Right. Victor asked me to review the wordcode patch (and maybe even commit it), and I'll try to do that this week. The LOAD_METHOD/CALL_METHOD patch needs some refactoring; right now it's PoC quality. I agree it has to go first.

    I am also happy to see the LOAD_GLOBAL optimization, and it alone may be sufficient to save PEP-509 (though I would recommend editing the text of the PEP dramatically to focus on a brief but crisp description of the change itself and the use that LOAD_GLOBAL would make of it and the microbench results; it currently is a pain to read the whole thing).

    Alright. I'll work on this with Victor.

    I have to read up on what you're doing to LOAD_ATTR/LOAD_METHOD. In the mean time I wonder how that would fare in a world where most attr lookups are in namedtuples.

    I think there will be no difference, but I can add a microbenchmark and see.
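
    For example, a microbenchmark along these lines could answer that (a hypothetical sketch, not from the issue); namedtuple fields are descriptors stored on the class, so they go through the same LOAD_ATTR machinery the cache targets:

        import collections
        import timeit

        Point = collections.namedtuple("Point", "x y")
        p = Point(1, 2)

        # Time attribute access on a namedtuple instance.
        print(timeit.timeit("p.x; p.y", globals={"p": p}, number=1_000_000))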

    As a general recommendation I would actually prefer more separate patches (even though it's harder on you), just with very clearly stated relationships between them.

    NP. This is a big change to review, and the last thing I want is to accidentally make CPython slower.

    A question about the strategy of only optimizing code objects that are called a lot. Doesn't this mean that a "main" function containing a major inner loop doesn't get the optimization it might deserve?

    Right, the "main" function won't be optimized. There are two reasons of why I added this threshold of 1000 calls before optimizations:

    1. We want to limit the memory overhead. We generally don't want to optimize module code objects, or code objects that are called just a few times over long periods of time. We can introduce an additional heuristic to detect long-running loops, but I'm afraid it would add overhead to the loops that don't need this optimization.

    2. We want the environment to become stable -- the builtins and globals namespaces to be fully populated (and, perhaps, mutated), classes fully initialized, etc.
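
    A toy sketch of the thresholding idea in Python (invented names; the real counter lives on the code object, in C):

        OPT_THRESHOLD = 1000   # matches the 1,000-call threshold described above

        class CodeState:
            """Toy stand-in for the per-code-object bookkeeping."""
            def __init__(self):
                self.calls = 0
                self.opcache = None            # allocated lazily

            def on_call(self):
                self.calls += 1
                if self.opcache is None and self.calls >= OPT_THRESHOLD:
                    # Only hot code objects pay the memory cost of the cache,
                    # and by now globals/builtins/classes are likely stable.
                    self.opcache = {}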

    PS. I like you a lot but there's no way I'm going to "bare" with you. :-)

    Haha, this is my typo of the month, I guess ;)

    @gvanrossum
    Member

    All sounds good. Glad the issue of long-running loops is at least on your
    radar.

    @ncoghlan
    Contributor

    With the 3.7 beta deadline just around the corner, 3.8 will be the next opportunity to reconsider this idea.

    @ncoghlan ncoghlan added the 3.8 only security fixes label Jan 27, 2018
    @1st1
    Member Author

    1st1 commented Jan 28, 2018

    With the 3.7 beta deadline just around the corner, 3.8 will be the next opportunity to reconsider this idea.

    Nick, Ned, Guido,

    Would it be OK if we add this to 3.7beta2?

    I feel kind of bad about this one... a few thoughts:

    1. The speedup is about 5-10% on macro-benchmarks! This is significant.

    2. Inada-san offered me help with this, and I declined, saying that I planned to do more research/experiments and commit it before 3.7. And then I didn't have time to do it because of other work I did for CPython.

    3. Even though the optimization is invasive, I feel confident about it. It just so happens that whatever modification we make to ceval is exercised by 100% of the tests and by any code you execute. Any regression or anomaly should become apparent almost immediately.

    4. We'll have quite a bit of time to see if this works or not. This isn't new functionality, so we can always pull it from the first RC build if necessary.

    @gvanrossum
    Member

    It's up to the release manager, but personally it feels like you're pushing
    too hard. Since you've done that frequently in the past I think we should
    help you by declining your request. 3.8 is right around the corner (on a
    Python release timescale) and this will give people a nice incentive to
    upgrade. Subsequent betas exist to stabilize the release, not to put in
    more new stuff.

    Now maybe we'll need to come up with a different way to do releases, but
    that's more a python-dev discussion.

    @1st1
    Member Author

    1st1 commented Jan 28, 2018

    It's up to the release manager, but personally it feels like you're pushing
    too hard. Since you've done that frequently in the past I think we should
    help you by declining your request.

    NP, I totally understand.

    Now maybe we'll need to come up with a different way to do releases, but
    that's more a python-dev discussion.

    That would be great.

    @ned-deily
    Member

    I concur with Guido and Nick; let's target this for 3.8.

    @vstinner
    Member

    Would it be OK if we add this to 3.7beta2?

    IMHO we need more time than the beta1=>beta2 window to properly review the design, test the implementation, fix any potential regressions, etc.

    Such a change is perfect for the *start* of a development cycle, but IMHO it's bad when we are close to the *end*.

    beta1=>final is supposed to only be made of bugfixes, not to introduce new bugs... (I bet that adding any feature inevitably introduces new bugs.)

    @methane
    Member

    methane commented Jun 3, 2019

    New changeset 91234a1 by Inada Naoki in branch 'master':
    bpo-26219: per opcode cache for LOAD_GLOBAL (GH-12884)
    91234a1

    @methane
    Member

    methane commented Jun 3, 2019

    I committed the cache only for LOAD_GLOBAL, which is much simpler than
    LOAD_ATTR or LOAD_METHOD.

    Caches for other instructions will be implemented in 3.9 or later.

    @methane
    Member

    methane commented Jun 3, 2019

    New changeset 395420e by Inada Naoki in branch 'master':
    bpo-26219: remove unused code (GH-13775)
    395420e

    @vstinner
    Member

    vstinner commented Jun 3, 2019

    The opcode cache introduced an issue in reference leak hunting: see bpo-37146.

    @vstinner
    Member

    vstinner commented Jun 4, 2019

    New changeset ea9f168 by Victor Stinner in branch 'master':
    bpo-26219: Fix compiler warning in _PyCode_InitOpcache() (GH-13809)
    ea9f168

    @vstinner
    Member

    vstinner commented Jun 4, 2019

    I understand that the initial issue (add an opcode cache) has been implemented. Well done Yury and INADA-san!

    Please open new issues for follow-up like the new optimization ideas ;-)

    @vstinner vstinner closed this as completed Jun 4, 2019
    @vstinner
    Member

    New changeset 376ce98 by Victor Stinner in branch 'master':
    bpo-26219: Fix compiler warning in _PyCode_InitOpcache() (GH-13997)
    376ce98

    @vstinner
    Member

    New changeset 996e526 by Victor Stinner (Miss Islington (bot)) in branch '3.8':
    bpo-26219: Fix compiler warning in _PyCode_InitOpcache() (GH-13997) (GH-14000)
    996e526

    @markshannon
    Member

    "What's new in Python 3.8" says that this change speeds up the LOAD_GLOBAL opcode by 40%.
    That seems meaningless, as a program cannot have LOAD_GLOBAL in isolation, so any test would have other bytecodes as well.

    What's the evidence for the claimed speedup?
    What's the speedup on the benchmark suite?

    @vstinner
    Member

    vstinner commented Oct 7, 2019

    What's the evidence for the claimed speedup?
    What's the speedup on the benchmark suite?

    #12884 (comment)
    points to:
    https://github.com/methane/sandbox/tree/master/2019/opcache_load_global#opcache-for-load_global

    == Microbenchmark ==

    $ master/python -m perf timeit -s 'def foo(): int; str; bytes; float; int; str; bytes; float' -- 'foo()'
    .....................
    Mean +- std dev: 213 ns +- 5 ns
    
    $ opcache/python -m perf timeit -s 'def foo(): int; str; bytes; float; int; str; bytes; float' -- 'foo()'
    .....................
    Mean +- std dev: 120 ns +- 2 ns

    == pyperformance ==

    $ ./cpython/python -m perf compare_to master.json opcache_load_global.json -G  --min-speed=2
    Slower (2):
    - pickle: 19.1 us +- 0.2 us -> 19.7 us +- 0.8 us: 1.03x slower (+3%)
    - unpickle_list: 8.66 us +- 0.04 us -> 8.85 us +- 0.06 us: 1.02x slower (+2%)

    Faster (23):

    • scimark_lu: 424 ms +- 22 ms -> 384 ms +- 4 ms: 1.10x faster (-9%)
    • regex_compile: 359 ms +- 4 ms -> 330 ms +- 1 ms: 1.09x faster (-8%)
    • django_template: 250 ms +- 3 ms -> 231 ms +- 2 ms: 1.08x faster (-8%)
    • unpickle_pure_python: 802 us +- 12 us -> 754 us +- 9 us: 1.06x faster (-6%)
    • pickle_pure_python: 1.04 ms +- 0.01 ms -> 991 us +- 15 us: 1.05x faster (-5%)
    • hexiom: 20.8 ms +- 0.2 ms -> 19.8 ms +- 0.1 ms: 1.05x faster (-5%)
    • logging_simple: 18.4 us +- 0.2 us -> 17.6 us +- 0.2 us: 1.05x faster (-4%)
    • sympy_expand: 774 ms +- 5 ms -> 741 ms +- 3 ms: 1.04x faster (-4%)
    • json_dumps: 28.1 ms +- 0.2 ms -> 27.0 ms +- 0.2 ms: 1.04x faster (-4%)
    • logging_format: 20.4 us +- 0.2 us -> 19.6 us +- 0.3 us: 1.04x faster (-4%)
    • richards: 147 ms +- 2 ms -> 141 ms +- 1 ms: 1.04x faster (-4%)
    • meteor_contest: 189 ms +- 1 ms -> 182 ms +- 1 ms: 1.04x faster (-4%)
    • xml_etree_iterparse: 226 ms +- 2 ms -> 217 ms +- 2 ms: 1.04x faster (-4%)
    • sympy_str: 358 ms +- 3 ms -> 345 ms +- 4 ms: 1.04x faster (-4%)
    • sqlalchemy_imperative: 44.0 ms +- 1.2 ms -> 42.4 ms +- 1.2 ms: 1.04x faster (-4%)
    • sympy_sum: 167 ms +- 1 ms -> 161 ms +- 1 ms: 1.04x faster (-4%)
    • nqueens: 217 ms +- 1 ms -> 211 ms +- 1 ms: 1.03x faster (-3%)
    • fannkuch: 1.09 sec +- 0.01 sec -> 1.07 sec +- 0.00 sec: 1.03x faster (-3%)
    • raytrace: 1.11 sec +- 0.02 sec -> 1.08 sec +- 0.01 sec: 1.03x faster (-3%)
    • dulwich_log: 122 ms +- 1 ms -> 119 ms +- 1 ms: 1.03x faster (-3%)
    • logging_silent: 419 ns +- 5 ns -> 410 ns +- 5 ns: 1.02x faster (-2%)
    • sympy_integrate: 33.5 ms +- 0.1 ms -> 32.8 ms +- 0.2 ms: 1.02x faster (-2%)
    • pathlib: 40.8 ms +- 0.4 ms -> 40.0 ms +- 0.5 ms: 1.02x faster (-2%)

    Benchmark hidden because not significant (32): (...)

    @markshannon
    Member

    Given that
    def foo(): int; str; bytes; float; int; str; bytes; float
    can trivially be rewritten as
    def foo(): pass
    I think that benchmark is meaningless.

    I really don't think we should be making claims like "40% speedup" for such contrived examples.

    It looks like the speedup is real, but only about 2% overall.
    The "what's new" should reflect that, IMO.

    @brettcannon
    Member

    I personally think it would be fine to change the wording to say "measurable speed-up" and not attribute a specific number.

    @methane
    Member

    methane commented Oct 8, 2019

    On Mon, Oct 7, 2019 at 9:41 PM Mark Shannon <report@bugs.python.org> wrote:

    Given that
    def foo(): int; str; bytes; float; int; str; bytes; float
    can trivially be rewritten as
    def foo(): pass
    I think that benchmark is meaningless.

    Do you mean every microbenchmark measuring a single feature is meaningless?

    I think it is worthwhile for readers who understand what the LOAD_GLOBAL
    instruction means (e.g. people who use techniques like int_ = int before a loop).
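
    The technique being referred to looks like this (an illustrative example, not from the thread):

        # Manual micro-optimization: hoist a builtin into a local so the hot loop
        # does LOAD_FAST instead of LOAD_GLOBAL. With the opcode cache, the plain
        # version that uses the global "int" directly is nearly as fast, which is
        # what makes this trick mostly unnecessary.
        def parse_all(lines):
            int_ = int                      # local alias of the builtin
            return [int_(line) for line in lines]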

    @ezio-melotti ezio-melotti transferred this issue from another repository Apr 10, 2022