
Author yselivanov
Recipients brett.cannon, francismb, gvanrossum, ncoghlan, vstinner, yselivanov
Date 2016-05-02.21:38:31
> OK, I get it. I think it would be really helpful if issue 26110 was updated, reviewed and committed -- it sounds like a good idea on its own, and it needs some burn-in time due to the introduction of two new opcodes. (That's especially important since there's also a patch introducing wordcode, i.e. issue 26647, and undoubtedly the two patches will conflict.) It also needs to show benchmark results on its own (but I think you've got that).

Right.  Victor asked me to review the wordcode patch (and maybe even commit it), and I'll try to do that this week.  The LOAD_METHOD/CALL_METHOD patch needs some refactoring; right now it's PoC-quality.  I agree it has to go first.
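For context, what LOAD_METHOD/CALL_METHOD targets is the temporary bound-method object that a plain `obj.method()` call allocates before calling it. This is a rough illustrative sketch (the class and timings here are made up for the example), not the patch itself:

```python
import timeit

class Point:
    def norm(self):
        return 0.0

p = Point()

# p.norm first creates a bound-method object, which is then called and
# discarded; LOAD_METHOD/CALL_METHOD fuse lookup and call so no bound
# method needs to be allocated on the fast path.
bound = p.norm               # this temporary is what the opcode pair avoids
assert bound() == p.norm()

# Pre-binding outside the loop approximates what skipping the allocation
# would save on each call:
t_call = timeit.timeit("p.norm()", globals={"p": p}, number=100_000)
t_bound = timeit.timeit("bound()", globals={"bound": p.norm}, number=100_000)
print(f"attr call: {t_call:.4f}s, pre-bound: {t_bound:.4f}s")
```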

> I am also happy to see the LOAD_GLOBAL optimization, and it alone may be sufficient to save PEP 509 (though I would recommend editing the text of the PEP dramatically to focus on a brief but crisp description of the change itself and the use that LOAD_GLOBAL would make of it and the microbench results; it currently is a pain to read the whole thing).

Alright.  I'll work on this with Victor.
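The core of the PEP 509 + LOAD_GLOBAL idea can be simulated in pure Python: give each namespace dict a version counter that bumps on mutation, so a cached lookup can be revalidated with a single integer comparison. The class names below are illustrative, not the actual CPython structures:

```python
class VersionedDict(dict):
    """Toy stand-in for a dict with a PEP 509-style version tag."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.version = 0

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self.version += 1

    def __delitem__(self, key):
        super().__delitem__(key)
        self.version += 1


class GlobalCache:
    """Per-instruction cache: (namespace version, cached value)."""
    def __init__(self):
        self.version = -1
        self.value = None

    def load(self, ns, name):
        if ns.version == self.version:   # fast path: one int comparison
            return self.value
        self.value = ns[name]            # slow path: real dict lookup
        self.version = ns.version
        return self.value


ns = VersionedDict(answer=42)
cache = GlobalCache()
assert cache.load(ns, "answer") == 42   # slow path, fills the cache
assert cache.load(ns, "answer") == 42   # fast path, no dict lookup
ns["answer"] = 43                       # mutation bumps the version
assert cache.load(ns, "answer") == 43   # stale cache detected and refilled
```

The real patch would keep the version check in the eval loop in C, but the invalidation logic is the same: any mutation of globals or builtins makes every cached LOAD_GLOBAL fall back to the slow path exactly once.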

> I have to read up on what you're doing to LOAD_ATTR/LOAD_METHOD. In the mean time I wonder how that would fare in a world where most attr lookups are in namedtuples.

I think there will be no difference, but I can add a microbenchmark and see.
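Such a microbenchmark might look like the following sketch (names are made up). The point of comparison is that namedtuple fields are descriptors on the class, while plain attributes live in the instance `__dict__`, so an attribute-lookup cache could treat the two paths differently:

```python
import timeit
from collections import namedtuple

NTPoint = namedtuple("NTPoint", ["x", "y"])

class PlainPoint:
    def __init__(self, x, y):
        self.x = x
        self.y = y

nt = NTPoint(1, 2)
pp = PlainPoint(1, 2)

# nt.x goes through a class-level descriptor; pp.x is an instance
# __dict__ lookup. Timing both shows whether a LOAD_ATTR cache helps
# one path more than the other.
t_nt = timeit.timeit("nt.x", globals={"nt": nt}, number=1_000_000)
t_pp = timeit.timeit("pp.x", globals={"pp": pp}, number=1_000_000)
print(f"namedtuple attr: {t_nt:.4f}s  plain attr: {t_pp:.4f}s")
```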

> As a general recommendation I would actually prefer more separate patches (even though it's harder on you), just with very clearly stated relationships between them.

NP. This is a big change to review, and the last thing I want is to accidentally make CPython slower.

> A question about the strategy of only optimizing code objects that are called a lot. Doesn't this mean that a "main" function containing a major inner loop doesn't get the optimization it might deserve?

Right, the "main" function won't be optimized.  There are two reasons why I added this threshold of 1000 calls before optimization:

1. We want to limit the memory overhead.  We generally don't want to optimize module code objects, or code objects that are called just a few times over long periods of time.  We could introduce an additional heuristic to detect long-running loops, but I'm afraid it would add overhead to loops that don't need this optimization.

2. We want the environment to become stable -- the builtins and globals namespaces to be fully populated (and, perhaps, mutated), classes fully initialized, etc.
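The threshold mechanism itself is simple; here is an illustrative pure-Python sketch (the names `CodeStats` and `on_call` are hypothetical, not CPython internals): each code object carries a call counter, and the per-opcode cache is allocated lazily only once the function has proven itself hot.

```python
OPT_THRESHOLD = 1000  # calls before the opcode cache is allocated

class CodeStats:
    """Toy per-code-object record, standing in for fields on the code object."""
    def __init__(self):
        self.calls = 0
        self.cache = None  # allocated lazily to bound memory overhead

    def on_call(self):
        self.calls += 1
        if self.cache is None and self.calls >= OPT_THRESHOLD:
            # By the 1000th call, globals/builtins are likely stable and
            # classes initialized, so caching lookups is now worthwhile.
            self.cache = {}

stats = CodeStats()
for _ in range(OPT_THRESHOLD - 1):
    stats.on_call()
assert stats.cache is None       # still cold: no memory spent
stats.on_call()
assert stats.cache is not None   # 1000th call triggers optimization
```

A module's top-level code and a script's "main" function typically run once, so under this scheme they never cross the threshold, which is exactly the trade-off discussed above.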

> PS. I like you a lot but there's no way I'm going to "bare" with you. :-)

Haha, this is my typo of the month, I guess ;)