
Author pablogsal
Recipients Mark.Shannon, methane, nascheme, pablogsal, vstinner, yselivanov
Date 2020-10-22.03:21:37
After recent work it has become clear that we can leverage caches for different kinds of information in the evaluation loop to speed up CPython. This observation is also based on the fact that, although Python is dynamic, there is plenty of code that does not exercise that dynamism, so factoring out the "dynamic" parts of the execution with a caching mechanism can yield excellent results.

So far we have two big performance improvements from caching LOAD_ATTR and LOAD_GLOBAL (up to 10-14% in some cases), but I think we can do much more. Here are some observations on what I think we can do:

* Instead of adding more caches using the current mechanism, which inlines extra code into every opcode in the evaluation loop, we could formalize a caching mechanism with a better API that allows adding more opcodes in the future. Having the code inline in ceval.c will become difficult to maintain if we keep adding more stuff directly there.

* Instead of handling the specialization in the same opcode as the original one (LOAD_ATTR currently implements both the slow and the fast path), we could mutate the original code object, replacing the slow, generic opcodes with more specialized ones; the specialized opcodes would also be in charge of switching back to the generic, slow versions if the assumptions that activated them no longer hold.

Obviously, mutating code objects is scary, so we could instead keep a "specialized" version of the bytecode in the cache and use that version if it is present. Some ideas for what we could do with this cached information:
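To make the opcode-rewriting idea concrete, here is a toy sketch in Python of "quickening" with deoptimization. The opcode names and the list-of-strings code representation are purely illustrative; the real implementation would mutate (or shadow) actual code objects in C:

```python
# Toy sketch: swap a generic opcode for a specialized one, and restore it
# when the guard assumptions break. All names here are invented.

GENERIC = "LOAD_ATTR"
SPECIALIZED = "LOAD_ATTR_INSTANCE"

def specialize(code, offset):
    """Install the fast-path opcode at a given instruction offset."""
    if code[offset] == GENERIC:
        code[offset] = SPECIALIZED

def deoptimize(code, offset):
    """Restore the generic opcode when the assumptions no longer hold."""
    if code[offset] == SPECIALIZED:
        code[offset] = GENERIC
```

The key property is that the transformation is reversible: the specialized opcode itself is responsible for calling the equivalent of deoptimize() when its guard fails, so semantics never change.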

- For binary operators, we can grab both operands, resolve the addition function, and cache it together with the operand types and their version tags; if the types and version tags match on a later execution, we can call the cached function directly instead of resolving it again.

- For loading methods, we could cache the bound method, as Yury originally proposed here:

- We could do the same for operations like "some_container[]" when the container is some builtin. We can substitute/specialize the opcode for one that directly uses built-in operations instead of the generic BINARY_SUBSCR.
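The three ideas above can be illustrated with a toy sketch in Python (every class and function name here is invented for illustration; the real versions would live in C inside the evaluation loop, and the real guards would use type version tags rather than the simplified checks below):

```python
import operator

# (1) Binary operators: cache the resolved function, keyed on the operand
# types. The guard here is simply "same two types as last time"; real
# CPython would also compare the types' version tags.
class BinaryOpCache:
    def __init__(self):
        self.key = None            # (type(left), type(right)) from last hit
        self.func = None           # cached resolved implementation

    def add(self, left, right):
        key = (type(left), type(right))
        if key != self.key:        # guard failed: resolve again and re-cache
            self.key = key
            self.func = operator.add   # stand-in for the full resolution
        return self.func(left, right)

# (2) Method loads: cache the looked-up function and rebind it per call,
# avoiding the attribute lookup on every execution.
class MethodCache:
    def __init__(self):
        self.tp = None
        self.name = None
        self.func = None

    def load_method(self, obj, name):
        tp = type(obj)
        if tp is not self.tp or name != self.name:   # guard failed
            self.tp, self.name = tp, name
            self.func = getattr(tp, name)
        return self.func.__get__(obj, tp)            # bind to the instance

# (3) Subscription: a specialized fast path for builtin lists that skips
# the generic __getitem__ dispatch, falling back when the guards fail.
def binary_subscr_generic(container, index):
    return container[index]

def binary_subscr_list_int(container, index):
    if type(container) is list and type(index) is int:
        return list.__getitem__(container, index)    # fast path
    return binary_subscr_generic(container, index)   # deoptimized path
```

In each case the pattern is the same: a cheap guard in front of a precomputed result, with a correct (slow) fallback when the guard fails.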

The plan will be:

- Make some infrastructure/framework for caching that allows us to optimize/deoptimize individual opcodes.
- Refactor the existing specialization for LOAD_GLOBAL/LOAD_ATTR to use that infrastructure.
- Think about which operations could benefit from specialization and start adding them one by one.
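One possible shape for the optimize/deoptimize infrastructure in the first step is a counter-driven state machine per instruction: specialize once an instruction is hot, and bail out after repeated guard misses. This is only a sketch with invented names and thresholds, not a committed design:

```python
# Toy driver for adaptive specialization. Thresholds are arbitrary.
SPECIALIZE_AFTER = 8       # executions before attempting to specialize
DEOPT_AFTER_MISSES = 2     # guard misses tolerated before deoptimizing

class AdaptiveInstruction:
    def __init__(self):
        self.specialized = False
        self.hits = 0
        self.misses = 0

    def execute(self, guard_holds):
        if not self.specialized:
            self.hits += 1
            if self.hits >= SPECIALIZE_AFTER:
                self.specialized = True    # optimize: install the fast path
            return "generic"
        if guard_holds:
            return "fast"
        self.misses += 1
        if self.misses >= DEOPT_AFTER_MISSES:
            self.specialized = False       # deoptimize: restore slow path
            self.hits = 0
            self.misses = 0
        return "generic"
```

An API like this would let each specializable opcode plug in only its guard and fast path, keeping the hot/cold bookkeeping out of ceval.c.
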
Date User Action Args
2020-10-22 03:21:38 pablogsal set recipients: + pablogsal, nascheme, vstinner, methane, Mark.Shannon, yselivanov
2020-10-22 03:21:38 pablogsal set messageid: <>
2020-10-22 03:21:38 pablogsal link issue42115 messages
2020-10-22 03:21:37 pablogsal create