
Author pablogsal
Recipients Mark.Shannon, methane, nascheme, pablogsal, vstinner, yselivanov
Date 2020-10-22.04:45:10
Message-id <1603341910.38.0.621223380042.issue42115@roundup.psfhosted.org>
In-reply-to
Content
> - Rewriting code objects in place is wrong, IMO: you always need to have a way to deoptimize the entire thing, so you need to keep the original one. It might be that you have well defined and static types for the first 10000 invocations and something entirely different on 10001. So IMO we need a SpecializedCode object with the necessary bailout guards.

Imagine that we have a secondary copy of the bytecode in the cache inside the code object and we mutate that instead. The key difference from the current cache infrastructure is that we don't accumulate all the optimizations on the same opcode, which can become very convoluted. Instead, we change the generic opcode to a more specialised one to optimize, and we change it back to deoptimize. The advantage is that BINARY_SUBSCR, for example, won't be one gigantic block of code that does different things depending on whether it is specialising for dicts, lists, or tuples; instead we will have a different opcode for each of them, which I think is much easier to manage.
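
To make the shape of this concrete, here is a minimal sketch in C. To be clear, none of these names (CodeObject, OP_BINARY_SUBSCR_DICT, subscr_generic, the toy Obj type) are CPython's actual implementation; they are hypothetical stand-ins. The point is only the mechanism: the original bytecode stays immutable, specialisation rewrites an opcode in the cached copy, and the specialised handler carries a guard that restores the generic opcode from the original copy to deoptimize.

    #include <stdio.h>
    #include <stdint.h>

    /* Illustrative opcodes -- not CPython's real opcode values. */
    enum {
        OP_BINARY_SUBSCR,       /* generic: works on any container */
        OP_BINARY_SUBSCR_DICT,  /* specialised: assumes a dict     */
        OP_RETURN,
    };

    /* Toy "object" with just enough type information for a guard. */
    typedef enum { TYPE_DICT, TYPE_LIST } ObjType;
    typedef struct { ObjType type; int value; } Obj;

    /* Toy code object: the original bytecode is never mutated; the
     * cached secondary copy is the one we specialise/de-specialise. */
    typedef struct {
        uint8_t original[16];
        uint8_t cached[16];
        int     len;
    } CodeObject;

    static int subscr_generic(Obj *o) { return o->value; } /* stand-in */

    static void run(CodeObject *co, Obj *container) {
        for (int pc = 0; pc < co->len; pc++) {
            switch (co->cached[pc]) {
            case OP_BINARY_SUBSCR:
                /* Generic path: do the work, then specialise the
                 * cached copy based on the type we just saw. */
                printf("generic subscript -> %d\n",
                       subscr_generic(container));
                if (container->type == TYPE_DICT)
                    co->cached[pc] = OP_BINARY_SUBSCR_DICT;
                break;
            case OP_BINARY_SUBSCR_DICT:
                /* Guard: if the assumption no longer holds, deoptimize
                 * by restoring the generic opcode from the original. */
                if (container->type != TYPE_DICT) {
                    co->cached[pc] = co->original[pc];
                    pc--;  /* retry this instruction generically */
                    break;
                }
                printf("dict-specialised subscript -> %d\n",
                       container->value);
                break;
            case OP_RETURN:
                return;
            }
        }
    }

    int main(void) {
        CodeObject co = { .original = {OP_BINARY_SUBSCR, OP_RETURN},
                          .cached   = {OP_BINARY_SUBSCR, OP_RETURN},
                          .len = 2 };
        Obj d = { TYPE_DICT, 42 };
        Obj l = { TYPE_LIST, 7 };
        run(&co, &d);  /* generic; specialises the cached copy    */
        run(&co, &d);  /* hits the dict-specialised opcode        */
        run(&co, &l);  /* guard fails: reverts to generic opcode  */
        return 0;
    }

Because the original copy is never touched, falling back is just a byte store, which is also how this addresses the concern quoted above: the "SpecializedCode" role is played by the mutable cached copy, and the untouched original is the permanent deoptimization target.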
History
Date                 User       Action  Args
2020-10-22 04:45:10  pablogsal  set     recipients: + pablogsal, nascheme, vstinner, methane, Mark.Shannon, yselivanov
2020-10-22 04:45:10  pablogsal  set     messageid: <1603341910.38.0.621223380042.issue42115@roundup.psfhosted.org>
2020-10-22 04:45:10  pablogsal  link    issue42115 messages
2020-10-22 04:45:10  pablogsal  create