This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

Author Mark.Shannon
Recipients Mark.Shannon
Date 2021-10-19.17:24:13
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1634664253.61.0.347658329299.issue45527@roundup.psfhosted.org>
In-reply-to
Content
Every time we get a cache hit in, e.g., LOAD_ATTR_CACHED, we increment the saturating counter. That takes a dependent load and a store, as well as the shift. For fast instructions like BINARY_ADD_FLOAT, this represents a significant portion of the work done in the instruction.

If we don't bother to record the hit, we reduce the overhead of fast, specialized instructions.

The cost is that we may have to re-optimize more often.
For instructions with high hit-to-miss ratios, which is most of them, this should be barely measurable.
The cost for type-unstable and un-optimizable instructions shouldn't change much.

Initial experiments show ~1% speedup.
History
Date                 User          Action  Args
2021-10-19 17:24:13  Mark.Shannon  set     recipients: + Mark.Shannon
2021-10-19 17:24:13  Mark.Shannon  set     messageid: <1634664253.61.0.347658329299.issue45527@roundup.psfhosted.org>
2021-10-19 17:24:13  Mark.Shannon  link    issue45527 messages
2021-10-19 17:24:13  Mark.Shannon  create