
Author Mark.Shannon
Recipients Mark.Shannon, brandtbucher, neonene, pablogsal
Date 2022-03-02.16:18:32
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1646237912.63.0.106432674773.issue46841@roundup.psfhosted.org>
In-reply-to
Content
It's not an UNPACK_SEQUENCE slowdown; it's a silly benchmark ;)

https://github.com/python/pyperformance/blob/main/pyperformance/data-files/benchmarks/bm_unpack_sequence/run_benchmark.py#L6

What I *think* is happening is that the inline cache takes the size of the function (in code units) from about 4800 to about 5200, crossing our threshold for quickening (currently set to 5000).
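To see how a function's size in code units can be measured (and how an unrolled-unpacking body like the benchmark's ends up near the threshold), here is a small sketch. The generated `unpack` function below is a hypothetical stand-in for the benchmark's body, not the actual pyperformance source; the "2 bytes per code unit" figure matches CPython's wordcode format.

```python
# Sketch: measure a function's bytecode size in "code units"
# (2-byte instruction words), the metric the quickening threshold
# is compared against. make_unpacker builds a function whose body
# is n unrolled tuple unpacks, mimicking the benchmark's large,
# repetitive bytecode (illustrative only).
def make_unpacker(n):
    body = "\n".join(f"    a{i}, b{i} = seq" for i in range(n))
    src = f"def unpack(seq):\n{body}\n"
    ns = {}
    exec(src, ns)
    return ns["unpack"]

f = make_unpacker(400)
code_units = len(f.__code__.co_code) // 2  # each code unit is 2 bytes
print(code_units)
```

On an interpreter with inline caches, the same source compiles to more code units, which is how a function can drift from just under the threshold to just over it.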

When we quicken in-place, there will be no need for a threshold and this issue will disappear. We should probably up the threshold for now, just to keep the charts looking good.
History
Date                 User          Action  Args
2022-03-02 16:18:32  Mark.Shannon  set     recipients: + Mark.Shannon, pablogsal, brandtbucher, neonene
2022-03-02 16:18:32  Mark.Shannon  set     messageid: <1646237912.63.0.106432674773.issue46841@roundup.psfhosted.org>
2022-03-02 16:18:32  Mark.Shannon  link    issue46841 messages
2022-03-02 16:18:32  Mark.Shannon  create