
Author scoder
Recipients Mark.Shannon, jtaylor, lemburg, pitrou, rhettinger, scoder, serhiy.storchaka, tim.peters
Date 2015-07-10.04:34:49
It's generally worth running the benchmark suite for this kind of optimisation. Being mostly Python code, it should benefit quite clearly from dictionary improvements, and it should also give an idea of how much of an improvement real Python code (rather than just micro-benchmarks) can show. It can also help detect unexpected regressions that would not necessarily be revealed by micro-benchmarks.

And I'm with Mark: when it comes to performance optimisations, repeating even a firm intuition doesn't save us from having to validate that the intuition actually matches reality. Anything that seems obvious at first sight may still be proven wrong by benchmarks, and often enough has been in the past.
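For illustration, a micro-benchmark of the kind contrasted above might be sketched with the standard-library `timeit` module (the dictionary workload here is a made-up example, not the one from this issue, and absolute timings vary by machine and interpreter build, so results should be read as relative):

```python
import timeit

# Build a moderately sized dict once, then time individual hot
# operations in isolation. This measures only the operation itself,
# which is exactly why micro-benchmarks can miss whole-program effects.
setup = "d = {i: i for i in range(1000)}"

lookup = timeit.timeit("d[500]", setup=setup, number=100_000)
update = timeit.timeit("d[500] = 1", setup=setup, number=100_000)
miss = timeit.timeit("d.get(-1)", setup=setup, number=100_000)

print(f"lookup: {lookup:.4f}s  update: {update:.4f}s  miss: {miss:.4f}s")
```

Running the full benchmark suite instead exercises these same operations embedded in realistic Python code, which is where regressions invisible to such isolated loops tend to show up.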