Author scoder
Recipients Mark.Shannon, jtaylor, lemburg, pitrou, rhettinger, scoder, serhiy.storchaka, tim.peters
Date 2015-07-10.04:34:49
Message-id <1436502890.04.0.568606733677.issue23601@psf.upfronthosting.co.za>
Content
It's generally worth running the benchmark suite for this kind of optimisation. Since the suite is mostly Python code, it should benefit quite clearly from dictionary improvements, but it should also give an idea of how much of an improvement actual Python code (and not just micro-benchmarks) can show. And it can help detect unexpected regressions that micro-benchmarks would not necessarily reveal.

https://hg.python.org/benchmarks/
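To illustrate the caveat about micro-benchmarks, here is a minimal sketch (plain `timeit` from the standard library, not the suite linked above; the dict size and iteration count are arbitrary choices for illustration) of the kind of isolated dict-lookup timing that can look convincing on its own while saying little about whole programs:

```python
import timeit

# Illustrative micro-benchmark only: time repeated lookups in a dict
# built purely for this measurement.  A speedup here does not by itself
# prove that real Python code gets faster -- that is what running the
# full benchmark suite is for.
d = {"key%d" % i: i for i in range(1000)}

elapsed = timeit.timeit("d['key500']", globals={"d": d}, number=100_000)
print("100k dict lookups: %.4f s" % elapsed)
```

A result from a loop like this exercises one code path with hot caches and predictable branches, which is exactly why it can diverge from what the benchmark suite reports on realistic workloads.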

And I'm with Mark: when it comes to performance optimisations, repeating even a firm intuition doesn't save us from validating that the intuition actually matches reality. Anything that seems obvious at first sight may still be proven wrong by benchmarks, and often enough has been in the past.