Message 246544 - scoder, 2015-07-10 04:34
It's generally worth running the benchmark suite for this kind of optimisation. The suite is mostly Python code, so it should benefit quite clearly from dictionary improvements, and it should also give an idea of how much of an improvement actual Python code (not just micro-benchmarks) can show. It can also help detect unexpected regressions that micro-benchmarks would not necessarily reveal.
https://hg.python.org/benchmarks/
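For illustration (not part of the original message), here is a hedged sketch of driving that suite from Python. The suite's perf.py driver compared a baseline interpreter against a changed one; the build paths below are hypothetical placeholders, and the exact flags may have varied between revisions of the repository.

    # Hedged sketch: comparing two CPython builds with the benchmark
    # suite's perf.py driver. Both interpreter paths are hypothetical
    # placeholders, and the flags reflect the suite's documented usage
    # as remembered here, not a verified invocation.
    import subprocess

    BASELINE = "../cpython-baseline/python"  # hypothetical: unpatched build
    PATCHED = "../cpython-patched/python"    # hypothetical: build with dict changes

    subprocess.run(
        ["python", "perf.py", "-b", "default", BASELINE, PATCHED],
        check=True,  # fail loudly if the benchmark run errors out
    )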
And I'm with Mark: when it comes to performance optimisations, even a firm intuition, however often repeated, doesn't save us from validating that it actually matches reality. Anything that seems obvious at first sight may still be proven wrong by benchmarks, and often enough has been in the past.
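As a quick complement to the full suite, a minimal, self-contained sketch (again an illustration, not from the original message) that times a single dict lookup pattern under two builds via `python -m timeit`. The interpreter paths are hypothetical, and a number like this is exactly the kind of micro-benchmark result that the full suite then needs to confirm.

    # Minimal sanity check: time one dict access pattern under two
    # interpreter builds using the stdlib timeit module's CLI.
    # The two build paths are hypothetical placeholders.
    import subprocess

    SETUP = "d = {i: str(i) for i in range(1000)}; ks = list(d)"
    STMT = "for k in ks: d[k]"

    for name, exe in [("baseline", "../cpython-baseline/python"),
                      ("patched", "../cpython-patched/python")]:
        result = subprocess.run(
            [exe, "-m", "timeit", "-s", SETUP, STMT],
            capture_output=True, text=True, check=True,
        )
        print(f"{name}: {result.stdout.strip()}")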