This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in the Python's Developer Guide.

Author eltoder
Recipients eltoder, mark.dickinson, pitrou, rhettinger, twouters
Date 2011-03-11.22:31:07
Thomas, can you clarify: does loading intern all constants in co_consts, or do you mean that they are mostly small numbers and the like?
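For reference, here is a quick way to check what actually lands in co_consts. This is a minimal sketch and the behavior it relies on is CPython-specific (not guaranteed by the language): within a single code object the compiler merges equal literal constants into one co_consts slot.

```python
# Sketch (CPython-specific): both 1000 literals below are merged by the
# compiler into a single entry in co_consts, so the two names end up
# bound to the very same object.
def f():
    a = 1000
    b = 1000
    return a is b  # same constant slot -> same object

print(f.__code__.co_consts)          # 1000 appears exactly once
assert f.__code__.co_consts.count(1000) == 1
assert f() is True
```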

Without interning, I think the in-memory size difference is even bigger than the on-disk one, since you pay for both the entry in the tuple and the object itself. I'm sure I can cook up a test that shows some performance difference, due to cache misses or paging. You can say that this is not real-world code, and you will likely be right. But in real-world code (before you add inlining and constant propagation) constant folding doesn't make much difference either, yet people asked for it and the peepholer does it. Btw, Antoine just improved it quite a bit (Issue11244), so the size difference with my patch should increase.
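The size effect of sharing versus duplicating constants can be demonstrated directly. This is a sketch with made-up module sources, and it leans on the CPython-specific fact that equal constants within one code object are merged: 100 assignments of the same string literal marshal much smaller than 100 assignments of distinct literals.

```python
import marshal

# Sketch (illustrative sources, CPython-specific constant merging):
# 100 statements reusing one string literal vs. 100 distinct literals.
src_shared = "\n".join(
    "x%d = 'a_fairly_long_constant_string'" % i for i in range(100))
src_unique = "\n".join(
    "x%d = 'unique_constant_string_%03d'" % (i, i) for i in range(100))

code_shared = compile(src_shared, "<m>", "exec")
code_unique = compile(src_unique, "<m>", "exec")

# The shared version stores the string once in co_consts; the unique
# version stores all 100, so both the tuple and the marshalled form grow.
assert len(code_shared.co_consts) < len(code_unique.co_consts)

size_shared = len(marshal.dumps(code_shared))
size_unique = len(marshal.dumps(code_unique))
assert size_shared < size_unique
```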

My rationale for the patch is that 1) it's really very simple, and 2) it removes the feeling of a half-done job when you look at the bytecode.
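The "half-done job" is visible in the disassembly: the peepholer of that era folded expressions like 3 * 7 into a single LOAD_CONST but could leave stale entries behind in co_consts. A minimal sketch (note: on a modern CPython, where folding happens at the AST level, only the folded result appears):

```python
import dis

# Sketch: constant folding replaces 3 * 7 with the precomputed 21,
# so the bytecode contains a single LOAD_CONST for the result.
code = compile("x = 3 * 7", "<expr>", "exec")
assert 21 in code.co_consts   # folded result is a constant
dis.dis(code)                  # shows LOAD_CONST for 21, no BINARY op
```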
Date User Action Args
2011-03-11 22:31:09 eltoder set recipients: + eltoder, twouters, rhettinger, mark.dickinson, pitrou
2011-03-11 22:31:09 eltoder set messageid: <>
2011-03-11 22:31:07 eltoder link issue11462 messages
2011-03-11 22:31:07 eltoder create