
Author belopolsky
Recipients belopolsky, loewis, pitrou, r.david.murray, rhettinger, serhiy.storchaka, vstinner
Date 2013-10-09.19:35:37
Message-id <1381347337.7.0.498404205197.issue19187@psf.upfronthosting.co.za>
In-reply-to
Content
> In embedded systems, every byte of memory counts 

It is not just embedded systems.  The range of 192 KB to 1.5 MB is where typical L2 cache sizes fall these days.  I would expect the intern dictionary to be accessed very often, and much more often than the actual strings it holds.  In theory, if it fits in the L2 cache, performance will be much higher than if it is even slightly over.

That said, when I looked at this 3-4 years ago, I did not see measurable improvements in my programs.
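For readers unfamiliar with the dictionary being discussed: CPython keeps a single table of interned strings, exposed at the Python level through sys.intern.  A minimal sketch of its observable behavior (the specific strings below are just illustrative):

```python
import sys

# Build the same text twice at runtime; join() produces fresh string
# objects, so these are distinct objects with equal contents.
a = "".join(["py", "thon", "_", "identifier"])
b = "".join(["py", "thon", "_", "identifier"])
assert a == b
assert a is not b

# Interning routes both through the shared intern table, so every
# lookup of this text afterwards resolves to one canonical object.
ia = sys.intern(a)
ib = sys.intern(b)
assert ia is ib
```

It is that shared table whose total size, relative to the L2 cache, the comment above is concerned with: every sys.intern call (and CPython's internal interning of identifiers) probes it.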