
Author pitrou
Recipients alexandre.vassalotti, blaisorblade, christian.heimes, lemburg, pitrou, rhettinger, skip.montanaro
Date 2009-01-03.13:22:37
SpamBayes Score 0.014734576
Marked as misclassified No
Message-id <1230988965.10091.2.camel@localhost>
In-reply-to <495EC14C.7010405@cheimes.de>
Content
> I'm not an expert in this kind of optimization. Could we gain more
> speed by making the dispatcher table more dense? Python has fewer than
> 128 opcodes (len(opcode.opmap) == 113), so they could be squeezed into a
> smaller table. I naively assume a smaller table increases the number of
> cache hits.

I don't think so. The end of the current table doesn't correspond to any
valid opcode, so those entries are never read and never get loaded into
the cache anyway. The upper limit to consider is the maximum value of a
valid opcode, which is 147. Reducing that to 113 might reduce cache
pressure, but only by a tiny bit.
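For illustration only, here is a minimal sketch of a computed-goto dispatch
table in C, using the GCC "labels as values" extension; the opcode names and
toy bytecode are hypothetical and this is not the actual ceval.c code.
Assuming 8-byte label pointers and 64-byte cache lines, a table indexed up to
opcode 147 takes 148 * 8 = 1184 bytes (about 19 cache lines), while 113
entries take 904 bytes (about 15). Only the cache lines holding entries for
opcodes that actually execute are ever loaded, though, which is why shrinking
the unused tail buys so little.

#include <stdio.h>

/* Hypothetical toy opcodes, not CPython's. */
enum { OP_NOP, OP_INCR, OP_HALT, NUM_OPCODES };

static int run(const unsigned char *code)
{
    /* One label address per opcode; slots for opcodes that never appear
       in the bytecode are simply never read, hence never cached. */
    static void *dispatch[NUM_OPCODES] = { &&op_nop, &&op_incr, &&op_halt };
    int acc = 0;

#define DISPATCH() goto *dispatch[*code++]
    DISPATCH();

op_nop:
    DISPATCH();
op_incr:
    acc++;
    DISPATCH();
op_halt:
    return acc;
}

int main(void)
{
    static const unsigned char prog[] = { OP_INCR, OP_NOP, OP_INCR, OP_HALT };
    printf("%d\n", run(prog));  /* prints 2 */
    return 0;
}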

Of course, only experimentation could tell us for sure :)
History
Date                 User    Action  Args
2009-01-03 13:22:39  pitrou  set     recipients: + pitrou, lemburg, skip.montanaro, rhettinger, christian.heimes, alexandre.vassalotti, blaisorblade
2009-01-03 13:22:38  pitrou  link    issue4753 messages
2009-01-03 13:22:37  pitrou  create