Author bioinformed
Date 2007-06-10.03:19:35
Content
The cache is global to make it fast, give it a deterministic and low memory footprint, and make it easy to invalidate.  It is distinct from the standard Python dict implementation in that it need not be as general nor fully associative: cache collisions are not re-probed, the colliding entry is simply overwritten.  Nor does it go through the motions of memory management, which the standard dict implementation would be rather hard-pressed to skip.

The cache is kept drastically minimal in the interests of performance and goes to great pains to hide and minimize its impact on the rest of the interpreter.  In this regard it seems quite successful; it even removes the need for the manually inlined code you commented on before.  It also survives harsh stress-testing and shows no detectable performance regression in any of my benchmarks (to the contrary, many operations are significantly faster).
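
To make the design concrete, here is a minimal sketch in C of a direct-mapped cache of this kind.  The names, table size, hash, and signatures are illustrative assumptions, not the actual patch; the point is only how cheap the fast path, collision handling, and invalidation are:

/* Sketch only -- all names and sizes here are invented for illustration. */
#include <stddef.h>
#include <stdint.h>

#define CACHE_SIZE 4096                  /* power of two, so masking replaces modulo */

typedef struct {
    uint32_t    version;                 /* version tag recorded when the slot was filled */
    const char *name;                    /* interned name, compared by pointer identity */
    void       *value;                   /* cached lookup result; borrowed, never freed here */
} cache_entry;

static cache_entry cache[CACHE_SIZE];    /* fixed, deterministic footprint */

static size_t slot_for(uint32_t version, const char *name)
{
    /* Fold the version tag and the name pointer into one slot index. */
    return (version * 2654435761u ^ ((uintptr_t)name >> 4)) & (CACHE_SIZE - 1);
}

/* Fast path: one index computation and two compares.  A collision is not
   re-probed; it simply falls through to the slow path below. */
void *cache_lookup(uint32_t version, const char *name,
                   void *(*slow_lookup)(const char *))
{
    cache_entry *e = &cache[slot_for(version, name)];

    if (e->version == version && e->name == name)
        return e->value;                 /* hit */

    /* Miss (or collision): do the full lookup and overwrite the slot
       unconditionally -- no probing, no eviction bookkeeping. */
    e->version = version;
    e->name    = name;
    e->value   = slow_lookup(name);
    return e->value;
}

/* Invalidation never touches the table: handing out a fresh version tag
   makes every entry filled under an old tag fail the version check. */
static uint32_t next_version = 1;        /* 0 stays "never valid" for zeroed slots */

uint32_t new_version_tag(void)
{
    return next_version++;
}

A dict, by contrast, must re-probe on collision and manage the lifetime of its entries; that generality is precisely what this cache omits.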

While a per-type-object cache sounds feasible, it is hard for me to see how it could be as efficient in the general case -- for one, the dict lookup fast path is just so much heavier.  Please feel free to take that as a challenge to produce a patch that contradicts my intuition.