Author inada.naoki
Recipients inada.naoki, rhettinger, serhiy.storchaka
Date 2017-12-25.06:53:11
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1514184792.46.0.213398074469.issue32422@psf.upfronthosting.co.za>
In-reply-to
Content
> Please stop revising every single thing you look at.  The traditional design of LRU caches used doubly linked lists for a reason.  In particular, when there is a high hit rate, the links can be updated without churning the underlying dictionary.

I'm not proposing removing the doubly linked list; OrderedDict uses a
doubly-linked list too, and I found no problem in the mostly-hit scenario.

On the other hand, I found a problem with OrderedDict in the mostly-miss
scenario.

Now I think lru_cache's implementation is better than OrderedDict.
PyODict is slower than lru_cache's dict + linked list for historical
reasons (compatibility with the pure Python implementation).
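For readers unfamiliar with the design under discussion: a minimal sketch of the dict + doubly-linked-list LRU layout, written as plain Python. This is illustrative only, not CPython's actual `functools` code; the class and method names are hypothetical. The point it shows is that a cache hit touches only the link pointers, never the dict itself.

```python
class _Link:
    # Plain link node. The C version of lru_cache goes further and
    # strips the GC header from these nodes, which is the overhead
    # reduction discussed in this message.
    __slots__ = ("key", "value", "prev", "next")


class LRUCache:
    """Hypothetical sketch of the dict + doubly-linked-list design."""

    def __init__(self, maxsize):
        self.maxsize = maxsize
        self.map = {}                   # key -> link, O(1) lookup
        self.root = root = _Link()      # sentinel of a circular list:
        root.prev = root.next = root    # root.next = oldest, root.prev = newest

    def _move_to_front(self, link):
        # Cache hit: only link pointers churn; the dict is untouched.
        link.prev.next = link.next
        link.next.prev = link.prev
        last = self.root.prev
        last.next = self.root.prev = link
        link.prev, link.next = last, self.root

    def get(self, key, default=None):
        link = self.map.get(key)
        if link is None:
            return default
        self._move_to_front(link)
        return link.value

    def put(self, key, value):
        link = self.map.get(key)
        if link is not None:
            link.value = value
            self._move_to_front(link)
            return
        if len(self.map) >= self.maxsize:
            oldest = self.root.next     # evict the least recently used
            oldest.prev.next = oldest.next
            oldest.next.prev = oldest.prev
            del self.map[oldest.key]
        link = _Link()
        link.key, link.value = key, value
        last = self.root.prev
        last.next = self.root.prev = link
        link.prev, link.next = last, self.root
        self.map[key] = link
```

For example, with `maxsize=2`, inserting "a" and "b", then hitting "a", then inserting "c" evicts "b" (the least recently used entry) without rehashing the surviving keys.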

So I'll stop trying to remove lru_cache's own implementation.
Instead, I'll try to reduce lru_cache's overhead by removing the GC
header from the link nodes.
History
Date User Action Args
2017-12-25 06:53:12inada.naokisetrecipients: + inada.naoki, rhettinger, serhiy.storchaka
2017-12-25 06:53:12inada.naokisetmessageid: <1514184792.46.0.213398074469.issue32422@psf.upfronthosting.co.za>
2017-12-25 06:53:12inada.naokilinkissue32422 messages
2017-12-25 06:53:11inada.naokicreate