
Author cami
Recipients cami, serhiy.storchaka
Date 2012-11-03.18:19:37
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1351966777.77.0.541110677033.issue16394@psf.upfronthosting.co.za>
In-reply-to
Content
No. The sample code is a demonstration of how to do it; it is by no means a full-fledged patch.

The drawback of the current implementation is that if you tee n-fold and then advance one of the iterators m times, it fills the remaining n-1 queues with m references each, for a total of (n-1)*m references. The documentation explicitly mentions that this is unfortunate.

I only demonstrate that it is entirely unnecessary to fill n separate queues, since you can use a single queue and index into it. Instead of storing duplicate references, you can store a counter with each cached item. Replacing duplicates with refcounts turns (n-1)*m references into 2*m references (half of which are the counters).
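For illustration, a minimal single-threaded sketch of that idea (this is not the attached demo code, and the name tee_counted is made up): one shared deque of [item, remaining_count] entries plus a per-iterator position, with entries discarded as soon as their count reaches zero.

from collections import deque

def tee_counted(iterable, n=2):
    it = iter(iterable)
    cache = deque()        # shared queue of [item, remaining_count] entries
    start = [0]            # absolute stream position of cache[0]
    positions = [0] * n    # number of items each tee iterator has yielded

    def gen(i):
        while True:
            index = positions[i] - start[0]
            if index == len(cache):
                # this iterator is furthest ahead: fetch a fresh item
                try:
                    cache.append([next(it), n])
                except StopIteration:
                    return
            entry = cache[index]
            positions[i] += 1
            entry[1] -= 1
            # counts never decrease from front to back, so fully consumed
            # entries can only sit at the front; discard them eagerly
            while cache and cache[0][1] == 0:
                cache.popleft()
                start[0] += 1
            yield entry[0]

    return tuple(gen(i) for i in range(n))

For example, with a, b, c, d = tee_counted(range(1000), 4), exhausting a leaves 1000 [item, count] entries (2*m stored objects) instead of (n-1)*m = 3000 queued references.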

Not in the demo code: you can improve this further by storing an item in an iterator-local queue when that iterator is the only one that still needs to return it, and in a shared queue with a refcount when more than one does. That way, you eliminate the overhead of storing (item, 1) instead of item, at a fixed cost per iterator.
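A speculative sketch of that refinement (again, not the attached code; tee_hybrid and the helper names are invented): each iterator gets a private deque for items that only it still needs, and the shared counted queue hands its front entry over to the sole remaining reader as soon as the count drops to 1.

from collections import deque

def tee_hybrid(iterable, n=2):
    it = iter(iterable)
    shared = deque()                      # [item, remaining_count] entries
    start = [0]                           # absolute stream position of shared[0]
    local = [deque() for _ in range(n)]   # items only iterator i still needs
    positions = [0] * n                   # items yielded so far per iterator

    def normalize():
        # drop fully consumed entries from the front
        while shared and shared[0][1] == 0:
            shared.popleft()
            start[0] += 1
        # hand an entry needed by exactly one iterator to that iterator's
        # private queue, where no counter has to be stored at all
        while shared and shared[0][1] == 1:
            lagger = min(range(n), key=lambda j: positions[j] + len(local[j]))
            local[lagger].append(shared.popleft()[0])
            start[0] += 1

    def gen(i):
        while True:
            if local[i]:                          # private items first
                positions[i] += 1
                yield local[i].popleft()
                continue
            index = positions[i] - start[0]
            if index == len(shared):              # furthest ahead: fetch fresh item
                try:
                    shared.append([next(it), n])
                except StopIteration:
                    return
            entry = shared[index]
            entry[1] -= 1
            positions[i] += 1
            normalize()
            yield entry[0]

    return tuple(gen(i) for i in range(n))

For n=2 the shared queue stays empty and everything flows through the private deques without counters, which is the common case the refinement targets.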
History
Date User Action Args
2012-11-03 18:19:37  cami  set     recipients: + cami, serhiy.storchaka
2012-11-03 18:19:37  cami  set     messageid: <1351966777.77.0.541110677033.issue16394@psf.upfronthosting.co.za>
2012-11-03 18:19:37  cami  link    issue16394 messages
2012-11-03 18:19:37  cami  create