Message174659
No. The sample code is a demonstration of how to do it; it is by no means a full-fledged patch.
The drawback of the current implementation is that if you tee n-fold and then advance one of the iterators m times, it fills the other n-1 queues with m references each, for a total of (n-1)*m references. The documentation explicitly mentions that this is unfortunate.
I only demonstrate that it is perfectly unnecessary to fill n separate queues: you can use a single shared queue and index into it. Instead of storing duplicate references, you store a counter with each cached item reference. Replacing duplicates with refcounts turns the (n-1)*m references into 2*m entries (half of which are the counters).
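A minimal sketch of that single-shared-queue idea (the name `tee_refcount` and the exact data layout are illustrative, not the code attached to this issue): each cached entry is an `[item, remaining_consumers]` pair, and an entry is dropped from the front once every iterator has consumed it.

```python
from collections import deque

def tee_refcount(iterable, n=2):
    """Illustrative tee variant: one shared cache of [item, refcount]
    pairs instead of n separate queues. Not the stdlib implementation."""
    it = iter(iterable)
    cache = deque()   # entries are [item, remaining_consumers]
    offset = [0]      # how many entries have been popped from the front

    def gen(pos):
        # pos: absolute index of the next item this iterator must yield
        while True:
            local = pos - offset[0]          # index into the shared cache
            if local == len(cache):
                try:
                    item = next(it)
                except StopIteration:
                    return
                cache.append([item, n])      # all n iterators still need it
            entry = cache[local]
            entry[1] -= 1
            if entry[1] == 0 and local == 0:
                cache.popleft()              # everyone has consumed it
                offset[0] += 1
            pos += 1
            yield entry[0]

    return tuple(gen(0) for _ in range(n))
```

Because iterators consume items in order, an entry's count can only reach zero when it sits at the front of the cache, so a single `popleft` keeps the queue trimmed to exactly the span between the slowest and fastest iterator.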
Not in the demo code: you can improve this further by storing an item in an iterator-local queue when that iterator is the only one that still needs to return it, and in the shared queue with a refcount only while several of them do. That way you eliminate the overhead of storing (item, 1) instead of item, at a fixed cost per iterator.
Date                | User | Action | Args
2012-11-03 18:19:37 | cami | set    | recipients: + cami, serhiy.storchaka
2012-11-03 18:19:37 | cami | set    | messageid: <1351966777.77.0.541110677033.issue16394@psf.upfronthosting.co.za>
2012-11-03 18:19:37 | cami | link   | issue16394 messages
2012-11-03 18:19:37 | cami | create |