
Author: rhettinger
Recipients: Dennis Sweeney, rhettinger
Date: 2019-11-30 22:10:15
Content
Several thoughts:

* heapq.merge() could have plausibly been in the itertools module instead of the heapq module.  However, since it already has a home, there is no reason to move it or to create duplication with an intermodule dependency.  We certainly don't have a rule that the itertools module must house everything that accepts an iterator as input and returns an iterator as output.

* The iter_merge() recipe is a bit sprawling and IMHO not a useful teaching aid (being a teaching aid is a primary goal of the itertools recipes), so it doesn't have much documentation value.  Instead, consider submitting this to the "more-itertools" project, which is referenced by the docs.  There it would likely be used directly rather than as a recipe.

* Over the next few days, I'll take a closer look at the binary tree approach versus a heap approach.  I like how it intrinsically avoids the overhead of the 4-tuple while maintaining order (a simplified sketch of the heap-entry layout is included after this list), but I want to test how well it does with slow comparison keys (i.e. does the total number of comparisons get worse).  Am not sure whether the tree needs to be rebalanced as some of the input streams become exhausted.  I have a vague recollection that Knuth's TAOCP deemed the heap approach to be algorithmically superior, but I need to dig out my books to verify that.

* We could just build out the current heapq.merge() code to include a special case for only two input iterables.  That would speed up a common case by avoiding the overhead of tracking a 4-tuple for each element (a rough sketch of such a fast path is included after this list).  If you want to submit a PR for that, I would be happy to take a close look and verify the timings.

* It might be interesting to time the pure python variants against "sorted(chain(a, b, c, d, e))".  It isn't quite an apples-to-apples comparison because the latter pulls all the data into memory and it gets the runs pre-segregated, but it might give a sense of how well merge() could do if we decided to go gonzo on optimizations (a minimal timing harness is sketched after this list).  Until now, we've been satisfied with letting the heap structure minimize the number of comparisons and there hasn't been a focus on reducing the constant factor overhead.
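
For reference, here is a simplified sketch of the heap-based approach (not the actual CPython implementation), showing the per-element (key, order, value, iterator) entry whose construction and comparison is the overhead in question:

    import heapq

    def heap_merge_sketch(*iterables, key=None):
        """Simplified heap-based merge: every heap entry is a
        (key, order, value, iterator) tuple, so each yielded element pays
        for building and comparing a 4-tuple.  The order field is unique
        per iterable, so ties on the key never fall through to comparing
        the values themselves, and stability is preserved."""
        keyfunc = (lambda obj: obj) if key is None else key
        heap = []
        for order, it in enumerate(map(iter, iterables)):
            for value in it:                 # seed one entry per non-empty iterable
                heap.append((keyfunc(value), order, value, it))
                break
        heapq.heapify(heap)
        while heap:
            _, order, value, it = heap[0]
            yield value
            for value in it:                 # refill from the same iterator
                heapq.heapreplace(heap, (keyfunc(value), order, value, it))
                break
            else:                            # that iterator is exhausted
                heapq.heappop(heap)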
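
A rough, hypothetical sketch of what a two-iterable fast path could look like, avoiding any per-element tuple (no key support, untested beyond the basic cases, just to illustrate the shape of the special case):

    def merge2(a, b):
        """Hypothetical fast path merging exactly two sorted iterables
        without wrapping each element in a (key, order, value, iterator)
        tuple.  Stable: ties are resolved in favor of the first iterable."""
        a, b = iter(a), iter(b)
        try:
            x = next(a)
        except StopIteration:
            yield from b
            return
        try:
            y = next(b)
        except StopIteration:
            yield x
            yield from a
            return
        while True:
            if y < x:
                yield y
                try:
                    y = next(b)
                except StopIteration:
                    yield x
                    yield from a
                    return
            else:
                yield x
                try:
                    x = next(a)
                except StopIteration:
                    yield y
                    yield from b
                    return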
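
A minimal timing harness along those lines might look like the following; the number of runs, the run sizes, and the repetition count are arbitrary placeholders:

    from heapq import merge
    from itertools import chain
    from random import random
    from timeit import timeit

    # Five pre-sorted runs of 10,000 floats each (sizes chosen arbitrarily).
    runs = [sorted(random() for _ in range(10_000)) for _ in range(5)]

    t_merge = timeit(lambda: list(merge(*runs)), number=100)
    t_sorted = timeit(lambda: sorted(chain(*runs)), number=100)
    print(f'heapq.merge():   {t_merge:.3f} seconds')
    print(f'sorted(chain()): {t_sorted:.3f} seconds')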