
Author Dennis Sweeney
Recipients Dennis Sweeney, bbayles, rhettinger, serhiy.storchaka, tim.peters
Date 2020-03-14.21:24:32
First, as I posted, there is a theoretical advantage of fewer comparisons in all cases, and the new algorithm would be especially dominant when one iterable keeps winning. (I'm given to understand that "lists very often do have exploitable partial order in real life" ;-)
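To make the "one iterable keeps winning" case concrete, here is a toy demonstration (not the patch itself): a hypothetical `Counted` wrapper tallies every `<` comparison that `heapq.merge` performs when one input is entirely smaller than the other.

```python
import heapq

class Counted:
    """Hypothetical wrapper: counts every less-than comparison."""
    count = 0

    def __init__(self, value):
        self.value = value

    def __lt__(self, other):
        Counted.count += 1
        return self.value < other.value

# One iterable "keeps winning": every element of a is below every element of b.
a = [Counted(i) for i in range(1000)]
b = [Counted(i) for i in range(1000, 2000)]

Counted.count = 0
result = list(heapq.merge(a, b))
print(len(result), Counted.count)
```

An algorithm that notices the winning run can skip most of these comparisons, which is where the claimed advantage comes from.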

> making heapq.merge a class looks unrelated to the declared goal.

Is there a better way to implement a C accelerator for a Python function that returns a generator? I figured it would be best for the pure-Python implementation to match the C implementation.
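For illustration, the class-based shape being discussed can be sketched like this (a minimal sketch, not the actual patch; it delegates to the existing `heapq.merge` rather than reimplementing the algorithm). Because iteration is driven by `__iter__`/`__next__`, the call site looks identical to a generator function, and a C type can provide the same interface.

```python
import heapq

class merge:
    """Sketch of merge-as-a-class: iterable, same call signature."""

    def __init__(self, *iterables, key=None, reverse=False):
        # Delegate to the stdlib generator for this sketch.
        self._it = heapq.merge(*iterables, key=key, reverse=reverse)

    def __iter__(self):
        return self

    def __next__(self):
        return next(self._it)

print(list(merge([1, 3], [2, 4])))  # → [1, 2, 3, 4]
```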

As for the timing: I'm running Windows, so pyperf gives "WARNING: unable to increase process priority", and "pyperf system tune" gives "WARNING: no operation available for your platform", but I'm still able to get the following results:

.\python.bat -m pyperf timeit -s "from random import random; from heapq import merge; from collections import deque; iters = [sorted(random() for j in range(10_000)) for i in range(20)]" "deque(merge(*iters), maxlen=0)"

    Master: Mean +- std dev: 88.0 ms +- 10.3 ms
    This PR: Mean +- std dev: 28.2 ms +- 1.0 ms

I'll call the above "merging 20 of size 10_000."
For "merging 2 of size 100_000", I get:
    Master: Mean +- std dev: 34.4 ms +- 3.2 ms
    This PR: Mean +- std dev: 10.6 ms +- 0.7 ms

Merging 10_000 of size 100:

    Master: Mean +- std dev: 1.56 sec +- 0.11 sec
    This PR: Mean +- std dev: 628 ms +- 22 ms
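As a side note on the benchmark setup: the `deque(merge(*iters), maxlen=0)` in the timed statement is the standard idiom for draining an iterator at C speed without storing its output, so the measurement reflects the merge itself rather than list building. A small standalone illustration:

```python
from collections import deque
from heapq import merge

# Two small sorted inputs, standing in for the benchmark's 20 lists.
iters = [sorted([5, 1, 3]), sorted([4, 2, 6])]

# maxlen=0 means every produced item is immediately discarded,
# so the deque just exhausts the iterator.
sink = deque(merge(*iters), maxlen=0)
print(len(sink))  # → 0
```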
Linked to issue38938.