
Author tim.peters
Recipients PedanticHacker, Stefan Pochmann, mark.dickinson, mcognetta, rhettinger, serhiy.storchaka, tim.peters
Date 2021-12-24.19:40:20
If people are keen to speed comb() for larger arguments, the approach in Stefan's "" is very much worth looking at. I've played with that idea before, and it runs circles around any other approach I've seen.

The only divisions it does are of integers no larger than n by primes no larger than k (the "effective" k: min(k, n-k)). The current CPython implementation guarantees the latter fit in a C "long long", so the divisor is never notably stressful.

The downside is that it requires O(k * log(n)) temp storage to hold list(range(n, n-k, -1)), plus O(k) temp storage for a sieve to find the primes <= k.
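To make the idea concrete, here's a minimal sketch of that approach (names and structure are mine, not Stefan's code): build the k numerator factors, sieve the primes <= k, and for each prime strip out exactly the number of factors it contributes to k! (via Legendre's formula). Every division is of an int <= n by a prime <= k.

```python
from math import prod

def sieve_primes(limit):
    """Simple Sieve of Eratosthenes: all primes <= limit (O(k) temp storage)."""
    if limit < 2:
        return []
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = [False] * len(is_prime[p * p :: p])
    return [p for p, flag in enumerate(is_prime) if flag]

def comb_via_primes(n, k):
    """Hypothetical sketch of the prime-division idea for C(n, k)."""
    k = min(k, n - k)               # the "effective" k
    if k < 0:
        return 0
    nums = list(range(n, n - k, -1))  # the k numerator factors
    for p in sieve_primes(k):
        # exponent of p in k!, by Legendre's formula
        e, q = 0, k
        while q:
            q //= p
            e += q
        # strip e factors of p from the numerator list; since C(n, k) is an
        # integer, the numerator always holds at least e factors of p
        i = 0
        while e:
            while e and nums[i] % p == 0:
                nums[i] //= p
                e -= 1
            i += 1
    return prod(nums)               # what remains multiplies to C(n, k)
```

The remaining product would ideally go through a balanced-tree multiply rather than `math.prod`, for the reason discussed next.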

A subtlety: Stefan's `myprod()` is also important to the speed gains when k gets large. For example, given xs = list(range(1, 1000000)), math.prod(xs) takes over 40 times longer than myprod(xs). myprod() is obscurely coded (presumably for peak speed), but effectively arranges the multiplications in a complete binary tree (e.g., myprod([1, 2, 3, 4, 5, 6, 7, 8]) is evaluated as ((1*2)*(3*4))*((5*6)*(7*8))). This gives Karatsuba multiplication good chances to get exploited at higher levels in the tree.
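A straightforward (if not peak-speed) way to get that same complete-binary-tree evaluation order is to multiply adjacent pairs repeatedly; this sketch is mine, not `myprod()` itself:

```python
def tree_prod(xs):
    """Product of xs evaluated as a complete binary tree: multiply adjacent
    pairs, then pairs of pairs, and so on. Operands at each level have
    similar bit lengths, which lets Karatsuba multiplication pay off near
    the top of the tree."""
    xs = list(xs)
    if not xs:
        return 1
    while len(xs) > 1:
        pairs = [xs[i] * xs[i + 1] for i in range(0, len(xs) - 1, 2)]
        if len(xs) % 2:               # odd element out is carried up unchanged
            pairs.append(xs[-1])
        xs = pairs
    return xs[0]
```

For example, tree_prod([1, 2, 3, 4, 5, 6, 7, 8]) multiplies in exactly the ((1*2)*(3*4))*((5*6)*(7*8)) order described above.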

The "divide-and-conquer" recurrence already checked in also bought speed gains by getting Karatsuba in play, but is much less effective overall because it can still need divisions of giant ints by giant ints. Then again, it's frugal with memory.
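One plausible form of such a recurrence (a sketch under my assumptions, not necessarily the exact code that was checked in) rests on the identity C(n, k) * C(k, j) = C(n, j) * C(n-j, k-j), with j = k//2. The two multiplicands are both large, so Karatsuba can help, but the final exact division is of one giant int by another:

```python
from math import comb   # used here only for the small base case

def comb_dc(n, k):
    """Hedged sketch of a divide-and-conquer binomial recurrence.
    Cutoff and base case are hypothetical, not CPython's actual tuning."""
    k = min(k, n - k)               # the "effective" k
    if k < 0:
        return 0
    if k < 20:                      # hypothetical cutoff for the base case
        return comb(n, k)
    j = k // 2
    # C(n, k) = C(n, j) * C(n-j, k-j) // C(k, j): the division is exact,
    # but both dividend and divisor are giant ints
    return comb_dc(n, j) * comb_dc(n - j, k - j) // comb_dc(k, j)
```

Note that the only extra memory is the recursion stack, which is why this variant is frugal with memory compared to the prime-division approach.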