Issue 24076: sum() several times slower on Python 3 64-bit

Created on 2015-04-29 18:36 by lukasz.langa, last changed 2022-04-11 14:58 by admin. This issue is now closed.

Files

| File name | Uploaded |
|---|---|
| pylong_freelist.patch | scoder, 2015-05-01 14:02 |
| unpack_single_digits.patch | scoder, 2018-08-12 12:36 |

Pull Requests

| URL | Status | Linked |
|---|---|---|
| PR 28469 | merged | scoder, 2021-09-20 08:10 |
| PR 28493 | merged | pablogsal, 2021-09-21 16:52 |

Messages (35)

msg242238 - Author: Łukasz Langa (lukasz.langa) - Date: 2015-04-29 18:36

I got a report that summing numbers is noticeably slower on Python 3. This is easily reproducible:

```
$ time python2.7 -c "print sum(xrange(3, 10**9, 3)) + sum(xrange(5, 10**9, 5)) - sum(xrange(15, 10**9, 15))"
233333333166666668

real    0m6.165s
user    0m6.100s
sys     0m0.032s

$ time python3.4 -c "print(sum(range(3, 10**9, 3)) + sum(range(5, 10**9, 5)) - sum(range(15, 10**9, 15)))"
233333333166666668

real    0m16.413s
user    0m16.086s
sys     0m0.089s
```

I can't tell from initial poking what the core issue is here. Both examples produce equivalent bytecode, and builtin_sum() differs noticeably only in that it uses PyLong_* across the board, including PyLong_AsLongAndOverflow. We'll need to profile this, which I haven't had time for yet.

msg242241 - Author: Serhiy Storchaka (serhiy.storchaka) - Date: 2015-04-29 19:02

Can't reproduce on 32-bit Linux.

```
$ time python2.7 -c "print sum(xrange(3, 10**9, 3)) + sum(xrange(5, 10**9, 5)) - sum(xrange(15, 10**9, 15))"
233333333166666668

real    1m11.614s
user    1m11.376s
sys     0m0.056s

$ time python3.4 -c "print(sum(range(3, 10**9, 3)) + sum(range(5, 10**9, 5)) - sum(range(15, 10**9, 15)))"
233333333166666668

real    1m11.658s
user    1m10.980s
sys     0m0.572s

$ python2.7 -m timeit -n1 -r1 "sum(xrange(3, 10**9, 3)) + sum(xrange(5, 10**9, 5)) - sum(xrange(15, 10**9, 15))"
1 loops, best of 1: 72 sec per loop
$ python3.4 -m timeit -n1 -r1 "sum(range(3, 10**9, 3)) + sum(range(5, 10**9, 5)) - sum(range(15, 10**9, 15))"
1 loops, best of 1: 72.5 sec per loop

$ python2.7 -m timeit -s "a = list(range(10**6))" -- "sum(a)"
10 loops, best of 3: 114 msec per loop
$ python3.4 -m timeit -s "a = list(range(10**6))" -- "sum(a)"
10 loops, best of 3: 83.5 msec per loop
```

What is sys.int_info on your build?

msg242242 - Author: Antoine Pitrou (pitrou) - Date: 2015-04-29 19:04

I reproduce under 64-bit Linux. So this may be because the Python long digit (30 bits) is smaller than the C long (64 bits).

Lukasz: is there a specific use case? Note you can use Numpy for such calculations.
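
For reference, the digit size Antoine mentions can be checked directly with sys.int_info; the values in the comments below are the usual defaults for a 64-bit CPython build (a build configured with 15-bit digits would report differently):

```python
import sys

# On a typical 64-bit build: bits_per_digit=30, sizeof_digit=4, i.e. a Python 3
# int is stored as an array of 30-bit "digits" rather than as a single 64-bit
# machine word like Python 2's int type.
print(sys.int_info)
# e.g. sys.int_info(bits_per_digit=30, sizeof_digit=4)
# (newer Python versions append extra str-conversion limit fields)
```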

msg242243 - Author: Łukasz Langa (lukasz.langa) - Date: 2015-04-29 19:23

Serhiy, this is 64-bit specific.

Antoine, as far as I can tell, the main use case is: "Don't make it look like migrating to Python 3 is a terrible performance downgrade." As we discussed at the language summit this year [1], we have to be at least no worse to look appealing. This might be a flawed benchmark, but people will make them anyway. In this particular case, there's internal usage at Twitter that unearthed it. The example is just a simplified repro.

Some perf degradations were expected, like switching text to Unicode. In this case, the end result computed by both 2.7 and 3.4 is the same, so we should be able to address this.

[1] http://lwn.net/Articles/640224/

msg242244 - Author: Antoine Pitrou (pitrou) - Date: 2015-04-29 19:34

If that's due to the different representation of Python 2's int type and Python 3's int type then I don't see an easy solution to this.

msg242259 - Author: Mark Dickinson (mark.dickinson) - Date: 2015-04-30 03:50

Łukasz: there are three ingredients here - sum, (x)range and the integer addition that sum will be performing at each iteration. Is there any chance you can separate the effects on your machine? On my machine (OS X, 64-bit), I'm seeing *some* speed difference in the integer arithmetic, but not enough to explain the whole of the timing mismatch.

One thing we've lost in Python 3 is the fast path for small-int addition *inside* the ceval loop. It may be possible to restore something there.
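
One way to separate those three effects is to time each in isolation with the timeit module; a rough sketch (the range sizes and repeat counts here are illustrative, not taken from this issue):

```python
import timeit

# Iteration cost only: walk the range without doing any arithmetic.
iter_only = timeit.timeit("for i in r: pass",
                          setup="r = range(10**6)", number=10)

# sum() over the same range: adds the per-item unpacking/addition cost.
sum_range = timeit.timeit("sum(r)", setup="r = range(10**6)", number=10)

# Pure integer addition on an existing int, no iterator involved.
add_only = timeit.timeit("x + 1", setup="x = 10**6", number=10**7)

print(f"iterate: {iter_only:.3f}s  sum: {sum_range:.3f}s  add: {add_only:.3f}s")
```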

msg242260 - Author: Mark Dickinson (mark.dickinson) - Date: 2015-04-30 04:08

Throwing out sum, I'm seeing significant slowdown simply from xrange versus range:

```
taniyama:Desktop mdickinson$ python2 -m timeit -s 'x = xrange(3, 10**9, 3)' 'for e in x: pass'
10 loops, best of 3: 5.01 sec per loop
taniyama:Desktop mdickinson$ python3 -m timeit -s 'x = range(3, 10**9, 3)' 'for e in x: pass'
10 loops, best of 3: 8.62 sec per loop
```

msg242262 - Author: Stefan Behnel (scoder) - Date: 2015-04-30 05:30

> there are three ingredients here - sum, (x)range and the integer addition that sum will be performing at each iteration.

... not to forget the interpreter startup time on his machine. :)

I did a tiny bit of profiling and about 90% of the time seems to be spent creating and deallocating throw-away PyLong objects. My guess is that it simply lacks a free-list in _PyLong_New().

msg242294 - Author: Antoine Pitrou (pitrou) - Date: 2015-04-30 23:35

It seems we (like the benchmarks posted) are spending a whole lot of time on something that's probably not relevant to any real-world situation. If someone has actual code that suffers from this, it would be good to know about it.

(Note, by the way, that summing a range() can be done in O(1): it's just a variation on an arithmetic series.)
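
For illustration, the closed-form approach Antoine alludes to looks like this (a sketch; range_sum is a hypothetical helper, not part of the stdlib):

```python
def range_sum(start, stop, step):
    """Sum of range(start, stop, step) in O(1) via the arithmetic-series formula."""
    r = range(start, stop, step)
    n = len(r)
    if n == 0:
        return 0
    # n terms from r[0] to r[-1]: sum = n * (first + last) / 2 (always an integer)
    return n * (r[0] + r[-1]) // 2

# Reproduces the figure from the original report, but in constant time:
print(range_sum(3, 10**9, 3) + range_sum(5, 10**9, 5) - range_sum(15, 10**9, 15))
# 233333333166666668
```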

msg242300 - Author: Stefan Behnel (scoder) - Date: 2015-05-01 06:09

I don't think it's irrelevant. Throw-away integers are really not uncommon. For-loops use them quite often, and non-trivial arithmetic expressions can create a lot of intermediate temporaries. Speeding up the create-delete cycle of PyLong sounds like a very obvious thing to do.

Imagine some code that iterates over a list of integers, applies some calculation to them, and then stores them in a new list, maybe even using a list comprehension. If you could speed up the intermediate calculation by avoiding the overhead of creating temporary PyLong objects, such code could benefit a lot.

I suspect that adding a free-list for single-digit PyLong objects (the most common case) would provide some visible benefit.
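
To illustrate the free-list idea Stefan describes, here is a conceptual object-pool sketch in pure Python (CPython's actual free lists live in C inside the object allocators; IntBoxPool and its methods are made-up names for illustration only):

```python
class IntBoxPool:
    """Conceptual object pool: reuse released wrapper objects instead of
    allocating new ones, mirroring what a PyLong free list would do in C."""

    class IntBox:
        __slots__ = ("value",)

    def __init__(self, max_size=100):
        self._free = []            # previously released boxes, ready for reuse
        self._max_size = max_size

    def new(self, value):
        box = self._free.pop() if self._free else self.IntBox()
        box.value = value          # reinitialize instead of reallocating
        return box

    def release(self, box):
        if len(self._free) < self._max_size:
            self._free.append(box)  # keep it around for the next new()

pool = IntBoxPool()
a = pool.new(7)
pool.release(a)
b = pool.new(8)        # reuses the same object; no fresh allocation
assert a is b
```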

msg242302 - Author: Antoine Pitrou (pitrou) - Date: 2015-05-01 10:44

On 01/05/2015 08:09, Stefan Behnel wrote:
> I don't think it's irrelevant. Throw-away integers are really not uncommon. For-loops use them quite often, non-trivial arithmetic expressions can create a lot of intermediate temporaries. Speeding up the create-delete cycle of PyLong sounds like a very obvious thing to do.

That may be a good thing indeed. I'm just saying that the benchmarks people are worried about here are completely pointless.

msg242310 - Author: Stefan Behnel (scoder) - Date: 2015-05-01 14:02

I tried implementing a freelist. Patch attached, mostly adapted from the one in dictobject.c, but certainly needs a bit of cleanup. The results are not bad, about 10-20% faster:

Original:

```
$ ./python -m timeit 'sum(range(1, 100000))'
1000 loops, best of 3: 1.86 msec per loop
$ ./python -m timeit -s 'l = list(range(1000, 10000))' '[(i*2+5) // 7 for i in l]'
1000 loops, best of 3: 1.05 msec per loop
```

With freelist:

```
$ ./python -m timeit 'sum(range(1, 100000))'
1000 loops, best of 3: 1.52 msec per loop
$ ./python -m timeit -s 'l = list(range(1000, 10000))' '[(i*2+5) // 7 for i in l]'
1000 loops, best of 3: 931 usec per loop
```

msg242357 - Author: Steven D'Aprano (steven.daprano) - Date: 2015-05-01 23:46

Antoine asked:
> If someone has actual code that suffers from this, it would be good to know about it.

You might have missed Łukasz' earlier comment: "In this particular case, there's internal usage at Twitter that unearthed it. The example is just a simplified repro."

msg242914 - Author: Stefan Behnel (scoder) - Date: 2015-05-11 20:08

Issue 24165 was created to pursue the path of a free-list for PyLong objects.

msg323443 - Author: Stefan Behnel (scoder) - Date: 2018-08-12 12:36

FWIW, a PGO build of Py3.7 is now about 20% *faster* here than my Ubuntu 16.04 system Python 2.7, and for some (probably unrelated) reason, the system Python 3.5 is another 2% faster on my side.

IMHO, the only other thing that seems obvious to try would be to inline the unpacking of single-digit PyLongs into sum(). I attached a simple patch that does that, in case someone wants to test it out. For non-PGO builds, it's about 17% faster for me. I didn't take the time to benchmark PGO builds with it.
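
For context, "single digit" refers to CPython's internal base-2**30 representation: on a typical 64-bit build (as reported by sys.int_info), any int whose absolute value is below 2**30 fits in one digit. A quick check:

```python
import sys

BITS = sys.int_info.bits_per_digit     # 30 on typical 64-bit builds
ONE_DIGIT_MAX = 2**BITS - 1            # largest magnitude that fits in one digit

# The values produced by range() in the benchmarks are single-digit,
# but sum()'s running total quickly grows past that boundary:
print((2**29).bit_length() <= BITS)    # True  -> stored in one digit
print((2**30).bit_length() <= BITS)    # False -> needs two digits
```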

msg402136 - Author: Guido van Rossum (gvanrossum) - Date: 2021-09-18 17:39

@Stefan
> FWIW, a PGO build of Py3.7 is now about 20% *faster* here than my Ubuntu 16.04 system Python 2.7

Does that mean we can close this issue? Or do I misunderstand what you are comparing? 32 vs. 64 bits? PGO vs. non-PGO?

OTOH on my Mac I find that 3.10 with PGO is still more than twice as slow as 2.7. Thinking about it, that's a bit odd, since (presumably) the majority of the work in sum() involves a long int result (even though the values returned by range() all fit in 30 bits, the sum quickly exceeds that).

(earlier)
> I suspect that adding a free-list for single-digit PyLong objects (the most common case) would provide some visible benefit.

If my theory is correct, that wouldn't help this particular case, right?

FWIW just "for i in [x]range(15, 10**9, 15): pass" is about the same speed in Python 2.7 as in 3.11.

msg402201 - Author: Raymond Hettinger (rhettinger) - Date: 2021-09-20 07:22

> OTOH on my Mac I find that 3.10 with PGO is still
> more than twice as slow as 2.7.
> Thinking about it, that's a bit odd, since (presumably)
> the majority of the work in sum() involves a long int result
> (even though the values returned by range() all fit in 30 bits,
> the sum quickly exceeds that).

The actual accumulation of a long int result is still as fast as it ever was. The main difference from Py2.7 isn't the addition; it is that detecting and extracting the small int being added has become expensive.

-- Python 2 fastpath --------------------------------------

```c
if (PyInt_CheckExact(item)) {                  // Very cheap
    long b = PyInt_AS_LONG(item);              // Very cheap
    long x = i_result + b;                     // Very cheap
    if ((x^i_result) >= 0 || (x^b) >= 0) {     // Semi cheap
        i_result = x;                          // Zero cost
        Py_DECREF(item);                       // Most expensive step, but still cheap
        continue;
    }
}
```

-- Python 3 fastpath --------------------------------------

```c
if (PyLong_CheckExact(item) || PyBool_Check(item)) {        // Cheap
    long b = PyLong_AsLongAndOverflow(item, &overflow);     // Super Expensive
    if (overflow == 0 &&                                    // Branch predictable test
        (i_result >= 0 ? (b <= LONG_MAX - i_result)         // Slower but better test
                       : (b >= LONG_MIN - i_result))) {
        i_result += b;                                      // Very cheap
        Py_DECREF(item);
        continue;
    }
}
```

-- Supporting function ------------------------------------

```c
long
PyLong_AsLongAndOverflow(PyObject *vv, int *overflow)   // OMG, this does a lot of work
{
    /* This version by Tim Peters */
    PyLongObject *v;
    unsigned long x, prev;
    long res;
    Py_ssize_t i;
    int sign;
    int do_decref = 0; /* if PyNumber_Index was called */

    *overflow = 0;
    if (vv == NULL) {
        PyErr_BadInternalCall();
        return -1;
    }

    if (PyLong_Check(vv)) {
        v = (PyLongObject *)vv;
    }
    else {
        v = (PyLongObject *)_PyNumber_Index(vv);
        if (v == NULL)
            return -1;
        do_decref = 1;
    }

    res = -1;
    i = Py_SIZE(v);

    switch (i) {
    case -1:
        res = -(sdigit)v->ob_digit[0];
        break;
    case 0:
        res = 0;
        break;
    case 1:
        res = v->ob_digit[0];
        break;
    default:
        sign = 1;
        x = 0;
        if (i < 0) {
            sign = -1;
            i = -(i);
        }
        while (--i >= 0) {
            prev = x;
            x = (x << PyLong_SHIFT) | v->ob_digit[i];
            if ((x >> PyLong_SHIFT) != prev) {
                *overflow = sign;
                goto exit;
            }
        }
        /* Haven't lost any bits, but casting to long requires extra
         * care (see comment above). */
        if (x <= (unsigned long)LONG_MAX) {
            res = (long)x * sign;
        }
        else if (sign < 0 && x == PY_ABS_LONG_MIN) {
            res = LONG_MIN;
        }
        else {
            *overflow = sign;
            /* res is already set to -1 */
        }
    }
  exit:
    if (do_decref) {
        Py_DECREF(v);
    }
    return res;
}
```

msg402210 - Author: Stefan Behnel (scoder) - Date: 2021-09-20 08:13

I created a PR from my last patch, inlining the unpacking of single digit integers. Since most integers should fit into a single digit these days, this is as fast a path as it gets.

https://github.com/python/cpython/pull/28469

msg402281 - Author: Raymond Hettinger (rhettinger) - Date: 2021-09-20 22:32

> I created a PR from my last patch, inlining the unpacking
> of single digit integers.

Thanks, that gets to the heart of the issue. I marked the PR as approved (though there is a small coding nit you may want to fix).

msg402282 - Author: Guido van Rossum (gvanrossum) - Date: 2021-09-20 23:17

The patch looks fine, but it looks a bit like benchmark chasing. Is the speed of builtin sum() of a sequence of integers important enough to do this bit of inlining? (It may break if we change the internals of PyLong, as Mark Shannon has been wanting to do for a while -- see https://github.com/faster-cpython/ideas/issues/42.)

msg402285 - Author: Stefan Behnel (scoder) - Date: 2021-09-21 05:31

> The patch looks fine, but it looks a bit like benchmark chasing. Is the speed of builtin sum() of a sequence of integers important enough to do this bit of inlining?

Given that we already accepted essentially separate loops for the int, float and everything-else cases, I think the answer is that it doesn't add much to the triplication.

> It may break if we change the internals of PyLong, as Mark Shannon has been wanting to do for a while

I would assume that such a structural change would come with suitable macros to unpack the special 0-2 digit integers. Those would then apply here, too. As it stands, there are already some modules distributed over the source tree that use direct digit access: ceval.c, _decimal.c, marshal.c. They are easy to find with grep, and my PR just adds one more.

msg402286 - Author: Guido van Rossum (gvanrossum) - Date: 2021-09-21 05:36

Sounds good, you have my blessing.

msg402295 - Author: Stefan Behnel (scoder) - Date: 2021-09-21 09:01

New changeset debd80403721b00423680328d6adf160a28fbff4 by scoder in branch 'main':
bpo-24076: Inline single digit unpacking in the integer fastpath of sum() (GH-28469)
https://github.com/python/cpython/commit/debd80403721b00423680328d6adf160a28fbff4

msg402298 - Author: Serhiy Storchaka (serhiy.storchaka) - Date: 2021-09-21 09:21

What are microbenchmark results for PR 28469 in comparison with the baseline?

msg402302 - Author: Stefan Behnel (scoder) - Date: 2021-09-21 09:40

Original:

```
$ ./python -m timeit -s 'd = list(range(2**61, 2**61 + 10000))' 'sum(d)'
500 loops, best of 5: 712 usec per loop
$ ./python -m timeit -s 'd = list(range(2**30, 2**30 + 10000))' 'sum(d)'
2000 loops, best of 5: 149 usec per loop
$ ./python -m timeit -s 'd = list(range(2**29, 2**29 + 10000))' 'sum(d)'
2000 loops, best of 5: 107 usec per loop
$ ./python -m timeit -s 'd = list(range(10000))' 'sum(d)'
2000 loops, best of 5: 107 usec per loop
```

New:

```
$ ./python -m timeit -s 'd = list(range(2**61, 2**61 + 10000))' 'sum(d)'
500 loops, best of 5: 713 usec per loop
$ ./python -m timeit -s 'd = list(range(2**30, 2**30 + 10000))' 'sum(d)'
2000 loops, best of 5: 148 usec per loop
$ ./python -m timeit -s 'd = list(range(2**29, 2**29 + 10000))' 'sum(d)'
5000 loops, best of 5: 77.4 usec per loop
$ ./python -m timeit -s 'd = list(range(10000))' 'sum(d)'
5000 loops, best of 5: 77.2 usec per loop
```

Seems to be 28% faster for the single digit case and exactly as fast as before with larger integers. Note that these are not PGO builds.

msg402306 - Author: Serhiy Storchaka (serhiy.storchaka) - Date: 2021-09-21 09:55

Thank you. Could you please test PGO builds?

msg402310 - Author: Stefan Behnel (scoder) - Date: 2021-09-21 10:29

Hmm, thanks for insisting, Serhiy. I was accidentally using a debug build this time. I'll make a PGO build and rerun the microbenchmarks.

msg402313 - Author: Stefan Behnel (scoder) - Date: 2021-09-21 11:21

Old, with PGO:

```
$ ./python -m timeit -s 'd = list(range(2**61, 2**61 + 10000))' 'sum(d)'
1000 loops, best of 5: 340 usec per loop
$ ./python -m timeit -s 'd = list(range(2**30, 2**30 + 10000))' 'sum(d)'
2000 loops, best of 5: 114 usec per loop
$ ./python -m timeit -s 'd = list(range(2**29, 2**29 + 10000))' 'sum(d)'
5000 loops, best of 5: 73.4 usec per loop
$ ./python -m timeit -s 'd = list(range(10000))' 'sum(d)'
5000 loops, best of 5: 73.3 usec per loop
$ ./python -m timeit -s 'd = [0] * 10000' 'sum(d)'
5000 loops, best of 5: 78.7 usec per loop
```

New, with PGO:

```
$ ./python -m timeit -s 'd = list(range(2**61, 2**61 + 10000))' 'sum(d)'
1000 loops, best of 5: 305 usec per loop
$ ./python -m timeit -s 'd = list(range(2**30, 2**30 + 10000))' 'sum(d)'
2000 loops, best of 5: 115 usec per loop
$ ./python -m timeit -s 'd = list(range(2**29, 2**29 + 10000))' 'sum(d)'
5000 loops, best of 5: 52.4 usec per loop
$ ./python -m timeit -s 'd = list(range(10000))' 'sum(d)'
5000 loops, best of 5: 54 usec per loop
$ ./python -m timeit -s 'd = [0] * 10000' 'sum(d)'
5000 loops, best of 5: 45.8 usec per loop
```

The results are a bit more mixed with PGO optimisation (I tried a couple of times), not sure why. Might just be normal fluctuation, bad benchmark value selection, or accidental PGO tuning, can't say. In any case, the 1-digit case (10000, 2**29) is again about 28% faster and none of the other cases seems (visibly) slower. I think this is a very clear net win.

msg402315 - Author: Serhiy Storchaka (serhiy.storchaka) - Date: 2021-09-21 12:48

Thank you again, Stefan. Now no doubts are left.

BTW, pyperf gives more stable results. I use it if I have any doubts (either the results of timeit are not stable or the difference is less than, say, 10%).
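
For example, the single-digit microbenchmark above could be rerun under pyperf roughly like this (a sketch using pyperf's Runner API; the script name and benchmark label are made up):

```python
# Save as e.g. bench_sum.py and run with: python bench_sum.py
import pyperf

runner = pyperf.Runner()
# pyperf spawns several worker processes and reports mean +- std dev,
# which tends to be more stable than a bare timeit run.
runner.timeit(
    "sum of single-digit ints",
    stmt="sum(d)",
    setup="d = list(range(2**29, 2**29 + 10000))",
)
```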

msg402325 - Author: Pablo Galindo Salgado (pablogsal) - Date: 2021-09-21 16:45

Unfortunately commit debd80403721b00423680328d6adf160a28fbff4 introduced a reference leak:

```
❯ ./python -m test test_grammar -R :
0:00:00 load avg: 2.96 Run tests sequentially
0:00:00 load avg: 2.96 [1/1] test_grammar
beginning 9 repetitions
123456789
.........
test_grammar leaked [12, 12, 12, 12] references, sum=48
test_grammar failed (reference leak)

== Tests result: FAILURE ==

1 test failed:
    test_grammar

Total duration: 1.1 sec
Tests result: FAILURE

debd80403721b00423680328d6adf160a28fbff4 is the first bad commit
commit debd80403721b00423680328d6adf160a28fbff4
Author: scoder <stefan_ml@behnel.de>
Date:   Tue Sep 21 11:01:18 2021 +0200

    bpo-24076: Inline single digit unpacking in the integer fastpath of sum() (GH-28469)

 .../Core and Builtins/2021-09-20-10-02-12.bpo-24076.ZFgFSj.rst |  1 +
 Python/bltinmodule.c                                           | 10 +++++++++-
 2 files changed, 10 insertions(+), 1 deletion(-)
 create mode 100644 Misc/NEWS.d/next/Core and Builtins/2021-09-20-10-02-12.bpo-24076.ZFgFSj.rst
```
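
Conceptually, the refleak check behind regrtest's -R option reruns the test and watches the interpreter-wide reference count between runs. A rough sketch of the idea (only meaningful on a --with-pydebug build, where sys.gettotalrefcount() exists; leaks_refs is a made-up helper, not regrtest's actual code):

```python
import sys

def leaks_refs(test_func, repeats=4):
    """Rough idea behind regrtest's -R: rerun the test and see whether the
    total reference count keeps growing by a constant amount each time."""
    deltas = []
    for _ in range(repeats):
        before = sys.gettotalrefcount()   # available in debug builds only
        test_func()
        deltas.append(sys.gettotalrefcount() - before)
    # A steady positive delta, e.g. [12, 12, 12, 12], indicates a leak.
    return all(d > 0 for d in deltas) and len(set(deltas)) == 1
```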

msg402326 - Author: Pablo Galindo Salgado (pablogsal) - Date: 2021-09-21 16:53

Opened #28493 to fix the refleak

msg402328 - Author: Pablo Galindo Salgado (pablogsal) - Date: 2021-09-21 16:53

Sorry, I meant PR 28493

msg402334 - Author: Pablo Galindo Salgado (pablogsal) - Date: 2021-09-21 17:39

New changeset 1c7e98dc258a0e7ccd2325a1aefc4aa2de51e1c5 by Pablo Galindo Salgado in branch 'main':
bpo-24076: Fix reference in sum() introduced by GH-28469 (GH-28493)
https://github.com/python/cpython/commit/1c7e98dc258a0e7ccd2325a1aefc4aa2de51e1c5

msg402410 - Author: Stefan Behnel (scoder) - Date: 2021-09-22 08:03

Sorry for that, Pablo. I knew exactly where the problem was the second I read your notification. Thank you for resolving it so quickly.

msg402421 - Author: Pablo Galindo Salgado (pablogsal) - Date: 2021-09-22 10:30

Always happy to help :)

History

| Date | User | Action | Args |
|---|---|---|---|
| 2022-04-11 14:58:16 | admin | set | github: 68264 |
| 2021-09-22 10:30:16 | pablogsal | set | messages: + msg402421 |
| 2021-09-22 08:03:04 | scoder | set | messages: + msg402410 |
| 2021-09-21 17:39:11 | pablogsal | set | messages: + msg402334 |
| 2021-09-21 16:53:32 | pablogsal | set | messages: + msg402328 |
| 2021-09-21 16:53:16 | pablogsal | set | messages: + msg402326 |
| 2021-09-21 16:52:41 | pablogsal | set | pull_requests: + pull_request26888 |
| 2021-09-21 16:45:45 | pablogsal | set | nosy: + pablogsal; messages: + msg402325 |
| 2021-09-21 12:48:29 | serhiy.storchaka | set | messages: + msg402315 |
| 2021-09-21 11:22:53 | scoder | set | status: open -> closed; resolution: fixed; stage: patch review -> resolved |
| 2021-09-21 11:21:49 | scoder | set | messages: + msg402313 |
| 2021-09-21 10:29:41 | scoder | set | messages: + msg402310 |
| 2021-09-21 09:55:23 | serhiy.storchaka | set | messages: + msg402306 |
| 2021-09-21 09:40:00 | scoder | set | messages: + msg402302 |
| 2021-09-21 09:39:43 | scoder | set | messages: - msg402301 |
| 2021-09-21 09:38:36 | scoder | set | messages: + msg402301 |
| 2021-09-21 09:21:38 | serhiy.storchaka | set | messages: + msg402298 |
| 2021-09-21 09:01:22 | scoder | set | messages: + msg402295 |
| 2021-09-21 05:36:23 | gvanrossum | set | messages: + msg402286 |
| 2021-09-21 05:31:20 | scoder | set | messages: + msg402285 |
| 2021-09-20 23:17:22 | gvanrossum | set | messages: + msg402282 |
| 2021-09-20 22:32:13 | rhettinger | set | messages: + msg402281 |
| 2021-09-20 08:13:17 | scoder | set | messages: + msg402210; versions: + Python 3.11, - Python 3.7 |
| 2021-09-20 08:10:59 | scoder | set | stage: needs patch -> patch review; pull_requests: + pull_request26868 |
| 2021-09-20 07:22:26 | rhettinger | set | messages: + msg402201 |
| 2021-09-18 17:39:37 | gvanrossum | set | nosy: + gvanrossum; messages: + msg402136 |
| 2018-08-12 12:44:11 | pitrou | set | nosy: - pitrou |
| 2018-08-12 12:36:17 | scoder | set | files: + unpack_single_digits.patch; messages: + msg323443; versions: - Python 3.5 |
| 2017-04-11 08:16:30 | louielu | set | title: sum() several times slower on Python 3 -> sum() several times slower on Python 3 64-bit |
| 2017-04-11 08:14:25 | louielu | set | versions: + Python 3.7 |
| 2015-05-11 20:08:55 | scoder | set | messages: + msg242914 |
| 2015-05-01 23:46:19 | steven.daprano | set | nosy: + steven.daprano; messages: + msg242357 |
| 2015-05-01 14:02:21 | scoder | set | files: + pylong_freelist.patch; keywords: + patch; messages: + msg242310 |
| 2015-05-01 10:44:15 | pitrou | set | messages: + msg242302 |
| 2015-05-01 06:09:58 | scoder | set | messages: + msg242300 |
| 2015-04-30 23:35:11 | pitrou | set | messages: + msg242294 |
| 2015-04-30 05:30:12 | scoder | set | nosy: + scoder; messages: + msg242262 |
| 2015-04-30 04:08:30 | mark.dickinson | set | messages: + msg242260 |
| 2015-04-30 03:50:27 | mark.dickinson | set | nosy: + mark.dickinson; messages: + msg242259 |
| 2015-04-29 19:34:25 | pitrou | set | messages: + msg242244; versions: - Python 3.4 |
| 2015-04-29 19:23:39 | lukasz.langa | set | messages: + msg242243 |
| 2015-04-29 19:04:22 | pitrou | set | messages: + msg242242 |
| 2015-04-29 19:02:03 | serhiy.storchaka | set | nosy: + serhiy.storchaka; messages: + msg242241 |
| 2015-04-29 18:36:58 | lukasz.langa | create | |