classification
Title: Use calloc() instead of malloc() for int << int (lshift)
Type: performance
Stage:
Components:
Versions: Python 3.5

process
Status: closed
Resolution: rejected
Dependencies:
Superseder:
Assigned To:
Nosy List: josh.r, pitrou, vstinner
Priority: normal
Keywords: patch

Created on 2014-05-02 20:51 by vstinner, last changed 2022-04-11 14:58 by admin. This issue is now closed.

Files
File name             Uploaded
long_lshift.patch     vstinner, 2014-05-02 20:51
bench_long_rshift.py  vstinner, 2014-05-02 20:53
long_lshift2.patch    vstinner, 2014-05-02 21:10
Messages (12)
msg217787 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2014-05-02 20:51
Attached patch modifies long_lshift() to allocate the result using calloc() instead of malloc(). The goal is to avoid allocating physical memory for the least significant digits (zeros). According to my tests in issue #21233 (calloc), Linux and Windows have optimized calloc() functions, and at least Linux avoids allocating physical memory for zero-initialized pages until the pages are modified.
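A minimal standalone C sketch (not the attached patch; NDIGITS and the digit width below are made up for the illustration) of why calloc() wins here: the malloc()+memset() path touches every page of the result, while calloc() can hand back zero pages that the kernel only backs with physical memory once they are written.

    /* Illustrative only: compare committing memory for a large, mostly-zero
       result with malloc()+memset() versus calloc(). */
    #include <stdlib.h>
    #include <string.h>
    #include <stdint.h>

    #define NDIGITS (16 * 1024 * 1024)   /* ~64 MB of 32-bit "digits" */

    int main(void)
    {
        /* malloc + explicit zeroing writes every page immediately */
        uint32_t *a = malloc(NDIGITS * sizeof *a);
        if (a == NULL)
            return 1;
        memset(a, 0, NDIGITS * sizeof *a);
        a[NDIGITS - 1] = 1;              /* only the top digit is non-zero */
        free(a);

        /* calloc leaves untouched pages unbacked on Linux: only the page
           holding the top digit needs physical memory here */
        uint32_t *b = calloc(NDIGITS, sizeof *b);
        if (b == NULL)
            return 1;
        b[NDIGITS - 1] = 1;
        free(b);
        return 0;
    }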
msg217788 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2014-05-02 20:53
bench_long_rshift.py: Microbenchmark for int << int (lshift) operation. Results on Linux:

Common platform:
Bits: int=32, long=64, long long=64, size_t=64, void*=64
Timer info: namespace(adjustable=False, implementation='clock_gettime(CLOCK_MONOTONIC)', monotonic=True, resolution=1e-09)
Python unicode implementation: PEP 393
Timer: time.perf_counter
CPU model: Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz
Platform: Linux-3.13.9-200.fc20.x86_64-x86_64-with-fedora-20-Heisenbug
CFLAGS: -Wno-unused-result -Werror=declaration-after-statement -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes

Platform of campaign orig:
SCM: hg revision=5b0fda8f5718 tag=tip branch=default date="2014-05-02 22:31 +0200"
Python version: 3.5.0a0 (default:5b0fda8f5718, May 2 2014, 22:47:06) [GCC 4.8.2 20131212 (Red Hat 4.8.2-7)]
Date: 2014-05-02 22:47:12
Timer precision: 46 ns

Platform of campaign calloc:
Timer precision: 49 ns
Python version: 3.5.0a0 (default:5b0fda8f5718+, May 2 2014, 22:45:58) [GCC 4.8.2 20131212 (Red Hat 4.8.2-7)]
Date: 2014-05-02 22:46:42
SCM: hg revision=5b0fda8f5718+ tag=tip branch=default date="2014-05-02 22:31 +0200"

----------------+-------------+----------------
Tests           |        orig |          calloc
----------------+-------------+----------------
1 << (2 ** 0)   |   39 ns (*) |           38 ns
1 << (2 ** 1)   |   38 ns (*) |           37 ns
1 << (2 ** 3)   |   38 ns (*) |           37 ns
1 << (2 ** 5)   |   42 ns (*) |     39 ns (-8%)
1 << (2 ** 10)  |   43 ns (*) |    37 ns (-14%)
1 << (2 ** 15)  |  157 ns (*) |    90 ns (-43%)
1 << (2 ** 20)  | 3.91 us (*) |    74 ns (-98%)
1 << (2 ** 25)  |  206 us (*) |   86 ns (-100%)
1 << (2 ** 28)  | 4.06 ms (*) |  4.1 us (-100%)
123 << 1        |   11 ns (*) |           12 ns
12345678 << 1   |   11 ns (*) |           12 ns
12345678 << 100 |   11 ns (*) |           11 ns
----------------+-------------+----------------
Total           | 4.27 ms (*) | 4.57 us (-100%)
----------------+-------------+----------------
msg217789 - (view) Author: Josh Rosenberg (josh.r) * (Python triager) Date: 2014-05-02 20:59
Looks like you forgot to actually use the use_calloc parameter you put in the long_alloc prototype. It accepts it, but it's never used or passed to another function. So right now, the only difference in behavior is that there is an extra layer of function calls involved in _PyLong_New.

On that note, might it make sense to change all other calls to _PyLong_New within longobject.c to use long_alloc(size, 0), to at least make long objects themselves avoid the overhead of the extra function call? Maybe I'm overestimating how much of a difference an extra C-level function call makes, but it seems like PyLongs make new PyLongs an awful lot.
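For reference, a plain-C sketch of the shape long_alloc() could take once the flag is actually forwarded (illustrative only, not the attached patch; the digit typedef is a stand-in for CPython's digit type and the standard allocators stand in for the CPython allocator API):

    #include <stdlib.h>
    #include <stdint.h>

    typedef uint32_t digit;   /* stand-in for CPython's digit type */

    static digit *
    long_alloc(size_t ndigits, int use_calloc)
    {
        if (use_calloc)
            /* zeroed result; the OS can defer committing physical memory */
            return calloc(ndigits, sizeof(digit));
        /* caller is expected to initialize every digit itself */
        return malloc(ndigits * sizeof(digit));
    }

    int main(void)
    {
        digit *p = long_alloc(1000, 1);   /* calloc path */
        digit *q = long_alloc(1000, 0);   /* malloc path */
        free(p);
        free(q);
        return 0;
    }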
msg217790 - (view) Author: Josh Rosenberg (josh.r) * (Python triager) Date: 2014-05-02 21:00
Given your benchmarks show improvements (you posted while I was typing my last comment), I'm guessing it's just the posted patch that's wrong, and your local changes actually use use_calloc?
msg217791 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2014-05-02 21:02
Without the patch, 1 << (2**29) allocates 69.9 MB. With the patch, 1 << (2**29) allocates 0.1 MB (104 KB).

Without the patch:

$ ./python
Python 3.5.0a0 (default:5b0fda8f5718, May  2 2014, 22:47:06) 
[GCC 4.8.2 20131212 (Red Hat 4.8.2-7)] on linux
>>> import os
>>> os.system("grep RSS /proc/%s/status" % os.getpid())
VmRSS:	    6164 kB
>>> x=1 << (2**29)
>>> os.system("grep RSS /proc/%s/status" % os.getpid())
VmRSS:	   76064 kB

With the patch:

haypo@selma$ ./python
Python 3.5.0a0 (default:5b0fda8f5718+, May  2 2014, 22:55:47) 
[GCC 4.8.2 20131212 (Red Hat 4.8.2-7)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.system("grep RSS /proc/%s/status" % os.getpid())
VmRSS:	    5864 kB
>>> x=1 << (2**29)
>>> os.system("grep RSS /proc/%s/status" % os.getpid())
VmRSS:	    5968 kB
msg217792 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2014-05-02 21:10
> Looks like you forgot to actually use the use_calloc parameter you put in the long_alloc prototype. It accepts it, but it's never used or passed to another function.

Oh f###, you're right. See new patch.

I ran the new benchmark again: my (updated) patch makes most operations slower, except for "1 << (2 ** 28)". For 1 << (2 ** 28) it is much faster, but is it worth modifying long_lshift() for this rare use case?


Updated benchmark:

Common platform:
Timer: time.perf_counter
Timer info: namespace(adjustable=False, implementation='clock_gettime(CLOCK_MONOTONIC)', monotonic=True, resolution=1e-09)
Platform: Linux-3.13.9-200.fc20.x86_64-x86_64-with-fedora-20-Heisenbug
CPU model: Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz
Python unicode implementation: PEP 393
Bits: int=32, long=64, long long=64, size_t=64, void*=64
CFLAGS: -Wno-unused-result -Werror=declaration-after-statement -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes

Platform of campaign orig:
Python version: 3.5.0a0 (default:5b0fda8f5718, May 2 2014, 22:47:06) [GCC 4.8.2 20131212 (Red Hat 4.8.2-7)]
SCM: hg revision=5b0fda8f5718 tag=tip branch=default date="2014-05-02 22:31 +0200"
Timer precision: 46 ns
Date: 2014-05-02 22:47:12

Platform of campaign calloc2:
Python version: 3.5.0a0 (default:5b0fda8f5718+, May 2 2014, 23:04:59) [GCC 4.8.2 20131212 (Red Hat 4.8.2-7)]
Date: 2014-05-02 23:05:23
SCM: hg revision=5b0fda8f5718+ tag=tip branch=default date="2014-05-02 22:31 +0200"
Timer precision: 45 ns

----------------+-------------+----------------
Tests           |        orig |         calloc2
----------------+-------------+----------------
1 << (2 ** 0)   |   39 ns (*) |    50 ns (+30%)
1 << (2 ** 1)   |   38 ns (*) |    47 ns (+22%)
1 << (2 ** 3)   |   38 ns (*) |    48 ns (+26%)
1 << (2 ** 5)   |   42 ns (*) |    50 ns (+17%)
1 << (2 ** 10)  |   43 ns (*) |    53 ns (+23%)
1 << (2 ** 15)  |  157 ns (*) |   172 ns (+10%)
1 << (2 ** 20)  | 3.91 us (*) |         3.95 us
1 << (2 ** 25)  |  206 us (*) |    218 us (+5%)
1 << (2 ** 28)  | 4.06 ms (*) | 4.29 us (-100%)
123 << 1        |   11 ns (*) |           11 ns
12345678 << 1   |   11 ns (*) |           11 ns
12345678 << 100 |   11 ns (*) |           11 ns
----------------+-------------+----------------
Total           | 4.27 ms (*) |   226 us (-95%)
----------------+-------------+----------------
msg217793 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2014-05-02 21:12
> Without the patch, 1 << (2**29) allocates 69.9 MB. With the patch, 1 << (2**29) allocates 0.1 MB (104 KB).

This is still true with long_lshift2.patch (except that in my new test, it allocates 196 kB).
msg217795 - (view) Author: Josh Rosenberg (josh.r) * (Python triager) Date: 2014-05-02 21:16
While you're doing this, might it make sense to add a special case to long_pow so it identifies cases where a (digit-sized) value with an absolute value equal to a power of 2 is being raised to a positive integer exponent, and converts those cases to equivalent left shifts?

Identifying powers of 2 can be done in constant time with no memory overhead for register-sized values (see http://graphics.stanford.edu/~seander/bithacks.html#DetermineIfPowerOf2); if you've already handled the case for 0, then checking whether it's a power of 2 is just:

    (v & (v - 1)) == 0

It adds a little complexity, but even just special-casing 2 alone would at least make the semi-common case of building a large power of 2 perform about the same whether it is written as 2 ** (numbits - 1) or as 1 << (numbits - 1).
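A standalone sketch of the special case (illustrative only, not CPython code; the fixed-width types here would overflow where long_pow would produce an arbitrary-precision shift instead):

    #include <stdio.h>

    /* power-of-two test from the bithacks page; v == 0 must be handled first */
    static int
    is_power_of_two(unsigned long long v)
    {
        return v != 0 && (v & (v - 1)) == 0;
    }

    int main(void)
    {
        unsigned long long base = 8;
        unsigned exp = 5;
        if (is_power_of_two(base)) {
            unsigned shift = 0;
            while ((base >> shift) > 1)   /* shift = log2(base) */
                shift++;
            /* base ** exp == 1 << (shift * exp); here 8 ** 5 == 1 << 15 */
            printf("%llu\n", 1ULL << (shift * exp));
        }
        return 0;
    }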
msg217796 - (view) Author: Josh Rosenberg (josh.r) * (Python triager) Date: 2014-05-02 21:19
I swear, I need to refresh before I post a long comment.

If this is slowing everything down a little just to make 1 << (2 ** 29) faster (and did you really mean 1 << (1 << 29) ? :-) ), then I'd say drop it.
msg217798 - (view) Author: Josh Rosenberg (josh.r) * (Python triager) Date: 2014-05-02 21:27
One possible way to salvage it: have you considered declaring long_alloc with Py_LOCAL_INLINE, or (now that I've checked #5553) making long_alloc a macro, so it gets inlined and doesn't add the check overhead to everything? Properly inlining with a constant argument for use_calloc should let the optimizer trim the unused code path at compile time.

Probably not very PEP 7-friendly to macro-ize something that long, I'm guessing...
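A standalone illustration of the inlining argument (plain C stand-ins, not the attached patch): if long_alloc is defined static inline and use_calloc is a compile-time constant at the call site, the optimizer can drop the untaken branch, so the common malloc() path pays nothing for the flag.

    #include <stdlib.h>
    #include <stdint.h>

    typedef uint32_t digit;   /* stand-in for CPython's digit type */

    static inline digit *
    long_alloc(size_t ndigits, int use_calloc)
    {
        return use_calloc ? calloc(ndigits, sizeof(digit))
                          : malloc(ndigits * sizeof(digit));
    }

    int main(void)
    {
        /* constant 0 lets the compiler reduce this to a bare malloc() call */
        digit *p = long_alloc(16, 0);
        free(p);
        return 0;
    }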
msg217799 - (view) Author: Josh Rosenberg (josh.r) * (Python triager) Date: 2014-05-02 21:43
And now that I'm thinking about it, the probable cause of the slowdown is that, for any PyLongObject smaller than PAGESIZE (give or take), using Calloc means calloc just calls memset to zero the PyLongObject header and the "used" high-order digits along with everything else, whereas the original Malloc code made a single pass over the memory, initializing each byte in order and writing each one exactly once. The Calloc approach makes two passes, and the second one isn't wholly sequential, so it may be getting bitten by the CPU cache not keeping up.
msg218361 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2014-05-12 20:48
The optimization of 2**n looks to be useful only for very large values of n, where the result is larger than 1 MB. This use case is very rare, and you should probably use another library (GMP, numpy, or something else) for such large numbers. I am closing the issue.
History
Date                 User      Action  Args
2022-04-11 14:58:03  admin     set     github: 65618
2014-05-12 20:48:01  vstinner  set     status: open -> closed; resolution: rejected; messages: + msg218361
2014-05-02 21:43:47  josh.r    set     messages: + msg217799
2014-05-02 21:27:34  josh.r    set     messages: + msg217798
2014-05-02 21:19:54  josh.r    set     messages: + msg217796
2014-05-02 21:16:55  josh.r    set     messages: + msg217795
2014-05-02 21:12:06  vstinner  set     messages: + msg217793
2014-05-02 21:10:15  vstinner  set     files: + long_lshift2.patch; messages: + msg217792
2014-05-02 21:02:53  vstinner  set     messages: + msg217791
2014-05-02 21:00:55  josh.r    set     messages: + msg217790
2014-05-02 20:59:22  josh.r    set     nosy: + josh.r; messages: + msg217789
2014-05-02 20:53:53  vstinner  set     files: + bench_long_rshift.py; messages: + msg217788
2014-05-02 20:51:47  vstinner  create