Use calloc() instead of malloc() for int << int (lshift) #65618
Comments
Attached patch modifies long_lshift() to allocate the result using calloc() instead of malloc(). The goal is to avoid allocating physical memory for the least significant digits (zeros). According to my tests in issue bpo-21233 (calloc), Linux and Windows have optimized calloc() functions, and at least Linux avoids allocating physical memory for pages initialized to zero until the pages are modified.
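For context, a minimal sketch of the idea behind the patch, assuming a helper with the long_alloc(size, use_calloc) prototype discussed below and the PyObject_Calloc allocator proposed in bpo-21233 (error handling and overflow checks are simplified; this is not the actual patch):

#include <stddef.h>            /* offsetof */
#include "Python.h"
#include "longintrepr.h"       /* PyLongObject, digit */

/* Allocate a PyLongObject either with malloc (caller fills every digit)
   or with calloc (digits start out zero, and pages that stay zero are
   never backed by physical memory on Linux). */
static PyLongObject *
long_alloc(Py_ssize_t size, int use_calloc)
{
    size_t nbytes = offsetof(PyLongObject, ob_digit) +
                    (size_t)size * sizeof(digit);
    PyLongObject *result;

    if (use_calloc)
        result = (PyLongObject *)PyObject_Calloc(1, nbytes);
    else
        result = (PyLongObject *)PyObject_Malloc(nbytes);
    if (result == NULL)
        return (PyLongObject *)PyErr_NoMemory();
    return (PyLongObject *)PyObject_INIT_VAR(result, &PyLong_Type, size);
}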
bench_long_rshift.py: Microbenchmark for the int << int (lshift) operation. Results on Linux, comparing the original build ("orig") against the calloc build ("calloc") (benchmark table omitted).
Looks like you forgot to actually use the use_calloc parameter you put in the long_alloc prototype. It accepts it, but it's never used or passed to another function. So right now, the only difference in behavior is that there is an extra layer of function calls involved in _PyLong_New. On that note, might it make sense to change all other calls to _PyLong_New within longobject.c to use long_alloc(size, 0), so that long objects themselves at least avoid the overhead of the extra function call? Maybe I'm overestimating how much of a difference an extra C-level function call will make, but it seems like PyLongs make new PyLongs an awful lot.
Given your benchmarks show improvements (you posted while I was typing my last comment), I'm guessing it's just the posted patch that's wrong, and your local changes actually use use_calloc?
Without the patch, 1 << (2**29) allocates 69.9 MB. With the patch, 1 << (2**29) allocates 0.1 MB (104 KB).

Without the patch:

$ ./python
Python 3.5.0a0 (default:5b0fda8f5718, May 2 2014, 22:47:06)
[GCC 4.8.2 20131212 (Red Hat 4.8.2-7)] on linux
>>> import os
>>> os.system("grep RSS /proc/%s/status" % os.getpid())
VmRSS: 6164 kB
>>> x=1 << (2**29)
>>> os.system("grep RSS /proc/%s/status" % os.getpid())
VmRSS: 76064 kB

With the patch:

haypo@selma$ ./python
Python 3.5.0a0 (default:5b0fda8f5718+, May 2 2014, 22:55:47)
[GCC 4.8.2 20131212 (Red Hat 4.8.2-7)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.system("grep RSS /proc/%s/status" % os.getpid())
VmRSS: 5864 kB
>>> x=1 << (2**29)
>>> os.system("grep RSS /proc/%s/status" % os.getpid())
VmRSS: 5968 kB
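The mechanism these numbers rely on can be shown outside CPython; here is a small standalone Linux-only illustration (not part of the patch, shown only to demonstrate why RSS barely grows until the memory is actually written):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t n = (size_t)64 * 1024 * 1024;         /* 64 MB */
    unsigned char *p = calloc(n, 1);             /* pages are demand-zero */
    if (p == NULL)
        return 1;
    system("grep VmRSS /proc/self/status");      /* RSS still small: nothing touched yet */
    memset(p, 0xff, n);                          /* writing forces the kernel to back the pages */
    system("grep VmRSS /proc/self/status");      /* RSS now grows by roughly 64 MB */
    free(p);
    return 0;
}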
Oh f###, you're right. See new patch. I ran the new benchmark again: my (updated) patch makes most operations slower, except for "1 << (2 ** 28)". For 1 << (2 ** 28) it's much faster, but is it worth modifying long_lshift() for this rare use case? Updated benchmark on Linux, comparing the original build ("orig") against the updated calloc build ("calloc2") (benchmark table omitted).
This is still true with long_lshift2.patch (except that in my new test, it allocates 196 kB).
While you're doing this, might it make sense to add a special case to long_pow so it identifies cases where a (digit-sized) value with an absolute value equal to a power of 2 is being raised to a positive integer exponent, and convert said cases to equivalent left shifts? Identifying powers of 2 can be done in constant time with no memory overhead for register sized values (see http://graphics.stanford.edu/~seander/bithacks.html#DetermineIfPowerOf2 ); if you've already handled the case for 0, then checking if it's a power of 2 is just:
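A sketch of that check, following the linked bithacks page (the function name and integer type here are illustrative; CPython's digit type would be used in practice):

#include <stdint.h>

/* A non-zero value is a power of two exactly when it has a single bit set,
   i.e. clearing the lowest set bit leaves zero. The caller is assumed to
   have already handled v == 0, which would otherwise test as true. */
static inline int
is_power_of_two(uint64_t v)
{
    return (v & (v - 1)) == 0;
}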
It adds a little complexity, but even just handling 2 alone specially would at least make it so the semi-common case of making a large power of 2 doesn't have significantly different performance using 2 ** (numbits - 1) instead of 1 << (numbits - 1). |
I swear, I need to refresh before I post a long comment. If this is slowing everything down a little just to make 1 << (2 ** 29) faster (and did you really mean 1 << (1 << 29) ? :-) ), then I'd say drop it. |
One possible way to salvage it: Have you considered declaring long_alloc as Py_LOCAL_INLINE, or, now that I've checked bpo-5553, a macro for long_alloc, so it gets inlined and doesn't add the check overhead to everything (since properly inlining with a constant argument to use_calloc should make the optimizer trim the unused code path at compile time)? Probably not very PEP-7 friendly to macro-ize something that long, I'm guessing...
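For illustration, a hedged sketch of the Py_LOCAL_INLINE form (long_alloc_with_calloc and long_alloc_with_malloc are hypothetical names standing in for the two allocation paths, not functions from the patch):

/* Py_LOCAL_INLINE expands to "static inline" (or the platform equivalent),
   so calling long_alloc(size, 0) with a constant argument lets the
   optimizer drop the unused branch rather than test it on every allocation. */
Py_LOCAL_INLINE(PyLongObject *)
long_alloc(Py_ssize_t size, int use_calloc)
{
    return use_calloc ? long_alloc_with_calloc(size)    /* hypothetical helper */
                      : long_alloc_with_malloc(size);   /* hypothetical helper */
}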
And now that I'm thinking about it, the probable cause of the slowdown is that, for any PyLongObject smaller than PAGESIZE (give or take), using Calloc means calloc effectively memsets the whole allocation to zero, the PyLongObject header and the "used" high-order digits along with everything else, whereas the original Malloc code made a single pass over the memory, initializing each byte in order and writing each one exactly once. The Calloc approach uses two passes, and the second one isn't wholly sequential, so it may be getting bit by the CPU cache not keeping up.
The optimization of 2**n looks to be useful only for very large values of n, where the result is larger than 1 MB. This use case is very rare, and you should probably use another library (GMP, numpy, or something else) for such large numbers. I am closing the issue.