msg336082 - Bruce Merry (bmerry), 2019-02-20 13:00
A common pattern in libraries doing I/O is to receive data in chunks, put them in a list, then join them all together with b"".join(chunks). For example, see http.client.HTTPResponse._safe_read. When the output is large, the memory copies can block the interpreter for a non-trivial amount of time and prevent multi-threaded scaling. If the GIL could be dropped during the memcpys, it could improve parallel I/O performance in some high-bandwidth scenarios (issue 36050 mentions a case where I've run into this serialisation bottleneck in practice).
Obviously it could hurt performance to drop the GIL for small cases. As far as I know numpy uses thresholds to decide when it's worth dropping the GIL and it seems to work fairly well.
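For concreteness, the pattern in question looks roughly like this (a minimal sketch; the function and stream names here are illustrative, not taken from http.client):

```python
import io

def read_all(stream, chunk_size=64 * 1024):
    """Read a stream to EOF by accumulating chunks and joining them once.

    The final b"".join() performs one memcpy per chunk; those copies
    are where dropping the GIL could help for large outputs.
    """
    chunks = []
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        chunks.append(chunk)
    return b"".join(chunks)

# An in-memory stream stands in for a socket or file here:
data = read_all(io.BytesIO(b"x" * 200_000))
```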

msg358783 - Serhiy Storchaka (serhiy.storchaka), 2019-12-21 20:21
Could you show evidence that dropping the GIL would help? bytes.join() needs to perform operations which require holding the GIL (allocating the memory, iterating the list, getting the data of the bytes-like objects). I'm afraid that the cost of memcpy() is only a minor part of this.

msg358784 - Antoine Pitrou (pitrou), 2019-12-21 20:31
If there is a large chunk (e.g. several MBs), dropping the GIL during the memcpy of that chunk may be beneficial. This kind of optimization may be applicable in other similar cases (such as extending a bytearray or a BytesIO object).

msg358800 - Inada Naoki (methane), 2019-12-23 00:30
https://github.com/python/cpython/blob/068768faf6b82478de239d7ab903dfb249ad96a4/Objects/stringlib/join.h#L105-L126
It seems we can release the GIL while iterating over the buffer array.
Even if there is no single big chunk, it would be beneficial when the output size is large enough (8MB+?).

msg358803 - Bruce Merry (bmerry), 2019-12-23 05:12
> It seems we can release the GIL while iterating over the buffer array.
That's what I had in mind. Naturally it would require a bit of benchmarking to pick a threshold such that the small case doesn't lose performance due to locking overheads. If no one else is working on it, I can give that a try early next year.

msg359084 - Josh Rosenberg (josh.r), 2019-12-31 01:45
This will introduce a risk of data races that didn't previously exist. If you do:
ba1 = bytearray(b'\x00') * 50000
ba2 = bytearray(b'\x00') * 50000
... pass references to thread that mutates them ...
ba3 = b''.join((ba1, ba2))
then two things will change from the existing behavior:
1. If the thread in question attempts to write to the bytearrays in place, then it could conceivably write data that is only partially picked up (ba1[0], ba1[40000] = 2, 3 could end up copying the results of the second write without the first; at present, it could only copy the first without the second)
2. If the thread tries to change the size of the bytearrays during the join (ba1 += b'123'), it'll die with a BufferError that wasn't previously possible
#1 isn't terrible (as noted, data races in that case already existed, this just lets them happen in more ways), but #2 is a little unpleasant; code that previously had simple data races (the data might be inconsistent, but the code ran and produced some valid output) can now fail hard, nowhere near the actual call to join that introduced the behavioral change.
I don't think this sinks the patch (loudly breaking code that was silently broken before isn't awful), but I feel like a warning of some kind in the documentation (if only a simple compatibility note in What's New) might be appropriate.
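The second failure mode can already be reproduced today with any object that exports a buffer: resizing a bytearray while a view of it is held raises BufferError. A minimal illustration, independent of the join() change:

```python
ba = bytearray(b'\x00' * 16)

# Exporting a buffer (as join() would do while the GIL is dropped)
# pins the bytearray's size.
view = memoryview(ba)
try:
    ba += b'123'  # attempt to resize while the buffer is exported
    resized = True
except BufferError:
    resized = False
finally:
    view.release()

# Once the view is released, resizing works again.
ba += b'123'
```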

msg359097 - Inada Naoki (methane), 2019-12-31 04:42
> 2. If the thread tries to change the size of the bytearrays during the join (ba1 += b'123'), it'll die with a BufferError that wasn't previously possible
Makes sense. We shouldn't drop the GIL while we are holding buffers of arbitrary objects.

msg359102 - Bruce Merry (bmerry), 2019-12-31 06:02
If we want to be conservative, we could only drop the GIL if all the buffers pass the PyBytes_CheckExact test. Presumably that won't encounter any of these problems because bytes objects are immutable?
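In Python terms, the conservative condition amounts to checking the exact type of every chunk (a rough analogue of the C-level PyBytes_CheckExact test; the helper name here is illustrative):

```python
def all_exact_bytes(chunks):
    # type(c) is bytes mirrors PyBytes_CheckExact: it rejects
    # bytes subclasses as well as mutable types like bytearray.
    return all(type(c) is bytes for c in chunks)

# Immutable bytes chunks could safely be copied with the GIL released:
safe = all_exact_bytes([b'abc', b'def'])               # True
# A mutable bytearray in the list means the GIL should be held:
unsafe = all_exact_bytes([b'abc', bytearray(b'def')])  # False
```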

msg359105 - Serhiy Storchaka (serhiy.storchaka), 2019-12-31 07:46
Please provide benchmarks that demonstrate the benefit of this change.
We also need to add a test for join() which covers the new code.

msg359108 - Bruce Merry (bmerry), 2019-12-31 09:24
I've attached a benchmark script and CSV results for master (whichever version that was at the point I forked) and with unconditional dropping of the GIL. It shows up to 3x performance improvement when using 4 threads. That's on my home desktop, which is quite old (Sandy Bridge). I'm expecting more significant gains on server CPUs, whose memory systems are optimised for multi-threaded workloads. The columns are chunk size, number of chunks, number of threads, and per-thread throughput.
There are also cases where using multiple threads is a slowdown, but I think that's an artifact of the benchmark. It repeatedly joins the same strings, so performance is higher when they all fit in the cache; when using 4 threads that execute in parallel, the working set is 4x larger and may cease to fit in cache. In real-world usage one is unlikely to be joining the same strings again and again.
In the single-threaded case, the benchmark seems to show that for 64K+, performance is improved by dropping the GIL (which I'm guessing must be statistical noise, since there shouldn't be anything contending for it), which is my reasoning behind the 65536 threshold.
I'll take a look at extra unit tests soon. Do you know off the top of your head where to look for existing `join` tests to add to?
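The benchmark's shape is roughly the following (a simplified sketch, not the attached script; the sizes and counts are illustrative):

```python
import threading
import time

def bench_join(chunk_size, n_chunks, n_threads, repeats=10):
    """Return per-thread b''.join() throughput in bytes per second."""
    chunks = [b'\x00' * chunk_size] * n_chunks
    results = [0.0] * n_threads

    def worker(idx):
        start = time.perf_counter()
        for _ in range(repeats):
            b''.join(chunks)
        elapsed = time.perf_counter() - start
        results[idx] = chunk_size * n_chunks * repeats / elapsed

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

throughputs = bench_join(chunk_size=65536, n_chunks=16, n_threads=2)
```

Without the GIL being dropped inside join(), the per-thread throughputs collapse as thread count rises; with it, they should hold roughly steady for large outputs.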

msg359110 - Bruce Merry (bmerry), 2019-12-31 09:53
> I'll take a look at extra unit tests soon. Do you know off the top of your head where to look for existing `join` tests to add to?
Never mind, I found it: https://github.com/python/cpython/blob/92709a263e9cec0bc646ccc1ea051fc528800d8d/Lib/test/test_bytes.py#L535-L559
Do you think it would be sufficient to change the stress test from joining 1000 items to joining 100000 items?

msg359111 - Bruce Merry (bmerry), 2019-12-31 11:07
> Do you think it would be sufficient to change the stress test from joining 1000 items to joining 100000 items?
Actually that won't work, because the existing stress test is using a non-empty separator. I'll add another version of that stress test that uses an empty separator.
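The idea is a stress test along these lines (a sketch only; the real test belongs in Lib/test/test_bytes.py, with sizes chosen to cross the GIL-dropping threshold):

```python
# Join many chunks with an empty separator so that the total output
# is large enough (2 MB here) to exercise the new code path.
chunk = b'\x01' * 1024
n = 2048
joined = b''.join([chunk] * n)

assert len(joined) == len(chunk) * n
assert joined == chunk * n
```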

msg359182 - Inada Naoki (methane), 2020-01-02 05:35
> In the single-threaded case, the benchmark seems to show that for 64K+, performance is improved by dropping the GIL (which I'm guessing must be statistical noise, since there shouldn't be anything contending for it), which is my reasoning behind the 65536 threshold.
Yes, the single-threaded case shouldn't be affected by this change; dropping the GIL adds only a very small overhead.
So you need to focus on the multi-threaded case when choosing the threshold.
Looking at your results, the 65536 threshold seems too small.
See the 512*8192 case: it shows a 1479 -> 980 regression.
But I don't think your script reflects real performance.
I updated your benchmark, and I saw a small improvement at 512K and a significant improvement at 1M.

msg359193 - Bruce Merry (bmerry), 2020-01-02 12:17
I'm realising that the benchmark makes it difficult to see what's going on because it doesn't separate overhead costs (slowdowns because releasing/acquiring the GIL is not free, particularly when contended) from cache effects (slowdowns due to parallel threads creating more cache pressure than threads that take turns). inada.naoki's version of the benchmark is better here because it uses the same input data for all the threads, but the output data will still be different in each thread.
For example, on my system I see a big drop in speedup (although I still get speedup) with the new benchmark once the buffer size gets to 2MB per thread, which is not surprising with an 8MB L3 cache.
My feeling is that we should try to ignore cache effects when picking a threshold, because we can't predict them generically (they'll vary by workload, thread count, CPU etc) whereas users can benchmark specific use cases to decide whether multithreading gives them a benefit. If the threshold is too low then users can always choose not to use multi-threading (and in general one doesn't expect much from it in Python) but if the threshold is too high then users have no recourse. That being said, 65536 does still seem a bit low based on the results available.
I'll try to write a variant of the benchmark in which other threads just spin in Python without creating memory pressure to see if that gives a different picture. I'll also run the benchmark on a server CPU when I'm back at work.

msg359225 - Inada Naoki (methane), 2020-01-03 02:42
> (slowdowns because releasing/acquiring the GIL is not free, particularly when contended)
Yes, it's relatively high. We shouldn't release the GIL for only ~0.5ms of work. That's why ~1MB seems like a nice threshold.
> If the threshold is too low then users can always choose not to use multi-threading (and in general one doesn't expect much from it in Python)
I don't like this idea. We shouldn't force users to change their programs from multi-threaded to single-threaded, so we need to avoid large performance regressions.
> but if the threshold is too high then users have no recourse. That being said, 65536 does still seem a bit low based on the results available.
But they can still get the benefit when the input is large enough. 65536 seems "too low" to me.

msg359328 - Bruce Merry (bmerry), 2020-01-05 10:33
I've written a variant of the benchmark in which one thread does joins and the other does unrelated CPU-bound work that doesn't touch memory much. It also didn't show much benefit to thresholds below 512KB. I still want to test things on a server-class CPU, but based on the evidence so far I'm okay with a 1MB threshold.

msg359347 - Bruce Merry (bmerry), 2020-01-05 14:49
I ran the test on a Xeon machine (Skylake-SP) and it also looks like performance is only improved from 1MB up (somewhat to my surprise, given how poor single-threaded memcpy performance is on that machine). So I've updated the pull request with that threshold.

msg360108 - Bruce Merry (bmerry), 2020-01-16 09:38
I think I've addressed the concerns that were raised in this bug, but let me know if I've missed any.

msg360938 - Inada Naoki (methane), 2020-01-29 07:09
New changeset d07d9f4c43bc85a77021bcc7d77643f8ebb605cf by Bruce Merry in branch 'master':
bpo-36051: Drop GIL during large bytes.join() (GH-17757)
https://github.com/python/cpython/commit/d07d9f4c43bc85a77021bcc7d77643f8ebb605cf

msg361075 - Skip Montanaro (skip.montanaro), 2020-01-30 17:04
To avoid compiler warnings about 'save' perhaps being used uninitialized, I think it should be initialized to NULL when it is declared, on line 21 of Objects/stringlib/join.h.

msg361276 - Inada Naoki (methane), 2020-02-03 10:03
New changeset 869c0c99b94ff9527acc1ca060164ab3d1bdcc53 by Inada Naoki in branch 'master':
bpo-36051: Fix compiler warning. (GH-18325)
https://github.com/python/cpython/commit/869c0c99b94ff9527acc1ca060164ab3d1bdcc53

History:
Date | User | Action | Args
2022-04-11 14:59:11 | admin | set | github: 80232
2020-02-03 10:04:15 | methane | set | status: open -> closed; stage: patch review -> resolved
2020-02-03 10:03:42 | methane | set | messages: + msg361276
2020-02-03 09:31:44 | methane | set | stage: resolved -> patch review; pull_requests: + pull_request17698
2020-01-30 17:04:16 | skip.montanaro | set | status: closed -> open; files: + join.diff; nosy: + skip.montanaro; messages: + msg361075
2020-01-29 07:10:10 | methane | set | status: open -> closed; resolution: fixed; stage: patch review -> resolved
2020-01-29 07:09:33 | methane | set | messages: + msg360938
2020-01-16 09:52:47 | vstinner | set | nosy: - vstinner
2020-01-16 09:38:23 | bmerry | set | messages: + msg360108
2020-01-05 14:49:42 | bmerry | set | messages: + msg359347
2020-01-05 10:33:38 | bmerry | set | messages: + msg359328
2020-01-03 02:42:34 | methane | set | messages: + msg359225
2020-01-02 12:17:19 | bmerry | set | messages: + msg359193
2020-01-02 05:35:13 | methane | set | files: + benchjoin2.py; messages: + msg359182
2019-12-31 11:07:34 | bmerry | set | messages: + msg359111
2019-12-31 09:53:48 | bmerry | set | messages: + msg359110
2019-12-31 09:24:43 | bmerry | set | messages: + msg359108
2019-12-31 09:14:38 | bmerry | set | files: + benchjoin.py
2019-12-31 09:13:54 | bmerry | set | files: + new.csv
2019-12-31 09:13:37 | bmerry | set | files: + old.csv
2019-12-31 07:46:15 | serhiy.storchaka | set | messages: + msg359105
2019-12-31 06:02:53 | bmerry | set | messages: + msg359102
2019-12-31 04:42:58 | methane | set | messages: + msg359097
2019-12-31 01:45:05 | josh.r | set | nosy: + josh.r; messages: + msg359084
2019-12-30 15:46:42 | bmerry | set | keywords: + patch; stage: patch review; pull_requests: + pull_request17193
2019-12-23 05:12:29 | bmerry | set | messages: + msg358803
2019-12-23 00:30:21 | methane | set | messages: + msg358800
2019-12-21 20:33:51 | pitrou | set | nosy: + methane
2019-12-21 20:31:43 | pitrou | set | nosy: + pitrou; messages: + msg358784
2019-12-21 20:21:54 | serhiy.storchaka | set | nosy: + serhiy.storchaka; messages: + msg358783
2019-12-21 06:37:46 | BTaskaya | set | versions: + Python 3.9, - Python 3.8
2019-12-21 06:37:22 | BTaskaya | set | nosy: + vstinner, BTaskaya
2019-02-20 14:20:00 | SilentGhost | set | title: (Performance) Drop the GIL during large bytes.join operations? -> Drop the GIL during large bytes.join operations?; type: performance; versions: + Python 3.8, - Python 3.7
2019-02-20 13:18:24 | remi.lapeyre | set | nosy: + remi.lapeyre
2019-02-20 13:00:54 | bmerry | create |