bytearray front-slicing not optimized #63287
Comments
If you delete a slice at the end of a bytearray, it is naturally optimized (thanks to the resizing strategy). However, if you delete a slice at the front of a bytearray, it is not: a memmove() gets done every time.
$ ./python -m timeit "b=bytearray(10000)" "while b: b[-1:] = b''"
100 loops, best of 3: 5.67 msec per loop
$ ./python -m timeit "b=bytearray(10000)" "while b: b[:1] = b''"
100 loops, best of 3: 6.67 msec per loop
$ ./python -m timeit "b=bytearray(50000)" "while b: b[-1:] = b''"
10 loops, best of 3: 28.3 msec per loop
$ ./python -m timeit "b=bytearray(50000)" "while b: b[:1] = b''"
10 loops, best of 3: 61.1 msec per loop
$ ./python -m timeit "b=bytearray(100000)" "while b: b[-1:] = b''"
10 loops, best of 3: 59.4 msec per loop
$ ./python -m timeit "b=bytearray(100000)" "while b: b[:1] = b''"
10 loops, best of 3: 198 msec per loop

This makes implementing a FIFO using bytearray a bit suboptimal. It shouldn't be very hard to improve.
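For reference, a minimal sketch (not from the issue) of the access pattern under discussion: a bytearray used as a byte FIFO, where each front deletion currently triggers a memmove() of the whole remaining buffer.

```python
buf = bytearray()
buf += b"hello world"        # producer appends at the tail

chunk = bytes(buf[:5])       # consumer reads from the front...
del buf[:5]                  # ...and deletes the consumed prefix (a memmove today)

print(chunk)                 # b'hello'
print(buf)                   # bytearray(b' world')
```

Draining an N-byte buffer one small chunk at a time this way costs O(N**2) in total, which is what the timings above show.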
The same is true for a list. List and bytearray are the wrong types for front deletion. I don't think we should increase the size of bytearray, and complicate and slow it down, for such a special purpose. If you want to implement a FIFO using bytearray more optimally, defer the deletion until the used size is less than half of the allocated size. See for example XMLPullParser.read_events() in Lib/xml/etree/ElementTree.py.
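A hedged sketch of the deferred-deletion workaround described above (the class name and the half-buffer threshold are illustrative, not taken from ElementTree):

```python
class ByteFIFO:
    """Byte FIFO that defers front deletion, keeping pops amortized O(1)."""

    def __init__(self):
        self._buf = bytearray()
        self._start = 0                    # index of the first unread byte

    def push(self, data):
        self._buf += data

    def pop(self, n):
        data = bytes(self._buf[self._start:self._start + n])
        self._start += len(data)
        # Compact only once the consumed prefix exceeds half the buffer,
        # so each byte is memmove()d at most a constant number of times.
        if self._start > len(self._buf) // 2:
            del self._buf[:self._start]
            self._start = 0
        return data

f = ByteFIFO()
f.push(b"abcdef")
print(f.pop(2), f.pop(2))    # b'ab' b'cd'
```

The cost of each compaction is spread over the pops that preceded it, which is the amortization Serhiy is referring to.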
There is no bytedeque().
I don't think it would really slow it down. It should be a simple change.
Of course, I wrote that code. Still, doing it manually is suboptimal.
Here is a patch. Benchmarks (under Linux, where realloc is fast; the gap may be wider under Windows):
$ ./python -m timeit "b=bytearray(100000)" "while b: b[:1] = b''"
-> before: 225 msec per loop
-> after: 60.4 msec per loop
$ ./python -m timeit "b=bytearray(100000)" "while b: b[:200] = b''"
-> before: 1.17 msec per loop
-> after: 350 usec per loop
Could you please provide an example which uses this feature?
A generic example is parsing messages out of a TCP stream. Basically, Mercurial has another implementation strategy for a similar thing:
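For the TCP-stream case, a hypothetical framing loop (illustrative only, not Mercurial's code) that depends on cheap front deletion might look like:

```python
def pop_messages(buf):
    """Split complete newline-terminated messages off the front of *buf*."""
    messages = []
    while True:
        idx = buf.find(b"\n")
        if idx < 0:                  # no complete message buffered yet
            break
        messages.append(bytes(buf[:idx]))
        del buf[:idx + 1]            # front slice: the operation at issue
    return messages

recv_buf = bytearray(b"PING\nPONG\npartial")
print(pop_messages(recv_buf))        # [b'PING', b'PONG']
print(recv_buf)                      # bytearray(b'partial')
```

Each complete message is sliced off the front while incomplete trailing data stays buffered for the next recv().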
I found an interesting comment in the following issue: "I think the trouble we get into is chunkbuffer() creates new large strings by concatenation and causes memory fragmentation. Keeping a list of chunks might be more efficient." http://bz.selenic.com/show_bug.cgi?id=1842#c17
@antoine: Do you know if your patch may reduce the memory fragmentation caused by "bytearray front-slicing"?
It reduces the number of allocations so, yes, it can reduce memory fragmentation.
Could you please add unit tests to check that ob_start is used instead of memmove()? I didn't find a function for that in _testcapi. I tried to test it using sys.getsizeof(), but the size is not reliable (the bytearray buffer is not always shrunk; it depends on the new size). The best is probably to add a new function in _testcapi to get the private attributes: ob_exports, ob_alloc, ob_start, ob_bytes. Using these attributes, it becomes easy to check that the fast paths are correctly taken (e.g. ob_start increases instead of a new ob_bytes buffer being allocated).
How would I do that? Most of the time we don't unit-test performance improvements.
Results under Windows:
PCbuild\amd64\python.exe -m timeit "b=bytearray(100000)" "while b: b[-1:] = b''"
PCbuild\amd64\python.exe -m timeit "b=bytearray(100000)" "while b: b[:1] = b''"
Could you please show concrete code in which you are going to use this optimization? |
There's no need to "use" this optimization. Any networking code that has to find message boundaries and split on them will benefit. If you don't understand that, I'm not willing to explain it to you.
Deleting a slice at the front of a bytearray has linear complexity in the size of the bytearray (in any case, del b[:1] is a little faster than b[:1] = b''). I doubt that any performance-critical code does it instead of increasing an index in constant time.
Increasing an index requires that you compact the bytearray from time to time.
The same is true with your patch. |
I don't understand. What is "true with my patch"? |
You increase internal index in a bytearray. A bytearray with small visible length can consume much hidden memory. |
No, because PyByteArray_Resize() is always called afterwards to ensure the allocated size stays proportional to the logical size.
I.e. the bytearray is compacted from time to time. |
@serhiy: "I doubt than any performance critical code do it instead of increasing an index in constant time." Sorry, I don't get your point. It's not because Python is inefficient that developers must develop workarounds. Antoine's patch is simple, elegant, and offers better performance for "free".
Is there a problem with that? |
No more than with msg198657.
I'm not sure that "workarounds" are much worse than using this optimization. At least, we still haven't seen real code which would benefit from this optimization.
It offers better performance for "free" only for suboptimal code which currently has O(N) behaviour instead of O(1). One of the most common uses of bytearray is accumulating, and the patch slows down this case.
$ ./python -m timeit "b = bytearray(); a = b'x'" "for i in range(10000): b += a" "bytes(b)"
Without patch: 4.3 msec per loop
I see no difference here. You are seeing a 10% slowdown, which is possibly a measurement glitch. The bottom line is that the performance remains approximately the same.
The problem is the "suboptimal code" is also the natural way to write such code. If you know a simple and idiomatic way to write an optimal bytes FIFO, then please share it with us. Otherwise, I will happily ignore your line of argument here.
Please don't use the raw timeit command for micro-benchmarks; it is not reliable. For example, I'm unable to reproduce your "slow down" (7% on a micro-benchmark is not that large). My micro-benchmark using my benchmark.py script shows that performances are the same with the patch.
Oh, by the way:
I'm not sure that it is what you expected: bytearray() is only initialized once ("setup" of timeit). You probably want to reinitialize at each loop. |
There is no "setup" of timeit here. And you forgot bytes(b) after the accumulating loop. bench_bytearray.py shows me a 10% slowdown for the 10**3 and 10**5 bytes tests. Of course it can be a measurement glitch. On the other hand, there are no measurements which show a positive effect of the patch on real code. Currently we consider only hypothetical code and can't compare it with alternatives.
Please share this real code written in the "natural way" with us. I can't compare alternatives against code which I don't see.
I'm sorry, I don't want to spend more time on such a minor issue. I would have liked more constructive criticism (perhaps the patch is flawed somewhere).
I don't understand why you avoid showing any examples which would benefit. Shouldn't optimization patches prove their effectiveness?
Many micro-optimizations get committed without proving themselves on a real-world workload. A 4x improvement on a micro-benchmark is very likely to make a difference in practice.
However, the patch had a bug in the resizing logic. Here is a new patch fixing that (+ an additional test).
Other benchmarks for the new patch (exercising FIFO-like behaviour: some data is appended at one end, and popped at the other):
timeit -s "b=bytearray(100000);s=b'x'*100" "b[:100] = b''; b.extend(s)"
For comparison, popping from the end (LIFO-like):
timeit -s "b=bytearray(100000);s=b'x'*100" "b[-100:] = b''; b.extend(s)"
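The same comparison can be reproduced from Python with the timeit module (a sketch; absolute numbers depend on the interpreter build and whether the patch is applied):

```python
import timeit

setup = "b = bytearray(100000); s = b'x' * 100"

# FIFO-like: pop 100 bytes from the front, append 100 at the tail.
fifo = timeit.timeit("b[:100] = b''; b.extend(s)", setup=setup, number=10000)

# LIFO-like: pop 100 bytes from the end instead.
lifo = timeit.timeit("b[-100:] = b''; b.extend(s)", setup=setup, number=10000)

print(f"FIFO: {fifo:.4f}s  LIFO: {lifo:.4f}s")
```

Because each iteration removes and appends the same number of bytes, the buffer length stays constant and only the cost of the front vs. rear deletion is being compared.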
If there is code that deletes from the beginning of a bytearray, I just ask you to demonstrate that code. Perhaps you only intend to write code that will use it. Wonderful. I want to look at it and make sure that the same problem cannot be solved just as effectively in another way.
It took me some time, but Antoine explained the use case to me on IRC :-) The patch is useful if the bytearray is used as a FIFO: remove at the front, append at the tail. It can be seen as an implementation strategy for BufferedReader. Consumer/producer is a common pattern, especially consuming at one end (the front) and producing at the other end (the tail).
I adapted my micro-benchmark to measure the speedup: bench_bytearray2.py. Result on bytea_slice2.patch:
@serhiy: I see "zero" difference in the append loop micro-benchmark. I added the final cast to bytes().
@antoine: Your patch rocks, 30x faster! (I don't care about the 8% slowdown in the nanosecond timing.)
Here is a slightly modified patch implementing Serhiy's suggestion.
bytea_slice3.patch looks simpler than bytea_slice2.patch; I prefer it.
I don't see much sense in the differences between bytea_slice2.patch and bytea_slice3.patch, because bytea_slice3.patch is not smaller or simpler than bytea_slice2.patch. I meant that you can continue to use self->ob_bytes instead of PyByteArray_AS_STRING(self) if self->ob_bytes points not to the start of the physical buffer, but to the start of the logical byte array. *This* will simplify the patch a lot.
It will make the diff smaller but it will not "simplify" the patch.
New changeset 499a96611baa by Antoine Pitrou in branch 'default':
The commit produced compile errors on Windows, but I've since fixed them.
A side effect of this change is that bytearray's data can now be unaligned. We should examine all places which rely on alignment.
The C API makes no guarantees as to the alignment of private data areas, so that shouldn't be a problem by itself. The remaining question is whether the bytearray implementation itself relies on alignment anywhere.
I think the changes for this issue are causing the crash and unexpected buffer expansion described in bpo-23985. Appending to a bytearray() can overstep the memory buffer because it doesn't account for ob_start when checking whether to resize. And "del" can expand the allocated memory due to an off-by-one error. Please have a look at my patches. Perhaps there are other operations that also need patching to account for ob_start.