This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in the Python Developer's Guide.

Classification
Title: asyncore.dispatcher_with_send - increase the send buffer size
Type: performance
Stage: resolved
Components: Library (Lib)
Versions: Python 3.4
Process
Status: closed
Resolution: fixed
Dependencies:
Superseder:
Assigned To:
Nosy List: giampaolo.rodola, josiah.carlson, josiahcarlson, neologix, pitrou, python-dev, xdegaye
Priority: normal
Keywords: easy, needs review, patch

Created on 2012-12-26 13:51 by neologix, last changed 2022-04-11 14:57 by admin. This issue is now closed.

Files
File name               Uploaded by   Date
asyncore_buffsize.diff  neologix      2012-12-26 13:51
test_asyncore.py        neologix      2012-12-26 13:51
asyncore_buffsize.diff  neologix      2012-12-28 17:49
Messages (15)
msg178212 - Author: Charles-François Natali (neologix) (Python committer) Date: 2012-12-26 13:51
As noted in issue #12498, asyncore.dispatcher_with_send only sends 512 bytes at a time: that is a third of the Ethernet MTU, which greatly reduces the attainable bandwidth and increases CPU usage.
Here's a patch bumping it to 8192 (and making it a class attribute so that derived classes can customize it if needed, although this is not documented).
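
A rough sketch of the idea, not the patch itself: the per-send() chunk size becomes a class attribute that a subclass can override. The attribute name send_buffer_size and the initiate_send() override below are illustrative assumptions, not the names used in the actual diff.
"""
import asyncore

class SizedSender(asyncore.dispatcher_with_send):
    # Hypothetical class attribute standing in for the hard-coded 512;
    # the real patch may name it differently.
    send_buffer_size = 8192

    def initiate_send(self):
        # Send at most send_buffer_size bytes per call and drop whatever
        # was actually written from the pending output buffer.
        num_sent = asyncore.dispatcher.send(
            self, self.out_buffer[:self.send_buffer_size])
        self.out_buffer = self.out_buffer[num_sent:]

class LoopbackSender(SizedSender):
    # A derived class customizes the size simply by overriding the attribute.
    send_buffer_size = 65536
"""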

Here's the result of a simplistic benchmark using asyncore.dispatcher_with_send to send data to a server:
Without patch:
"""
$ time ./python ~/test_asyncore.py localhost 4242

real    0m6.098s
user    0m4.472s
sys     0m1.436s
"""

With patch:
"""
$ time ./python ~/test_asyncore.py localhost 4242

real    0m0.937s
user    0m0.796s
sys     0m0.112s
"""

Of course, this is using the loopback interface, but it shows that the performance gain can be non-negligible. The test script is attached (use "netcat -l -p 4242" as the server).
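
For illustration only (this is not the attached test_asyncore.py), a client along these lines exercises the same code path against the netcat server; the payload size is arbitrary:
"""
import asyncore
import socket
import sys

DATA = b'x' * (10 * 1024 * 1024)   # arbitrary 10 MiB payload

class Client(asyncore.dispatcher_with_send):
    def __init__(self, host, port):
        asyncore.dispatcher_with_send.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.connect((host, port))

    def handle_connect(self):
        self.send(DATA)            # queued in out_buffer, drained by the loop

    def handle_write(self):
        self.initiate_send()
        if not self.out_buffer:    # everything flushed, stop the loop
            self.close()

if __name__ == '__main__':
    host, port = sys.argv[1], int(sys.argv[2])
    Client(host, port)
    asyncore.loop()
"""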
msg178214 - Author: Giampaolo Rodola' (giampaolo.rodola) (Python committer) Date: 2012-12-26 14:00
A couple of years ago I conducted similar tests and it turned out that 64k was the best compromise:
http://code.google.com/p/pyftpdlib/issues/detail?id=94
Twisted uses 128k.
I'd be for using 64k and also changing asynchat.async_chat.ac_*_buffer_size accordingly.
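
For context, the asynchat sizes mentioned above are ordinary class attributes, so a handler can already opt into larger buffers by overriding them; a minimal sketch using the 64k value suggested here:
"""
import asynchat

class TunedEchoHandler(asynchat.async_chat):
    # async_chat exposes its input/output chunk sizes as class attributes;
    # overriding them in a subclass is all the tuning there is.
    ac_in_buffer_size = 64 * 1024
    ac_out_buffer_size = 64 * 1024

    def __init__(self, sock):
        asynchat.async_chat.__init__(self, sock)
        self.set_terminator(b'\r\n')
        self.ibuffer = []

    def collect_incoming_data(self, data):
        self.ibuffer.append(data)

    def found_terminator(self):
        # Echo each complete line back to the peer.
        self.push(b''.join(self.ibuffer) + b'\r\n')
        self.ibuffer = []
"""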
msg178389 - Author: Charles-François Natali (neologix) (Python committer) Date: 2012-12-28 16:35
> A couple of years ago I conducted similar tests and it turned out that 64k was the best compromise:
> http://code.google.com/p/pyftpdlib/issues/detail?id=94
> Twisted uses 128k.
> I'd be for using 64k and also changing asynchat.async_chat.ac_*_buffer_size accordingly.

That sounds reasonable.

However, the mere fact that such constants are duplicated tends to
make me think that async_chat should actually be a subclass of
asyncore.dispatcher_with_send (or that ac_*_buffer_size should move
to asyncore.dispatcher). Sane default buffering constants belong more
to asyncore than to asynchat, no?
msg178396 - Author: Giampaolo Rodola' (giampaolo.rodola) (Python committer) Date: 2012-12-28 17:05
IMO no. asyncore.dispatcher_with_send should not exist in the first place, as it is basically a stripped-down version of asynchat.async_chat with fewer capabilities. I'd say it's there only for historical reasons.

Moving ac_*_buffer_size to asyncore.dispatcher makes no sense, as asyncore.dispatcher offers no buffering capabilities whatsoever in the first place.
msg178400 - Author: Antoine Pitrou (pitrou) (Python committer) Date: 2012-12-28 17:28
I agree that 64K seems better here (on Linux).
There is another problem with dispatcher_with_send: the buffering algorithm (both when appending and popping) is quadratic. You can easily observe it with your test script when growing the DATA. async_chat looks much saner in that respect; I wonder why the same algorithm couldn't be reused.

(regardless, reading the asyncore code really hurts the eyes :-/)
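
A toy illustration of the complexity difference (not asyncore's actual code): re-slicing a single bytes buffer after every partial send copies the whole remainder each time, whereas popping pre-split chunks off a deque, as async_chat's producer_fifo does, stays linear.
"""
import time
from collections import deque

CHUNK = 512
DATA = b'x' * (2 * 1024 * 1024)    # kept small so the quadratic case finishes quickly

def drain_bytes_buffer(data):
    # dispatcher_with_send style: one bytes object, re-sliced after each
    # partial send; every slice copies the shrinking remainder -> O(n**2).
    buf = data
    while buf:
        sent = min(CHUNK, len(buf))    # pretend send() wrote `sent` bytes
        buf = buf[sent:]

def drain_chunk_deque(data):
    # async_chat style: split once into chunks and pop them off a deque -> O(n).
    fifo = deque(data[i:i + CHUNK] for i in range(0, len(data), CHUNK))
    while fifo:
        fifo.popleft()

for drain in (drain_bytes_buffer, drain_chunk_deque):
    start = time.perf_counter()
    drain(DATA)
    print(drain.__name__, '%.3fs' % (time.perf_counter() - start))
"""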
msg178403 - Author: Charles-François Natali (neologix) (Python committer) Date: 2012-12-28 17:49
Alright, here's a simple patch bumping the buffers to 64K for asyncore
and asynchat.

> There is another problem with dispatcher_with_send: the buffering algorithm (both when appending and popping) is quadratic. You can easily observe it with your test script when growing the DATA. async_chat looks much saner in that respect; I wonder why the same algorithm couldn't be reused.

Yeah, I noticed that.
But even in asynchat, there's a lot of copying going on, length
computations performed twice in a row, etc.

> (regardless, reading the asyncore code really hurts the eyes :-/)

Indeed.
msg178407 - Author: Giampaolo Rodola' (giampaolo.rodola) (Python committer) Date: 2012-12-28 18:05
> But even in asynchat, there's a lot of copying going on, length
> computations performed twice in a row, etc.

What/where do you mean exactly?
I see little value in focusing efforts on things such as dispatcher_with_send, which are not supposed to be used, but asynchat.async_chat is different.
msg178413 - Author: Charles-François Natali (neologix) (Python committer) Date: 2012-12-28 18:45
> What/where do you mean exactly?

"""
    def push (self, data):
        sabs = self.ac_out_buffer_size
        if len(data) > sabs:
            for i in range(0, len(data), sabs):
                self.producer_fifo.append(data[i:i+sabs])
"""

len(data) is called twice

"""
            # handle classic producer behavior
            obs = self.ac_out_buffer_size
            try:
                data = first[:obs]
            except TypeError:
                data = first.more()
"""

Here, I think that len(first) <= self.ac_out_buffer_size, by
definition. So the slicing is actually just a copy (I realize that it
has the side effect of checking whether it's a buffer or a producer).

memoryview is also great to avoid copies when sending/receiving to a socket.
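
A sketch of the zero-copy pattern being referred to (illustrative, not a proposed patch): slicing a memoryview yields another view over the same bytes, so trimming off what was already sent does not copy the remainder.
"""
import socket

def send_without_copies(sock, payload):
    # socket.send() accepts any bytes-like object, so we can hand it
    # successive views instead of successively re-sliced bytes objects.
    view = memoryview(payload)
    while view:
        sent = sock.send(view)
        view = view[sent:]    # new view over the same buffer, no copy

# Usage sketch (assumes something is listening on localhost:4242):
# sock = socket.create_connection(('localhost', 4242))
# send_without_copies(sock, b'x' * (8 * 1024 * 1024))
"""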
msg178415 - Author: Giampaolo Rodola' (giampaolo.rodola) (Python committer) Date: 2012-12-28 19:27
> memoryview is also great to avoid copies when sending/receiving to a socket.

That's interesting. How exactly? Would producer_fifo have to change from a deque() to a memoryview() object? In that case we might have to take backward compatibility into account (producer_fifo already changed in 2.6 and if I'm not mistaken that might have caused some problems).
Maybe it makes sense to file a separate issue to address asynchat enhancements.
msg178434 - Author: Charles-François Natali (neologix) (Python committer) Date: 2012-12-28 21:45
>> memoryview is also great to avoid copies when sending/receiving to a socket.
>
> That's interesting. How exactly? Would producer_fifo have to change from a deque() to a memoryview() object? In that case we might have to take backward compatibility into account (producer_fifo already changed in 2.6 and if I'm not mistaken that might have caused some problems).
> Maybe it makes sense to file a separate issue to address asynchat enhancements.

Probably.
For an example of how it might be used, you can have a look here:
http://stackoverflow.com/questions/6736771/buffers-and-memoryview-objects-explained-for-the-non-c-programmer
and here:
http://eli.thegreenplace.net/2011/11/28/less-copies-in-python-with-the-buffer-protocol-and-memoryviews/

Also, IIRC, Antoine used memoryviews when rewriting parts of
multiprocessing in pure Python.
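
The receive side benefits in the same way; a minimal sketch (not taken from the linked articles) filling one pre-allocated buffer through successive memoryview slices with socket.recv_into():
"""
import socket

def recv_exactly(sock, nbytes):
    # Fill a single pre-allocated buffer in place; recv_into() writes
    # directly into the current view, so no intermediate bytes objects
    # are created and concatenated.
    buf = bytearray(nbytes)
    view = memoryview(buf)
    while view:
        received = sock.recv_into(view)
        if received == 0:
            raise ConnectionError('peer closed before %d bytes arrived' % nbytes)
        view = view[received:]    # advance without copying
    return bytes(buf)
"""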
msg178436 - Author: Antoine Pitrou (pitrou) (Python committer) Date: 2012-12-28 21:49
> Would producer_fifo have to change from a deque() to a memoryview()
> object?

A memoryview is not a container, it's a view over an existing container.

> In that case we might have to take backward compatibility into account 
> (producer_fifo already changed in 2.6 and if I'm not mistaken that
> might have caused some problems).

Does asyncore expose its implementation details?
msg178693 - Author: Giampaolo Rodola' (giampaolo.rodola) (Python committer) Date: 2012-12-31 15:37
> Does asyncore expose its implementation details?

I was talking about asynchat. What would have to change is the asynchat.async_chat.producer_fifo attribute, which is currently a collections.deque object.
msg178694 - Author: Giampaolo Rodola' (giampaolo.rodola) (Python committer) Date: 2012-12-31 15:37
BTW, the patch looks ok to me.
msg178734 - Author: Roundup Robot (python-dev) (Python triager) Date: 2013-01-01 15:33
New changeset 2eddf7c2efe6 by Charles-François Natali in branch 'default':
Issue #16787: Increase asyncore and asynchat default output buffers size, to
http://hg.python.org/cpython/rev/2eddf7c2efe6
msg178735 - Author: Charles-François Natali (neologix) (Python committer) Date: 2013-01-01 15:45
Closing!
History
Date User Action Args
2022-04-11 14:57:39  admin             set  github: 60991
2013-01-01 15:45:37  neologix          set  status: open -> closed
                                            resolution: fixed
                                            messages: + msg178735
                                            stage: patch review -> resolved
2013-01-01 15:33:36  python-dev        set  nosy: + python-dev
                                            messages: + msg178734
2012-12-31 15:37:45  giampaolo.rodola  set  messages: + msg178694
2012-12-31 15:37:03  giampaolo.rodola  set  messages: + msg178693
2012-12-28 21:49:16  pitrou            set  messages: + msg178436
2012-12-28 21:45:28  neologix          set  messages: + msg178434
2012-12-28 19:27:04  giampaolo.rodola  set  messages: + msg178415
2012-12-28 18:45:28  neologix          set  messages: + msg178413
2012-12-28 18:05:30  giampaolo.rodola  set  messages: + msg178407
2012-12-28 17:49:01  neologix          set  files: + asyncore_buffsize.diff
                                            messages: + msg178403
2012-12-28 17:28:53  pitrou            set  nosy: + pitrou
                                            messages: + msg178400
2012-12-28 17:05:36  giampaolo.rodola  set  messages: + msg178396
2012-12-28 16:35:18  neologix          set  messages: + msg178389
2012-12-26 14:00:16  giampaolo.rodola  set  nosy: + josiahcarlson, josiah.carlson
                                            messages: + msg178214
2012-12-26 13:51:19  neologix          set  files: + test_asyncore.py
2012-12-26 13:51:05  neologix          create