
Add asyncio.BufferedProtocol #76432

Closed · 1st1 opened this issue Dec 8, 2017 · 16 comments
Labels: 3.7 (EOL) end of life, topic-asyncio, type-feature (A feature request or enhancement)

1st1 (Member) commented Dec 8, 2017

BPO 32251
Nosy @pitrou, @asvetlov, @methane, @ambv, @1st1
PRs
  • bpo-32251: Implement asyncio.BufferedProtocol. #4755
  • bpo-32251: Fix docs #5408
  • Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = 'https://github.com/1st1'
    closed_at = <Date 2018-01-28.21:43:53.514>
    created_at = <Date 2017-12-08.02:04:18.773>
    labels = ['type-feature', '3.7', 'expert-asyncio']
    title = 'Add asyncio.BufferedProtocol'
    updated_at = <Date 2018-01-29.04:51:11.041>
    user = 'https://github.com/1st1'

    bugs.python.org fields:

    activity = <Date 2018-01-29.04:51:11.041>
    actor = 'yselivanov'
    assignee = 'yselivanov'
    closed = True
    closed_date = <Date 2018-01-28.21:43:53.514>
    closer = 'yselivanov'
    components = ['asyncio']
    creation = <Date 2017-12-08.02:04:18.773>
    creator = 'yselivanov'
    dependencies = []
    files = []
    hgrepos = []
    issue_num = 32251
    keywords = ['patch']
    message_count = 16.0
    messages = ['307830', '307832', '307841', '307846', '307856', '307857', '307859', '307861', '307862', '308019', '308020', '308027', '308059', '308060', '311002', '311055']
    nosy_count = 5.0
    nosy_names = ['pitrou', 'asvetlov', 'methane', 'lukasz.langa', 'yselivanov']
    pr_nums = ['4755', '5408']
    priority = 'normal'
    resolution = 'fixed'
    stage = 'resolved'
    status = 'closed'
    superseder = None
    type = 'enhancement'
    url = 'https://bugs.python.org/issue32251'
    versions = ['Python 3.7']

    1st1 added the 3.7 (EOL) end of life label Dec 8, 2017
    1st1 self-assigned this Dec 8, 2017
    1st1 (Member, Author) commented Dec 8, 2017

    A couple of emails from async-sig for context:

    1. https://mail.python.org/pipermail/async-sig/2017-October/000392.html
    2. https://mail.python.org/pipermail/async-sig/2017-December/000423.html

    I propose to add another Protocol base class to asyncio: BufferedProtocol.  It will have 'get_buffer()' and 'buffer_updated(nbytes)' methods instead of 'data_received()':

        # New class in the asyncio package:
        class BufferedProtocol:

            def get_buffer(self) -> memoryview:
                pass

            def buffer_updated(self, nbytes: int):
                pass

    When the protocol's transport is ready to receive data, it will call protocol.get_buffer().  The latter must return an object that implements the buffer protocol.  The transport will request a writable buffer over the returned object and receive data into that buffer.

    When the sock.recv_into(buffer) call is done, the protocol.buffer_updated(nbytes) method will be called, with the number of bytes received into the buffer passed as the first argument.
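
    For illustration only, here is a minimal sketch of a protocol built on this interface. The class name, the echo behaviour and the fixed 4Kb buffer are made up for the example, and get_buffer() is written with an optional size-hint argument so that the sketch also matches the signature that was eventually merged:

        import asyncio

        class EchoBufferedProtocol(asyncio.BufferedProtocol):

            def __init__(self):
                # One preallocated receive buffer per connection.
                self._buffer = bytearray(4096)

            def connection_made(self, transport):
                self._transport = transport

            def get_buffer(self, sizehint=-1):
                # Called by the transport before each recv_into(); must return
                # a writable object that supports the buffer protocol.
                return self._buffer

            def buffer_updated(self, nbytes):
                # The first nbytes of the buffer now hold newly received data.
                self._transport.write(bytes(self._buffer[:nbytes]))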

    I've implemented the proposed design in uvloop (branch 'get_buffer', [1]) and adjusted Antoine's benchmark [2] to use it.  Here are the benchmark results from my machine (macOS):

    vanilla asyncio: 120-135 Mb/s
    uvloop: 320-330 Mb/s
    uvloop/get_buffer: 600-650 Mb/s.

    [1] https://github.com/MagicStack/uvloop/tree/get_buffer
    [2] https://gist.github.com/1st1/1c606e5b83ef0e9c41faf21564d75ad7

    1st1 (Member, Author) commented Dec 8, 2017

    I've made a PR that implements the change for selector_events.py.

    With the change:

    vanilla asyncio: 120-135 Mb/s
    vanilla asyncio/get_buffer: 220-230 Mb/s
    uvloop: 320-330 Mb/s
    uvloop/get_buffer: 600-650 Mb/s

    If we decide to go forward with the proposed design, I'll update the PR with support for proactor_events.

    asvetlov (Contributor) commented Dec 8, 2017

    Numbers are great!

    A couple of questions:

    1. What happens if the size of the read data is greater than the pre-allocated buffer?
    2. Is the flow control logic changed or not? If I understand correctly, pause_reading() / resume_reading() continue to work as before.

    pitrou (Member) commented Dec 8, 2017

    I have another question: what happens if there is a partial read?

    For example, let's say I return a 1024-byte buffer in get_buffer(), but recv_into() receives data in 512-byte chunks. Is it:

    1. get_buffer() is called, returns a 1024-byte buffer
    2. recv_into() receives 512 bytes, writes them into buf[0:512]
    3. recv_into() receives another 512 bytes, writes them into buf[512:1024]

    or is it:

    1. get_buffer() is called, returns a 1024-byte buffer
    2. recv_into() receives 512 bytes, writes them into buf[0:512]
    3. get_buffer() is called, returns another 1024-byte buffer
    4. recv_into() receives 512 bytes, writes them into newbuf[0:512]

    1st1 (Member, Author) commented Dec 8, 2017

    > 1. What happens if the size of the read data is greater than the pre-allocated buffer?

    Let's say we have 2Kb of data in the socket's network buffer, and we only preallocated 0.5Kb in our buffer. We will then receive 0.5Kb into our buffer, 'Protocol.buffer_updated()' will be called with nbytes=512, and 1.5Kb of data will be left in the network buffer. So the loop will call get_buffer()/buffer_updated() again, and the cycle will continue until there's no data left.

    > 2. Is the flow control logic changed or not? If I understand correctly, pause_reading() / resume_reading() continue to work as before.

    Flow control will continue to work exactly the same way for BufferedProtocols.

    > I have another question: what happens if there is a partial read?
    > For example, let's say I return a 1024-byte buffer in get_buffer(), but recv_into() receives data in 512-byte chunks. Is it:

    It will be as follows:

    1. Protocol.get_buffer() is called, returns a 1024-byte buffer
    2. recv_into() receives 512 bytes, writes them into buf[0:512]
    3. Protocol.buffer_updated() is called with nbytes=512

    Now it's the responsibility of the Protocol to return a correct view over the buffer the next time get_buffer() is called.

    The general idea is to:

    1. allocate a big buffer

    2. keep track of how much data we have in that buffer, let's say we have a 'length' integer for that.

    3. when get_buffer() is called, return 'memoryview(big_buffer)[length:]'

    4. when buffer_updated(nbytes) is called, do 'length += nbytes; parse_buffer_if_possible()'

    I've implemented precisely this approach here: https://gist.github.com/1st1/1c606e5b83ef0e9c41faf21564d75ad7#file-get_buffer_bench-py-L27-L43
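
    A condensed, purely illustrative sketch of that approach (the class name and buffer size are made up, the parsing step is left as a stub, and the gist above is the real version):

        import asyncio

        class BigBufferProtocol(asyncio.BufferedProtocol):

            def __init__(self, buffer_size=256 * 1024):
                self._buffer = bytearray(buffer_size)
                self._length = 0  # bytes currently filled in self._buffer

            def get_buffer(self, sizehint=-1):
                # Hand out a writable view over the unused tail of the big buffer.
                # (A real implementation must also handle a full buffer, e.g. by
                # growing or compacting it before returning a view.)
                return memoryview(self._buffer)[self._length:]

            def buffer_updated(self, nbytes):
                self._length += nbytes
                self._parse_buffer_if_possible()

            def _parse_buffer_if_possible(self):
                # Application-specific: consume complete messages from
                # self._buffer[:self._length], then compact the buffer and
                # reset self._length accordingly.
                pass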

    pitrou (Member) commented Dec 8, 2017

    Do you think it's possible to fold BufferedProtocol into Protocol?
    i.e., either get_buffer() returns None (the default) and data_received() is called with a bytes object, or get_buffer() returns a writable buffer and buffer_updated() is called with the number of bytes received into the buffer.

    This would allow StreamReader to implement readinto().
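
    For clarity, a hypothetical sketch of what the transport-side dispatch could look like under that scheme (this is not existing asyncio code; sock, protocol and default_size are illustrative):

        def feed_protocol(sock, protocol, default_size=65536):
            # Hypothetical dispatch if get_buffer() were folded into Protocol.
            buf = protocol.get_buffer()
            if buf is None:
                # Classic path: copy the received data into a new bytes object.
                protocol.data_received(sock.recv(default_size))
            else:
                # Zero-copy path: receive directly into the protocol's buffer.
                protocol.buffer_updated(sock.recv_into(buf))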

    1st1 (Member, Author) commented Dec 8, 2017

    > Do you think it's possible to fold BufferedProtocol into Protocol?

    It would be a backwards-incompatible change :(

    There might also be existing protocols that coincidentally already implement 'get_buffer()' and 'buffer_updated()' methods that do something completely different.

    > This would allow StreamReader to implement readinto().

    We can easily refactor StreamReader to use 'BufferedProtocol'. Methods like 'readexactly()' would benefit from that.

    asvetlov (Contributor) commented Dec 8, 2017

    The new protocol may speed up more than reader.readexactly().
    Every reader.feed_data() call appends a new chunk to the existing buffer.
    We could try to read into the unused tail of the buffer instead.

    ambv (Contributor) commented Dec 8, 2017

    +1 on the idea, I would use this.

    methane (Member) commented Dec 11, 2017

    Looks nice. Can it speed up aiohttp too?

    asvetlov (Contributor) commented Dec 11, 2017

    Yes.
    aiohttp uses its own streams, but the public API and internal implementation are pretty close to asyncio streams.
    Moreover, the C-accelerated HTTP parser should work with the proposed BufferedProtocol seamlessly.

    pitrou (Member) commented Dec 11, 2017

    See https://eklitzke.org/goroutines-nonblocking-io-and-memory-usage for an interesting discussion of the drawbacks of some buffer handling idioms.

    1st1 (Member, Author) commented Dec 11, 2017

    > See https://eklitzke.org/goroutines-nonblocking-io-and-memory-usage for an interesting discussion of the drawbacks of some buffer handling idioms.

    Thanks for the link!

    It does make sense to use a pool of buffers for the proposed BufferedProtocol when you need to keep thousands of long-open connections. The current design makes that easy: when BufferedProtocol.get_buffer() is called you either take a buffer from the pool or allocate a temporary new one.

    For use cases like DB connections (asyncpg), a permanently allocated buffer per protocol instance is a good solution too, as there's usually a fairly limited number of open DB connections.
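
    A purely illustrative sketch of the pool idea (neither uvloop nor asyncpg necessarily does it this way; the pool, buffer size and helper names are made up):

        import asyncio
        import collections

        _free_buffers = collections.deque()  # shared pool of reusable bytearrays

        class PooledBufferedProtocol(asyncio.BufferedProtocol):

            def get_buffer(self, sizehint=-1):
                # Reuse a pooled buffer if one is available, otherwise allocate
                # a temporary one that will simply be garbage-collected.
                self._current = _free_buffers.popleft() if _free_buffers else bytearray(64 * 1024)
                return self._current

            def buffer_updated(self, nbytes):
                try:
                    self._process(memoryview(self._current)[:nbytes])
                finally:
                    # Only safe because _process() consumes the data synchronously.
                    _free_buffers.append(self._current)

            def _process(self, view):
                pass  # application-specific parsing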

    1st1 (Member, Author) commented Dec 11, 2017

    > > Looks nice. Can it speed up aiohttp too?
    >
    > Yes.
    > aiohttp uses its own streams, but the public API and internal implementation are pretty close to asyncio streams.
    > Moreover, the C-accelerated HTTP parser should work with the proposed BufferedProtocol seamlessly.

    I did some benchmarks, and it looks like BufferedProtocol can make httptools up to 5% faster for relatively small requests (< 10Kb). Don't expect big speedups there.

    For asyncpg, benchmarks that fetch a lot of data (50-100Kb) get up to 15% faster. So we'll definitely use the BufferedProtocol in asyncpg.

    For applications that need to handle megabytes of data per request (like Antoine's benchmark), the speedup will be up to 2x.

    1st1 (Member, Author) commented Jan 28, 2018

    New changeset 631fd38 by Yury Selivanov in branch 'master':
    bpo-32251: Implement asyncio.BufferedProtocol. (#4755)

    1st1 closed this as completed Jan 28, 2018
    1st1 added the type-feature (A feature request or enhancement) label Jan 28, 2018
    1st1 (Member, Author) commented Jan 29, 2018

    New changeset 07627e9 by Yury Selivanov in branch 'master':
    bpo-32251: Fix docs (#5408)

    ezio-melotti transferred this issue from another repository Apr 10, 2022