
Author aymeric.augustin
Recipients aymeric.augustin, metathink, vstinner, yselivanov
Date 2017-03-28.15:31:11
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1490715071.32.0.216492800241.issue29930@psf.upfronthosting.co.za>
In-reply-to
Content
For context, websockets calls `yield from self.writer.drain()` after each write in order to provide backpressure.

If the output buffer fills up, API coroutines that write to the websocket connection become slow, and (in a correctly implemented application) the backpressure propagates upstream.

This is a straightforward application of the only use case described in the documentation.
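To make the pattern concrete, here is a minimal, self-contained sketch (not websockets' actual code; modern async/await syntax rather than the `yield from` form quoted above) of write-then-drain over a throwaway local connection. `write()` buffers synchronously; `drain()` suspends the caller while the transport's buffer is above the high-water mark, which is what propagates backpressure:

```python
import asyncio

async def send(writer: asyncio.StreamWriter, data: bytes) -> None:
    writer.write(data)    # buffers the bytes without blocking
    await writer.drain()  # suspends until the output buffer drains

async def main() -> list:
    received = []

    async def handle(reader, writer):
        received.append(await reader.readexactly(5))
        writer.close()

    # Ephemeral server/client pair so the sketch runs without real I/O setup.
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    await send(writer, b"hello")
    writer.close()
    await writer.wait_closed()
    await asyncio.sleep(0.1)  # let the handler finish reading
    server.close()
    await server.wait_closed()
    return received

result = asyncio.run(main())
```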

----

I would find it annoying to have to serialize calls to drain() myself. It doesn't feel like something the "application" should care about. (websockets is the application from asyncio's perspective.)

I'm wondering if it could be a problem if a bunch of coroutines were waiting on drain() and got released simultaneously. I don't think it would be a problem for websockets, and since my use case seems typical, there's a good chance the same holds for other applications.

So I'm in favor of simply allowing an arbitrary number of coroutines to wait on drain() in parallel, if that's feasible.
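For illustration, here is a hedged sketch of what "serializing calls to drain() myself" would look like on the library side — the class and the stand-in writer are hypothetical, not asyncio's or websockets' code. A lock ensures only one coroutine awaits the real drain() at a time; the others queue behind it:

```python
import asyncio

class SerializedDrainWriter:
    """Hypothetical wrapper that serializes concurrent drain() calls."""

    def __init__(self, writer):
        self._writer = writer
        self._drain_lock = asyncio.Lock()

    def write(self, data):
        self._writer.write(data)

    async def drain(self):
        async with self._drain_lock:  # one drain() waiter at a time
            await self._writer.drain()

# Stand-in writer so the sketch runs without a real transport.
class _FakeWriter:
    def __init__(self):
        self.chunks = []

    def write(self, data):
        self.chunks.append(data)

    async def drain(self):
        await asyncio.sleep(0)  # pretend the buffer drains instantly

async def _demo():
    w = SerializedDrainWriter(_FakeWriter())

    async def send(i):
        w.write(i)
        await w.drain()

    # Many coroutines writing and draining concurrently, as described above.
    await asyncio.gather(*(send(i) for i in range(5)))
    return w._writer.chunks

chunks = asyncio.run(_demo())
```

This is the burden I would rather not carry in the application; allowing arbitrary concurrent waiters inside drain() itself makes the wrapper unnecessary.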