Author pfalcon
Recipients gvanrossum, pfalcon, vstinner, yselivanov
Date 2015-06-14.07:50:47
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1434268250.18.0.99884254892.issue24449@psf.upfronthosting.co.za>
In-reply-to
Content
This issue was brought up in a somewhat sporadic manner on the python-tulip mailing list, hence this ticket. The discussion on the ML:

https://groups.google.com/d/msg/python-tulip/JA0-FC_pliA/knMvVGxp2WsJ
(all other messages below threaded from this)

https://groups.google.com/d/msg/python-tulip/JA0-FC_pliA/lGqT54yupOIJ
https://groups.google.com/d/msg/python-tulip/JA0-FC_pliA/U0NBC1jLGSgJ
https://groups.google.com/d/msg/python-tulip/JA0-FC_pliA/zIx59jj8krsJ
https://groups.google.com/d/msg/python-tulip/JA0-FC_pliA/zSpjGKv23ioJ
https://groups.google.com/d/msg/python-tulip/JA0-FC_pliA/3mfGI8HIe_gJ
https://groups.google.com/d/msg/python-tulip/JA0-FC_pliA/rM4fyA9qlY4J

Summary of arguments:

1. This would make such an async_write() (a tentative name) symmetrical in usage with the read() method (i.e. it would be a coroutine used with "yield from"/"await"), which would certainly reduce user confusion and help novices learn/use asyncio.

2. The write() method is described (by transitively referring to WriteTransport.write()) as: "This method does not block; it buffers the data and arranges for it to be sent out asynchronously." Such a description implies a requirement of unlimited data buffering. E.g., if fed 1TB of data, it still must buffer it. Buffering of such size can't/won't work in practice - it will only lead to excessive swapping and/or termination due to out-of-memory conditions. Thus, providing only a synchronous high-level write operation goes against basic system reliability/security principles.
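To illustrate the buffering point, here is a toy sketch (BoundedWriter and its drain simulation are hypothetical, not asyncio API): a synchronous write() has no choice but to accept and buffer everything handed to it, while a coroutine write can suspend its caller until buffer space is available, keeping memory use bounded.

```python
import asyncio

class BoundedWriter:
    """Toy writer with a fixed-size buffer; hypothetical, not asyncio API."""

    def __init__(self, limit=16):
        self.limit = limit
        self.buffer = bytearray()

    def write(self, data):
        # Synchronous write: must accept and buffer everything,
        # regardless of how much memory that takes.
        self.buffer.extend(data)

    async def async_write(self, data):
        # Coroutine write: suspend the producer while the buffer is
        # full, so memory use stays bounded (backpressure).
        while len(self.buffer) + len(data) > self.limit:
            await asyncio.sleep(0)             # yield to the event loop
            del self.buffer[:self.limit // 2]  # simulate data draining out
        self.buffer.extend(data)

async def main():
    w = BoundedWriter(limit=16)
    for _ in range(100):          # 400 bytes pushed through a 16-byte buffer
        await w.async_write(b"xxxx")
    return len(w.buffer)

peak = asyncio.run(main())
assert peak <= 16                 # buffer never grows past its limit
```

With the synchronous write() in the same loop, the buffer would simply grow to hold all 400 bytes; the coroutine version caps it at the limit.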

3. The whole concept of a synchronous write in an asynchronous I/O framework stems from: 1) the way it was done in some pre-existing Python async I/O frameworks ("pre-existing" meaning brought up with older versions of Python and based on concepts available at that time; many people use the word "legacy" in such contexts); 2) PEP 3153, which essentially captures ideas used in the aforementioned pre-existing Python frameworks. PEP 3153 was rejected; it also contains some "interesting" claims like "Considered API alternatives - Generators as producers - [...] - nobody produced actually working code demonstrating how they could be used." That wasn't true at the time the PEP was written (http://www.dabeaz.com/generators/ , 2008, 2009), and asyncio is actually *the* framework which uses generators as producers.

asyncio also made the very honorable step of uniting the generator/coroutine and Transport paradigms - note that, as PEP 3153 shows, Transport proponents contrasted it with coroutine-based design. But asyncio also blocked (in both senses) high-level I/O on the Transport paradigm. What I'm arguing is not that Transports are good or bad, but that there should be a way to consistently use the coroutine paradigm for I/O in asyncio - for people who may appreciate it. This would also enable alternative implementations of asyncio subsets without a Transport layer, with less code, and thus more suitable for constrained environments.

The proposed change is to add the following to the asyncio.StreamWriter implementation:

@coroutine
def async_write(self, data):
    self.write(data)

I.e. the default implementation would be just a coroutine version of the synchronous write() method. The messages linked above discuss alternative implementations (which are really interesting for complete alternative implementations of asyncio).
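For comparison, here is a minimal runnable sketch of the proposal (StubWriter is a hypothetical stand-in for StreamWriter, which in reality wraps a transport): the default async_write() simply delegates to the synchronous write(), but gives callers a uniform "await" interface, symmetrical with "data = await reader.read()".

```python
import asyncio

class StubWriter:
    """Hypothetical stand-in for asyncio.StreamWriter."""

    def __init__(self):
        self.sent = []

    def write(self, data):
        # Existing synchronous API: buffer the data for sending.
        self.sent.append(data)

    async def async_write(self, data):
        # Proposed default: a coroutine wrapper over write(), so the
        # call site is "await w.async_write(data)" rather than a bare
        # synchronous call.
        self.write(data)

async def main():
    w = StubWriter()
    await w.async_write(b"hello")   # symmetrical with awaiting a read
    return b"".join(w.sent)

result = asyncio.run(main())
assert result == b"hello"
```

An alternative implementation (e.g. one without a Transport layer) could override async_write() to apply real backpressure while keeping the same call sites.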


The above changes are implemented in MicroPython's uasyncio package, which is an asyncio subset for memory-constrained systems.

Thanks for your consideration!
History
Date                 User     Action           Args
2015-06-14 07:50:50  pfalcon  set recipients:  + pfalcon, gvanrossum, vstinner, yselivanov
2015-06-14 07:50:50  pfalcon  set messageid:   <1434268250.18.0.99884254892.issue24449@psf.upfronthosting.co.za>
2015-06-14 07:50:50  pfalcon  link             issue24449 messages
2015-06-14 07:50:47  pfalcon  create