Message412064
Hi Andrew, thank you for your answer. I am experimenting with coroutines, as I am pretty new to them. My idea was to let the writer drain while other packets were read, and that is why I am awaiting writer_drain right before starting writer.write again. Isn't that the correct way to overlap the reads and the writes?
If I modify my initial code to look like:
async def forward_stream(reader: StreamReader, writer: StreamWriter, event: asyncio.Event, source: str):
    writer_drain = writer.drain()  # <--- awaitable is created here
    while not event.is_set():
        try:
            data = await asyncio.wait_for(reader.read(1024), 1)  # <-- CancelledError can be caught here, stack unwinds and writer_drain is never awaited, sure.
        except asyncio.TimeoutError:
            continue
        except asyncio.CancelledError:
            event.set()
            break
        ...  # the rest is not important for this case
    await writer_drain
so that, in case the task is cancelled, writer_drain will be awaited outside of the loop. This works, at the cost of having to introduce code specific for testing purposes (which feels wrong). In "production", the workflow of this code will be to lose the connection, break out of the loop, and wait for the writer stream to finish... but I am not introducing any method allowing me to cancel the streams once the script is running.
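For comparison, one common alternative (not what the message above does, just a sketch under my own assumptions) is to give up the read/drain overlap and drain immediately after each write. That way no awaitable is ever created ahead of time, so cancellation cannot leave one un-awaited, and drain-per-write also applies backpressure before the next read:

```python
import asyncio
from asyncio import StreamReader, StreamWriter

async def forward_stream(reader: StreamReader, writer: StreamWriter,
                         event: asyncio.Event, source: str) -> None:
    """Copy bytes from reader to writer until EOF, cancellation, or event."""
    try:
        while not event.is_set():
            try:
                data = await asyncio.wait_for(reader.read(1024), 1)
            except asyncio.TimeoutError:
                continue          # timeout: loop back and re-check the event
            if not data:          # EOF: the peer closed its end
                break
            writer.write(data)
            await writer.drain()  # drain right away; nothing is left pending
    except asyncio.CancelledError:
        event.set()  # signal the sibling direction to stop as well
        raise        # let the cancellation propagate to the caller
```

The trade-off is losing the intended overlap between draining and the next read, but since drain() usually completes instantly unless the transport buffer is full, the simpler, cancellation-safe shape is often preferred.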
In the same way leaked tasks are "swallowed", which I have tested and works, shouldn't these cases also be handled by the tearDownClass method of IsolatedAsyncioTestCase?
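To make the testing question concrete, here is a minimal, self-contained sketch of the kind of test being discussed; the names (forwarder, CancelTest) are hypothetical stand-ins, with a plain coroutine in place of the real stream pair. It cancels the task explicitly and asserts the CancelledError handler ran, rather than relying on the test case's teardown to clean up:

```python
import asyncio
import unittest


async def forwarder(event: asyncio.Event) -> None:
    # Stand-in for forward_stream: loops until the event is set or
    # the task is cancelled, mirroring the structure in the message.
    try:
        while not event.is_set():
            await asyncio.sleep(0.01)
    except asyncio.CancelledError:
        event.set()  # the cancellation path under test
        raise


class CancelTest(unittest.IsolatedAsyncioTestCase):
    async def test_cancellation_sets_event(self):
        event = asyncio.Event()
        task = asyncio.create_task(forwarder(event))
        await asyncio.sleep(0.05)  # let the task start running
        task.cancel()
        with self.assertRaises(asyncio.CancelledError):
            await task
        self.assertTrue(event.is_set())


if __name__ == "__main__":
    unittest.main()
```

Awaiting the cancelled task inside the test (instead of leaving it running) is what keeps the event loop clean at teardown time, which is the behavior the question is probing.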
History:
Date                | User       | Action | Args
2022-01-29 09:19:55 | bluecarrot | set    | recipients: + bluecarrot, asvetlov, yselivanov
2022-01-29 09:19:55 | bluecarrot | set    | messageid: <1643447995.77.0.590668156831.issue46568@roundup.psfhosted.org>
2022-01-29 09:19:55 | bluecarrot | link   | issue46568 messages
2022-01-29 09:19:55 | bluecarrot | create |