This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: Asyncio loop blocks with a lot of parallel tasks
Type: resource usage
Stage: resolved
Components: asyncio
Versions: Python 3.6

process
Status: closed
Resolution: wont fix
Dependencies:
Superseder:
Assigned To:
Nosy List: asvetlov, decaz, terry.reedy, yselivanov
Priority: normal
Keywords:

Created on 2018-03-21 15:48 by decaz, last changed 2022-04-11 14:58 by admin. This issue is now closed.

Messages (9)
msg314207 - (view) Author: Marat Sharafutdinov (decaz) * Date: 2018-03-21 15:48
I want to schedule a lot of parallel tasks, but it becomes slow with loop blocking:

```python
import asyncio

task_count = 10000

async def main():
    for x in range(1, task_count + 1):
        asyncio.ensure_future(f(x))

async def f(x):
    if x % 1000 == 0 or x == task_count:
        print(f'Run f({x})')
    await asyncio.sleep(1)
    loop.call_later(1, lambda: asyncio.ensure_future(f(x)))

loop = asyncio.get_event_loop()
loop.set_debug(True)
loop.run_until_complete(main())
loop.run_forever()
```

Outputs:
```
Executing <Task finished coro=<main() done, defined at test_aio.py:5> result=None created at /usr/lib/python3.6/asyncio/base_events.py:446> took 0.939 seconds

...

Executing <TimerHandle when=1841384.785427177 _set_result_unless_cancelled(<Future finis...events.py:275>, None) at /usr/lib/python3.6/asyncio/futures.py:339 created at /usr/lib/python3.6/asyncio/tasks.py:480> took 0.113 seconds

...

Executing <Task pending coro=<f() running at test_aio.py:12> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7fe89344fcd8>()] created at /usr/lib/python3.6/asyncio/base_events.py:275> created at test_aio.py:13> took 0.100 seconds

...
```

What can be another way to schedule a lot of parallel tasks?
msg314209 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2018-03-21 15:55
Well, there's nothing we can do here; it's simply a lot of work for a single-threaded process to get 10000 tasks going.  You'll get the same picture in any other async Python framework.
msg314215 - (view) Author: Marat Sharafutdinov (decaz) * Date: 2018-03-21 17:46
But why, if I use multiprocessing (100 workers running 100 tasks each), does the loop still block within some workers? Are 100 tasks "a lot of work" for the asyncio loop?

```python
import asyncio
from multiprocessing import Process

worker_count = 100
task_count = 100

def worker_main(worker_id):
    async def main():
        for x in range(1, task_count + 1):
            asyncio.ensure_future(f(x))

    async def f(x):
        if x % 1000 == 0 or x == task_count:
            print(f'[WORKER-{worker_id}] Run f({x})')
        await asyncio.sleep(1)
        loop.call_later(1, lambda: asyncio.ensure_future(f(x)))

    loop = asyncio.get_event_loop()
    loop.set_debug(True)
    loop.run_until_complete(main())
    loop.run_forever()

if __name__ == '__main__':
    for worker_id in range(worker_count):
        worker = Process(target=worker_main, args=(worker_id,), daemon=True)
        worker.start()
    while True:
        pass
```
msg314216 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2018-03-21 18:09
The "blocking" you observe is caused by the Python GC.  If I add "import gc; gc.disable()", the warnings disappear.
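To make that suggestion concrete, here is a minimal sketch (the helper names, task count, and sleep duration are illustrative choices, not taken from the report): the cyclic collector is disabled around the burst of task creation, then re-enabled, and a collection is forced afterwards at a moment we choose.

```python
import asyncio
import gc

async def spawn(n):
    # Create n concurrent sleepers at once; in the original report, a burst
    # of Task objects like this is what triggers cyclic-GC pauses mid-loop.
    await asyncio.gather(*(asyncio.sleep(0.01) for _ in range(n)))
    return n

def run(n=1000):
    gc.disable()  # no automatic collections during the latency-sensitive phase
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(spawn(n))
    finally:
        loop.close()
        gc.enable()   # restore normal behaviour...
        gc.collect()  # ...and pay the collection cost at a point we control

completed = run()
```

Disabling the GC trades automatic collection for manual control; reference-counted garbage is still freed immediately, and the explicit `gc.collect()` cleans up any cycles once the latency-sensitive work is done.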
msg314275 - (view) Author: Marat Sharafutdinov (decaz) * Date: 2018-03-22 17:51
Does this mean that the GC uses most of the CPU time, so the loop blocks?

And another question: do you have any plans to optimize the loop so it would be possible to run a really large number of tasks in parallel?

Thanks.
msg314281 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2018-03-22 18:06
> Does this mean that the GC uses most of the CPU time, so the loop blocks?

GC stops all Python code in the OS process from running.  Because of the GIL, code in threads will obviously be stopped too.  This is currently true for both CPython and PyPy.

> And another question: do you have any plans to optimize the loop so it would be possible to run a really large number of tasks in parallel?

The only way of doing this is to have a few asyncio OS processes (because of the GIL we can't implement M:N scheduling in a single Python process).  So it's not going to happen in the foreseeable future :(
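As an illustration of the "few asyncio OS processes" idea (the helper names, worker counts, and the use of `ProcessPoolExecutor` with the `fork` start method are assumptions of this sketch, not part of the thread): each worker process runs its own event loop over a chunk of the tasks, so GC pauses and GIL contention in one worker do not stall the others.

```python
import asyncio
import multiprocessing
from concurrent.futures import ProcessPoolExecutor

def run_chunk(n):
    """Run n sleep-tasks on a private event loop inside one worker process."""
    async def f(x):
        await asyncio.sleep(0.01)
        return x

    async def main():
        return await asyncio.gather(*(f(x) for x in range(n)))

    loop = asyncio.new_event_loop()
    try:
        return len(loop.run_until_complete(main()))
    finally:
        loop.close()

def run_all(workers=4, tasks_per_worker=250):
    # "fork" start method (a Unix-only assumption here): children inherit the
    # parent's state instead of re-importing the main module.
    ctx = multiprocessing.get_context("fork")
    with ProcessPoolExecutor(max_workers=workers, mp_context=ctx) as pool:
        # Each worker process gets its own GIL and its own loop, so the
        # chunks really do run in parallel across CPU cores.
        return sum(pool.map(run_chunk, [tasks_per_worker] * workers))
```

This is the same shape as the multiprocessing example earlier in the thread, just expressed with a pool so results can be collected back in the parent.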
msg314298 - (view) Author: Andrew Svetlov (asvetlov) * (Python committer) Date: 2018-03-23 08:17
The problem with the example is that all 10000 tasks start at the same moment, each then waits for 1 second, and at the same moment every task clones itself.

Adding jitter to the example can solve the issue.
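A minimal sketch of the jitter idea (the random window of up to one extra second and the reduced task count are arbitrary choices for illustration): each task sleeps a slightly different amount, so the tasks no longer all wake and reschedule themselves in the same event-loop tick.

```python
import asyncio
import random

async def f(x):
    # Jitter: spread wake-ups over a window instead of having every task
    # fire in the same instant, which is what overloads a single tick.
    await asyncio.sleep(1 + random.uniform(0, 1))
    return x

async def main(n):
    return await asyncio.gather(*(f(x) for x in range(1, n + 1)))

loop = asyncio.new_event_loop()
try:
    # gather() preserves input order even though tasks finish out of order.
    results = loop.run_until_complete(main(500))
finally:
    loop.close()
```

The same trick applies to the self-rescheduling `call_later(1, ...)` in the original report: replacing the constant delay with `1 + random.uniform(0, 1)` keeps the tasks desynchronised across iterations.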
msg314299 - (view) Author: Marat Sharafutdinov (decaz) * Date: 2018-03-23 09:03
Concerning the example, adding jitter is useful, thanks!

But if the tasks do something whose timing is not as constant as a 1-second sleep, the problem will continue to appear anyway.
msg314345 - (view) Author: Terry J. Reedy (terry.reedy) * (Python committer) Date: 2018-03-24 00:06
If the unforeseeable future arrives, someone can reopen or open a new issue.
History
Date                 User         Action  Args
2022-04-11 14:58:58  admin        set     github: 77296
2018-03-24 00:06:52  terry.reedy  set     status: open -> closed
                                          nosy: + terry.reedy
                                          messages: + msg314345
                                          resolution: wont fix
                                          stage: resolved
2018-03-23 09:03:55  decaz        set     messages: + msg314299
2018-03-23 08:17:04  asvetlov     set     messages: + msg314298
2018-03-22 18:06:01  yselivanov   set     messages: + msg314281
2018-03-22 17:51:23  decaz        set     messages: + msg314275
2018-03-21 18:09:20  yselivanov   set     messages: + msg314216
2018-03-21 17:46:30  decaz        set     messages: + msg314215
2018-03-21 15:55:47  yselivanov   set     messages: + msg314209
2018-03-21 15:48:58  decaz        create