This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in the Python Developer's Guide.

Author Vojtěch Boček
Recipients Vojtěch Boček, asvetlov, yselivanov
Date 2018-11-19.14:26:55
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1542637615.61.0.788709270274.issue35279@psf.upfronthosting.co.za>
In-reply-to
Content
By default, asyncio spawns as many as os.cpu_count() * 5 threads to run I/O on. When combined with beefy machines (e.g. Kubernetes servers) with, say, 56 cores, this results in very high memory usage.

This is amplified by the fact that `concurrent.futures.ThreadPoolExecutor` never kills its worker threads, and keeps spawning new threads on each submitted task until `max_workers` threads exist, even when existing workers are idle.
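For reference, the sizing rule described above can be sketched as follows. This reflects the default at the time of the report (Python 3.7 and earlier); later Python versions changed the formula:

```python
import os

# Sketch of the Python 3.7-era default: asyncio's default executor is a
# ThreadPoolExecutor whose max_workers defaults to os.cpu_count() * 5.
def legacy_default_max_workers(cpu_count):
    return cpu_count * 5

print(legacy_default_max_workers(56))             # 280 threads on a 56-core machine
print(legacy_default_max_workers(os.cpu_count() or 1))
```

On the 56-core machine mentioned above, that default allows up to 280 worker threads, consistent with the hundreds of idle threads observed below.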

Workaround:

    loop.set_default_executor(concurrent.futures.ThreadPoolExecutor(max_workers=8))

This is still not ideal: the program might not need all `max_workers` threads, but they get spawned anyway.
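A minimal runnable sketch of the workaround above (the cap of 8 and the `sum()` call are arbitrary choices for illustration):

```python
import asyncio
import concurrent.futures

def main():
    # Replace asyncio's default executor with a ThreadPoolExecutor
    # capped at 8 workers, as in the workaround above.
    loop = asyncio.new_event_loop()
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=8)
    loop.set_default_executor(executor)
    try:
        # run_in_executor(None, ...) now routes through the capped pool.
        return loop.run_until_complete(
            loop.run_in_executor(None, sum, range(10))
        )
    finally:
        executor.shutdown()
        loop.close()

result = main()
print(result)  # 45
```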

I hit this issue when running an asyncio program in Kubernetes. It created 260 idle threads and then ran out of memory.

I think the default max_workers should be capped at some reasonable maximum, and ThreadPoolExecutor should not spawn new threads unless necessary.
History
Date                 User           Action         Args
2018-11-19 14:26:55  Vojtěch Boček  setrecipients  + Vojtěch Boček, asvetlov, yselivanov
2018-11-19 14:26:55  Vojtěch Boček  setmessageid   <1542637615.61.0.788709270274.issue35279@psf.upfronthosting.co.za>
2018-11-19 14:26:55  Vojtěch Boček  link           issue35279 messages
2018-11-19 14:26:55  Vojtěch Boček  create