msg248732 |
Author: Matt Spitz (Matt Spitz) |
Date: 2015-08-17 14:59 |
https://hg.python.org/cpython/file/3.4/Lib/concurrent/futures/thread.py#l114
ThreadPoolExecutor will keep spawning new threads, even if existing threads are waiting for new work. We should check against the queue length when deciding to spawn a new thread to avoid creating unnecessary threads.
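For reference, the spawning decision at the linked line boils down to a bare count comparison. This is a paraphrased sketch, not the verbatim stdlib source:

```python
# Paraphrased sketch of the pre-fix spawning decision in
# Lib/concurrent/futures/thread.py (names are illustrative, not the
# verbatim stdlib code).
def should_spawn_thread(num_threads, max_workers):
    """Decide whether submit() should start another worker thread."""
    # Only the thread count is consulted: until max_workers is reached,
    # every submit() adds a thread, even if all existing ones are idle.
    return num_threads < max_workers
```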
|
msg248749 |
Author: Matt Spitz (Matt Spitz) |
Date: 2015-08-17 20:09 |
On further investigation, it appears that we can't just check against the queue length, as it doesn't indicate whether threads are doing work or idle.
A change here will need a counter/semaphore to keep track of the number of idle/working threads, which may have negative performance implications.
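A minimal sketch of such a counter (hypothetical names; the lock taken on every state transition is exactly the overhead in question):

```python
import threading

class IdleCounter:
    """Hypothetical idle-thread tracker.

    Workers mark themselves idle while waiting for work and busy once
    they pick up an item; submit() skips spawning a thread whenever at
    least one worker is idle. The lock acquired on every transition is
    the potential performance cost mentioned above.
    """
    def __init__(self):
        self._lock = threading.Lock()
        self._idle = 0

    def mark_idle(self):
        with self._lock:
            self._idle += 1

    def mark_busy(self):
        with self._lock:
            self._idle -= 1

    def should_spawn(self):
        with self._lock:
            return self._idle == 0
```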
|
msg266716 |
Author: Torsten Landschoff (torsten) * |
Date: 2016-05-30 20:09 |
For demonstration purposes, here is a small example specifically for Linux which shows how each request starts a new thread even though the client blocks for each result.
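The attached many_threads.py is not reproduced in this thread; a hypothetical reconstruction of the demonstration might look like this:

```python
import concurrent.futures
import threading

# Hypothetical reconstruction of the attached many_threads.py (the file
# itself is not shown in the thread): the client blocks on every
# result, so one worker would suffice, yet on interpreters without the
# fix the pool still grows by one thread per submit().
executor = concurrent.futures.ThreadPoolExecutor(max_workers=100)
results = []
for i in range(10):
    future = executor.submit(pow, 2, i)
    results.append(future.result())      # block before the next submit
    # On affected versions this count keeps climbing toward max_workers.
    print("active threads:", threading.active_count())
executor.shutdown()
```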
|
msg266772 |
Author: Josh Rosenberg (josh.r) * |
Date: 2016-05-31 19:58 |
Is there a good reason to worry about overeager worker spawning? ProcessPoolExecutor spawns all workers when the first work item is submitted ( https://hg.python.org/cpython/file/3.4/Lib/concurrent/futures/process.py#l361 ), only ThreadPoolExecutor even makes an effort to limit the number of threads spawned. Threads are typically more lightweight than processes, and with the recent GIL improvements, the CPython specific costs associated with threads (particularly threads that are just sitting around waiting on a lock) are fairly minimal.
It just seems like if eager process spawning isn't a problem, neither is (cheaper) eager thread spawning.
|
msg279144 |
Author: David MacKenzie (dmacnet) |
Date: 2016-10-21 17:54 |
If each worker thread ties up other resources in an application, such as handles to server connections, conserving threads could have a significant impact. That's the situation for an application I am involved with.
I've written and tested a patch to make this change, using a second Queue for the worker threads to notify the executor in the main thread by sending a None when they finish a WorkItem and are therefore idle and ready for more work. It's a fairly simple patch. It does add a little more overhead to executing a job, inevitably. I can submit the patch if there's interest. Otherwise, perhaps the TODO comment in thread.py should be rewritten to explain why it's not worth doing.
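The patch itself was not attached; a sketch of the mechanism described (all names illustrative) could look like this:

```python
import queue
import threading

# Sketch of the described mechanism (the actual patch was not attached;
# names here are illustrative). Workers push None onto a second queue
# after finishing a work item; submit() drains that queue to learn
# whether an idle worker is available before spawning a new thread.
work_queue = queue.Queue()
idle_queue = queue.Queue()

def worker():
    while True:
        item = work_queue.get()
        if item is None:              # shutdown sentinel
            return
        item()                        # run the work item
        idle_queue.put(None)          # report "idle, ready for more"

def have_idle_worker():
    try:
        idle_queue.get_nowait()       # claim one idle notification
        return True
    except queue.Empty:
        return False

# Usage: run one work item through a single worker thread.
t = threading.Thread(target=worker)
t.start()
done = []
work_queue.put(lambda: done.append("job"))
work_queue.put(None)
t.join()
```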
|
msg279145 |
Author: David MacKenzie (dmacnet) |
Date: 2016-10-21 18:10 |
This issue seems to overlap with 14119.
|
msg315007 |
Author: iunknwn (iunknwn) * |
Date: 2018-04-05 22:45 |
I've submitted a PR that should resolve this - it uses a simple atomic counter to ensure new threads are not created if existing threads are idle.
One concern I do have - while writing the patch, I noticed the existing submit method (specifically the adjust_thread_count function) isn't thread safe. I've added more details in the PR.
|
msg315090 |
Author: Antoine Pitrou (pitrou) * |
Date: 2018-04-08 18:32 |
I'm not fond of this proposal. The existing behaviour is harmless, especially for a thread pool, since threads are cheap resources. Improving the logic a bit might seem nice, but it also complicates the executor implementation a bit more.
Besides, once the N threads are spawned, they remain alive until the executor is shut down. So all it takes is a spike in incoming requests and you don't save resources anymore.
|
msg315091 |
Author: Antoine Pitrou (pitrou) * |
Date: 2018-04-08 18:33 |
> If each worker thread ties up other resources in an application, such as handles to server connections, conserving threads could have a significant impact.
You may want to implement a pooling mechanism for those connections, independent of the thread pool. It is also probably more flexible (you can implement whichever caching and lifetime logic benefits your application).
|
msg315092 |
Author: Antoine Pitrou (pitrou) * |
Date: 2018-04-08 18:42 |
Side note:
> One concern I do have - while writing the patch, I noticed the existing submit method (specifically the adjust_thread_count function) isn't thread safe.
True. The executor is obviously thread-safe internally (as it handles multiple worker threads). But the user should not /call/ it from multiple threads.
(most primitives exposed by the Python stdlib are not thread-safe, except for the simplest ones such as lists, dicts etc.)
|
msg315260 |
Author: iunknwn (iunknwn) * |
Date: 2018-04-13 19:36 |
The existing behavior seems strange (and isn't well documented). The code had a TODO comment from bquinlan to implement idle thread recycling, so that was why I made the change.
That said, if threads are cheap, why not just create all the work threads on initialization, and then remove all the logic entirely?
Also, regarding the executor and thread-safety, there's an example in the current docs showing a job being added to the executor from a worker thread (it's part of the example on deadlocks, but it focuses on max worker count, not on the executor's thread-safety).
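The "create all worker threads on initialization" variant can be sketched as follows (a hypothetical class, not the stdlib's implementation):

```python
import queue
import threading

# Hypothetical sketch of eager spawning: all workers start in
# __init__, so submit() needs no thread-count logic at all.
class EagerThreadPool:
    def __init__(self, max_workers):
        self._queue = queue.Queue()
        self._threads = [threading.Thread(target=self._worker, daemon=True)
                         for _ in range(max_workers)]
        for t in self._threads:
            t.start()

    def _worker(self):
        while True:
            fn, args, out = self._queue.get()
            out.append(fn(*args))      # run the job, store its result
            self._queue.task_done()

    def submit(self, fn, *args):
        out = []                       # simplistic result holder
        self._queue.put((fn, args, out))
        return out

    def join(self):
        self._queue.join()             # wait for all queued work
```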
|
msg315261 |
Author: Antoine Pitrou (pitrou) * |
Date: 2018-04-13 20:04 |
> That said, if threads are cheap, why not just create all the work threads on initialization, and then remove all the logic entirely?
That would sound reasonable to me. bquinlan has been absent for a long time, so I wouldn't expect an answer from him on this issue.
> Also, regarding the executor and thread-safety, there's an example in the current docs showing a job being added to the executor from a worker thread
Actually, looking at the code again, submit() is protected by the shutdown_lock, so it seems it should be thread-safe. That's on git master btw.
|
msg315264 |
Author: iunknwn (iunknwn) * |
Date: 2018-04-13 20:18 |
Alright - I'll put together another patch that removes the logic, and spins up all threads during initialization.
Do you want me to create a completely new PR, or just update my existing one?
|
msg315265 |
Author: Antoine Pitrou (pitrou) * |
Date: 2018-04-13 20:23 |
Creating a new PR would be cleaner IMHO.
|
msg315471 |
Author: iunknwn (iunknwn) * |
Date: 2018-04-19 02:01 |
Done - as recommended, I've opened a new PR that changes the behavior to spawn all worker threads when the executor is created. This eliminates all the thread-spawning logic from the submit function.
|
msg315485 |
Author: Inada Naoki (methane) * |
Date: 2018-04-19 11:04 |
Why not just remove the TODO comment?
Threads are cheap, but not zero-cost.
|
msg315715 |
Author: iunknwn (iunknwn) * |
Date: 2018-04-24 22:32 |
I feel like there are two reasonable options here:
1) We can implement a thread pool with basic resource tracking. This means idle threads get recycled, and threads that have been sitting idle for a while are terminated as demand drops, so resources can be reclaimed. This wouldn't require very much work - we would just need to modify some of the queue operations to have timeouts, and track the number of idle threads (most of that's already in my first PR). We could easily add options like min_threads and idle_thread_timeout as options in kwargs to the init routine.
2) We can skip all tracking, and spin up a fixed number of threads at initialization. This removes the complexity of locks and counts, and means the thread pool executor will work identically to the process pool executor (which also eagerly spawns resources). If we want this, this is ready to go in the second PR.
I personally like option 1 because it feels closer to other languages I've worked in, but I'd like a bit more guidance from the reviewers before proceeding.
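Option 1's worker loop could look roughly like this (hypothetical names; min_threads / idle_thread_timeout would be __init__ kwargs as described above):

```python
import queue

# Sketch of option 1's worker loop (hypothetical names): the blocking
# get() gains a timeout, so a worker that sees no work for idle_timeout
# seconds simply exits, letting the pool shrink as demand drops.
def worker_loop(work_queue, run_item, idle_timeout=30.0):
    while True:
        try:
            item = work_queue.get(timeout=idle_timeout)
        except queue.Empty:
            return "timed out"    # idle too long: let this thread die
        if item is None:
            return "shutdown"     # pool shutdown sentinel
        run_item(item)
```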
|
msg341917 |
Author: Brian Quinlan (bquinlan) * |
Date: 2019-05-08 17:58 |
When I first wrote and started using ThreadPoolExecutor, I had a lot of code like this:
    with ThreadPoolExecutor(max_workers=500) as e:
        e.map(download, images)
I didn't expect that `images` would be a large list but, if it was, I wanted all of the downloads to happen in parallel.
I didn't want to have to explicitly take the list size into account when starting the executor (e.g. max_workers=min(500, len(images))), but I also didn't want to create 500 threads up front when I only needed a few.
My use case involved transient ThreadPoolExecutors so I didn't have to worry about idle threads.
In principle, I'd be OK with trying to avoid unnecessary thread creation if the implementation can be simple and efficient enough.
https://github.com/python/cpython/pull/6375 seems simple enough but I haven't convinced myself that it works yet ;-)
|
msg341935 |
Author: Brian Quinlan (bquinlan) * |
Date: 2019-05-08 19:50 |
After playing with it for a while, https://github.com/python/cpython/pull/6375 seems reasonable to me.
It needs tests and some documentation.
Antoine, are you still -1 because of the complexity increase?
|
msg342796 |
Author: Thomas (tilsche) * |
Date: 2019-05-18 12:50 |
We ran into this issue in the context of asyncio which uses an internal ThreadPoolExecutor to provide an asynchronous getaddrinfo / getnameinfo.
We observed that an async application spawned more and more threads across several reconnects. With a maximum of 5 x CPUs, that amounted to dozens of threads, which easily looked like a resource leak.
At least in this scenario I would strongly prefer to correctly reuse idle threads.
Spawning all possible threads on initialization in such a transparent case would be quite bad. Imagine a process-parallel daemon running an apparently single-threaded asyncio loop that creates one of these executors just to perform a single asyncio.getaddrinfo. Run 80 instances on an 80-core machine and you get 32,000 extra implicit threads.
Now you can argue whether the default executor in asyncio is good as is, but if the executors properly reuse threads, it would be quite unlikely to be a practical problem.
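One mitigation available to applications hitting this through asyncio is to install a default executor with a small, explicit bound instead of relying on the implicit 5 x CPUs one. A sketch:

```python
import asyncio
import concurrent.futures
import threading

# Sketch of the mitigation: give the event loop a bounded default
# executor, so everything it offloads (getaddrinfo, run_in_executor
# calls) shares a couple of threads instead of dozens.
async def main():
    loop = asyncio.get_running_loop()
    loop.set_default_executor(
        concurrent.futures.ThreadPoolExecutor(max_workers=2))
    # Run ten jobs on the default executor and record which worker
    # threads actually serviced them.
    threads = await asyncio.gather(
        *[loop.run_in_executor(None, threading.current_thread)
          for _ in range(10)])
    return {t.name for t in threads}

worker_names = asyncio.run(main())
```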
|
msg342798 |
Author: Antoine Pitrou (pitrou) * |
Date: 2019-05-18 14:36 |
Thomas, I think that's a good argument, so perhaps we should do this (strive to reuse threads) after all.
|
msg343243 |
Author: Antoine Pitrou (pitrou) * |
Date: 2019-05-22 21:30 |
New changeset 904e34d4e6b6007986dcc585d5c553ee8ae06f95 by Antoine Pitrou (Sean) in branch 'master':
bpo-24882: Let ThreadPoolExecutor reuse idle threads before creating new thread (#6375)
https://github.com/python/cpython/commit/904e34d4e6b6007986dcc585d5c553ee8ae06f95
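In outline, the merged approach counts idle workers with a semaphore (a sketch of the idea, not the commit's verbatim code):

```python
import threading

# Outline of the idea behind the merged change (a sketch, not the
# commit verbatim): a semaphore counts idle workers. A worker releases
# it right before blocking for more work, and submit() does a
# non-blocking acquire -- if that succeeds, an idle worker has been
# claimed and no new thread is needed.
idle_semaphore = threading.Semaphore(0)

def worker_about_to_wait():
    idle_semaphore.release()          # advertise one idle worker

def need_new_thread():
    # acquire(timeout=0) returns immediately: True when an idle worker
    # was available (and is now claimed), False when all are busy.
    return not idle_semaphore.acquire(timeout=0)
```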
|
msg343244 |
Author: Antoine Pitrou (pitrou) * |
Date: 2019-05-22 21:34 |
Thank you for your contribution iunknwn!
|
Date | User | Action | Args
2022-04-11 14:58:19 | admin | set | github: 69070
2019-05-22 21:34:37 | pitrou | set | status: open -> closed; resolution: fixed; messages: + msg343244; stage: patch review -> resolved
2019-05-22 21:30:00 | pitrou | set | messages: + msg343243
2019-05-18 14:36:41 | pitrou | set | messages: + msg342798
2019-05-18 13:07:31 | pierreglaser | set | nosy: + pierreglaser
2019-05-18 12:50:35 | tilsche | set | nosy: + tilsche; messages: + msg342796
2019-05-08 19:50:37 | bquinlan | set | messages: + msg341935
2019-05-08 17:58:29 | bquinlan | set | messages: + msg341917
2018-04-24 22:32:23 | iunknwn | set | messages: + msg315715
2018-04-19 11:04:02 | methane | set | nosy: + methane; messages: + msg315485
2018-04-19 02:01:36 | iunknwn | set | messages: + msg315471
2018-04-19 01:59:20 | iunknwn | set | pull_requests: + pull_request6224
2018-04-13 20:23:48 | pitrou | set | messages: + msg315265
2018-04-13 20:18:33 | iunknwn | set | messages: + msg315264
2018-04-13 20:04:38 | pitrou | set | messages: + msg315261
2018-04-13 19:36:55 | iunknwn | set | messages: + msg315260
2018-04-08 18:42:56 | pitrou | set | messages: + msg315092
2018-04-08 18:33:07 | pitrou | set | messages: + msg315091
2018-04-08 18:32:20 | pitrou | set | nosy: + tomMoral; messages: + msg315090; versions: + Python 3.8
2018-04-08 18:25:06 | ned.deily | set | nosy: + bquinlan, pitrou
2018-04-05 22:45:50 | iunknwn | set | messages: + msg315007; components: + Library (Lib)
2018-04-05 22:30:25 | iunknwn | set | nosy: + iunknwn
2018-04-04 21:38:00 | python-dev | set | keywords: + patch; stage: patch review; pull_requests: + pull_request6087
2016-10-21 18:10:06 | dmacnet | set | messages: + msg279145
2016-10-21 17:54:16 | dmacnet | set | nosy: + dmacnet; messages: + msg279144
2016-05-31 19:58:31 | josh.r | set | nosy: + josh.r; messages: + msg266772
2016-05-30 20:09:36 | torsten | set | files: + many_threads.py; nosy: + torsten; messages: + msg266716
2015-08-17 20:09:44 | Matt Spitz | set | messages: + msg248749
2015-08-17 20:03:25 | Matt Spitz | set | title: ThreadPoolExceutor doesn't reuse threads until #threads == max_workers -> ThreadPoolExecutor doesn't reuse threads until #threads == max_workers
2015-08-17 14:59:00 | Matt Spitz | create |