
Author: yus2047889
Recipients: yus2047889
Date: 2020-01-03 21:44:04
Message-id: <1578087844.56.0.81762061225.issue39207@roundup.psfhosted.org>
Content
This came up via a supporting library, but the actual issue is within concurrent.futures.ProcessPoolExecutor.

Discussion can be found at https://github.com/agronholm/apscheduler/issues/414

ProcessPoolExecutor does not spin down worker processes and spin up new ones. Instead, it simply reclaims existing processes and repurposes them for new jobs. Is there no option or way to have it shut a worker down after it finishes a job and start a fresh process in its place? That behavior is much better for garbage collection and would help prevent memory leaks, because any memory a worker has accumulated or leaked is returned to the OS when the process exits.
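
For comparison, multiprocessing.Pool already supports exactly this through its maxtasksperchild argument. A minimal sketch of the behavior I am asking for (worker recycling after each job):

import multiprocessing

def job(x):
    # Stand-in for a task that allocates memory the worker would
    # otherwise keep holding after the job completes.
    return x * x

if __name__ == "__main__":
    # maxtasksperchild=1 retires each worker after a single job and
    # forks a fresh replacement, so per-process memory is released.
    with multiprocessing.Pool(processes=3, maxtasksperchild=1) as pool:
        print(pool.map(job, range(10)))

ProcessPoolExecutor has no equivalent knob, which is why I am asking here.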

ProcessPoolExecutor also spins up too many processes: it treats max_workers as the number of processes to spawn immediately rather than as an upper bound. For example, I set max_workers=10 but submit only enough jobs to keep 3 workers busy. Given the documentation, one would expect at most 4 processes: the main process plus the 3 workers. Instead, ProcessPoolExecutor spawns all 10 workers and lets the other 7 sit idle even though they are not needed. A reproduction sketch follows below.
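
Here is a minimal sketch of what I am seeing; it counts the executor's workers with multiprocessing.active_children(), which includes child processes started by the executor:

import multiprocessing
import time
from concurrent.futures import ProcessPoolExecutor

def job(x):
    time.sleep(1)
    return x * x

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=10) as executor:
        # Submit only 3 jobs against a pool capped at 10 workers.
        futures = [executor.submit(job, i) for i in range(3)]
        time.sleep(0.5)  # give the executor time to start workers
        # Expected: about 3 worker processes. Observed: all 10 are spawned.
        print("workers:", len(multiprocessing.active_children()))
        for f in futures:
            print(f.result())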