This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: ProcessPoolExecutor(max_workers=64) crashes on Windows
Type: behavior Stage: resolved
Components: Windows Versions: Python 3.7, Python 3.8
process
Status: closed Resolution: fixed
Dependencies: Superseder:
Assigned To: bquinlan Nosy List:
Priority: normal Keywords: patch

Created on 2016-05-01 20:45 by diogocp, last changed 2022-04-11 14:58 by admin. This issue is now closed.

Pull Requests
URL Status Linked
PR 13132 merged bquinlan, 2019-05-06 19:07
PR 13206 closed miss-islington, 2019-05-08 18:05
PR 13643 merged miss-islington, 2019-05-29 02:38
Messages (12)
msg264608 - (view) Author: Diogo Pereira (diogocp) Date: 2016-05-01 20:45
I'm using Python 3.5.1 x86-64 on Windows Server 2008 R2. Trying to run the ProcessPoolExecutor example [1] generates this exception:

Exception in thread Thread-1:
Traceback (most recent call last):
  File "C:\Program Files\Python35\lib\threading.py", line 914, in _bootstrap_inner
    self.run()
  File "C:\Program Files\Python35\lib\threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Program Files\Python35\lib\concurrent\futures\process.py", line 270, in _queue_management_worker
    ready = wait([reader] + sentinels)
  File "C:\Program Files\Python35\lib\multiprocessing\connection.py", line 859, in wait
    ready_handles = _exhaustive_wait(waithandle_to_obj.keys(), timeout)
  File "C:\Program Files\Python35\lib\multiprocessing\connection.py", line 791, in _exhaustive_wait
    res = _winapi.WaitForMultipleObjects(L, False, timeout)
ValueError: need at most 63 handles, got a sequence of length 64


The problem seems to be related to the value of the Windows constant MAXIMUM_WAIT_OBJECTS (see [2]), which is 64. This machine has 64 logical cores, so ProcessPoolExecutor defaults to 64 workers.

Lowering max_workers to 63 or 62 still results in the same exception, but max_workers=61 works fine.


[1] https://docs.python.org/3.5/library/concurrent.futures.html#processpoolexecutor-example
[2] https://hg.python.org/cpython/file/80d1faa9735d/Modules/_winapi.c#l1339
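
For reference, a minimal reproducer along the lines of the docs example in [1] (paraphrased rather than copied verbatim; max_workers=64 is passed explicitly so the failure can be hit even on Windows machines with fewer than 64 logical cores, as long as the Python build predates the fix discussed later in this thread):

    # Paraphrase of the ProcessPoolExecutor docs example.  On an unpatched
    # Python for Windows, 64 workers make the queue-management thread wait
    # on more than 63 handles, producing the ValueError shown above.
    import concurrent.futures
    import math

    PRIMES = [112272535095293, 112582705942171, 115280095190773,
              115797848077099, 1099726899285419]

    def is_prime(n):
        if n < 2:
            return False
        if n % 2 == 0:
            return n == 2
        for i in range(3, int(math.sqrt(n)) + 1, 2):
            if n % i == 0:
                return False
        return True

    def main():
        with concurrent.futures.ProcessPoolExecutor(max_workers=64) as executor:
            for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
                print('%d is prime: %s' % (number, prime))

    if __name__ == '__main__':
        main()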
msg265007 - (view) Author: Terry J. Reedy (terry.reedy) * (Python committer) Date: 2016-05-06 19:11
The example runs fine, in about 1 second, on my 6-core (which I guess is 12 logical cores) Pentium. I am guessing that the default number of workers needs to be changed, at least on Windows, to min(#logical_cores, 60).
msg265086 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2016-05-07 18:23
Just noting that the `multiprocessing` module can be used instead.  In the example, add

    import multiprocessing as mp

and change

        with concurrent.futures.ProcessPoolExecutor() as executor:

to

        with mp.Pool() as executor:

That's all it takes.  On my 4-core Win10 box (8 logical cores), that continued to work fine even when passing 1024 to mp.Pool() (although it obviously burned time and RAM to create over a thousand processes).
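
Put together as a self-contained script, the swap looks roughly like this (the square helper and the pool size are illustrative; any module-level, picklable function will do):

    # The substitution described above: a multiprocessing.Pool sized past
    # the 61-worker limit that, per the report above, still starts and runs
    # on Windows, at the cost of creating that many processes.
    import multiprocessing as mp

    def square(x):          # trivial stand-in for real work
        return x * x

    if __name__ == '__main__':
        with mp.Pool(processes=64) as pool:
            print(pool.map(square, range(8)))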

Some quick Googling strongly suggests there's no reasonably general way to overcome the Windows-defined MAXIMUM_WAIT_OBJECTS=64 for implementations that call the Windows WaitForMultipleObjects().
msg265206 - (view) Author: Steve Dower (steve.dower) * (Python committer) Date: 2016-05-09 16:15
> Some quick Googling strongly suggests there's no reasonably general way to overcome the Windows-defined MAXIMUM_WAIT_OBJECTS=64 for implementations that call the Windows WaitForMultipleObjects().

The recommended way to deal with this is to spin up threads to do the wait (which sounds horribly inefficient, but threads on Windows are cheap, especially if they are waiting on kernel objects), and then wait on each thread.

Personally I think it'd be fine to make the _winapi module do that transparently for WaitForMultipleObjects, as it's complicated to get right (you need to ensure you map back to the original handle, timeouts and cancellation get complicated, there are real race conditions (mainly for auto-reset events), etc.), but in all circumstances it's better than just failing immediately. Handling it within multiprocessing isn't a bad idea, but won't help other users.

I'd love to write the code to do it, but I doubt I'll get time (especially since I'm missing the PyCon US sprints this year). Happy to help someone else through it. We're going to see Python being used on more and more multicore systems over time, where this will become a genuine issue.
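
A rough sketch of that thread-based approach for the simplest "wait for any handle" case (Windows only; the helper name is illustrative, the short poll interval stands in for real cancellation, and the auto-reset races mentioned above are ignored):

    # Each helper thread waits on at most 63 handles; the main thread then
    # waits on the helper threads.  Polling every 100 ms is a crude stand-in
    # for proper cancellation once one chunk sees a signalled handle.
    import threading
    import _winapi

    CHUNK = 63

    def wait_any(handles):
        """Return one handle observed as signalled, waiting indefinitely."""
        handles = list(handles)
        done = threading.Event()
        winners = []

        def waiter(chunk):
            while not done.is_set():
                res = _winapi.WaitForMultipleObjects(chunk, False, 100)
                if res == _winapi.WAIT_TIMEOUT:
                    continue
                winners.append(chunk[res - _winapi.WAIT_OBJECT_0])
                done.set()
                return

        threads = [threading.Thread(target=waiter,
                                    args=(handles[i:i + CHUNK],))
                   for i in range(0, len(handles), CHUNK)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return winners[0] if winners else None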
msg340390 - (view) Author: Robert Collins (rbcollins) * (Python committer) Date: 2019-04-17 10:59
This is now showing up in end user tools like black: https://github.com/ambv/black/issues/564
msg341545 - (view) Author: Brian Quinlan (bquinlan) * (Python committer) Date: 2019-05-06 15:48
If no one has short-term plans to improve multiprocessing.connection.wait, then I'll update the docs to list this limitation, ensure that ProcessPoolExecutor never defaults to >60 processes on Windows, and make it raise a ValueError if the user explicitly passes a larger number.
msg341571 - (view) Author: Brian Quinlan (bquinlan) * (Python committer) Date: 2019-05-06 17:36
BTW, the 61 process limit comes from:

63 - <the result queue reader> - <the thread wakeup reader>
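
A guard along the lines Brian describes might look roughly like this (sketch only; the constant and helper names are illustrative rather than quoted from the merged PR):

    import os
    import sys

    # 63 usable wait slots, minus the result-queue reader and the
    # thread-wakeup reader listed above.
    _MAX_WINDOWS_WORKERS = 63 - 2

    def _resolve_max_workers(max_workers):
        """Apply the Windows handle limit when sizing a ProcessPoolExecutor."""
        if max_workers is None:
            max_workers = os.cpu_count() or 1
            if sys.platform == 'win32':
                max_workers = min(max_workers, _MAX_WINDOWS_WORKERS)
        elif sys.platform == 'win32' and max_workers > _MAX_WINDOWS_WORKERS:
            raise ValueError(
                "max_workers must be <= %d on Windows" % _MAX_WINDOWS_WORKERS)
        return max_workers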
msg341918 - (view) Author: Steve Dower (steve.dower) * (Python committer) Date: 2019-05-08 18:04
New changeset 39889864c09741909da4ec489459d0197ea8f1fc by Steve Dower (Brian Quinlan) in branch 'master':
bpo-26903: Limit ProcessPoolExecutor to 61 workers on Windows (GH-13132)
https://github.com/python/cpython/commit/39889864c09741909da4ec489459d0197ea8f1fc
msg343858 - (view) Author: Ned Deily (ned.deily) * (Python committer) Date: 2019-05-29 03:12
New changeset 8ea0fd85bc67438f679491fae29dfe0a3961900a by Ned Deily (Miss Islington (bot)) in branch '3.7':
bpo-26903: Limit ProcessPoolExecutor to 61 workers on Windows (GH-13132) (GH-13643)
https://github.com/python/cpython/commit/8ea0fd85bc67438f679491fae29dfe0a3961900a
msg365886 - (view) Author: Mike Hommey (Mike Hommey) Date: 2020-04-07 03:01
This is still a problem in Python 3.7 (and, I guess, 3.8).

Even without passing max_workers, it fails with a ValueError from _winapi.WaitForMultipleObjects, with the message "need at most 63 handles, got a sequence of length 63".

That happens with max_workers=None and max_workers=61, but not with max_workers=60.

I wonder if there's an off-by-one in this test: https://github.com/python/cpython/blob/7668a8bc93c2bd573716d1bea0f52ea520502b28/Modules/_winapi.c#L1708
msg365901 - (view) Author: Steve Dower (steve.dower) * (Python committer) Date: 2020-04-07 10:25
More likely there's been another change to the events that are listened to by multiprocessing, which didn't update the overall limit.

File a new bug, please.
msg366314 - (view) Author: Ray Donnelly (Ray Donnelly) * Date: 2020-04-13 13:36
I took the liberty of filing this: https://bugs.python.org/issue40263

Cheers.
History
Date User Action Args
2022-04-11 14:58:30 admin set github: 71090
2022-01-29 19:57:11 iritkatriel link issue39339 superseder
2021-11-04 13:56:41 eryksun set nosy: - Alex.Willmer, ahmedsayeed1982
2021-11-04 13:54:46 eryksun set messages: - msg405712
2021-11-04 12:13:08 ahmedsayeed1982 set versions: + Python 3.6, - Python 3.7, Python 3.8; nosy: + Alex.Willmer, ahmedsayeed1982, - tim.peters, terry.reedy, paul.moore, bquinlan, rbcollins, tim.golden, ned.deily, sbt, zach.ware, steve.dower, davin, diogocp, Ray Donnelly, Mike Hommey; messages: + msg405712; components: + Cross-Build, - Windows
2020-04-13 13:36:26 Ray Donnelly set nosy: + Ray Donnelly; messages: + msg366314
2020-04-07 10:25:16 steve.dower set messages: + msg365901
2020-04-07 03:01:49 Mike Hommey set nosy: + Mike Hommey; messages: + msg365886
2019-05-29 03:13:32 ned.deily set versions: - Python 3.5, Python 3.6, Python 3.9
2019-05-29 03:12:47 ned.deily set nosy: + ned.deily; messages: + msg343858
2019-05-29 02:38:41 miss-islington set pull_requests: + pull_request13539
2019-05-09 17:37:35 bquinlan set status: open -> closed; resolution: fixed; stage: patch review -> resolved
2019-05-08 18:05:04 miss-islington set pull_requests: + pull_request13117
2019-05-08 18:04:58 steve.dower set messages: + msg341918
2019-05-06 19:07:35 bquinlan set keywords: + patch; stage: needs patch -> patch review; pull_requests: + pull_request13045
2019-05-06 17:36:09 bquinlan set messages: + msg341571
2019-05-06 15:53:57 bquinlan set assignee: bquinlan
2019-05-06 15:48:56 bquinlan set messages: + msg341545
2019-04-17 10:59:22 rbcollins set nosy: + rbcollins; messages: + msg340390; versions: + Python 3.7, Python 3.8, Python 3.9
2016-07-02 20:22:58 davin set nosy: + davin
2016-05-09 16:16:21 steve.dower set stage: needs patch; type: behavior; versions: + Python 3.6
2016-05-09 16:15:50 steve.dower set messages: + msg265206
2016-05-07 18:23:03 tim.peters set nosy: + tim.peters; messages: + msg265086
2016-05-07 11:19:53 pitrou set nosy: + sbt
2016-05-06 19:11:20 terry.reedy set nosy: + terry.reedy, bquinlan; messages: + msg265007
2016-05-01 20:45:38 diogocp create