This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

Title: multiprocessing.Pool._terminate_pool restarts workers during shutdown
Type: behavior Stage:
Components: Library (Lib) Versions: Python 3.5, Python 2.7
Status: open Resolution:
Dependencies: Superseder:
Assigned To: Nosy List: ecatmur, sbt
Priority: normal Keywords:

Created on 2013-09-26 08:53 by ecatmur, last changed 2022-04-11 14:57 by admin.

Repositories containing patches
Messages (2)
msg198432 - (view) Author: Edward Catmur (ecatmur) Date: 2013-09-26 08:53
There is a race condition in multiprocessing.Pool._terminate_pool that can result in workers being restarted during shutdown (process shutdown or pool.terminate()).

        worker_handler._state = TERMINATE    # <~~~~ race from here
        task_handler._state = TERMINATE

        debug('helping task handler/workers to finish')
        cls._help_stuff_finish(inqueue, task_handler, len(pool))

        assert result_handler.is_alive() or len(cache) == 0

        result_handler._state = TERMINATE
        outqueue.put(None)                  # sentinel

        # We must wait for the worker handler to exit before terminating
        # workers because we don't want workers to be restarted behind our back.
        debug('joining worker handler')
        worker_handler.join()                # <~~~~~ race to here

At any point between setting worker_handler._state = TERMINATE and joining the worker handler, if the intervening code causes a worker to exit, the worker handler can fail to notice that it has been shut down and so attempt to restart the worker:

    def _handle_workers(pool):
        thread = threading.current_thread()

        # Keep maintaining workers until the cache gets drained, unless the pool
        # is terminated.
        while thread._state == RUN or (pool._cache and thread._state != TERMINATE):
            # <~~~~~~ race here
            pool._maintain_pool()
            time.sleep(0.1)
        # send sentinel to stop workers
        pool._taskqueue.put(None)
        util.debug('worker handler exiting')
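This is a classic check-then-act race. A toy model with plain threads (not multiprocessing itself; all names here are illustrative) shows how a state change that lands after the check is missed, deterministically, using events in place of unlucky timing:

```python
import threading

RUN, TERMINATE = "RUN", "TERMINATE"

state = RUN
checked = threading.Event()   # handler has passed its state check
proceed = threading.Event()   # main thread lets the handler continue
restarted = []

def handler():
    # One iteration of a maintenance loop like _handle_workers.
    if state == RUN:                 # check passes while state is still RUN
        checked.set()
        proceed.wait()               # paused between the check and the act
        restarted.append("worker")   # stale restart: state is now TERMINATE

t = threading.Thread(target=handler)
t.start()
checked.wait()        # the handler is already past its check...
state = TERMINATE     # ...so this state change arrives too late
proceed.set()
t.join()
print(restarted)      # ['worker'] despite state == TERMINATE
```

The same structure appears in the pool: any code between setting the handler's state and joining it can trigger the stale branch.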

We noticed this initially because, in the absence of the fix to #14881, a ThreadPool trying to restart a worker fails and hangs the process.  With the fix to #14881 there is no immediate issue, but trying to restart a worker process/thread during pool shutdown is clearly unwanted and could have bad consequences, e.g. at process shutdown.

To trigger the race with ThreadPool, it is enough just to pause the _handle_workers thread after checking its state and before calling _maintain_pool:

import multiprocessing.pool
import time
class ThreadPool(multiprocessing.pool.ThreadPool):
    def _maintain_pool(self):
        time.sleep(0.1)  # pause the handler after its state check
        super(ThreadPool, self)._maintain_pool()
    def _repopulate_pool(self):
        assert self._state == multiprocessing.pool.RUN
        super(ThreadPool, self)._repopulate_pool()
pool = ThreadPool(4)
pool.map_async(lambda x: x, range(5))
pool.terminate()
pool.join()

Exception in thread Thread-5:
Traceback (most recent call last):
  File ".../cpython/Lib/threading.py", line 657, in _bootstrap_inner
    self.run()
  File ".../cpython/Lib/threading.py", line 605, in run
    self._target(*self._args, **self._kwargs)
  File ".../cpython/Lib/multiprocessing/pool.py", line 358, in _handle_workers
    pool._maintain_pool()
  File ".../", line 6, in _maintain_pool
    super(ThreadPool, self)._maintain_pool()
  File ".../cpython/Lib/multiprocessing/pool.py", line 232, in _maintain_pool
    self._repopulate_pool()
  File ".../", line 8, in _repopulate_pool
    assert self._state == multiprocessing.pool.RUN
AssertionError

In this case, the race occurs when ThreadPool._help_stuff_finish puts sentinels on inqueue to make the workers finish.

It is also possible to trigger the bug with multiprocessing.pool.Pool:

import multiprocessing.pool
import time
class Pool(multiprocessing.pool.Pool):
    def _maintain_pool(self):
        time.sleep(0.1)  # pause the handler after its state check
        super(Pool, self)._maintain_pool()
    def _repopulate_pool(self):
        assert self._state == multiprocessing.pool.RUN
        super(Pool, self)._repopulate_pool()
    def _handle_tasks(taskqueue, put, outqueue, pool):
        _real_handle_tasks(taskqueue, put, outqueue, pool)
_real_handle_tasks = multiprocessing.pool.Pool._handle_tasks
multiprocessing.pool.Pool._handle_tasks = Pool._handle_tasks
pool = Pool(4)
pool.map_async(str, range(10))
pool.map_async(str, range(10))
pool.terminate()
pool.join()

In this case, the race occurs when _handle_tasks checks thread._state, breaks out of its first loop, and sends sentinels to the workers.

The terminate/join can be omitted, in which case the bug will occur at gc or process shutdown when the pool's atexit handler runs.  The bug is avoided if terminate is replaced with close, and we are using this workaround.
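The workaround can be as simple as the following sketch (using ThreadPool here because it is cheap to run; Pool shuts down the same way):

```python
# Workaround sketch: shut the pool down with close() + join() instead of
# terminate(). close() lets the worker handler thread exit cleanly before
# any worker does, so the restart race cannot fire.
from multiprocessing.pool import ThreadPool

pool = ThreadPool(4)
result = pool.map_async(lambda x: x * x, range(5))
pool.close()   # no new tasks; workers exit once the queue drains
pool.join()    # wait for the handler threads and workers to finish
print(result.get())   # [0, 1, 4, 9, 16]
```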
msg198434 - (view) Author: Edward Catmur (ecatmur) Date: 2013-09-26 09:45
Suggested patch:

Move the worker_handler.join() call to immediately after setting the worker handler thread's state to TERMINATE.  This is a safe change, as nothing in the moved-over code affects the worker handler thread except by terminating workers, which is precisely what we don't want to happen.  In addition, this is near-equivalent to the existing close() + join() behaviour, which is well-tested.
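A sketch of that reordering, with the handlers passed in as parameters (names follow the _terminate_pool snippet quoted above; this illustrates the ordering only, it is not the actual patch):

```python
# Hypothetical sketch of the reordered shutdown: the worker handler is
# joined immediately after its state is set, so nothing that runs later
# can cause it to restart a worker behind our back.
TERMINATE = "TERMINATE"   # stand-in for multiprocessing.pool.TERMINATE

def terminate_pool_sketch(worker_handler, task_handler, result_handler,
                          help_stuff_finish, outqueue):
    worker_handler._state = TERMINATE
    worker_handler.join()            # moved up: closes the race window

    task_handler._state = TERMINATE
    help_stuff_finish()              # unblock task handler/workers, as before

    result_handler._state = TERMINATE
    outqueue.put(None)               # sentinel for the result handler
```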

Also: write tests; and modify Pool.__init__ to refer to its static methods using self rather than class name, to make them overridable for testing purposes.
Date User Action Args
2022-04-11 14:57:51  admin  set  github: 63295
2013-09-26 09:45:36  ecatmur  set  hgrepos: + hgrepo211
                                   messages: + msg198434
2013-09-26 09:03:26  pitrou  set  nosy: + sbt
2013-09-26 08:53:28  ecatmur  create