Author izbyshev
Recipients Adq, gregory.p.smith, izbyshev, terry.reedy, vinay.sajip
Date 2020-10-26.15:49:38
Message-id <1603727378.89.0.48417480234.issue42097@roundup.psfhosted.org>
Content
(Restored test.py attachment)

The issue happens because of incorrect usage of `multiprocessing.Pool`.

```python
    # Set up multiprocessing pool, initialising logging in each subprocess
    with multiprocessing.Pool(initializer=process_setup, initargs=(get_queue(),)) as pl:
        # 100 seems to work fine, 500 fails most of the time.
        # If you're having trouble reproducing the error, try bumping this number up to 1000
        pl.map(do_work, range(10000))

    if _globalListener is not None:
        # Stop the listener and join the thread it runs on.
        # If we don't do this, we may lose log messages when we exit.
        _globalListener.stop()
```

Leaving the `with` statement causes `pl.terminate()` to be called [1, 2].
Since multiprocessing simply sends SIGTERM to all workers, a worker may be killed while it holds the cross-process lock guarding `_globalQueue`. In that case, `_globalListener.stop()` blocks forever trying to acquire that lock (in order to put a sentinel into `_globalQueue` that tells the listener's background thread to stop monitoring it).
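
To illustrate the mechanism with a minimal, self-contained sketch (not part of the original report; the helper name, the sleep durations, and the timeout are arbitrary, and it assumes a POSIX platform where SIGTERM kills the child): terminating a process while it holds a `multiprocessing.Lock` leaves the lock acquired forever, so any later `acquire()` on it never succeeds.

```python
import multiprocessing
import os
import signal
import time

def hold_lock(lock):
    # Simulates a pool worker that gets terminated while holding the
    # cross-process lock (e.g. the one protecting the logging queue).
    lock.acquire()
    time.sleep(60)

if __name__ == "__main__":
    lock = multiprocessing.Lock()
    p = multiprocessing.Process(target=hold_lock, args=(lock,))
    p.start()
    time.sleep(1)                    # let the child acquire the lock
    os.kill(p.pid, signal.SIGTERM)   # what Pool.terminate() does to workers
    p.join()
    # The dead process never released the lock, so this acquire times out
    # instead of succeeding -- the same way _globalListener.stop() hangs.
    print("acquired:", lock.acquire(timeout=5))   # should print: acquired: False
```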

Consider using `Pool.close()` and `Pool.join()` to properly wait for task completion.
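
A minimal sketch of that fix, reusing the names from the attached test.py and assuming the rest of the script is unchanged:

```python
    # Close the pool and wait for the workers to exit on their own, so that
    # no worker can be SIGTERMed while it holds the queue's lock.
    with multiprocessing.Pool(initializer=process_setup, initargs=(get_queue(),)) as pl:
        pl.map(do_work, range(10000))
        pl.close()   # no more tasks will be submitted
        pl.join()    # wait for all workers to finish and exit cleanly
    # terminate() is still called on leaving the `with` block, but it is
    # harmless now because every worker has already exited.

    if _globalListener is not None:
        _globalListener.stop()
```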

[1] https://docs.python.org/3.9/library/multiprocessing.html#multiprocessing.pool.Pool.terminate
[2] https://docs.python.org/3.9/library/multiprocessing.html#programming-guidelines