Message379668
(Restored test.py attachment)
The issue happens due to an incorrect usage of `multiprocessing.Pool`.
```
# Set up multiprocessing pool, initialising logging in each subprocess
with multiprocessing.Pool(initializer=process_setup, initargs=(get_queue(),)) as pl:
    # 100 seems to work fine, 500 fails most of the time.
    # If you're having trouble reproducing the error, try bumping this number up to 1000
    pl.map(do_work, range(10000))
if _globalListener is not None:
    # Stop the listener and join the thread it runs on.
    # If we don't do this, we may lose log messages when we exit.
    _globalListener.stop()
```
Leaving the `with` statement calls `pl.terminate()` [1, 2].
Since multiprocessing simply sends SIGTERM to all workers, a worker may be killed while it holds the cross-process lock guarding `_globalQueue`. In that case, `_globalListener.stop()` blocks forever trying to acquire that lock (it needs it to put a sentinel into `_globalQueue`, which tells the background listener thread to stop monitoring the queue).
Consider using `Pool.close()` and `Pool.join()` to properly wait for task completion.
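A minimal sketch of that fix, assuming the helper names (`process_setup`, `do_work`, the listener setup) stand in for those in the attached test.py rather than reproducing it exactly: calling `close()` and `join()` inside the `with` block means every worker has exited, and released any queue lock it held, before the listener is stopped.

```python
import logging
import logging.handlers
import multiprocessing


def process_setup(queue):
    # Run in each worker: route its log records into the shared queue.
    root = logging.getLogger()
    root.setLevel(logging.INFO)
    root.addHandler(logging.handlers.QueueHandler(queue))


def do_work(i):
    logging.info("working on %d", i)
    return i * 2


def main():
    queue = multiprocessing.Queue()
    listener = logging.handlers.QueueListener(queue, logging.StreamHandler())
    listener.start()
    with multiprocessing.Pool(initializer=process_setup, initargs=(queue,)) as pool:
        results = pool.map(do_work, range(100))
        pool.close()  # no further tasks will be submitted
        pool.join()   # wait for workers to exit cleanly, before terminate() runs
    # Safe now: no worker can be holding the queue's lock, so the
    # listener can enqueue its sentinel and stop without deadlocking.
    listener.stop()
    return results


if __name__ == "__main__":
    main()
```

By the time the `with` block exits and `terminate()` fires, the already-joined pool has no live workers left to kill mid-critical-section, so the SIGTERM race described above cannot occur.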
[1] https://docs.python.org/3.9/library/multiprocessing.html#multiprocessing.pool.Pool.terminate
[2] https://docs.python.org/3.9/library/multiprocessing.html#programming-guidelines
| Date | User | Action | Args |
|---|---|---|---|
| 2020-10-26 15:49:38 | izbyshev | set | recipients: + izbyshev, terry.reedy, gregory.p.smith, vinay.sajip, Adq |
| 2020-10-26 15:49:38 | izbyshev | set | messageid: <1603727378.89.0.48417480234.issue42097@roundup.psfhosted.org> |
| 2020-10-26 15:49:38 | izbyshev | link | issue42097 messages |
| 2020-10-26 15:49:38 | izbyshev | create | |