This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in the Python Developer's Guide.

Classification
Title: multiprocessing.Pool hangs forever on segfault
Type: behavior
Stage: resolved
Components:
Versions: Python 3.4

Process
Status: closed
Resolution: duplicate
Dependencies:
Superseder: multiprocessing.Pool shouldn't hang forever if a worker process dies unexpectedly (issue 22393)
Assigned To:
Nosy List: Jonas Obrist, brianboonstra, jnoller, sbt
Priority: normal
Keywords: patch

Created on 2015-08-24 17:05 by Jonas Obrist, last changed 2022-04-11 14:58 by admin. This issue is now closed.

Files
File name            Uploaded                         Description
pool_segfault.py     Jonas Obrist, 2015-08-24 18:00   Program showing how a segfaulting Pool.apply will hang forever
process_segfault.py  Jonas Obrist, 2015-08-24 18:00   Program showing that Process will run as expected if target segfaults
patch.diff           Jonas Obrist, 2015-08-25 18:22   Patch that adds a warning if a worker exits prematurely
Messages (4)
msg249068 - Author: Jonas Obrist (Jonas Obrist) * Date: 2015-08-24 17:05
When using multiprocessing.Pool, if the function run in the pool segfaults, the program will simply hang forever. However, when using multiprocessing.Process directly, it runs fine, setting the exitcode to -11 as expected.

I would expect the Pool to behave similarly to Process, or at the very least for an exception to be raised instead of the program just silently hanging forever.

I was able to reproduce this issue both on Linux (Ubuntu 15.04) and Mac OS X.
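For reference, a minimal reproduction along the lines of the attached pool_segfault.py and process_segfault.py might look like the following. The attached files are not reproduced here; this sketch uses ctypes to trigger the crash, which is an assumption (the original reproduction used a small C extension, segfault.c).

import ctypes
import multiprocessing

def crash():
    # Dereference a null pointer to force a segmentation fault in the worker.
    ctypes.string_at(0)

if __name__ == '__main__':
    # multiprocessing.Process: the parent notices the crash, join() returns,
    # and exitcode is set to -11 (terminated by SIGSEGV).
    p = multiprocessing.Process(target=crash)
    p.start()
    p.join()
    print('Process exitcode:', p.exitcode)

    # multiprocessing.Pool: the worker dies while its task is still marked
    # as in progress, so apply() blocks forever waiting for a result.
    pool = multiprocessing.Pool(1)
    pool.apply(crash)
    print('never reached')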
msg249077 - Author: Jonas Obrist (Jonas Obrist) * Date: 2015-08-24 21:56
So the reason this is happening is very simple:

When using Pool.apply, the task (function) is sent to the task queue, which is consumed by the worker. At this point the task is "in progress". However, the worker dies without being able to finish the task or in any other way tell the Pool that it can't finish it. The actual process is then ended by the Pool, but the task is still in limbo, so any attempt at getting a result will hang forever.

I'm not sure there's a straightforward way to solve this (the ways I can think of off the top of my head involve adding quite a bit of overhead to the Pool so it keeps track of which process/worker is handling which task at any given time, so that if a worker exits prematurely its task can be failed rather than left in limbo), but at the very least this case should be documented, I think.
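As a side note, one way to avoid blocking forever while this is unfixed (a workaround sketch, not part of any proposed fix) is to use apply_async() and fetch the result with a timeout, which raises multiprocessing.TimeoutError instead of hanging:

import ctypes
import multiprocessing

def crash():
    ctypes.string_at(0)  # segfaults the worker

if __name__ == '__main__':
    pool = multiprocessing.Pool(1)
    result = pool.apply_async(crash)
    try:
        # get() with a timeout raises TimeoutError instead of hanging forever.
        result.get(timeout=5)
    except multiprocessing.TimeoutError:
        print('worker appears to have died; terminating the pool')
        pool.terminate()
        pool.join()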
msg249147 - Author: Jonas Obrist (Jonas Obrist) * Date: 2015-08-25 18:22
I've added a patch that would simply warn the user if a worker exits prematurely.
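The patch itself is not reproduced here; as a rough illustration of the idea only (names are hypothetical, not taken from patch.diff), the parent could warn when it reaps a worker whose exitcode indicates an abnormal exit:

import warnings

def warn_on_premature_exit(worker):
    # 'worker' stands in for a Pool-managed multiprocessing.Process.
    # A negative exitcode means the worker was killed by a signal
    # (e.g. -11 for SIGSEGV), so its in-progress task will never complete.
    if worker.exitcode not in (None, 0):
        warnings.warn(
            'Pool worker %s exited prematurely with exitcode %s; '
            'results of its in-progress task may never arrive'
            % (worker.name, worker.exitcode))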
msg250474 - Author: Brian Boonstra (brianboonstra) Date: 2015-09-11 13:58
See also issue 22393.
History
Date                 User            Action  Args
2022-04-11 14:58:20  admin           set     github: 69115
2015-09-16 17:01:10  berker.peksag   set     superseder: multiprocessing.Pool shouldn't hang forever if a worker process dies unexpectedly; stage: resolved
2015-09-16 11:47:34  Jonas Obrist    set     status: open -> closed; resolution: duplicate
2015-09-11 13:58:47  brianboonstra   set     nosy: + brianboonstra; messages: + msg250474
2015-08-25 18:22:37  Jonas Obrist    set     files: + patch.diff; keywords: + patch; messages: + msg249147
2015-08-24 21:56:57  Jonas Obrist    set     messages: + msg249077
2015-08-24 18:00:55  Jonas Obrist    set     files: + process_segfault.py
2015-08-24 18:00:30  Jonas Obrist    set     files: + pool_segfault.py
2015-08-24 18:00:09  Jonas Obrist    set     files: - process_segfault.py
2015-08-24 18:00:06  Jonas Obrist    set     files: - pool_segfault.py
2015-08-24 18:00:01  Jonas Obrist    set     files: - setup.py
2015-08-24 17:59:57  Jonas Obrist    set     files: - segfault.c
2015-08-24 17:06:39  Jonas Obrist    set     files: + process_segfault.py
2015-08-24 17:06:22  Jonas Obrist    set     files: + setup.py
2015-08-24 17:05:56  Jonas Obrist    set     files: + segfault.c
2015-08-24 17:05:17  Jonas Obrist    create