
Author João Eiras
Recipients João Eiras
Date 2020-02-25.17:24:15
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1582651456.06.0.685882029108.issue39752@roundup.psfhosted.org>
In-reply-to
Content
Hi.

When one of the worker processes in a multiprocessing.Pool picks up a task and then crashes (and by crash I mean the Python process dying outright, e.g. with a SIGSEGV) or is killed, the pool in the main process notices that a worker died and repopulates the pool, but it does not keep track of which task was being handled by the process that died. As a consequence, a caller waiting for that task's result will get stuck forever.

Example:
    import multiprocessing
    import os

    with multiprocessing.Pool(1) as pool:
        # The single worker exits immediately; its task is never re-queued,
        # so .get() raises multiprocessing.TimeoutError after 2 seconds
        # (without the timeout it would block forever).
        result = pool.map_async(os._exit, [1]).get(timeout=2)
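For comparison, a minimal sketch of the same scenario using concurrent.futures.ProcessPoolExecutor instead of multiprocessing.Pool: that executor does detect abrupt worker death and fails pending futures with BrokenProcessPool rather than hanging (the worker count and timeout below are arbitrary illustration values):

```python
import concurrent.futures
import os
from concurrent.futures.process import BrokenProcessPool

with concurrent.futures.ProcessPoolExecutor(max_workers=1) as executor:
    try:
        # The worker dies immediately; the executor notices, marks the
        # pool as broken, and fails the future instead of blocking forever.
        executor.submit(os._exit, 1).result(timeout=30)
    except BrokenProcessPool:
        print("worker died; pool marked broken")
```

This is the behavior the report is arguing multiprocessing.Pool should also have: a crashed worker should fail the in-flight task rather than silently dropping it.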

I found this while trying to use a lock with a spawned process on Linux; that caused a crash and my program froze, but that is a separate issue.
History
Date                 User        Action         Args
2020-02-25 17:24:16  João Eiras  setrecipients  + João Eiras
2020-02-25 17:24:16  João Eiras  setmessageid   <1582651456.06.0.685882029108.issue39752@roundup.psfhosted.org>
2020-02-25 17:24:16  João Eiras  link           issue39752 messages
2020-02-25 17:24:15  João Eiras  create