Message362651
Hi.
When one of the worker processes in a multiprocessing.Pool picks up a task and then crashes hard (by crash I mean the Python process dying with something like a SEGV) or is killed, the pool in the main process notices that a worker died and repopulates the pool, but it does not keep track of which task the dead process was handling. As a consequence, a caller waiting for that task's result gets stuck forever.
Example:

import multiprocessing
import os

with multiprocessing.Pool(1) as pool:
    result = pool.map_async(os._exit, [1]).get(timeout=2)
I found this while trying to use a lock with a spawned process on Linux: the lock caused a crash and my program froze. The crash itself is a separate issue.
Date                | User       | Action | Args
2020-02-25 17:24:16 | João Eiras | set    | recipients: + João Eiras
2020-02-25 17:24:16 | João Eiras | set    | messageid: <1582651456.06.0.685882029108.issue39752@roundup.psfhosted.org>
2020-02-25 17:24:16 | João Eiras | link   | issue39752 messages
2020-02-25 17:24:15 | João Eiras | create |