This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: multiprocessing.Queue hangs when process on other side dies
Type:               Stage:
Components: Library (Lib)
Versions: Python 3.8
Status: open        Resolution:
Dependencies:       Superseder:
Assigned To:        Nosy List: davin, kormang, myles.steinhauser, pitrou, shnizzedy, takluyver
Priority: normal    Keywords:

Created on 2021-04-11 11:52 by kormang, last changed 2022-04-11 14:59 by admin.

Messages (6)
msg390774 - (view) Author: Marko (kormang) Date: 2021-04-11 11:52
When a child process dies unexpectedly, Queue.get() waits indefinitely.

Here is an example:

import os
import signal
import multiprocessing

def child_func(qa, qb):
    msg = qa.get()
    print('Child received: ', msg)
    # The child kills itself before ever putting anything on qb,
    # simulating an unexpected death.
    os.kill(os.getpid(), signal.SIGTERM)
    qb.put('B')  # never reached

if __name__ == '__main__':
    qa = multiprocessing.Queue()
    qb = multiprocessing.Queue()
    process = multiprocessing.Process(target=child_func, args=(qa, qb))
    process.start()

    qa.put('A')
    try:
        # Hangs forever: the child died without putting anything on qb.
        msg = qb.get()
        print('Parent received: ', msg)
    except Exception as ex:
        print(ex)
    process.join()
    exit(0)
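
[Editor's note] A common workaround for the hang described above (not part of the original report) is to poll the queue with a timeout and check whether the producer process is still alive. A minimal sketch; the helper name get_or_detect_death is hypothetical:

```python
import multiprocessing
import os
import queue
import signal

def child_func(q):
    # Simulate an unexpected death before ever sending a result.
    os.kill(os.getpid(), signal.SIGKILL)
    q.put('B')  # never reached

def get_or_detect_death(q, process, poll_interval=0.1):
    """Block on q.get(), but give up if the producer process dies."""
    while True:
        try:
            return q.get(timeout=poll_interval)
        except queue.Empty:
            if not process.is_alive():
                raise RuntimeError('producer died without sending')

if __name__ == '__main__':
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=child_func, args=(q,))
    p.start()
    try:
        msg = get_or_detect_death(q, p)
    except RuntimeError as e:
        print('Detected:', e)
    p.join()
```

Polling trades a bounded delay (up to poll_interval) for never blocking forever; the exact interval is a tuning choice.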
msg390777 - (view) Author: Marko (kormang) Date: 2021-04-11 12:03
Possible duplicate of issue22393
msg390779 - (view) Author: Marko (kormang) Date: 2021-04-11 12:18
Somewhat related: issue43806, about asyncio.StreamReader.
msg406953 - (view) Author: Thomas Kluyver (takluyver) * Date: 2021-11-24 19:34
I think this is expected. The queue itself doesn't know that one particular process is meant to put data into it. It just knows that there's no data to get, so .get() blocks as the docs say it should.

This doesn't apply to issue22393, because the pool knows about its worker processes, so if one dies before completing a task, it can know something is wrong.
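
[Editor's note] For contrast, concurrent.futures.ProcessPoolExecutor already behaves this way for pools: if a worker dies mid-task, pending futures fail with BrokenProcessPool instead of hanging. A minimal sketch:

```python
import os
import signal
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures.process import BrokenProcessPool

def die(_):
    # Kill the worker process abruptly, as if it had crashed.
    os.kill(os.getpid(), signal.SIGKILL)

if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=1) as pool:
        future = pool.submit(die, None)
        try:
            future.result()
        except BrokenProcessPool:
            print('pool noticed the dead worker')
```

The executor can raise here precisely because, as the message above says, it knows which worker processes belong to it.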

You could add a method to 'half close' a queue, so it can only be used for receiving, but not sending. If you called this in the parent process after starting the child, then if the child died, the queue would know that nothing could ever put data into it, and .get() could error. The channels API in Trio allows this, and it's the same idea I've just described at the OS level in issue43806.
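
[Editor's note] The 'half close' idea can be sketched for the threaded queue module with a sentinel; this ClosableQueue class and its Closed exception are hypothetical, not a stdlib API:

```python
import queue

class ClosableQueue:
    """Sketch of a 'half-closeable' queue: close() marks the sending
    side finished, and get() raises Closed instead of blocking forever
    once the queue is closed and drained."""

    class Closed(Exception):
        pass

    _SENTINEL = object()

    def __init__(self):
        self._q = queue.Queue()

    def put(self, item):
        self._q.put(item)

    def close(self):
        # Wake up one blocked getter; get() re-enqueues the sentinel
        # so every subsequent getter also sees the closed state.
        self._q.put(self._SENTINEL)

    def get(self):
        item = self._q.get()
        if item is self._SENTINEL:
            self._q.put(self._SENTINEL)
            raise self.Closed

        return item
```

Trio's memory channels work like this: closing the send side makes a blocked receive() raise EndOfChannel.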
msg407709 - (view) Author: Marko (kormang) Date: 2021-12-05 12:49
Yes, something like that would indeed be really helpful.
How likely is it that something like that gets implemented?
msg407785 - (view) Author: Thomas Kluyver (takluyver) * Date: 2021-12-06 10:37
It's not my decision, so I can't really say. But the Queue API is pretty stable, and exists 3 times over in Python (the queue module for use with threads, in multiprocessing and in asyncio). So I'd guess that anyone wanting to add to that API would need to make a compelling case for why it's important, and be prepared for a lot of wrangling over API details (like method names and exceptions).

If you want to push that idea, you could try the ideas board on the Python discourse forum: https://discuss.python.org/c/ideas/6 .

You might also want to look at previous discussions about adding a Queue.close() method: issue29701 and issue40888.
History
Date                 User                Action  Args
2022-04-11 14:59:44  admin               set     github: 87971
2021-12-06 10:37:11  takluyver           set     messages: + msg407785
2021-12-05 12:49:39  kormang             set     messages: + msg407709
2021-11-24 19:34:13  takluyver           set     nosy: + takluyver; messages: + msg406953
2021-10-18 03:11:18  myles.steinhauser   set     nosy: + myles.steinhauser
2021-08-18 14:02:51  shnizzedy           set     nosy: + shnizzedy
2021-04-16 20:22:50  terry.reedy         set     nosy: + pitrou, davin
2021-04-11 12:18:32  kormang             set     messages: + msg390779
2021-04-11 12:03:48  kormang             set     messages: + msg390777
2021-04-11 11:52:50  kormang             create