This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

Title: asyncio: child process exit isn't detected if its stdin/stdout/stderr FDs have been inherited by a child process
Type: behavior
Stage: resolved
Components:
Versions: Python 3.4
Status: closed
Resolution: rejected
Dependencies:
Superseder:
Assigned To:
Nosy List: gvanrossum, neologix
Priority: normal
Keywords:

Created on 2013-10-21 10:15 by neologix, last changed 2022-04-11 14:57 by admin. This issue is now closed.

Messages (3)
msg200744 - (view) Author: Charles-François Natali (neologix) * (Python committer) Date: 2013-10-21 10:15
(This is a spinoff from

When SIGCHLD is received, _sig_chld() is executed:

    def _sig_chld(self):
        # (excerpt) waitpid() has given us the child's pid and returncode
        transp = self._subprocesses.get(pid)
        if transp is not None:
            transp._process_exited(returncode)

Then, here's _process_exited():

    def _process_exited(self, returncode):
        assert returncode is not None, returncode
        assert self._returncode is None, self._returncode
        self._returncode = returncode
        self._try_finish()

And here's _try_finish():

    def _try_finish(self):
        assert not self._finished
        if self._returncode is None:
            return
        if all(p is not None and p.disconnected
               for p in self._pipes.values()):
            self._finished = True
            self._loop.call_soon(self._call_connection_lost, None)

Thus, the _UnixSubprocessTransport protocol's connection_lost() is only
called if the all() expression is true, and it's true only if all the
subprocess pipes have been disconnected (or if we didn't set up any
pipes).

Unfortunately, this might very well never happen: imagine that the
subprocess forks a process of its own. This grand-child process
inherits the child process's pipes, so when the child process exits,
we won't receive any notification, since the grand-child still has
open FDs pointing to the original child's stdin/stdout/stderr.

The following simple test will hang until the background 'sleep' exits:
diff -r 47618b00405b Lib/test/test_asyncio/
--- a/Lib/test/test_asyncio/      Sat Oct 19 10:45:48 2013 +0300
+++ b/Lib/test/test_asyncio/      Sun Oct 20 19:32:37 2013 +0200
@@ -1059,6 +1059,23 @@

     @unittest.skipIf(sys.platform == 'win32',
                      "Don't support subprocess for Windows yet")
+    def test_subprocess_inherit_fds(self):
+        proto = None
+        @tasks.coroutine
+        def connect():
+            nonlocal proto
+            transp, proto = yield from self.loop.subprocess_shell(
+                functools.partial(MySubprocessProtocol, self.loop),
+                'sleep 60 &')
+            self.assertIsInstance(proto, MySubprocessProtocol)
+        self.loop.run_until_complete(connect())
+        self.loop.run_until_complete(proto.completed)
+        self.assertEqual(0, proto.returncode)
+
+    @unittest.skipIf(sys.platform == 'win32',
+                     "Don't support subprocess for Windows yet")
     def test_subprocess_close_after_finish(self):
         proto = None
         transp = None

If waitpid() returns a process's PID, then the process is done, and
there's no reason to wait any further for the pipes' disconnection:
they can be used as a hint that the process terminated, but they're
definitely not required...
msg200812 - (view) Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2013-10-21 16:49
This is by design. Let me try to defend the design.

As long as one of the pipes is still open the parent might be interested in it. The Protocol object does get notified of the process's exit, via process_exited(), and if at that point it wants to be done, it can close the pipes itself. (To do that, call transport.get_pipe_transport(FD).close().) Once that's done the subprocess protocol's connection_lost() method will be called.

I suppose an alternative approach might be to assume that when the subprocess exits the parent should close the pipes and be done, but there's a race condition there: there may still be data in one of the pipes (stdout or stderr) that should be processed before calling connection_lost(), similar to how we delay a socket's connection_lost() call until we have processed all data read from it.

So I don't think that alternative is viable.
msg200816 - (view) Author: Charles-François Natali (neologix) * (Python committer) Date: 2013-10-21 17:50
> This is by design. Let me try to defend the design.

OK, if that's a known limitation, then that's fine.
It would be nice to add a note somewhere in the documentation, so that people don't get bitten by this (and probably also add a test for process_exited).
Date User Action Args
2022-04-11 14:57:52  admin       set     github: 63525
2013-10-21 17:50:28  neologix    set     status: open -> closed
                                         resolution: rejected
                                         messages: + msg200816
                                         stage: needs patch -> resolved
2013-10-21 16:49:41  gvanrossum  set     messages: + msg200812
2013-10-21 15:52:51  gvanrossum  set     nosy: + gvanrossum
2013-10-21 10:15:57  neologix    create