Author lemur
Recipients amy20_z, belopolsky, lemur, vstinner
Date 2008-12-11.02:05:17
Message-id <1228961121.2.0.457144310867.issue4216@psf.upfronthosting.co.za>
Content
I'm running Python 2.5.2 on Ubuntu 8.10.

I believe I've also encountered the problem reported here.  The scenario
in my case was the following:

1. Python process A uses subprocess.Popen to create another Python
process (B).  Process B is created with stdout=PIPE and stderr=PIPE.
Process A communicates with process B using communicate().

2. Python process B starts an ssh process (process C) which is invoked
to open a new control socket in master mode.  Process C is started
without pipes, so it inherits its std{in,out,err} from process B.
Process C is going to run for a long time; that is, it will run until a
command is sent to the control socket to close the ssh connection.

3. Process B does not wait for process C to end, so B itself exits
right away.

4. Python process A remains stuck in communicate() until process C
(ssh) dies, even though process B has already exited.
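
For concreteness, here is a minimal sketch of the scenario.  The file
names are hypothetical and "sleep 60" merely stands in for the
long-running ssh control-master process:

    # proc_a.py -- process A (hypothetical file name)
    import subprocess
    import sys

    b = subprocess.Popen([sys.executable, "proc_b.py"],
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)
    # Blocks until *every* writer of the pipes has closed them, i.e.
    # until the long-running grandchild exits, not just until B exits.
    out, err = b.communicate()

    # proc_b.py -- process B (hypothetical file name)
    import subprocess

    # Stands in for the ssh master process; started without pipes, so it
    # inherits B's stdout/stderr, which are the pipes created by A.
    subprocess.Popen(["sleep", "60"])
    # B now exits immediately without waiting for its child.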

Analysis:

The reason for this is that process C (ssh) inherits its stdout and
stderr from process B, i.e. the write ends of the pipes that process A
created.  Process C keeps both open until it terminates, so process A
does not get an EOF on the pipes it opened for communicating with
process B until process C ends.

The set of conditions that triggers the effect is not outlandish.
However, it is specific enough that testing with "pwd", "ls -l", "echo
blah", or any other short-lived command won't reproduce it.

In my case, I fixed the problem by changing the code of process B to
invoke process C with stdout and stderr set to PIPE, and to close those
pipes as soon as process B is satisfied that process C has started
properly.  In this way, process A does not block.

(FYI, process A in my case is the Python testing tool nosetests.  I use
nosetests to test a backup script written in Python, and that script
invokes ssh.)

It seems that, in general, code that creates subprocesses has two
distinct needs:

1. Create a subprocess and communicate with it until there is no more
data to be passed to its stdin or data to be read from its std{out,err}.

2. Create a subprocess and communicate with it *only* until *this*
process dies.  After it is dead, neither stdout nor stderr is of any
interest.

Currently, need 1 is addressed by communicate(), but need 2 is not.  In
my scenario above, I was able to work around the problem by modifying
process B, but there will be cases where process B is not modifiable
(or at least not easily modifiable).  In those cases, process A has to
be able to handle the situation on its own.
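
When process B cannot be changed, a possible workaround on the process
A side (just a sketch, assuming a Unix platform where select() accepts
pipe file objects) is to stop reading once B has exited instead of
waiting for EOF:

    import os
    import select
    import subprocess
    import sys

    b = subprocess.Popen([sys.executable, "proc_b.py"],
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)

    chunks = []
    while b.poll() is None:
        # Short timeout so we regularly re-check whether B has exited.
        ready, _, _ = select.select([b.stdout, b.stderr], [], [], 0.1)
        for f in ready:
            data = os.read(f.fileno(), 4096)
            if data:
                chunks.append(data)
    # B is dead; per need 2, whatever process C writes afterwards is of
    # no interest, so we deliberately do not wait for EOF on the pipes.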