Author christian.heimes
Recipients christian.heimes, gvanrossum
Date 2007-12-13.21:14:17
SpamBayes Score 0.000500035
Marked as misclassified No
Message-id <>
In-reply-to <>
Guido van Rossum wrote:
> That is done precisely to *avoid* blocking. I believe the only reason
> your example blocks is because you wait before reading -- you should
> do it the other way around, do all I/O first and *then* wait for the
> process to exit.

I believe so, too. But the subprocess docs don't warn about the problem,
and I've seen a fair share of programmers fall into the trap - including
me, a few weeks ago.

> I disagree. I don't believe it will block unless you make the mistake
> of waiting for the process first.

Consider yet another example:

>>> p = Popen(someprogram, stdin=PIPE, stdout=PIPE)
>>> p.stdin.write(10MB of data)

someprogram processes the incoming data in small blocks. Let's say it
reads 1KB at a time and the stdin and stdout pipe buffers each hold 1MB.
It reads 1KB from stdin and writes 1KB to stdout until the stdout buffer
is full. The program then stops and waits for Python to drain the stdout
buffer. However, the Python code is still writing data to the limited
stdin buffer, so once that buffer fills up too, both sides block on each
other: a deadlock.

>>> data = p.stdout.read()
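On POSIX, the interleaving this scenario demands can be sketched with
the selectors module: feed stdin only when the pipe can accept data and
drain stdout whenever data is available, so neither buffer fills up. The
child below is a hypothetical stand-in filter (copying stdin to stdout
in 1KB blocks), not the real someprogram:

```python
import os
import selectors
import subprocess
import sys

# Hypothetical stand-in for "someprogram": copies stdin to stdout
# in 1 KB blocks, like the filter described above.
child = [sys.executable, "-c",
         "import sys, shutil; "
         "shutil.copyfileobj(sys.stdin.buffer, sys.stdout.buffer, 1024)"]
data = b"y" * (4 * 1024 * 1024)  # larger than any pipe buffer

p = subprocess.Popen(child, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
os.set_blocking(p.stdin.fileno(), False)

sel = selectors.DefaultSelector()
sel.register(p.stdin, selectors.EVENT_WRITE)
sel.register(p.stdout, selectors.EVENT_READ)

chunks, pos = [], 0
while sel.get_map():
    for key, _ in sel.select():
        if key.fileobj is p.stdin:
            # Write only as much as the pipe accepts right now.
            pos += os.write(p.stdin.fileno(), data[pos:pos + 65536])
            if pos >= len(data):
                sel.unregister(p.stdin)
                p.stdin.close()  # EOF lets the child finish
        else:
            # Drain stdout so the child never stalls on a full buffer.
            block = os.read(p.stdout.fileno(), 65536)
            if block:
                chunks.append(block)
            else:
                sel.unregister(p.stdout)
p.wait()
result = b"".join(chunks)
```

This is essentially what communicate() does internally; hand-rolling it
is only worthwhile when the data is too large to hold in memory.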

Is the scenario realistic?

I tried it.

*** This works although it is slow
$ cat img_0948.jpg | convert - png:- >test

*** This example does not work. The test file is created but no data is
written to it - the write blocks forever once convert fills its stdout
pipe buffer:

p = subprocess.Popen(["convert", "-", "png:-"],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE)

img = open("img_0948.jpg", "rb")
with open("test", "wb") as f:
    p.stdin.write(img.read())   # blocks: the stdout pipe is never drained
    p.stdin.close()
    f.write(p.stdout.read())
*** It works with communicate(), which feeds stdin and drains stdout
concurrently:

img = open("img_0948.jpg", "rb")
with open("test", "wb") as f:
    out, err = p.communicate(img.read())
    f.write(out)
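For a self-contained demonstration without ImageMagick, assume a
hypothetical Python child that simply copies stdin to stdout as a
stand-in for convert; communicate() pushes 10 MB through it without
deadlocking, because it services both pipes at the same time:

```python
import subprocess
import sys

# Hypothetical stand-in for "convert": a child that copies stdin to
# stdout in small chunks, like the filter discussed above.
child = [sys.executable, "-c",
         "import sys, shutil; "
         "shutil.copyfileobj(sys.stdin.buffer, sys.stdout.buffer, 1024)"]

payload = b"x" * (10 * 1024 * 1024)  # 10 MB, far beyond any pipe buffer

p = subprocess.Popen(child, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
# communicate() writes stdin and reads stdout concurrently, so neither
# pipe buffer can fill up and stall the child.
out, err = p.communicate(payload)
```

The same pattern with a plain p.stdin.write(payload) followed by
p.stdout.read() would hang, exactly as in the convert example.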

Date User Action Args
2007-12-13 21:14:18  christian.heimes  set     spambayes_score: 0.000500035 -> 0.000500035; recipients: + christian.heimes, gvanrossum
2007-12-13 21:14:17  christian.heimes  link    issue1606 messages
2007-12-13 21:14:17  christian.heimes  create