This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: subprocess.py stdout of child process always buffered.
Type: behavior
Stage:
Components:
Versions: Python 2.4, Python 2.5

process
Status: closed
Resolution: not a bug
Dependencies:
Superseder:
Assigned To:
Nosy List: astrand, gvanrossum, jason.w.kim
Priority: normal
Keywords:

Created on 2007-10-05 20:09 by jason.w.kim, last changed 2022-04-11 14:56 by admin. This issue is now closed.

Messages (3)
msg56247 - Author: Jason Kim (jason.w.kim) Date: 2007-10-05 20:09
Hi. 

I am currently using subprocess.py (2.4.4 and above) to try to have a
portable way of running a subtask on Linux and Windows.

I ran into a strange problem: a program runs and is "timed out",
but the subprocess's stdout and stderr are not fully "grabbed".
So I've been trying various ways to force a "flush" of the child
process's stdout and stderr so that at least partial results can be saved.

The "problem" app being spawned off is very simple:
--------------------------------------
#include <stdio.h>
#include <unistd.h>   /* for sleep() */

int main(void) {
  int i = 0;
  for (i = 0; i < 1000; ++i) {
    printf("STDOUT boo %d\n", i);
    fprintf(stdout, "STDOUT sleeping %d\n", i);
    fprintf(stderr, "STDERR sleeping %d\n", i);
    //fflush(stdout);
    //fflush(stderr);
    sleep(1);
  }
  return 0;
}

-----------------------------------------------

i.e. it just dumps its output to both stdout and stderr. The issue I
am seeing is that no matter what options I try to pass to subprocess,
the ONLY output I see from the executed process is the
"STDERR sleeping" lines, UNLESS I uncomment the fflush(stdout) line in
the application.

Executing the script with python -u doesn't seem to help either.
Now, if the task completes normally, then I am able to grab the entire
stdout and stderr produced by the subprocess. The issue is that I can't
seem to grab the partial output for stdout, and there does not seem to
be a way to make the file descriptors returned by pipe() unbuffered.

So the question is: what is the preferred method of forcing the pipe()
file descriptors created by subprocess.__init__() to be fully unbuffered?

Second, is there a better way of doing this?
i.e. a portable way to spawn off a task, with an optional timeout, grab
any partial results from the task's stdout and stderr, and grab the
return code from the child task?

Any hints and advice will be greatly appreciated.

Thank you.

The relevant snippet of Python code is:

import threading
from signal import *
from subprocess import *
import time
import string
import copy
import re
import sys
import os
from glob import glob
from os import path
import thread

class task_wrapper():
  def run(s):
    if s.timeout > 0:
      #print "starting timer for ", s.timeout
      s.task_timer = threading.Timer(s.timeout, task_wrapper.cleanup, [s])
      s.task_timer.start()
    s.task_start_time = time.time()
    s.task_end_time = s.task_start_time
    s.subtask = Popen(s.cmd, bufsize=0, env=s.env, stdout=PIPE, stderr=PIPE)
    s.task_out, s.task_err = s.subtask.communicate()

  def kill(s, subtask):
    """ attempts a portable way to kill things
    First, flush the buffer
    """
    print "killing", subtask.pid
    sys.stdout.flush()
    #s.subtask.stdin.flush()
    print "s.subtask.stdout.fileno()=", s.subtask.stdout.fileno()
    print "s.subtask.stderr.fileno()=", s.subtask.stderr.fileno()
    #os.fsync(s.subtask.stderr.fileno())
    #os.fsync(s.subtask.stdout.fileno())
    s.subtask.stdout.flush()
    s.subtask.stderr.flush()

    if os.name == "posix":
      os.kill(subtask.pid, SIGKILL)
    elif os.name == "nt":
      import win32api
      win32api.TerminateProcess(subtask._handle, 9)

  def cleanup(s, mode="TIMEOUT"):
    s.timer_lock.acquire()
    if s.task_result == None:
      if mode == "TIMEOUT":
        s.msg(""" Uhoh, subtask took too long""")
        s.kill(s.subtask)
        ....
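
One possible answer to the "is there a better way" question above is to read
both pipes incrementally in the parent, so that whatever the child has already
written to the pipes survives a timeout kill. The sketch below is only an
illustration, not a recommended API: it assumes a POSIX system (os.kill with
SIGKILL; on Windows one would use TerminateProcess as in the snippet above),
it assumes the ./a.out test program from the first message, the helper name
run_with_timeout is made up, and it still cannot recover data the child is
holding in its own stdio buffer (which turns out to be the real problem here,
see the follow-up messages).

-----------------------------------------------
import os
import signal
import subprocess
import threading

def run_with_timeout(cmd, timeout):
    """Run cmd; kill it after `timeout` seconds; return
    (returncode, partial stdout, partial stderr)."""
    proc = subprocess.Popen(cmd, bufsize=0,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    chunks = {"out": [], "err": []}

    def drain(pipe, key):
        # Read as data arrives, so whatever the child has written
        # to the pipe is kept even if the child is killed later.
        while True:
            data = os.read(pipe.fileno(), 4096)
            if not data:
                break
            chunks[key].append(data)

    readers = [threading.Thread(target=drain, args=(proc.stdout, "out")),
               threading.Thread(target=drain, args=(proc.stderr, "err"))]
    for t in readers:
        t.start()

    def kill():
        if proc.poll() is None:
            os.kill(proc.pid, signal.SIGKILL)   # POSIX-only

    timer = threading.Timer(timeout, kill)
    timer.start()
    proc.wait()
    timer.cancel()
    for t in readers:
        t.join()
    return proc.returncode, "".join(chunks["out"]), "".join(chunks["err"])

if __name__ == "__main__":
    rc, out, err = run_with_timeout(["./a.out"], 5)
    print "return code:", rc
    print "partial stdout:", out
    print "partial stderr:", err
-----------------------------------------------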
msg56251 - Author: Peter Åstrand (astrand) * (Python committer) Date: 2007-10-06 07:04
Most probably, this is not a problem with the Python side or the pipes,
but with the libc stdio streams in the application. stderr is normally
unbuffered, while stdout is fully buffered when it is not connected to a
terminal. You can see this by running the app with grep:

$ ./a.out | grep STDOUT
STDERR sleeping 0
STDERR sleeping 1
STDERR sleeping 2
STDERR sleeping 3

If you throw in a:

setvbuf(stdout, NULL, _IONBF, 0);

things will work the way you expect:

$ ./a.out | grep STDOUT
STDERR sleeping 0
STDOUT boo 0
STDOUT sleeping 0
STDERR sleeping 1
STDOUT boo 1
msg56255 - Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2007-10-06 22:36
This is how C stdio works in the subprocess.  Python's subprocess.py has
nothing to do with it and can't do anything about it.
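
If the child program cannot be changed to call setvbuf() or fflush(), one
POSIX-only workaround from the Python side is to hand the child a
pseudo-terminal instead of a pipe for stdout, since C stdio line-buffers a
stream that is connected to a tty. This is only a sketch under that
assumption, not something subprocess does for you; it again uses the ./a.out
test program from the first message and prints the child's output as it
arrives:

-----------------------------------------------
import os
import pty
import subprocess

# Give the child a pty for stdout so its libc line-buffers the stream;
# stderr is redirected to the same place to keep the example simple.
master, slave = pty.openpty()
proc = subprocess.Popen(["./a.out"], stdout=slave,
                        stderr=subprocess.STDOUT)
os.close(slave)                        # parent keeps only the master end

try:
    while True:
        data = os.read(master, 1024)   # now arrives line by line
        if not data:
            break
        print "got:", repr(data)
except OSError:
    # on Linux, reading the master raises EIO once the child side closes
    pass
proc.wait()
-----------------------------------------------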
History
Date                 User         Action  Args
2022-04-11 14:56:27  admin        set     github: 45582
2007-10-06 22:36:54  gvanrossum   set     status: open -> closed
                                          nosy: + gvanrossum
                                          resolution: not a bug
                                          messages: + msg56255
2007-10-06 07:05:00  astrand      set     nosy: + astrand
                                          messages: + msg56251
2007-10-05 20:09:30  jason.w.kim  create