This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in the Python Developer's Guide.

Title: write failure ignored in Py_Finalize()
Type: Stage:
Components: Interpreter Core Versions: Python 2.3
Status: closed Resolution: accepted
Dependencies: Superseder:
Assigned To: Nosy List: doko, loewis, meyering, rhettinger, tim.peters, wom-work
Priority: high Keywords:

Created on 2004-11-26 23:02 by doko, last changed 2022-04-11 14:56 by admin. This issue is now closed.

Messages (14)
msg23325 - (view) Author: Matthias Klose (doko) * (Python committer) Date: 2004-11-26 23:02
[forwarded from]

Write errors on stdout may be ignored, and hence may 
result in loss of valuable user data. 
Here's a quick demo: 
$ ./close-bug 
$ ./close-bug > /dev/full && echo unreported write failure 
unreported write failure 
$ cat close-bug 
import sys 
def main (): 
    try: 
        print 'foo' 
        sys.stdout.close () 
    except IOError, e: 
        sys.stderr.write ('write failed: %s\n' % e) 
        sys.exit (1) 
if __name__ == '__main__': 
    main () 

This particular failure comes from the following
unchecked fflush 
of stdout in pythonrun.c: 
  static void 
  call_ll_exitfuncs(void) 
  { 
          while (nexitfuncs > 0) 
                  (*exitfuncs[--nexitfuncs])(); 
 
          fflush(stdout); 
          fflush(stderr); 
  } 
Flushing the stream manually, python does raise an
IOError.
Please note that simply adding a test for fflush
failure is 
not sufficient.  If you change the above to do this: 
  if (fflush(stdout) != 0) 
      ...handle error... 
It will appear to solve the problem. 
But here is a counterexample: 
import sys 
def main (): 
    try: 
        print "x" * 4095 
        sys.stdout.close () 
    except IOError, e: 
        sys.stderr.write ('write failed: %s\n' % e) 
        sys.exit (1) 

if __name__ == '__main__': 
    main () 
If you run the above with stdout redirected to /dev/full, 
it will silently succeed (exit 0) in spite of a write
failure. 
That's what happens on my Debian unstable system. 
Instead of just checking the fflush return value, 
it should also check ferror: 
  if (fflush(stdout) != 0 || ferror(stdout)) 
      ...handle error... 
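
The fflush-plus-ferror logic suggested above can be sketched in Python with a small model of a C stdio stream. The class and function names here are illustrative, not part of CPython; the point is that a sticky error flag (like ferror) can record a failure that a later, successful fflush would otherwise hide:

```python
class StickyErrorStream:
    """Minimal model of a C stdio stream: a failed write latches a
    sticky error flag (like ferror) that outlives the failing call."""
    def __init__(self, fail_writes=False):
        self.fail_writes = fail_writes
        self.error_flag = False   # models ferror()
        self.buffer = []

    def write(self, data):
        if self.fail_writes:
            self.error_flag = True   # error recorded; caller may ignore it
            return
        self.buffer.append(data)

    def flush(self):
        # models fflush(): 0 on success, EOF (-1) on failure; succeeds
        # here because there is nothing left to push to the device
        return 0

def check_and_flush(stream):
    # mirrors the suggested check: fail if the flush fails OR an
    # earlier error was latched in the stream's error flag
    prev_fail = stream.error_flag
    return -1 if (stream.flush() != 0 or prev_fail) else 0

# A stream whose earlier write failed: flush() alone reports success,
# but consulting the error flag catches the lost data.
bad = StickyErrorStream(fail_writes=True)
bad.write('x' * 4095)
assert bad.flush() == 0            # the naive fflush-only check misses it
assert check_and_flush(bad) == -1  # the combined check reports it
```

This is exactly why checking only the fflush return value is not enough: the data may already have been written (and lost) by an earlier buffer flush whose failure is only visible through the error flag.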

msg23326 - (view) Author: Ben Hutchings (wom-work) Date: 2004-11-29 10:43
Logged In: YES 

Each Python file object has a pointer to the function to be
called on the C stream when close is called on the file
object.  Normally this is fclose, but Python doesn't want
the standard streams closed so their file objects are
created with the function pointer set to NULL, making close
a no-op on the underlying files.  I'm going to attach a
patch that changes the function for stdout and stderr to be
fflush, so that the streams stay open but write errors are
detected at the time of an explicit close.  I don't see the
relevance of the exitfuncs.
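
The close-callback arrangement Ben describes can be modeled in Python. The names below are stand-ins for the C-level machinery, not CPython identifiers; the sketch shows why a NULL closer makes close a no-op, and what swapping in fflush changes:

```python
class FileObject:
    """Toy model of Python 2's file object: close() delegates to a
    pluggable C-level function, or does nothing when it is None."""
    def __init__(self, name, closer=None):
        self.name = name
        self.closer = closer      # stands in for fclose, fflush, or NULL
        self.flushed = False
        self.closed = False

    def close(self):
        if self.closer is not None:
            self.closer(self)

def do_fclose(f):   # stand-in for C fclose: flush, then really close
    f.flushed = True
    f.closed = True

def do_fflush(f):   # stand-in for C fflush: flush but keep the stream open
    f.flushed = True

# Regular files get fclose; stdout traditionally got NULL (close is a
# silent no-op); the proposed patch swaps in fflush so write errors
# surface at close time while the underlying stream stays open.
regular = FileObject('data.txt', closer=do_fclose)
old_stdout = FileObject('<stdout>', closer=None)
new_stdout = FileObject('<stdout>', closer=do_fflush)
for f in (regular, old_stdout, new_stdout):
    f.close()
assert regular.closed and regular.flushed
assert not old_stdout.flushed                        # silent no-op
assert new_stdout.flushed and not new_stdout.closed  # flushed, kept open
```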
msg23327 - (view) Author: Ben Hutchings (wom-work) Date: 2004-11-29 10:51
Logged In: YES 

I can't see how to upload an attachment to this bug, so see
patch 1075147.
msg23328 - (view) Author: Jim Meyering (meyering) Date: 2004-12-02 09:20
Logged In: YES 

I submitted the original report (and text above).
The patch that just calls fflush is not enough, as the second sample script illustrates.
Here's a patch that does a little better

--- Python/sysmodule.c.~2.126.~ 2004-08-12 20:19:17.000000000 +0200
+++ Python/sysmodule.c  2004-12-02 09:59:09.058953816 +0100
@@ -927,6 +927,13 @@ settrace() -- set the global debug traci
 /* end of sys_doc */ ;
 
+static int
+_check_and_flush (FILE *stream)
+{
+	int prev_fail = ferror (stream);
+	return fflush (stream) || prev_fail ? EOF : 0;
+}
+
 PyObject *
@@ -941,8 +948,8 @@ _PySys_Init(void)
        sysdict = PyModule_GetDict(m);
        sysin = PyFile_FromFile(stdin, "<stdin>", "r", NULL);
-       sysout = PyFile_FromFile(stdout, "<stdout>", "w", NULL);
-       syserr = PyFile_FromFile(stderr, "<stderr>", "w", NULL);
+       sysout = PyFile_FromFile(stdout, "<stdout>", "w", _check_and_flush);
+       syserr = PyFile_FromFile(stderr, "<stderr>", "w", _check_and_flush);
        if (PyErr_Occurred())
                return NULL;
 #ifdef MS_WINDOWS

With that patch, the failing script now evokes a diagnostic
and nonzero exit.

  $ ./python write-4096 > /dev/full
  write failed: (0, 'Error')

But, as you can see, the diagnostic leaves a lot to be desired.
It should say ``write failed: [Errno 28] No space left on device''.
It'll take a more significant change to propagate errno from the
failing fputs/fwrite/etc. to where it can be used.
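
The reason the diagnostic is so poor is that the errno from the failing fputs/fwrite is long gone by the time the error is reported, leaving a bare (0, 'Error'). A hypothetical sketch of the fix Jim describes, capturing errno at the point of failure so the final report can name the real cause (the exception class and function here are illustrative, not CPython's):

```python
import errno
import os

class WriteError(Exception):
    """Hypothetical error type carrying the errno captured at the
    point of the failing write, so the final report can name it."""
    def __init__(self, err):
        self.errno = err
        super().__init__('write failed: [Errno %d] %s'
                         % (err, os.strerror(err)))

def report_write_failure(saved_errno):
    # Re-raise with the errno saved at the failing write, instead of
    # the uninformative (0, 'Error') produced when errno is lost.
    raise WriteError(saved_errno)

try:
    report_write_failure(errno.ENOSPC)
except WriteError as e:
    assert e.errno == errno.ENOSPC
    assert str(e).startswith('write failed: [Errno 28]')
```

Python 3's io layer eventually took this approach: write failures raise OSError with the errno attached at the point of failure.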
msg23329 - (view) Author: Ben Hutchings (wom-work) Date: 2004-12-06 23:11
Logged In: YES 

The second sample script works for me (in that it raises the
expected exception) with or without my patch. If the error
status of the file has been set by some other operation
without Python noticing it then surely that's a bug in the
code that calls that other operation.
msg23330 - (view) Author: Jim Meyering (meyering) Date: 2004-12-06 23:27
Logged In: YES 

Even with python-2.4 (built fresh from CVS this morning),
I can still reproduce the problem on a Linux-2.6.9/ext3 system:

  /p/p/python-2.4/bin/python write-4096 > /dev/full && echo fail

The size that provokes the failure depends on the I/O block size
of your system, so you might need something as big as 131072
on some other type of system.
msg23331 - (view) Author: Ben Hutchings (wom-work) Date: 2004-12-07 00:33
Logged In: YES 

OK, I can reproduce the remaining problem if I substitute
1023 for 4095. The culprit seems to be the unchecked fputs()
in PyFile_WriteString, which is used for the spaces and
newlines generated by the print statement but not for the
objects. I think that's a separate bug.
msg23332 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2004-12-19 21:47
Logged In: YES 

Tim, what do you think?
msg23333 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2004-12-19 22:24
Logged In: YES 

Sorry, don't care enough to spend time on it (not a bug I've 
had, not one I expect to have, don't care if it never 
changes).  Suggest not using /dev/full as an output device. 
msg23334 - (view) Author: Ben Hutchings (wom-work) Date: 2004-12-19 23:38
Logged In: YES 

Tim, these bugs are quite difficult to trigger, but they can
hide any kind of file error and lose arbitrarily large
amounts of data.

Here, the following program will run indefinitely:

full = open('/dev/full', 'w')
while 1:
    print >>full, 'x' * 1023
    print >>full

It seems to be essential that both the character that fills
the file buffer (here it is 1024 bytes long) and the next
are generated implicitly by print - otherwise the write
error will be detected.
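
The buffer-boundary behaviour Ben describes can be reproduced portably with Python 3's io module, using a raw stream that models /dev/full (every write fails with ENOSPC). Writes that fit inside the buffer appear to succeed; the error only surfaces on the write that forces the buffer out to the device. The class below is a sketch, not a real device:

```python
import io

class FullDevice(io.RawIOBase):
    """Raw stream modeling /dev/full: writes fail with ENOSPC while full."""
    full = True

    def writable(self):
        return True

    def write(self, b):
        if self.full:
            raise OSError(28, 'No space left on device')
        return len(b)

device = FullDevice()
buf = io.BufferedWriter(device, buffer_size=1024)

# A write that fits inside the 1024-byte buffer appears to succeed...
buf.write(b'x' * 1023)

# ...the failure only surfaces on the write that pushes the buffer
# past its boundary, mirroring the block-size behaviour in the report.
try:
    buf.write(b'y' * 2)
    boundary_errno = None
except OSError as e:
    boundary_errno = e.errno
assert boundary_errno == 28

device.full = False   # let the buffer drain so close() is clean
buf.close()
```

This is why the failing size in the reports depends on the I/O block size of the system: nothing fails until a write actually crosses the buffer boundary and hits the device.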
msg23335 - (view) Author: Martin v. Löwis (loewis) * (Python committer) Date: 2005-01-19 22:28
Logged In: YES 

I don't think the patch is right. If somebody explicitly
invokes sys.stdout.close(), this should have the same effect
as invoking fclose(stdout) in C.

It currently doesn't, but with meyering's patch from
2004-12-02 10:20, it still doesn't, so the patch is incorrect.

It might be better to explicitly invoke fclose() if the file
object has no associated f_close function.
msg23336 - (view) Author: Jim Meyering (meyering) Date: 2005-01-20 09:24
Logged In: YES 

Hi Martin,

I would have done that, but sys.stdout.close is already
defined *not* to close stdout.  Here's the relevant FAQ:

1.4.7   Why doesn't closing sys.stdout (stdin, stderr)
really close it?
msg23337 - (view) Author: Martin v. Löwis (loewis) * (Python committer) Date: 2005-01-23 09:51
Logged In: YES 

Thanks for the report and the patch. Committed as

NEWS 1.1232
sysmodule.c 2.127
NEWS 1.1193.2.15
NEWS 1.831.4.164
msg23338 - (view) Author: Martin v. Löwis (loewis) * (Python committer) Date: 2005-01-27 18:59
Logged In: YES 

The patch turns out to be incorrect: fflush()ing stdin causes 
undefined behaviour, and indeed does cause problems on OS 
X (fflush fails, and python returns with a non-zero exit 
status). 
Reverting the close function for stdin: 
NEWS 1.831.4.166 
NEWS 1.1193.2.16 
NEWS 1.1234 
sysmodule.c 2.128 
Date User Action Args
2022-04-11 14:56:08adminsetgithub: 41233
2004-11-26 23:02:52dokocreate