
Classification
Title: Built-in module _io can lose data from buffered files in reference cycles
Type: behavior Stage: patch review
Components: IO Versions: Python 3.7, Python 3.6
Process
Status: open Resolution:
Dependencies: Superseder:
Assigned To: nascheme Nosy List: Arusekk, Volker Weißmann, jstasiak, martin.panter, methane, miss-islington, nascheme, nitishch, njs, pitrou, serhiy.storchaka, tim.peters, vstinner, xgdomingo
Priority: normal Keywords: patch

Created on 2013-04-27 07:33 by arigo, last changed 2022-04-11 14:57 by admin.

Files
File name Uploaded Description
y.py arigo, 2013-04-27 07:33
gcio.py pitrou, 2014-12-04 10:28
pyio.diff arigo, 2014-12-04 14:06 review
buffer_crash.py nascheme, 2017-09-05 18:22
buffer_not_flushed.py nascheme, 2017-12-14 19:27
buffer_register_flush.txt nascheme, 2017-12-19 18:42
Pull Requests
URL Status Linked
PR 1908 merged nascheme, 2017-06-01 17:56
PR 3337 merged nascheme, 2017-09-05 04:06
PR 3372 merged nascheme, 2017-09-06 00:25
PR 4826 merged pitrou, 2017-12-12 23:01
PR 4847 closed nascheme, 2017-12-13 19:22
PR 23135 merged Volker Weißmann, 2020-11-03 19:35
PR 23527 merged miss-islington, 2020-11-27 00:42
PR 23528 merged miss-islington, 2020-11-27 00:42
Messages (71)
msg187890 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2013-04-27 07:33
In Python 2, a buffered file opened for writing is flushed by the C library when the process exits.  In Python 3, the _pyio and _io modules don't do it reliably.  They rely on __del__ being called, which is not necessarily the case.  The attached example ends with the file 'foo' empty (tested with Python 3.3 and the current trunk).  The same example in Python 2 using the built-in open() cannot end with 'foo' empty.
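A minimal sketch of the kind of reproducer described above (hypothetical; the attached y.py is not shown here).  The buffered file is kept alive by a reference cycle, so whether it gets flushed depends on finalizer behaviour rather than on plain reference counting:

    class Holder:
        pass

    a = Holder()
    f = open('foo', 'w')
    f.write('bar')
    a.f = f          # the file is reachable from the cycle
    a.self = a       # reference cycle keeps everything alive
    # no f.close(): whether 'bar' ever reaches disk is not guaranteed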
msg187898 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2013-04-27 11:40
There are actually several issues here:

* collection of globals at shutdown is wonky: you should add an explicit "del a,f; gc.collect()" at the end of the script

* order of tp_clear calls, or another issue with TextIOWrapper: if you open the file in binary mode ("f = open('...', 'wb'); f.write(b'bar')"), the data gets flushed during the GC run
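A hedged illustration of the two variations mentioned above, an explicit del plus gc.collect() at the end of the script and opening the file in binary mode:

    import gc

    a = {}
    f = open('foo', 'wb')
    f.write(b'bar')
    a['f'] = f
    a['a'] = a       # reference cycle holding the file
    del a, f
    gc.collect()     # the cycle (including the file) is collected here, not at shutdown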
msg187903 - (view) Author: Charles-François Natali (neologix) * (Python committer) Date: 2013-04-27 12:41
Code relying on garbage collection/shutdown hook to flush files is borked:
- it's never been guaranteed by the Python specification (neither do Java or C#...)
- even with an implementation based on C stdio streams, streams won't get flushed in case of _exit()/abort()/asynchronous signal

Furthermore, I think it's inherently unsafe: in a multi-threaded program, flushing requires locking, and doing this at shutdown is a recipe for deadlock/arbitrary delay (imagine you have a daemon thread currently holding the lock, e.g. for stdout).

In short, that's an application bug to me.
msg187922 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2013-04-27 18:46
It used to be a consistently reliable behavior in Python 2 (and we made it so in PyPy too), provided of course that the process exits normally; but it no longer is in Python 3.  Well I can see the reasons for not flushing files, if it's clearly documented somewhere as a change of behavior from Python 2.

However I'm complaining about the current behavior: files are flushed *most of the time*.  That's a behavior that is clearly misleading, or so I would think.  I'm rather sure that there are many small scripts and large programs out there relying on automatic flushing, and then one day they'll hit a case where the file is not flushed and get the worst kind of error: a file unexpectedly truncated at 99% of its length, in a way that cannot be reproduced by small examples.

Feel free to close anyway as not-a-bug; I won't fight the Python 3 behavior, because Python 2 works as expected.
msg187966 - (view) Author: Charles-François Natali (neologix) * (Python committer) Date: 2013-04-28 07:33
> It used to be a consistently reliable behavior in Python 2 (and we made it so in PyPy too), provided of course that the process exits normally; but it no longer is in Python 3.  Well I can see the reasons for not flushing files, if it's clearly documented somewhere as a change of behavior from Python 2.

When you say Python 2, I assume you mean CPython 2, right?
Because - AFAICT - files got flushed only by accident, not by design.
For example, I suspect that Jython doesn't flush files on exit (since
the JVM doesn't), and I guess IronPython neither.

> However I'm complaining about the current behavior: files are flushed *most of the time*.

That's the problem with implementation-defined behavior ;-)
msg188075 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2013-04-29 16:46
"In Python 2, a buffered file opened for writing is flushed by the C library when the process exit."


"When you say Python 2, I assume you mean CPython 2, right?
Because - AFAICT - files got flushed only by accident, not by design."

It looks to be a feature of the standard C library, at least the GNU libc. Its libio library installs an exit handler flushing all open files. You can see it if you set a breakpoint on write() using gdb:


Breakpoint 5, write () at ../sysdeps/unix/syscall-template.S:82
82      ../sysdeps/unix/syscall-template.S: No such file or directory.

(gdb) where
#0  write () at ../sysdeps/unix/syscall-template.S:82
#1  0xb7e33ba5 in _IO_new_file_write (f=0x8254ef0, data=0x81f9798, n=3) at fileops.c:1289
#2  0xb7e33a84 in new_do_write (fp=0x8254ef0, data=0x81f9798 "bar\267\300\207\366\267\220\227\037\b\220\227\037\b\004d", to_do=3) at fileops.c:543
#3  0xb7e350fe in _IO_new_do_write (fp=0x8254ef0, data=0x81f9798 "bar\267\300\207\366\267\220\227\037\b\220\227\037\b\004d", to_do=3) at fileops.c:516
#4  0xb7e354b5 in _IO_new_file_overflow (f=0x8254ef0, ch=-1) at fileops.c:894
#5  0xb7e36c40 in _IO_flush_all_lockp (do_lock=0) at genops.c:849
#6  0xb7e36d8e in _IO_cleanup () at genops.c:1010
#7  0xb7df5f92 in __run_exit_handlers (status=0, listp=0xb7f683e4, run_list_atexit=true) at exit.c:91
#8  0xb7df5fdd in __GI_exit (status=0) at exit.c:100
#9  0xb7ddc4db in __libc_start_main (main=0x8058f90 <main>, argc=2, ubp_av=0xbffff1b4, init=0x8156960 <__libc_csu_init>, fini=0x81569d0 <__libc_csu_fini>, rtld_fini=0xb7fed280 <_dl_fini>, stack_end=0xbffff1ac)
    at libc-start.c:258
#10 0x08058fd1 in _start ()

Source of _IO_flush_all_lockp() in the GNU libc:

http://sourceware.org/git/?p=glibc.git;a=blob;f=libio/genops.c;h=390d8d24b5fb04f6a35b8fec27e700b9a9d641d4;hb=HEAD#l816

So the glibc maintains a list of open "FILE*" files: _IO_list_all, which is protected by the list_all_lock lock.
msg188077 - (view) Author: Charles-François Natali (neologix) * (Python committer) Date: 2013-04-29 17:13
> "When you say Python 2, I assume you mean CPython 2, right?
> Because - AFAICT - files got flushed only by accident, not by design."
>
> It looks to be a feature of the standard C library, at least the GNU libc. Its libio library installs an exit handler flushing all open files. You can see it if you set a breakpoint on write() using gdb:

Yes, it's guaranteed by POSIX/ANSI (see man exit).
I was referring to the fact that the automatic flushing of files upon
exit is a mere side effect of the implementation based atop stdio
streams in CPython 2. It's not guaranteed by any Python spec (and I
can't really think of any platform other than C that makes such a
guarantee).
msg188095 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2013-04-29 20:22
>> It looks to be a feature of the standard C library, at least the GNU libc.
> Yes, it's guaranteed by POSIX/ANSI (see man exit).

Hum, POSIX (2004) is not so strict:
"Whether open streams are flushed or closed, or temporary files are
removed is implementation-defined."
http://pubs.opengroup.org/onlinepubs/009695399/functions/exit.html

msg188130 - (view) Author: Charles-François Natali (neologix) * (Python committer) Date: 2013-04-30 06:44
> Hum, POSIX (2004) is not so strict:
> "Whether open streams are flushed or closed, or temporary files are
> removed is implementation-defined."
> http://pubs.opengroup.org/onlinepubs/009695399/functions/exit.html

You're looking at the wrong section:

"The exit() function shall then flush all open streams with unwritten
buffered data, close all open streams"

Then a couple lines below:

"The _Exit() [CX]   and _exit()  functions shall not call functions
registered with atexit() nor any registered signal handlers. Whether
open streams are flushed or closed, or temporary files are removed is
implementation-defined."

It's guaranteed for exit(), not _Exit()/_exit().
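In Python terms, the difference can be seen with os._exit(), which terminates the process immediately and skips atexit handlers, finalizers and buffer flushing (illustration only, not from the original thread):

    import os

    f = open('lost.txt', 'w')
    f.write('this may never reach disk')
    os._exit(0)   # unlike a normal exit, nothing is flushed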
msg188137 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2013-04-30 07:54
> It's guaranteed for exit(), not _Exit()/_exit().

Oops ok, thanks.

msg231376 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2014-11-19 12:10
In 3.4+ the example script always writes the string "bar" to the file "foo". Tested by running it in a loop hundreds of times. Cleanup at shutdown was enhanced in 3.4.
msg231386 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2014-11-19 15:36
Is anyone interested in working on the "maintain a list of open file objects" idea?

I consider that Python 3 does its best to flush data at exit, but it's not a good practice to rely on the destructors to flush data. I mean, there is no guarantee that files will be written in the correct order and that files will stay consistent. It's much safer to explicitly call the close() method.

By the way, Python 3 now emits ResourceWarning warnings when files are destroyed without being explicitly closed. I enhanced the code handling warnings during Python shutdown to show these warnings in more use cases, but it is still not perfect. I opened the issue #21788 as a reminder that the code can still be enhanced.

For the "maintain a list of objects" idea, I expect race conditions with threads, signals and processes (fork). We already have known issues with locks + fork. I expect even more issues if we maintain a list of open files.

I propose to close the issue as "wontfix". Python 3 does its best, but please call the close() method explicitly!

Maybe the undefined behaviour should be documented (with a warning?) in the buffered writer of the io module, and maybe also in the open() function?
msg231391 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2014-11-19 16:37
I'm +1 on closing this. Agree with Charles-François that it's never been guaranteed by the Python specification. The Pythonic way to work with files is to use the "with" statement or, if you need a long-lived file stream, carefully close the file in a "finally" block or in the __exit__ method of some class which is used as a context manager. Non-observance of this rule can be considered a bug (see issue22831).
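The idioms recommended above, for reference: a "with" statement for scoped use, or an explicit close() in a "finally" block for longer-lived streams:

    with open('foo', 'w') as f:
        f.write('bar')            # flushed and closed when the block exits

    f = open('log.txt', 'a')
    try:
        f.write('long-lived stream\n')
    finally:
        f.close()                 # runs even if an exception is raised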
msg231420 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2014-11-20 10:26
Victor: there is the GIL, you don't need any locking.
msg232103 - (view) Author: Nikolaus Rath (nikratio) * Date: 2014-12-04 05:09
Serhiy, I believe this still happens in Python 3.4, but it is harder to reproduce. I couldn't get Armin's script to produce the problem either, but I'm pretty sure that this is what causes e.g. https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=771452#60.
msg232109 - (view) Author: Charles-François Natali (neologix) * (Python committer) Date: 2014-12-04 07:27
> Serhiy, I believe this still happens in Python 3.4, but it is harder to reproduce. I couldn't get Armin's script to produce the problem either, but I'm pretty sure that this is what causes e.g. https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=771452#60.

Maybe, as it's been said several times, it's an application bug: files
should be closed explicitly (or better with a context manager of
course).
msg232113 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2014-12-04 09:57
I hate to repeat myself, but if the C standard says "files are flushed at exit"; if Python 2 follows this standard; and if Python 3 follows it most of the time but *not always*...  then it seems to me that something is very, very buggy in the worst possible way (rare data loss).
msg232114 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2014-12-04 10:14
Do you have an example that works in 3.4+?
msg232116 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2014-12-04 10:24
If I understood correctly, Python 3.4 tries harder to find cycles and call destructors at the end of the program, but that's not a full guarantee.  For example you can have a reference from a random C extension module.

While trying to come up with an example, I found one that I don't fully understand, but the point is that it shows how easy it is to defeat it:

    import sys
    f = open('foo.txt', 'w')
    f.write('abc')
    def func(*args):
        return func
    sys.settrace(func)
msg232117 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2014-12-04 10:26
(Ah, it's probably a reference from the trace function -> func_globals -> f).
msg232118 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2014-12-04 10:27
Here is an example.
msg232120 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2014-12-04 10:30
Note the example I posted doesn't involve the shutdown sequence. It calls gc.collect() explicitly.
msg232121 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2014-12-04 10:39
To add to the confusion: Antoine's example produces an empty file on the current trunk cd282dd0cfe8.  When I first tried it on a slightly older trunk (157 changesets ago), it correctly emitted a file with "barXXX ", but only if "gc.collect()" was present.  Without "gc.collect()" it didn't.  This seems at odds with the explanation that "gc.collect()" should be what occurs at shut-down now.
msg232122 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2014-12-04 10:46
The problem is that the order of tp_finalize calls is arbitrary when there is a reference cycle (same thing, of course, with tp_clear). So depending on the exact layout of the garbage list, the TextIOWrapper could be collected before the BufferedWriter and the FileIO (good), or after (bad).

I don't see an easy way to solve this. Either _io should provide hints to the GC (which kind of hints?), or the tp_finalize should be customized to somehow call a dependent wrapper's tp_finalize (how? by keeping a weakref?).
msg232123 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2014-12-04 10:50
Maybe accepting the fact that relying on finalizers is a bad idea here?
msg232124 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2014-12-04 10:51
You mean encourage people to explicitly use "with" or call close()? Yes, that's the simplest answer. It's why we added ResourceWarning.
msg232126 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2014-12-04 11:05
> The problem is that the order of tp_finalize calls is arbitrary when there is a reference cycle

If we want to guarantee that all files are properly flushed at exit, the best option is to maintain a list of open files.

I'm not interested in implementing that, it sounds too complex. For example, open("text.txt", "w") creates 3 objects: FileIO, BufferedWriter, TextIOWrapper. You have to register these 3 objects and flush them in the right order. Python is dynamic, you are free to modify objects, for example using the detach() method or by modifying the buffer attribute of the TextIOWrapper.

How do you decide which object should be flushed? In which order? We have to take care of signals, threads and forks, stay portable, etc.
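For reference, the three objects created by open() in text mode, as described above:

    f = open('text.txt', 'w')
    print(type(f).__name__)             # TextIOWrapper
    print(type(f.buffer).__name__)      # BufferedWriter
    print(type(f.buffer.raw).__name__)  # FileIO
    f.close()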
msg232127 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2014-12-04 11:06
On 04/12/2014 12:05, STINNER Victor wrote:
>> The problem is that the order of tp_finalize calls is arbitrary when
>> there is a reference cycle
>
> If we want to guarantee that all files are properly flushed at exit,
> the best option is to maintain a list of open files.

I am not talking about files flushing at exit.
msg232129 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2014-12-04 11:27
> How do you decide which object should be flushed? In which order? We have to
> take care of signals, threads and forks, stay portable, etc.

The order doesn't matter. If you call flush() on the TextIOWrapper, the flushes of
the buffered writer and the raw file will be called automatically. And flush() can
be called multiple times.
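Illustration of that point: flushing the TextIOWrapper pushes the data down through the BufferedWriter to the FileIO, and repeated calls are harmless:

    f = open('text.txt', 'w')
    f.write('data')
    f.flush()   # also flushes f.buffer and the underlying raw file
    f.flush()   # idempotent: calling it again does nothing harmful
    f.close()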
msg232131 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2014-12-04 12:53
Antoine: sorry if I wasn't clear enough.  Obviously you want to encourage people to close their files, but I think personally that it is very bad for the implementation to *most of the time* work anyway and only rarely fail to flush the files.  So, speaking only about the implementation, it is (imho) a bad idea to rely on finalizers to flush the files, and something else should be done.

Victor: it does not sound complicated to me to keep the BufferedWriter objects in a doubly-linked list.  You're overthinking the issue: there are no multithreading issues (we have a GIL) and you don't have to keep track of all 3 objects created by "open".
msg232132 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2014-12-04 12:54
On 04/12/2014 13:53, Armin Rigo wrote:
> So, speaking only about the implementation, it is (imho) a bad idea
> to rely on finalizers to flush the files, and something else should
> be done.

But what else?
msg232133 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2014-12-04 13:08
Antoine: I'm trying to explain what in the three lines that follow the parts you quoted.  I already tried to explain it a few times above.  Now I feel that I'm not going anywhere, so I'll quote back myself from 2013-04-27: "Feel free to close anyway as not-a-bug; I won't fight the Python 3 behavior, because Python 2 works as expected."
msg232134 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2014-12-04 13:10
Armin: I don't see how linked lists would help.
msg232137 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2014-12-04 13:39
Here is how it looks to me. There should be a global collection of weak references
(a WeakSet?). Every instance of TextIOWrapper, BufferedWriter and BufferedRandom adds
itself to this collection on creation and removes itself on close. A function
registered with atexit calls flush() for all streams in the collection.

This solution doesn't require changes to the finalization machinery, only to the io
module.

I'm not particularly interested in implementing that because I'm not convinced
that Python should provide such guarantees. But the implementation shouldn't
be too complex.

A more general solution could be to introduce a special method (__predel__) which
would be called before breaking reference loops (or just before __del__ if
there is no loop). In the case of output streams it would call flush() (but not
close()!).
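A minimal sketch of the WeakSet-plus-atexit idea described above (hypothetical helper names, not the actual CPython implementation): writers register themselves on creation, unregister on close(), and an exit hook flushes whatever is left:

    import atexit
    import weakref

    _open_writers = weakref.WeakSet()

    def register_writer(stream):
        _open_writers.add(stream)

    def unregister_writer(stream):
        _open_writers.discard(stream)

    @atexit.register
    def _flush_open_writers():
        for stream in list(_open_writers):
            try:
                stream.flush()   # flush only; closing is left to finalizers
            except Exception:
                pass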
msg232139 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2014-12-04 14:06
Here is a proof-of-concept.  It changes both _pyio.py and bufferedio.c and was tested with the examples here.  (See what I meant with "linked lists".)  The change in textio.c is still missing, but should be very similar to bufferedio.c.  This is similar to the implementation in PyPy.
msg232140 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2014-12-04 15:41
By the way, I fixed various issues to ensure that ResourceWarning warnings are
displayed at Python exit. There are still corner cases where the warnings are not
emitted. For example, when a daemon thread is used, or when the warning comes very
late during Python finalization.

Recent example of tricky issue related to this warning: #22898. IMO
it's a bug, it must be fixed and it should be possible to emit the
warning.
msg232146 - (view) Author: Nikolaus Rath (nikratio) * Date: 2014-12-04 18:23
This will probably be too radical, but I think it should at least be mentioned as a possible option.
 
We could just not attempt to implicitly flush buffers in the finalizer at all. This means scripts relying on this will break, but in contrast to the current behavior they will break consistently so it's easy to find and fix the problem.

I think this is still preferable to any solution that does not guarantee flushing in 100% of the cases.
msg294868 - (view) Author: Neil Schemenauer (nascheme) * (Python committer) Date: 2017-05-31 20:08
Well, I just spent a couple of hours debugging a problem caused by this issue.  You could argue that I should be calling close() on all of my file-like objects but I agree with Armin that the current "most of the time it works" behaviour is quite poor.

In my case, the issue is exactly what Antoine Pitrou suggests: if the FileIO object gets finalized before the BufferedIO object wrapping it then the buffered data gets lost. That depends on the order that the GC calls finalizers.

Armin's suggestion of keeping a list of open buffered files and flushing them before exiting seems like a simple and robust solution.  Having a warning for unclosed files is fine but in order to make porting Python 2 code as painless as possible, matching the safer behavior of Python 2 would be worth the extra bit of flush logic.
msg294910 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-06-01 07:27
Neil Schemenauer: "Well, I just spent a couple of hours debugging a problem caused by this issue."

Did you get any ResourceWarning? You need to run python3 with -Wd to see them.

By the way, I enhanced ResourceWarning in Python 3.6: if you enable tracemalloc, these warnings now log the traceback where the leaked resource was created ;-)
https://docs.python.org/dev/whatsnew/3.6.html#warnings
msg294956 - (view) Author: Neil Schemenauer (nascheme) * (Python committer) Date: 2017-06-01 16:25
"Did you get any ResourceWarning?"

I already knew that explicitly closing the file would fix the issue.  However, think of the millions of lines of Python 2 that hopefully will be converted to Python 3.  There will be many ResourceWarning warnings.  It is not reasonable to think that everyone is just going to fix them when they see them.  Applications that used to be reliable will now randomly lose data.

What I spent hours debugging is trying to figure out why, when the open file is collected at interpreter exit, the buffered data is not flushed.  I knew the object was not part of an uncollectable garbage cycle.  I initially suspected there could be some bug in the _io module or maybe even a kernel bug.

It turns out that in Python 2 the buffer and the open file are a single object, and so when the finalizer gets called, the data is always correctly written out.  In Python 3, the buffer and the underlying file object are separate and they can get finalized in a different order depending on essentially random factors.

Fixing this particular issue is not difficult as Armin's code demonstrates.  I don't think blaming the application programmer is the solution.  Obviously we keep the warnings and still tell programmers to close their files or use context managers.
msg301295 - (view) Author: Neil Schemenauer (nascheme) * (Python committer) Date: 2017-09-05 03:18
New changeset e38d12ed34870c140016bef1e0ff10c8c3d3f213 by Neil Schemenauer in branch 'master':
bpo-17852: Maintain a list of BufferedWriter objects.  Flush them on exit. (#1908)
https://github.com/python/cpython/commit/e38d12ed34870c140016bef1e0ff10c8c3d3f213
msg301297 - (view) Author: Neil Schemenauer (nascheme) * (Python committer) Date: 2017-09-05 05:13
New changeset db564238db440d4a2d8eb9d60ffb94ef291f6d30 by Neil Schemenauer in branch 'master':
Revert "bpo-17852: Maintain a list of BufferedWriter objects.  Flush them on exit. (#1908)" (#3337)
https://github.com/python/cpython/commit/db564238db440d4a2d8eb9d60ffb94ef291f6d30
msg301329 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-09-05 16:33
> New changeset e38d12ed34870c140016bef1e0ff10c8c3d3f213 by Neil Schemenauer in branch 'master':
> bpo-17852: Maintain a list of BufferedWriter objects.  Flush them on exit. (#1908)

This change introduced a regression:



======================================================================
FAIL: test_4_daemon_threads (test.test_threading.ThreadJoinOnShutdown)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/db3l/buildarea/3.x.bolen-tiger/build/Lib/test/test_threading.py", line 781, in test_4_daemon_threads
    self.assertFalse(err)
AssertionError: b'pthread_cond_destroy: Resource busy\npthread_mutex_destroy: Resource busy' is not false

http://buildbot.python.org/all/builders/x86%20Tiger%203.x/builds/1137

> New changeset db564238db440d4a2d8eb9d60ffb94ef291f6d30 by Neil Schemenauer in branch 'master':
> Revert "bpo-17852: Maintain a list of BufferedWriter objects.  Flush them on exit. (#1908)" (#3337)

This revert fixed the regression.

Did you revert the change because of the failure?
msg301331 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-09-05 16:37
> FAIL: test_4_daemon_threads (test.test_threading.ThreadJoinOnShutdown)

Oh, it also failed on:

http://buildbot.python.org/all/builders/x86%20Gentoo%20Installed%20with%20X%203.x/builds/957/steps/test/logs/stdio

and

http://buildbot.python.org/all/builders/x86%20Gentoo%20Non-Debug%20with%20X%203.x/builds/942/steps/test/logs/stdio
msg301353 - (view) Author: Neil Schemenauer (nascheme) * (Python committer) Date: 2017-09-05 18:22
I reverted because of the crash in test_threading.  I'm pretty sure there is a bug with the locking of bufferedio.c, related to threads and flush.  Here is the stacktrace I get (my patch applied, I'm trying to write a Python test that triggers the SEGV without my flush patch).

Attached is the script that triggers the crash.


Thread 3 received signal SIGSEGV, Segmentation fault.
0x00000001000c06d4 in PyObject_GetAttr (
    v=<unknown at remote 0xdbdbdbdbdbdbdbdb>, name='seek')
    at ../Objects/object.c:882
882         PyTypeObject *tp = Py_TYPE(v);
(gdb) bt
#0  0x00000001000c06d4 in PyObject_GetAttr (
    v=<unknown at remote 0xdbdbdbdbdbdbdbdb>, name='seek')
    at ../Objects/object.c:882
#1  0x00000001000440e9 in PyObject_CallMethodObjArgs (
    callable=<unknown at remote 0xdbdbdbdbdbdbdbdb>, name='seek')
    at ../Objects/call.c:1212
#2  0x00000001003005a2 in _buffered_raw_seek (self=0x1053a39b8, target=0,
    whence=1) at ../Modules/_io/bufferedio.c:742
#3  0x000000010030015e in buffered_flush_and_rewind_unlocked (self=0x1053a39b8)
    at ../Modules/_io/bufferedio.c:851
#4  0x00000001002fed12 in buffered_flush (self=0x1053a39b8, args=0x0)
    at ../Modules/_io/bufferedio.c:869
#5  0x00000001002feb4d in _PyIO_atexit_flush ()
    at ../Modules/_io/bufferedio.c:1863
[...]
msg301361 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2017-09-05 19:13
Just apply the following patch to the original PR and it should work fine:

diff --git a/Modules/_io/bufferedio.c b/Modules/_io/bufferedio.c
index 50c87c1..2ba98f2 100644
--- a/Modules/_io/bufferedio.c
+++ b/Modules/_io/bufferedio.c
@@ -409,12 +409,12 @@ static void
 buffered_dealloc(buffered *self)
 {
     self->finalizing = 1;
+    if (self->next != NULL)
+        remove_from_linked_list(self);
     if (_PyIOBase_finalize((PyObject *) self) < 0)
         return;
     _PyObject_GC_UNTRACK(self);
     self->ok = 0;
-    if (self->next != NULL)
-        remove_from_linked_list(self);
     if (self->weakreflist != NULL)
         PyObject_ClearWeakRefs((PyObject *)self);
     Py_CLEAR(self->raw);
@@ -1860,8 +1860,12 @@ void _PyIO_atexit_flush(void)
     while (buffer_list_end.next != &buffer_list_end) {
         buffered *buf = buffer_list_end.next;
         remove_from_linked_list(buf);
-        buffered_flush(buf, NULL);
-        PyErr_Clear();
+        if (buf->ok && !buf->finalizing) {
+            Py_INCREF(buf);
+            buffered_flush(buf, NULL);
+            Py_DECREF(buf);
+            PyErr_Clear();
+        }
     }
 }
msg301363 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2017-09-05 19:22
To elaborate a bit on the patch:
- it is pointless to call flush() if the buffered object is in a bad state (self->ok == 0) or if it has already started finalizing
- you need to own the reference, since flush() can release the GIL and, if the reference is borrowed, the refcount can fall to 0 in another thread and the whole object can be deallocated under your feet
msg302758 - (view) Author: Neil Schemenauer (nascheme) * (Python committer) Date: 2017-09-22 17:17
New changeset 0a1ff24acfc15d8c7f2dc41000a6f3d9a31e7480 by Neil Schemenauer in branch 'master':
bpo-17852: Maintain a list of BufferedWriter objects.  Flush them on exit. (#3372)
https://github.com/python/cpython/commit/0a1ff24acfc15d8c7f2dc41000a6f3d9a31e7480
msg305653 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-11-06 16:38
Neil pushed the commit 0a1ff24acfc15d8c7f2dc41000a6f3d9a31e7480. What is the status of this issue? Can we now close it?
msg305665 - (view) Author: Neil Schemenauer (nascheme) * (Python committer) Date: 2017-11-06 17:41
Yes, my bad.  I thought that accepting the pull would close the bug.
msg308170 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2017-12-12 23:05
I just discovered that the fix is incorrect. See PR #4826 for reversion and a quick explanation.
msg308177 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-12-13 00:39
New changeset 317def9fdb29893df1ab380d396fcdd2eafe0588 by Victor Stinner (Antoine Pitrou) in branch 'master':
bpo-17852: Revert incorrect fix based on misunderstanding of _Py_PyAtExit() semantics (#4826)
https://github.com/python/cpython/commit/317def9fdb29893df1ab380d396fcdd2eafe0588
msg308236 - (view) Author: Neil Schemenauer (nascheme) * (Python committer) Date: 2017-12-13 19:30
I created a new PR which uses the atexit module instead of using _Py_PyAtExit.  I think registering in io.py is okay.

I see that atexit is now implemented in C.  Rather than registering in io.py, we could create a C API to register callbacks (i.e. atexit_register).  That would perhaps be a bit cleaner.
msg308330 - (view) Author: Neil Schemenauer (nascheme) * (Python committer) Date: 2017-12-14 19:27
Attached is a script that triggers the non-flushing behaviour for me.  I don't think it is reliable since it depends on the order in which the FileIO and BufferedWriter are finalized when the gc finds them in a reference cycle.

BTW, it is arguable that the root cause of this bug is that we no longer do topological ordering when calling finalizers.  Originally, the cycle GC would refuse to call __del__ methods because once objects are part of a cycle, there is no defined topological ordering.  So, we linked that garbage to gc.garbage rather than calling __del__ and having potentially undefined results.  Now the GC has been changed to call __del__ anyhow.  I haven't studied how this has been changed, but this non-flushing bug is a result.  Having the buffer added to gc.garbage would also result in the data not being flushed, but arguably it would be more understandable what's going on.  I'm not arguing that we should go back to that, just that the current behaviour can be subtle and confusing.
msg308332 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2017-12-14 19:58
The reason Python 2 did well here is simply that Python 2 had a single Python object (the file object) for each actual file.  Python 3 has several of them (the raw IO object, the buffered IO object, possibly the text IO wrapper), and so suddenly the finalization order matters.
msg308412 - (view) Author: Neil Schemenauer (nascheme) * (Python committer) Date: 2017-12-15 17:22
In the process of trying to write a test for this, I now realize that PR 4847 is not really a fix.  If the underlying file gets closed during an earlier gc.collect() and not during shutdown, the extra flushing step is not going to help.  So, using atexit is not really a fix.  Maybe there is something else we can do, need to think about it more.
msg308417 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2017-12-15 18:16
I'm not that familiar with the GC, but if it were possible to call destructors in reverse chronological order, that could help with this issue.
msg308419 - (view) Author: Neil Schemenauer (nascheme) * (Python committer) Date: 2017-12-15 18:36
Using reverse chronological order would probably work in 99% of cases, but then you would have that rare case where it didn't work.  So, I'm not too keen on that approach.

I think this is a classic problem with finalization and GC; I probably should do some actual reading of the literature.  One idea that occurs to me is that the underlying file object could keep a weakref to the higher-level (e.g. buffer) objects.  Then, if the underlying file gets closed, it could make sure to call flush on the higher-level objects first.  The weakref would ensure that a ref cycle is not created.
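A sketch of that weakref idea (hypothetical classes; as the following messages explain, it does not survive cyclic GC because the collector clears weakrefs before calling __del__):

    import weakref

    class RawFile:
        def __init__(self, fd):
            self.fd = fd
            self._dependents = []          # weakrefs to wrapping buffers

        def register_dependent(self, buf):
            self._dependents.append(weakref.ref(buf))

        def close(self):
            for ref in self._dependents:
                buf = ref()
                if buf is not None:
                    buf.flush()            # flush higher-level buffers first
            # ... then actually close self.fd ...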
msg308424 - (view) Author: Nikolaus Rath (nikratio) * Date: 2017-12-15 20:16
Given the apparent difficulties getting this right, how about not attempting to implicitly flush buffers in the finalizer at all? This means scripts relying on this will break, but in contrast to the current behavior they will break consistently so it's easy to find and fix the problem.
msg308666 - (view) Author: Neil Schemenauer (nascheme) * (Python committer) Date: 2017-12-19 18:42
Welp, another day another attempt.  As mentioned in the PR 4847, atexit is not the answer.  If the raw/buffered file pair are part of a reference cycle and the GC cleans it before atexit runs, then the buffered data can get lost.

I attempted to implement my weakref idea (i.e. raw file keeps a weakref to the buffered file, calls flush before the raw file gets closed).  That doesn't work either because the GC clears the weakref before calling __del__.

The attached patch "buffer_register_flush.txt" does seem to work.  The downside is that it creates a reference cycle between the raw and buffered file objects.  Perhaps that is not a problem since unless you call close() on the raw file, you will be leaking resources anyhow.  In the patch, calling close() removes the reference cycle.

I still feel like this is worth fixing, as ugly as the implementation is.
msg308668 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2017-12-19 18:51
I think it would be quite disruptive to create a reference cycle each time open() is called.  It may also break user scripts.
msg308681 - (view) Author: Neil Schemenauer (nascheme) * (Python committer) Date: 2017-12-19 20:42
Yeah, I think you are correct.  Currently files not part of reference cycles get properly flushed and closed by the reference counter.  Implementing my "buffer_register_flush" patch would cause files to be closed only by the cyclic garbage collector (if not explicitly closed).  That would mean a script that opens a large number of files could run out of file descriptors.  Basically, we lose one of the main advantages of reference counting.

Probably the GC should keep track of how many files are open and call collect() when a threshold is reached.  Still, forcing every file to be collected only by the cyclic GC seems too ugly of a solution to this issue.
msg308787 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2017-12-20 19:30
Here's another idea: have a new type field (a tp_something) that exposes the GC priority for this type.  It could be an integer between -8 and 7, for example (0 being the default).  Then tp_finalize would be called in sorted priority order.

The tricky part of course is to make this acceptable performance-wise (i.e. avoid walking the collectable list 16 times, once for each priority level).  For example by creating bucket lists for each priority level (only for objects with a non-NULL tp_finalize).
msg310602 - (view) Author: (Arusekk) * Date: 2018-01-24 15:38
Since the issue seems to have been active lately, may I suggest my view on solving it.

One solution that comes to my mind is to keep a weak reference to the file, and to register an atexit function per file (and to unregister it when the file is closed). Example concept-illustrating python code for the _pyio module:

import atexit, operator, weakref

# ...

class TextIOWrapper(TextIOBase):
    def __init__(self):
        # ...
        self._weakclose = operator.methodcaller('close')
        atexit.register(self._weakclose, weakref.proxy(self))

    # ...

    def close(self):
        atexit.unregister(self._weakclose)
        # and now actually close the file

There is a possibility of a negative impact arising from the use of operator.methodcaller, because it may return something cached in future versions of python. There is also the issue of unregistering an atexit function during atexit processing. But the general idea seems to be simple and worth considering.
msg310648 - (view) Author: Neil Schemenauer (nascheme) * (Python committer) Date: 2018-01-25 00:13
Using atexit is not the solution because the data can be lost even while the program is running, not just at interpreter shutdown.  The problem occurs if the buffered file object and the underlying file object are both part of a reference cycle.  Then, when the cycle is destroyed by the GC module, if the file __del__ is called before the buffer __del__, data is lost.

So far, the best solution I've come up with is as follows:  split the buffered file object into two objects, a wrapper object that will be returned from open() and an underlying object that holds the actual buffer.  Make the underlying file object keep references to all the buffers that are using the file.  When the file __del__ gets called, first flush all of the buffers and then close the file.  Splitting the buffered file object is required so that we don't create reference cycles.
msg310659 - (view) Author: Nathaniel Smith (njs) * (Python committer) Date: 2018-01-25 07:19
> I attempted to implement my weakref idea (i.e. raw file keeps a weakref to the buffered file, calls flush before the raw file gets closed).  That doesn't work either because the GC clears the weakref before calling __del__.

This may be a bit of a left-field or too-big-a-hammer suggestion, but as far as I can tell from this thread [1] it probably is technically possible to modify the GC to clear weakrefs after calling __del__. Nick wasn't a fan (he likes the invariant that weakrefs can't trigger a resurrection), but if all else fails it might be worth re-raising.

You could add a secondary reference count on FileIO recording how many BufferedIO wrappers there are around it; then its __del__ would skip calling close() if there are still BufferedIO wrappers, and BufferedIO.__del__ would decrement the reference count and close the underlying file if it hits zero and FileIO.__del__ had already been called.

[1] https://mail.python.org/pipermail/python-dev/2016-October/146747.html
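A rough Python-level analogue of the secondary-reference-count idea above (hypothetical names; the actual proposal is a C-level field on FileIO):

    class RawFile:
        def __init__(self, fd):
            self.fd = fd
            self.wrapper_count = 0    # live buffered wrappers around this file
            self.del_called = False

        def __del__(self):
            self.del_called = True
            if self.wrapper_count == 0:
                self._real_close()

        def _real_close(self):
            pass                      # ... actually close self.fd ...

    class BufferedWrapper:
        def __init__(self, raw):
            self.raw = raw
            raw.wrapper_count += 1

        def __del__(self):
            self.flush()              # buffered data goes out before any close
            self.raw.wrapper_count -= 1
            if self.raw.wrapper_count == 0 and self.raw.del_called:
                self.raw._real_close()

        def flush(self):
            pass                      # ... write buffered bytes to self.raw ...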
msg355064 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-10-21 12:21
I marked bpo-38548 as duplicate of this issue. Copy of msg355062:

Consider the following program:

f = open("out.txt", "w")
f.write("abc\n")
exit(0)

Please note the absence of f.close().
The documentation 
https://docs.python.org/3/tutorial/inputoutput.html#reading-and-writing-files
says that you should use f.close() or a "with" statement, but it is not clear whether the program above without f.close() is guaranteed to write. The tutorial says:
"If you don’t explicitly close a file, Python’s garbage collector will eventually destroy the object and close the open file for you, but the file may stay open for a while. Another risk is that different Python implementations will do this clean-up at different times."
For me this sounds like even without f.close() the file is guaranteed to be written. If it is not guaranteed to be written, you should fix the documentation; if it is guaranteed to be written, then I will open another issue because the following program does not write into out.txt on my machine:

from sympy.core import AtomicExpr
class MyWeirdClass(AtomicExpr):
	def __init__(self):
		pass
f = open("out.txt", "w")
f.write("abc\n")
exit(0)

Note: sys.version is: "3.7.3 (default, Oct  7 2019, 12:56:13) \n[GCC 8.3.0]"
msg381924 - (view) Author: Inada Naoki (methane) * (Python committer) Date: 2020-11-27 00:42
New changeset c8aaf71dde4888864c0c351e2f935f87652c3d54 by Volker-Weissmann in branch 'master':
 bpo-17852: Doc: Fix the tutorial about closing files (GH-23135)
https://github.com/python/cpython/commit/c8aaf71dde4888864c0c351e2f935f87652c3d54
msg381925 - (view) Author: miss-islington (miss-islington) Date: 2020-11-27 00:52
New changeset 01fcde89d7d56321078be1739e759fece61d0a2b by Miss Islington (bot) in branch '3.8':
bpo-17852: Doc: Fix the tutorial about closing files (GH-23135)
https://github.com/python/cpython/commit/01fcde89d7d56321078be1739e759fece61d0a2b
msg381926 - (view) Author: Volker Weißmann (Volker Weißmann) * Date: 2020-11-27 00:57
The PR below fixed it:

https://github.com/python/cpython/pull/23135
msg382232 - (view) Author: miss-islington (miss-islington) Date: 2020-12-01 10:53
New changeset 4a44f53aa85c9400471ab084bc8fd8169065fc41 by Miss Islington (bot) in branch '3.9':
[3.9] bpo-17852: Doc: Fix the tutorial about closing files (GH-23135) (GH-23527)
https://github.com/python/cpython/commit/4a44f53aa85c9400471ab084bc8fd8169065fc41
History
Date User Action Args
2022-04-11 14:57:44  admin  set  github: 62052
2020-12-01 10:53:50  miss-islington  set  messages: + msg382232
2020-11-27 00:57:11  Volker Weißmann  set  messages: + msg381926
2020-11-27 00:52:02  miss-islington  set  messages: + msg381925
2020-11-27 00:42:23  miss-islington  set  pull_requests: + pull_request22410
2020-11-27 00:42:10  miss-islington  set  nosy: + miss-islington; pull_requests: + pull_request22409
2020-11-27 00:42:01  methane  set  nosy: + methane; messages: + msg381924
2020-11-14 11:33:09  nikratio  set  nosy: - nikratio
2020-11-14 11:08:00  neologix  set  nosy: - neologix
2020-11-12 23:46:45  jstasiak  set  nosy: + jstasiak
2020-11-03 19:35:52  Volker Weißmann  set  nosy: + Volker Weißmann; pull_requests: + pull_request22049; stage: needs patch -> patch review
2019-10-21 12:21:13  vstinner  set  messages: + msg355064
2019-10-21 12:20:44  vstinner  link  issue38548 superseder
2018-01-27 20:03:47  ppperry  set  title: Built-in module _io can lose data from buffered files at exit -> Built-in module _io can lose data from buffered files in reference cycles
2018-01-25 07:19:14  njs  set  nosy: + njs; messages: + msg310659
2018-01-25 00:13:20  nascheme  set  messages: + msg310648
2018-01-24 15:38:06  Arusekk  set  nosy: + Arusekk; messages: + msg310602
2017-12-20 19:30:57  pitrou  set  messages: + msg308787; stage: patch review -> needs patch
2017-12-19 20:42:52  nascheme  set  messages: + msg308681
2017-12-19 18:51:12  pitrou  set  messages: + msg308668
2017-12-19 18:42:23  nascheme  set  files: + buffer_register_flush.txt; assignee: nascheme; messages: + msg308666
2017-12-15 20:16:28  nikratio  set  messages: + msg308424
2017-12-15 19:01:19  nitishch  set  nosy: + nitishch
2017-12-15 18:36:23  nascheme  set  messages: + msg308419
2017-12-15 18:16:07  serhiy.storchaka  set  messages: + msg308417
2017-12-15 17:22:11  nascheme  set  messages: + msg308412
2017-12-14 22:19:49  arigo  set  nosy: - arigo
2017-12-14 19:58:24  pitrou  set  messages: + msg308332
2017-12-14 19:27:01  nascheme  set  files: + buffer_not_flushed.py; messages: + msg308330
2017-12-13 19:30:25  nascheme  set  messages: + msg308236
2017-12-13 19:22:40  nascheme  set  stage: needs patch -> patch review; pull_requests: + pull_request4735
2017-12-13 00:39:30  vstinner  set  messages: + msg308177
2017-12-12 23:05:06  pitrou  set  status: closed -> open; resolution: fixed ->; messages: + msg308170; stage: resolved -> needs patch
2017-12-12 23:01:46  pitrou  set  pull_requests: + pull_request4718
2017-11-06 17:41:04  nascheme  set  status: open -> closed; resolution: fixed; messages: + msg305665; stage: backport needed -> resolved
2017-11-06 16:38:23  vstinner  set  messages: + msg305653
2017-11-04 09:14:43  berker.peksag  set  stage: needs patch -> backport needed
2017-09-22 17:17:32  nascheme  set  messages: + msg302758
2017-09-06 00:25:25  nascheme  set  pull_requests: + pull_request3383
2017-09-06 00:25:23  nascheme  set  pull_requests: + pull_request3382
2017-09-06 00:10:05  rhettinger  set  title: Built-in module _io can loose data from buffered files at exit -> Built-in module _io can lose data from buffered files at exit
2017-09-05 19:22:54  pitrou  set  messages: + msg301363
2017-09-05 19:13:20  pitrou  set  nosy: + pitrou; messages: + msg301361; versions: + Python 3.6, Python 3.7, - Python 3.4, Python 3.5
2017-09-05 18:22:15  nascheme  set  files: + buffer_crash.py; messages: + msg301353
2017-09-05 16:37:10  vstinner  set  messages: + msg301331
2017-09-05 16:33:54  vstinner  set  messages: + msg301329
2017-09-05 05:13:19  nascheme  set  messages: + msg301297
2017-09-05 04:06:54  nascheme  set  pull_requests: + pull_request3352
2017-09-05 03:18:43  nascheme  set  messages: + msg301295
2017-06-01 20:01:06  xgdomingo  set  nosy: + xgdomingo
2017-06-01 17:56:42  nascheme  set  pull_requests: + pull_request1987
2017-06-01 16:25:36  nascheme  set  messages: + msg294956
2017-06-01 07:27:10  vstinner  set  messages: + msg294910
2017-05-31 20:08:39  nascheme  set  nosy: + nascheme; messages: + msg294868
2015-02-13 22:58:43  martin.panter  set  nosy: + martin.panter
2014-12-04 18:23:49  nikratio  set  messages: + msg232146
2014-12-04 15:41:13  vstinner  set  messages: + msg232140
2014-12-04 14:06:13  arigo  set  files: + pyio.diff; keywords: + patch
2014-12-04 14:06:01  arigo  set  messages: + msg232139
2014-12-04 13:50:35  pitrou  set  nosy: - pitrou
2014-12-04 13:39:37  serhiy.storchaka  set  messages: + msg232137
2014-12-04 13:10:29  pitrou  set  messages: + msg232134
2014-12-04 13:08:17  arigo  set  messages: + msg232133
2014-12-04 12:54:18  pitrou  set  messages: + msg232132
2014-12-04 12:53:14  arigo  set  messages: + msg232131
2014-12-04 11:27:38  serhiy.storchaka  set  messages: + msg232129
2014-12-04 11:06:59  pitrou  set  messages: + msg232127
2014-12-04 11:05:04  vstinner  set  messages: + msg232126
2014-12-04 10:51:28  pitrou  set  messages: + msg232124
2014-12-04 10:50:24  arigo  set  messages: + msg232123
2014-12-04 10:46:20  pitrou  set  nosy: + tim.peters; messages: + msg232122
2014-12-04 10:39:38  arigo  set  messages: + msg232121
2014-12-04 10:30:59  pitrou  set  messages: + msg232120
2014-12-04 10:28:05  pitrou  set  files: + gcio.py
2014-12-04 10:27:46  pitrou  set  messages: + msg232118; stage: test needed -> needs patch
2014-12-04 10:26:52  arigo  set  messages: + msg232117
2014-12-04 10:24:42  arigo  set  messages: + msg232116
2014-12-04 10:14:03  serhiy.storchaka  set  stage: test needed; messages: + msg232114; versions: + Python 3.5, - Python 3.3
2014-12-04 09:57:57  arigo  set  messages: + msg232113
2014-12-04 07:27:05  neologix  set  messages: + msg232109
2014-12-04 05:09:52  nikratio  set  messages: + msg232103
2014-12-04 05:04:05  nikratio  set  nosy: + nikratio
2014-11-20 10:26:19  arigo  set  messages: + msg231420
2014-11-19 16:38:00  serhiy.storchaka  set  messages: + msg231391
2014-11-19 15:36:19  vstinner  set  messages: + msg231386
2014-11-19 12:10:26  serhiy.storchaka  set  nosy: + serhiy.storchaka; messages: + msg231376
2013-04-30 07:54:49  vstinner  set  messages: + msg188137
2013-04-30 06:44:36  neologix  set  messages: + msg188130
2013-04-29 20:22:25  vstinner  set  messages: + msg188095
2013-04-29 17:13:52  neologix  set  messages: + msg188077
2013-04-29 16:46:19  vstinner  set  messages: + msg188075
2013-04-29 16:27:58  vstinner  set  nosy: + vstinner
2013-04-28 07:33:07  neologix  set  messages: + msg187966
2013-04-27 18:46:38  arigo  set  messages: + msg187922
2013-04-27 12:41:35  neologix  set  nosy: + neologix; messages: + msg187903
2013-04-27 11:40:28  pitrou  set  nosy: + pitrou; messages: + msg187898
2013-04-27 07:33:30  arigo  create