Message123243
> This issue reminds me of #3618 (opened 2 years ago): I proposed to use
> RLock instead of Lock, but RLock was implemented in Python and was
> too slow. Today we have RLock implemented in C, so it may be possible
> to use it. Would that solve this issue?
I think it's more complicated. If you use an RLock, the same thread can
re-enter the routine while the object is in an inconsistent state, so the
behaviour can go wrong in all kinds of ways.
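To make the hazard concrete, here is a minimal Python sketch (not CPython's actual C implementation) showing the key difference: an RLock grants a second acquire to the thread that already holds it, which is exactly what would let a signal handler or recursive call re-enter buffered-I/O code while the buffer is mid-update.

```python
import threading

# A plain Lock refuses a second acquire from the owning thread,
# while an RLock grants it -- so with an RLock, re-entrant code
# could run against a half-updated buffered object.
lock = threading.Lock()
lock.acquire()
print(lock.acquire(blocking=False))   # → False: reentry blocked
lock.release()

rlock = threading.RLock()
rlock.acquire()
print(rlock.acquire(blocking=False))  # → True: reentry allowed
rlock.release()
rlock.release()
```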
> > The lock is precisely there so that the buffered object doesn't
> > have to be MT-safe or reentrant. It doesn't seem reasonable
> > to attempt to restore the file to a "stable" state in the middle
> > of an inner routine.
>
> Oh, so releasing the lock around the calls to
> _bufferedwriter_raw_write() (around PyObject_CallMethodObjArgs() in
> _bufferedwriter_raw_write()) and PyErr_CheckSignals() is not a good
> idea? Or is it just complex because the buffered object has to be in a
> consistent state?
Both :)
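To illustrate why "the object has to be in a consistent state" is the crux, here is a hypothetical Python sketch (the class name, buffer threshold, and structure are mine, not CPython's `_io` code) of the release-the-lock-around-the-raw-write pattern: the buffer must be detached and the object left consistent *before* the lock is dropped, because other threads can observe it while the blocking raw write runs.

```python
import io
import threading

class BufferedWriterSketch:
    """Hypothetical sketch of releasing the lock around the raw write.
    The object is made consistent (buffer emptied) before unlocking."""

    BUFFER_SIZE = 8192  # assumed threshold, for illustration only

    def __init__(self, raw):
        self._raw = raw
        self._lock = threading.Lock()
        self._buffer = bytearray()

    def write(self, data):
        with self._lock:
            self._buffer += data
            if len(self._buffer) < self.BUFFER_SIZE:
                return len(data)
            # Detach the pending bytes first, leaving the object in a
            # consistent state (empty buffer)...
            pending, self._buffer = bytes(self._buffer), bytearray()
        # ...then perform the slow, signal-interruptible call unlocked.
        self._raw.write(pending)
        return len(data)

w = BufferedWriterSketch(io.BytesIO())
w.write(b"x" * 9000)  # crosses the threshold, triggers a raw write
print(w._raw.getvalue() == b"x" * 9000)  # → True
```

The sketch also hints at why the idea is complex in practice: any error raised by the unlocked raw write has to be reconciled with a buffer that was already emptied.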
> > (in a more sophisticated version, we could store pending writes
> > so that they get committed at the end of the currently
> > executing write)
>
> If the pending write fails, who gets the error?
Yes, on reflection I don't think it's a good idea. flush() couldn't work
properly anyway, because it *has* to flush the buffer before returning.
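As a quick illustration of that contract, using io.BufferedWriter over a BytesIO as a stand-in for the raw file:

```python
import io

raw = io.BytesIO()
buf = io.BufferedWriter(raw, buffer_size=16)

buf.write(b"hello")
# The bytes sit in the buffer: nothing has reached the raw stream yet.
print(raw.getvalue())  # → b''

buf.flush()
# flush() *has* to commit the buffered data before returning,
# so it cannot simply queue a "pending write" for later.
print(raw.getvalue())  # → b'hello'
```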
Date | User | Action | Args
2010-12-03 13:24:01 | pitrou | set | recipients: + pitrou, gregory.p.smith, isandler, amaury.forgeotdarc, vstinner, stutzbach
2010-12-03 13:23:55 | pitrou | link | issue10478 messages
2010-12-03 13:23:55 | pitrou | create |