
Author pitrou
Recipients amaury.forgeotdarc, gregory.p.smith, isandler, pitrou, stutzbach, vstinner
Date 2010-12-03.13:23:55
Message-id <1291382633.3624.4.camel@localhost.localdomain>
In-reply-to <1291381898.36.0.506299178501.issue10478@psf.upfronthosting.co.za>
Content
> This issue reminds me of #3618 (opened 2 years ago): I proposed to use
> RLock instead of Lock, but RLock was implemented in Python and was
> too slow. Today, we have RLock implemented in C, so it may be possible
> to use it. Would that solve this issue?

I think it's more complicated. If you use an RLock, the routine can be
re-entered (e.g. from a signal handler) while the object is in an
inconsistent state, so the behaviour can be all kinds of wrong.
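
To make that concrete, here is a toy sketch (not the real io.BufferedWriter;
the class and attribute names are made up) of the window an RLock opens:

    import threading

    class ToyBufferedWriter:
        def __init__(self):
            # Reentrant lock: the same thread may re-acquire it, e.g. from
            # a signal handler that ends up calling write() again.
            self._lock = threading.RLock()
            self._buf = bytearray()
            self._count = 0          # invariant: _count == len(_buf)

        def write(self, data):
            with self._lock:
                self._buf += data
                # If a signal handler calls self.write() right here, the
                # RLock lets it back in, but _count != len(_buf), so the
                # nested call runs against an inconsistent object.
                self._count += len(data)
            return len(data)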

> > The lock is precisely there so that the buffered object doesn't 
> > have to be MT-safe or reentrant. It doesn't seem reasonable
> > to attempt to restore the file to a "stable" state in the middle
> > of an inner routine.
> 
> Oh, so releasing the lock around the calls to
> _bufferedwriter_raw_write() (around PyObject_CallMethodObjArgs() in
> _bufferedwriter_raw_write()) and PyErr_CheckSignals() is not a good
> idea? Or is it just complex because the buffered object has to be in a
> consistent state?

Both :)
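
For reference, a hedged sketch of the idea being discussed (dropping the lock
around the raw call); the names are illustrative, not the actual _io
internals:

    import threading

    class ToyBufferedWriter:
        def __init__(self, raw):
            self._lock = threading.Lock()
            self._raw = raw              # underlying unbuffered file object
            self._buf = bytearray()

        def _flush_unlocked(self):
            # Called with self._lock already held.
            data = bytes(self._buf)
            self._lock.release()
            try:
                # With the lock released, signal handlers and other threads
                # are free to call into this object again...
                self._raw.write(data)
            finally:
                self._lock.acquire()
            # ...so by the time we reacquire the lock, self._buf may have
            # been changed under us, and clearing it can lose their data.
            del self._buf[:]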

> > (in a more sophisticated version, we could store pending writes 
> > so that they get committed at the end of the currently 
> > executing write)
> 
> If the pending write fails, who gets the error?

Yes, in the end I think it's not a good idea. flush() couldn't work
properly anyway, because it *has* to flush the buffer before returning.
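
As a sketch of why (again with toy names, not the real code): flush()'s
contract is that the buffered data is on the raw stream when it returns, so
queueing the bytes as a "pending write" is not an option for it:

    class ToyBufferedWriter:
        def __init__(self, raw):
            self._raw = raw
            self._buf = bytearray()
            self._pending = []       # hypothetical deferred-write queue

        def flush(self):
            # Deferring would look like:
            #     self._pending.append(bytes(self._buf)); return
            # but then the data would not be on the raw stream when flush()
            # returns, which breaks its contract.  So it has to write now:
            self._raw.write(bytes(self._buf))
            del self._buf[:]
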
History
Date: 2010-12-03 13:24:01  User: pitrou  Action: set  Args: recipients: + pitrou, gregory.p.smith, isandler, amaury.forgeotdarc, vstinner, stutzbach
Date: 2010-12-03 13:23:55  User: pitrou  Action: link  Args: issue10478 messages
Date: 2010-12-03 13:23:55  User: pitrou  Action: create