
Author vinay.sajip
Recipients vinay.sajip, yateenjoshi
Date 2010-01-10.19:06:42
SpamBayes Score 2.3380702e-09
Marked as misclassified No
Message-id <1263150406.71.0.17397571285.issue7664@psf.upfronthosting.co.za>
In-reply-to
Content
Please clarify exactly what you mean by "multiprocessing logger". Note that logging does not support logging to the same file from concurrent processes (threads *are* supported). See

http://docs.python.org/library/logging.html#logging-to-a-single-file-from-multiple-processes

for more information.

Also, I don't believe your fix is appropriate for the core logging module, and it's not clear to me why a lock failure would occur when the disk is full. It might behave that way on Solaris (I don't have access to a Solaris box), but not in general. In fact, your stack trace suggests the problem lies in a custom handler you are using (defined in the file cloghandler.py on your system).

In any event, if you believe you can recover from the error, the right thing to do is to subclass the file handler you are using and override its handleError method to attempt recovery.
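To illustrate, such a subclass might look like the sketch below. The class name and the recovery strategy (reopen the file and retry once) are purely hypothetical; adapt them to whatever actually fails in your environment:

```python
import logging

class RecoveringFileHandler(logging.FileHandler):
    """Hypothetical handler that attempts recovery in handleError()."""

    def handleError(self, record):
        if getattr(self, "_recovering", False):
            # Already inside a recovery attempt: fall back to the default
            # behaviour (print a traceback if logging.raiseExceptions is set).
            logging.FileHandler.handleError(self, record)
            return
        self._recovering = True
        try:
            # Example recovery: close and reopen the log file, then
            # retry the failed record once.
            self.close()
            self.stream = self._open()
            self.emit(record)
        except Exception:
            logging.FileHandler.handleError(self, record)
        finally:
            self._recovering = False
```

The `_recovering` flag matters because `emit()` calls `handleError()` on failure, so a retry that fails again would otherwise recurse indefinitely.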

Did you post this problem on comp.lang.python? There are bound to be other Solaris users there who may be able to reproduce your problem and/or give you more advice about it.

Closing, as this is not a logging bug AFAICT.
History
Date User Action Args
2010-01-10 19:06:46  vinay.sajip  set     recipients: + vinay.sajip, yateenjoshi
2010-01-10 19:06:46  vinay.sajip  set     messageid: <1263150406.71.0.17397571285.issue7664@psf.upfronthosting.co.za>
2010-01-10 19:06:44  vinay.sajip  link    issue7664 messages
2010-01-10 19:06:42  vinay.sajip  create