
classification
Title: python logger does not handle IOError Exception
Type: behavior Stage:
Components: Extension Modules Versions: Python 2.6
process
Status: closed Resolution: not a bug
Dependencies: Superseder:
Assigned To: vinay.sajip Nosy List: vinay.sajip, yateenjoshi
Priority: normal Keywords:

Created on 2010-01-10 10:58 by yateenjoshi, last changed 2022-04-11 14:56 by admin. This issue is now closed.

Messages (4)
msg97504 - (view) Author: Yateen V. Joshi (yateenjoshi) Date: 2010-01-10 10:58
I am running an in-house application that uses a multiprocessing logger.
The application FTPs files from a remote host and keeps them on a local
disk. Here is the scenario -
While pulling the files, I let the local disk fill up (100%). The application
hangs and cannot log any further. Apart from other exceptions, the following
is the most prominent one. Even if I then clean up the disk, the application
does not proceed; it remains hung, and I have to restart it.

  File "/export/home/yateen/ess/3rdparty/python/solaris/lib/python2.6/logging/__init__.py", line 1059, in error
    self._log(ERROR, msg, args, **kwargs)
  File "/export/home/yateen/ess/3rdparty/python/solaris/lib/python2.6/logging/__init__.py", line 1141, in _log
    self.handle(record)
  File "/export/home/yateen/ess/3rdparty/python/solaris/lib/python2.6/logging/__init__.py", line 1151, in handle
    self.callHandlers(record)
  File "/export/home/yateen/ess/3rdparty/python/solaris/lib/python2.6/logging/__init__.py", line 1188, in callHandlers
    hdlr.handle(record)
  File "/export/home/yateen/ess/3rdparty/python/solaris/lib/python2.6/logging/__init__.py", line 671, in handle
    self.release()
  File "build/bdist.solaris-2.10-sun4u/egg/cloghandler.py", line 189, in release
    self.stream.flush()


I tried looking into the logging module's code and could not find any IOError
exception handling there. After going through various situations and some
trial and error (I am not a Python guru), I found the following fix works -
method handle(record), file logging/__init__.py, line #670
Original code -
        rv = self.filter(record)
        if rv:
            self.acquire()
            try:
                self.emit(record)
            finally:
                self.release()
        return rv

Modified code - 
        rv = self.filter(record)
        if rv:
            self.acquire()
            try:
                self.emit(record)
            finally:
                try:
                    self.release()
                except (IOError, OSError):
                    pass
        return rv

What this says is: if there is an error while releasing the lock, simply ignore
it and proceed. With this fix, the application proceeds properly once the disk
is cleaned up.

What I want to know is whether this is an appropriate fix. Can I go ahead with it?

Thanks,
Yateen..
msg97542 - (view) Author: Vinay Sajip (vinay.sajip) * (Python committer) Date: 2010-01-10 19:06
Please clarify exactly what you mean by "multiprocessing logger". Note that logging does not support logging to the same file from concurrent processes (threads *are* supported). See

http://docs.python.org/library/logging.html#logging-to-a-single-file-from-multiple-processes

for more information.

Also, I don't believe your fix is appropriate for the core logging module, and it's not clear to me why a lock failure would occur if the disk were full. It might be that way on Solaris (I don't have access to a Solaris box), but not in general. In fact, from your stack trace this appears to be a problem in a custom handler you are using (defined in the file cloghandler.py on your system).

In any event, if you believe you can recover from the error, the right thing to do is to subclass the file handler you are using and override its handleError method to attempt recovery.
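
For illustration only (the class name and the recovery strategy below are hypothetical, not part of the logging module), such a subclass might look roughly like this:

    import logging
    import sys

    class RecoveringFileHandler(logging.FileHandler):
        # Hypothetical sketch: drop the record that failed and try to reopen
        # the stream so that later records can still be written.
        def handleError(self, record):
            exc_value = sys.exc_info()[1]   # the exception raised inside emit()
            if isinstance(exc_value, (IOError, OSError)):
                try:
                    self.close()
                    self.stream = open(self.baseFilename, self.mode)
                except (IOError, OSError):
                    pass    # the disk may still be full; give up on this record
            else:
                # Anything unexpected keeps the default behaviour.
                logging.FileHandler.handleError(self, record)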

Did you post this problem on comp.lang.python? There are bound to be other Solaris users there who may be able to reproduce your problem and/or give you more advice about it.

Closing, as this is not a logging bug AFAICT.
msg97576 - (view) Author: Yateen V. Joshi (yateenjoshi) Date: 2010-01-11 06:20
OK, thanks. I will try subclassing the handler.
A question, however: I do not see the IOError exception being handled
anywhere in the logging module. Is it not possible that the same error would
occur when we are trying to 'emit' a record to a log file, or when some other
I/O-related processing happens at the logger level?
msg97579 - (view) Author: Vinay Sajip (vinay.sajip) * (Python committer) Date: 2010-01-11 07:08
> From: Yateen V. Joshi <report@bugs.python.org>

> A question, however: I do not see the IOError exception being handled
> anywhere in the logging module. Is it not possible that the same error would
> occur when we are trying to 'emit' a record to a log file, or when some other
> I/O-related processing happens at the logger level?

If you look carefully at the code in the module, you will see code like

except:
    self.handleError(record)

which *will* catch IOError. In handleError, you would have access to the exception via sys.exc_info().
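
As a simplified sketch (illustrative only, not the module's actual code), the pattern is:

    import sys
    import logging

    class ExampleStreamHandler(logging.Handler):
        # Simplified version of the pattern used by the stock handlers.
        def __init__(self, stream=None):
            logging.Handler.__init__(self)
            self.stream = stream or sys.stderr

        def emit(self, record):
            try:
                msg = self.format(record)
                self.stream.write(msg + "\n")   # an IOError raised here...
                self.stream.flush()
            except (KeyboardInterrupt, SystemExit):
                raise
            except:                             # ...lands in this bare except
                self.handleError(record)        # and is routed to handleError()

        def handleError(self, record):
            # sys.exc_info() still describes the exception raised in emit().
            exc_value = sys.exc_info()[1]
            sys.stderr.write("emit failed: %r\n" % (exc_value,))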
History
Date                 User             Action  Args
2022-04-11 14:56:56  admin            set     github: 51913
2010-01-11 07:08:18  vinay.sajip      set     messages: + msg97579
2010-01-11 06:20:20  yateenjoshi      set     messages: + msg97576
2010-01-10 19:06:44  vinay.sajip      set     status: open -> closed
                                              messages: + msg97542
                                              assignee: vinay.sajip
                                              resolution: not a bug
                                              stage: test needed ->
2010-01-10 14:44:20  r.david.murray   set     priority: normal
                                              nosy: + vinay.sajip
                                              stage: test needed
2010-01-10 10:58:30  yateenjoshi      create