
Author aronacher
Recipients aronacher
Date 2010-09-04.15:32:06
SpamBayes Score 5.551115e-17
Marked as misclassified No
Message-id <1283614329.92.0.49636905905.issue9775@psf.upfronthosting.co.za>
In-reply-to
Content
It's hard to say what exactly is to blame here, but I will try to outline the problem as well as I can and track it down:

A library of mine uses a thread that periodically gets entries from a multiprocessing.Queue.  When the Python interpreter is shutting down, I see this on stderr:

Error in sys.exitfunc:
Traceback (most recent call last):
  File "python2.6/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "python2.6/multiprocessing/util.py", line 270, in _exit_function
    info('process shutting down')
TypeError: 'NoneType' object is not callable
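
What the traceback boils down to is an atexit handler looking up a module global that interpreter shutdown has already cleared to None.  A minimal, self-contained sketch of the same failure mode (no multiprocessing involved; the names are illustrative stand-ins for util.info and util._exit_function):

```python
def info(msg):
    print(msg)

def _exit_function():
    # 'info' is looked up in the module namespace at call time
    info('process shutting down')

# simulate interpreter shutdown clearing the module global
info = None

try:
    _exit_function()
except TypeError as exc:
    error = str(exc)  # "'NoneType' object is not callable"
```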

Tracking down the issue shows that something has a __del__ (I have not found the object; I was under the impression the ProcessAwareLogger monkeypatch was responsible, but apparently it's not the culprit) and clears out the module.  When the exit handler runs, info has already been set to None.  It can easily be checked whether that is the issue by adding this weird monkeypatch:

def fix_logging_in_multiprocessing():
    from multiprocessing import util, process
    import logging
    util._check_logger_class()
    old_class = logging.getLoggerClass()
    def __del__(self):
        # Neutralize the module-level helpers so the atexit handler
        # finds harmless no-ops instead of None after teardown.
        util.info = util.debug = lambda *a, **kw: None
        process._cleanup = lambda *a, **kw: None
    old_class.__del__ = __del__
  
I originally thought that the destructor of the ProcessAwareLogger class was the issue, but apparently not, because it does not have one.

Interestingly if one looks into the util.py module the following comment can be found:

def _check_logger_class():
    '''
    Make sure process name is recorded when loggers are used
    '''
    # XXX This function is unnecessary once logging is patched
    import logging
    if hasattr(logging, 'multiprocessing'):
        return
    ...

This is interesting because the logging monkeypatch is unnecessary once logging itself is multiprocessing aware (which it should be in 2.6 at least).  However, apparently at some point the top-level multiprocessing import was removed, which makes this test fail every time.  Looking at the current 2.6 branch, it appears that the monkeypatch was removed by Jesse Noller in [68737] over a year ago.
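
To make the broken guard concrete, here is a sketch of how a hasattr-based sentinel check of this kind is meant to short-circuit.  The namespace and the sentinel assignment are hypothetical stand-ins for the real logging module and for the top-level import whose removal broke the actual check:

```python
import types

# stand-in namespace so we do not touch the real logging module
fake_logging = types.SimpleNamespace()

def check_logger_class():
    # mirrors the shape of _check_logger_class: bail out early if the
    # sentinel attribute is present, otherwise do the patch work and
    # (crucially) leave the sentinel behind
    if hasattr(fake_logging, 'multiprocessing'):
        return 'skipped'
    fake_logging.multiprocessing = True  # hypothetical sentinel
    return 'patched'

results = [check_logger_class(), check_logger_class()]
```

Without the sentinel ever being set (as in the broken 2.6 code), the early return never fires and the patch is applied on every call.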

With the current development version (and, I suppose, any release later than the 2.6.1 I am currently testing) the error disappears as well.

However, I suppose the core issue would come back as soon as the atexit call again runs after a destructor.  Because of that I would recommend aliasing info to _info and debug to _debug and then calling the underscored functions in the atexit handler.
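
As a rough sketch (simplified names and bodies, not the actual util.py code), the aliasing would look something like this:

```python
messages = []  # stands in for the real logger output

def _info(msg, *args):
    # the real helper would forward to the module logger
    messages.append(msg % args if args else msg)

info = _info  # public alias kept for normal callers

def _exit_function():
    # the atexit handler calls the underscored name directly, so it no
    # longer depends on the public global still being intact
    _info('process shutting down')

# simulate the public name being clobbered during interpreter shutdown
info = None
_exit_function()
```

Whether the underscored name actually outlives the public one during real module teardown depends on the interpreter's clearing order, which is the assumption behind the proposal.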

Any reasons for not doing that?  Otherwise I would like to propose committing that patch.
History
Date                 User       Action  Args
2010-09-04 15:32:10  aronacher  set     recipients: + aronacher
2010-09-04 15:32:09  aronacher  set     messageid: <1283614329.92.0.49636905905.issue9775@psf.upfronthosting.co.za>
2010-09-04 15:32:08  aronacher  link    issue9775 messages
2010-09-04 15:32:06  aronacher  create