multiprocessing occasionally spits out exception during shutdown #48356
Comments
I worked up a simple example of using the multiprocessing module, driven from the shell with:

```shell
for i in $(range 500) ; do
```

Most of the time all I see are the '!' lines from the echo command. Occasionally, though, I get:

```
Exception in thread QueueFeederThread (most likely raised during interpreter shutdown):
Traceback (most recent call last):
  File "/Users/skip/local/lib/python2.7/threading.py", line 522, in __bootstrap_inner
  File "/Users/skip/local/lib/python2.7/threading.py", line 477, in run
  File "/Users/skip/local/lib/python2.7/multiprocessing/queues.py", line 233, in _feed
<type 'exceptions.TypeError'>: 'NoneType' object is not callable
```

This occurred once in approximately 1500 runs of the script (three times |
Oh, the range command used in the shell for loop is analogous to Python's |
Got another one just now, but with just the note about the exception |
Final comment before I see some feedback from the experts. I have this code in the worker function's loop:

```python
# quick pause to allow other stuff to happen a bit randomly
t = 0.1 * random.random()
time.sleep(t)
```

If I eliminate the sleep altogether pretty much all hell breaks loose.
time.sleep(0.00015625)
At that point it was complaining about killing worker processes on
I suppose the moral of the story is to not use multiprocessing except |
Skip, using this:

```shell
while ((x++ < 500)) ; do echo '!'$i ; ./python.exe test_proc.py; done
```
|
I don't see the exception in python-trunk, freshly compiled. It could be |
Ah ha. I see it if I run it with the loop set to 3000 - it is pretty rare. |
For what it's worth, I think I have a simpler reproducer of this issue. Using freshly-compiled python-from-trunk (as well as multiprocessing-from-trunk), I get tracebacks from the following about 30% of the time:

"""

My tracebacks are of the form:
"""
Exception in thread Thread-1 (most likely raised during interpreter shutdown):
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/threading.py", line 530, in __bootstrap_inner
  File "/usr/local/lib/python2.7/threading.py", line 483, in run
  File "/usr/local/lib/python2.7/multiprocessing/pool.py", line 272, in _handle_workers
<type 'exceptions.TypeError'>: 'NoneType' object is not callable
"""
|
Greg - what platform? |
I'm on Ubuntu 10.04, 64 bit. |
Greg - this is actually a different exception than the original bug report; could you please file a new issue with the information you've provided? I'm going to need to find a 64-bit Ubuntu box, as I don't have one right now. |
Sure thing. See http://bugs.python.org/issue9207. |
With the example script attached I see the exception every time, on Ubuntu 10.10 with Python 2.6. Since the offending line in multiprocessing/queues.py (233) is a debug statement, just commenting it out seems to stop this exception. Looking at the util file shows the logging functions to be all of the form:

```python
if _logger:
    _logger.log(...
```

Could it be possible that after the check, the _logger global (or the debug function) is destroyed by the exit handler? Can we convince them to stick around until such a time that they cannot be called?

Adding a small delay before joining also seems to work, but is ugly. Why should another Process *have* to have a minimum amount of work to not throw an exception? |
On Tue, Jan 18, 2011 at 6:23 PM, Brian Thorne <report@bugs.python.org> wrote:
See http://bugs.python.org/issue9207 - but yes, the problem is that |
I can confirm this, but with Python 2.7.1 on Ubuntu 11.04 64-bit. My code was working with a queue that was being fed a two-string tuple. |
I can't seem to reproduce this under 3.3. Should it be closed? |
On Wed, Aug 24, 2011 at 3:01 PM, Antoine Pitrou <report@bugs.python.org> wrote:
I don't think so; it's still applicable to 2.x, and a fix should go |
Indeed, 2.7 seems still affected. |
Ok, I think the reason this doesn't appear in 3.2/3.3 is the fix for bpo-1856. In 2.x (and 3.1) daemon threads can continue executing after the interpreter's internal structures have started being destroyed. The least intrusive solution is to always join the helper thread before shutting down the interpreter. Patch attached. |
In Antoine's patch, ISTM that the line created_by_this_process = ... could also be deleted, as the patch no longer uses that value and it's not used anywhere later in the method. |
New changeset d316315a8781 by Antoine Pitrou in branch '2.7': |
This should hopefully be fixed now. Feel free to reopen if it isn't. |
Ugh. Not 100% sure it's related, but I've been getting a similar traceback when running pip's test suite (python setup.py test) on OSX 10.6.8 with Python 2.7.2:

```
Traceback (most recent call last):
  File "/usr/local/Cellar/python/2.7.2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "/usr/local/Cellar/python/2.7.2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/util.py", line 284, in _exit_function
    info('process shutting down')
TypeError: 'NoneType' object is not callable
```

Obviously it's not the exact same bug as fixed here, but Googling the traceback led me here and I do think it's the same genre of bug, i.e., multiprocessing's use of forking leads to issues when atexit is called (wasn't sure whether to open it here or bpo-9207). Also, see https://groups.google.com/forum/#!topic/nose-users/fnJ-kAUbYHQ; it seems other users of the nose testsuite ran into this. I'm afraid I won't have time to look much further into this (the reason I'm running pip's testsuite is that I'm already trying to make a contribution to pip...), but I thought it's best to at least mention it somewhere. |