race condition in subprocess module #45043
Python's subprocess module has a race condition: the Popen() constructor calls the module-global _cleanup() function whenever a Popen object is created, and that call checks every pending Popen object to see whether its subprocess has exited - i.e. the poll() method is called for every active Popen object. Now, if I create Popen object "foo" in one thread and then call foo.wait(), and meanwhile I create another Popen object "bar" in another thread, then foo.poll() might get called by _cleanup() right at the time when my first thread is running foo.wait(). But those are not synchronized - each calls os.waitpid() if returncode is None, but the section that checks returncode and then calls os.waitpid() is not protected as a critical section should be.

The following code illustrates the problem, and is known to break reasonably consistently with Python 2.4. Changes to subprocess in Python 2.5 seem to address a somewhat related problem, but not this one.

    import sys, os, threading, subprocess, time

    class X(threading.Thread):
        def __init__(self, *args, **kwargs):
            super(X, self).__init__(*args, **kwargs)
            self.start()

    def tt():
        s = subprocess.Popen(("/bin/ls", "-a", "/tmp"), stdout=subprocess.PIPE,
                             universal_newlines=True)
        # time.sleep(1)
        s.communicate()[0]

    for i in xrange(1000):
        X(target=tt)

This typically gives a few dozen errors like these:
Exception in thread Thread-795:
Traceback (most recent call last):
  File "/usr/lib/python2.4/threading.py", line 442, in __bootstrap
    self.run()
  File "/usr/lib/python2.4/threading.py", line 422, in run
    self.__target(*self.__args, **self.__kwargs)
  File "z.py", line 21, in tt
    s.communicate()[0]
  File "/usr/lib/python2.4/subprocess.py", line 1083, in communicate
    self.wait()
  File "/usr/lib/python2.4/subprocess.py", line 1007, in wait
    pid, sts = os.waitpid(self.pid, 0)
OSError: [Errno 10] No child processes

Note that uncommenting time.sleep(1) fixes the problem. So does wrapping subprocess.poll() and wait() with a lock, as sketched below. So does removing the call to the global _cleanup() in the Popen constructor.
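A minimal sketch of the lock workaround (the lock and the helper names here are mine, not part of subprocess):

    import subprocess, threading, time

    _popen_lock = threading.Lock()  # serializes Popen creation and reaping

    def locked_popen(*args, **kwargs):
        # Popen() triggers the global _cleanup(), so creation must hold the lock.
        _popen_lock.acquire()
        try:
            return subprocess.Popen(*args, **kwargs)
        finally:
            _popen_lock.release()

    def locked_wait(p):
        # Poll under the lock instead of blocking in wait() while holding it.
        while True:
            _popen_lock.acquire()
            try:
                if p.poll() is not None:
                    return p.returncode
            finally:
                _popen_lock.release()
            time.sleep(0.05)
|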
Peter, could you take a look at this? Thanks. |
It appears that subprocess calls a module-global _cleanup() whenever opening a new subprocess. This method is meant to reap any child processes that have terminated and have not been explicitly cleaned up. These are processes you would expect to be cleaned up by GC; however, subprocess keeps a list of all spawned subprocesses in _active until they are reaped explicitly, so it can clean up any that are no longer referenced anywhere else.

The problem is that lots of methods, including poll() and wait(), check self.returncode and then modify it. Any non-atomic read/modify action is inherently non-threadsafe. And _cleanup() calls poll() on all un-reaped child processes. If two threads happen to try to spawn subprocesses at once, these _cleanup() calls collide.

The way to fix this depends on how thread-safe you want to make it. If you want to share Popen objects between threads and wait()/poll() with impunity from any thread, you should add a recursive lock attribute to the Popen instance and have it lock/release it at the start/end of every method call (see the sketch below). If you only care about using Popen objects in one thread at a time, then all you need to fix is the nasty "every Popen created calls poll() on every other living Popen object regardless of what thread started them" behaviour. Removing _cleanup() is one way, but it will then not reap child processes that you del'ed all references to (except the one in subprocess._active) before you checked they were done. Probably another good idea is to not append and remove Popen objects to _active directly, and instead add a Popen.__del__() method that defers GC'ing of non-finished Popen objects by adding them to _active.

However, this alone will not fix things, because you are still calling _cleanup() from different threads, it is still calling poll() on all these un-reaped processes, and poll() is not threadsafe. So you either have to make poll() threadsafe (lock/unlock at the beginning/end of the method), or make _cleanup() threadsafe. The reason you can get away with making only _cleanup() threadsafe this way is that _active will then contain only processes that are not referenced anywhere else, so you know the only thing that will call poll() on them is _cleanup().
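A rough sketch of the recursive-lock approach (LockedPopen is a hypothetical name; nothing like it exists in subprocess, and real Popen internals may pass extra arguments to poll(), hence the *args):

    import subprocess, threading

    class LockedPopen(subprocess.Popen):
        # Serialize every returncode check-then-waitpid sequence, so
        # concurrent poll()/wait() calls cannot interleave.
        def __init__(self, *args, **kwargs):
            self._lock = threading.RLock()
            subprocess.Popen.__init__(self, *args, **kwargs)

        def poll(self, *args, **kwargs):
            self._lock.acquire()
            try:
                return subprocess.Popen.poll(self, *args, **kwargs)
            finally:
                self._lock.release()

        def wait(self, *args, **kwargs):
            self._lock.acquire()
            try:
                return subprocess.Popen.wait(self, *args, **kwargs)
            finally:
                self._lock.release()
|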
Having just gone through that waffly description of the problems and various levels of fix, I think there are really only two fixes worth considering:

1. Make Popen instances fully threadsafe, by giving each instance a recursive lock that every method acquires and releases.
2. Get rid of _cleanup() entirely and make reaping child processes the caller's responsibility.
|
I like #2. I don't see any use for threadsafe Popen instances, and I think that any self-respecting long-running server using Popen should properly manage its subprocesses anyway. (And for short-running processes it doesn't really matter if you have a few zombies.) One could add a __del__ method that registers zombies to be reaped later, but I don't think it's worth it, and __del__ has some serious issues of its own. (If you really want to do this, use a weak reference callback instead of __del__ to do the zombie registration.) |
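A sketch of the weak-reference registration Guido describes, with made-up names (none of this exists in subprocess):

    import os, weakref

    _zombie_pids = []  # pids of collected-but-unreaped Popen objects
    _watchers = set()  # keeps the weakrefs themselves alive

    def watch(popen):
        pid = popen.pid
        def _collected(ref, pid=pid):
            # Runs when the Popen object is garbage collected.
            _watchers.discard(ref)
            _zombie_pids.append(pid)
        _watchers.add(weakref.ref(popen, _collected))

    def reap_zombies():
        # Call this periodically from one place, instead of relying on __del__.
        for pid in _zombie_pids[:]:
            try:
                reaped, _ = os.waitpid(pid, os.WNOHANG)
            except OSError:
                reaped = pid  # already gone
            if reaped:
                _zombie_pids.remove(pid)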
Suddenly starting to leave zombies is NOT an option for me. To prevent zombies, all applications using the subprocess module would need to be changed. This is a major breakage of backwards compatibility, IMHO. subprocess (as well as the popen2 module) has prevented zombies automatically from the beginning, and I believe they should continue to do so (at least by default).

A little bit of history: When I wrote the subprocess module, I stole the idea of calling _cleanup() whenever a new instance is created from the popen2 module, since I thought it was nice, lightweight and avoided having a __del__ method (which I have some bad, GC-related experience with). This worked great for a long time. On 2006-04-10, however, martin.v.loewis committed patch 1467770 (revision r45234) to solve bug 1460493. It introduces a __del__ method and some more complexity. I must admit that I didn't review this patch in detail, but it looked like thread safety was being taken care of. Apparently it isn't. Perhaps reverting to the original popen2 approach would solve the problem? Or does the popen2 module suffer from this bug as well?

I think we need to fix this once and for all this time. We should probably also consider adding the option of not having subprocess auto-wait while we are at it, for those who would like to wait() themselves (there should be a tracker item for this, but I can't find it right now).

Are we tight on time for fixing this bug? I'm out in the forest right now...
|
I just looked at popen2 and it has the same bug. I don't think the __del__()-related changes have anything to do with this. Popen.poll() and _cleanup() are non-threadsafe. When __init__() is called to create subprocesses simultaneously in two different threads, they both call _cleanup() and trample all over each other.

Removing _cleanup() will not leave zombies for popened processes that are correctly handled to completion. Using methods like communicate() and wait() will handle the process to completion and reap the child. Unfortunately, I think a fairly common use case is people using popen to fire-and-forget subprocesses without using communicate() or wait(). These will not get reaped, and will end up as zombies.

I cannot think of an easy way to reap abandoned popen instances that is thread safe without using locks. At times like this I wish that the GIL was exposed as a recursive lock... we could have _cleanup() acquire/release the GIL and make it atomic... no more race condition :-)
|
Actually, after thinking about it, there may be a way to make _cleanup() threadsafe without using explicit locks, by leveraging the GIL. If _active.pop() and _active.append() are atomic because of the GIL, then _cleanup() can be made threadsafe. To be truly threadsafe it also needs to make sure that _active contains only abandoned popen instances, so that _cleanup() doesn't have its inst.poll() calls collide with other threads that are still using those popen instances. This can be done by having popen instances added to _active only by Popen.__del__(), and removed only by _cleanup() when they are done.
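A sketch of that scheme (illustrative only; the real module keeps _active internal, and AutoReapPopen is a made-up name):

    import subprocess

    _active = []  # abandoned, still-running Popen objects only

    class AutoReapPopen(subprocess.Popen):
        # Hypothetical subclass showing the __del__-defers-to-_active idea.
        def __del__(self):
            if self.returncode is None:
                _active.append(self)  # list.append is atomic under the GIL

    def _cleanup():
        for _ in range(len(_active)):
            try:
                inst = _active.pop(0)  # atomic: no other thread sees inst now
            except IndexError:
                break                  # another _cleanup() emptied the list
            if inst.poll() is None:    # safe: only _cleanup() references inst
                _active.append(inst)   # still running, defer to next round
|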
If you want this fixed in 2.5.x, a new release is just around the corner, and time is very tight. Otherwise, 2.6a1 is half a year off. |
I just looked at subprocess in svn trunk and it looks like it might already be fixed in both subprocess.py and popen2.py. The part where __del__() adds abandoned Popen instances to _active and _cleanup() removes them is already there. _cleanup() has been made more thread resistant by catching and ignoring the exceptions that arise when two _cleanup()s try to remove the same instances. Popen.poll() has been made more robust by having it set self.returncode to an optional _deadstate argument value when poll() gets an os.error from os.waitpid() on a child that was cleaned up by another thread.

I think there are still a few minor race conditions around Popen.poll(), but they will only affect which non-zero returncode you get... it's not so much "thread safe" as "thread robust". I think the _deadstate argument approach to making Popen.poll() thread-robust is a bit hacky. I would rather see _cleanup() be properly threadsafe, by having it remove the inst from _active before processing it and re-add it if it is still not finished. However, as it is now, it looks like it is fixed...
|
I can create a patch to make current head a bit cleaner, if anyone is interested... |
Sure, this would be great. I would also love a test case that reproduces this problem. |
There should be test cases aplenty - this bug was first reported in March 2006 and this is one of four incarnations of it. See also 1753891, 1754642, 1183780. I think all of these contain test cases (though two roughly coincide), and any fix would be wise to try them all out... |
See http://bugs.python.org/issue1236 for a good repeatable testcase. |
This or some variant also shows up with scons:

Traceback (most recent call last):
  File "/usr/lib/scons-0.97.0d20070918/SCons/Taskmaster.py", line 194, in execute
    self.targets[0].build()
  File "/usr/lib/scons-0.97.0d20070918/SCons/Node/__init__.py", line 365, in build
    stat = apply(executor, (self,), kw)
  File "/usr/lib/scons-0.97.0d20070918/SCons/Executor.py", line 139, in __call__
    return self.do_execute(target, kw)
  File "/usr/lib/scons-0.97.0d20070918/SCons/Executor.py", line 129, in do_execute
    status = apply(act, (self.targets, self.sources, env), kw)
  File "/usr/lib/scons-0.97.0d20070918/SCons/Action.py", line 332, in __call__
    stat = self.execute(target, source, env)
  File "/usr/lib/scons-0.97.0d20070918/SCons/Action.py", line 479, in execute
    result = spawn(shell, escape, cmd_line[0], cmd_line, ENV)
  File "/usr/lib/scons-0.97.0d20070918/SCons/Platform/posix.py", line 104, in spawnvpe_spawn
    return exec_spawnvpe([sh, '-c', string.join(args)], env)
  File "/usr/lib/scons-0.97.0d20070918/SCons/Platform/posix.py", line 68, in exec_spawnvpe
    stat = os.spawnvpe(os.P_WAIT, l[0], l, env)
  File "/sw/pkgs/python-2.3.5_00/lib/python2.3/os.py", line 553, in spawnvpe
    return _spawnvef(mode, file, args, env, execvpe)
  File "/sw/pkgs/python-2.3.5_00/lib/python2.3/os.py", line 504, in _spawnvef
    wpid, sts = waitpid(pid, 0)
OSError: [Errno 10] No child processes

scons: building terminated because of errors. |
Looking at the subprocess.py code, it occurred to me that it never checks whether os.fork() returns -1. |
What makes you think os.fork() can return -1? It's not C, you know... |
Good question. The documentation I was reading was mute on the subject.

Guido van Rossum wrote: "What makes you think os.fork() can return -1?"
|
Yes, like all system calls in the os module. |
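For example, a tiny hypothetical illustration of the API shape (not code from subprocess):

    import os

    try:
        pid = os.fork()
    except OSError as e:
        # A failed fork raises OSError (e.g. EAGAIN); there is no -1 to test as in C.
        print("fork failed: %s" % e)
    else:
        if pid == 0:
            os._exit(0)     # child exits immediately
        os.waitpid(pid, 0)  # parent reaps the child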
"""Basically it's OK to collect note that this would still have problems. PIDs are recycled by the OS |
I am unable to reproduce this problem in today's trunk (2.6a) on OSX 10.4. |
I'm not sure what the POSIX standards say on this (and MS-Windows may go its own way).

Gregory P. Smith wrote:
|
Which system uses a 32 bit PID? Not Linux, or FreeBSD, or OS X - they |
AIX, HP-UX, Solaris, 64 bit Linux, ... Even in the Linux x86 header

Jean-Paul Calderone wrote:
|
PIDs cycle quite frequently. You must be thinking of something else. |
fwiw, I see pids cycle in a reasonable amount of time on modern systems. |
I'm afraid I'm not going to be installing exim in order to test this. Copying files into /usr/lib/pythonx.x is a very odd thing to do, by the way. |
I can reproduce the problem (or at least get the same symptom) by doing:

    import subprocess, signal

    signal.signal(signal.SIGCLD, signal.SIG_IGN)
    subprocess.Popen(['echo','foo']).wait()

The echo command completes, but the subprocess call throws the "no child processes" error.
I don't have a good enough understanding of the underlying implementation. This is a follow-up to trying to resolve this problem in the PEP 3143 daemon implementation. |
Some buildbot hit this bug: |
Sometimes the IA64 Ubuntu bot fails on this one. |
Previous buildbot failures were in test_multiprocessing. Now it should be fixed:
|
Florent, have you tested any of the sample test programs mentioned in this bug report? For example, the one by Joel Martin (kanaka). I'd suggest testing those first before marking this issue as fixed.
|
>>> import subprocess, signal
>>> signal.signal(signal.SIGCLD, signal.SIG_IGN)
0
>>> subprocess.Popen(['echo','foo']).wait()
foo
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "./Lib/subprocess.py", line 1229, in wait
    pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0)
  File "./Lib/subprocess.py", line 482, in _eintr_retry_call
    return func(*args)
OSError: [Errno 10] No child processes

You're right, I fixed something giving the same ECHILD error in multiprocessing. But it was not related. |
http://groups.google.com/group/comp.lang.python/browse_thread/thread/9a853d0308c8e55a

"I'm also glad to see a test case that causes exactly the same error with or without the presence of a 'daemon.DaemonContext'."

Further research shows that handling of 'SIGCLD' (or 'SIGCHLD') is fairly platform-dependent. So I will apply a change similar to Joel Martin's suggestion, to default SIGCLD handling in DaemonContext. This causes the above test case to succeed; the output file contains the expected output.

-- By Ben Finney
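The change described presumably amounts to not forcing SIGCHLD/SIGCLD to SIG_IGN inside the daemon, so that subprocess can reap its own children; roughly (my reconstruction, not Ben's actual patch):

    import signal

    # Leave SIGCHLD at its default disposition; SIG_IGN makes the kernel
    # auto-reap children, after which os.waitpid() fails with ECHILD.
    signal.signal(signal.SIGCHLD, signal.SIG_DFL)
|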
Ben Finney's comment suggests to me that this bug is being ignored. Am I wrong?

"with extra notes in the documentation that anyone using child processes needs to be wary of signal handling."

Why should they be wary? We should just fix this bug. |
By the way, in three months from today, this bug will be 3 years old..... |
I really don't care about the age of a bug. This bug is young. I've fixed many bugs over twice its age in the past. Regardless, I've got some serious subprocess internal refactoring changes coming in the very near future to explicitly deal with thread safety issues. I'm adding this one to my list of things to tackle. |
Gregory, Awesome! Approx. how long until we hear back from you in this report? |
I have exactly the same problem. Is there a thread-safe alternative to execute subprocesses in threads? |
I'm using an old Plone/Zope Product, PHParser, that uses the popen2 call ... same problem for me. |
A workaround for those still having problems with this: stub out subprocess._cleanup with a no-op function. _cleanup() is only useful if your app is ever using subprocess and forgetting to call wait() on Popen objects before they are deleted. If it is, you can keep a reference to the old _cleanup() function and periodically call it on your own. http://bugs.python.org/issue1236 effectively demonstrates this.
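In code, the workaround looks something like this (a sketch; _original_cleanup is my own name):

    import subprocess

    _original_cleanup = subprocess._cleanup   # keep a reference for manual reaping
    subprocess._cleanup = lambda: None        # stop Popen() from polling unrelated instances

    # Later, from a single thread, if the app ever abandons Popen objects:
    # _original_cleanup()
|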
Stubbing out subprocess._cleanup does not work around the problem from this example on 2.6.5:

    import subprocess, signal

    subprocess._cleanup = lambda: None
    signal.signal(signal.SIGCLD, signal.SIG_IGN)
    subprocess.Popen(['echo','foo']).wait()

james@hyla:~$ python tt.py
foo
Traceback (most recent call last):
  File "tt.py", line 5, in <module>
    subprocess.Popen(['echo','foo']).wait()
  File "/usr/lib/python2.6/subprocess.py", line 1170, in wait
    pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0)
  File "/usr/lib/python2.6/subprocess.py", line 465, in _eintr_retry_call
    return func(*args)
OSError: [Errno 10] No child processes

This bug still prevents subprocess from being used inside of a daemon where SIGCLD is being caught to reap zombie processes. |
r87233 fixes the OSError escaping from wait() issue when SIGCLD is set to be ignored. (to appear in 3.2beta1; it is a candidate for backporting to 3.1 and 2.7) |
Sorry, I meant 3.2beta2 above.
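The gist of that fix, as a rough sketch (the real patch lives inside subprocess.py's wait() and differs in detail):

    import errno, os

    def wait_tolerating_echild(pid):
        # Like Popen.wait(), but tolerant of the child having been reaped
        # elsewhere (e.g. because SIGCHLD is set to SIG_IGN).
        try:
            _, sts = os.waitpid(pid, 0)
        except OSError as e:
            if e.errno != errno.ECHILD:
                raise
            # The real exit status is lost; assume success.
            sts = 0
        return sts
|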
It seems the canonical spelling is SIGCHLD. SIGCLD doesn't exist everywhere and it produces test failures under OS X:
http://www.python.org/dev/buildbot/all/builders/AMD64%20Leopard%203.x

FWIW, this is what POSIX says: “Some implementations, including System V, have a signal named SIGCLD, which is similar to SIGCHLD in 4.2 BSD. POSIX.1 permits implementations to have a single signal with both names. POSIX.1 carefully specifies ways in which conforming applications can avoid the semantic differences between the two different implementations. The name SIGCHLD was chosen for POSIX.1 because most current application usages of it can remain unchanged in conforming applications. SIGCLD in System V has more cases of semantics that POSIX.1 does not specify, and thus applications using it are more likely to require changes in addition to the name change.”
http://pubs.opengroup.org/onlinepubs/9699919799/xrat/V4_xsh_chap02.html
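Portable code can look the name up defensively, e.g.:

    import signal

    # SIGCHLD is the POSIX spelling; fall back to SIGCLD where only the
    # System V name exists.
    SIGCHLD = getattr(signal, 'SIGCHLD', None) or getattr(signal, 'SIGCLD', None)
|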
Please check my logic here, but the patched code seems to throw away perfectly valid return codes. IMHO it could be fixed by moving _handle_exitstatus() into the try: section, and returning "self.returncode or 0", or something like that.
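A sketch of how I read that proposal (it reuses the internal _handle_exitstatus() name from subprocess.py; the patched code itself is not shown above):

    import os

    def wait(self):
        if self.returncode is None:
            try:
                _, sts = os.waitpid(self.pid, 0)
                # Record the real status inside the try block, so a
                # successfully collected exit status is never thrown away.
                self._handle_exitstatus(sts)
            except OSError:
                # Another thread won the waitpid() race; the status is lost.
                if self.returncode is None:
                    self.returncode = 0
        return self.returncode
|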
Is this issue fixed in Python 2.7.10? I am experiencing strange race conditions in code that combines threads with a multiprocessing pool, whereby a thread is spawned by each pool worker. Running on the latest Mac OS X. |
Hi, could anyone help identify in which Python release this bug was fixed? I am unable to deduce it from the bug tracker interface. Regards. |
"Hi, Could anyone help here to identify in which Python release the bug is fixed. I am unable to deduce from the bug tracker interface in which release this issue is fixed." Oh, this is an old issue :-) msg123946: "r87233 fixes the OSError escaping from wait() issue when SIGCLD is set to be ignored. (to appear in 3.2beta1; it is a candidate for backporting to 3.1 and 2.7)" msg123951: "sorry, i meant 3.2beta2 above. => the fix is in Python 2 >= 2.7.2, Python 3.1 >= v3.1.4, Python 3 >= 3.2.4. I hope that you are using a more recent Python version than these versions :-) |