race condition in subprocess module #45043

Closed
dsagal mannequin opened this issue Jun 5, 2007 · 59 comments
Assignees
Labels
stdlib Python modules in the Lib dir type-bug An unexpected behavior, bug, or error

Comments

dsagal mannequin commented Jun 5, 2007

BPO 1731717
Nosy @dbaarda, @gpshead, @pitrou, @vstinner, @giampaolo, @tiran, @matejcik, @benjaminp, @djc, @bitdancer, @jonashaag, @florentx, @kmoza
Files
  • mylib.c
  • main.c
  • exim_local_scan2.py
  Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = 'https://github.com/gpshead'
    closed_at = <Date 2010-12-14.15:17:05.636>
    created_at = <Date 2007-06-05.22:19:54.000>
    labels = ['type-bug', 'library']
    title = 'race condition in subprocess module'
    updated_at = <Date 2016-02-18.13:05:37.382>
    user = 'https://bugs.python.org/dsagal'

    bugs.python.org fields:

    activity = <Date 2016-02-18.13:05:37.382>
    actor = 'vstinner'
    assignee = 'gregory.p.smith'
    closed = True
    closed_date = <Date 2010-12-14.15:17:05.636>
    closer = 'gregory.p.smith'
    components = ['Library (Lib)']
    creation = <Date 2007-06-05.22:19:54.000>
    creator = 'dsagal'
    dependencies = []
    files = ['14086', '14087', '14134']
    hgrepos = []
    issue_num = 1731717
    keywords = []
    message_count = 59.0
    messages = ['32210', '32211', '32212', '32213', '32214', '32215', '32216', '32217', '32218', '32219', '32220', '32221', '32222', '56245', '57716', '57888', '57890', '57892', '57894', '64180', '64181', '64421', '64422', '64424', '64426', '64428', '64660', '64737', '65610', '70471', '82663', '88406', '88617', '88619', '88621', '88623', '88625', '92772', '100430', '100523', '100643', '100646', '100647', '100648', '100650', '100651', '100657', '100659', '106519', '115210', '117985', '122556', '123946', '123951', '123957', '142215', '251268', '260455', '260457']
    nosy_count = 30.0
    nosy_names = ['nnorwitz', 'collinwinter', 'abo', 'gregory.p.smith', 'astrand', 'siemer', 'exarkun', 'gjb1002', 'oefe', 'pitrou', 'vstinner', 'giampaolo.rodola', 'christian.heimes', 'dsagal', 'jyasskin', 'matejcik', 'tom_culliton', 'benjamin.peterson', 'djc', 'grossetti', 'r.david.murray', 'yonas', 'jonash', 'flox', 'kanaka', 'santoni', 'shaphiro', 'jlamanna', 'vitaly', 'Krishna Oza']
    pr_nums = []
    priority = 'normal'
    resolution = 'fixed'
    stage = 'needs patch'
    status = 'closed'
    superseder = None
    type = 'behavior'
    url = 'https://bugs.python.org/issue1731717'
    versions = ['Python 2.6', 'Python 3.1', 'Python 2.7', 'Python 3.2']

    dsagal mannequin commented Jun 5, 2007

    Python's subprocess module has a race condition: the Popen() constructor calls the global "_cleanup()" function whenever a Popen object gets created, and that call checks every pending Popen object to see whether its subprocess has exited - i.e. the poll() method is called for every active Popen object.

    Now, if I create Popen object "foo" in one thread and then call foo.wait(), and meanwhile I create another Popen object "bar" in another thread, then foo.poll() might get called by _cleanup() right at the time when my first thread is running foo.wait(). But those are not synchronized - each calls os.waitpid() if returncode is None, and the section which checks returncode and then calls os.waitpid() is not protected as a critical section should be.

    The following code illustrates the problem, and is known to break reasonably consistently with Python 2.4. Changes to subprocess in Python 2.5 seem to address a somewhat related problem, but not this one.

    import sys, os, threading, subprocess, time
    
    class X(threading.Thread):
      def __init__(self, *args, **kwargs):
        super(X, self).__init__(*args, **kwargs)
        self.start()
    
    def tt():
      s = subprocess.Popen(("/bin/ls", "-a", "/tmp"), stdout=subprocess.PIPE,
          universal_newlines=True)
      # time.sleep(1)
      s.communicate()[0]
    
    for i in xrange(1000):
      X(target = tt)
    This typically gives a few dozen errors like these:
    Exception in thread Thread-795:
    Traceback (most recent call last):
      File "/usr/lib/python2.4/threading.py", line 442, in __bootstrap
        self.run()
      File "/usr/lib/python2.4/threading.py", line 422, in run
        self.__target(*self.__args, **self.__kwargs)
      File "z.py", line 21, in tt
        s.communicate()[0]
      File "/usr/lib/python2.4/subprocess.py", line 1083, in communicate
        self.wait()
      File "/usr/lib/python2.4/subprocess.py", line 1007, in wait
        pid, sts = os.waitpid(self.pid, 0)
    OSError: [Errno 10] No child processes

    Note that uncommenting time.sleep(1) fixes the problem. So does wrapping subprocess.poll() and wait() with a lock, and so does removing the call to the global _cleanup() in the Popen constructor.
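
    A minimal caller-side sketch of that locking workaround (hypothetical helper
    names, not part of subprocess; the lock is deliberately never held across a
    blocking waitpid(), polling is used instead):

    import subprocess
    import threading
    import time

    _popen_lock = threading.Lock()   # serializes every path that may reap children

    def locked_popen(*args, **kwargs):
        # Popen.__init__ calls the module-global _cleanup(), which polls every
        # still-active Popen object; taking the lock here keeps those polls
        # from racing with our own polls below.
        with _popen_lock:
            return subprocess.Popen(*args, **kwargs)

    def locked_wait(proc, interval=0.05):
        # Poll under the lock instead of blocking in wait(), so the lock is
        # never held while os.waitpid() blocks.
        while True:
            with _popen_lock:
                rc = proc.poll()
            if rc is not None:
                return rc
            time.sleep(interval)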

    @dsagal dsagal mannequin assigned astrand Jun 5, 2007
    @dsagal dsagal mannequin added the stdlib Python modules in the Lib dir label Jun 5, 2007
    nnorwitz mannequin commented Jun 7, 2007

    Peter, could you take a look at this? Thanks.

    dbaarda mannequin commented Aug 1, 2007

    It appears that subprocess calls a module-global "_cleanup()" whenever opening a new subprocess. This method is meant to reap any child processes that have terminated and have not been explicitly cleaned up. These are processes you would expect to be cleaned up by GC; however, subprocess keeps a list of all spawned subprocesses in _active until they are reaped explicitly, so it can clean up any that are no longer referenced anywhere else.

    The problem is that lots of methods, including poll() and wait(), check self.returncode and then modify it. Any non-atomic read/modify action is inherently non-threadsafe. And _cleanup() calls poll() on all un-reaped child processes. If two threads happen to try to spawn subprocesses at once, these _cleanup() calls collide.

    The way to fix this depends on how thread-safe you want to make it. If you want to share popen objects between threads to wait()/poll() with impunity from any thread, you should add a recursive lock attribute to the Popen instance and have it lock/release it at the start/end of every method call.
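
    A bare-bones sketch of that first approach (a hypothetical Popen subclass for
    illustration, not the module's actual code) might look like:

    import subprocess
    import threading

    class LockedPopen(subprocess.Popen):
        # Every state-changing method runs under a per-instance recursive lock,
        # so wait()/poll() from different threads cannot interleave their
        # check-returncode-then-waitpid sections.
        def __init__(self, *args, **kwargs):
            self._lock = threading.RLock()
            subprocess.Popen.__init__(self, *args, **kwargs)

        def poll(self, *args, **kwargs):
            with self._lock:
                return subprocess.Popen.poll(self, *args, **kwargs)

        def wait(self, *args, **kwargs):
            with self._lock:
                return subprocess.Popen.wait(self, *args, **kwargs)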

    If you only care about using popen objects in one thread at a time, then all you need to fix is the nasty "every popen created calls poll() on every other living popen object regardless of what thread started them,
    and poll() is not threadsafe" behaviour.

    Removing _cleanup() is one way, but it will then not reap child processes that you del'ed all references to (except the one in subprocess._active) before you checked they were done.

    Probably another good idea is to not append and remove popen objects to _active directly, but instead add a Popen.__del__() method that defers GC'ing of non-finished popen objects by adding them to _active. This way, _active only contains un-reaped child processes that were due to be GC'ed. _cleanup() will then be responsible for polling these popen objects and removing them from _active when they are done.

    However, this alone will not fix things because you are still calling _cleanup() from different threads, it is still calling poll() on all these un-reaped processes, and poll() is not threadsafe. So... you either have to make poll() threadsafe (lock/unlock at the beginning/end of the method), or make _cleanup() threadsafe. The reason you can get away with making only _cleanup() threadsafe this way is that _active will contain only processes that are not referenced anywhere else, so you know the only thing that will call poll() on them is the _cleanup() method.

    dbaarda mannequin commented Aug 1, 2007

    Having just gone through that waffly description of the problems and the various levels of fix, I think there are really only two fixes worth considering:

    1. Make Popen instances fully threadsafe. Give them a recursive lock attribute and have every method acquire the lock at the start, and release it at the end.

    2. Decide the "try to reap abandoned children at each Popen" idea was an ugly hack and abandon it. Remove _active and _cleanup(), and document that any child process not explicitly handled to completion will result in zombie child processes.

    gvanrossum (Member) commented:

    I like #2. I don't see any use for threadsafe Popen instances, and I think that any self-respecting long-running server using Popen should properly manage its subprocesses anyway. (And for short-running processes it doesn't really matter if you have a few zombies.)

    One could add a __del__ method that registers zombies to be reaped later, but I don't think it's worth it, and __del__ has some serious issues of its own. (If you really want to do this, use a weak reference callback instead of __del__ to do the zombie registration.)

    astrand mannequin commented Aug 2, 2007

    Suddenly starting to leave Zombies is NOT an option for me. To prevent zombies, all applications using the subprocess module would need to be changed. This is a major breakage of backwards compatibility, IMHO. subprocess (as well as the popen2 module) has prevented zombies automatically from the beginning and I believe they should continue to do so (at least by default).

    A little bit of history: When I wrote the subprocess module, I stole the idea of calling _cleanup whenever a new instance is created from the popen2 module, since I thought it was nice, lightweight and avoided having a __del__ method (which I have some bad experience with, GC-related). This worked great for a long time. On 2006-04-10, however, martin.v.loewis committed patch 1467770 (revision r45234) to solve bug 1460493. It introduces a __del__ method and some more complexity. I must admit that I didn't review this patch in detail, but it looked like thread safety was being taken care of. But apparently it isn't.

    Perhaps reverting to the original popen2 approach would solve the problem? Or does the popen2 module suffer from this bug as well?

    I think that we need to fix this once and for all this time. We should probably also consider adding the option of not having subprocess auto-wait while we are at it, for those who would like to wait() themselves (there should be a tracker item for this, but I can't find it right now).

    Are we tight on time for fixing this bug? I'm out in the forest right now...

    dbaarda mannequin commented Aug 3, 2007

    I just looked at popen2 and it has the same bug. I don't think the __del__()-related changes have anything to do with this. Popen.poll() and _cleanup() are non-threadsafe. When __init__() is called to create subprocesses simultaneously in two different threads, they both call _cleanup() and trample all over each other.

    Removing _cleanup() will not leave zombies for popened processes that are correctly handled to completion. Using methods like communicate() and wait() will handle the process to completion and reap the child.

    Unfortunately, I think a fairly common use-case is people using popen to fire-and-forget subprocesses without using communicate() or wait(). These will not get reaped, and will end up as zombies.

    I cannot think of an easy way to reap abandoned popen instances that is thread safe without using locks. At times like this I wish that the GIL was exposed as a recursive lock... we could have _cleanup() acquire/release the GIL and make it atomic... no more race condition :-)

    dbaarda mannequin commented Aug 3, 2007

    Actually, after thinking about it, there may be a way to make _cleanup() threadsafe without using explicit locks by leveraging the GIL. If _active.pop() and _active.append() are atomic because of the GIL, then _cleanup() can be made threadsafe. To be truly threadsafe it also needs to make sure that _active contains only abandoned popen instances, so that _cleanup() doesn't have its inst.poll() calls collide with other threads that are still using those popen instances. This can be done by having popen instances added to _active only by Popen.__del__(), and only removed by _cleanup() when they are done.
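
    A rough sketch of that scheme (hypothetical module-level names, leaning on
    list.append()/list.pop() being atomic under the GIL):

    import subprocess

    _abandoned = []   # Popen objects dropped by the program before completion

    def _register_abandoned(proc):
        # Would be called from a (hypothetical) Popen.__del__: defer reaping of
        # a child that is still running when its wrapper object is collected.
        if proc.returncode is None:
            _abandoned.append(proc)

    def _cleanup():
        # append()/pop() are atomic under the GIL, and only abandoned objects
        # ever sit in the list, so no other thread can be calling poll() on them.
        for _ in range(len(_abandoned)):
            try:
                proc = _abandoned.pop()
            except IndexError:
                break                     # another thread drained the list
            if proc.poll() is None:
                _abandoned.append(proc)   # still running; re-check next time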

    gvanrossum (Member) commented:

    If you want this fixed in 2.5.x, a new release is just around the corner, and time is very tight. Otherwise, 2.6a1 is half a year off.

    dbaarda mannequin commented Aug 3, 2007

    I just looked at subprocess in svn trunk and it looks like it might already be fixed in both subprocess.py and popen2.py. The part where __del__() adds abandoned Popen instances to _active and _cleanup() removes them is already there. _cleanup() has been made more thread resistant by catching and ignoring exceptions that arise when two _cleanup()s try to remove the same instances. Popen.poll() has been made more robust by having it set self.returncode to an optional _deadstate argument value when poll() gets an os.error from os.waitpid() on a child that gets cleaned up by another thread. I think there are still a few minor race conditions around Popen.poll(), but they will only affect which non-zero returncode you get... it's not so much "thread safe" as "thread robust".

    I think the _deadstate argument approach to try and make Popen.poll() thread-robust is a bit hacky. I would rather see _cleanup() be properly threadsafe by having it remove the inst from _active before processing it and re-adding it back if it is still not finished. However, as it is now it looks like it is fixed...

    dbaarda mannequin commented Aug 6, 2007

    I can create a patch to make current head a bit cleaner, if anyone is interested...

    astrand mannequin commented Aug 6, 2007

    I can create a patch to make current head a bit cleaner, if anyone is
    interested...

    Sure, this would be great.

    I would also love a test case that reproduces this problem.

    gjb1002 mannequin commented Aug 22, 2007

    There should be test cases aplenty - this bug was first reported in March 2006 and this is one of four incarnations of it. See also 1753891, 1754642, 1183780

    I think all of these contain testcases (though two roughly coincide) and any fix would be wise to try them all out...

    gvanrossum (Member) commented:

    See http://bugs.python.org/issue1236 for a good repeatable testcase.

    tomculliton mannequin commented Nov 20, 2007

    This or some variant also shows up with scons
    (http://scons.tigris.org/issues/show_bug.cgi?id=1839) leading to some
    nasty intermittent build failures. Neal may remember how we addressed
    this for a Process class in a past life. Basically it's OK to collect
    all the child exit codes if you record the results and return them when
    requested. This would mean that waitpid and the like would have to check
    a cached list of PIDs and exit statuses before actually waiting. Don't
    know if that's possible/any help in this case or not...

    Traceback (most recent call last):
      File "/usr/lib/scons-0.97.0d20070918/SCons/Taskmaster.py", line 194, in execute
        self.targets[0].build()
      File "/usr/lib/scons-0.97.0d20070918/SCons/Node/__init__.py", line 365, in build
        stat = apply(executor, (self,), kw)
      File "/usr/lib/scons-0.97.0d20070918/SCons/Executor.py", line 139, in __call__
        return self.do_execute(target, kw)
      File "/usr/lib/scons-0.97.0d20070918/SCons/Executor.py", line 129, in do_execute
        status = apply(act, (self.targets, self.sources, env), kw)
      File "/usr/lib/scons-0.97.0d20070918/SCons/Action.py", line 332, in __call__
        stat = self.execute(target, source, env)
      File "/usr/lib/scons-0.97.0d20070918/SCons/Action.py", line 479, in execute
        result = spawn(shell, escape, cmd_line[0], cmd_line, ENV)
      File "/usr/lib/scons-0.97.0d20070918/SCons/Platform/posix.py", line 104, in spawnvpe_spawn
        return exec_spawnvpe([sh, '-c', string.join(args)], env)
      File "/usr/lib/scons-0.97.0d20070918/SCons/Platform/posix.py", line 68, in exec_spawnvpe
        stat = os.spawnvpe(os.P_WAIT, l[0], l, env)
      File "/sw/pkgs/python-2.3.5_00/lib/python2.3/os.py", line 553, in spawnvpe
        return _spawnvef(mode, file, args, env, execvpe)
      File "/sw/pkgs/python-2.3.5_00/lib/python2.3/os.py", line 504, in _spawnvef
        wpid, sts = waitpid(pid, 0)
    OSError: [Errno 10] No child processes
    scons: building terminated because of errors.

    tomculliton mannequin commented Nov 27, 2007

    Looking at the subprocess.py code, it occurred to me that it never checks
    whether the value of self.pid returned by os.fork is -1; this would mean
    that later it runs waitpid with -1 as the first argument, putting it into
    promiscuous (wait for any process in the group) mode. This seems like a
    bad idea...

    gvanrossum (Member) commented:

    Looking at the subprocess.py code it occurred to me that it never
    checks if the value of self.pid returned by os.fork is -1

    What makes you think os.fork() can return -1? It's not C, you know...

    tomculliton mannequin commented Nov 27, 2007

    Good question. The documentation I was reading was mute on the subject
    so I made a "reasonable" guess. Does it throw an exception instead?

    gvanrossum (Member) commented:

    Yes, like all system calls in the os module.

    gpshead commented Mar 20, 2008

    """Basically it's OK to collect
    all the child exit codes if you record the results and return them when
    requested. This would mean that waitpid and the like would have to check
    a cached list of PIDs and exit statuses before actually waiting."""

    note that this would still have problems. PIDs are recycled by the OS
    so as soon as you've waited on one and the process dies, the OS is free
    to launch a new process using it. If the new process happens to be
    another one of ours launched by subprocess that would be a problem.
    Make the cached map of pids -> exit codes be a map of pids -> [list of
    exit codes] instead?

    gpshead commented Mar 20, 2008

    I am unable to reproduce this problem in today's trunk (2.6a) on OSX 10.4.

    tomculliton mannequin commented Mar 24, 2008

    I'm not sure what the POSIX standards say on this (and MS-Windows may go
    its own contrary way), but for most real systems the PID is a running
    count (generally 32 bit or larger today) which would have to cycle all
    the way around to repeat. It's actually done this way on purpose to
    provide the longest possible time between IDs getting reused. I suspect
    that having it cycle and reuse an ID isn't an issue in practice, and
    keeping a list of results leaves you with the problem of figuring out
    which PID 55367 they're talking about... Not to mention that if you're
    leaving child process results unchecked for long enough for the PID
    counter to cycle, there are other problems with the application. ;-)

    exarkun mannequin commented Mar 24, 2008

    Which system uses a 32 bit PID? Not Linux, or FreeBSD, or OS X - they
    all use 16 bit PIDs.

    tomculliton mannequin commented Mar 24, 2008

    AIX, HP-UX, Solaris, 64 bit Linux, ... Even in the Linux x86 header
    files there's a mix of int and short. The last time I had to do the
    math on how long it would take the PID to cycle was probably on an AIX
    box and it was a very long time.

    exarkun mannequin commented Mar 24, 2008

    PIDs cycle quite frequently. You must be thinking of something else.
    Even 64 bit Linux has a maximum PID of 2 ** 15 - 1.

    gpshead commented Mar 24, 2008

    fwiw, I see pids cycle in a reasonable amount of time on modern
    production linux systems today. it's a fact of life, we should deal with
    it. :)

    bitdancer (Member) commented:

    I'm afraid I'm not going to be installing exim in order to test this.
    Perhaps someone else will be interested enough to do so, perhaps not :)

    Copying files into /usr/lib/pythonx.x is a very odd thing to do, by the
    way (though it should have no impact on the bug).

    kanaka mannequin commented Sep 17, 2009

    I can reproduce the problem (or at least get the same symptom) by doing
    this (in 2.4.6, 2.5.4 and 2.6.2):

    import subprocess, signal
    signal.signal(signal.SIGCLD, signal.SIG_IGN)
    subprocess.Popen(['echo','foo']).wait()

    The echo command completes, but the subprocess command throws the "no
    child" exception. It seems like the subprocess command should either:

    • detect that SIGCLD is not set correctly and throw a more informative
      exception before running the command.
    • override SIGCLD during the running of the sub-command. Not sure what
      the UNIX correctness implications of this are.
    • or allow the child to zombie without throwing an exception. The fact
      that the programmer has overridden SIGCLD sort of implies that reaping
      of zombie children has been switched off.

    I don't have good enough understanding of the underlying implementation
    to know if this is a reproducer as requested or if this should be a new
    bug. Please advise and I will file a new bug if requested.

    This is a follow-up to trying to resolve this problem in the PEP-3143
    python-daemon module. See this thread:
    http://groups.google.com/group/comp.lang.python/browse_thread/thread/9a853d0308c8e55a
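
    A sketch of the second option above (temporarily restoring the default
    SIGCHLD disposition around the call; note that signal dispositions are
    process-global, so this is itself not thread safe, and other children
    spawned during the window will not be auto-reaped):

    import signal
    import subprocess

    old_handler = signal.signal(signal.SIGCHLD, signal.SIG_DFL)
    try:
        # With the default disposition in place, waitpid() can see the child.
        rc = subprocess.Popen(['echo', 'foo']).wait()
    finally:
        signal.signal(signal.SIGCHLD, old_handler)
    print(rc)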

    florentx mannequin commented Mar 4, 2010

    Some buildbot hit this bug:
    http://bugs.python.org/issue7805#msg100428

    florentx mannequin commented Mar 6, 2010

    Sometimes IA64 Ubuntu bot fails on this one.
    http://www.python.org/dev/buildbot/all/builders/ia64%20Ubuntu%20trunk/builds/571

    florentx mannequin commented Mar 8, 2010

    Previous buildbot failures were in test_multiprocessing:
    http://bugs.python.org/issue1731717#msg100430

    Now it should be fixed:

    • r78777, r78787, r78790 on 2.x
    • r78798 on 3.x

    yonas mannequin commented Mar 8, 2010

    Florent,

    Have you tested any of the sample test programs mentioned in this bug report? For example, the one by Joel Martin (kanaka).

    I'd suggest to test those first before marking this issue as fixed.

    • Yonas

    florentx mannequin commented Mar 8, 2010

    >>> import subprocess, signal
    >>> signal.signal(signal.SIGCLD, signal.SIG_IGN)
    0
    >>> subprocess.Popen(['echo','foo']).wait()
    foo
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "./Lib/subprocess.py", line 1229, in wait
        pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0)
      File "./Lib/subprocess.py", line 482, in _eintr_retry_call
        return func(*args)
    OSError: [Errno 10] No child processes

    You're right, I fixed something giving the same ECHILD error in multiprocessing. But it was not related.

    yonas mannequin commented Mar 8, 2010

    http://groups.google.com/group/comp.lang.python/browse_thread/thread/9a853d0308c8e55a

    "I'm also glad to see a test case that causes exactly the same error with or without the presence of a ‘daemon.DaemonContext’.

    Further research shows that handling of ‘SIGCLD’ (or ‘SIGCHLD’) is fairly
    OS-specific, with “ignore it” or “handle it specifically” being correct
    on different systems. I think Python's default handling of this signal
    is already good (modulo bug bpo-1731717 to be addressed in ‘subprocess’).

    So I will apply a change similar to Joel Martin's suggestion, to default
    to avoid touching the ‘SIGCLD’ signal at all, and with extra notes in
    the documentation that anyone using child processes needs to be wary of
    signal handling.

    This causes the above test case to succeed; the output file contains::
    =====
    Child process via os.system.
    Child process via 'subprocess.Popen'.
    Parent daemon process.
    Parent daemon process done.
    ====="

    -- By Ben Finney

    yonas mannequin commented Mar 8, 2010

    Ben Finney's comment suggests to me that this bug is being ignored. Am I wrong?

    "with extra notes in the documentation that anyone using child processes needs to be wary of signal handling."

    Why should they be wary? We should just fix this bug.

    yonas mannequin commented Mar 8, 2010

    By the way, in three months from today, this bug will be 3 years old.....

    gpshead commented Mar 8, 2010

    I really don't care about the age of a bug. This bug is young. I've fixed many bugs over twice its age in the past.

    Regardless, I've got some serious subprocess internal refactoring changes coming in the very near future to explicitly deal with thread safety issues. I'm adding this one to my list of things to tackle.

    @gpshead gpshead assigned gpshead and unassigned astrand Mar 8, 2010
    yonas mannequin commented Mar 8, 2010

    Gregory,

    Awesome! Approx. how long until we hear back from you in this report?

    santoni mannequin commented May 26, 2010

    I have exactly the same problem. Is there a thread-safe alternative to execute subprocesses in threads?

    shaphiro mannequin commented Aug 30, 2010

    I'm using an old Plone/Zope Product, PHParser, that uses the popen2 call ... same problem for me.
    Is there a thread-safe alternative to execute subprocesses in threads? I need a patch!!! thanks in advance!!!

    gpshead commented Oct 4, 2010

    A workaround for those still having problems with this:

    stub out subprocess._cleanup with a no-op method.

    It is only useful if your app is ever using subprocess and forgetting to call wait() on Popen objects before they are deleted. If you are, you can keep a reference to the old _cleanup() method and periodically call it on your own.

    http://bugs.python.org/issue1236 effectively demonstrates this.
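
    In code, that workaround could look roughly like this (sketch only;
    reap_abandoned_children is a hypothetical hook you would call yourself):

    import subprocess

    _original_cleanup = subprocess._cleanup
    subprocess._cleanup = lambda: None   # Popen() no longer polls its peers

    def reap_abandoned_children():
        # Call this periodically from one well-known thread if the application
        # ever drops Popen objects without wait()ing on them first.
        _original_cleanup()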

    jlamanna mannequin commented Nov 27, 2010

    stubbing out subprocess._cleanup does not work around the problem from this example on 2.6.5:

    import subprocess, signal
    subprocess._cleanup = lambda: None
    
    signal.signal(signal.SIGCLD, signal.SIG_IGN)
    subprocess.Popen(['echo','foo']).wait()
    james@hyla:~$ python tt.py
    foo
    Traceback (most recent call last):
      File "tt.py", line 5, in <module>
        subprocess.Popen(['echo','foo']).wait()
      File "/usr/lib/python2.6/subprocess.py", line 1170, in wait
        pid, sts = _eintr_retry_call(os.waitpid, self.pid, 0)
      File "/usr/lib/python2.6/subprocess.py", line 465, in _eintr_retry_call
        return func(*args)
    OSError: [Errno 10] No child processes

    This bug still prevents subprocess from being used inside of a daemon where SIGCLD is being caught to reap zombie processes.

    gpshead commented Dec 14, 2010

    r87233 fixes the OSError escaping from wait() issue when SIGCLD is set to be ignored. (to appear in 3.2beta1; it is a candidate for backporting to 3.1 and 2.7)

    gpshead commented Dec 14, 2010

    sorry, i meant 3.2beta2 above.
    release27-maint: r87234 targeting 2.7.2
    release31-maint: r87235 targeting 3.1.4

    pitrou commented Dec 14, 2010

    It seems the canonical spelling is SIGCHLD. SIGCLD doesn't exist everywhere and it produces test failures under OS X:

    http://www.python.org/dev/buildbot/all/builders/AMD64%20Leopard%203.x
    http://www.python.org/dev/buildbot/all/builders/AMD64%20Snow%20Leopard%203.x

    FWIW, this is what POSIX says:

    “Some implementations, including System V, have a signal named SIGCLD, which is similar to SIGCHLD in 4.2 BSD. POSIX.1 permits implementations to have a single signal with both names. POSIX.1 carefully specifies ways in which conforming applications can avoid the semantic differences between the two different implementations. The name SIGCHLD was chosen for POSIX.1 because most current application usages of it can remain unchanged in conforming applications. SIGCLD in System V has more cases of semantics that POSIX.1 does not specify, and thus applications using it are more likely to require changes in addition to the name change.”

    http://pubs.opengroup.org/onlinepubs/9699919799/xrat/V4_xsh_chap02.html

    matejcik mannequin commented Aug 16, 2011

    please check my logic here, but the patched code seems to throw away perfectly valid return codes:
    in wait(), self._handle_exitstatus(sts) gets called unconditionally, and it resets self.returncode also unconditionally.
    now, if a _cleanup() already did _internal_poll() and set self.returncode that way, that value is lost when wait() catches the ECHILD, in the one place where it actually matters, because it then sets sts=0 for the _handle_exitstatus() call

    IMHO it could be fixed by moving _handle_exitstatus to the try: section, and returning "self.returncode or 0" or something like that
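
    Read that way, the intent is something like the helper below (illustrative
    only, written against a plain Popen instance rather than as a patch to the
    module's wait()):

    import errno
    import os

    def wait_preserving_returncode(proc):
        # Never clobber a returncode that _cleanup()/poll() in another thread
        # may already have recorded for this child.
        if proc.returncode is None:
            try:
                _pid, status = os.waitpid(proc.pid, 0)
                # Translate the raw status roughly the way Popen does.
                if os.WIFSIGNALED(status):
                    proc.returncode = -os.WTERMSIG(status)
                elif os.WIFEXITED(status):
                    proc.returncode = os.WEXITSTATUS(status)
            except OSError as e:
                if e.errno != errno.ECHILD:
                    raise
                # Child already reaped elsewhere; keep whatever returncode
                # poll() recorded, falling back to 0 only as a last resort.
                if proc.returncode is None:
                    proc.returncode = 0
        return proc.returncode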

    vitaly mannequin commented Sep 21, 2015

    Is this issue fixed in python 2.7.10? I am experiencing strange race conditions in code that combines threads with multiprocessing pool, whereby a thread is spawned by each pool worker. Running on latest Mac OS X.

    kmoza mannequin commented Feb 18, 2016

    Hi, Could anyone help here to identify in which Python release the bug is fixed. I am unable to deduce from the bug tracker interface in which release this issue is fixed.

    Regards.

    vstinner (Member) commented:

    "Hi, Could anyone help here to identify in which Python release the bug is fixed. I am unable to deduce from the bug tracker interface in which release this issue is fixed."

    Oh, this is an old issue :-)

    msg123946: "r87233 fixes the OSError escaping from wait() issue when SIGCLD is set to be ignored. (to appear in 3.2beta1; it is a candidate for backporting to 3.1 and 2.7)"

    msg123951: "sorry, i meant 3.2beta2 above.
    release27-maint: r87234 targeting 2.7.2
    release31-maint: r87235 targeting 3.1.4"

    => the fix is in Python 2 >= 2.7.2, Python 3.1 >= v3.1.4, Python 3 >= 3.2.4.

    I hope that you are using a more recent Python version than these versions :-)

    @ezio-melotti ezio-melotti transferred this issue from another repository Apr 10, 2022