Title: multiprocessing (and concurrent.futures) doesn't detect killed processes
Type:
Stage:
Components: Library (Lib)
Versions: Python 3.2, Python 3.3
Status: closed
Resolution: duplicate
Dependencies:
Superseder:
Assigned To:
Nosy List: asksol, bquinlan, haypo, jnoller, pitrou
Priority: normal
Keywords:

Created on 2011-03-24 16:09 by pitrou, last changed 2011-03-31 10:43 by haypo. This issue is now closed.

Messages (6)
msg131995 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2011-03-24 16:09
If you do:

./python -c "from concurrent.futures import *; from time import *; t = ProcessPoolExecutor(1); t.submit(sleep, 60)"

and then kill the child process, the parent process doesn't notice and waits endlessly for the child to return the results.

I'm using concurrent.futures here but I assume the bug (or limitation) is on the multiprocessing side?
msg131998 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2011-03-24 16:16
In the following example, if I kill the child process, the parent is immediately done:
---
from os import getpid
from time import sleep
from multiprocessing import Process

def f(sec):
    print("child %s: wait %s seconds" % (getpid(), sec))
    sleep(sec)

if __name__ == '__main__':
    print("parent %s: wait child" % (getpid(),))
    p = Process(target=f, args=(30,))
    p.start()
    p.join()
---
msg131999 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2011-03-24 16:18
Le jeudi 24 mars 2011 à 16:16 +0000, STINNER Victor a écrit :
> STINNER Victor <victor.stinner@haypocalc.com> added the comment:
> 
> In the following example, if I kill the child process, the parent is immediately done:
> ---
> from os import getpid
> from time import sleep
> from multiprocessing import Process

concurrent.futures uses a multiprocessing.Queue to get the function
results back. You should use a similar setup in your script.
msg132004 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2011-03-24 16:33
In the following example, the parent doesn't react when the child process is killed:
-----
from os import getpid
from time import sleep, time
from multiprocessing import Pool

def f(sec):
    print("child %s: wait %s seconds" % (getpid(), sec))
    sleep(sec)

if __name__ == '__main__':
    print("parent %s: wait child" % (getpid(),))
    pool = Pool(processes=1)
    result = pool.apply_async(f, [60])
    print(result.get(timeout=120))
    print("parent: done")
-----
msg132005 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2011-03-24 16:37
It's possible to stop the parent with CTRL+C; the interleaved tracebacks below (parent process and pool worker) show where each one blocks:

$ ./python y.py 
parent 26706: wait child
child 26707: wait 60 seconds
^CProcess PoolWorker-2:
Traceback (most recent call last):
  File "y.py", line 13, in <module>
Traceback (most recent call last):
  File "/home/haypo/prog/HG/cpython/Lib/multiprocessing/process.py", line 263, in _bootstrap
    print(result.get(timeout=120))
  File "/home/haypo/prog/HG/cpython/Lib/multiprocessing/pool.py", line 539, in get
    self.run()
  File "/home/haypo/prog/HG/cpython/Lib/multiprocessing/process.py", line 118, in run
    self._target(*self._args, **self._kwargs)
  File "/home/haypo/prog/HG/cpython/Lib/multiprocessing/pool.py", line 102, in worker
    self.wait(timeout)
  File "/home/haypo/prog/HG/cpython/Lib/multiprocessing/pool.py", line 534, in wait
    task = get()
  File "/home/haypo/prog/HG/cpython/Lib/multiprocessing/queues.py", line 378, in get
    return recv()
KeyboardInterrupt
    self._cond.wait(timeout)
  File "/home/haypo/prog/HG/cpython/Lib/threading.py", line 241, in wait
    gotit = waiter.acquire(True, timeout)
KeyboardInterrupt
[61207 refs]
msg132645 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2011-03-31 10:43
This issue is a duplicate of #9205.
History
Date                 User    Action  Args
2011-03-31 10:43:14  haypo   set     status: open -> closed
                                     resolution: duplicate
                                     messages: + msg132645
2011-03-24 16:37:15  haypo   set     messages: + msg132005
2011-03-24 16:35:18  haypo   set     title: concurrent.futures (or multiprocessing?) doesn't detect killed processes -> multiprocessing (and concurrent.futures) doesn't detect killed processes
2011-03-24 16:33:13  haypo   set     messages: + msg132004
2011-03-24 16:18:21  pitrou  set     messages: + msg131999
2011-03-24 16:16:54  haypo   set     messages: + msg131998
2011-03-24 16:09:48  haypo   set     nosy: + haypo
2011-03-24 16:09:12  pitrou  create