This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

Classification
Title:      multiprocessing with maxtasksperchild: bug in control logic?
Stage:      resolved
Components: Library (Lib)
Versions:   Python 2.7

Process
Status:     closed
Resolution: duplicate
Superseder: Multiprocessing maxtasksperchild results in hang (view: 10332)
Nosy List:  asksol, jnoller, neologix, ranga
Priority:   normal

Created on 2012-03-25 12:12 by ranga, last changed 2022-04-11 14:57 by admin. This issue is now closed.

Messages (3)
msg156754 - (view) Author: (ranga) Date: 2012-03-25 12:12
I asked this on Stack Overflow and discovered from the discussions there that it might be a Python bug.

http://stackoverflow.com/questions/9859222/python-multiprocessing-with-maxtasksperchild

HOW TO REPRODUCE
================

This seemingly simple program doesn't work for me unless I remove the maxtasksperchild parameter. What am I doing wrong?

from multiprocessing import Pool
import os
import sys

def f(x):
  print "pid: ", os.getpid(), " got: ", x
  sys.stdout.flush()
  return [x, x+1]

def cb(r):
  print "got result: ", r

if __name__ == '__main__':
  pool = Pool(processes=1, maxtasksperchild=9)
  keys = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
  result = pool.map_async(f, keys, chunksize=1, callback=cb)
  pool.close()
  pool.join()

When I run it, I get:

$ python doit.py
pid:  6409  got:  1
pid:  6409  got:  2
pid:  6409  got:  3
pid:  6409  got:  4
pid:  6409  got:  5
pid:  6409  got:  6
pid:  6409  got:  7
pid:  6409  got:  8
pid:  6409  got:  9

And then it hangs; a new worker to process the 10th element never gets spawned.

In another terminal, I see:

$ ps -C python
  PID TTY          TIME CMD
 6408 pts/11   00:00:00 python
 6409 pts/11   00:00:00 python <defunct>

This is on Ubuntu 11.10 running Python 2.7.2+ (installed from Ubuntu packages).

MY HYPOTHESIS
=============

This is based on skimming the code and turning on logging.

The call to pool.close() (which the docs say I should make before calling pool.join()) sets the flag pool._state to CLOSE. The function Pool._handle_workers only kicks off new worker processes while that flag is RUN, so once the first worker exits after its ninth task, no replacement worker is ever started and everything gets stuck.

One workaround for the bug is to sleep for about 10 seconds after the map_async call, before pool.close() is called. Then everything works as it should, because pool._state is not set to CLOSE until all the jobs have finished.

Sorry if I missed something, didn't RTFM etc.
msg156820 - (view) Author: Charles-François Natali (neologix) * (Python committer) Date: 2012-03-26 13:15
> Sorry if I missed something, didn't RTFM etc.

You didn't: it's a duplicate of #10332 (which has already been fixed).
msg156827 - (view) Author: (ranga) Date: 2012-03-26 16:10
Thanks for the quick response, neologix. I copied the multiprocessing/ directory from the latest cpython 2.7 branch into my project dir and it works as advertised now!
History
Date                 User      Action  Args
2022-04-11 14:57:28  admin     set     github: 58612
2012-03-26 16:10:23  ranga     set     messages: + msg156827
2012-03-26 13:15:21  neologix  set     status: open -> closed
                                       superseder: Multiprocessing maxtasksperchild results in hang
                                       nosy: + neologix
                                       messages: + msg156820
                                       resolution: duplicate
                                       stage: resolved
2012-03-26 12:55:47  jnoller   set     nosy: + jnoller, asksol
2012-03-25 12:12:55  ranga     set     title: multiprocessing with maxtasksperchild: bug in state machine? -> multiprocessing with maxtasksperchild: bug in control logic?
2012-03-25 12:12:24  ranga     create