
Author gdb
Recipients Albert.Strasheim, asksol, gdb, jnoller, vlasovskikh
Date 2010-08-27.19:16:59
Message-id <AANLkTi=zNDtPc2cmWxgYJ3jiwOYf3HdBOfyNc_d1Ztum@mail.gmail.com>
In-reply-to <1282934865.24.0.138816152346.issue9205@psf.upfronthosting.co.za>
Content
Ah, you're right--sorry, I had misread your code.  I hadn't noticed
the use of worker_pids, which explains what you're doing with the
ACKs.  Now, the problem is that I think doing it this way introduces
some races (which is why I introduced the ACK from the task handler in
my most recent patch).  What happens if:
- A worker removes a job from the queue and is killed before sending
an ACK (the sketch below simulates this).
- A worker removes a job from the queue, sends an ACK, and is then
killed.  Due to bad luck with the scheduler, the parent cleans up the
worker before it has recorded the worker's pid.
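
To make the first race concrete, here is a minimal sketch (my own
reduction, not code from either patch) that simulates a worker dying
between dequeuing a job and ACKing it.  The task/ACK queue pair is
hypothetical, and it assumes a Unix platform where a process can
SIGKILL itself:
"""
import multiprocessing
import os
import signal

def worker(task_q, ack_q):
    task = task_q.get()                   # the job leaves the shared queue...
    os.kill(os.getpid(), signal.SIGKILL)  # ...and the worker dies before ACKing
    ack_q.put(task)                       # never reached

if __name__ == '__main__':
    task_q = multiprocessing.Queue()
    ack_q = multiprocessing.Queue()
    task_q.put('job-1')
    p = multiprocessing.Process(target=worker, args=(task_q, ack_q))
    p.start()
    p.join()
    # The job is gone from the task queue, but no ACK ever arrived, so
    # the parent cannot distinguish a lost job from one never picked up.
    print('ACK received:', not ack_q.empty())  # False
"""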

You're also now reading from self._cache in one thread but writing to
it in another.  What happens if a worker sends a result and is then
killed?  Again, I haven't thought too hard about what will happen
here, so if you have a correctness argument for why it's safe as-is
I'd be happy to hear it.
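
For illustration, here is a minimal sketch of the kind of
check-then-act hazard I have in mind, with a plain dict standing in
for self._cache (the thread roles and job id are hypothetical, not
the pool's actual internals):
"""
import threading

# Hypothetical stand-in for self._cache: job id -> pending result.
cache = {7: 'pending-result'}

def result_handler():
    # Result-handler thread: a result for job 7 arrives.
    if 7 in cache:                            # check...
        try:
            print('delivered', cache.pop(7))  # ...then act
        except KeyError:
            # The reaper won the race between the check and the pop.
            print('job vanished mid-delivery')

def reaper():
    # Reaper thread: the worker died, so discard its outstanding job.
    cache.pop(7, None)

t1 = threading.Thread(target=result_handler)
t2 = threading.Thread(target=reaper)
t1.start(); t2.start()
t1.join(); t2.join()
"""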

Also, I just noticed that your current way of dealing with child
deaths doesn't play well with the maxtasksperchild argument.  In
particular, try running:
"""
import multiprocessing
def foo(x):
  return x
multiprocessing.Pool(1, maxtasksperchild=1).map(foo, [1, 2, 3, 4])
"""
(This should be an easy fix.)