
Author neologix
Recipients Jimbofbx, asksol, jkeating, jnoller, neologix
Date 2011-04-13.08:19:50
SpamBayes Score 9.123366e-09
Marked as misclassified No
Message-id <1302682792.7.0.614141789982.issue10332@psf.upfronthosting.co.za>
In-reply-to
Content
This problem arises because the pool's close() method is called before all the tasks have completed; adding a sleep(1) before pool.close() avoids the lockup.
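For reference, here is a minimal sketch of the failure mode (my own illustration based on the description above, not the exact script attached to the issue):

# Assumed reproducer: with maxtasksperchild set, workers retire after one
# task, and once close() has made the worker handler thread exit they are
# never replaced, so the remaining tasks never run and join() hangs.
import multiprocessing
import time

def work(x):
    time.sleep(0.1)
    return x * x

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=2, maxtasksperchild=1)
    results = [pool.apply_async(work, (i,)) for i in range(20)]
    pool.close()   # called before the tasks have completed
    pool.join()    # may hang on affected versions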
The root cause is that close() makes the worker handler thread exit: since the maxtasksperchild argument is used, workers exit once they have processed their maximum number of tasks. But because the worker handler thread has exited, it no longer maintains the pool of workers, so the remaining tasks are never processed and the task handler thread waits indefinitely (it waits until the cache is empty).
The solution is to prevent the worker handler thread from exiting until the cache has been drained (unless the pool is terminated, in which case it must exit right away).
Attached are a patch and a relevant test.
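For illustration only (the attached patch is authoritative), the idea is to change the worker handler's loop condition along these lines, so it keeps maintaining the pool while jobs remain in the cache:

# Simplified sketch of Pool._handle_workers; not the exact patch.
@staticmethod
def _handle_workers(pool):
    thread = threading.current_thread()

    # Keep maintaining workers until the cache is drained, unless the
    # pool is being terminated.
    while thread._state == RUN or (pool._cache and thread._state != TERMINATE):
        pool._maintain_pool()
        time.sleep(0.1)

    # Tell the task handler it can exit.
    pool._taskqueue.put(None)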

Note: I noticed that there are some thread-unsafe operations (the cache can be modified from different threads, and thread states are also modified from different threads). While this isn't an issue with the current CPython implementation (because of the GIL), I wonder whether this should be fixed.