
Author: charlesc
Recipients: charlesc
Date: 2009-09-22.02:28:04
SpamBayes Score: 5.2413074e-12
Marked as misclassified: No
Message-id: <1253586512.1.0.332909972993.issue6963@psf.upfronthosting.co.za>
In-reply-to:
Content:
Worker processes with multiprocessing.Pool live for the duration of the
Pool.  If the tasks they run happen to leak memory (from a C extension
module, from creating cycles of unreachable objects, etc.) or leak open
files or other resources, there's no easy way to clean them up.

Similarly, if one task passed to the pool allocates a large amount of
memory, but further tasks are small, that additional memory isn't
returned to the system because the process involved hasn't exited.
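To make that concrete, here is a minimal illustrative sketch (the
big_task/small_task functions and the ~100 MB figure are made up, and it
uses the modern Pool context-manager syntax rather than the 2.6 API): the
same long-lived workers serve every task, so whatever one task allocates
or leaks stays resident in those processes:

    import multiprocessing
    import os

    def big_task(_):
        # Allocate roughly 100 MB inside the worker.  The list is freed
        # when the function returns, but the worker process keeps running
        # and may not hand the memory back to the operating system.
        data = [b"x" * 1024 for _ in range(100000)]
        return os.getpid(), len(data)

    def small_task(_):
        return os.getpid()

    if __name__ == "__main__":
        with multiprocessing.Pool(processes=2) as pool:
            big_pids = {pid for pid, _ in pool.map(big_task, range(4))}
            small_pids = set(pool.map(small_task, range(4)))
            # The same long-lived workers serve both rounds of tasks, so
            # anything big_task accumulated stays with those processes.
            print(big_pids, small_pids)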

A common approach to this problem (as used by Apache, mod_wsgi, and
various other software) is to allow worker processes to exit (and be
replaced with fresh processes) after completing a specified amount of
work.  The attached patch (against Python 2.6.2, but applies to various
other versions with some fuzz) implements this as optional new behaviour
in multiprocessing.Pool().  A new optional argument specifies the maximum
number of tasks a worker process performs before it exits and is replaced
with a fresh worker process.
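
For reference, later Python releases (2.7 and 3.2 onward) expose this
behaviour as the maxtasksperchild argument to multiprocessing.Pool; a
minimal usage sketch, assuming that parameter name (the attached patch may
spell the argument differently):

    import multiprocessing
    import os

    def work(item):
        # Stand-in for a task that leaks memory or other per-process
        # resources.
        return os.getpid(), item * item

    if __name__ == "__main__":
        # With maxtasksperchild=10, each worker exits after completing 10
        # tasks and is transparently replaced by a fresh process, releasing
        # whatever the old worker had accumulated.
        with multiprocessing.Pool(processes=2, maxtasksperchild=10) as pool:
            results = pool.map(work, range(100), chunksize=1)
            worker_pids = {pid for pid, _ in results}
            # 100 tasks at 10 tasks per child comes to roughly 10 distinct
            # worker processes over the life of the pool.
            print(len(worker_pids), "distinct worker PIDs observed")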
History
Date                 User      Action  Args
2009-09-22 02:28:32  charlesc  set     recipients: + charlesc
2009-09-22 02:28:32  charlesc  set     messageid: <1253586512.1.0.332909972993.issue6963@psf.upfronthosting.co.za>
2009-09-22 02:28:11  charlesc  link    issue6963 messages
2009-09-22 02:28:11  charlesc  create