
Author sbt
Recipients asksol, jnoller, pitrou, sbt, schlesin
Date 2013-03-31.00:12:37
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1364688757.92.0.371847024594.issue6653@psf.upfronthosting.co.za>
In-reply-to
Content
I don't think this is a bug -- a process started with fork() should nearly always exit with os._exit().  In any case, using sys.exit() does *not* guarantee that all deallocators will be called.  To be sure of cleanup at exit you could use the (undocumented) multiprocessing.util.Finalize().
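A minimal sketch of the point about fork() (Unix only; the exit code 7 is just an arbitrary value for illustration): the child leaves via os._exit(), skipping Python's normal shutdown, and the parent reads the status back:

```python
import os

pid = os.fork()
if pid == 0:
    # In a fork()ed child, skip Python's normal shutdown (atexit
    # handlers, double-flushing of inherited buffers, etc.) by
    # calling os._exit() instead of sys.exit().
    os._exit(7)

# Parent: reap the child and inspect its exit status.
_, status = os.waitpid(pid, 0)
print(os.WEXITSTATUS(status))
```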

Note that Python 3.4 will probably offer the choice of using os.fork()/os._exit() or _posixsubprocess.fork_exec()/sys.exit() for starting/exiting processes on Unix.

Sturla's scheme for doing reference counting of shared memory is also flawed because a reference count can fall to zero while the shared memory object is still sitting in a pipe/queue, causing the memory to be prematurely deallocated.

I think a more reliable scheme would be to use fds created with shm_open() and immediately unlinked with shm_unlink().  Then one could use the existing infrastructure for fd passing and let the operating system handle the reference counting.  This would prevent leaked shared memory (unless the process is killed between the shm_open() and the shm_unlink()).  I would like to add something like this to multiprocessing.
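The unlink-then-pass-fds idea can be sketched without shm_open() itself (POSIX shared memory isn't in the stdlib); an ordinary temp file is used here as a stand-in, since the kernel's behaviour is the same: once the name is unlinked, the storage lives exactly as long as some descriptor refers to it.

```python
import os
import tempfile

# Create a file, then immediately unlink its name -- analogous to
# shm_open() followed by shm_unlink().  From here on, only open
# descriptors keep the storage alive.
fd, path = tempfile.mkstemp()
os.unlink(path)

os.write(fd, b"shared data")

# Duplicating the descriptor is the OS-level "incref"; passing an fd
# over a Unix socket (multiprocessing's fd-passing machinery) has the
# same effect in the receiving process.
fd2 = os.dup(fd)
os.close(fd)                 # drop the original reference

os.lseek(fd2, 0, os.SEEK_SET)
data = os.read(fd2, 100)
print(data)
os.close(fd2)                # last reference gone: kernel reclaims the storage
```

Because the kernel, not user code, tracks the open descriptors, a crashing process cannot leak the memory the way a userspace reference count can.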
History
Date User Action Args
2013-03-31 00:12:37 sbt set recipients: + sbt, pitrou, jnoller, asksol, schlesin
2013-03-31 00:12:37 sbt set messageid: <1364688757.92.0.371847024594.issue6653@psf.upfronthosting.co.za>
2013-03-31 00:12:37 sbt link issue6653 messages
2013-03-31 00:12:37 sbt create