This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

Author DavidDecotigny
Recipients DavidDecotigny
Date 2008-09-05.22:35:24
SpamBayes Score 2.9059533e-12
Marked as misclassified No
Message-id <1220654126.75.0.335381648399.issue3789@psf.upfronthosting.co.za>
In-reply-to
Content
With the attached script, calling demo() with, for example,
datasize=40*1024*1024 and timeout=1 deadlocks: the program never
terminates.
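The attachment is not reproduced here, but a script of the kind described might look like the following sketch (my own reconstruction, not the author's code; the names f and demo are assumptions). The terminate() call is added only so this demo itself does not hang:

```python
import multiprocessing as mp

def f(q, datasize):
    # Child: put one large payload on the queue. The actual bytes are
    # streamed to the pipe by the queue's background "_feed" thread.
    q.put("x" * datasize)

def demo(datasize, timeout):
    # With a large datasize and a short timeout, join() returns with the
    # child still alive: its feeder thread is blocked writing to the full
    # pipe (nobody reads it), so the child process cannot exit.
    q = mp.Queue()
    p = mp.Process(target=f, args=(q, datasize))
    p.start()
    p.join(timeout)            # parent waits; nobody consumes the queue
    still_alive = p.is_alive()
    if still_alive:
        p.terminate()          # demo-only escape hatch; the original
        p.join()               # script would block here forever
    return still_alive

if __name__ == "__main__":
    print(demo(40 * 1024 * 1024, 1))
```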

The bug appears on Linux (RHEL4) / Intel x86 with the "multiprocessing"
module shipped with Python 2.6b3, and I think it can easily be reproduced
on other Unices. It also appears with Python 2.5 and the standalone
processing package 0.52
(https://developer.berlios.de/bugs/?func=detailbug&bug_id=14453&group_id=9001).

After a quick investigation, it looks like a deadlock between waitpid
in the parent process and a pipe send in the "_feed" thread of the
child process. The problem seems to be that "_feed" is still sending
data (the payload is large) through the pipe when the parent process
calls waitpid (because of the "short" timeout): the pipe fills up
because no consumer is reading from it (the consumer is already blocked
in waitpid), so the "_feed" thread in the child blocks forever. Since
the child process does a _feed.join() before exiting (after function f
returns), it never exits, and therefore the waitpid in the parent
process never returns: the child cannot exit until the parent reads,
and the parent will not read until the child exits.
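The "pipe fills up" step can be observed in isolation: an OS pipe has a finite kernel buffer (commonly 64 KiB on Linux), so a writer with no reader eventually blocks. A minimal sketch (modern Python 3, not part of the original report) that counts how many bytes fit before the kernel would block:

```python
import os

# Create a plain OS pipe and make the write end non-blocking, so that a
# full buffer raises BlockingIOError instead of hanging the writer.
r, w = os.pipe()
os.set_blocking(w, False)

written = 0
chunk = b"x" * 4096
while True:
    try:
        written += os.write(w, chunk)
    except BlockingIOError:
        # Kernel buffer is full. A *blocking* writer, like the queue's
        # "_feed" thread, would simply hang at this point.
        break

print("pipe filled after", written, "bytes")
os.close(r)
os.close(w)
```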

This no longer happens if I use timeout=None or a larger timeout
(e.g. 10 seconds), because in both cases waitpid is called /after/ the
"_feed" thread in the child process has sent all of its data through
the pipe.
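A workaround that does not depend on the timeout, sketched below (my own suggestion, not from the report; names are assumptions): have the parent drain the queue before joining, so the child's feeder thread can finish writing and the child can actually exit.

```python
import multiprocessing as mp

def f(q, datasize):
    # Child: push one large payload; the feeder thread streams it out.
    q.put("x" * datasize)

def demo(datasize, timeout):
    q = mp.Queue()
    p = mp.Process(target=f, args=(q, datasize))
    p.start()
    data = q.get()      # consume FIRST: this unblocks the feeder thread
    p.join(timeout)     # now the child can drain its pipe and exit
    return len(data)

if __name__ == "__main__":
    # 1 MiB is well past a typical 64 KiB pipe buffer, yet this returns.
    print(demo(1024 * 1024, 1))
```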
History
Date User Action Args
2008-09-05 22:35:26  DavidDecotigny  set  recipients: + DavidDecotigny
2008-09-05 22:35:26  DavidDecotigny  set  messageid: <1220654126.75.0.335381648399.issue3789@psf.upfronthosting.co.za>
2008-09-05 22:35:25  DavidDecotigny  link  issue3789 messages
2008-09-05 22:35:24  DavidDecotigny  create