This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

Author underrun
Recipients asksol, jnoller, ryles, underrun
Date 2011-06-16.06:06:01
SpamBayes Score 1.5744739e-11
Marked as misclassified No
Message-id <1308204363.74.0.497901037683.issue6056@psf.upfronthosting.co.za>
In-reply-to
Content
While having multiprocessing use a timeout would be great, I didn't really have the time to fiddle with the C code.

Instead of using the socket timeout, I'm modifying all the sockets created by the socket module to have no timeout (making them blocking), which makes the multiprocessing module 'immune' to the socket module's default timeout.
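The principle behind this workaround can be sketched with the plain socket API (a standalone illustration, not the multiprocessing patch itself): a socket explicitly set back to blocking mode with settimeout(None) is unaffected by whatever socket.setdefaulttimeout() was set to.

```python
import socket

# A process-wide default timeout applies to every newly created socket.
socket.setdefaulttimeout(60)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
assert s.gettimeout() == 60      # new socket inherits the global default

# Forcing the socket back to blocking mode, as the workaround does for
# multiprocessing's sockets, makes it immune to the default timeout.
s.settimeout(None)
assert s.gettimeout() is None    # blocking again, regardless of the default

s.close()
socket.setdefaulttimeout(None)   # restore the default for other code
```

The key point is that settimeout(None) is applied per-socket after creation, so it overrides the module-level default without touching any C code.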

For testing, I just ran the test suite twice, once with the initial default socket timeout and once with a 60 second timeout. Nothing failed in either run.

It is worth noting, however, that when a default socket timeout is set, processes that have put data into a queue no longer block at exit waiting for the data to be consumed, for reasons I haven't pinned down. I'm not sure whether there is some additional cleanup code that uses sockets and might need to block, or whether whatever the issue was with blocking sockets simply doesn't arise with non-blocking ones.
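For context, the normal behavior being described can be sketched like this (a standalone illustration, not part of the patch): a child process that puts data into a multiprocessing.Queue is expected to block at exit until its feeder thread has flushed the data into the underlying pipe, so the parent reliably sees the item.

```python
import multiprocessing as mp


def worker(q):
    # The item is handed to the queue's feeder thread; normally the
    # child blocks at exit until the feeder has written it to the pipe.
    q.put("payload")


if __name__ == "__main__":
    q = mp.Queue()
    p = mp.Process(target=worker, args=(q,))
    p.start()
    # Because the child waits for the feeder thread before exiting,
    # this get() should reliably receive the item.
    assert q.get(timeout=5) == "payload"
    p.join()
```

The observation above is that with a default socket timeout in effect, this block-at-exit step seems to be skipped.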
History
Date User Action Args
2011-06-16 06:06:03 underrun set recipients: + underrun, jnoller, ryles, asksol
2011-06-16 06:06:03 underrun set messageid: <1308204363.74.0.497901037683.issue6056@psf.upfronthosting.co.za>
2011-06-16 06:06:03 underrun link issue6056 messages
2011-06-16 06:06:02 underrun create