multiprocessing deadlocks when sending large data through Queue with timeout #48039
With the attached script, demo() called with, for example, a large enough data size, the bug appears on Linux (RHEL4) / Intel x86 with "multiprocessing". After a quick investigation, it seems to be a deadlock between waitpid in the parent and the child's queue feeder thread, which is still blocked writing the buffered data to the pipe. This doesn't happen anymore if I use timeout=None or a larger timeout.

A quick fix in the user code, when we are sure we don't need the child …
See http://docs.python.org/dev/library/multiprocessing.html#multiprocessing- Specifically:

"Bear in mind that a process that has put items in a queue will wait before terminating until all the buffered items are fed by the 'feeder' thread to the underlying pipe."

"This means that whenever you use a queue you need to make sure that all items which have been put on the queue will eventually be removed before the process is joined."
In a later release, I'd like to massage this in such a way that you do not …

One way to work around this, David, is to call Queue.cancel_join_thread():

    def f(datasize, q):
        q.cancel_join_thread()
        q.put(range(datasize))
Thank you Jesse. When I read this passage, I thought naively that a …
No problem David, you're the 4th person to ask me about this in the past 2 …