This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: Possible multiprocessing deadlock when placing too many objects in Queue()
Type: behavior Stage: resolved
Components: Versions: Python 3.6, Python 3.5
process
Status: closed Resolution: not a bug
Dependencies: Superseder:
Assigned To: Nosy List: davin, fhstoica, pitrou, xiang.zhang
Priority: normal Keywords:

Created on 2018-07-17 14:21 by fhstoica, last changed 2022-04-11 14:59 by admin. This issue is now closed.

Files
File name Uploaded Description Edit
IssuesWithQueueMultiProcessing.tar.bz fhstoica, 2018-07-17 14:21
Messages (3)
msg321832 - (view) Author: Horace Stoica (fhstoica) Date: 2018-07-17 14:21
I am trying to use the multiprocessing module for a simulation on a spherical lattice, but the process hangs when the lattice is too large.

In the file IssuesWithQueueMultiProcessing.py, the method createLattice() uses either "return(I4)" for the small lattice or "return(I5)" for the large lattice.

Running the script with the large lattice causes the process to hang, while with the small lattice it works fine. I have tested with Python 3.5.2 and 3.6.1, and the behavior is the same in both versions.
msg321842 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2018-07-17 16:51
The problem is you're joining the child processes before draining the queue in the parent.
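A minimal sketch of that pattern (the names here are illustrative, not from the attached script): a child process puts many items on a Queue, and the parent deadlocks if it joins the child before draining, because the child cannot exit until its buffered items are flushed to the pipe.

```python
import multiprocessing as mp

def worker(q, n):
    # Blocks once the underlying pipe buffer fills, if nobody is reading.
    for i in range(n):
        q.put(i)

def main():
    q = mp.Queue()
    p = mp.Process(target=worker, args=(q, 100_000))
    p.start()

    # Deadlock-prone order (what the report describes):
    #   p.join()                                   # child never finishes flushing
    #   results = [q.get() for _ in range(100_000)]

    # Safe order: drain the queue first, then join.
    results = [q.get() for _ in range(100_000)]
    p.join()
    print(len(results))

if __name__ == "__main__":
    main()
```

With a small item count both orders appear to work, because everything fits in the pipe buffer; the hang only shows up once the payload exceeds it, which matches the small-lattice/large-lattice behavior reported above.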

Generally, instead of building your own kind of synchronization like this, I would recommend you use the higher-level abstractions provided by multiprocessing.Pool or concurrent.futures.ProcessPoolExecutor.
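For example, the same fan-out can be written with concurrent.futures, which handles the queueing and joining internally (square() here is a hypothetical stand-in for the per-site computation, not a function from the attached script):

```python
from concurrent.futures import ProcessPoolExecutor

def square(x):
    # Stand-in for the real per-item work.
    return x * x

def run(n):
    # The executor distributes work to child processes and collects
    # results in order; no manual Queue management or join() needed.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(square, range(n)))

if __name__ == "__main__":
    results = run(100_000)
    print(len(results))
```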

By the way, this issue is mentioned precisely in the documentation:

"""
As mentioned above, if a child process has put items on a queue (and it has not used JoinableQueue.cancel_join_thread), then that process will not terminate until all buffered items have been flushed to the pipe.

This means that if you try joining that process you may get a deadlock unless you are sure that all items which have been put on the queue have been consumed. Similarly, if the child process is non-daemonic then the parent process may hang on exit when it tries to join all its non-daemonic children.
"""

(from https://docs.python.org/3/library/multiprocessing.html#pipes-and-queues)
msg321843 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2018-07-17 16:51
Closing as not a bug.
History
Date User Action Args
2022-04-11 14:59:03adminsetgithub: 78321
2018-07-17 16:51:46pitrousetstatus: open -> closed
type: performance -> behavior
messages: + msg321843

resolution: not a bug
stage: resolved
2018-07-17 16:51:17pitrousetmessages: + msg321842
2018-07-17 16:19:57xiang.zhangsetnosy: + pitrou, davin, xiang.zhang
2018-07-17 14:21:19fhstoicacreate