This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: multiprocessing.Queue() blocks program
Type: behavior
Stage: resolved
Components: Library (Lib)
Versions: Python 2.6

process
Status: closed
Resolution: duplicate
Dependencies:
Superseder: multiprocessing.Queue fails to get() very large objects
View: 8426
Assigned To:
Nosy List: eua, jnoller, neologix, r.david.murray
Priority: normal
Keywords:

Created on 2010-03-26 03:32 by eua, last changed 2022-04-11 14:56 by admin. This issue is now closed.

Files
damine6.py (uploaded by eua, 2010-03-26 03:32): Minimal sample crash program.
Messages (5)
msg101740 - (view) Author: Erdem U. Altinyurt (eua) Date: 2010-03-26 03:32
multiprocessing.Queue() blocks the program on my computer after about 1400 entries have been added (the exact number depends on the size of each addition).

Tested with 2.6.2 and 2.6.5 (compiled from source with gcc 4.4.1)
on 64-bit openSUSE 11.2.

Output is:
-----------
....
1398  done
1399  done
-----------

and the program deadlocks because Q.put() cannot complete.
There are no problems when using a plain array with a lock().
Here is the output after pressing CTRL+C:

-----------------------------------
^CTraceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 5, in testQ
KeyboardInterrupt
>>> 
^CError in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/opt/python/lib/python2.6/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "/opt/python/lib/python2.6/multiprocessing/util.py", line 269, in _exit_function
    p.join()
  File "/opt/python/lib/python2.6/multiprocessing/process.py", line 119, in join
    res = self._popen.wait(timeout)
  File "/opt/python/lib/python2.6/multiprocessing/forking.py", line 117, in wait
    return self.poll(0)
  File "/opt/python/lib/python2.6/multiprocessing/forking.py", line 106, in poll
    pid, sts = os.waitpid(self.pid, flag)
KeyboardInterrupt
Error in sys.exitfunc:
Traceback (most recent call last):
  File "/opt/python/lib/python2.6/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "/opt/python/lib/python2.6/multiprocessing/util.py", line 269, in _exit_function
    p.join()
  File "/opt/python/lib/python2.6/multiprocessing/process.py", line 119, in join
    res = self._popen.wait(timeout)
  File "/opt/python/lib/python2.6/multiprocessing/forking.py", line 117, in wait
    return self.poll(0)
  File "/opt/python/lib/python2.6/multiprocessing/forking.py", line 106, in poll
    pid, sts = os.waitpid(self.pid, flag)
KeyboardInterrupt
msg101750 - (view) Author: R. David Murray (r.david.murray) * (Python committer) Date: 2010-03-26 14:51
Crash is for interpreter segfaults, so I am changing the type to 'behavior'.  Setting stage to 'test needed' because, if this is a valid bug, the sample will need to be turned into a unit test.
msg101752 - (view) Author: Jesse Noller (jnoller) * (Python committer) Date: 2010-03-26 15:20
multiprocessing.Queue.put() acts the same as Queue.put(): if the queue is full, the put call "hangs" until the queue is no longer full. The process will not exit because the Queue is full and it is waiting in put().

This works as designed, unless I'm missing something painfully obvious, which is entirely possible.
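To illustrate the behavior Jesse describes, here is a minimal sketch in modern Python 3 (the original report used 2.6); it is a hypothetical demo, not the reporter's script. A put() on a bounded queue blocks once maxsize items are buffered; giving it a timeout makes the blocking visible as a queue.Full exception instead of a hang.

```python
import multiprocessing as mp
import queue

# A bounded queue: put() blocks once maxsize items are buffered.
q = mp.Queue(maxsize=1)
q.put("first")  # fits; returns immediately

try:
    # With a timeout, a blocked put() gives up and raises queue.Full
    # instead of hanging forever.
    q.put("second", timeout=0.5)
except queue.Full:
    print("put() blocked: queue at maxsize")
```

Note that this is the behavior for a queue that really is full; the bug report below turns out to involve a queue that is *not* full.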
msg101756 - (view) Author: Erdem U. Altinyurt (eua) Date: 2010-03-26 16:11
At first I thought the same, but that is not correct.
I added a Q.full() call to the testQ code to check whether the Queue is full:

def testQ():
    for i in range(10000):
        mp.Process( None, QueueWorker, None, (i,Q,lock) ).start()
        while len(mp.active_children()) >= mp.cpu_count()+4:
            time.sleep(0.01)
            print Q.full()

output is:
1397  done
1398  done
1399  done
False
False
False

So the Queue is not full. You can also put more items on the queue in this state (by adding an extra line to the while loop), and that does not block either.

Please test..
msg144115 - (view) Author: Charles-François Natali (neologix) * (Python committer) Date: 2011-09-16 07:59
It's a duplicate of issue #8426: the Queue isn't full, but the underlying pipe is, so the feeder thread blocks on the write to the pipe (actually while trying to acquire the lock that protects the pipe from concurrent access).
Since child processes join the feeder thread on exit (to make sure all data has been flushed to the pipe), they block.
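The diagnosed failure mode and its standard workaround can be sketched as follows (a hypothetical modern Python 3 example, not code from this issue): a child puts an object larger than the pipe buffer, so the parent must drain the queue before join(), otherwise the child blocks in its feeder thread exactly as described above.

```python
import multiprocessing as mp

def worker(q):
    # A payload larger than a typical pipe buffer (64 KiB on Linux):
    # the feeder thread cannot finish flushing it until the parent reads.
    q.put(b"x" * (1 << 20))

if __name__ == "__main__":
    q = mp.Queue()
    p = mp.Process(target=worker, args=(q,))
    p.start()
    # Drain the queue BEFORE join(): joining first can deadlock, because
    # the child waits for its feeder thread to flush the pipe on exit.
    data = q.get()
    p.join()
    print(len(data), p.exitcode)
```

Reversing the get() and join() reproduces the hang reported here; the multiprocessing documentation warns about exactly this ordering.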
History
Date                 User            Action  Args
2022-04-11 14:56:59  admin           set     github: 52484
2011-09-16 07:59:19  neologix        set     status: open -> closed; superseder: multiprocessing.Queue fails to get() very large objects; nosy: + neologix; messages: + msg144115; resolution: not a bug -> duplicate; stage: test needed -> resolved
2010-03-26 16:11:03  eua             set     status: closed -> open; messages: + msg101756
2010-03-26 15:20:12  jnoller         set     status: open -> closed; resolution: not a bug; messages: + msg101752
2010-03-26 14:51:47  r.david.murray  set     priority: normal; nosy: + r.david.murray, jnoller; messages: + msg101750; type: crash -> behavior; stage: test needed
2010-03-26 03:32:25  eua             create