classification
Title: Cannot capture sys.stderr output from an uncaught exception in a multiprocessing Process using a multiprocessing Queue
Type: behavior
Components: Library (Lib)
Versions: Python 3.4

process
Status: open
Nosy List: abarry, davin, jnoller, ppperry, sbt
Priority: normal

Created on 2016-05-21 21:04 by ppperry, last changed 2022-04-11 14:58 by admin.

Files
File name      Uploaded                  Description
issue27081.py davin, 2016-07-02 20:17 Cleaned and embellished example with commentary.
Messages (7)
msg266026 - Author: (ppperry) Date: 2016-05-21 21:04
In this code, one would expect that the entire traceback from the uncaught recursion error would get put onto the queue, where it could be read in the main process.
    import multiprocessing
    import sys

    queue = multiprocessing.Queue()

    def do_stderr(queue):
        # Redirect the child's stderr so every write lands on the queue.
        class f:
            def write(self, data):
                queue.put(data)
            def flush(self):
                pass
        sys.stderr = f()
        def g():
            g()
        g()  # unbounded recursion -> uncaught RecursionError

    multiprocessing.Process(target=do_stderr, args=(queue,)).start()
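
For completeness, a sketch of the parent-side read that the report implies, continuing in the same session (the sleep-and-drain loop is my own framing, not part of the original snippet):

    import queue as stdlib_queue
    import time

    time.sleep(1)  # crude: give the child time to run and die
    try:
        while True:
            sys.stdout.write(queue.get_nowait())
    except stdlib_queue.Empty:
        pass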

However, only some of the output actually gets enqueued:

 Process IdleProcess-6:
 Traceback (most recent call last):
   File "C:\Python34\lib\multiprocessing\process.py", line 254, in _bootstrap
    self.run()
  File "C:\Python34\lib\multiprocessing\process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "<pyshell#446>", line 12, in do_stderr
  File "<pyshell#446>", line 11, in g
  File "<pyshell#446>", line 11, in g
  File "<pyshell#446>", line 11, in g
  File "<pyshell#446>", line 11, in g
  File "<pyshell#446>", line 11, in g
  File "<pyshell#446>", line 11, in g
  File "<pyshell#446>", line 11, in g
  File "<pyshell#446>", line 11, in g
  File "<pyshell#446>", line 11, in g
  File "<pyshell#446>", line 11, in g
  File "<pyshell#446>", line 11, in g
  File "<pyshell#446>", line 11, in g
  File "<pyshell#446>", line 11, in g

The rest of the data is not accessible.
msg266028 - Author: (ppperry) Date: 2016-05-21 21:12
Replacing the queue construction with `queue = multiprocessing.SimpleQueue()` fixes this issue.
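
A minimal sketch of why that substitution helps, as I read the CPython implementation (the mechanism is not spelled out in the message): SimpleQueue has no background feeder thread, so put() performs a synchronous pipe write and the data is handed off before write() returns.

    # Drop-in replacement for the queue construction in the original
    # snippet. SimpleQueue.put() writes straight to the underlying pipe,
    # so nothing sits in a feeder-thread buffer when the child dies.
    queue = multiprocessing.SimpleQueue()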
msg269733 - Author: Davin Potts (davin) * (Python committer) Date: 2016-07-02 20:17
I took the example snippet and cleaned things up a bit, adding some crude timestamping and commentary.  (Attached)

In the example, when the Process finally dies, a lot of information has been put onto the Queue, but it has not necessarily had enough time to be synced to the parent process, so only a truncated subset of the data may remain available/visible/retrievable in the parent. How much data gets synced to the parent depends on many factors and can vary from one run to another.

That is to say, a multiprocessing.Queue is not going to hold up a dying process so that it can finish communicating data to other processes.
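
For a child that gets to run its own cleanup, that hand-off can be forced explicitly; a minimal sketch of the pattern, assuming a cooperative worker (it would not rescue this report's child, which dies from an uncaught exception, but it illustrates the feeder-thread behavior described above):

    import multiprocessing

    def worker(queue):
        try:
            queue.put("stderr text, one write at a time...")
        finally:
            queue.close()        # no more puts from this process
            queue.join_thread()  # block until the feeder thread has
                                 # flushed its buffer into the pipe

    if __name__ == "__main__":
        queue = multiprocessing.Queue()
        p = multiprocessing.Process(target=worker, args=(queue,))
        p.start()
        p.join()
        print(queue.get())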
msg269736 - Author: (ppperry) Date: 2016-07-02 20:42
I believe that regardless of how many writes to sys.stderr happen before the recursion error, all of them will get sent to the parent. The problem is that the queue is flushed before the uncaught error is written to stderr, not after.
msg269737 - Author: Anilyka Barry (abarry) * (Python triager) Date: 2016-07-02 21:02
Worth noting: #26823 will actually change printing to stderr for recursion errors (if/when it gets merged, that is; I don't know what's holding it up). It might be interesting to see whether you can still reproduce this issue with the patch applied.
msg269738 - Author: (ppperry) Date: 2016-07-02 21:10
This issue isn't specific to recursion errors. It only occurs when the error message is long enough, so #26823 would fix the RecursionError case, but it would still happen when, for example, someone calls a function with a billion-character-long name that raises an error.
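
A sketch of a recursion-free reproduction along those lines, reusing the queue-backed stderr from the original snippet (the message length is an illustrative guess, not a measured threshold):

    def do_stderr_long_message(queue):
        class f:
            def write(self, data):
                queue.put(data)
            def flush(self):
                pass
        sys.stderr = f()
        # No recursion: one uncaught error whose message alone is large
        # enough that the traceback text outruns what gets synced.
        raise ValueError("x" * 10**7)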
msg269750 - Author: Davin Potts (davin) * (Python committer) Date: 2016-07-03 04:25
The spawned process (you appear to have run on Windows, so I'm assuming spawn, but that's not so significant) has triggered an unhandled exception.  If the triggering and subsequent sending of the traceback to stderr were synchronous and completed before Python actually kills the process, then the original "one would expect" statement would make sense.

Removing the Queue entirely and substituting a socket to communicate from the child (spawned) process to a distinct listener process results in only a portion of the traceback text being received by the listener (and sometimes little to none of it).  This suggests that the original premise is false and the expectation inaccurate.
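
The message does not include the socket experiment's code; a rough sketch of the shape it describes might look like the following (the listener structure, port number, and names are my assumptions, not davin's actual test):

    import multiprocessing
    import socket
    import sys
    import time

    PORT = 50007  # illustrative

    def listener():
        # Stand-in for the distinct listener process: print whatever
        # traceback text actually makes it across the socket.
        srv = socket.socket()
        srv.bind(("127.0.0.1", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        while True:
            data = conn.recv(4096)
            if not data:
                break
            sys.stdout.write(data.decode(errors="replace"))

    def child():
        class SocketStderr:
            def __init__(self):
                self.sock = socket.create_connection(("127.0.0.1", PORT))
            def write(self, data):
                self.sock.sendall(data.encode())
            def flush(self):
                pass
        sys.stderr = SocketStderr()
        def g():
            g()
        g()  # uncaught RecursionError; traceback goes to the socket

    if __name__ == "__main__":
        multiprocessing.Process(target=listener).start()
        time.sleep(0.5)  # crude: let the listener bind and accept
        multiprocessing.Process(target=child).start()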
History
Date                 User       Action  Args
2022-04-11 14:58:31  admin      set     github: 71268
2016-07-03 04:25:20  davin      set     messages: + msg269750
2016-07-02 21:10:03  ppperry    set     messages: + msg269738
2016-07-02 21:02:45  abarry     set     nosy: + abarry; messages: + msg269737
2016-07-02 20:42:36  ppperry    set     messages: + msg269736
2016-07-02 20:17:53  davin      set     files: + issue27081.py; messages: + msg269733
2016-06-07 19:16:29  ppperry    set     title: Multiprocessing is not robust against sys.stderr changes involving queues -> Cannot capture sys.stderr output from an uncaught exception in a multiprocessing Process using a multiprocessing Queue
2016-05-22 13:06:35  ppperry    set     title: Multiprocessing is not robust against sys.stderr changes -> Multiprocessing is not robust against sys.stderr changes involving queues
2016-05-21 22:24:17  ned.deily  set     nosy: + davin
2016-05-21 21:12:27  ppperry    set     messages: + msg266028
2016-05-21 21:04:59  ppperry    set     nosy: + jnoller, sbt
2016-05-21 21:04:33  ppperry    create