Cannot capture sys.stderr output from an uncaught exception in a multiprocessing Process using a multiprocessing Queue #71268
Comments
In this code, one would expect that the entire traceback from the uncaught RecursionError would be put onto the queue, where it could be read in the main process. However, only part of the output actually gets enqueued:
Process IdleProcess-6:
Traceback (most recent call last):
File "C:\Python34\lib\multiprocessing\process.py", line 254, in _bootstrap
self.run()
File "C:\Python34\lib\multiprocessing\process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "<pyshell#446>", line 12, in do_stderr
File "<pyshell#446>", line 11, in g
File "<pyshell#446>", line 11, in g
File "<pyshell#446>", line 11, in g
File "<pyshell#446>", line 11, in g
File "<pyshell#446>", line 11, in g
File "<pyshell#446>", line 11, in g
File "<pyshell#446>", line 11, in g
File "<pyshell#446>", line 11, in g
File "<pyshell#446>", line 11, in g
File "<pyshell#446>", line 11, in g
File "<pyshell#446>", line 11, in g
File "<pyshell#446>", line 11, in g
File "<pyshell#446>", line 11, in g
The rest of the data is not accessible.
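The original snippet was not carried over in the migration. A minimal sketch consistent with the traceback might look like the following (the names do_stderr and g come from the traceback; QueueWriter is a hypothetical helper, not the reporter's actual code):

```python
import multiprocessing
import sys
import time

class QueueWriter:
    """Hypothetical file-like wrapper that forwards writes to a Queue."""
    def __init__(self, queue):
        self.queue = queue
    def write(self, text):
        self.queue.put(text)
    def flush(self):
        pass

def g():
    g()  # recurse until RecursionError

def do_stderr(queue):
    sys.stderr = QueueWriter(queue)  # capture the child's stderr
    g()  # uncaught; the interpreter prints the traceback to sys.stderr

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=do_stderr, args=(q,))
    p.start()
    p.join()
    time.sleep(0.5)  # give the feeder thread's data time to arrive
    chunks = []
    while not q.empty():
        chunks.append(q.get())
    print("".join(chunks))  # may be only part of the traceback
```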
Replacing the first line with
I took the example snippet and cleaned things up a bit, adding some crude timestamping and commentary (attached). In the example, when the Process finally dies, a lot of information has been put onto the Queue, but it hasn't necessarily had enough time to be synced to the parent process, so only a truncated subset of the data may remain retrievable in the parent. How much data gets synced to the parent depends on many factors and may even vary from one run to another. That is to say, a multiprocessing.Queue is not going to hold up a dying process so it can finish communicating data to other processes.
I believe that regardless of the number of prints to sys.stderr that happen before the recursion error, all of them will get sent to the parent. The problem is that the queue is flushed before the uncaught error is sent to stderr, not after.
Worth noting: bpo-26823 will actually change how recursion errors are printed to stderr (if/when it gets merged, that is; I don't know what's holding it up). It might be interesting to see whether you can still reproduce this issue with the patch applied.
This issue isn't specific to recursion errors; it occurs whenever the error message is long enough. bpo-26823 would fix the RecursionError case, but the problem would still appear when, for example, someone calls a function with a billion-character-long name that raises an error.
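The long-name point can be illustrated concretely. This is a made-up example, not from the issue: a single frame with a very long function name makes the traceback text arbitrarily large even without any recursion.

```python
import traceback

# Build a function whose name alone is 200,000 characters long.
long_name = "f" * 200_000
namespace = {}
exec(f"def {long_name}():\n    raise ValueError('boom')", namespace)

try:
    namespace[long_name]()
except ValueError:
    tb_text = traceback.format_exc()

# The function name appears verbatim in the traceback, so the text is
# far larger than a typical pipe buffer (often 64 KiB).
print(len(tb_text))
```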
The spawned process (you appear to have run this on Windows, so I'm assuming the spawn start method, though that's not especially significant) triggered an unhandled exception. If the triggering and the subsequent sending of the traceback to stderr were synchronous and completed before Python actually killed the process, then the original "one would expect" statement would make sense. Removing the Queue entirely and substituting a socket to communicate from the child (spawned) process to a distinct listener process results in only a portion of the traceback text being received by the listener (or sometimes little to none of it). This suggests that the original premise is false and the expectation inaccurate.