This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in the Python Developer's Guide.

classification
Title: Trying to cleanly terminate a threaded Queue at exit of program raises an "EOFError"
Type: behavior
Stage:
Components: Library (Lib)
Versions: Python 3.6, Python 3.5
process
Status: open
Resolution:
Dependencies:
Superseder:
Assigned To:
Nosy List: Delgan
Priority: normal
Keywords:

Created on 2018-03-14 21:14 by Delgan, last changed 2022-04-11 14:58 by admin.

Files
File name: bug.py (uploaded by Delgan, 2018-03-14 21:14)
Messages (1)
msg313841 - Author: Delgan (Delgan) * Date: 2018-03-14 21:14
Hi.

I use a worker Thread with which I communicate through a multiprocessing Queue. I would like to properly close this daemon thread when my program terminates, so I registered a "stop()" function using "atexit.register()".

However, this raises an "EOFError" because the multiprocessing module uses "atexit.register()" too and closes the Queue's internal pipe connections before my thread ends.
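
To illustrate, here is a minimal sketch of the pattern (not the attached "bug.py"; the worker function, the sentinel value and the creation order are assumptions, and it relies on multiprocessing registering its own exit handler only once "multiprocessing.queues" is first imported, as described below):

    import atexit
    import threading
    import multiprocessing

    def worker(queue):
        # Consume items until a None sentinel arrives. At interpreter exit,
        # this get() is where the EOFError shows up once multiprocessing has
        # already closed the queue's internal pipe.
        while True:
            item = queue.get()
            if item is None:
                break

    def stop():
        # Ask the worker to finish and wait for it.
        queue.put(None)
        thread.join()

    # stop() is registered *before* the Queue exists. multiprocessing's own
    # handler is registered when multiprocessing.queues (and its util module)
    # is first imported, i.e. when the Queue is created below, so it ends up
    # last in the registration order and runs first at exit (atexit calls
    # handlers in last-in, first-out order).
    atexit.register(stop)

    queue = multiprocessing.Queue()
    thread = threading.Thread(target=worker, args=(queue,), daemon=True)
    thread.start()

    queue.put("some work")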

After digging into the multiprocessing module, I tried to summarize my understanding of the problem here: https://stackoverflow.com/a/49244528/2291710

I attached a demonstration script that triggers the bug with (at least) Python 3.5/3.6 on both Windows and Linux.

The issue can be worked around by forcing multiprocessing's call to "atexit.register()" to happen before mine with "import multiprocessing.queues" (see the sketch below), but this means relying on an implementation detail, and other dynamic calls to "atexit.register()" (like the one I saw in multiprocessing's "get_logger()", for example) could break it again.
I first thought that "atexit.register()" could accept an optional "priority" argument, but every developer would probably want to be first. Could a subtle change be made, however, to guarantee that registered functions are executed before Python's internal ones? As of now, the statement in the atexit documentation, "The assumption is that lower level modules will normally be imported before higher level modules and thus must be cleaned up later", is not quite true.
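
For reference, here is the same sketch with that workaround applied; the only change is the explicit import before the registration, which again relies on an implementation detail:

    import atexit
    import threading
    import multiprocessing
    import multiprocessing.queues  # side effect: multiprocessing registers its
                                   # own atexit handler right now, before ours

    def worker(queue):
        while True:
            item = queue.get()
            if item is None:
                break

    def stop():
        queue.put(None)  # sentinel asking the worker to finish
        thread.join()

    # Now registered *after* multiprocessing's handler, so it runs *before* it
    # at exit (last-in, first-out) and the queue is still usable while the
    # worker shuts down.
    atexit.register(stop)

    queue = multiprocessing.Queue()
    thread = threading.Thread(target=worker, args=(queue,), daemon=True)
    thread.start()

    queue.put("some work")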

I do not know what to do about this. As far as I know, there is no way to achieve an automatic yet clean shutdown of such a worker, so I would like to know if some kind of fix is possible in a future version of Python.

Thanks for your time.
History
Date                 User    Action  Args
2022-04-11 14:58:58  admin   set     github: 77257
2018-03-14 21:14:36  Delgan  create