
Author vstinner
Recipients eric.snow, grahamd, vstinner
Date 2020-04-09.16:52:44
Content
First of all, I know that mod_wsgi uses subinterpreters, but I didn't know that it uses daemon threads. When I hack Python to better isolate subinterpreters, I keep mod_wsgi in mind ;-)


My commit 066e5b1a917ec2134e8997d2cadd815724314252 message says: "Daemon threads were never supported in subinterpreters." That's inaccurate. Only the second sentence is correct: "Previously, the subinterpreter finalization crashed with a Python fatal error if a daemon thread was still running."

In Python 3.8, it is possible to use daemon threads, but only if they all complete before Py_EndInterpreter() is called.
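
To make that concrete, here is a minimal embedding sketch (mine, not from the original report) that stays within this constraint: the daemon thread is joined inside the subinterpreter before Py_EndInterpreter() is called, so only one thread state remains and the check quoted below never fires.

    /* Sketch only: a Python 3.8 embedder using a daemon thread in a
     * subinterpreter, making sure the thread completes before
     * Py_EndInterpreter() runs. */
    #include <Python.h>

    int main(void)
    {
        Py_Initialize();
        PyThreadState *main_tstate = PyThreadState_Get();

        /* Create the subinterpreter; its thread state becomes current. */
        PyThreadState *sub_tstate = Py_NewInterpreter();

        /* The daemon thread is joined before the script returns, so no
         * extra thread state is left behind. */
        PyRun_SimpleString(
            "import threading\n"
            "t = threading.Thread(target=lambda: None, daemon=True)\n"
            "t.start()\n"
            "t.join()\n");

        Py_EndInterpreter(sub_tstate);   /* safe: it is the last thread */
        PyThreadState_Swap(main_tstate); /* restore the main interpreter */

        Py_Finalize();
        return 0;
    }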

I'm not comfortable with Python "crashing" (calling Py_FatalError(), which calls abort() and may generate a core dump) depending on whether or not all daemon threads have completed. I'm trying to make Python finalization more deterministic.


My concern is this Py_EndInterpreter() check:

    if (tstate != interp->tstate_head || tstate->next != NULL) {
        Py_FatalError("not the last thread");
    }


This check is as old as subinterpreters themselves; it was added at the same time as the Py_EndInterpreter() function:

commit 25ce566661c1b7446b3ddb4076513a62f93ce08d
Author: Guido van Rossum <guido@python.org>
Date:   Sat Aug 2 03:10:38 1997 +0000

    The last of the mass checkins for separate (sub)interpreters.
    Everything should now work again.
    
    See the comments for the .h files mass checkin (e.g. pystate.h) for
    more detail.


My concern is that Py_FatalError() exits the whole process, not only the thread calling Py_EndInterpreter(). The caller cannot catch the Py_FatalError() call and decide how to handle it :-( That doesn't seem like a good API when Python is embedded in an application.
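
As a hedged illustration (again mine, not from the message), this is roughly what the failure mode looks like from the embedder's side; there is no error code or exception to handle:

    /* Sketch only: if a daemon thread started in the subinterpreter is
     * still running, Py_EndInterpreter() does not report an error to the
     * caller; it may reach Py_FatalError("not the last thread"), which
     * calls abort() and terminates the whole process. */
    PyThreadState *sub_tstate = Py_NewInterpreter();

    PyRun_SimpleString(
        "import threading, time\n"
        "threading.Thread(target=time.sleep, args=(60,), daemon=True).start()\n");

    /* No return value to check, nothing to catch: on failure the process
     * simply aborts here. */
    Py_EndInterpreter(sub_tstate);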


Recently, I had to deal with (and fix) many bugs related to daemon threads in Python finalization:

* https://vstinner.github.io/gil-bugfixes-daemon-threads-python39.html
* https://vstinner.github.io/threading-shutdown-race-condition.html
* https://vstinner.github.io/daemon-threads-python-finalization-python32.html

Simply denying daemon threads in subinterpreters is the safest and simplest option, but it's also a backward-incompatible change.

I'm open to allowing daemon threads in subinterpreters again in Python 3.9, but emitting a DeprecationWarning to announce that this feature is going to be removed "later" (e.g. Python 3.11), since it is "broken by design" (in my opinion).
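
For illustration only, such a deprecation could look roughly like the following check at thread creation time; the hook point and the 'daemon' flag are assumptions on my side, not the actual CPython code:

    /* Rough sketch (not a real patch): warn when a daemon thread is
     * started in a subinterpreter. 'daemon' is a hypothetical flag here;
     * in CPython the daemon attribute lives at the threading.py level. */
    PyInterpreterState *interp = PyThreadState_Get()->interp;
    if (daemon && interp != PyInterpreterState_Main()) {
        if (PyErr_WarnEx(PyExc_DeprecationWarning,
                         "daemon threads in subinterpreters are deprecated "
                         "and will be removed in a future Python version",
                         1) < 0) {
            /* The warning was turned into an exception (e.g. -W error):
             * propagate it to the caller. */
            return NULL;
        }
    }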


> We could also deprecate use of daemon threads in *all* subinterpreters, with the goal of dropping support after a while.

It sounds like a good idea to make Python finalization more reliable. But daemon threads are still widely used in the Python standard library, for example by the multiprocessing module.

concurrent.futures was only recently modified to stop using daemon threads, after I disallowed spawning daemon threads in subinterpreters:

* bpo-39812
* commit b61b818d916942aad1f8f3e33181801c4a1ed14b

So if someone wants to ban daemon threads, we should start by modifying the stdlib to avoid them, then deprecate the feature, and only then start talking about removing it.