Title: Stuck during interpreter exit, attempting to take the GIL
Type: Stage: resolved
Components: Interpreter Core, Library (Lib) Versions: Python 3.5
Status: closed Resolution: later
Dependencies: Superseder:
Assigned To: Nosy List: eric.snow, inada.naoki, mocramis
Priority: normal Keywords:

Created on 2019-03-29 10:45 by mocramis, last changed 2019-04-25 15:56 by mocramis. This issue is now closed.

File name Uploaded Description Edit
stacktraces mocramis, 2019-03-29 10:45
Messages (18)
msg339103 - (view) Author: Remy Noel (mocramis) Date: 2019-03-29 10:45
I have a script (sadly, I can't publish it) spawning multiple threads that, in rare occurrences, does not manage to exit properly and gets stuck forever.

More precisely, this seems to happen during interpreter exit: the atexit callbacks are called successfully, and we then have multiple threads all attempting to take the GIL while no thread seems to own it (_PyThreadState_Current is always '{_value = 0}' while gil_locked is '{_value = 1}').

The main thread stack looks like this:

#0  pthread_cond_timedwait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:225
#1  0x000055d00997ce6a in PyCOND_TIMEDWAIT (cond=0x55d009e8ddc0 <gil_cond>, mut=0x55d009e8dd80 <gil_mutex>, us=<optimized out>) at ../Python/condvar.h:103
#2  take_gil () at ../Python/ceval_gil.h:224
#3  0x000055d00998580b in PyEval_EvalFrameEx () at ../Python/ceval.c:1273
#4  0x000055d00996f16f in _PyEval_EvalCodeWithName.lto_priv.1929 (qualname=0x0, name=<optimized out>, closure=0x0, kwdefs=0x0, defcount=0, defs=0x0, kwcount=0, kws=<optimized out>, argcount=<optimized out>, 
    args=<optimized out>, locals=<optimized out>, globals=<optimized out>, _co=<optimized out>) at ../Python/ceval.c:4033
#5  PyEval_EvalCodeEx () at ../Python/ceval.c:4054
#6  0x000055d0099b90e3 in function_call.lto_priv () at ../Objects/funcobject.c:627
#7  0x000055d009a02e17 in PyObject_Call () at ../Objects/abstract.c:2166
#8  0x000055d00992034e in method_call.lto_priv () at ../Objects/classobject.c:330
#9  0x000055d009a02e17 in PyObject_Call () at ../Objects/abstract.c:2166
#10 0x000055d00996df7d in PyEval_CallObjectWithKeywords () at ../Python/ceval.c:4595
#11 0x000055d009a5d05d in slot_tp_repr () at ../Objects/typeobject.c:5992
#12 0x000055d0099c9685 in PyObject_Repr () at ../Objects/object.c:482
#13 0x000055d0099aa6be in unicode_fromformat_arg (vargs=0x7ffc2ca81110, f=0x55d009a8a837 "R", writer=0x7ffc2ca810b0) at ../Objects/unicodeobject.c:2645
#14 PyUnicode_FromFormatV () at ../Objects/unicodeobject.c:2710
#15 0x000055d009a572bc in PyErr_WarnFormat () at ../Python/_warnings.c:895
#16 0x000055d0098840bb in sock_dealloc (s=0x7f43000fc528) at ../Modules/socketmodule.c:4177
#17 0x000055d0099d031d in subtype_dealloc.lto_priv () at ../Objects/typeobject.c:1209
#18 0x000055d0099b68f7 in frame_dealloc.lto_priv () at ../Objects/frameobject.c:431
#19 0x000055d0098ab7b1 in PyThreadState_Clear (tstate=0x55d00bee8a70) at ../Python/pystate.c:386
#20 0x000055d009a4d08a in PyInterpreterState_Clear () at ../Python/pystate.c:118
#21 0x000055d009a4e1d2 in Py_Finalize () at ../Python/pylifecycle.c:633
#22 0x000055d009a4e2a8 in Py_Exit (sts=sts@entry=0) at ../Python/pylifecycle.c:1465
#23 0x000055d009a4e38e in handle_system_exit () at ../Python/pythonrun.c:602
#24 0x000055d009a4e3f6 in PyErr_PrintEx () at ../Python/pythonrun.c:612
#25 0x000055d009a4f667 in PyErr_Print () at ../Python/pythonrun.c:508
#26 PyRun_SimpleFileExFlags () at ../Python/pythonrun.c:401
#27 0x000055d009a7c2e7 in run_file (p_cf=0x7ffc2ca814fc, filename=0x55d00bb01140 L"...", fp=0x55d00bb62e60) at ../Modules/main.c:318
#28 Py_Main () at ../Modules/main.c:768
#29 0x000055d00990bd71 in main () at ../Programs/python.c:65
#30 0x00007f430b7cd2e1 in __libc_start_main (main=0x55d00990bc90 <main>, argc=11, argv=0x7ffc2ca81708, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffc2ca816f8)
    at ../csu/libc-start.c:291
#31 0x000055d009a12a7a in _start ()

We can see it is trying to take the GIL while finalizing (as it is emitting a warning when destroying a socket). However, this prevents any other thread from being deleted, since this first thread holds the head_lock. For instance, thread 18 is trying to take the head lock:

Thread 18 (Thread 0x7f4302ffd700 (LWP 21117)):
#0  0x00007f430c6aa536 in futex_abstimed_wait_cancelable (private=0, abstime=0x0, expected=0, futex_word=0x55d00bb014c0) at ../sysdeps/unix/sysv/linux/futex-internal.h:205
#1  do_futex_wait (sem=sem@entry=0x55d00bb014c0, abstime=0x0) at sem_waitcommon.c:111
#2  0x00007f430c6aa5e4 in __new_sem_wait_slow (sem=0x55d00bb014c0, abstime=0x0) at sem_waitcommon.c:181
#3  0x000055d00994d2d5 in PyThread_acquire_lock_timed () at ../Python/thread_pthread.h:352
#4  0x000055d009a4dcec in tstate_delete_common () at ../Python/pystate.c:418
#5  0x000055d009a4dd88 in PyThreadState_DeleteCurrent () at ../Python/pystate.c:457
#6  0x000055d009a482a4 in t_bootstrap () at ../Modules/_threadmodule.c:1027
#7  0x00007f430c6a2494 in start_thread (arg=0x7f4302ffd700) at pthread_create.c:333
#8  0x00007f430b895acf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97

I attached the full stacktrace of the 18 threads.

I am not sure whether we shouldn't try to take the GIL while finalizing, or whether I somehow just happened to run into a thread that acquired the GIL without releasing it.

The Python version is 3.5.3.

I kept the problematic process running and can extract any information you may want from it.
msg339111 - (view) Author: Inada Naoki (inada.naoki) * (Python committer) Date: 2019-03-29 13:40
Can you reproduce it on 3.7 or master branch?

Python 3.5 is security fix only mode now.
msg339116 - (view) Author: Remy Noel (mocramis) Date: 2019-03-29 14:20
The bug happens about once every two weeks on a script that is fired more than 10K times a day.

Sadly, I can't update the whole production environment to try it on the latest version (and I was unable to trigger the bug myself).

I was hoping we could find inconsistencies in the hanging process that could lead to clues about the origin of the error.
msg339118 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-03-29 15:01
As Inada-san indicated, the problem might be resolved already.  So without the option of reproducing the problem, it will be hard to resolve this.

Here's some information you could provide that might help narrow down the scope a bit:

* what (stdlib/third-party/internal) modules are you using, particularly extension modules?
* how many threads are you using at once?
* how many atexit handlers do you have and what are they each doing?
msg339120 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-03-29 15:05

* are any of your threads daemon threads?
msg339125 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-03-29 16:15
* Are any signals (or signal handlers) involved?
* Are you monkeypatching any builtins, builtin/stdlib modules (or any parts of those)?

Keep in mind that a number of bugs have been fixed in later releases related to the various things I've asked about.  For instance, see issue #30703.
msg339132 - (view) Author: Remy Noel (mocramis) Date: 2019-03-29 18:47
std modules: atexit, json, os, signal, socket, ssl, subprocess, sys, time, threading, xmlrpc.client
The non-standard extensions: psutil and PIL (only for image encoding, though).

The number of threads may vary, but I have around 20 threads at any given time. Most of them are indeed daemon threads.

I have one atexit handler: it executes a few subprocesses and performs a few xmlrpc calls, then exits. In this case, the handler got fully executed.

There are signal handlers, but none of them got called.

No monkeypatching is involved :)

I only browsed the patches up to the 3.5 head (I guess I lacked the courage to go up to 3.7).

I tried to write a reproduction case, but I failed to trigger the error. Of course, I will try to improve it if I get a clue about a way to increase the likelihood of the problem.
Sadly, all I have right now is a process I can attach to.
msg339133 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-03-29 18:50
At this point I think it's likely that the problem relates to how daemon threads are handled during runtime finalization.

What normally happens in the main thread of the "python3" executable is this:

1. Python runtime initializes
2. main interpreter initializes
3. the requested code is executed (in this case via PyRun_SimpleFileExFlags)
4. the runtime is finalized
   a. wait for all non-daemon threads to finish
   b. call registered atexit funcs
   c. mark the runtime as finalizing
   d. ...

From the stack trace you gave, the main thread is definitely past step 4c in the runtime finalization process.

Note the following:

* marking the runtime as finalizing is meant to cause all remaining daemon threads to exit the next time they take the GIL
* further, runtime finalization assumes that all daemon threads will have finished soon after step 4c
* not all C-API functions for acquiring the GIL actually cause the current thread to exit if the runtime is finalizing (see below)
* step 4a applies only to the main interpreter; using subinterpreters complicates the situation a little (threads get finalized with each subinterpreter later on)
* any thread created in an atexit func (4b) can cause problems

These cause the thread to exit if the runtime is finalizing:

* PyEval_RestoreThread()
* PyEval_EvalFrameEx()  (the eval loop)

These do not cause the thread to exit if the runtime is finalizing:

* PyEval_InitThreads()
* PyEval_ReInitThreads()
* PyEval_AcquireLock()
* PyEval_AcquireThread()

Regardless, from what you've reported it looks like the following is happening:

m1. main thread starts
m2. thread A (daemon) created
m3. thread B (daemon) created (by main thread or thread A)
m4. code in main thread raises SystemExit
m5. the Python runtime begins finalization (incl. steps 4a-4c above)
m6. the main thread starts cleaning up the main interpreter
m7. it starts cleaning up the main interpreter's threads
m8. it acquires the "head" lock
m9. it starts cleaning up the frame tied to one of those threads
m10. a socket object in that frame gets destroyed
m11. a warning is issued (presumably about an unclosed socket)

tB1. thread B (still running) acquires GIL
tB2. it does not release the GIL

m12. creating the warning causes a function to get called (socket.__repr__)
m13. this causes the main thread to try to acquire the GIL
m14. it blocks (waiting for thread B)

tA1. thread A (still running) finishes and starts cleaning itself up
tA2. it tries to acquire the "head" lock
tA3. it blocks (waiting for the main thread)


* at step m7 the code assumes that all the interpreter's threads have exited at that point
* with the verbose flag to "python3", the runtime will print a warning if any thread is still running when its (finalizing) interpreter tries to clean it up
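The m-step sequence above can be exercised with a sketch like the following; the script body, thread counts, and timings are my own construction (not taken from the report), and since the hang is a rare race on 3.5 this does not reliably reproduce it:

```python
# Hypothetical sketch of the shutdown ordering described above: daemon
# threads still taking/releasing the GIL while the main thread raises
# SystemExit with an unclosed socket, whose destruction during
# finalization emits a ResourceWarning (the socket.__repr__ call in the
# main-thread stack trace).
import subprocess
import sys
import textwrap

SCRIPT = textwrap.dedent("""
    import socket, sys, threading, time

    def worker():
        # Daemon thread that keeps taking and releasing the GIL.
        while True:
            time.sleep(0.001)

    for _ in range(20):
        threading.Thread(target=worker, daemon=True).start()

    s = socket.socket()   # left unclosed on purpose
    sys.exit(0)           # SystemExit -> Py_Finalize with live daemons
""")

def run_once(timeout=30):
    """Run the scenario in a subprocess; a hang would hit the timeout."""
    proc = subprocess.run([sys.executable, "-W", "default", "-c", SCRIPT],
                          timeout=timeout)
    return proc.returncode
```

On a Python where the race is fixed (or simply not hit), the subprocess exits with status 0; a hang would surface as a subprocess.TimeoutExpired instead.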
msg339134 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-03-29 18:59
Here are some things that would likely help:

* in the main thread stop your daemon threads if you hit SystemExit
* make sure daemon threads release/acquire the GIL frequently (so they notice finalization)
* make sure daemon threads otherwise exit promptly (no long running C code)
* stop using daemon threads
* use a newer version of Python (may be fixed there)

I'm going to take a look at master to see if it has a similar problem with daemon threads and runtime finalization.  If there is, I'll likely open a separate issue (and reference it here).
msg339137 - (view) Author: Remy Noel (mocramis) Date: 2019-03-29 19:22
Thank you very much for this detailed answer.

Can those "causes of exit" terminate the thread without releasing the GIL?
As far as I can tell, none of the threads seems to own the GIL (I only checked _PyThreadState_Current, though there might be a better way to find the GIL owner).
Therefore, the question is whether thread B is still alive after tB2 and, if so, whether we can find it (or whether we can understand why it exited without releasing the GIL).

Is there any variable I can check to dig into this further?
msg339139 - (view) Author: Remy Noel (mocramis) Date: 2019-03-29 19:27
Oh, also, I do not use any C extensions (apart from the ones I mentioned), so I do not acquire/release the GIL directly (a component of the standard library would do so).

The daemon threads mainly spend their time listening on sockets and running short subprocesses (all in pure Python).
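For daemon threads that listen on sockets, a timeout-based accept loop gives the thread frequent trips back through the eval loop, where it can both check a stop flag and react to interpreter finalization. This is a hedged sketch of that pattern, not code from the report:

```python
# Hypothetical sketch: a listener that uses a socket timeout instead of
# a fully blocking accept(), so the thread regularly re-enters the eval
# loop and can observe a stop event (or interpreter finalization).
import socket
import threading

def serve(stop_event, host="127.0.0.1", port=0):
    with socket.socket() as srv:
        srv.bind((host, port))
        srv.listen(5)
        srv.settimeout(0.1)          # never block for more than 100 ms
        while not stop_event.is_set():
            try:
                conn, _ = srv.accept()
            except socket.timeout:
                continue             # periodic chance to notice shutdown
            with conn:
                conn.sendall(b"hello\n")
```

The trade-off is a small amount of wakeup overhead; selectors with a timeout would work the same way for threads multiplexing several sockets.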
msg339140 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-03-29 19:30
I've opened issue36475 for the two C-API functions that do not cause daemon threads to exit.
msg339141 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-03-29 19:56
Looking at the stack traces for all your threads (super helpful, BTW), I saw 4 groups:

* (1) main thread (blocked on GIL during runtime finalization)
   + Thread 1
* (12) blocked on GIL in call to PyEval_RestoreThread() (socket, select, time.sleep(), ...)
* (3) waiting for a connection on a socket (with GIL not held, presumably)
   + Thread 5
   + Thread 12
   + Thread 14
* (1) blocked on a threading.Lock
   + Thread 7
* (1) blocked on "head" lock when cleaning up after execution
   + Thread 18

So there's a third lock involved in this deadlock.  It isn't clear to me (without further digging) which thread actually holds the GIL.  I'd guess it's one of the last two (though it could be one of the 3 waiting on a socket).  However, in each of those cases I would not expect the GIL to be held at that point.

Regardless, the race likely involves the threading.Lock being held late in finalization.  It could be that you are not releasing it somewhere where you should be.
msg339146 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-03-29 20:41
I've also opened issues #36476 and #36477.
msg339147 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-03-29 20:48
@Remy, aside from the recommendations I've made, I'm not sure what else we can do to help.  Before we close the issue, I'd really like to ensure that one of those threads is still holding the GIL.  It would definitely be a problem if a thread exited while still holding the GIL.  The only way I can think of is to trace through the code. [1]

msg339187 - (view) Author: Remy Noel (mocramis) Date: 2019-03-30 10:16
Thanks for the advice and thorough analysis. I'll try to force thread shutdown from the cleanup callback, but I'd like to dig to the root of this issue if possible.

This is what the thread 7 python backtrace looks like:

(gdb) py-bt
Traceback (most recent call first):
  <built-in method acquire of _thread.lock object at remote 0x7f43088859b8>
  File "/usr/lib/python3.5/", line 293, in wait
  File "/usr/lib/python3.5/", line 549, in wait
    signaled = self._cond.wait(timeout)
  File "/usr/lib/python3.5/", line 849, in start
  File "...", line 44, in __init__

So we are basically spawning a thread and waiting for it to start (which will likely never happen). That seems like "normal" behaviour to me (from a programming standpoint, that is), but this may be another cause of never-terminating threads (unless this is also caused by the head lock and the thread is expected to spawn and release the lock even after finalizing).

Also, I have access to the process that I kept running. Is there any way for me to figure out which thread is currently holding the GIL? I just want to be sure I can't get this info myself before we close this ticket (at which point I will get rid of the culprit process).
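The thread-7 hang comes from Thread.start(), which blocks on an internal "started" event that is only set once the new thread actually begins running; if the runtime is already finalizing, the new thread never runs, so start() waits forever. One possible guard is sketched below (the helper name is mine; sys.is_finalizing() is available since Python 3.5):

```python
# Hypothetical guard: refuse to spawn a thread once the interpreter is
# finalizing, instead of letting Thread.start() block forever waiting
# for a bootstrap that will never run.
import sys
import threading

def start_thread_or_fail(target, *args, **kwargs):
    if sys.is_finalizing():
        raise RuntimeError("interpreter is shutting down; not spawning thread")
    t = threading.Thread(target=target, args=args, kwargs=kwargs, daemon=True)
    t.start()
    return t
```

Raising an exception here is a design choice; code running from cleanup paths could instead catch the RuntimeError and run the work inline or skip it.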
msg339947 - (view) Author: Remy Noel (mocramis) Date: 2019-04-11 09:15
@eric.snow Unless you confirm there is no way to figure out which thread is/was holding the GIL from a debugging session on the running process, I'll get rid of it at the end of the week.

Should I close the ticket then, or will you do it?

msg340847 - (view) Author: Remy Noel (mocramis) Date: 2019-04-25 15:56
Closing the ticket for now.
Date User Action Args
2019-04-25 15:56:45 mocramis set status: open -> closed
resolution: later
messages: + msg340847
stage: resolved
2019-04-11 09:15:14 mocramis set messages: + msg339947
2019-03-30 10:16:05 mocramis set messages: + msg339187
2019-03-29 20:48:18 eric.snow set messages: + msg339147
2019-03-29 20:41:32 eric.snow set messages: + msg339146
2019-03-29 19:56:27 eric.snow set messages: + msg339141
2019-03-29 19:30:01 eric.snow set messages: + msg339140
2019-03-29 19:27:32 mocramis set messages: + msg339139
2019-03-29 19:22:18 mocramis set messages: + msg339137
2019-03-29 18:59:07 eric.snow set messages: + msg339134
2019-03-29 18:50:39 eric.snow set messages: + msg339133
2019-03-29 18:47:32 mocramis set type: behavior ->
messages: + msg339132
2019-03-29 16:15:07 eric.snow set messages: + msg339125
2019-03-29 15:05:17 eric.snow set messages: + msg339120
2019-03-29 15:01:47 eric.snow set type: behavior
messages: + msg339118
nosy: + eric.snow
2019-03-29 14:20:51 mocramis set messages: + msg339116
2019-03-29 13:40:26 inada.naoki set nosy: + inada.naoki
messages: + msg339111
2019-03-29 10:45:19 mocramis create