
classification
Title: The python interpreter crashed with "_enter_buffered_busy"
Type: crash
Stage:
Components: Interpreter Core
Versions: Python 3.10

process
Status: open
Resolution:
Dependencies:
Superseder:
Assigned To:
Nosy List: Mark.Shannon, stestagg, xxm
Priority: normal
Keywords:

Created on 2020-12-22 10:25 by xxm, last changed 2022-04-11 14:59 by admin.

Messages (9)
msg383582 - (view) Author: Xinmeng Xia (xxm) Date: 2020-12-22 10:25
The following program works well in Python 2. However, it crashes in Python 3 (3.6-3.10) with the following error message.
Program:
============================================
import sys, time, threading

class test:
    def test(self):
        pass


class test1:
    def run(self):
        for i in range(0,10000000):
            connection = test()
            sys.stderr.write(' =_= ')


def testrun():
    client = test1()
    thread = threading.Thread(target=client.run, args=())
    thread.setDaemon(True)  # daemon thread: killed abruptly at interpreter shutdown
    thread.start()

    time.sleep(0.1)  # give the writer time to start before the main thread exits

testrun()
============================================

Error message:
------------------------------------------------------------------------------

=_=  =_=  =_=  =_=  =_=  ......  =_=  =_=  =_=  =_= 
Fatal Python error: _enter_buffered_busy: could not acquire lock for <_io.BufferedWriter name='<stderr>'> at interpreter shutdown, possibly due to daemon threads
Python runtime state: finalizing (tstate=0xd0c180)

Current thread 0x00007f08a638f700 (most recent call first):
<no Python frame>
Aborted (core dumped)
------------------------------------------------------------------------------

When I remove "time.sleep(0.1)", "thread.setDaemon(True)", "sys.stderr.write(' =_= ')", or the "for i in range(0,10000000)" loop, the Python interpreter seems to work well.
msg383609 - (view) Author: Steve Stagg (stestagg) Date: 2020-12-22 19:49
Minimal test case:

====
import sys, threading

def run():
    for i in range(10000000):
        sys.stderr.write(' =.= ')

if __name__ == '__main__':
    threading.Thread(target=run, daemon=True).start()
====

I think this is expected behaviour.  My knowledge isn't complete here, but it's something like this:

* During shutdown, the daemon threads are aborted.
* In your example, the thread is very likely to be busy doing IO, and so holding the IO lock.
* The abort you're seeing is an explicit check/abort to avoid a deadlock (https://bugs.python.org/issue23309).
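
For contrast, a minimal sketch (an illustrative addition, not from the report) of the same writer without daemon teardown: with a non-daemon thread, or an explicit join, shutdown waits for the writer to release the lock and no abort occurs:

====
import sys, threading

def run():
    for i in range(10000000):
        sys.stderr.write(' =.= ')

if __name__ == '__main__':
    # Non-daemon: interpreter shutdown joins this thread, so it can
    # never be killed while holding the stderr buffer lock.
    t = threading.Thread(target=run)
    t.start()
    t.join()  # explicit here; threading would join it anyway at exit
====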
msg383628 - (view) Author: Xinmeng Xia (xxm) Date: 2020-12-23 02:05
Thanks for your kind explanation! Now I understand the cause of this core dump. Considering it does not cause a core dump in Python 2.x, I am wondering: should we raise an exception that can be caught, rather than a "core dump"?
msg383913 - (view) Author: Steve Stagg (stestagg) Date: 2020-12-28 18:48
I think the problem here is that the issue can only really be detected late on during interpreter shutdown.

This makes recovery very hard to do.  Plus, the thread termination has left shared state in an unmanaged condition, so it's super dangerous to re-enter Python again.
msg383953 - (view) Author: Xinmeng Xia (xxm) Date: 2020-12-29 03:57
Could we try to limit the number of threads or the amount of state or something? I mean, if we set the parameter of "range" here to 1000 or less, for example, the crash no longer happens. I think the parser cannot handle such a heavy loop, so it crashes.
msg384007 - (view) Author: Steve Stagg (stestagg) Date: 2020-12-29 14:28
It's one of those ugly multithreading issues that's really hard to reason about, unfortunately.

In this case, it's not the size of the loop so much as that you've discovered a way to make it very likely that the background thread is doing IO (and holding the IO lock) during shutdown.

Here's an example that reproduces the abort for me (again, it's multithreading, so you may have a different experience) with a smaller range value:

---
import sys, threading

def run():
    for i in range(100):
        sys.stderr.write(' =.= ' * 10000)

if __name__ == '__main__':
    threading.Thread(target=run, daemon=True).start()
---

The problem with daemon threads is that they get killed fairly suddenly, without much ability to correct bad state during shutdown, so any fix here would likely involve revisiting the thread termination code linked in the issue above.

There may be a fix possible, but it's going to be a complex thread-state management fix, not just a limit on loop counts, unfortunately.
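
At the application level, one workaround is to give the daemon thread a cooperative exit point and drain it before teardown. A minimal sketch, assuming a `threading.Event` fits the workload (the `stop` event and `shutdown` helper are illustrative names, not from the report, and this is a user-side workaround rather than the interpreter-side fix described above):

---
import atexit, sys, threading

stop = threading.Event()

def run():
    # Check for shutdown between writes, so the thread is never
    # killed mid-write while holding the stderr buffer lock.
    while not stop.is_set():
        sys.stderr.write(' =.= ' * 100)

def shutdown(thread):
    stop.set()      # ask the worker to finish its current write
    thread.join()   # atexit callbacks run before daemon threads are killed

if __name__ == '__main__':
    worker = threading.Thread(target=run, daemon=True)
    worker.start()
    atexit.register(shutdown, worker)
---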
msg384045 - (view) Author: Xinmeng Xia (xxm) Date: 2020-12-30 02:29
Thank you for your patient reply. I see now. I hope that someone can figure out a good way to fix this problem.
msg385172 - (view) Author: Xinmeng Xia (xxm) Date: 2021-01-18 04:20
It seems that this bug won't be fixed. Should this issue be closed now?
msg385191 - (view) Author: Mark Shannon (Mark.Shannon) * (Python committer) Date: 2021-01-18 11:40
Please leave the issue open. This is a real bug.

It may not be fixed right now, but that doesn't mean it won't ever be fixed.
History
Date                 User          Action  Args
2022-04-11 14:59:39  admin         set     github: 86883
2021-01-18 11:40:55  Mark.Shannon  set     nosy: + Mark.Shannon; messages: + msg385191
2021-01-18 04:20:15  xxm           set     messages: + msg385172
2020-12-30 02:29:52  xxm           set     messages: + msg384045
2020-12-29 14:28:34  stestagg      set     messages: + msg384007
2020-12-29 03:57:10  xxm           set     messages: + msg383953
2020-12-28 18:48:41  stestagg      set     messages: + msg383913
2020-12-23 02:05:22  xxm           set     messages: + msg383628
2020-12-22 19:49:15  stestagg      set     nosy: + stestagg; messages: + msg383609
2020-12-22 10:25:03  xxm           create