This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in the Python Developer's Guide.

Classification
Title: Add a cross-interpreter-safe mechanism to indicate that an object may be destroyed.
Stage: resolved
Components: Subinterpreters
Versions: Python 3.9

Process
Status: closed
Resolution: fixed
Assigned To: eric.snow
Nosy List: Johan Dahlin, Mark.Shannon, emilyemorehouse, eric.snow, koobs, maciej.szulik, nascheme, ncoghlan, pconnell, phsilva, pmpp, serhiy.storchaka, shprotx, steve.dower, vstinner, yselivanov
Priority: normal
Keywords: patch

Created on 2018-05-22 19:34 by eric.snow, last changed 2022-04-11 14:59 by admin. This issue is now closed.

Files
File name                Uploaded
managers.patch shprotx, 2019-05-18 10:19
take_gil.assert.patch shprotx, 2019-05-18 10:48
fini_crash.c pconnell, 2019-11-21 13:48
wrap_threadstate.diff pconnell, 2019-11-21 16:02
Pull Requests
URL         Status    Linked
PR 9334 closed eric.snow, 2018-09-15 18:25
PR 11617 merged eric.snow, 2019-01-18 20:43
PR 12024 merged eric.snow, 2019-02-24 23:56
PR 12062 merged eric.snow, 2019-02-27 01:19
PR 12159 merged vstinner, 2019-03-04 12:36
PR 12240 merged eric.snow, 2019-03-08 18:37
PR 12243 merged eric.snow, 2019-03-08 23:47
PR 12245 merged eric.snow, 2019-03-09 00:07
PR 12246 merged eric.snow, 2019-03-09 00:15
PR 12247 merged eric.snow, 2019-03-09 00:18
PR 12346 merged vstinner, 2019-03-15 14:06
PR 12360 merged eric.snow, 2019-03-15 22:33
PR 12665 closed Paul Monson, 2019-04-02 18:51
PR 12806 merged eric.snow, 2019-04-12 15:57
PR 12877 merged steve.dower, 2019-04-18 16:00
PR 13714 merged eric.snow, 2019-06-01 04:29
PR 13780 merged vstinner, 2019-06-03 15:30
Messages (82)
msg317333 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2018-05-22 19:34
In order to keep subinterpreters properly isolated, objects
from one interpreter should not be used in C-API calls in
another interpreter.  That is relatively straight-forward
except in one case: indicating that the other interpreter
doesn't need the object to exist any more (similar to
PyBuffer_Release() but more general).  I consider the
following solutions to be the most viable.  All make use
of refcounts to protect cross-interpreter usage (with incref
before sharing).

1. manually switch interpreters (new private function)
  a. acquire the GIL
  b. if refcount > 1 then decref and release the GIL
  c. switch
  d. new thread (or re-use dedicated one)
  e. decref
  f. kill thread
  g. switch back
  h. release the GIL
2. change pending call mechanism (see Py_AddPendingCall) to
   per-interpreter instead of global (add "interp" arg to
   signature of new private C-API function)
  a. queue a function that decrefs the object
3. new cross-interpreter-specific private C-API function
  a. queue the object for decref (a la Py_AddPendingCall)
     in owning interpreter

I favor #2, since it is more generally applicable.  #3 would
probably be built on #2 anyway.  #1 is relatively inefficient.
With #2, Py_AddPendingCall() would become a simple wrapper
around the new private function.
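
For illustration, here is a minimal sketch of the shape of option #2 from the borrowing interpreter's side.  It uses the existing public Py_AddPendingCall() (which queues the call globally); the proposed private variant would additionally take the owning PyInterpreterState.  The helper names below are illustrative only, not CPython API:

#include <Python.h>

/* Runs later in the owning interpreter, with the GIL held. */
static int
decref_pending(void *arg)
{
    Py_DECREF((PyObject *)arg);
    return 0;
}

/* Called by the borrowing interpreter once it no longer needs obj
   (obj was increfed before being shared).  With the proposed private
   function this would become something like
   _Py_AddPendingCall(owning_interp, decref_pending, obj). */
static int
schedule_foreign_decref(PyObject *obj)
{
    return Py_AddPendingCall(decref_pending, obj);
}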
msg317334 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2018-05-22 19:36
As a lesser (IMHO) alternative, we could also modify Py_DECREF
to respect a new "shared" marker of some sort (perhaps relative
to #33607), but that would probably still require one of the
refcount-based solutions (and add a branch to Py_DECREF).
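
For illustration only, a sketch of what that "branch in Py_DECREF" alternative might look like.  Every name below is hypothetical (no such marker or helpers exist in CPython), and the stub bodies are only there so the sketch compiles:

#include <Python.h>
#include <stdbool.h>

/* Hypothetical: check whatever "shared" marker the object would carry. */
static bool
object_is_shared(PyObject *op)
{
    (void)op;
    return false;   /* placeholder: no real marker exists */
}

/* Hypothetical: hand the decref off to the owning interpreter,
   e.g. via a per-interpreter pending call as in option #2. */
static void
queue_decref_in_owner(PyObject *op)
{
    (void)op;
}

static inline void
decref_maybe_shared(PyObject *op)
{
    if (object_is_shared(op)) {
        queue_decref_in_owner(op);
    }
    else {
        Py_DECREF(op);
    }
}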
msg317344 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2018-05-22 20:39
"That is relatively straight-forward except in one case: indicating that the other interpreter doesn't need the object to exist any more (similar to PyBuffer_Release() but more general)"

Why would an interpreter access an object from a different interpreter? Each interpreter should have its own object space, no?
msg317404 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2018-05-23 13:17
Adding a low level callback based mechanism to ask another interpreter to do work seems like a good way to go to me.

As an example of why that might be needed, consider the case of sharing a buffer exporting object with another subinterpreter: when the memoryview in the subinterpreter is destroyed, it needs to request that the buffer view be released in the source interpreter that actually owns the original object.
msg325481 - (view) Author: Neil Schemenauer (nascheme) * (Python committer) Date: 2018-09-16 11:54
I would suggest that sharing of objects between interpreters should be stamped out.  Could we have some #ifdef debug checking that would warn or assert so this doesn't happen?  I know currently we share a lot of objects.  However, in the long term, that does not seem like the correct design.  Instead, each interpreter should have its own objects and any passing or sharing should be tightly controlled.
msg325548 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2018-09-17 15:41
@Neil, we're definitely on the same page.  In fact, in a world where subinterpreters do not share a GIL, we can't ever use an object in one interpreter that was created in another (due to thread safety on refcounts).  The role of "tightly controlling" passing/sharing objects (for a very loose definition of "sharing") falls to the channels described in PEP 554. [1]

However, there are several circumstances where interpreters may collaborate that involve one holding a reference (but not using it) to an object owned by the other.  For instance, see PyBuffer_Release(). [2]  This issue is about addressing that situation safely.  It is definitely not about safely using objects from other interpreters.

[1] The low-level implementation, including channels, already exists in Modules/_xxsubinterpretersmodule.c.
[2] https://docs.python.org/3/c-api/buffer.html#c.PyBuffer_Release
msg336490 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-02-24 23:40
New changeset ef4ac967e2f3a9a18330cc6abe14adb4bc3d0465 by Eric Snow in branch 'master':
bpo-33608: Factor out a private, per-interpreter _Py_AddPendingCall(). (GH-11617)
https://github.com/python/cpython/commit/ef4ac967e2f3a9a18330cc6abe14adb4bc3d0465
msg336544 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-02-25 17:40
GH-11617 has introduced a slight performance regression which will need to be resolved.  If I can't find the problem right away then I'll probably temporarily revert the commit.

See https://mail.python.org/pipermail/python-dev/2019-February/156451.html.
msg336596 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-02-26 02:29
At least, it *might* be a performance regression.  Here are the two commits I tried:

after:  ef4ac967e2f3a9a18330cc6abe14adb4bc3d0465 (PR #11617, from this issue)
before:  463572c8beb59fd9d6850440af48a5c5f4c0c0c9 (the one right before)

After building each (a normal build, not debug), I ran the micro-benchmark Raymond referred to ("./python Tools/scripts/var_access_benchmark.py") multiple times.  There was enough variability between runs from the same commit that I'm uncertain at this point if there really is any performance regression.  For the most part the results between the two commits are really close.  Also, the results from the "after" commit fall within the variability I've seen between runs of the "before" commit.

I'm going to try with the "performance" suite (https://github.com/python/performance) to see if it shows any regression.
msg336602 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-02-26 04:10
Here's what I did (more or less):

$ python3 -m pip install --user perf
$ python3 -m perf system tune
$ git clone git@github.com:python/performance.git
$ git clone git@github.com:python/cpython.git
$ cd cpython
$ git checkout ef4ac967e2f3a9a18330cc6abe14adb4bc3d0465
$ ./configure
$ make -j8
$ ./python ../performance/pyperformance run --fast --output speed.after
$ git checkout HEAD^
$ make -j8
$ ./python ../performance/pyperformance run --fast --output speed.before
$ ./python ../performance/pyperformance compare speed.before speed.after -O table

Here are my results:

speed.before
============

Performance version: 0.7.0
Report on Linux-4.15.0-45-generic-x86_64-with-glibc2.26
Number of logical CPUs: 4
Start date: 2019-02-25 20:39:44.151699
End date: 2019-02-25 20:48:04.817516

speed.after
===========

Performance version: 0.7.0
Report on Linux-4.15.0-45-generic-x86_64-with-glibc2.26
Number of logical CPUs: 4
Start date: 2019-02-25 20:29:56.610702
End date: 2019-02-25 20:38:11.667461

+-------------------------+--------------+-------------+--------------+-----------------------+
| Benchmark               | speed.before | speed.after | Change       | Significance          |
+=========================+==============+=============+==============+=======================+
| 2to3                    | 447 ms       | 440 ms      | 1.02x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| chaos                   | 155 ms       | 156 ms      | 1.01x slower | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| crypto_pyaes            | 154 ms       | 152 ms      | 1.01x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| deltablue               | 10.7 ms      | 10.5 ms     | 1.01x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| django_template         | 177 ms       | 172 ms      | 1.03x faster | Significant (t=3.66)  |
+-------------------------+--------------+-------------+--------------+-----------------------+
| dulwich_log             | 105 ms       | 106 ms      | 1.01x slower | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| fannkuch                | 572 ms       | 573 ms      | 1.00x slower | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| float                   | 149 ms       | 146 ms      | 1.02x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| go                      | 351 ms       | 349 ms      | 1.00x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| hexiom                  | 14.6 ms      | 14.4 ms     | 1.01x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| html5lib                | 126 ms       | 122 ms      | 1.03x faster | Significant (t=3.46)  |
+-------------------------+--------------+-------------+--------------+-----------------------+
| json_dumps              | 17.6 ms      | 17.2 ms     | 1.02x faster | Significant (t=2.65)  |
+-------------------------+--------------+-------------+--------------+-----------------------+
| json_loads              | 36.4 us      | 36.1 us     | 1.01x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| logging_format          | 15.2 us      | 14.9 us     | 1.02x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| logging_silent          | 292 ns       | 296 ns      | 1.01x slower | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| logging_simple          | 13.7 us      | 13.4 us     | 1.02x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| mako                    | 22.9 ms      | 22.5 ms     | 1.02x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| meteor_contest          | 134 ms       | 134 ms      | 1.00x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| nbody                   | 157 ms       | 161 ms      | 1.03x slower | Significant (t=-3.85) |
+-------------------------+--------------+-------------+--------------+-----------------------+
| nqueens                 | 134 ms       | 132 ms      | 1.02x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| pathlib                 | 30.1 ms      | 31.0 ms     | 1.03x slower | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| pickle                  | 11.5 us      | 11.6 us     | 1.01x slower | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| pickle_dict             | 29.5 us      | 30.5 us     | 1.03x slower | Significant (t=-6.37) |
+-------------------------+--------------+-------------+--------------+-----------------------+
| pickle_list             | 4.54 us      | 4.58 us     | 1.01x slower | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| pickle_pure_python      | 652 us       | 651 us      | 1.00x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| pidigits                | 212 ms       | 215 ms      | 1.02x slower | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| python_startup          | 11.6 ms      | 11.5 ms     | 1.01x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| python_startup_no_site  | 8.07 ms      | 7.95 ms     | 1.01x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| raytrace                | 729 ms       | 722 ms      | 1.01x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| regex_compile           | 249 ms       | 247 ms      | 1.01x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| regex_dna               | 203 ms       | 204 ms      | 1.00x slower | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| regex_effbot            | 3.55 ms      | 3.55 ms     | 1.00x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| regex_v8                | 28.3 ms      | 28.3 ms     | 1.00x slower | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| richards                | 105 ms       | 105 ms      | 1.00x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| scimark_fft             | 429 ms       | 436 ms      | 1.01x slower | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| scimark_lu              | 238 ms       | 237 ms      | 1.00x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| scimark_monte_carlo     | 144 ms       | 139 ms      | 1.04x faster | Significant (t=3.61)  |
+-------------------------+--------------+-------------+--------------+-----------------------+
| scimark_sor             | 265 ms       | 260 ms      | 1.02x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| scimark_sparse_mat_mult | 5.41 ms      | 5.25 ms     | 1.03x faster | Significant (t=4.26)  |
+-------------------------+--------------+-------------+--------------+-----------------------+
| spectral_norm           | 174 ms       | 173 ms      | 1.01x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| sqlalchemy_declarative  | 216 ms       | 214 ms      | 1.01x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| sqlalchemy_imperative   | 41.6 ms      | 41.2 ms     | 1.01x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| sqlite_synth            | 3.99 us      | 3.91 us     | 1.02x faster | Significant (t=2.49)  |
+-------------------------+--------------+-------------+--------------+-----------------------+
| sympy_expand            | 559 ms       | 559 ms      | 1.00x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| sympy_integrate         | 25.2 ms      | 25.3 ms     | 1.00x slower | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| sympy_str               | 261 ms       | 260 ms      | 1.00x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| sympy_sum               | 136 ms       | 138 ms      | 1.01x slower | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| telco                   | 8.36 ms      | 8.32 ms     | 1.00x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| tornado_http            | 273 ms       | 271 ms      | 1.01x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| unpack_sequence         | 58.8 ns      | 60.0 ns     | 1.02x slower | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| unpickle                | 21.5 us      | 21.5 us     | 1.00x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| unpickle_list           | 5.60 us      | 5.55 us     | 1.01x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| unpickle_pure_python    | 497 us       | 481 us      | 1.03x faster | Significant (t=5.04)  |
+-------------------------+--------------+-------------+--------------+-----------------------+
| xml_etree_generate      | 141 ms       | 140 ms      | 1.01x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| xml_etree_iterparse     | 131 ms       | 133 ms      | 1.01x slower | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| xml_etree_parse         | 186 ms       | 187 ms      | 1.00x slower | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
| xml_etree_process       | 115 ms       | 113 ms      | 1.01x faster | Not significant       |
+-------------------------+--------------+-------------+--------------+-----------------------+
msg336603 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-02-26 04:11
...so it doesn't appear that my PR introduces a performance regression.
msg336677 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-02-26 14:37
> ...so it doesn't appear that my PR introduces a performance regression.

IMHO there is no performance regression at all. Just noise in the results, which don't come with a std dev.
msg336687 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-02-26 15:57
FYI, I have a couple of small follow-up changes to land before I close this issue.
msg336897 - (view) Author: David Bolen (db3l) * Date: 2019-03-01 07:20
I suspect changes for this issue may be creating test_io failures on my windows builders, most notably my x86 Windows 7 builder where test_io has been failing both attempts on almost all builds.  It fails with a lock failure during interpreter shutdown, and commit ef4ac967 appears the most plausible commit out of the set introduced at the first failing build on Feb 24.

See https://buildbot.python.org/all/#/builders/58/builds/1977 for the first failure.  test_io has failed both attempts on all but 3 of the subsequent 16 tests of the 3.x branch.

It might be worth noting that this builder is slow, so if there are timeouts involved or a race condition of any sort it might be triggering it more readily than other builders.  I do, however, see the same failures - albeit less frequently and not usually on both tries - on the Win8 and Win10 builders.

For what it's worth, one other oddity is that while test_multiprocessing* failures are relatively common on the Win7 builder during the first round of tests due to exceeding the timeout, they usually pass on the retry; over this same time frame they have begun failing - not due to timeout - even on the second attempt, which is unusual.  It might be coincidental, but similar failures are also showing up sporadically on the Win8/Win10 builders where such failures are not common at all.
msg336948 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-03-01 19:35
New changeset b05b711a2cef6c6c381e01069dedac372e0b9fb2 by Eric Snow in branch 'master':
bpo-33608: Use _Py_AddPendingCall() in _PyCrossInterpreterData_Release(). (gh-12024)
https://github.com/python/cpython/commit/b05b711a2cef6c6c381e01069dedac372e0b9fb2
msg336951 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-03-01 20:15
New changeset bda918bf65a88560ec453aaba0758a9c0d49b449 by Eric Snow in branch 'master':
bpo-33608: Simplify ceval's DISPATCH by hoisting eval_breaker ahead of time. (gh-12062)
https://github.com/python/cpython/commit/bda918bf65a88560ec453aaba0758a9c0d49b449
msg336952 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-03-01 20:22
I've merged the PR for hoisting eval_breaker before the ceval loop starts.  @Raymond, see if that addresses the performance regression you've been seeing.

I'll close this issue after I've sorted out the buildbot issues David mentioned (on his Windows builders).
msg336975 - (view) Author: David Bolen (db3l) * Date: 2019-03-02 00:27
If I can help with testing or builder access or anything just let me know.  It appears that I can pretty reliably trigger the error through just the individual tests (test_daemon_threads_shutdown_std{out,err}_deadlock) in isolation, which is definitely less tedious than a full buildbot run to test a change.

BTW, while not directly related since it was only just merged, and I won't pretend to have any real understanding of the changes here, I do have a question about PR 12024 ... it appears HEAD_UNLOCK is used twice in _PyInterpreterState_LookUpID.  Shouldn't one of those be HEAD_LOCK?
msg337075 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-03-04 08:22
> I suspect changes for this issue may be creating test_io failures on my windows builders, (...)

I created bpo-36177 to track this regression.
msg337096 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-03-04 12:16
The commit ef4ac967e2f3a9a18330cc6abe14adb4bc3d0465 introduced a regression in test.test_multiprocessing_spawn.WithThreadsTestManagerRestart.test_rapid_restart: bpo-36114.

Would it be possible to revert it until the bug is properly understood and fixed?
msg337110 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-03-04 13:12
> The commit ef4ac967e2f3a9a18330cc6abe14adb4bc3d0465 introduced a regression in test.test_multiprocessing_spawn.WithThreadsTestManagerRestart.test_rapid_restart: bpo-36114.

There is a very similar bug on Windows: bpo-36116.

I'm now also quite confident that bpo-36177 is also a regression caused by this issue.
msg337115 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-03-04 13:21
New changeset 4d61e6e3b802399be62a521d6fa785698cb670b5 by Victor Stinner in branch 'master':
Revert: bpo-33608: Factor out a private, per-interpreter _Py_AddPendingCall(). (GH-11617) (GH-12159)
https://github.com/python/cpython/commit/4d61e6e3b802399be62a521d6fa785698cb670b5
msg337116 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-03-04 13:21
Hi Eric, I'm sorry but I had to revert your recent work. It introduced regressions at least in test_io and test_multiprocessing_spawn on Windows and FreeBSD. I simply don't have the bandwidth to investigate this regression yet, whereas we really need the CI to remain stable to catch other regressions (the master branch received multiple new features recently, and some other regressions came in that way ;-)).

test_multiprocessing_spawn is *very* hard to reproduce on FreeBSD, but I can reliably reproduce it. It just takes time. The issue is a crash producing a coredump. I consider that the bug is important enough to justify a revert.

The revert is an opportunity to sit down and track the root cause rather than rushing to push a "temporary" quickfix. This bug seems to be very deep in the Python internals: thread state during Python shutdown. So it will take time to fully understand it and fix it (or redesign your recent changes, I don't know).

Tell me if you need my help to reproduce the bug. The bug has been seen on FreeBSD but also Windows:

* test_multiprocessing_spawn started to produce coredumps on FreeBSD: https://bugs.python.org/issue36114
* test_multiprocessing_spawn started to fail randomly on Windows: https://bugs.python.org/issue36116
* test_io started to fail randomly on Windows: https://bugs.python.org/issue36177

-- The Night's Watch
msg337117 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-03-04 13:25
For curious people, Pablo Galindo spent a few hours to investigate https://bugs.python.org/issue36114 and I spent a few more hours to finish the analysis. The bug indirectly crashed my laptop :-)
https://twitter.com/VictorStinner/status/1102528982036201478

That's what I mean by "I don't have the bandwidth": we already spent hours to identify the regression ;-)
msg337159 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-03-04 23:58
That's okay, Victor.  Thanks for jumping on this.  I'll take a look when I get a chance.
msg337179 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-03-05 11:23
> That's okay, Victor.  Thanks for jumping on this.  I'll take a look when I get a chance.

From what I saw, your first commit was enough to reproduce the crash.

If I recall correctly, Antoine Pitrou modified the GIL so that threads exit immediately when Py_Finalize() is called. I'm thinking of:

void
PyEval_RestoreThread(PyThreadState *tstate)
{
    ...
    take_gil(tstate);
    /* _Py_Finalizing is protected by the GIL */
    if (_Py_IsFinalizing() && !_Py_CURRENTLY_FINALIZING(tstate)) {
        drop_gil(tstate);
        PyThread_exit_thread();
        Py_UNREACHABLE();
    }
    ...
    PyThreadState_Swap(tstate);
}

Problem: this code uses tstate, whereas the crash occurred because tstate pointed to freed memory:

"""
Thread 1 got the crash

(gdb) p *tstate
$3 = {
  prev = 0xdbdbdbdbdbdbdbdb,
  next = 0xdbdbdbdbdbdbdbdb,
  interp = 0xdbdbdbdbdbdbdbdb,
  ...
}

...

Thread 1 (LWP 100696):
#0  0x0000000000368210 in take_gil (tstate=0x8027e2050) at Python/ceval_gil.h:216
#1  0x0000000000368a94 in PyEval_RestoreThread (tstate=0x8027e2050) at Python/ceval.c:281
...
"""

https://bugs.python.org/issue36114#msg337090

When this crash occurred, Py_Finalize() already completed in the main thread!

"""
void _Py_NO_RETURN
Py_Exit(int sts)
{
    if (Py_FinalizeEx() < 0) {  /* <==== DONE! */
        sts = 120;
    }

    exit(sts);    /* <=============== Crash occurred here! */
}
"""

Py_Finalize() is supposed to wait for threads before deleting Python thread states:

"""
int
Py_FinalizeEx(void)
{
    ...

    /* The interpreter is still entirely intact at this point, and the
     * exit funcs may be relying on that.  In particular, if some thread
     * or exit func is still waiting to do an import, the import machinery
     * expects Py_IsInitialized() to return true.  So don't say the
     * interpreter is uninitialized until after the exit funcs have run.
     * Note that Threading.py uses an exit func to do a join on all the
     * threads created thru it, so this also protects pending imports in
     * the threads created via Threading.
     */
    call_py_exitfuncs(interp);

    ...

    /* Remaining threads (e.g. daemon threads) will automatically exit
       after taking the GIL (in PyEval_RestoreThread()). */
    _PyRuntime.finalizing = tstate;
    _PyRuntime.initialized = 0;
    _PyRuntime.core_initialized = 0;
    ...

    /* Delete current thread. After this, many C API calls become crashy. */
    PyThreadState_Swap(NULL);

    PyInterpreterState_Delete(interp);

    ...
}
"""

The real problem for years has been *daemon threads* which... BY DESIGN... remain alive after Py_Finalize() exits! But as I explained, they must exit as soon as they attempt to take the GIL.
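
To make the failure mode concrete, here is a minimal standalone embedding sketch (my own illustration, in the spirit of the fini_crash.c attached to this issue, not the attached file itself): a second C thread re-acquires the GIL after the main thread has finalized the interpreter, so PyEval_RestoreThread() reads a thread state that finalization already freed.

#include <Python.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *
other_thread(void *arg)
{
    /* Create a thread state for this C thread and take the GIL. */
    PyGILState_STATE g = PyGILState_Ensure();
    printf("OTHER: acquired GIL\n");

    /* Release the GIL but keep the thread state around. */
    PyThreadState *ts = PyEval_SaveThread();
    printf("OTHER: released GIL\n");

    sleep(2);  /* meanwhile the main thread runs Py_FinalizeEx() */

    /* ts was freed by finalization (zapthreads), so this reads freed
       memory inside take_gil() -- the crash discussed above. */
    printf("OTHER: attempt to acquire GIL...\n");
    PyEval_RestoreThread(ts);
    PyGILState_Release(g);   /* never reached once the crash happens */
    return NULL;
}

int
main(void)
{
    pthread_t t;

    Py_Initialize();
    PyThreadState *main_ts = PyEval_SaveThread();  /* drop the GIL */

    pthread_create(&t, NULL, other_thread, NULL);
    sleep(1);  /* let the other thread acquire and release the GIL */

    PyEval_RestoreThread(main_ts);  /* re-take the GIL in the main thread */
    Py_FinalizeEx();                /* frees all remaining thread states */
    printf("MAIN: interpreter finalized\n");

    pthread_join(t, NULL);          /* the crash happens before this returns */
    return 0;
}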
msg337218 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-03-05 16:08
Thanks for taking a look Victor!  That info is helpful.  It will likely be a few days before I can sort this out.

Once I have addressed the problem I'll re-merge.  I plan on using the "buildbot-custom" branch to make sure the buildbots are happy with the change before making that PR.
msg337235 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-03-05 18:03
The bug is hard to reproduce even manually. I can test a PR for you once
it's ready.
msg337554 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-03-09 05:47
New changeset 5be45a6105d656c551adeee7770afdc3b806fbb5 by Eric Snow in branch 'master':
bpo-33608: Minor cleanup related to pending calls. (gh-12247)
https://github.com/python/cpython/commit/5be45a6105d656c551adeee7770afdc3b806fbb5
msg337557 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-03-09 06:44
New changeset 8479a3426eb7d1840473f7788e639954363ed37e by Eric Snow in branch 'master':
bpo-33608: Make sure locks in the runtime are properly re-created.  (gh-12245)
https://github.com/python/cpython/commit/8479a3426eb7d1840473f7788e639954363ed37e
msg337996 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-03-15 15:04
New changeset e3f4070aee6f2d489416fdcafd51d6b04d661919 by Victor Stinner in branch 'master':
bpo-33608: Fix PyEval_InitThreads() warning (GH-12346)
https://github.com/python/cpython/commit/e3f4070aee6f2d489416fdcafd51d6b04d661919
msg338039 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-03-15 21:47
New changeset 842a2f07f2f08a935ef470bfdaeef40f87490cfc by Eric Snow in branch 'master':
bpo-33608: Deal with pending calls relative to runtime shutdown. (gh-12246)
https://github.com/python/cpython/commit/842a2f07f2f08a935ef470bfdaeef40f87490cfc
msg340057 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-04-12 15:18
New changeset f13c5c8b9401a9dc19e95d8b420ee100ac022208 by Eric Snow in branch 'master':
bpo-33608: Factor out a private, per-interpreter _Py_AddPendingCall(). (gh-12360)
https://github.com/python/cpython/commit/f13c5c8b9401a9dc19e95d8b420ee100ac022208
msg340069 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-04-12 15:41
I have bad news for you Eric: I'm able again to reproduce the crash at commit f13c5c8b9401a9dc19e95d8b420ee100ac022208.


vstinner@freebsd$ ./python -m test --matchfile=bisect5 test_multiprocessing_spawn --fail-env-changed -F
Run tests sequentially
0:00:00 load avg: 0.69 [  1] test_multiprocessing_spawn
0:00:06 load avg: 0.80 [  2] test_multiprocessing_spawn
0:00:12 load avg: 1.19 [  3] test_multiprocessing_spawn
...
0:01:55 load avg: 1.48 [ 21] test_multiprocessing_spawn
0:02:01 load avg: 1.53 [ 22] test_multiprocessing_spawn
0:02:08 load avg: 1.29 [ 23] test_multiprocessing_spawn
0:02:17 load avg: 1.51 [ 24] test_multiprocessing_spawn
0:02:27 load avg: 2.27 [ 25] test_multiprocessing_spawn
0:02:38 load avg: 3.14 [ 26] test_multiprocessing_spawn
0:02:48 load avg: 3.51 [ 27] test_multiprocessing_spawn
Warning -- files was modified by test_multiprocessing_spawn
  Before: []
  After:  ['python.core'] 
test_multiprocessing_spawn failed (env changed)

== Tests result: ENV CHANGED ==

All 26 tests OK.

1 test altered the execution environment:
    test_multiprocessing_spawn

Total duration: 2 min 59 sec
Tests result: ENV CHANGED


Note: Sorry for not testing earlier; after one long week, I didn't manage to catch up with my very long list of unread emails.

I don't know what should be done. Revert? I don't have the bandwidth to investigate this crash.
msg340070 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-04-12 15:42
I ran "./python -m test --matchfile=bisect5 test_multiprocessing_spawn --fail-env-changed -F" 4 times in parallel: in less than 5 minutes (in fact, I didn't look carefully at the terminal, maybe it was faster), I got 3 core dumps :-(
msg340071 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-04-12 15:49
Thanks for checking, Victor.  Don't feel bad about your results, nor about not checking sooner. :)  We'll get this sorted out.

For now I'll revert.  This is not code that changes very often, so there isn't much benefit to keeping it merged.  Testing against a separate branch is just as easy.

Could you point me at an image for that VM or instructions on how to reproduce it?  I hate having to bother you to test my changes! :)
msg340075 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-04-12 16:02
Eric Snow:
> For now I'll revert.  This is not code that changes very often, so there isn't much benefit to keeping it merged.  Testing against a separate branch is just as easy.

Again, Python shutdown is *really* fragile. Last time I tried to "enhance" it, I also introduced random regressions and so I had to revert my changes.

Old info about the crash, should still be relevant:
https://bugs.python.org/issue36114#msg337090

> Could you point me at an immage for that VM or instructions on how to reproduce it?  I hate having to bother you to test my changes! :)

*In theory*, you should be able to reproduce the crash on any platform. But in my experience, bugs which involve multiple threads are simply "more likely" on FreeBSD because FreeBSD manages threads very differently than Linux. Sometimes, a bug can only be reproduced on one specific FreeBSD computer, but once the root issue has been identified, we start to be able to trigger the crash reliably on other platforms (like Linux).

My procedure to reproduce the crash on FreeBSD:
https://bugs.python.org/issue36114#msg337092

I'm using FreeBSD 12.0 RELEASE VM hosted on Linux. My FreeBSD is not customized in any way.

On modern Linux distributions, coredumps are no longer written in the current directory but handled by a system service like ABRT on Fedora. For this reason, Python test runner can "miss" crashes, especially in child processes run by tests (not directly in the process used to run the test).

To get a coredump in the current directory on Linux, you can use:

sudo bash -c 'echo "%e.%p" > /proc/sys/kernel/core_pattern'

Manual test:

$ ./python -c 'import ctypes; ctypes.string_at(0)'
Segmentation fault (core dumped)

vstinner@apu$ git status
...
Untracked files:
	python.18343
...

Say hello to python.18343 coredump!


Usually, running the command which triggers the crash multiple times in parallel (in different terminals, using screen, etc.) makes the crash more likely since it stresses the system.

Sometimes, I run the Python test suite in parallel to stress the system even more.

The goal of the game is to trigger a race condition which depends on time. Stressing the system helps to "randomize" timings.
msg340077 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-04-12 16:20
New changeset b75b1a3504a0cea6fac6ecba44c10b2629577025 by Eric Snow in branch 'master':
bpo-33608: Revert "Factor out a private, per-interpreter _Py_AddPendingCall()." (gh-12806)
https://github.com/python/cpython/commit/b75b1a3504a0cea6fac6ecba44c10b2629577025
msg340079 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-04-12 16:26
I tried but failed to reproduce the crash on Linux!?

$ sudo bash -c 'echo "%e.%p" > /proc/sys/kernel/core_pattern'
$ ./python -m test --matchfile=bisect5 test_multiprocessing_spawn --fail-env-changed -F
# wait 5 min
^C
$ ./python -m test --matchfile=bisect5 -j0 test_multiprocessing_spawn --fail-env-changed -F  # I added -j0
# wait 5 min
^C
$ ./python -m test --matchfile=bisect5 -j0 test_multiprocessing_spawn --fail-env-changed -F  # I added -j0
# wait 5 min
^C

No coredump seen...
msg340082 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-04-12 16:43
FYI AMD64 FreeBSD CURRENT Shared 3.x failed at commit f13c5c8b9401a9dc19e95d8b420ee100ac022208:
https://buildbot.python.org/all/#/builders/168/builds/913

But this issue has already been fixed: Eric reverted his change.
msg340133 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-04-12 23:41
@Victor, I set up a FreeBSD 12.0 VM (in Hyper-v) and made sure core files were getting generated for segfaults.  Then I cloned the cpython repo, built it (using GCC), and ran regrtest as you recommended.  It generated no core files after half an hour.  I adjusted the VM down to 1 CPU from 4 and there were no segfaults over an hour and a half of running those 4 test loops.  So I've set the VM to 10% of a CPU and still have gotten no core files after over half an hour.  The load average has been hovering between 5 and 6.  I guess I'm not starving the VM enough. :)

Any ideas of how far I need to throttle the VM?  Is there more than just CPU that I need to limit?
msg340139 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-04-13 00:54
> Any ideas of how far I need to throttle the VM?  Is there more than just CPU that I need to limit?

I don't know how to make the race condition more likely. I'm not sure that starving the CPU helps. Maybe try the opposite: add more CPUs and reduce the number of tests run in parallel.

Did you test commit f13c5c8b9401a9dc19e95d8b420ee100ac022208?
msg340143 - (view) Author: David Bolen (db3l) * Date: 2019-04-13 02:02
Eric, I'm also seeing the same Win7 and Win10 worker failures with commit b75b1a350 as last time (test_multiprocessing_spawn on both, and test_io on Win7).

For test_multiprocessing_spawn, it fails differently than Victor since no core file is generated, but I assume it's related in terms of child process termination.  See for example https://buildbot.python.org/all/#/builders/3/builds/2390 for Win10, where test_mymanager_context fails with:

======================================================================
FAIL: test_mymanager_context (test.test_multiprocessing_spawn.WithManagerTestMyManager)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:\buildarea\3.x.bolen-windows10\build\lib\test\_test_multiprocessing.py", line 2747, in test_mymanager_context
    self.assertIn(manager._process.exitcode, (0, -signal.SIGTERM))
AssertionError: 3221225477 not found in (0, -15)
----------------------------------------------------------------------

(3221225477 is C0000005 which I believe is an access violation)

For some reason, the Windows 7 worker didn't get a test run while your commit was live, but I can reproduce the same error manually.

For test_io, as before, it's a shutdown lock failure:

======================================================================
FAIL: test_daemon_threads_shutdown_stdout_deadlock (test.test_io.CMiscIOTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "D:\cygwin\home\db3l\python.test\lib\test\test_io.py", line 4157, in test_daemon_threads_shutdown_stdout_deadlock
    self.check_daemon_threads_shutdown_deadlock('stdout')
  File "D:\cygwin\home\db3l\python.test\lib\test\test_io.py", line 4148, in check_daemon_threads_shutdown_deadlock
    self.assertIn("Fatal Python error: could not acquire lock "
AssertionError: "Fatal Python error: could not acquire lock for <_io.BufferedWriter name='<stdout>'> at interpreter shutdown, possibly due to daemon threads" not found in ''

----------------------------------------------------------------------

In manual attempts I have yet to be able to recreate the test_multiprocessing_spawn failure under Win10 but can usually manage a 25-50% failure rate under Win7 (which is much slower).  The test_io failure on Win7 however, appears to be more easily reproducible.

It's possible I/O is more critical than CPU, or perhaps its impact on latency; I seem to more easily exacerbate the test_multiprocessing_spawn failure rate by loading down the host disk than its CPU.  I also noticed that the Win10 failure was when test_io and test_multiprocessing_spawn overlapped.

While I'm guessing this should happen on any low powered Windows VM, if it would help, I could arrange remote access to the Win7 worker for you.  Or just test a change on your behalf.  In fairness, it's unlikely to be useful for any significant remote debugging but perhaps at least having somewhere you could test a change, even if just with print-based debugging, might help.  And while it might be an independent issue, the test_io failure rate appears to occur more reliably than test_multiprocessing_spawn.
msg340145 - (view) Author: David Bolen (db3l) * Date: 2019-04-13 04:35
I just noticed that my last message referenced the wrong commit.  My test failures were against commit f13c5c8b9401a9dc19e95d8b420ee100ac022208 (the same as Victor).
msg340666 - (view) Author: Steve Dower (steve.dower) * (Python committer) Date: 2019-04-22 18:13
New changeset 264490797ad936868c54b3d4ceb0343e7ba4be76 by Steve Dower in branch 'master':
bpo-33608: Normalize atomic macros so that they all expect an atomic struct (GH-12877)
https://github.com/python/cpython/commit/264490797ad936868c54b3d4ceb0343e7ba4be76
msg340768 - (view) Author: Kubilay Kocak (koobs) (Python triager) Date: 2019-04-24 11:05
@Eric Happy to give you SSH access to one of the FreeBSD buildbots that's experienced the crash if it helps. Just flick me a pubkey, I'm on IRC (koobs @ freenode / #python-dev)
msg342791 - (view) Author: Pavel Kostyuchenko (shprotx) Date: 2019-05-18 10:19
I was able to reproduce the error with version f13c5c8b9401a9dc19e95d8b420ee100ac022208 on FreeBSD 12.0 VM. The error seems to be caused not by those changes, but by lack of synchronization in the multiprocessing.managers.Server.
The failure happens when running "test_shared_memory_SharedMemoryManager_basics" with high CPU load and frequent interrupts, e.g. moving some window during the test. Mostly I used the "python -m test --fail-env-changed test_multiprocessing_spawn -m 'WithProcessesTestS[hu]*' -F" command to reproduce the crash.
By analyzing core dumps I deduced that the crash happens during this call from the parent test process:

class BaseManager(object):
    def _finalize_manager(process, address, authkey, state, _Client):
        ...
            try:
                conn = _Client(address, authkey=authkey)
                try:
                    dispatch(conn, None, 'shutdown')
                finally:
                    conn.close()
            except Exception:
                pass

Main thread in the multiprocessing child:

class Server(object):
    def serve_forever(self):
        ...
        try:
            accepter = threading.Thread(target=self.accepter)
            accepter.daemon = True
            accepter.start()
            try:
                while not self.stop_event.is_set():
                    self.stop_event.wait(1)
            except (KeyboardInterrupt, SystemExit):
                pass
        finally:
            ...
            sys.exit(0)  << main thread has finished and destroyed the interpreter

Worker thread in the multiprocessing child.
Locals:
File "/usr/home/user/cpython/Lib/multiprocessing/managers.py", line 214, in handle_request
    c.send(msg)
        self = <SharedMemoryServer(....)>
        funcname = 'shutdown'
        result = None
        request = (None, 'shutdown', (), {})
        ignore = None
        args = ()
        kwds = {}
        msg = ('#RETURN', None)

Listing:
class Server(object):
    def handle_request(self, c):
        ...
            try:
                result = func(c, *args, **kwds)  << calls Server.shutdown method
            except Exception:
                msg = ('#TRACEBACK', format_exc())
            else:
                msg = ('#RETURN', result)
        try:
            c.send(msg)  << crashes with SIGBUS in _send_bytes -> write -> take_gil -> SET_GIL_DROP_REQUEST(tstate->interp)
        except Exception as e:
            try:
                c.send(('#TRACEBACK', format_exc()))
            except Exception:
                pass
    ...
    def shutdown(self, c):
        ...
        try:
            util.debug('manager received shutdown message')
            c.send(('#RETURN', None))
        except:
            import traceback
            traceback.print_exc()
        finally:
            self.stop_event.set()

Worker thread is daemonic and is not terminated during the interpreter finalization, thus it might still be running and is terminated silently when the process exits. The connection (c) has different implementations on several platforms, so we cannot be sure whether the connection is closed during shutdown or not, whether the last "c.send(msg)" blocks until the end of the process, returns instantly, or fails inconsistently.
The error was there for a long time, but for two reasons it didn't cause much trouble:
- the race condition is hard to trigger;
- SET_GIL_DROP_REQUEST used to ignore the erroneous state of the interpreter, but the introduction of the tstate->interp argument by Eric made it manifest as SIGBUS on FreeBSD.

I haven't managed to find a nice clean test to reproduce the bug automatically. I suggest the changes to multiprocessing/managers.py in the attachment.
msg342792 - (view) Author: Pavel Kostyuchenko (shprotx) Date: 2019-05-18 10:48
Also it might be viable to add an assertion to verify that take_gil is not called with an uninitialized interpreter.
I used the changes in the attachment (take_gil.assert.patch), but they produced errors during test_tracemalloc with f13c5c8b9401a9dc19e95d8b420ee100ac022208. It happens because, during startup with invalid arguments, the interpreter is finalized with pymain_main->pymain_free->_PyRuntime_Finalize before the error is printed. However, the problem seems to be fixed for me in the latest revisions of the master branch, so I upload the diff against it.
msg344210 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-06-01 19:29
That's really helpful, Pavel!  Thanks for investigating so thoroughly.  I'm going to double check all the places I've made the assumption that "tstate" isn't NULL and likewise for "tstate->interp".

Is there an issue open for the bug in multiprocessing?  If not, would you mind opening one?
msg344211 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-06-01 19:32
I've made a few tweaks and Victor did some cleanup, so I'm going to try the PR again.  At first I'm also going to disable the _PyEval_FinishPendingCalls() call in _Py_FinalizeEx() and then enable it in a separate PR.

Also, I've opened bpo-37127 specifically to track the problem with _PyEval_FinishPendingCalls() during _Py_FinalizeEx(), to ensure that the disabled call doesn't slip through the cracks. :)
msg344240 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-06-01 21:39
New changeset 6a150bcaeb190d1731b38ab9c7a5d1a352847ddc by Eric Snow in branch 'master':
bpo-33608: Factor out a private, per-interpreter _Py_AddPendingCall(). (gh-13714)
https://github.com/python/cpython/commit/6a150bcaeb190d1731b38ab9c7a5d1a352847ddc
msg344244 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-06-01 23:16
So far so good. :)  I'll keep an eye on things and if the buildbots are still happy then I'll add back _PyEval_FinishPendingCalls() in _Py_FinalizeEx() in a few days.
msg344245 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-06-01 23:32
FYI, after merging that PR I realized that the COMPUTE_EVAL_BREAKER macro isn't quite right.  While the following scenario worked before, now it doesn't:

1. interpreter A: _PyEval_AddPendingCall() causes the global
   eval breaker to be set
2. interpreter B: the next pass through the eval loop uses
   COMPUTE_EVAL_BREAKER; it has no pending calls so the global
   eval breaker is unset
3. interpreter A: the next pass through the eval loop does not
   run the pending call because the eval breaker is no longer set

This really isn't a problem because the eval breaker is triggered for the GIL pretty frequently.  Furthermore, it won't be a problem once the GIL is per-interpreter (at which point we will switch to a per-interpreter eval breaker).

If it is important enough then I can fix it.  I even wrote up a solution. [1]  However, I'd rather leave it alone (hence no PR).  The alternatives are more complicated and the situation should be relatively short-lived.

FWIW, in addition to the solution I mentioned above, I tried a few other ways:

* have a per-interpreter eval breaker in addition to the global one
* have only a per-interpreter eval breaker (the ultimate objective)
* consolidate the pending calls for every interpreter every time
  UNSIGNAL_PENDING_CALLS and UNSIGNAL_ASYNC_EXC are used

However, each has performance penalties while the branch I created does not.  I try to be really careful when it comes to the performance of the eval loop. :)

[1] https://github.com/ericsnowcurrently/cpython/tree/eval-breaker-shared
msg344355 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-06-03 01:24
> So far so good. :)  I'll keep an eye on things and if the buildbots are still happy then I'll add back _PyEval_FinishPendingCalls() in _Py_FinalizeEx() in a few days.

I have no idea if it's related, but we started to see coredumps on FreeBSD:
https://bugs.python.org/issue37135
msg344372 - (view) Author: Kubilay Kocak (koobs) (Python triager) Date: 2019-06-03 02:38
Again, happy to provide ssh access to the koobs-freebsd-current worker to help analysis, just PM me ssh public keys on IRC (koobs @ #python-dev freenode)
msg344417 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-06-03 12:28
> So far so good. :)  I'll keep an eye on things and if the buildbots are still happy then I'll add back _PyEval_FinishPendingCalls() in _Py_FinalizeEx() in a few days.

Another crash, on Windows:
https://bugs.python.org/issue37143

It sounds like the exactly same bug is back: same crash on same buildbots.
msg344429 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-06-03 15:31
> It sounds like the exactly same bug is back: same crash on same buildbots.

Since it became clear to me that the change broke multiple buildbots and nobody is able to investigate the issue, I wrote PR 13780 to revert the change:
https://pythondev.readthedocs.io/ci.html#revert-on-fail
msg344434 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-06-03 15:52
Please wait on reverting.  I'm going to investigate.

@koobs, yeah, I'll send you a key.  Thanks! :)
msg344435 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-06-03 15:58
FWIW, the failures before were in test_io and test_multiprocessing_spawn, not test_asyncio.
msg344438 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-06-03 16:08
I had really hoped to get this in for 3.8, as I may need it.  However, realistically I doubt it will take just a few minutes to resolve this.  With the beta 1 targeted for today it makes sense to revert my change. :(

(I may still try to convince our fearless RM to allow it in beta 2, but we'll see.)
msg344441 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-06-03 16:13
> FWIW, the failures before were in test_io and test_multiprocessing_spawn,
not test_asyncio.

I am not aware of a test_asyncio issue related to this issue, only crashes
in multiprocessing. I bet on a similar crash than the previous attempt,
during shutdown.
msg344442 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-06-03 16:14
New changeset e225bebc1409bcf68db74a35ed3c31222883bf8f by Victor Stinner in branch 'master':
Revert "bpo-33608: Factor out a private, per-interpreter _Py_AddPendingCall(). (gh-13714)" (GH-13780)
https://github.com/python/cpython/commit/e225bebc1409bcf68db74a35ed3c31222883bf8f
msg344443 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-06-03 16:23
What annoys me the most in this issue is that I am unable to reproduce it. Even directly on the FreeBSD CURRENT buildbot, I failed to reproduce the crash even when I stressed the buildbot and the test.

On the other hand, when the CI runs the test suite, crashes are likely!?

These bugs are the worst.

IMHO we must make multiprocessing more deterministic. Python 3.8 is better, but there are still many known issues.
https://pythondev.readthedocs.io/unstable_tests.html

The test_venv multiprocessing test misuses the multiprocessing API; it emits a ResourceWarning.
msg344444 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-06-03 16:40
My bet is that multiprocessing triggers a bug in daemon threads. Multiprocessing is using and abusing daemon threads. For example, as I wrote, test_venv misuses the multiprocessing API and implicitly lets Python release (or maybe not!) "resources" where "resources" can be processes and threads, likely including daemon threads.

Eric: I would love to give you access to a VM where I can reproduce the bug but... I failed to reproduce the bug today :-(

I would suggest writing a stress test for daemon threads (a rough sketch follows at the end of this message):

* spawn daemon threads: 4x as many threads as CPUs, to stress the system
* each thread would sleep a random number of milliseconds and then execute a few pure Python instructions
* spawn these threads, wait a random number of milliseconds, and then "exit Python"

The race condition is that daemon threads may or may not run during Python finalization depending on the delay.

Maybe you can make the crash more likely by adding a sleep of a few *seconds* before the final exit. For example, at the exit of Py_RunMain().

Last time I saw the crash, it was a thread which used a Python structure even though the memory of the structure had already been freed (filled with a random pattern by the debug hooks on memory allocators).

Ah, either use a debug build of Python, or use PYTHONMALLOC=debug, or -X dev command line option, to fill freed memory with a specific pattern, to trigger the crash.

... Good luck.
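
Here is a rough sketch of that stress test as an embedding program, so the "exit Python" step is explicit.  The thread count, delays, and the bit of pure-Python work are arbitrary placeholders, not a definitive reproducer:

#include <Python.h>

static const char *stress_py =
    "import os, random, threading, time\n"
    "def work():\n"
    "    time.sleep(random.random() / 100)   # sleep a random few milliseconds\n"
    "    sum(range(1000))                    # a few pure-Python instructions\n"
    "for _ in range((os.cpu_count() or 1) * 4):\n"
    "    threading.Thread(target=work, daemon=True).start()\n"
    "time.sleep(random.random() / 100)       # wait a random delay before exiting\n";

int
main(void)
{
    Py_Initialize();
    if (PyRun_SimpleString(stress_py) != 0) {
        Py_FinalizeEx();
        return 1;
    }
    /* "Exit Python" while daemon threads may still be running: this is
       where the race with finalization is expected to show up.  Adding a
       sleep of a few seconds here, as suggested above, may make the
       daemon threads more likely to run during finalization. */
    return Py_FinalizeEx() < 0 ? 120 : 0;
}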
msg344445 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-06-03 16:46
Well, for this PR I actually disabled the execution of pending calls during runtime finalization.  I'd hoped it would help, but it did not. :(

Regardless, I'll need to look even more closely at what is different during runtime finalization with my PR.
msg344446 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-06-03 16:47
Also, thanks for the suggestions, Victor!
msg357169 - (view) Author: Phil Connell (pconnell) * Date: 2019-11-21 13:48
Based on Victor's info from https://bugs.python.org/issue36114#msg337090 I believe the crash is essentially what's reproduced in the attached program.

From the root of a (built) cpython clone run:

gcc -c -o fini_crash.o -IInclude -I. fini_crash.c && gcc -o fini_crash fini_crash.o libpython3.9.a -lcrypt -lpthread -ldl -lutil -lm && ./fini_crash

The output should be:

MAIN: allow other thread to execute                                                                                    
OTHER: acquired GIL                                                                                                    
OTHER: released GIL                                                                                                    
MAIN: interpreter finalized
OTHER: attempt to acquire GIL...crash!
[1]    266749 segmentation fault (core dumped)  ./fini_crash

And running it through valgrind:

$ valgrind --suppressions=Misc/valgrind-python.supp fini_crash
==266836== Memcheck, a memory error detector
==266836== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==266836== Using Valgrind-3.15.0 and LibVEX; rerun with -h for copyright info
==266836== Command: fini_crash                    
==266836==                                            
MAIN: allow other thread to execute                       
OTHER: acquired GIL                                
OTHER: released GIL                                                                                                    
MAIN: interpreter finalized
OTHER: attempt to acquire GIL...crash!                                                                                 
==266836== Thread 2:                                                                                                   
==266836== Invalid read of size 8                                                                                      
==266836==    at 0x15607D: PyEval_RestoreThread (ceval.c:389)                                                                                                                                                                                  
==266836==    by 0x15479F: evil_main (in /home/phconnel/dev/cpython/fini_crash)
==266836==    by 0x48B94CE: start_thread (in /usr/lib/libpthread-2.30.so)
==266836==    by 0x4B232D2: clone (in /usr/lib/libc-2.30.so)
==266836==  Address 0x4d17270 is 16 bytes inside a block of size 264 free'd
==266836==    at 0x48399AB: free (vg_replace_malloc.c:540)
==266836==    by 0x1773FF: tstate_delete_common (pystate.c:829)
==266836==    by 0x1773FF: _PyThreadState_Delete (pystate.c:848)
==266836==    by 0x1773FF: zapthreads (pystate.c:311)
==266836==    by 0x1773FF: PyInterpreterState_Delete (pystate.c:321)
==266836==    by 0x174920: finalize_interp_delete (pylifecycle.c:1242)
==266836==    by 0x174920: Py_FinalizeEx.part.0 (pylifecycle.c:1400)
==266836==    by 0x15487B: main (in /home/phconnel/dev/cpython/fini_crash)
==266836==  Block was alloc'd at
==266836==    at 0x483877F: malloc (vg_replace_malloc.c:309)
==266836==    by 0x178D7C: new_threadstate (pystate.c:557)
==266836==    by 0x178D7C: PyThreadState_New (pystate.c:629)
==266836==    by 0x178D7C: PyGILState_Ensure (pystate.c:1288)
==266836==    by 0x154759: evil_main (in /home/phconnel/dev/cpython/fini_crash)
==266836==    by 0x48B94CE: start_thread (in /usr/lib/libpthread-2.30.so)
==266836==    by 0x4B232D2: clone (in /usr/lib/libc-2.30.so)
==266836== 
==266836== Invalid read of size 8
==266836==    at 0x156081: PyEval_RestoreThread (ceval.c:389)
==266836==    by 0x15479F: evil_main (in /home/phconnel/dev/cpython/fini_crash)
==266836==    by 0x48B94CE: start_thread (in /usr/lib/libpthread-2.30.so)
==266836==    by 0x4B232D2: clone (in /usr/lib/libc-2.30.so)
==266836==  Address 0x4c3a0f0 is 16 bytes inside a block of size 2,960 free'd
==266836==    at 0x48399AB: free (vg_replace_malloc.c:540)
==266836==    by 0x174920: finalize_interp_delete (pylifecycle.c:1242)
==266836==    by 0x174920: Py_FinalizeEx.part.0 (pylifecycle.c:1400)
==266836==    by 0x15487B: main (in /home/phconnel/dev/cpython/fini_crash)
==266836==  Block was alloc'd at
==266836==    at 0x483877F: malloc (vg_replace_malloc.c:309)
==266836==    by 0x177153: PyInterpreterState_New (pystate.c:205)
==266836==    by 0x1732BF: pycore_create_interpreter (pylifecycle.c:526)
==266836==    by 0x1732BF: pyinit_config.constprop.0 (pylifecycle.c:695)
==266836==    by 0x1766B7: pyinit_core (pylifecycle.c:879)
==266836==    by 0x1766B7: Py_InitializeFromConfig (pylifecycle.c:1055)
==266836==    by 0x1766B7: Py_InitializeEx (pylifecycle.c:1093)
==266836==    by 0x154801: main (in /home/phconnel/dev/cpython/fini_crash)
==266836==
msg357170 - (view) Author: Phil Connell (pconnell) * Date: 2019-11-21 13:55
Just to summarise, I'm fairly sure this is exactly what Victor saw: a daemon thread attempts to reacquire the GIL via Py_END_ALLOW_THREADS after interpreter finalisation. Obviously the threadstate pointer held by the thread is then invalid...so we crash.
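
For reference, a stripped-down sketch of that sequence could look like the code below (this is an illustrative reconstruction, not the attached fini_crash.c; the thread function name and delays are made up):

    #include <Python.h>
    #include <pthread.h>
    #include <unistd.h>

    static void *other_thread(void *arg)
    {
        PyGILState_STATE g = PyGILState_Ensure();   /* creates a PyThreadState */
        Py_BEGIN_ALLOW_THREADS                      /* releases the GIL */
        sleep(2);                                   /* main thread finalizes meanwhile */
        Py_END_ALLOW_THREADS                        /* PyEval_RestoreThread() on a
                                                       freed threadstate -> crash */
        PyGILState_Release(g);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        Py_Initialize();
        pthread_create(&t, NULL, other_thread, NULL);
        Py_BEGIN_ALLOW_THREADS
        sleep(1);                                   /* let the other thread run */
        Py_END_ALLOW_THREADS
        Py_FinalizeEx();                            /* frees all thread states */
        sleep(3);                                   /* keep the process alive long
                                                       enough for the crash to show */
        return 0;
    }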

So I see basically two options:

1. Don't (always) free threadstate structures in Py_Finalize, and figure out a way to avoid leaking them (if Python is re-initialized in the same process).

2. Ban this behaviour entirely, e.g. have Py_Finalize fail if there are live threads with threadstate objects.

The discussion so far assumes that we should support this, i.e. #1. Any thoughts on that? (I'll have a think about whether this is actually doable!)
msg357179 - (view) Author: Phil Connell (pconnell) * Date: 2019-11-21 16:02
The attached patch (wrap_threadstate.diff) is enough to stop the crash. It's a slightly dirty proof-of-concept, but equally could be the basis for a solution.

The main functional issue is that there's still a race on the Py_BLOCK_THREADS side: it's possible that the "is threadstate still valid" check can pass, but the interpreter is finalised while the daemon thread is trying to reacquire the GIL in PyEval_RestoreThread.

(The Py_UNBLOCK_THREADS side is non-racy as the GIL is held while the ts and wrapper updates are done).
msg358364 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-12-13 22:31
Sorry for the delay, Phil.  I'll try to take a look in the next couple of hours.
msg358367 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-12-14 00:08
I'm out of time and this deserves some careful discussion.  I'll get to it next Friday (or sooner if possible).  Sorry!
msg358748 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2019-12-20 23:46
Thanks for the detailed analysis, Phil.  I think the results are pretty conclusive: daemon threads are the worst. :)  But seriously, thanks.

As you demonstrated, it isn't just Python "daemon" threads that cause the problem.  It is essentially any external access of the C-API once runtime finalization has started.  The docs [1] aren't super clear about it, but there are some fundamental assumptions we make about runtime finalization:

* no use of the C-API while Py_FinalizeEx() is executing (except for a few helpers like Py_IsInitialized())
* only a small portion of the C-API is available afterward (at least until Py_Initialize() is run)

I guess the real question is what to do about this?

Given that this is essentially a separate problem, let's move further discussion and effort related to sorting out problematic threads over to #36476, "Runtime finalization assumes all other threads have exited."  @Phil, would you mind attaching those same two files to that issue?


[1] https://docs.python.org/3/c-api/init.html#c.Py_FinalizeEx
msg364670 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2020-03-20 14:20
I partially reimplemented commit ef4ac967e2f3a9a18330cc6abe14adb4bc3d0465 in the following issues:

* bpo-39984: Move some ceval fields from _PyRuntime.ceval to PyInterpreterState.ceval
* bpo-40010: Inefficient signal handling in multithreaded applications
* bpo-39877: Daemon thread is crashing in PyEval_RestoreThread() while the main thread is exiting the process
* bpo-37127: Handling pending calls during runtime finalization may cause problems.

I cannot give a list of commits, I made too many of them :-D


The most important change is:

commit 50e6e991781db761c496561a995541ca8d83ff87
Author: Victor Stinner <vstinner@python.org>
Date:   Thu Mar 19 02:41:21 2020 +0100

    bpo-39984: Move pending calls to PyInterpreterState (GH-19066)
    
    If Py_AddPendingCall() is called in a subinterpreter, the function is
    now scheduled to be called from the subinterpreter, rather than being
    called from the main interpreter.
    
    Each subinterpreter now has its own list of scheduled calls.
    
    * Move pending and eval_breaker fields from _PyRuntimeState.ceval
      to PyInterpreterState.ceval.
    * new_interpreter() now calls _PyEval_InitThreads() to create
      pending calls lock.
    * Fix Py_AddPendingCall() for subinterpreters. It now calls
      _PyThreadState_GET() which works in a subinterpreter if the
      caller holds the GIL, and only falls back on
      PyGILState_GetThisThreadState() if _PyThreadState_GET()
      returns NULL.
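
To make the new behaviour concrete, a minimal sketch of how a pending call could be used to run a DECREF in the owning interpreter might look like this (the helper names are made up for the example; only Py_AddPendingCall() and its int (*)(void *) callback signature are the real API):

    /* Hypothetical sketch: schedule a DECREF to run later in the owning
       interpreter's eval loop, with the GIL held.  With the commit above,
       a call made from a subinterpreter is queued on that subinterpreter
       rather than on the main interpreter. */
    #include <Python.h>

    static int decref_later(void *arg)
    {
        Py_DECREF((PyObject *)arg);
        return 0;                    /* 0 = success, -1 = report an error */
    }

    static void schedule_decref(PyObject *obj)
    {
        Py_INCREF(obj);              /* keep obj alive until the call runs */
        if (Py_AddPendingCall(decref_later, obj) < 0) {
            /* The pending-call queue was full; drop the extra reference
               here (only safe while holding this interpreter's GIL). */
            Py_DECREF(obj);
        }
    }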


My plan is now to fix pending calls in subinterpreters. Currently, they are only executed in the "main thread"... _PyRuntimeState.main_thread must be moved to PyInterpreterState, as it was done in commit ef4ac967e2f3a9a18330cc6abe14adb4bc3d0465. 

This issue shows that it's dangerous to change too many things at once. Python internals were still very fragile: bpo-39877 and bpo-37127 are good examples of that. I'm trying to move *very slowly*, moving pieces one by one.

I added _Py_ThreadCanHandlePendingCalls() function in commit d83168854e19d0381fa57db25fca6c622917624f (bpo-40010): it should ease moving _PyRuntimeState.main_thread to PyInterpreterState. I also added _Py_ThreadCanHandleSignals() in a previous commit, so it's easier to change which thread is allowed or not to handle signals and pending calls.

I also plan to revisit (removal) pending_calls.finalizing added by commit 842a2f07f2f08a935ef470bfdaeef40f87490cfc (bpo-33608). I plan to work on that in bpo-37127.
msg365160 - (view) Author: Mark Shannon (Mark.Shannon) * (Python committer) Date: 2020-03-27 16:10
Just to add my 2 cents.

I think this is a bad idea and is likely to be unsafe.
Having interpreters interfering with each other's objects is almost certain to lead to race conditions.

IMO, objects should *never* be shared across interpreters (once interpreters are expected to be able to run in parallel).

Any channel between two interpreters should consist of two objects, one per interpreter. The shared memory they use to communicate can be managed by the runtime. Likewise for inter-interpreter locks. The lock itself should be managed by the runtime, with each interpreter having its own lock object with a handle on the shared lock.
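
A purely illustrative sketch of that layout (none of these types exist in CPython; they only show the split between one end object per interpreter and runtime-owned shared state):

    #include <stddef.h>
    #include <stdint.h>
    #include <pthread.h>

    /* Owned by the runtime; holds no PyObject* from either interpreter. */
    typedef struct {
        pthread_mutex_t mutex;        /* shared lock, managed by the runtime */
        unsigned char   buffer[4096]; /* raw bytes exchanged between interpreters */
        size_t          used;
        int64_t         end_count;    /* how many per-interpreter ends exist */
    } _xchannel_state;

    /* One of these lives in *each* interpreter, as an ordinary object of that
       interpreter; it only holds a handle on the runtime-managed state. */
    typedef struct {
        /* PyObject_HEAD would go here in a real object */
        int64_t          interp_id;   /* the owning interpreter */
        _xchannel_state *shared;      /* handle on the runtime-owned state */
    } _xchannel_end;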
msg365163 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2020-03-27 16:24
FYI, in bpo-39984 Victor moved pending calls to PyInterpreterState, which was part of my reverted change.  However, there are a few other pieces of that change that need to be applied before this issue is resolved.  I'm not sure when I'll get to it, but hopefully soon.
msg365178 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2020-03-27 17:31
> Having interpreters interfering with each other's objects is almost certain to lead to race conditions.

To be honest, I don't understand the purpose of this issue. Some messages are talking about pending calls, some others are talking about channels, and a few are talking about "shared" objects.

Can someone either clarify the exact intent of this issue, or close this one and create a new issue with a clear explanation of what should be done and why?
msg366009 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2020-04-08 21:20
New changeset cfc3c2f8b34d3864717ab584c5b6c260014ba55a by Victor Stinner in branch 'master':
bpo-37127: Remove _pending_calls.finishing (GH-19439)
https://github.com/python/cpython/commit/cfc3c2f8b34d3864717ab584c5b6c260014ba55a
msg366013 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2020-04-08 21:37
I created bpo-40231: Fix pending calls in subinterpreters.
msg366016 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2020-04-08 21:48
Pavel Kostyuchenko:
> (...) The error seems to be caused not by those changes, but by lack of synchronization in the multiprocessing.managers.Server.

Pavel: would you mind opening a separate issue to suggest adding synchronization and/or avoiding daemon threads in multiprocessing?

The concurrent.futures module was recently modified to avoid daemon threads in bpo-39812.
msg366017 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2020-04-08 21:50
> FYI, after merging that PR I realized that the COMPUTE_EVAL_BREAKER macro isn't quite right.

I reworked this macro in bpo-40010.
msg366019 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2020-04-08 21:58
This issue has a long history. A change has been applied and then reverted three times in a row. Pending calls are now per-interpreter.

The issue title is "Add a cross-interpreter-safe mechanism to indicate that an object may be destroyed." but I don't understand if pending calls are expected to be used to communicate between two interpreters. Why not use a UNIX pipe and exchange bytes through it? Py_AddPendingCall() is a weird concept. I would prefer not to abuse it.

Moreover, it's unclear if this issue attempts to *share* the same object between two interpreters. I would prefer to avoid that in any possible way.

I close this issue with a complex history.

If someone wants to continue to work on this topic, please open an issue with a very clear description of what should be done and how it is supposed to be used.
msg366027 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2020-04-08 23:04
> I close this issue with a complex history.
>
> If someone wants to continue to work on this topic, please open an issue with a very clear description of what should be done and how it is supposed to be used.

Yeah, there is more to do.  I'll create a new issue.