
classification
Title: asyncio + multiprocessing = core dump in sem_trywait
Type: crash
Stage: resolved
Components: asyncio
Versions: Python 3.9

process
Status: closed
Resolution: not a bug
Nosy List: Andrei Pozolotin, asvetlov, knl, yselivanov
Priority: normal

Created on 2021-04-13 15:50 by Andrei Pozolotin, last changed 2022-04-11 14:59 by admin. This issue is now closed.

Files
File name      Uploaded                            Description
dump_core.py   Andrei Pozolotin, 2021-04-13 15:50  script to reproduce core dump
dump_core.txt  Andrei Pozolotin, 2021-04-13 15:51  stack trace from core dump
Messages (3)
msg390973 - Author: Andrei Pozolotin (Andrei Pozolotin) Date: 2021-04-13 15:50
Every attempt to touch multiprocessing.Event.is_set() from within asyncio.run() results in a reproducible core dump in sem_trywait.

system: Linux 5.11.8

Linux work3 5.11.8-arch1-1 #1 SMP PREEMPT Sun, 21 Mar 2021 01:55:51 +0000 x86_64 GNU/Linux

Python: 3.9.2

https://archlinux.org/packages/extra/x86_64/python/
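
The attached dump_core.py is not reproduced on this page; the following is a hypothetical reconstruction of the kind of script that triggers the crash, assuming the names visible in the patch in msg396740 (master_func, this_event, this_space, context_spawn, master_proc); the coroutine name async_main is invented:

```
#!/usr/bin/env python
# Hypothetical reconstruction of dump_core.py (not the attached file):
# an Event from the default (fork) context is handed to a spawn-context
# child, which touches it from inside asyncio.run().

import asyncio
import multiprocessing


async def async_main(space: dict) -> None:
    this_event = space["this_event"]
    # Crashes here with SIGSEGV in sem_trywait
    print(f"Child: {this_event.is_set() = }")


def master_func(space: dict) -> None:
    asyncio.run(async_main(space))


if __name__ == "__main__":
    this_event = multiprocessing.Event()  # default context: fork on Linux
    this_space = dict(
        this_event=this_event,
    )

    context_spawn = multiprocessing.get_context("spawn")
    master_proc = context_spawn.Process(
        target=master_func,
        args=(this_space,),
    )
    master_proc.start()
    master_proc.join()
```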
msg396734 - Author: Kunal (knl) Date: 2021-06-29 14:29
I was able to reproduce the problem, and can get the core dump to happen with a smaller program as well -- it doesn't seem related to asyncio specifically, but rather looks like a bug in multiprocessing.Event (or, more likely, in the primitives inside it).

```
#!/usr/bin/env python

import time
import multiprocessing


def master_func(event) -> None:
    print(f"Child: {event = }")
    print(f"Child: {event.is_set() = }") # Crashes here with SIGSEGV in sem_trywait
    print("Completed")


if __name__ == "__main__":
    event = multiprocessing.Event()
    context_spawn = multiprocessing.get_context("spawn")
    proc = context_spawn.Process(target=master_func, args=(event, ))
    proc.start()
    print(f"Parent: {event = }")
    print(f"Parent: {event.is_set() = }")
    proc.join()
```

Switching to fork instead of spawn bypasses the issue (a sketch of that variant follows). Trying to dig into this a little bit more.
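
A minimal sketch of the fork variant, with hypothetical names (child, ctx); note the fork start method is only available on POSIX systems:

```
#!/usr/bin/env python
# Hypothetical fork variant: the Event and the child process share the
# same fork context, so event.is_set() works instead of crashing.

import multiprocessing


def child(event) -> None:
    print(f"Child: {event.is_set() = }")  # no crash: contexts match


if __name__ == "__main__":
    ctx = multiprocessing.get_context("fork")
    event = ctx.Event()
    proc = ctx.Process(target=child, args=(event,))
    proc.start()
    proc.join()
```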
msg396740 - Author: Kunal (knl) Date: 2021-06-29 14:48
Oh, it looks like it's because the context of the Event doesn't match the context of the process when created this way (the default context on Linux is fork).

The problem goes away if the Event instance is created from the spawn context -- specifically, patching dump_core.py:

```
@@ -22,14 +22,13 @@ def master_func(space:dict) -> None:

 if __name__ == "__main__":

-    this_event = multiprocessing.Event()
+    context_spawn = multiprocessing.get_context("spawn")

+    this_event = context_spawn.Event()
     this_space = dict(
         this_event=this_event,
     )

-    context_spawn = multiprocessing.get_context("spawn")
-
     master_proc = context_spawn.Process(
```

I think this can be closed; the incompatibility between contexts is also documented at https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Lock, though it could be a little more explicit about potential segfaults.

> Note that objects related to one context may not be compatible with processes for a different context. In particular, locks created using the fork context cannot be passed to processes started using the spawn or forkserver start methods.

(Event uses a Lock under the hood)
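
For completeness, a sketch of the smaller repro from msg396734 with the same fix applied (hypothetical, not attached to the issue):

```
#!/usr/bin/env python
# Fixed variant of the msg396734 repro: the Event is created from the
# same spawn context used to start the child, so the underlying
# semaphore is valid in the child process.

import multiprocessing


def master_func(event) -> None:
    print(f"Child: {event = }")
    print(f"Child: {event.is_set() = }")  # no longer crashes
    print("Completed")


if __name__ == "__main__":
    context_spawn = multiprocessing.get_context("spawn")
    event = context_spawn.Event()
    proc = context_spawn.Process(target=master_func, args=(event,))
    proc.start()
    print(f"Parent: {event.is_set() = }")
    proc.join()
```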
History
Date                 User              Action  Args
2022-04-11 14:59:44  admin             set     github: 87998
2021-07-02 09:46:31  asvetlov          set     status: open -> closed
                                               resolution: not a bug
                                               stage: resolved
2021-06-29 14:48:36  knl               set     messages: + msg396740
2021-06-29 14:29:49  knl               set     nosy: + knl
                                               messages: + msg396734
2021-04-13 15:51:47  Andrei Pozolotin  set     files: + dump_core.txt
2021-04-13 15:50:47  Andrei Pozolotin  create