
Classification
Title: resource tracker destroys shared memory segments when other processes should still have valid access
Type: behavior
Stage: patch review
Components: Library (Lib)
Versions: Python 3.10, Python 3.9, Python 3.8

Process
Status: open
Resolution:
Dependencies:
Superseder:
Assigned To: davin
Nosy List: damian.barabonkov, davfelsen, davin, jdogzz-g5, keven425, maggyero, pablogsal, pitrou, steve.newcomb, timka, turicas, vinay0410
Priority: normal
Keywords: patch

Created on 2019-09-11 15:58 by davin, last changed 2022-04-11 14:59 by admin.

Files
File name            Uploaded
mprt_monkeypatch.py  turicas, 2021-03-08 19:22

Pull Requests
URL       Status  Linked
PR 15989  open    davin, 2019-09-11 16:20
PR 21516  open    vinay0410, 2020-07-17 08:47
PR 23174  open    vinay0410, 2020-11-06 07:45
Messages (14)
msg351960 - Author: Davin Potts (davin) * (Python committer) Date: 2019-09-11 15:58
The resource tracker currently destroys (via _posixshmem.shm_unlink) shared memory segments on POSIX systems when any independently created Python process with a handle on a shared memory segment exits (gracefully or otherwise).  This breaks the expected cross-platform behavior that a shared memory segment persists at least as long as any running process has a handle on that segment.

As described with an example scenario in issue37754:
Let's say three processes P1, P2 and P3 are trying to communicate using shared memory.
 --> P1 creates the shared memory block and waits for P2 and P3 to access it.
 --> P2 starts and attaches to this shared memory segment, writes some data to it and exits.
 --> Now, in the case of Unix, shm_unlink is called as soon as P2 exits. (This is by action of the resource tracker.)
 --> Now, P3 starts and tries to attach to the shared memory segment.
 --> P3 will not be able to attach to the shared memory segment on Unix, because shm_unlink has been called on that segment.
 --> Whereas P3 will be able to attach to the shared memory segment on Windows.
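
A minimal sketch reproducing this scenario on an affected POSIX build (P2 is simulated here with an independently started interpreter via subprocess; names and sizes are illustrative):

import subprocess
import sys
from multiprocessing import shared_memory

# P1: create the segment and keep a handle on it.
segment = shared_memory.SharedMemory(create=True, size=16)

# P2: an independently started interpreter attaches, writes, and exits
# cleanly.  On exit, P2's resource tracker calls shm_unlink on the segment.
child_code = (
    "from multiprocessing import shared_memory; "
    f"s = shared_memory.SharedMemory(name={segment.name!r}); "
    "s.buf[:2] = b'hi'; "
    "s.close()"
)
subprocess.run([sys.executable, "-c", child_code], check=True)

# P3: attaching by name now fails on Linux/macOS, even though P1 still
# holds a valid handle; on Windows the attach would succeed.
try:
    shared_memory.SharedMemory(name=segment.name)
except FileNotFoundError:
    print("segment already unlinked")
segment.close()  # the name is already gone, so unlink() would also fail here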

Another key scenario we expect to work but does not currently:
1. A multiprocessing.managers.SharedMemoryManager is instantiated and started in process A.
2. A shared memory segment is created using that manager in process A.
3. A serialized representation of that shared memory segment is deserialized in process B.
4. Process B does work with the shared memory segment that is also still visible to process A.
5. Process B exits cleanly.
6. Process A reads data from the shared memory segment after process B is gone.  (This currently fails.)
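
The same failure can be sketched with the manager (again simulating process B with an independent interpreter; this is an illustration of the steps above, not the exact repro from the report):

import subprocess
import sys
from multiprocessing import shared_memory
from multiprocessing.managers import SharedMemoryManager

with SharedMemoryManager() as smm:       # step 1: manager started in process A
    shm = smm.SharedMemory(size=16)      # step 2: segment created via the manager
    # Steps 3-5: process B attaches by name, writes, and exits cleanly.
    child_code = (
        "from multiprocessing import shared_memory; "
        f"s = shared_memory.SharedMemory(name={shm.name!r}); "
        "s.buf[:4] = b'data'; "
        "s.close()"
    )
    subprocess.run([sys.executable, "-c", child_code], check=True)
    # Step 6: on an affected POSIX build, B's resource tracker has already
    # unlinked the name, so a fresh attach fails here even though A and the
    # manager still consider the segment alive.
    shared_memory.SharedMemory(name=shm.name)  # raises FileNotFoundError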

The SharedMemoryManager provides a flexible means of ensuring cleanup of shared memory segments.  The current resource tracker attempts to treat shared memory segments as equivalent to semaphore references, which is too narrow an interpretation.  As such, the resource tracker should not attempt to enforce cleanup of shared memory segments, because doing so breaks expected behavior and significantly limits functionality.
msg352050 - Author: Vinay Sharma (vinay0410) * Date: 2019-09-12 05:11
Hi Davin,
This PR would fix the issues you mention by not prematurely unlinking the shared memory segment, and it would therefore make shared memory useful in a lot of use cases.

But, this would still not make Unix's implementation consistent with Windows.
Windows uses a reference counting mechanism to count the number of processes using a shared memory segment. When all of them are done using it, Windows simply unlinks and frees the memory allocated to the shared memory segment.

I know that you already know this. I am commenting to find out what the next steps would be to fix the above inconsistency. You could see my last comment (msg351445) in issue37754, where I have listed some ways to implement the above reference counting mechanism.

If you could have a look and see which one would be the best way, I would be happy to make a PR for it.
msg374434 - Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2020-07-27 22:45
@Davin, could you merge one or the other of the PRs that fix this? Presumably also backport to 3.9 and 3.8 (but that's up to you and the release manager).
msg374939 - Author: Damian Barabonkov (damian.barabonkov) Date: 2020-08-06 15:52
As per Guido's comment (https://github.com/python/cpython/pull/21516#issuecomment-668110711), I'm going to use this space to discuss ways to go forward with resource tracking and SharedMemory.

Taking inspiration from Vinay (https://bugs.python.org/issue37754#msg351445), I think the simplest and best way forward is to use a small section of the shared memory at the start as a reference counter. 

Every time a process latches onto a shared memory block, it does an atomic increment of the reference counter. If it detaches, it does an atomic decrement. These atomic operations are available in C via hardware-specific instructions. This would require modifying the Python C code posixshmem.c. It should not be a difficult change.

This would then change the SharedMemory API such that a call to `close()` could check the reference count at the end, and automatically unlink if it reaches 0. Basically, the purpose of the explicit `unlink()` call is dissolved.

I think this would also play nicely with the current implementation of the `resource_tracker`. A small change would be needed so that it calls `close()` instead of `unlink()` as the cleanup function. Nonetheless, it would keep track of whether all attachments of shared memory call `close()` at the end, which they should, and issue a warning if they do not. It would do this with the current code, no need to change anything.
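
A conceptual sketch of the idea in Python (the actual proposal would use atomic instructions in posixshmem.c; here a multiprocessing.Lock shared by the participating processes stands in for the atomic increment/decrement, and all names are illustrative):

import struct
from multiprocessing.shared_memory import SharedMemory

HEADER = 8  # bytes reserved at the start of the segment for the count

def _add_ref(shm, delta, lock):
    # Stand-in for the single atomic fetch-and-add the proposal describes.
    with lock:
        count = struct.unpack_from("q", shm.buf, 0)[0] + delta
        struct.pack_into("q", shm.buf, 0, count)
    return count

def create(size, lock):
    # Segment starts zero-filled; the creator takes the first reference.
    shm = SharedMemory(create=True, size=HEADER + size)
    _add_ref(shm, +1, lock)
    return shm

def attach(name, lock):
    shm = SharedMemory(name=name)
    _add_ref(shm, +1, lock)
    return shm

def detach(shm, lock):
    # The last user unlinks, matching Windows semantics, so an explicit
    # unlink() call is no longer needed.
    if _add_ref(shm, -1, lock) == 0:
        shm.unlink()
    shm.close()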
msg374944 - Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2020-08-06 16:36
I recommend bringing this new proposal up on python-dev or python-ideas, to get more eyeballs on the ideas before attempting implementation. One immediate worry I have is that if the reference counter is maintained in the shared memory segment, every process has to participate, and if a process crashes (segfaults) the shared refcount will be forever wrong and the segment will leak. Distributed GC is hard!
msg374951 - Author: Vinay Sharma (vinay0410) * Date: 2020-08-06 17:38
That's a valid point, Guido. But I guess this can be easily handled by the resource tracker. At the moment, the resource tracker unlinks shared memory if the process which created it dies without unlinking it.

Therefore, resource tracker handles cleaning up resources which the respective processes couldn't or didn't do.

So, instead of unlinking the shared memory segment, the resource tracker can decrement the reference count if the process failed to do so, and unlink the shared memory segment if the reference count becomes 0.

This approach will ensure that even if the respective processes died unexpectedly, there are no leaks.
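
Sketching that on top of the refcount example in msg374939 (hypothetical; the real tracker only knows resource names, so it would have to re-attach before decrementing):

from multiprocessing.shared_memory import SharedMemory

def cleanup_shared_memory(name, lock):
    # What the resource tracker would run for a process that died without
    # detaching: drop that process's reference instead of unlinking outright.
    shm = SharedMemory(name=name)
    if _add_ref(shm, -1, lock) == 0:  # _add_ref from the sketch in msg374939
        shm.unlink()
    shm.close()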
msg374954 - Author: Damian Barabonkov (damian.barabonkov) Date: 2020-08-06 18:18
Unless the resource_tracker also dies along with the process, in which case I'm not sure what there is to do.

I believe the resource_tracker actually spawns a process alongside the process that uses it. So if the parent process seg-faults, the resource_tracker should still be alive.
msg374956 - Author: Vinay Sharma (vinay0410) * Date: 2020-08-06 18:24
Well, the chances of the resource tracker dying abruptly are very low, because it's thoroughly tested, and there are mechanisms to re-spawn the resource_tracker process if you look at the code: there is a function called `ensure_running`.

The resource tracker stays alive even if the process for which it was created dies. It also handles cleaning up shared semaphores. So, I guess this is something we can rely on for cleaning things up, because at the end of the day that's what it was made for.
msg374958 - Author: Damian Barabonkov (damian.barabonkov) Date: 2020-08-06 18:32
Agreed.
msg374974 - Author: Vinay Sharma (vinay0410) * Date: 2020-08-07 05:34
As suggested by Guido, I have floated this solution on the python-dev mailing list.
Link to archive: https://mail.python.org/archives/list/python-dev@python.org/thread/O67CR7QWEOJ7WDAJEBKSY74NQ2C4W3AI/
msg387606 - Author: keven wang (keven425) Date: 2021-02-24 04:57
Agree w/ the PR here to remove resource tracker unlinking as a quick fix: https://github.com/python/cpython/pull/15989

This will at least make the unlink behavior more controllable, which is not the case currently (on macOS and Linux).

Would love to have this merged.
msg388287 - Author: Álvaro Justen (turicas) Date: 2021-03-08 19:22
Based on the changes at https://github.com/python/cpython/pull/15989 I've monkey-patched `multiprocessing.resource_tracker` so my current applications (running on Python 3.9.2) won't be affected. The code may be useful to others until the PR is merged and a new release is out; you just need to call the `remove_shm_from_resource_tracker` function inside each Process target function.

----- >8 -----
from multiprocessing import resource_tracker


def remove_shm_from_resource_tracker():
    """Monkey-patch multiprocessing.resource_tracker so SharedMemory won't be tracked

    More details at: https://bugs.python.org/issue38119
    """

    def fix_register(name, rtype):
        # Skip tracking for shared memory segments; delegate everything else.
        if rtype == "shared_memory":
            return
        return resource_tracker._resource_tracker.register(name, rtype)
    resource_tracker.register = fix_register

    def fix_unregister(name, rtype):
        # Skip tracking for shared memory segments; delegate everything else.
        if rtype == "shared_memory":
            return
        return resource_tracker._resource_tracker.unregister(name, rtype)
    resource_tracker.unregister = fix_unregister

    if "shared_memory" in resource_tracker._CLEANUP_FUNCS:
        del resource_tracker._CLEANUP_FUNCS["shared_memory"]
----- 8< -----

There's a working example in the attached file `mprt_monkeypatch.py` (if you comment out the function calls on lines 28 and 37, the warnings will be shown).
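
For reference, usage looks roughly like this (an illustrative sketch, not the attached file; it assumes `remove_shm_from_resource_tracker` from above is defined in the same module):

from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory

def work(name):
    remove_shm_from_resource_tracker()  # apply the patch inside the child
    shm = SharedMemory(name=name)       # attaching no longer registers the segment
    shm.buf[:5] = b"hello"
    shm.close()

if __name__ == "__main__":
    shm = SharedMemory(create=True, size=16)
    p = Process(target=work, args=(shm.name,))
    p.start()
    p.join()
    print(bytes(shm.buf[:5]))  # the segment still exists after the child exited
    shm.close()
    shm.unlink()               # explicit cleanup remains the caller's job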
msg390198 - Author: Steve Newcomb (steve.newcomb) * Date: 2021-04-04 16:06
Sometimes a leak is exactly what's wanted, i.e. a standing block of shared memory that lets sharing processes come and go ad libitum.  I mention this because I haven't seen anyone state it explicitly.

While turicas's monkeypatch covers the use case in which a manually named block of shared memory is intended to remain indefinitely, it would be best if future versions of shared_memory supported such a use case directly.  It shouldn't be necessary to monkeypatch in order to have a standing block of shared memory remain ready and waiting even when nobody's using it.
msg412440 - Author: (timka) Date: 2022-02-03 12:44
In the long run, maybe this could solve some win32api problems?

https://devblogs.microsoft.com/commandline/af_unix-comes-to-windows/
History
Date                 User               Action  Args
2022-04-11 14:59:20  admin              set     github: 82300
2022-02-10 20:53:56  maggyero           set     nosy: + maggyero
2022-02-03 12:44:03  timka              set     nosy: + timka; messages: + msg412440
2021-12-15 19:51:30  jdogzz-g5          set     nosy: + jdogzz-g5
2021-04-21 21:18:21  davfelsen          set     nosy: + davfelsen
2021-04-04 16:06:28  steve.newcomb      set     nosy: + steve.newcomb; messages: + msg390198
2021-03-08 19:22:16  turicas            set     files: + mprt_monkeypatch.py; nosy: + turicas; messages: + msg388287
2021-02-24 04:57:21  keven425           set     nosy: + keven425; messages: + msg387606
2020-11-06 17:24:04  gvanrossum         set     nosy: - gvanrossum
2020-11-06 07:45:55  vinay0410          set     pull_requests: + pull_request22087
2020-08-07 05:34:51  vinay0410          set     messages: + msg374974
2020-08-06 18:32:27  damian.barabonkov  set     messages: + msg374958
2020-08-06 18:24:36  vinay0410          set     messages: + msg374956
2020-08-06 18:18:30  damian.barabonkov  set     messages: + msg374954
2020-08-06 17:38:36  vinay0410          set     messages: + msg374951
2020-08-06 16:36:04  gvanrossum         set     messages: + msg374944
2020-08-06 15:52:24  damian.barabonkov  set     nosy: + damian.barabonkov; messages: + msg374939
2020-08-03 23:30:50  vstinner           set     nosy: - vstinner
2020-07-27 22:45:29  gvanrossum         set     nosy: + gvanrossum; messages: + msg374434; versions: + Python 3.10
2020-07-27 22:41:33  gvanrossum         link    issue39959 superseder
2020-07-17 08:47:49  vinay0410          set     pull_requests: + pull_request20652
2019-09-12 05:11:37  vinay0410          set     messages: + msg352050
2019-09-11 16:20:54  davin              set     keywords: + patch; stage: patch review; pull_requests: + pull_request15618
2019-09-11 15:58:26  davin              create