
Classification
Title: multiprocessing documentation should note behavior when process with Lock dies
Type: enhancement
Stage:
Components: Library (Lib)
Versions: Python 3.11, Python 3.10, Python 3.9, Python 3.8, Python 3.7, Python 3.6

Process
Status: open
Resolution:
Dependencies:
Superseder:
Assigned To:
Nosy List: kushal-kumaran
Priority: normal
Keywords:

Created on 2021-05-14 22:10 by kushal-kumaran, last changed 2022-04-11 14:59 by admin.

Messages (1)
msg393695 - (view) Author: Kushal Kumaran (kushal-kumaran) Date: 2021-05-14 22:10
If a process holding a multiprocessing.Lock dies, other waiters on the lock will be stuck.  This is mostly the same as with threads, but thread users can usually avoid it with careful coding (handling errors, etc.), and a serious crash will normally take down the entire process, including all waiting threads, anyway.  With multiprocessing, a process holding the lock can be SIGKILLed with no recourse for the waiters.

A simple program demonstrating the problem:
```
import multiprocessing
import os
import signal

def f(lk):
    my_pid = os.getpid()
    print("PID {} going to wait".format(my_pid))
    # Block until the lock is acquired, then die while still holding it.
    with lk:
        print("PID {} got the lock".format(my_pid))
        os.kill(my_pid, signal.SIGKILL)

if __name__ == '__main__':
    # Pass the lock to the children explicitly so the demo also works with
    # the "spawn" start method, where module globals are not shared.
    lk = multiprocessing.Lock()
    for i in range(5):
        multiprocessing.Process(target=f, args=(lk,)).start()
```

Running this, one of the processes acquires the lock and dies; the other processes wait forever.

The reason for this behavior is clear from the implementation, which uses POSIX semaphores: a POSIX semaphore has no notion of an owning process, so the kernel does not release it when the holder dies.  (I don't know how the win32 implementation behaves.)

I don't think the behavior can be changed, since releasing the lock on a process crash could leave the other processes dealing with unexpected, half-modified state.  A note in the documentation for the multiprocessing module is all I could think of.  I don't see a way to use multiprocessing.Lock safely in the face of process crashes.  If someone has a scenario where they can guarantee data consistency across a process crash, they should use an alternative mechanism such as file-based locking, which the OS releases automatically when the holding process exits.
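
A minimal sketch of the file-based alternative mentioned above, assuming a Unix-like system where fcntl.flock is available; the lock-file path is a hypothetical choice for illustration.  The kernel drops a flock lock when the holding process terminates, so the remaining waiters are not stuck:
```
import fcntl
import multiprocessing
import os
import signal

LOCKFILE = "/tmp/demo.lock"  # hypothetical path, used only for this sketch

def f():
    my_pid = os.getpid()
    print("PID {} going to wait".format(my_pid))
    # Each process opens the same file and blocks in flock() until it can
    # take an exclusive lock on it.
    with open(LOCKFILE, "w") as fp:
        fcntl.flock(fp, fcntl.LOCK_EX)
        print("PID {} got the lock".format(my_pid))
        # Even if the process dies here, the kernel releases the lock and
        # the next waiter proceeds.
        os.kill(my_pid, signal.SIGKILL)

if __name__ == '__main__':
    for i in range(5):
        multiprocessing.Process(target=f).start()
```
Running this, every worker gets the lock in turn even though each previous holder was killed while holding it, unlike the multiprocessing.Lock version above.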
History
Date User Action Args
2022-04-11 14:59:45	admin	set	github: 88304
2021-05-14 22:10:29	kushal-kumaran	create