Message393695
If a process holding a multiprocessing.Lock dies, other waiters on the lock will be stuck forever. The situation is broadly the same with threads, but users of threads can usually avoid it with careful coding (handling exceptions, releasing in finally blocks, etc.), and a crash serious enough to kill a thread normally takes down the whole process anyway. With multiprocessing, a process can be SIGKILLed while holding the lock, with no chance to release it.
A simple program demonstrating the problem:
```
import multiprocessing
import os
import signal

lk = multiprocessing.Lock()

def f():
    my_pid = os.getpid()
    print("PID {} going to wait".format(my_pid))
    with lk:
        print("PID {} got the lock".format(my_pid))
        # Kill this process while it still holds the lock; the lock
        # is never released, so the remaining waiters block forever.
        os.kill(my_pid, signal.SIGKILL)

if __name__ == '__main__':
    for i in range(5):
        multiprocessing.Process(target=f).start()
```
Running this, one of the processes acquires the lock and dies while holding it; the remaining processes wait forever.
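This doesn't recover the lost lock, but a waiter can at least detect the hang instead of blocking forever: acquire() accepts a timeout and returns False on failure. A minimal sketch (the 5-second timeout and the worker name are arbitrary choices for illustration):
```
import multiprocessing

def worker(lock):
    # acquire() takes an optional timeout and returns False if the
    # lock was not acquired, so the waiter can give up and report
    # the problem rather than hang indefinitely.
    if lock.acquire(timeout=5):
        try:
            pass  # critical section
        finally:
            lock.release()
    else:
        print("gave up waiting; the lock holder may have died")

if __name__ == '__main__':
    lk = multiprocessing.Lock()
    multiprocessing.Process(target=worker, args=(lk,)).start()
```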
The reason for this behavior is clear from the implementation, which uses POSIX semaphores: a POSIX semaphore has no notion of an owner, so the kernel has no way to release it when the holding process dies (unlike, say, a robust pthread mutex). I don't know how the win32 implementation behaves.
I don't think the behavior can be changed: releasing the lock automatically on process crash could leave the other waiters dealing with whatever inconsistent state the dead process left behind. A note in the documentation for the multiprocessing module is all I could think of. I don't see a way to use multiprocessing.Lock safely in the face of process crashes. If someone has a scenario where they can guarantee their data's consistency across a process crash, they should use an alternative mechanism such as file-based locking, as sketched below.
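For illustration, a minimal sketch of the file-based approach using fcntl.flock (POSIX-only; the /tmp/demo.lock path is an arbitrary choice). The kernel releases an flock automatically when the holder's file descriptor is closed, including on SIGKILL, so each remaining process gets its turn:
```
import fcntl
import multiprocessing
import os
import signal

LOCKFILE = "/tmp/demo.lock"  # arbitrary path for this demo

def f():
    my_pid = os.getpid()
    print("PID {} going to wait".format(my_pid))
    with open(LOCKFILE, "w") as fp:
        fcntl.flock(fp, fcntl.LOCK_EX)  # blocks until the lock is free
        print("PID {} got the lock".format(my_pid))
        # The kernel releases the flock when this process dies and its
        # file descriptor is closed, so the other processes do not hang.
        os.kill(my_pid, signal.SIGKILL)

if __name__ == '__main__':
    for i in range(5):
        multiprocessing.Process(target=f).start()
```
Unlike the multiprocessing.Lock version, running this lets every process acquire the lock and die in turn, because the lock's lifetime is tied to the file descriptor rather than to an explicit release.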