Locks broken wrt timeouts on Windows #55827
In thread_nt.h, when the WaitForSingleObject() call in EnterNonRecursiveMutex() fails with a timeout, the lock's bookkeeping is left in an inconsistent state. Note that the first line of EnterNonRecursiveMutex() is a comment assuming that the wait always succeeds; allowing EnterNonRecursiveMutex() to fail with a timeout obviously invalidates that assumption.
The following Windows session demonstrates unexpected behaviour:

Python 3.3a0 (default, Mar 19 2011, 18:16:48) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import threading
>>> l = threading.Lock()
>>> l.acquire()
True
>>> l.acquire(timeout=1)
False
>>> l.release()
>>> l.locked() # should return False
True
>>> l.acquire(blocking=False) # should return True
False

Also, after a timeout, uncontended acquires/releases always take the slow path:

D:\Repos\cpython\PCbuild>python -m timeit ^
D:\Repos\cpython\PCbuild>python -m timeit ^

A unit test is attached which passes on Linux but has three failures on Windows.

The "owned" field of NRMUTEX is a count of the number of threads that either own the mutex or are waiting to own it, minus one. The obvious fix is to decrement mutex->owned when a timeout occurs. I also notice that EnterNonRecursiveMutex() wrongly sets mutex->thread_id even when the acquire fails.

BTW, only thread_pthread.h and thread_nt.h have implementations of PyThread_acquire_lock_timed().
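The broken bookkeeping can be modelled in pure Python. This is only an illustrative sketch of the counting scheme, not the real C code; the class name and its fields are invented here for the demonstration:

```python
import threading

class BrokenNRMutex:
    """Toy model of the buggy NRMUTEX bookkeeping: 'owned' is -1 when the
    lock is free, and a timed-out waiter forgets to undo its increment."""

    def __init__(self):
        self.owned = -1          # -1 means "free"
        self.event = threading.Event()

    def acquire(self, timeout=None):
        self.owned += 1          # the C code does this atomically
        if self.owned == 0:      # we were first: lock acquired
            return True
        if self.event.wait(timeout):  # wait for a release
            self.event.clear()
            return True
        # BUG: on timeout we return without decrementing self.owned
        return False

    def release(self):
        self.owned -= 1
        if self.owned >= 0:      # someone appears to be waiting
            self.event.set()

    def locked(self):
        return self.owned >= 0

m = BrokenNRMutex()
m.acquire()                      # owned: -1 -> 0, acquired
m.acquire(timeout=0.01)          # owned: 0 -> 1, times out, never undone
m.release()                      # owned: 1 -> 0, still looks held
print(m.locked())                # True, although nobody holds the lock
```

This reproduces the session above: after a timed-out acquire followed by a release, the lock still reports itself as held.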
True, I think we can remove it.
Getting rid of them is scheduled for 3.3.
First stab at a fix. Gets rid of mutex->thread_id and adds a mutex->timeouts counter. Does not try to prevent mutex->owned from overflowing. When no timeouts have occurred I don't think it changes behaviour, and it uses the same number of Interlocked functions.
Well, Windows 2000 has semaphores, so why not use them? It makes the code much simpler. Patch attached (including test).
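The simplicity of the semaphore approach is easy to see in miniature. Here is the idea as a Python sketch rather than the actual C patch (SemLock is an invented name; the real change is to thread_nt.h):

```python
import threading

class SemLock:
    """Non-recursive lock built on a binary semaphore: one token means
    "the lock is free".  Sketch of the semaphore-based approach."""

    def __init__(self):
        self._sem = threading.Semaphore(1)

    def acquire(self, blocking=True, timeout=-1):
        if not blocking:
            return self._sem.acquire(blocking=False)
        if timeout < 0:
            return self._sem.acquire()          # wait forever
        return self._sem.acquire(timeout=timeout)

    def release(self):
        self._sem.release()

l = SemLock()
assert l.acquire()
assert not l.acquire(timeout=0.01)   # times out; state stays consistent
l.release()
assert l.acquire(blocking=False)     # now succeeds, as it should
l.release()
```

A timed-out waiter simply returns from the semaphore wait without touching any shared counters, so there is no inconsistent intermediate state to reason about.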
Have you tried benchmarking it? Interlocked functions are *much* faster than Win32 mutexes/semaphores in the uncontended case. It only doubles the time taken for a "l.acquire(); l.release()" loop in Python code, but at the C level it is probably 10 times slower. Do you really want the GIL to be 10 times slower in the uncontended case? ;-)
Well, I'd rather have obviously correct code than marginally faster code. The patch I've posted takes less than a microsecond per acquire/release pair.

The GIL doesn't use these functions (see ceval_gil.h).
Interestingly, it used to be a Semaphore up to [5e6e9e893acd]; in [cde4da18c4fa], Yakov Markovitch rewrote this to be the faster implementation we have today.
At that time, the PyThread_* functions were still in use by the GIL.
Hmm. And if some application uses thread.lock heavily, won't it still take a performance hit?
An acquire/release pair is less than one microsecond here. Compared to the cost of the surrounding bytecode execution, that seems acceptable.
Yes, the race condition with the timeout is a problem.
I don't understand why you need something that complicated. A simple semaphore-based version should be enough.
I'm just providing this as a fast alternative to the Semaphore, which as far as I know causes a kernel call every time. Complicated is relative. In terms of the condition variable API, I wouldn't say that it is. But given the fact that we have to emulate condition variables on older Windows, then yes, it is complex. If we are rolling our own instead of using Semaphores (as has been suggested for performance reasons), then using a condition variable is IMHO safer than a custom solution, because the correctness of that approach is so easily provable.
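The "easily provable" claim is visible in miniature if the condition-variable formulation is written out in Python (the actual patch is C against the pthread_cond_* emulation; this hypothetical CVLock just shows the shape of the argument):

```python
import threading

class CVLock:
    """Timed non-recursive lock expressed with a condition variable — a
    Python sketch of the approach, not the actual C patch.  The invariant
    is trivial: 'held' is only ever changed while the mutex is held."""

    def __init__(self):
        self._cond = threading.Condition()
        self._held = False

    def acquire(self, timeout=None):
        with self._cond:
            # wait_for re-checks the predicate after every wakeup, so a
            # timed-out waiter can never corrupt the lock's state.
            if not self._cond.wait_for(lambda: not self._held, timeout):
                return False
            self._held = True
            return True

    def release(self):
        with self._cond:
            self._held = False
            self._cond.notify()

l = CVLock()
assert l.acquire()
assert not l.acquire(timeout=0.01)   # timeout leaves 'held' untouched
l.release()
assert l.acquire(timeout=0)          # immediately available again
l.release()
```

Because every state change happens under the condition's mutex, a timeout is just a failed predicate check: there is no counter to unwind and no race window to argue about.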
A Semaphore might be "slow", but I'm not sure other primitives are significantly faster. Have you timed your solution?
Assuming that you trust the implementation of condition variables, then I agree. Unfortunately, implementing condition variables correctly on Windows is notoriously difficult. The patch contains the lines

+ Generic emulations of the pthread_cond_* API using

Apparently all the examples from that web page are faulty one way or another. http://newsgroups.derkeiler.com/Archive/Comp/comp.programming.threads/2008-07/msg00025.html contains the following quote:
pthreads-w32 used to use a solution depending on that paper but changed to something else. The following is a long but relevant read: ftp://sourceware.org/pub/pthreads-win32/sources/pthreads-w32-2-8-0-release/README.CV

Of course, implementing condition variables is a whole lot easier if you don't need to broadcast and you only need weak guarantees on the behaviour, so Python's implementation may be quite sufficient. (It does appear that a thread which calls COND_SIGNAL() may consume that signal with a later call of COND_WAIT(). A "proper" implementation should never allow that, because it can cause deadlocks in code depending on normal pthread semantics.)
Emulating condition variables on Windows became easy once Semaphores were provided by the OS, because they provide a way around the lost-wakeup problem. The current implementation in CPython was submitted by me :) The source material is provided for reference only.
Benchmarks (on an old laptop running XP without a VM) doing:

D:\Repos\cpython\PCbuild>python -m timeit -s "from threading import Lock; l = Lock()" "l.acquire(); l.release()"

default: 0.934
Btw, the locktimeout.patch appears to have a race condition. LeaveNonRecursiveMutex() may SetEvent() when there is no thread waiting (because a timeout just occurred, but the thread on which it happened is still somewhere around line #62). This will cause the next WaitForSingleObject() to succeed when it shouldn't. It is this race, between the timeout occurring and our ability to register that in the lock's bookkeeping, that is the source of all the race problems with the timeout. This is what prompted me to submit the condition variable version.
Just for the record, here is the critical-section-based version. I would still favour committing the semaphore-based version first (especially in 3.2), and then discussing performance improvements if desired.
I believe the lock is still in a consistent state. If this race happens and SetEvent() is called, then we must have mutex->owned > -1, because the timed-out waiter is still counted by mutex->owned. This prevents the tests involving interlocked functions from giving true. Thus WaitForSingleObject() is the ONLY way for a waiter to get the lock. In other words, as soon as a timeout happens, the fast "interlocked path" gets blocked. It is only unblocked again after a call to WaitForSingleObject() succeeds: then the thread which now owns the lock fixes mutex->owned using mutex->timeouts, and the interlocked path is operational again (unless another timeout happens). I can certainly understand the desire to follow the KISS principle.
Antoine: I agree, the semaphore is the quick and robust solution.

sbt: I see your point. Still, I think we may have a flaw: the claim that (owned - timeouts) is never an under-estimate isn't true on modern architectures, I think. The order of the atomic decrement operations in the code means nothing and cannot be depended on to guarantee such a claim: the thread doing the reading may see the individual updates in any order, and so the estimate may be an over- or an underestimate. It would fix this and simplify things a lot to take the special case for timeout==0 out of the code.
sbt wrote: The interlocked functions act as read (and write) memory barriers, so mutex->timeouts is never any staler than the value of owned obtained from the preceding interlocked function call. As you say, my claim that (owned - timeouts) is never an underestimate is dubious. But the only time I use this quantity is in this bit:
If this test gives a false negative we just fall through to the slow path (no problem). If we get a false positive it is because one of the two following races happened:
There is no barrier in use on the read part. I realize that this is a subtle point, but in fact the atomic functions make no memory barrier guarantees either (I think). And even if they did, you are not using a memory barrier when you read 'timeouts' to perform the subtraction. On a multiprocessor machine the two values can easily fall on two cache lines and become visible to the other CPU in a random fashion. In other words: one CPU decreases "owned" and "timeouts" at about the same time; a different thread, on a different CPU, may see the decrease in "owned" but not the decrease in "timeouts" until some random later point. Lockless algorithms are notoriously hard precisely because of subtle pitfalls like these. I could even be wrong about the above, but that would not be blindingly obvious either. I'm sure you've read something similar, but this is where I remember seeing some of this stuff mentioned: http://msdn.microsoft.com/en-us/library/ee418650(v=vs.85).aspx
Antoine: I notice that even the fast path contains a ResetEvent() call. I think this is a kernel call and so just as expensive as directly using a semaphore :). Otherwise, the logic looks robust, although ResetEvent() and Event objects always give me an uneasy feeling.
Yes, in my timings it doesn't show significant improvements compared to the semaphore version.
krisvale wrote: From the webpage you linked to:

Interlocked functions would be pretty useless for implementing mutexes if they did not also act as some kind of barrier: preventing two threads from manipulating an object at the same time is not much use if they don't also get up-to-date views of that object while they own the lock. Given that mutex->timeouts is only modified by interlocked functions, an unprotected read of mutex->timeouts will get a value which is at least as fresh as the one available the last time we crossed a barrier by calling InterlockedXXX() or WaitForSingleObject(). Note that if the read of mutex->timeouts in this line
gives the "wrong" answer, it will be an underestimate, because we own the lock and the only other threads which might interfere will be incrementing the counter. The worst that can happen is that the fast path remains blocked: consistency is not affected.
For 3.2, I would prefer a solution that makes the least changes to the current code.

For 3.3, I predict that any Semaphore-based version will shortly be replaced by an "optimized" one.
No need to guess: http://msdn.microsoft.com/en-us/library/ms683560(v=vs.85).aspx

"This function generates a full memory barrier (or fence) to ensure that memory operations are completed in order."
Martin: I wouldn't worry too much about replacing a "Mutex" with a "Semaphore". There is no reason to believe that they behave any differently scheduling-wise, and if they did, then any Python code that this would affect would be extremely poorly written.

sbt: Please allow me to repeat: lockless programming is notoriously hard, and there is almost always one subtlety or other that is overlooked. I can't begin to count the number of times I've reluctantly had to admit defeat to its devious manipulations.
Sbt: I re-read the code, and while I still maintain that the evaluation in line 50 is meaningless, I agree that the worst that can happen is an incorrect timeout. So, I suggest a change in the comments: do not claim that the value is never an underestimate, and explain how falsely returning a WAIT_TIMEOUT is safe and only occurs when the lock is heavily contended. Sorry for being so nitpicky, but having this stuff correct is crucial.
krisvale wrote: Sorry for being so nitpicky but having this stuff correct is crucial. Nitpickiness is a necessity ;-) I've done a new version which replaces the "meaningless" racy test on line 50 with the simpler test
As with the old "meaningless" test, if the test succeeds then there must at least have been very recent contention for the lock, so timing out is reasonable. Also, the new patch only considers rezeroing mutex->timeouts if we acquire the lock on the slow path. The patch contains more comments than before.
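For illustration only, the owned/timeouts bookkeeping being discussed can be modelled single-threadedly in Python. All names here are invented, the real code performs these updates with Interlocked* functions, and this deliberately ignores the memory-ordering questions debated above — it only shows how the "timeout debt" blocks and then unblocks the fast path:

```python
class TimedNRMutex:
    """Toy model of the owned/timeouts scheme.  owned = owners plus
    waiters, minus one; timeouts = waits that gave up without undoing
    their increment."""

    def __init__(self):
        self.owned = -1
        self.timeouts = 0

    def try_fast_acquire(self):
        # Fast path: only usable while no timeout "debt" is outstanding.
        if self.timeouts == 0 and self.owned == -1:
            self.owned += 1
            return True
        return False  # caller falls through to the slow (event) path

    def timed_out_wait(self):
        # A waiter gives up: record the stale increment rather than
        # racing the releaser to undo it.
        self.owned += 1
        self.timeouts += 1

    def slow_acquire(self):
        # Register as an owner/waiter; the thread that wins the event
        # wait is the sole owner, so it can safely fold the timeout
        # debt back into 'owned'.
        self.owned += 1
        self.owned -= self.timeouts
        self.timeouts = 0

m = TimedNRMutex()
assert m.try_fast_acquire()       # uncontended: fast path succeeds
m.timed_out_wait()                # another "thread" times out waiting
m.owned -= 1                      # owner releases (decrement only)
assert not m.try_fast_acquire()   # fast path blocked by the debt
m.slow_acquire()                  # slow-path winner repairs the counters
m.owned -= 1                      # ...and releases
assert m.try_fast_acquire()       # fast path operational again
```

This mirrors the consistency argument above: once a timeout happens, only the slow path can hand out the lock, and the winner rezeroes the counters before the fast path is usable again.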
New changeset 9b12af6e9ea9 by Antoine Pitrou in branch '3.2':
New changeset 9d658f000419 by Antoine Pitrou in branch 'default':
I have now committed the semaphore implementation, so as to fix the issue.
Here is a new patch. It also adds an internal set of critical section/condition variable structures that can be used on Windows to do other such things without resorting to explicit kernel objects. This code works on XP and newer, since it relies on the "semaphore" kernel object being present. In addition, if compiled to target Vista or greater, it will use the built-in critical section primitives and the SRWLock objects (which are faster still than CriticalSection objects and more robust).
Can you post some numbers?
Two runs with standard locks:

D:\pydev\hg\cpython2>pcbuild\amd64\python.exe -m timeit -s "import _thread; l = _thread.allocate_lock()" l.acquire();l.release()
D:\pydev\hg\cpython2>pcbuild\amd64\python.exe -m timeit -s "import _thread; l = _thread.allocate_lock()" l.acquire();l.release()

Two runs with CV locks (emulated):

D:\pydev\hg\cpython2>pcbuild\amd64\python.exe -m timeit -s "import _thread; l = _thread.allocate_lock()" l.acquire();l.release()
D:\pydev\hg\cpython2>pcbuild\amd64\python.exe -m timeit -s "import _thread; l = _thread.allocate_lock()" l.acquire();l.release()

Two runs with CV locks targeted for Vista:

D:\pydev\hg\cpython2>pcbuild\amd64\python.exe -m timeit -s "import _thread; l = _thread.allocate_lock()" l.acquire();l.release()
D:\pydev\hg\cpython2>pcbuild\amd64\python.exe -m timeit -s "import _thread; l = _thread.allocate_lock()" l.acquire();l.release()

You can see the big win from not doing kernel switches all the time, shedding 60% of the time.
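The same micro-benchmark can be reproduced from the timeit module directly rather than the command line (the exact numbers are machine-dependent; the iteration count here is chosen arbitrarily):

```python
import timeit

# Time an uncontended acquire/release pair, as in the command lines above.
n = 200_000
total = timeit.timeit(
    "l.acquire(); l.release()",
    setup="import _thread; l = _thread.allocate_lock()",
    number=n,
)
print(f"{total / n * 1e6:.3f} usec per acquire/release pair")
```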
Any thoughts? Is a 60% performance increase for the common case of acquiring an uncontested lock worth doing? Btw, for our console game I also opted for non-semaphore-based locks in thread_pthread.h, because our console profilers were alarmed at all the kernel transitions caused by the GIL being ticked...
Yes, I agree it is. However, the Vista-specific path seems uninteresting, if it's really 2% faster.
That's the old GIL. The new GIL uses a fixed timeout with a condition variable.
The Vista-specific path is included there for completeness, if and when code moves to that platform, besides showing what the "emulated CV" is actually emulating. Also, I am aware of the old/new GIL, but our console game uses Python 2.7, and the old GIL will be living on for many a good year, just like 2.7 will. But you make a good point. I had forgotten that the new GIL is actually implemented with emulated condition variables (current version contributed by myself :). I think a different patch is in order, where ceval_gil.h makes use of the platform-specific "condition variable" services as declared in thread_platform.h. There is no point in duplicating code.
Here is a new patch.
So, what do you think, should this go in? Any qualms about the thread_nt_cv.h header?
On the principle it's ok, but I'd like to do a review before it goes in.
-1. Choice of operating system must be a run-time decision, not a compile-time decision. We will have to support XP for quite some time.
Antoine: of course, sorry for rushing you.

Martin: incidentally, we should make sure that Python defines NTDDI_VERSION to NTDDI_WINXP (0x05010000), either in the sources before including "windows.h" (tricky) or in the solution (probably in the .props files). This will ensure that we don't attempt to use non-existent features unless we dynamically check for them.
Martin means that you shouldn't use #ifdefs but runtime detection, so that we can provide a single installer for all Windows versions.
I understand what he meant, but that wasn't the intent of the patch. The patch is to use simulated critical sections using a semaphore, same as the new GIL implementation already does. If you want dynamic runtime detection, then this is a feature request :)
We do the runtime checks for a few things in winreg, as well as the os.symlink implementation, and I think a few other supplemental functions for symlinking.
Ok, but the patch as provided would become more complicated. For general consumption, the primitives would need to become dynamically allocated structures, and so on. I'm not sure that it's worth the effort, but I can have a look. (I thought the patch was radical enough, to be honest.)
As it stands, the patch is pointless, and can safely be rejected. We will just not have NTDDI_VERSION defined at NTDDI_VISTA for any foreseeable future, so all the Vista-specific code can be eliminated from the patch. Python has been using dynamic checking for APIs "forever". In 2.5, there was a check for the presence of GetFileAttributesExA; in 2.4, there was a check for CryptAcquireContextA.
Martin, I think you misunderstand completely. The patch is _not_ about using the Vista features. It is about not using a "mutex" for threading.Lock. Currently, the locks in Python use Mutex objects and a WaitForSingleObject() system call to acquire them. The patch comes in two flavors. The current version _emulates_ condition variables on Windows by the same mechanism as I introduced for the new GIL, that is, using a combination of "critical section" objects and a construct made of a "semaphore" and a counter. Also provided, for those that want, and for future reference, is a version that uses native system objects (Windows condition variables and SRWLocks). I can drop them from the patch to make you happy, but they are dormant and nicely show how conditional compilation can switch in more modern features for a different target architecture.

K
Again, to clarify, because this seems to have been put to sleep by Martin's unfortunate dismissal, a recap of the patch:

I think Martin got distracted by 3) and thought that was the only thing this patch is about. The important part is 1) and 2), whereas 3) is provided as a bonus (and to make sure that 1) is future-safe).

So, can we get this reviewed please?
I agree with Martin that it's not a good idea to add "dead" code. Furthermore, your patch has:

+#ifndef _PY_EMULATED_WIN_CV

so am I right to understand that when compiled under Vista or later, it will produce an XP-incompatible binary?
Possibly the patch had a mixup.