Optimize python Locks on Windows #59243
Comments
The attached patch does three things:

Using this locking mechanism on Windows results in a 60% speedup for uncontested locks, because it removes the kernel transition required by regular semaphore objects. Before: After:
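For readers unfamiliar with the technique, here is a minimal sketch of the general idea, assuming the classic "benaphore" pattern (an illustration only, not the attached patch): the uncontended acquire/release path uses just an interlocked counter in user mode, and the kernel semaphore is entered only under contention.

#include <windows.h>

typedef struct {
    LONG count;   /* threads inside acquire/release; 0 means the lock is free */
    HANDLE sem;   /* kernel semaphore, touched only under contention */
} BENAPHORE;

static void ben_init(BENAPHORE *b)
{
    b->count = 0;
    b->sem = CreateSemaphore(NULL, 0, 1, NULL);
}

static void ben_acquire(BENAPHORE *b)
{
    /* uncontended case: count goes 0 -> 1 with no kernel transition */
    if (InterlockedIncrement(&b->count) > 1)
        WaitForSingleObject(b->sem, INFINITE);  /* contended: block in the kernel */
}

static void ben_release(BENAPHORE *b)
{
    /* uncontended case: count goes 1 -> 0 with no kernel transition */
    if (InterlockedDecrement(&b->count) > 0)
        ReleaseSemaphore(b->sem, 1, NULL);      /* wake exactly one waiter */
}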
This defect springs out of issue bpo-11618
Applies and builds cleanly on Win7 32-bit. The speed difference is visible here too:

PS D:\Data\cpython\PCbuild> .\python.exe -m timeit -s "from _thread import allocate_lock; l=allocate_lock()" "l.acquire();l.release()"

The test suite had a few Python crashes, but so did a build of trunk. No time to diagnose these now, but I didn't see any failures that weren't also present in the unpatched build; test suite results look essentially the same for both.
I've tested 64-bit Ubuntu myself in a VirtualBox, confirming that the pythread functionality is untouched.
While I'm confident about the correctness of this implementation (it's in production use right now), I'd like comments on the architecture.
OK, I take the lack of negative reviews as general approval. I'll improve the comments a bit, write the appropriate NEWS item and commit soon.
New changeset 978326f98316 by Kristján Valur Jónsson in branch 'default':
There's a problem here:

Fatal Python error: PyCOND_SIGNAL(gil_cond) failed

http://www.python.org/dev/buildbot/all/builders/x86%20XP-4%203.x/builds/6859/steps/test/logs/stdio
Py_LOCAL_INLINE(int)
_PyCOND_WAIT_MS(PyCOND_T *cv, PyMUTEX_T *cs, DWORD ms)
{
    DWORD wait;
    cv->waiting++;
    PyMUTEX_UNLOCK(cs);
    /* The "lost wakeup bug" would occur if the caller were interrupted here,
     * but we are safe because we are using a semaphore, which has an internal
     * count.
     */
    wait = WaitForSingleObject(cv->sem, ms);
    PyMUTEX_LOCK(cs);
    if (wait != WAIT_OBJECT_0)
        --cv->waiting;
    /* Here we have a benign race condition with PyCOND_SIGNAL.
     * When a timeout or failure occurs, it is possible that PyCOND_SIGNAL
     * has also decremented this value and released the semaphore. This is
     * benign because it just means an extra spurious wakeup for a waiting
     * thread.
     */
    ...

Are you really sure this race is benign? If cv->waiting gets decremented twice, it can become negative. PyCOND_SIGNAL() is defined as:

Py_LOCAL_INLINE(int)
PyCOND_SIGNAL(PyCOND_T *cv)
{
    if (cv->waiting) {
        cv->waiting--;
        return ReleaseSemaphore(cv->sem, 1, NULL) ? 0 : -1;
    }
    return 0;
}

While cv->waiting is negative, each call of PyCOND_SIGNAL() decrements cv->waiting and increments the semaphore, while each call of PyCOND_WAIT() increments cv->waiting and decrements the semaphore. So if calls of PyCOND_SIGNAL() outnumber calls of PyCOND_WAIT(), cv->waiting can become arbitrarily negative and the semaphore can overflow. Maybe just changing the test in PyCOND_SIGNAL() to

    if (cv->waiting > 0) {

would be enough, but I am not convinced.
Thanks Antoine. I tested this in my VirtualBox, so something new must have happened... Anyway, the GIL code should not have changed from before, only moved about slightly. I'll figure out what happened.
You are right, Richard. Thanks for pointing this out. This is not a new problem, however: this code has been in the new GIL since it was launched. The purpose of the "n_waiting" member is to make "signal" a no-op when no one is waiting. Otherwise, we could increase the semaphore's internal count without bound, since the condition variable protocol allows "signal" to be called as often as one desires. When the next waiter comes along, with semaphore count==1, it will just pass through, having incremented n_waiting to 0. The problem you describe is that if "signal" is ever hit with n_waiting < 0, it will continue to tip this balance, causing the semaphore count to grow, and thus defeating the purpose of n_waiting. This oversight is my fault. However, rest assured that it is rare, having been in the new GIL implementation since its launch :) There are two fixes:
I'll implement number 1), and improve the documentation to that effect.
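Presumably, fix 1) amounts to the guard Richard suggested; a sketch of what the patched function would look like (the committed change may differ in detail):

Py_LOCAL_INLINE(int)
PyCOND_SIGNAL(PyCOND_T *cv)
{
    /* Only consume a waiter if one is actually registered. A transiently
     * negative cv->waiting (left by a timed-out waiter racing with us) can
     * then no longer trigger extra semaphore releases. */
    if (cv->waiting > 0) {
        cv->waiting--;
        return ReleaseSemaphore(cv->sem, 1, NULL) ? 0 : -1;
    }
    return 0;
}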
The old version was:

__inline static void _cond_signal(COND_T *cond) { ...

So the test should be "if (cv->waiting > 0)", not "if (cv->waiting)".
Well spotted. This probably fixes the failure we saw on the buildbots as well.
New changeset 110b38c36a31 by Kristjan Valur Jonsson in branch 'default':
Standard condition variables have the following guarantees:
The implementation in condvar.h does not have these guarantees, since a future waiter (possibly the signalling thread itself) may steal the signal intended for a current waiter. In many cases this does not matter, but in some it can cause a deadlock. For instance, consider:

from threading import Condition, Thread
import time

def set_to_value(value, cond, state):
    while 1:
        with cond:
            while state.value == value:
                cond.wait()
            state.value = value
            print("set_to_value(%s)" % value)
            cond.notify_all()

class state:
    value = False

c = Condition()
for i in (0, 1):
    t = Thread(target=set_to_value, args=(i, c, state))
    t.daemon = True
    t.start()

time.sleep(5)

This *should* make state.value bounce back and forth between 0 and 1 continually for five seconds. But with a condition variable implemented as in condvar.h, this program is liable to deadlock because the signalling thread steals the signal intended for the other thread. I think a note about this should be added to condvar.h.
Yes, another correct observation. This can happen if a thread is interrupted between releasing the mutex and waiting on the semaphore. Personally, I'm not sure it is a wise guarantee to make; the shift over the last few years has been away from "handover" semantics and towards "retry" semantics for locking in general, particularly with the rise of multiprocessing. But this distinction should be made clear, and I will make sure to document it. Thanks for pointing this out.
If you make sure internal users are immune to this issue, then fine (but make sure to document it somewhere).
Let me elaborate: the GIL can perhaps suffer lost wakeups from time to time. The Lock API certainly shouldn't.
I think with FORCE_SWITCHING defined (the default?) it is not possible for the thread releasing the GIL to immediately reacquire it (unless there is a spurious wakeup while waiting on switch_cond). If all the threads waiting on a condition are testing the same predicate, then the stolen wakeup issue probably won't cause any misbehaviour.
The implementation in condvar.h is basically the same as one of the attempts mentioned in
(Listing 2, fixed to use non-binary semaphores.) The implementation for multiprocessing.Condition is virtually the same as Listing 3, which the author says he thinks is "formally correct" but with "a fundamental performance problem".
To me, it seems similar to the last listing (under "The Sequel—NT and
The problem Richard describes isn't a lost wakeup. PyCOND_SIGNAL() _will_ wake up _at least_ one thread; it just isn't guaranteed to be one of those that previously called PyCOND_WAIT(). It could be a latecomer to the game, including the thread that called signal itself. If no such thread comes in to steal it, then one of the waiting threads _will_ wake up. None of the internal usages of condition variables makes this assumption about the order of wakeup from PyCOND_WAIT().
Ah, you said multiprocessing.Condition. Sorry. I was thinking about
OK, thanks for clearing that up.
It's an interesting article, Richard, but I don't see how their second attempt solves the problem. All it does is prevent the thread doing the Signal(), not other threads, from stealing the wakeup. I think I know how to fix this correctly, using a separate internal "locking" condition variable. I will make some offline experiments with that, to see if it makes sense given the added complexity. In the meantime, I will document this issue and add a link to the article you mentioned.
The notes should also mention that PyCOND_SIGNAL() and PyCOND_BROADCAST() must be called while holding the mutex. (pthreads does not have that restriction.)
Right. A number of implementations are subject to serious problems if the mutex isn't held when calling pthread_cond_signal(), including the notorious "lost wakeup" bug, e.g. http://docs.oracle.com/cd/E19963-01/html/821-1601/sync-21067.html, so it is certainly recommended practice to call pthread_cond_signal() with the mutex held regardless. But you are right: the emulation depends on the mutex not only for predictable scheduling but also for synchronizing access to its internal state, and this should be documented.
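To make the documented contract concrete, here is a small self-contained sketch of the recommended pattern, written against the native Win32 CONDITION_VARIABLE API rather than the condvar.h emulation; the names init, waiter, signaller and the "ready" flag are illustrative only. It also shows why Richard's same-predicate observation holds: a stolen or spurious wakeup just costs one extra trip around the waiter's loop.

#include <windows.h>

static CRITICAL_SECTION mut;
static CONDITION_VARIABLE cond;
static int ready = 0;               /* hypothetical example predicate */

static void init(void)
{
    InitializeCriticalSection(&mut);
    InitializeConditionVariable(&cond);
}

static void waiter(void)
{
    EnterCriticalSection(&mut);
    while (!ready)                  /* re-test the predicate after every wakeup */
        SleepConditionVariableCS(&cond, &mut, INFINITE);
    /* ... consume the state guarded by mut ... */
    LeaveCriticalSection(&mut);
}

static void signaller(void)
{
    EnterCriticalSection(&mut);     /* hold the mutex while changing the      */
    ready = 1;                      /* predicate and signalling; an emulation */
    WakeConditionVariable(&cond);   /* like condvar.h also relies on it to    */
    LeaveCriticalSection(&mut);     /* guard its internal waiter count        */
}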
New changeset d7a72fdcc168 by Kristjan Valur Jonsson in branch 'default':
Do you mean the listing on page 5? (The earlier attempts were failures.) The signalling thread holds the lock "x" while issuing the signal "s.V()" and waiting for notification of wakeup "h.P()". A new thread cannot steal the wakeup because it needs to acquire the lock "x" before it can start its wait. Of course, if the main mutex is always held when doing signal()/broadcast() then the lock "x" is unnecessary. I don't think trying to do a full emulation is necessary. Better to just document the limitations.
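For illustration, here is my reading of that listing as a Win32 sketch, using the article's names "x", "s" and "h"; error handling is omitted, and this is an interpretation of the scheme rather than proposed CPython code.

#include <windows.h>
#include <limits.h>

typedef struct {
    CRITICAL_SECTION x;   /* inner lock: protects waiters, serializes signallers */
    HANDLE s;             /* counting semaphore: pending signals */
    HANDLE h;             /* handshake semaphore: wakeup confirmations */
    int waiters;
} HANDSHAKE_COND;

static void cond_init(HANDSHAKE_COND *cv)
{
    InitializeCriticalSection(&cv->x);
    cv->s = CreateSemaphore(NULL, 0, LONG_MAX, NULL);
    cv->h = CreateSemaphore(NULL, 0, LONG_MAX, NULL);
    cv->waiters = 0;
}

/* m is the external mutex; the caller holds it, as for pthread_cond_wait */
static void cond_wait(HANDSHAKE_COND *cv, CRITICAL_SECTION *m)
{
    EnterCriticalSection(&cv->x);
    cv->waiters++;
    LeaveCriticalSection(&cv->x);

    LeaveCriticalSection(m);
    WaitForSingleObject(cv->s, INFINITE);   /* s.P(): wait for a signal */
    ReleaseSemaphore(cv->h, 1, NULL);       /* h.V(): confirm the wakeup */
    EnterCriticalSection(m);
}

static void cond_signal(HANDSHAKE_COND *cv)
{
    EnterCriticalSection(&cv->x);             /* late waiters block here... */
    if (cv->waiters > 0) {
        cv->waiters--;
        ReleaseSemaphore(cv->s, 1, NULL);     /* s.V(): post one signal */
        WaitForSingleObject(cv->h, INFINITE); /* h.P(): ...until a registered
                                                 waiter has consumed it */
    }
    LeaveCriticalSection(&cv->x);
}

The point Richard makes is visible in cond_signal(): because the signaller holds "x" across s.V() and h.P(), a latecomer calling cond_wait() blocks on "x" and cannot consume the semaphore ahead of the registered waiters.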
Ah, right, the lock x, I forgot about that. |
Line 1.41 of the change reads "Generic emulations of the pthread_cond_* API using ...". Should lines 1.45 and 1.46 be removed? Also, I would not recommend the win32-cv-1.html page as "edificating" (edifying?). The implementations there all either suffer from the same stolen wakeup issue or are broken.
Thanks. |
I see dead code here:

Py_LOCAL_INLINE(int)
PyCOND_BROADCAST(PyCOND_T *cv)
{
    if (cv->waiting > 0) {
        return ReleaseSemaphore(cv->sem, cv->waiting, NULL) ? 0 : -1;
        cv->waiting = 0;
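        /* dead code: the return statement above exits before this runs */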
    }
    return 0;
}
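A corrected version would reset the counter before releasing the semaphore; a sketch of the obvious fix (the committed change may differ in detail):

Py_LOCAL_INLINE(int)
PyCOND_BROADCAST(PyCOND_T *cv)
{
    int waiting = cv->waiting;
    if (waiting > 0) {
        cv->waiting = 0;    /* reset first, then wake every registered waiter */
        return ReleaseSemaphore(cv->sem, waiting, NULL) ? 0 : -1;
    }
    return 0;
}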
Thanks, Benjamin. That's what reviews are for :)
New changeset 08b87dda6f6a by Kristján Valur Jónsson in branch 'default':

New changeset fde60d3f542e by Kristján Valur Jónsson in branch '3.3':