Locks broken wrt timeouts on Windows #55827

Closed
sbt mannequin opened this issue Mar 20, 2011 · 54 comments
Labels
type-bug An unexpected behavior, bug, or error

Comments


sbt mannequin commented Mar 20, 2011

BPO 11618
Nosy @loewis, @pitrou, @kristjanvalur, @tjguk, @briancurtin
Files
  • test-timeout.py
  • locktimeout.patch
  • semlocknt.patch
  • locktimeout2.patch
  • critlocknt.patch
  • locktimeout3.patch
  • ntlocks.patch
  • ntlocks.patch
  • Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = None
    closed_at = <Date 2012-06-07.10:58:19.061>
    created_at = <Date 2011-03-20.17:06:57.180>
    labels = ['type-bug']
    title = 'Locks broken wrt timeouts on Windows'
    updated_at = <Date 2012-06-07.10:58:19.060>
    user = 'https://bugs.python.org/sbt'

    bugs.python.org fields:

    activity = <Date 2012-06-07.10:58:19.060>
    actor = 'kristjan.jonsson'
    assignee = 'none'
    closed = True
    closed_date = <Date 2012-06-07.10:58:19.061>
    closer = 'kristjan.jonsson'
    components = []
    creation = <Date 2011-03-20.17:06:57.180>
    creator = 'sbt'
    dependencies = []
    files = ['21304', '21306', '21308', '21322', '21325', '21336', '25271', '25351']
    hgrepos = []
    issue_num = 11618
    keywords = ['patch']
    message_count = 54.0
    messages = ['131515', '131520', '131521', '131524', '131525', '131527', '131529', '131530', '131533', '131534', '131635', '131636', '131642', '131643', '131650', '131656', '131661', '131669', '131670', '131672', '131676', '131682', '131684', '131685', '131686', '131695', '131699', '131702', '131728', '131732', '131744', '131745', '132618', '132619', '158713', '158720', '158830', '159071', '159072', '159081', '159210', '159427', '159429', '159432', '159447', '159459', '159465', '159466', '159468', '159681', '159686', '159819', '159949', '162468']
    nosy_count = 7.0
    nosy_names = ['loewis', 'pitrou', 'kristjan.jonsson', 'tim.golden', 'brian.curtin', 'python-dev', 'sbt']
    pr_nums = []
    priority = 'normal'
    resolution = 'fixed'
    stage = 'resolved'
    status = 'closed'
    superseder = None
    type = 'behavior'
    url = 'https://bugs.python.org/issue11618'
    versions = ['Python 3.2', 'Python 3.3']


    sbt mannequin commented Mar 20, 2011

    In thread_nt.h, when the WaitForSingleObject() call in
    EnterNonRecursiveMutex() fails with WAIT_TIMEOUT (or WAIT_FAILED) the
    mutex is left in an inconsistent state.

    Note that the first line of EnterNonRecursiveMutex() is the comment

    /* Assume that the thread waits successfully */
    

    Allowing EnterNonRecursiveMutex() to fail with a timeout obviously
    violates this promise ;-) I think the problem was introduced to Python
    3.2 with:

     bpo-7316: Add a timeout functionality to common locking operations.
    

    The following Windows session demonstrates unexpected behaviour:

    Python 3.3a0 (default, Mar 19 2011, 18:16:48) [MSC v.1500 32 bit (Intel)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import threading
    >>> l = threading.Lock()
    >>> l.acquire()
    True
    >>> l.acquire(timeout=1)
    False
    >>> l.release()
    >>> l.locked()                  # should return False
    True
    >>> l.acquire(blocking=False)   # should return True
    False

    Also, after a timeout, uncontended acquires/releases always take the
    slow path:

    D:\Repos\cpython\PCbuild>python -m timeit ^
    More? -s "from threading import Lock; l = Lock()" ^
    More? "l.acquire();l.release()"
    1000000 loops, best of 3: 0.974 usec per loop

    D:\Repos\cpython\PCbuild>python -m timeit ^
    More? -s "from threading import Lock; l = Lock()" ^
    More? -s "l.acquire();l.acquire(timeout=0.1);l.release()" ^
    More? "l.acquire();l.release()"
    100000 loops, best of 3: 2.18 usec per loop

    A unit test is attached which passes on Linux but has three failures
    on Windows.

    The "owned" field of NRMUTEX is a count of the number of threads
    waiting for the mutex (not including the owner). "owned" will
    over-estimate the number of waiters if a timeout occurs, because the
    timed out thread will still be counted as a waiter.

    The obvious fix is to decrement mutex->owned when a timeout occurs.
    Unfortunately that would introduce a race which might allow two
    threads to think they own the lock at the same time.

    I also notice that EnterNonRecursiveMutex() wrongly sets
    mutex->thread_id to the current thread even when it fails with a
    timeout. It appears that the thread_id field is never actually used
    -- is it there to help with debugging? Perhaps it should just be
    removed.
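
    In outline, the code in question looks like this (a simplified sketch, not the verbatim thread_nt.h source):

        /* Simplified sketch of the pre-fix EnterNonRecursiveMutex() --
           not the verbatim source.  mutex->owned starts at -1,
           meaning "free". */
        DWORD
        EnterNonRecursiveMutex(PNRMUTEX mutex, DWORD milliseconds)
        {
            /* Assume that the thread waits successfully */
            DWORD ret ;

            /* The increment registers this thread as owner-or-waiter. */
            if (InterlockedIncrement(&mutex->owned))
                /* Some thread owns the mutex: block on the event.  If this
                   returns WAIT_TIMEOUT, nothing undoes the increment above,
                   so the timed-out thread stays counted as a waiter. */
                ret = WaitForSingleObject(mutex->hevent, milliseconds) ;
            else
                ret = WAIT_OBJECT_0 ;

            /* Bug: the owner id is recorded even on WAIT_TIMEOUT. */
            mutex->thread_id = GetCurrentThreadId() ;
            return ret ;
        }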

    BTW only thread_pthread.h and thread_nt.h have implementations of
    PyThread_acquire_lock_timed(). Since this function appears to be
    required by _threadmodule.c, does this mean that in Python 3.2
    threads are only supported with pthreads and win32? If so you can
    get rid of all those other thread_*.h files.

    @sbt sbt mannequin added the type-bug An unexpected behavior, bug, or error label Mar 20, 2011

    pitrou commented Mar 20, 2011

    It appears that the thread_id field is never actually used
    -- is it there to help with debugging? Perhaps it should just be
    removed.

    True, I think we can remove it.

    does this mean that in Python 3.2
    threads are only supported with pthreads and win32? If so you can
    get rid of all those other thread_*.h files.

    Getting rid of them is scheduled for 3.3.


    sbt mannequin commented Mar 20, 2011

    First stab at a fix.

    Gets rid of mutex->thread_id and adds a mutex->timeouts counter.

    Does not try to prevent mutex->owned from overflowing.

    When no timeouts have occurred I don't think it changes behaviour, and it uses the same number of Interlocked functions.


    pitrou commented Mar 20, 2011

    Well, Windows 2000 has semaphores, so why not use them? It makes the code much simpler. Patch attached (including test).
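
    In sketch form, the semaphore approach is simply the following (illustrative names, not the patch itself):

        /* Minimal sketch of a semaphore-based non-recursive lock.  The
           kernel keeps the semaphore count consistent even when a timed
           wait fails, which is what makes this approach easy to get
           right.  Names are illustrative, not the patch's. */
        #include <windows.h>

        typedef struct { HANDLE sem ; } NRLOCK ;

        void nrlock_init(NRLOCK *lock)
        {
            /* initial count 1, maximum count 1: a binary semaphore,
               i.e. an unlocked lock */
            lock->sem = CreateSemaphore(NULL, 1, 1, NULL) ;
        }

        /* WAIT_OBJECT_0 on acquisition, WAIT_TIMEOUT otherwise; a
           timed-out waiter leaves no state behind to clean up. */
        DWORD nrlock_acquire(NRLOCK *lock, DWORD milliseconds)
        {
            return WaitForSingleObject(lock->sem, milliseconds) ;
        }

        void nrlock_release(NRLOCK *lock)
        {
            ReleaseSemaphore(lock->sem, 1, NULL) ;
        }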


    sbt mannequin commented Mar 20, 2011

    Have you tried benchmarking it?

    Interlocked functions are *much* faster than Win32 mutex/semaphores in the uncontended case.

    It only doubles the time taken for a "l.acquire(); l.release()" loop in Python code, but at the C level it is probably 10 times slower.

    Do you really want the GIL to be 10 times slower in the uncontended case? ;-)


    pitrou commented Mar 20, 2011

    Have you tried benchmarking it?

    Interlocked functions are *much* faster than Win32 mutex/semaphores in
    the uncontended case.

    Well, I'd rather have obviously correct code than
    difficult-to-understand "speedy" code.

    The patch I've posted takes less than a microsecond per acquire/release
    pair, and that's in a virtual machine to begin with.

    Do you really want the GIL to be 10 times slower in the uncontended case? ;-)

    The GIL doesn't use these functions (see ceval_gil.h).


    loewis mannequin commented Mar 20, 2011

    Interestingly, it used to be a Semaphore up to [5e6e9e893acd]; in [cde4da18c4fa], Yakov Markovitch rewrote this to be the faster implementation we have today.


    pitrou commented Mar 20, 2011

    Interestingly, it used to be a Semaphore up to [5e6e9e893acd]; in
    [cde4da18c4fa], Yakov Markovitch rewrote this to be the faster
    implementation we have today.

    At that time, the Pythread_* functions were still in use by the GIL
    implementation, and it made a difference judging by the commit message.


    loewis mannequin commented Mar 20, 2011

    At that time, the Pythread_* functions were still in use by the GIL
    implementation, and it made a difference judging by the commit message.

    Hmm. And if some application uses thread.lock heavily, won't it still
    make a difference?


    pitrou commented Mar 20, 2011

    > At that time, the Pythread_* functions were still in use by the GIL
    > implementation, and it made a difference judging by the commit message.

    Hmm. And if some application uses thread.lock heavily, won't it still
    make a difference?

    An acquire/release pair is less than one microsecond here. Compared to
    the evaluation overhead of Python code, it seems not very significant.
    That said, if someone can guarantee that the complex approach is
    correct, why not.


    kristjanvalur mannequin commented Mar 21, 2011

    Yes, the race condition with the timeout is a problem.
    Here is a patch that implements this lock using a condition variable.
    I agree that one must consider performance/simplicity when doing this.


    pitrou commented Mar 21, 2011

    Yes, the race condition with the timeout is a problem.
    Here is a patch that implements this lock using a condition variable.
    I agree that one must consider performance/simplicity when doing this.

    I don't understand why you need something that complicated. A simple
    semaphore should be enough (as in the POSIX implementation).


    kristjanvalur mannequin commented Mar 21, 2011

    I'm just providing this as a fast alternative to the Semaphore, which as far as I know, will cause a kernel call every time.

    Complicated is relative. In terms of the condition variable api, I wouldn't say that it is. But given the fact that we have to emulate condition variables on older windows, then yes, it is complex.

    If we are rolling our own instead of using Semaphores (as has been suggested for performance reasons) then using a Condition variable is IMHO safer than a custom solution because the correctness of that approach is so easily provable.


    pitrou commented Mar 21, 2011

    I'm just providing this as a fast alternative to the Semaphore, which
    as far as I know, will cause a kernel call every time.

    A Semaphore might be "slow", but I'm not sure other primitives are
    faster. For the record, I tried another implementation using a critical
    section, and it's not significantly faster under a VM (even though MSDN
    claims critical sections are fast).

    Have you timed your solution?


    sbt mannequin commented Mar 21, 2011

    If we are rolling our own instead of using Semaphores (as has been
    suggested for performance reasons) then using a Condition variable is
    IMHO safer than a custom solution because the correctness of that
    approach is so easily provable.

    Assuming that you trust the implementation of condition variables, then I agree. Unfortunately implementing condition variables correctly on Windows is notoriously difficult. The patch contains the lines

    + Generic emulations of the pthread_cond_* API using
    + Win32 functions can be found on the Web.
    + The following read can be edificating (or not):
    + http://www.cse.wustl.edu/~schmidt/win32-cv-1.html

    Apparently all the examples from that web page are faulty one way or another.

    http://newsgroups.derkeiler.com/Archive/Comp/comp.programming.threads/2008-07/msg00025.html

    contains the following quote:

    Perhaps this list should provide links to a "reliable" windows
    condition variable implementation instead of continuously bad
    mouthing the ~schmidt/win32-cv-1.html page and thereby raising
    it's page rank. It would greatly help out all us newbies out here.

    pthreads-w32 used to use a solution depending on that paper but changed to something else. The following is a long but relevant read:

    ftp://sourceware.org/pub/pthreads-win32/sources/pthreads-w32-2-8-0-release/README.CV

    Of course implementing condition variables is a whole lot easier if you don't need to broadcast and you only need weak guarantees on the behaviour. So python's implementation may be quite sufficient. (It does appear that a thread which calls COND_SIGNAL() may consume that signal with a later call of COND_WAIT(). A "proper" implementation should never allow that because it can cause deadlocks in code depending on normal pthread semantics.)


    kristjanvalur mannequin commented Mar 21, 2011

    Emulating condition variables on windows became easy once Semaphores were provided by the OS because they provide a way around the lost wakeup problem. The current implementation in cpython was submitted by me :) The source material is provided for reference only.
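
    In outline, the semaphore-based emulation works like this (a sketch of the general technique, not the exact code in cpython):

        /* Sketch of a condition variable emulated with a semaphore plus a
           waiter count -- the general technique, not cpython's exact code.
           The external mutex protects 'waiting'. */
        typedef struct {
            HANDLE sem ;   /* waiters block here; signals queue in its count */
            int waiting ;  /* number of blocked waiters */
        } EMULATED_COND ;

        /* Caller holds 'mut'.  Returns WAIT_OBJECT_0 or WAIT_TIMEOUT. */
        DWORD cond_wait(EMULATED_COND *cv, CRITICAL_SECTION *mut, DWORD ms)
        {
            DWORD r ;
            cv->waiting++ ;
            LeaveCriticalSection(mut) ;
            /* No lost wakeup: a signal posted in this gap stays queued
               in the semaphore's count until a waiter consumes it. */
            r = WaitForSingleObject(cv->sem, ms) ;
            EnterCriticalSection(mut) ;
            if (r == WAIT_TIMEOUT)
                cv->waiting-- ;
                /* Benign race: a signal posted just before the timeout
                   remains queued, so a later cond_wait() may consume it --
                   the weak guarantee discussed above. */
            return r ;
        }

        /* Caller holds the mutex. */
        void cond_signal(EMULATED_COND *cv)
        {
            if (cv->waiting > 0) {
                cv->waiting-- ;
                ReleaseSemaphore(cv->sem, 1, NULL) ;
            }
        }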


    sbt mannequin commented Mar 21, 2011

    Benchmarks (on an old laptop running XP without a VM) doing

    D:\Repos\cpython\PCbuild>python -m timeit -s "from threading import Lock; l = Lock()" "l.acquire(); l.release()"
    1000000 loops, best of 3: 0.934 usec per loop

    default: 0.934
    locktimeout.patch: 0.965
    semlocknt.patch: 2.76
    locktimeout2.patch: 2.03


    kristjanvalur mannequin commented Mar 21, 2011

    Btw, the locktimeout.patch appears to have a race condition. LeaveNonRecursiveMutex may SetEvent when there is no thread waiting (because a timeout just occurred, but the thread on which it happened is still somewhere around line #62). This will cause the next WaitForSingleObject() to succeed, when it shouldn't.

    It is this race between the timeout occurring, and the ability of us being able to register that in the lock's bookkeeping, that is the source of all the race problems with the timeout. This is what prompted me to submit the condition variable version.


    pitrou commented Mar 21, 2011

    Just for the record, here is the critical section-based version.

    I would still favour committing the semaphore-based version first (especially in 3.2), and then discussing performance improvements if desired.


    sbt mannequin commented Mar 21, 2011

    Btw, the locktimeout.patch appears to have a race condition.
    LeaveNonRecursiveMutex may SetEvent when there is no thread waiting
    (because a timeout just occurred, but the thread on which it happened
    is still somewhere around line #62 ). This will cause the next
    WaitForSingleObject() to succeed, when it shouldn't.

    I believe the lock is still in a consistent state. If this race happens and SetEvent() is called then we must have mutex->owned > -1 because the timed out waiter is still counted by mutex->owned. This prevents the tests involving interlocked functions from succeeding. Thus WaitForSingleObject() is the ONLY way for a waiter to get the lock.

    In other words, as soon as a timeout happens the fast "interlocked path" gets blocked. It is only unblocked again after a call to WaitForSingleObject() succeeds: then the thread which now owns the lock fixes mutex->owned using mutex->timeouts and the interlocked path is operational again (unless another timeout happens).

    I can certainly understand the desire to follow the KISS principle.


    kristjanvalur mannequin commented Mar 21, 2011

    Antoine: I agree, the semaphore is the quick and robust solution.

    sbt: I see your point. Still, I think we still may have a flaw: The statement that (owned-timeouts) is never an under-estimate isn't true on modern architectures, I think. The order of the atomic decrement operations in the code means nothing and cannot be depended on to guarantee such a claim: The thread doing the reading may see the individual updates in any order, and so the estimate may be an over- or an underestimate.

    It would fix this and simplify things a lot to take the special case for timeout==0 out of the code.


    sbt mannequin commented Mar 21, 2011

    krisvale wrote:
    -----
    I see your point. Still, I think we still may have a flaw: The statement that (owned-timeouts) is never an under-estimate isn't true on modern architectures, I think. The order of the atomic decrement operations in the code means nothing and cannot be depended on to guarantee such a claim: The thread doing the reading may see the individual updates in any order, and so the estimate may be an over- or an underestimate.
    -----

    The interlocked functions act as read (and write) memory barriers, so mutex->timeouts is never any staler than the value of owned obtained from the preceding interlocked function call. As you say my claim that (owned-timeouts) is never an underestimate is dubious. But the only time I use this quantity is in this bit:

        else if (owned - mutex->timeouts != -1)     /* harmless race */
            return WAIT_TIMEOUT ;
    

    If this test gives a false negative we just fall through to the slow path (no problem). If we get a false positive it is because one of the two following races happened:

    1. Another thread just got the lock: letting the non-blocking acquire fail is clearly the right thing to do.

    2. Another thread just timed out: this means that a third thread must have held the lock up until very recently, so allowing a non-blocking acquire to fail is entirely reasonable (even if WaitForSingleObject() might now succeed).


    kristjanvalur mannequin commented Mar 21, 2011

    There is no barrier in use on the read part. I realize that this is a subtle point, but in fact, the atomic functions make no memory barrier guarantees either (I think). And even if they did, you are not using a memory barrier when you read the 'timeouts' to perform the subtraction. On a multiprocessor machine the two values can easily fall on two cache lines and become visible to the other cpu in a random fashion. In other words: One cpu decreases the "owner" and "timeouts" at about the same time. A different thread, on a different cpu may see the decrease in "owner" but not the decrease in "timeouts" until at some random later point.

    Lockless algorithms are notoriously hard and it is precisely because of subtle pitfalls like these. I could even be wrong about the above, but that would not be blindingly obvious either. I'm sure you've read something similar but this is where I remember seeing some of this stuff mentioned: http://msdn.microsoft.com/en-us/library/ee418650(v=vs.85).aspx


    kristjanvalur mannequin commented Mar 21, 2011

    Antoine: I notice that even the fast path contains a ResetEvent() call. I think this is a kernel call and so just as expensive as directly using a semaphore :). Otherwise, the logic looks robust, although ResetEvent() and Event objects always give me an uneasy feeling.


    pitrou commented Mar 21, 2011

    Antoine: I notice that even the fast path contains a ResetEvent()
    call. I think this is a kernel call and so just as expensive as
    directly using a semaphore :)

    Yes, in my timings it doesn't show significant improvements compared to
    the semaphore approach (although again it's on a VM, so I'm not sure how
    much this reflects a native Windows system).


    sbt mannequin commented Mar 21, 2011

    krisvale wrote
    ----
    There is no barrier in use on the read part. I realize that this is a subtle point, but in fact, the atomic functions make no memory barrier guarantees either (I think). And even if they did, you are not using a memory barrier when you read the 'timeouts' to perform the subtraction. On a multiprocessor machine the two values can easily fall on two cache lines and become visible to the other cpu in a random fashion. In other words: One cpu decreases the "owner" and "timeouts" at about the same time. A different thread, on a different cpu may see the decrease in "owner" but not the decrease in "timeouts" until at some random later point.
    ----

    From the webpage you linked to:
    ----
    Sometimes the read or write that acquires or releases a resource is done using one of the InterlockedXxx functions. On Windows this simplifies things, because on Windows, the InterlockedXxx functions are all full-memory barriers—they effectively have a CPU memory barrier both before and after them, which means that they are a full read-acquire or write-release barrier all by themselves.
    ----

    Interlocked functions would be pretty useless for implementing mutexes if they did not also act as some kind of barrier: preventing two threads from manipulating an object at the same time is not much use if they don't also get up-to-date views of that object while they own the lock.

    Given that mutex->timeouts is only modified by interlocked functions, an unprotected read of mutex->timeouts will get a value which is at least as fresh as the one available the last time we crossed a barrier by calling InterlockedXXX() or WaitForSingleObject().

    Note that if the read of mutex->timeouts in this line

    if ((timeouts = mutex->timeouts) != 0)
    

    gives the "wrong" answer it will be an underestimate because we own the lock and the only other threads which might interfere will be incrementing the counter. The worst that can happen is that the fast path remains blocked: consistency is not affected.


    loewis mannequin commented Mar 21, 2011

    I would still favour committing the semaphore-based version first
    (especially in 3.2), and then discussing performance improvements if
    desired.

    For 3.2, I would prefer a solution that makes least changes to the
    current code. This is better than fundamentally replacing the
    synchronization mechanism which locks are based on.

    For 3.3, I predict that any Semaphore-based version will be shortly
    replaced by something "fast". Benchmarks seem to indicate that you can
    get much faster than semaphores.


    loewis mannequin commented Mar 21, 2011

    I realize that this is
    a subtle point, but in fact, the atomic functions make no memory
    barrier guarantees either (I think).

    No need to guess:

    http://msdn.microsoft.com/en-us/library/ms683560(v=vs.85).aspx

    "This function generates a full memory barrier (or fence) to ensure that
    memory operations are completed in order."


    kristjanvalur mannequin commented Mar 22, 2011

    Martin: I wouldn't worry too much about replacing a "Mutex" with a "Semaphore". There is no reason to believe that they behave any differently scheduling-wise, and if they did, then any python code that this would affect would be extremely poorly written.

    sbt:
    Look, I really hate to be a pain but please consider: In line 50 of your patch the thread may pause at any point, perhaps even a number of times. Meanwhile, a number of locks/unlocks may go by. The values of "owned" and "timeouts" that the reader sees may be from any number of different lock states that the lock goes through during this, including any number of different reset cycles of these counters. In short, there is no guarantee that the values read represent any kind of mutually consistent state. They might as well be from two different locks.

    Please allow me to repeat: Lockless programming is notoriously hard and there is almost always one subtlety or other that is overlooked. I can't begin to count the number of times I've reluctantly had to admit defeat to its devious manipulations.


    kristjanvalur mannequin commented Mar 22, 2011

    Sbt: I re-read the code and while I still maintain that the evaluation in line 50 is meaningless, I agree that the worst that can happen is an incorrect timeout.
    It is probably harmless because this state is only encountered for timeout==0, and it is only incorrect in the face of lock contention, while a 0 timeout provides no guarantees between two threads.

    So, I suggest a change in the comments: Do not claim that the value is never an underestimate, and explain how falsely returning a WAIT_TIMEOUT is safe and only occurs when the lock is heavily contended.

    Sorry for being so nitpicky but having this stuff correct is crucial.


    sbt mannequin commented Mar 22, 2011

    krisvale wrote:
    ----
    So, I suggest a change in the comments: Do not claim that the value is never an underestimate, and explain how falsely returning a WAIT_TIMEOUT is safe and only occurs when the lock is heavily contended.

    Sorry for being so nitpicky but having this stuff correct is crucial.
    ----

    Nitpickiness is a necessity ;-)

    I've done a new version which replaces the "meaningless" racy test on line 50 with the simpler test

        else if (mutex->timeouts == 0)
    

    As with the old "meaningless" test, if the test succeeds then there must at least have been very recent contention for the lock, so timing out is reasonable.

    Also the new patch only considers rezeroing mutex->timeouts if we acquire the lock on the slow path.

    The patch contains more comments than before.



    python-dev mannequin commented Mar 30, 2011

    New changeset 9b12af6e9ea9 by Antoine Pitrou in branch '3.2':
    Issue bpo-11618: Fix the timeout logic in threading.Lock.acquire() under
    http://hg.python.org/cpython/rev/9b12af6e9ea9

    New changeset 9d658f000419 by Antoine Pitrou in branch 'default':
    Issue bpo-11618: Fix the timeout logic in threading.Lock.acquire() under
    http://hg.python.org/cpython/rev/9d658f000419


    pitrou commented Mar 30, 2011

    I have now committed the semaphore implementation, so as to fix the issue.
    Potential performance optimizations can still be discussed, of course (either here or in a new issue, I'm not sure).


    kristjanvalur mannequin commented Apr 19, 2012

    Here is a new patch.
    This uses critical sections and condition variables to avoid kernel mode switches for locks. Windows mutexes are expensive and for uncontended locks, this offers a big win.

    It also adds an internal set of critical section/condition variable structures that can be used on windows to do other such things without resorting to explicit kernel objects.

    This code works on XP and newer, since it relies on the "semaphore" kernel object being present. In addition, if compiled to target Vista or greater, it will use the built-in critical section primitives and the SRWLock objects (which are faster still than CriticalSection objects and more robust)
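
    In rough outline, the idea is the following (an illustrative sketch, not the patch's code; a real version would recompute the remaining timeout when re-waiting):

        /* Sketch of a lock built from a critical section plus a condition
           variable (native on Vista+, emulated via a semaphore below
           that).  Illustrative only. */
        typedef struct {
            CRITICAL_SECTION cs ;
            CONDITION_VARIABLE cv ;
            int locked ;
        } cv_lock_t ;

        /* Returns 1 on acquisition, 0 on timeout. */
        int cv_lock_acquire(cv_lock_t *lk, DWORD ms)
        {
            int got = 0 ;
            EnterCriticalSection(&lk->cs) ;  /* user mode when uncontended */
            while (lk->locked) {
                if (ms == 0 ||
                    !SleepConditionVariableCS(&lk->cv, &lk->cs, ms))
                    break ;  /* timeout; a real version would shrink 'ms' */
            }
            if (!lk->locked) {
                lk->locked = 1 ;
                got = 1 ;
            }
            LeaveCriticalSection(&lk->cs) ;
            return got ;
        }

        void cv_lock_release(cv_lock_t *lk)
        {
            EnterCriticalSection(&lk->cs) ;
            lk->locked = 0 ;
            LeaveCriticalSection(&lk->cs) ;
            WakeConditionVariable(&lk->cv) ;
        }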


    pitrou commented Apr 19, 2012

    This uses critical sections and condition variables to avoid kernel
    mode switches for locks. Windows mutexes are expensive and for
    uncontended locks, this offers a big win.

    Can you post some numbers?


    kristjanvalur mannequin commented Apr 20, 2012

    Two runs with standard locks:

    D:\pydev\hg\cpython2>pcbuild\amd64\python.exe -m timeit -s "import _thread; l = _thread.allocate_lock()" "l.acquire();l.release()"
    1000000 loops, best of 3: 0.746 usec per loop

    D:\pydev\hg\cpython2>pcbuild\amd64\python.exe -m timeit -s "import _thread; l = _thread.allocate_lock()" "l.acquire();l.release()"
    1000000 loops, best of 3: 0.749 usec per loop

    Two runs with CV locks (emulated):

    D:\pydev\hg\cpython2>pcbuild\amd64\python.exe -m timeit -s "import _thread; l = _thread.allocate_lock()" "l.acquire();l.release()"
    1000000 loops, best of 3: 0.278 usec per loop

    D:\pydev\hg\cpython2>pcbuild\amd64\python.exe -m timeit -s "import _thread; l = _thread.allocate_lock()" "l.acquire();l.release()"
    1000000 loops, best of 3: 0.279 usec per loop

    Two runs with CV locks targeted for Vista:

    D:\pydev\hg\cpython2>pcbuild\amd64\python.exe -m timeit -s "import _thread; l = _thread.allocate_lock()" "l.acquire();l.release()"
    1000000 loops, best of 3: 0.272 usec per loop

    D:\pydev\hg\cpython2>pcbuild\amd64\python.exe -m timeit -s "import _thread; l = _thread.allocate_lock()" "l.acquire();l.release()"
    1000000 loops, best of 3: 0.272 usec per loop

    You can see the big win from not doing kernel switches all the time, shedding 60% of the time.
    Once in user space, moving from CriticalSection objects to SRWLock objects is less beneficial, being overshadowed by Python overhead. Still, 2% overall is not to be frowned upon.


    kristjanvalur mannequin commented Apr 23, 2012

    Any thoughts? Is a 60% performance increase for the common case of acquiring an uncontested lock worth doing?

    Btw, for our console game I also opted for non-semaphore based locks in thread_pthread.h, because our console profilers were alarmed at all the kernel transitions caused by the GIL being ticked....


    pitrou commented Apr 23, 2012

    Is a 60% performance increase for the common case of acquiring an
    uncontested lock worth doing?

    Yes, I agree it is. However, the Vista-specific path seems uninteresting, if it's really 2% faster.

    our console profilers were alarmed at all the kernel transitions caused
    by the GIL being ticked....

    That's the old GIL. The new GIL uses a fixed timeout with a condition variable.


    kristjanvalur mannequin commented Apr 23, 2012

    The Vista-specific path is included there for completeness, if and when code moves to that platform, besides showing what the "emulated CV" is actually emulating.

    Also, I am aware of the old/new GIL, but our console game uses python 2.7 and the old GIL will be living on for many a good year, just like 2.7 will.

    But you make a good point. I had forgotten that the new GIL is actually implemented with emulated condition variables (current version contributed by myself :). I think a different patch is in order, where ceval_gil.h makes use of the platform specific "condition variable" services as declared in thread_platform.h. There is no point in duplicating code.


    kristjanvalur mannequin commented Apr 24, 2012

    Here is a new patch.
    I've factored out the NT condition variable code into thread_nt_cv.h which is now used by both thread_nt.h and ceval_gil.h


    kristjanvalur mannequin commented Apr 26, 2012

    So, what do you think, should this go in? Any qualms about the thread_nt_cv.h header?


    pitrou commented Apr 26, 2012

    So, what do you think, should this go in? Any qualms about the thread_nt_cv.h header?

    On the principle it's ok, but I'd like to do a review before it goes
    in :)


    loewis mannequin commented Apr 26, 2012

    -1. Choice of operating system must be a run-time decision, not a compile-time decision. We will have to support XP for quite some time.


    kristjanvalur mannequin commented Apr 27, 2012

    Antoine: of course, sorry for rushing you.

    Martin,
    This is an XP patch. The "vista" option is put in there as a compile time option, and disabled by hand. I'm not adding any apis that weren't already in use since the new gil (windows Semaphores).

    Incidentally, we should make sure that python defines NTDDI_VERSION to NTDDI_WINXP (0x05010000), either in the sources before including <windows.h> (tricky) or in the solution (probably in the .prefs files)

    This will ensure that we don't attempt to use non-existent features, unless we dynamically check for them.


    pitrou commented Apr 27, 2012

    This is an XP patch. The "vista" option is put in there as a compile
    time option, and disabled by hand. I'm not adding any apis that
    weren't already in use since the new gil (windows Semaphores).

    Martin means that you shouldn't use #ifdef's but runtime detection, so that we can provide a single installer for all Windows versions.
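
    For illustration, run-time detection typically looks something like this (a hypothetical sketch; the function and variable names are illustrative):

        /* Hypothetical sketch of run-time detection: resolve the
           Vista-only SRWLock entry point at startup and fall back to the
           emulated path when it is absent (e.g. on XP).  PVOID is used
           instead of PSRWLOCK so this compiles with XP-level headers. */
        typedef VOID (WINAPI *InitializeSRWLock_t)(PVOID) ;

        static InitializeSRWLock_t pInitializeSRWLock ;

        static void detect_native_locks(void)
        {
            HMODULE k32 = GetModuleHandleW(L"kernel32.dll") ;
            pInitializeSRWLock = (InitializeSRWLock_t)
                GetProcAddress(k32, "InitializeSRWLock") ;
            /* NULL here means pre-Vista: use the emulated primitives. */
        }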


    kristjanvalur mannequin commented Apr 27, 2012

    I understand what he meant, but that wasn't the intent of the patch. The patch is to use simulated critical sections using a semaphore, same as the new GIL implementation already does.

    If you want dynamic runtime detection, then this is a feature request :)
    I'm not sure we do it elsewhere in Python, and the benefit is doubtful...

    briancurtin commented:

    We do the runtime checks for a few things in winreg as well as the os.symlink implementation and I think a few other supplemental functions for symlinking.


    kristjanvalur mannequin commented Apr 27, 2012

    Ok, but the patch as provided would become more complicated. For general consumption, the primitives would need to become dynamically allocated structures, and so on. I'm not sure that it's worth the effort, but I can have a look.

    (I thought the patch was radical enough, tbh.)


    loewis mannequin commented Apr 30, 2012

    As it stands, the patch is pointless, and can safely be rejected. We will just not have defined NTDDI_VERSION at NTDDI_VISTA for any foreseeable future, so all the Vista-specific code can be eliminated from the patch.

    Python has been using dynamic checking for APIs "forever". In 2.5, there was a check for presence of GetFileAttributesExA; in 2.4, there was a check for CryptAcquireContextA.


    kristjanvalur mannequin commented Apr 30, 2012

    Martin, I think you misunderstand completely. the patch is _not_ about using the VISTA features. It is about not using a "mutex" for threading.lock.

    Currently, the locks in python use Mutex objects, and a WaitForSingleObject() system call to acquire them.
    This patch replaces these locks with user-level objects (critical sections and condition variables). This drops the time needed for an uncontended acquire/release by 60% since there is no kernel transition and scheduling.

    The patch comes in two flavors. The current version _emulates_ condition variables on Windows by the same mechanism as I introduced for the new GIL, that is, using a combination of "critical section" objects and a construct made of a "semaphore" and a counter.

    Also provided, for those that want, and for future reference, is a version that uses native system objects (windows condition variables and SRWLocks). I can drop them from the patch to make you happy, but they are dormant and nicely show how conditional compilation can switch in more modern features for a different target architecture.




    kristjanvalur mannequin commented May 2, 2012

    Again, to clarify because this seems to have been put to sleep by Martin's unfortunate dismissal. A recap of the patch:

    1. Extract the Condition Variable functions on windows out of ceval_gil.h and into thread_nt_cv.h, so that they can be used in more places.
    2. Implement the "Lock" primitive in Python using CriticalSection and condition variables, rather than windows Mutexes. This gives a large performance boost on uncontended locks.
    3. Provide an alternate implementation of the Condition Variable for a build target of Vista/Server 2008, using the native condition variable objects available for that platform.

    I think Martin got distracted by 3) and thought that was the only thing this patch is about. The important part is 1) and 2) whereas 3) is provided as a bonus (and to make sure that 1) is future-safe)

    So, can we get this reviewed please?


    pitrou commented May 4, 2012

    I agree with Martin that it's not a good idea to add "dead" code.

    Furthermore, your patch has:

    +#ifndef _PY_EMULATED_WIN_CV
    +#define _PY_EMULATED_WIN_CV 0 /* use emulated condition variables */
    +#endif
    +
    +#if !defined NTDDI_VISTA || NTDDI_VERSION < NTDDI_VISTA
    +#undef _PY_EMULATED_WIN_CV
    +#define _PY_EMULATED_WIN_CV 1
    +#endif

    so am I right to understand that when compiled under Vista or later, it will produce an XP-incompatible binary?


    kristjanvalur mannequin commented Jun 7, 2012

    Possibly the patch had a mixup
    I'm going to rework it a bit and post as a separate issue.

    @kristjanvalur kristjanvalur mannequin closed this as completed Jun 7, 2012
    @ezio-melotti ezio-melotti transferred this issue from another repository Apr 10, 2022