
classification
Title: asyncio.Lock.acquire() does not always yield
Type: behavior
Stage:
Components: asyncio
Versions: Python 3.6

process
Status: closed
Resolution:
Dependencies:
Superseder:
Assigned To:
Nosy List: anonymous2017, gvanrossum, yselivanov
Priority: normal
Keywords:

Created on 2017-01-17 23:26 by anonymous2017, last changed 2022-04-11 14:58 by admin. This issue is now closed.

Messages (4)
msg285688 - Author: anonymous2017 (anonymous2017) Date: 2017-01-17 23:26
Details here: http://stackoverflow.com/questions/41708609/unfair-scheduling-bad-lock-optimization-in-asyncio-event-loop

Essentially, there should be a `yield` before this line; otherwise a coroutine that only acquires and releases a lock will never yield to other coroutines.
msg285689 - Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2017-01-17 23:27
Locks are not meant to have predictable behavior like that. They are meant to protect against concurrent access. If you want fairness you have to look elsewhere.
msg285690 - Author: anonymous2017 (anonymous2017) Date: 2017-01-17 23:40
I may have mis-characterized the original report. Rephrasing just to make sure it is correctly addressed:

First, it is not about fairness. Sorry for the original characterization. I was trying to understand what was happening.

Second, the precise issue is whether a `yield from` can be relied upon for cooperative multitasking. Should other coroutines be prevented from running, or delayed, while one coroutine keeps running and acquiring and releasing a lock using `yield from`?

At first sight it seemed that `yield from <anything>` should give the scheduler a chance to consider other coroutines. But the lock code up to and including line #171 does not give control to the scheduler in one special case: https://github.com/python/cpython/blob/master/Lib/asyncio/locks.py#L171
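
For reference, that special case is the uncontended fast path. Paraphrased from the linked 3.6-era Lib/asyncio/locks.py (a sketch only; the exact code may differ slightly):

@coroutine
def acquire(self):
    # Fast path: the lock is free and nobody (non-cancelled) is waiting, so
    # take it and return without ever suspending -- the caller's
    # `yield from lock.acquire()` therefore never reaches the event loop.
    if not self._locked and all(w.cancelled() for w in self._waiters):
        self._locked = True
        return True
    # Slow path (omitted here): append a future to self._waiters and
    # `yield from` it, which does suspend until release() wakes it up.
    ...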

Code Sample 1 (shows b() being delayed until a() is fully complete, despite a() executing `yield from` many times)
=============
import asyncio

lock = asyncio.Lock()

def a():
    yield from lock.acquire()
    for i in range(10):
        print('a: ' + str(i))
        if i % 2 == 0:
            # Release and immediately re-acquire the lock; the re-acquire
            # never actually suspends, so b() gets no chance to run.
            lock.release()
            yield from lock.acquire()
    lock.release()

async def b():
    print('hello')

# Observed: all ten 'a: ...' lines print before 'hello', i.e. b() only runs
# once a() has completed.
asyncio.get_event_loop().run_until_complete(asyncio.gather(a(), b()))

print('done')

Code Sample 2
=============
(shows interleaving when an additional initial `yield from asyncio.sleep(0)` is inserted; removing that initial sleep removes the interleaving)

import asyncio

lock = asyncio.Lock()

def a():
    yield from lock.acquire()
    # This one explicit sleep(0) suspends a(), letting b() start and register
    # itself as a waiter on the lock; from then on the acquires below take the
    # slow path and genuinely suspend, so the two coroutines interleave.
    yield from asyncio.sleep(0)
    for i in range(10):
        print('a: ' + str(i))
        if i % 2 == 0:
            lock.release()
            yield from lock.acquire()
    lock.release()

def b():
    yield from lock.acquire()
    yield from asyncio.sleep(0)
    for i in range(10):
        print('b: ' + str(i))
        if i % 2 == 0:
            lock.release()
            yield from lock.acquire()
    lock.release()

asyncio.get_event_loop().run_until_complete(asyncio.gather(a(), b()))

print('done')


Thank you for your kind consideration.
msg285691 - Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2017-01-17 23:59
No, `yield from` (or, in Python 3.5+, `await`) is not meant to bounce back to the scheduler. If the target is a coroutine that itself doesn't yield, it is a *feature* that the scheduler is bypassed.

If you want to force a trip through the scheduler, use `asyncio.sleep(0)`.
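
For illustration, a minimal self-contained sketch of that advice in the style of the samples above (the worker() helper is made up, not from the original report; written for the 3.6-era generator-based API):

import asyncio

lock = asyncio.Lock()

@asyncio.coroutine
def worker(name):
    yield from lock.acquire()
    for i in range(4):
        print(name + ': ' + str(i))
        lock.release()
        # The zero-length sleep always suspends, forcing a trip through the
        # scheduler so the other worker can run before this one re-acquires.
        yield from asyncio.sleep(0)
        yield from lock.acquire()
    lock.release()

asyncio.get_event_loop().run_until_complete(
    asyncio.gather(worker('a'), worker('b')))

print('done')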
History
Date                 User           Action  Args
2022-04-11 14:58:42  admin          set     github: 73489
2017-01-17 23:59:48  gvanrossum     set     status: open -> closed; messages: + msg285691
2017-01-17 23:40:54  anonymous2017  set     messages: + msg285690
2017-01-17 23:27:54  gvanrossum     set     messages: + msg285689
2017-01-17 23:26:15  anonymous2017  create