
classification
Title: asyncio: nested event loop
Type: enhancement
Components: asyncio
process
Status: closed
Resolution: wont fix
Nosy List: Rokas K. (rku), crusaderky, davidbrochart, djarb, douglas-raillard-arm, jab, jcea, kumaraditya, martin.panter, njs, pmpp, rob.moore, yselivanov, zzzeek
Priority: normal
Keywords: patch

Created on 2014-08-20 18:03 by djarb, last changed 2022-04-11 14:58 by admin. This issue is now closed.

Files
File name     Uploaded                   Description
nested.py     djarb, 2014-08-20 18:03    Decorator allowing a coroutine to run in a nested event loop
nested.patch  djarb, 2014-08-28 19:10
Messages (23)
msg225578 - (view) Author: Daniel Arbuckle (djarb) * Date: 2014-08-20 18:03
It's occasionally necessary to invoke the asyncio event loop from code that was itself invoked within (although usually not directly by) the event loop.

For example, imagine you are writing a class that serves as a local proxy for a remote data structure. You cannot make the __contains__ method of that class into a coroutine, because Python automatically converts the return value into a boolean. However, __contains__ must invoke coroutines in order to communicate over the network, and it must be invokable from within a coroutine to be at all useful.

If the event loop _run_once method were reentrant, addressing this problem would be simple. That primitive could be used to create a loop_until_complete function, which could be applied to the I/O tasks that __contains__ needs to invoke.

So, making _run_once reentrant is one way of addressing this request.

Alternatively, I've attached a decorator that sets aside some of the state of _run_once, runs a coroutine to completion in a nested event loop, restores the saved state, and returns the coroutine's result. This is merely a proof of concept, but it does work, at least in my experiments.
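To make the scenario concrete, here is a minimal sketch of the kind of class being described (RemoteSet and loop_until_complete are hypothetical illustrations of the idea, not part of the attached nested.py):

    import asyncio

    def loop_until_complete(coro):
        # the hypothetical reentrant primitive described above: run `coro`
        # to completion even though the event loop is already running
        raise NotImplementedError("requires a reentrant _run_once")

    class RemoteSet:
        """Local proxy for a remote collection (illustration only)."""

        async def _remote_contains(self, item):
            await asyncio.sleep(0)   # stand-in for the network round trip
            return item == "known-key"

        def __contains__(self, item):
            # must return a plain bool, so it cannot be a coroutine, yet it
            # is usually reached from code already running inside the loop
            return loop_until_complete(self._remote_contains(item))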
msg225883 - (view) Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2014-08-25 18:04
While I understand your problem, I really do not want to enable recursive event loops. While they are popular in some event systems (IIRC libevent relies heavily on the concept), I have heard some strong objections from other parts, and I am trying to keep the basic event loop functionality limited to encourage interoperability with other event loop systems (e.g. Tornado, Twisted).

In my own experience, the very programming technique that you are proposing has caused some very hard-to-debug problems that appeared as infrequent and hard-to-predict stack overflows.

I understand this will make your code slightly less elegant in some cases, but I think in the end it is for the best if you are required to define an explicit method (declared to be a coroutine) for membership testing of a remote object.  The explicit "yield from" will help the readers of your code understand that global state may change (due to other callbacks running while you are blocked), and potentially help a static analyzer find bugs in your code before they take down your production systems.
msg226037 - (view) Author: Daniel Arbuckle (djarb) * Date: 2014-08-28 19:10
All right.

However, for anyone who's interested, here is a patch that enables nested event loops in asyncio, along with the accompanying unit tests.
msg298388 - (view) Author: Rokas K. (rku) (Rokas K. (rku)) Date: 2017-07-15 08:21
I understand the rationale for the rejection of this issue, but I beg you to reconsider.

Unlike with traditional coroutines (Windows fibers, or setjmp/longjmp with stack switching), we cannot yield from any point of execution. The full async/await chain must be preserved. This basically divides code into two islands: async and non-async. And there seems to be no way to schedule an async call from non-async code and get a response. While the suggestion to make a custom `async def contains()` call is a valid one, we cannot always do that. Consider the case where we have to do some networking calls in a function that is invoked by a non-async library. Naturally it would be a simple non-awaited call, from which we cannot call a coroutine and get a response. And since it is the library calling into our code, we cannot easily change it. The change might even be completely unsuitable for the library in question.
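As a runnable illustration of the "two islands" constraint described above (the library and callback names here are made up for the example):

    import asyncio

    async def fetch(url):
        await asyncio.sleep(0)                  # the async island
        return f"response for {url}"

    def legacy_dispatch(callback):
        # stand-in for a non-async library that calls back into our code
        return callback("https://example.com")

    def on_event(url):
        # the non-async island: we cannot await here, and re-entering the
        # already-running loop is rejected by asyncio
        coro = fetch(url)
        try:
            return asyncio.get_event_loop().run_until_complete(coro)
        except RuntimeError as exc:
            coro.close()
            return f"blocked: {exc}"

    async def main():
        # prints "blocked: This event loop is already running"
        print(legacy_dispatch(on_event))

    asyncio.run(main())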

I see two solutions to this problem (if I am missing something, please point it out):

1. Reentrant loops as suggested in this issue.
2. Allow awaited calls from non-coroutines provided execution is invoked from a coroutine somewhere up the call stack.

The first one is certainly easier to implement.
msg373145 - (view) Author: mike bayer (zzzeek) * Date: 2020-07-06 17:05
hey there,

I seem to have two cents to offer so here it is.    An obscure issue in the Python bug tracker is probably not the right place for this so consider this as an early draft of something that maybe I'll talk about more elsewhere.

> This basically divides code into two islands - async and non-async

Yes, this is the problem, and at the bottom of this apparently somewhat ranty comment is a solution, and the good news is that it does not require Python or asyncio to be modified.  My concern is kind of around how it is that everyone has been OK with the current state of affairs for so long, why "asyncio is fundamentally incompatible with library X" is considered acceptable, and also how easy it was to find a workaround; this is not something I would have expected to be the one to come up with.  Kind of like you don't expect to invent Velcro or windshield wipers.

asyncio's approach is what those of us in the library/framework community call "explicit async": you have to mark functions that will be doing IO, and the points at which IO occurs must also be marked.  Long ago it was via callback functions, then asyncio turned it into decorators and yields, and finally PEP 492 turned it into async/await, and it is very nicely done.  It is of course a feature of asyncio that writing out async/await means your code can in theory be clearer as to where IO occurs and all that, and while I don't totally buy that myself, I'm of course in favor of that style of coding being available; it definitely has its own kind of self-satisfaction built in when you do it.  That's all great.

But as those of us in the library/framework community also know, asyncio's approach essentially means that libraries like Flask, Django, my own SQLAlchemy, etc. are all automatically "non-workable" with the asyncio approach; while these libraries can certainly have asyncio endpoints added to them, the task as designed is not that simple, since to go from an asyncio endpoint all the way through library code that doesn't care about async and then down into a networking library that again has asyncio endpoints, the publishing of "async" and the "await" or yield approach must be wired all the way through every function and method.  This is all despite the fact that when you're not at the endpoints, the points at which IO occurs are fully predictable, such that libraries like gevent don't need you to write it out.   So we are told that libraries have to have full end-to-end rewrites of all their code to work this way, or otherwise maintain two codebases, or something like that.

The side effect of this is that a whole bunch of library and framework authors now get to create all new libraries and frameworks, which do exactly the same thing as all the existing libraries and frameworks, except they sprinkle the "async/await" keywords throughout middle tiers as required.  Vague claims of "framework X is faster because it's async" appear, impossible to confirm as it is unknown how much of their performance gains come from the "async" aspect and how much of it is that they happened to rewrite a new framework from scratch in a completely different way (hint: it's the latter).

Or in other cases, as if to make it obvious how much the "async/await" keywords come down to being more or less boilerplate for the "middle" parts of libraries, the urllib3 project wrote the "unasync" project [1] so that they can simply maintain two separate codebases: one that has "async/await", and the other with those keywords simply search-and-replaced out.

SQLAlchemy has not been "replaced" by this trend as asyncio database libraries have not really taken off in Python, and there are very few actual async drivers.   Some folks have written SQLAlchemy-async libraries that use SQLAlchemy's expression system while they have done the tedious, redundant and impossible-to-maintain work of replicating enough of SQLAlchemy's execution internals such that a modest "sqlalchemy-like" experience with asyncio can be reproduced. But these libraries are closed out from all of the fixes and improvements that occur to SQLAlchemy itself, as well as that these systems likely target a smaller subset of SQLAlchemy's behaviors and features in any case.    They certainly can't get the ORM working as the ORM runs lots of SQL executions internally, all of which would have to propagate their "asyncness" outwards throughout hundreds of functions.

The asyncpg project, one of the few asyncio database drivers that exists, notes in its FAQ "asyncpg uses asynchronous execution model and API, which is fundamentally incompatible with SQLAlchemy" [2], yet we know this is not true  because SQLAlchemy works just fine with gevent and eventlet, with no architectural changes at all.  Using libraries like SQLAlchemy or Django with a non-blocking IO, event-based model is commonplace.   It's the "explicit" part of it that is hard, which is because of how asyncio is designed, without any mediation for code that doesn't publish "async / await" keywords in the middle.

So I finally just sat down to figure out how to use the underlying greenlet library (which we all know as the portable version of "Stackless Python") to bridge the gap between asyncio and blocking-style code, it's about 30 lines and I have SQLAlchemy working with an async front-end to asyncpg DBAPI as can be seen at [3] based on the proof of concept at [4].  I'm actually running the full py.test suite all inside the asyncio event loop and running asyncpg through SQLAlchemy's whole battery of thousands of tests, all of them written in purely blocking style, and there's not any need to add "async / await / yield / etc" anywhere except the very endpoints, that is, where the top function is called, and then down where we call into asyncpg directly, using a function called await_() that works just like the "await" keyword.  Just no "async" function declaration.

A day later, someone took the same idea and got Flask to work in an asyncio event loop at [5].  The general idea of using greenlet in this way is also present at [6], so I won't be patenting this idea today as oremanj can claim prior art.

Using greenlet, there is no need to break out of the asyncio event loop at all, nor does it change the control flow of parallel coroutines within the loop. It uses greenlet's "switch", quite minimally, to bridge the gap between code that does not push out an "async/await" yield and code that does.   There are no threadpools, no alternate event loops, no monkeypatching, just a few greenlet.switch() calls in the right spots.   There is a slight performance decrease of about 15%, but in theory one would only be using asyncio if their application is expected to be IO-bound in any case (which folks that know me know is another assertion I frequently doubt).
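For reference, a minimal sketch of the bridging recipe being described, roughly following the proof of concept at [4] (await_ is the function named above, greenlet_spawn is its companion entry point; exception propagation and safety checks are omitted here):

    import asyncio
    import greenlet

    class _AsyncIoGreenlet(greenlet.greenlet):
        def __init__(self, fn, driver):
            greenlet.greenlet.__init__(self, fn, driver)
            self.driver = driver    # the greenlet running the event loop

    def await_(awaitable):
        # called from blocking-style code running inside greenlet_spawn();
        # hands the awaitable to the spawning coroutine and pauses here
        return greenlet.getcurrent().driver.switch(awaitable)

    async def greenlet_spawn(fn, *args, **kwargs):
        # run blocking-style fn in a child greenlet; every await_() inside
        # it is awaited here, in the ordinary asyncio event loop
        context = _AsyncIoGreenlet(fn, greenlet.getcurrent())
        result = context.switch(*args, **kwargs)
        while not context.dead:
            value = await result             # awaitable handed out by await_()
            result = context.switch(value)   # send the value back in
        return result

    async def _fake_fetch(stmt):
        await asyncio.sleep(0)               # stand-in for an async DB driver call
        return [stmt]

    def blocking_style(stmt):
        # "library internals" written in blocking style, no async/await
        return await_(_fake_fetch(stmt))

    print(asyncio.run(greenlet_spawn(blocking_style, "select 1")))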

So to sum up, last week, libraries like Flask and SQLAlchemy were "fundamentally incompatible" with asyncio, and this week they are not.    What's confusing me is that I'm not that smart and this is something all of the affected libraries should have been doing years ago, and really, while I know this is not going to happen, this should be *part of asyncio itself* or at least a very standard approach so that nobody has to assume asyncio means "rewrite all your library code".

To add an extra bonus, you can use this greenlet approach to have blocking-style functions right in the middle of your otherwise asyncio application.  Which means this also is a potential solution to the "lazy-loading" problem.  You have an asyncio app that does lots of asyncio to talk to microservices, but some functions are doing database work and they really would like to just work in a transaction, load some objects and access their attributes without worrying that a SQL statement can't be emitted.  This approach makes that possible as well.  ORM lazy loading with the asyncpg driver: [7]  .     Indeed, if you have a PostgreSQL SQLAlchemy application already written in blocking style, you can use this new extension and drop the entire application into the event loop and use the asyncpg driver, not too unlike using gevent except nothing is monkeypatched.

The recipe is simple and so far appears to be very effective.   Using greenlet to manipulate the stack is of course "spooky" and I would assume Python devs may propose that this would lead to hard-to-debug conditions.   I've used gevent and eventlet for many years and while they do produce some new issues, most of them relate to the fact that they use monkeypatching of existing modules and particularly around low level network drivers like pymysql.  The actual stack moving around within business logic doesn't seem to produce any difficult new issues.   Using plain asyncio has a lot of novel and confusing failure modes too.    Using the little bit of "spookyness" of greenlet IMO is a lot less work than rewriting SQLAlchemy, Django ORM, Flask, urllib3, etc. from scratch and maintaining two codebases though.


[1] https://pypi.org/project/unasync/

[2] https://magicstack.github.io/asyncpg/current/faq.html#can-i-use-asyncpg-with-sqlalchemy-orm

[3] https://gerrit.sqlalchemy.org/c/sqlalchemy/sqlalchemy/+/2071

[4] https://gist.github.com/zzzeek/4e89ce6226826e7a8df13e1b573ad354

[5] https://twitter.com/miguelgrinberg/status/1279894131976921088

[6] https://github.com/oremanj/greenback

[7] https://gerrit.sqlalchemy.org/plugins/gitiles/sqlalchemy/sqlalchemy/+/refs/changes/71/2071/10/examples/asyncio/greenlet_orm.py
msg373177 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2020-07-06 21:33
Thanks for posting this, Mike.

> Vague claims of "framework X is faster because it's async" appear, impossible to confirm as it is unknown how much of their performance gains come from the "async" aspect and how much of it is that they happened to rewrite a new framework from scratch in a completely different way (hint: it's the latter).

These kinds of claims are not specific to async vs. sync. They are all over the place for any two pieces of comparable technology. While novice users might base their technology choice purely on such benchmarks, it's less of an issue for startups/tech companies.

That said, I agree with most of your points so far.

> The asyncpg project, one of the few asyncio database drivers that exists, notes in its FAQ "asyncpg uses asynchronous execution model and API, which is fundamentally incompatible with SQLAlchemy" [2], yet we know this is not true  because SQLAlchemy works just fine with gevent and eventlet, with no architectural changes at all.

But it is true. Making asynchronous network requests in asyncio requires async/await or callbacks, and it's not possible to do that, say, from __getattr__ (you mention this yourself).  This is what that particular comment is about, nothing more. Using gevent and eventlet as examples in this particular context isn't helping you. Apologies for nitpicking, I know it's not the point of this discussion.

> A day later, someone took the same idea and got Flask to work in an asyncio event loop at [5].  The general idea of using greenlet in this way is also present at [6], so I won't be patenting this idea today as oremanj can claim prior art.

Yes, this approach definitely works and I even did that in production myself a few years ago with https://github.com/1st1/greenio (it's terribly outdated now).

> The recipe is simple and so far appears to be very effective.

This recipe was one of the reasons why I added the `loop.set_task_factory` method to the spec, so that it's possible to implement this in an *existing* event loop like uvloop. But ultimately asyncio is flexible enough to let users use their own event loop, which can be compatible with greenlets among other improvements.

Ultimately, asyncio will probably never ship with greenlets integration enabled by default, but we should definitely make it possible (if there are some limitations now).  It doesn't seem to me that nested event loops are needed for this, right?
msg373183 - (view) Author: mike bayer (zzzeek) * Date: 2020-07-06 21:58
>  This recipe was one of the reasons why I added `loop.set_task_factory` method to the spec, so that it's possible to implement this in an *existing* event loop like uvloop. But ultimately asyncio is flexible enough to let users use their own event loop which can be compatible with greenlets among other improvements.

Right; when I set out to look at this, I knew that my users would want to use the regular event loop in asyncio or whatever system they are using.


> Ultimately, asyncio will probably never ship with greenlets integration enabled by default, but we should definitely make it possible (if there are some limitations now).  It doesn't seem to me that nested event loops are needed for this, right?

So right, the approach I came up with does not need nested event loops and it's my vague understanding that nesting event loops is more difficult to debug, because you have these two separate loops handing off to each other.    

What is most striking about my recipe is that it never even leaves the default event loop.  In my first iteration, when I was trying to get it working, I had a separate thread going, as it seemed intuitive that "of course you need a thread to bridge async and sync code", but then I erased the "Thread()" part around it and it just worked anyway.   It's simple enough that shipping this as a third-party library is almost not even worth it; you can just drop this in wherever.   If different libraries each had their own drop-in of this, they would even work together.   greenlet is really like totally cheating.

The philosophical thing here is, usually in my many Twitter debates on the subject, folks get into how they like the explicit async and await keywords and they like that IO is explicit.   So I'm seeking to keep these people happy and give them "async def execute(sql)", and use an async DB driver, but then the library that these folks are using is internally not actually explicit IO.  But they don't have to see that; I've done all the work of the "implicit IO" stuff, and in a library it's not the same kind of problem anyway.     I think this is a really critical technique to have so that libraries that mediate between a user-facing facade and TCP-based backends no longer have to make a hard choice about whether they are to support sync vs. async (or async with an optional sync facade around it).
msg373186 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2020-07-06 22:13
> I think this is a really critical technique to have so that libraries that mediate between a user-facing facade and TCP based backends no longer have to make a hard choice about if they are to support sync vs. async (or async with an optional sync facade around it).

If this works for such a big and elaborate framework as SQLA, we can definitely highlight this as a valid approach and even add a link to a blog post from the docs. We'll need to add an asyncio specific FAQ page for that or something similar.

Another approach, which would probably be a nonstarter for SQLA, is to use async/await for literally everything internally, and provide a tiny synchronous facade on top.  The funny thing is you don't even need an event loop for that, just a basic understanding of how coroutines work internally.  I used this to create the edgedb-python package, which has both sync and async first-class support with one code base.  Sync is even faster there in simple throughput benchmarks (as expected).
msg373192 - (view) Author: mike bayer (zzzeek) * Date: 2020-07-06 23:28
Yes, so if you have async/await all internal, are you saying you can make that work for synchronous code *without* running the event loop?  That is, some kind of container that just does the right thing?  My concern with that would still be performance.    When asyncio was based on yield and exception throws, that was a lot of overhead to add to functions, and that was what my performance testing some years back showed.   With async/await I'm sure things have been optimized, but in general when I have function a() -> b() -> c(), I am trying to iron as much Python overhead as I possibly can out of that, and I'd be concerned that the machinery to work through async/await would add latency.   Additionally, if it were async/await internally but I then needed to access the majority of Python DBAPIs, which are sync, I'd need a thread pool anyway, right?  Which is also another big barrier to jump over.

It seems you were involved with urllib3's decision to use a code rewriter rather than a runtime approach, based on the discussion at https://github.com/urllib3/urllib3/issues/1323 , but it's not clear whether Python 2 compatibility was the only factor or whether the concern about "writing a giant shim" was also one.
msg373240 - (view) Author: Nathaniel Smith (njs) * (Python committer) Date: 2020-07-07 19:20
Yeah, writing a trivial "event loop" to drive actually-synchronous code is easy. Try it out:

-----

async def f():
    print("hi from f()")
    await g()

async def g():
    print("hi from g()")

# This is our event loop:
coro = f()
try:
    coro.send(None)
except StopIteration:
    pass

-----

I guess there's technically some overhead, but it's tiny.

I think dropping 'await' syntax has two major downsides:

Downside 1: 'await' shows you where context switches can happen: As we know, writing correct thread-safe code is mind-bendingly hard, because data can change underneath your feet at any moment. With async/await, things are much easier to reason about, because any span of code that doesn't contain an 'await' is automatically atomic:

---
async def task1():
    # These two assignments happen atomically, so it's impossible for
    # another task to see 'someobj' in an inconsistent state.
    someobj.a = 1
    someobj.b = 2
---

This applies to all basic operations like __getitem__ and __setitem__, arithmetic, etc. -- in the async/await world, any combination of these is automatically atomic.

With greenlets OTOH, it becomes possible for another task to observe someobj.a == 1 without someobj.b == 2, in case someobj.__setattr__ internally invoked an await_(). Any operation can potentially invoke a context switch. So debugging greenlets code is roughly as hard as debugging full-on multithreaded code, and much harder than debugging async/await code.
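A hypothetical illustration of that point (RemoteObj is made up; assume await_() is the greenlet-based bridge under discussion):

    class RemoteObj:
        def __setattr__(self, name, value):
            # imagine this persists the attribute over the network via
            # something like await_(send_update(self, name, value)); that
            # call suspends the current task, so other tasks get to run
            # before __setattr__ returns
            super().__setattr__(name, value)

    def task1(someobj):
        someobj.a = 1   # another task may observe a == 1 here ...
        someobj.b = 2   # ... before b is set, with no 'await' in sight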

This first downside has been widely discussed (e.g. Glyph's "unyielding" blog post), but I think the other downside is more important:

Downside 2: 'await' shows where cancellation can happen: Synchronous libraries don't have a concept of cancellation. OTOH async libraries *are* expected to handle cancellation cleanly and correctly. This is *not* trivial. With your sqlalchemy+greenlets code, you've introduced probably hundreds of extra unwinding paths that you've never tested or probably even thought about. How confident are you that they all unwind correctly (e.g. without corrupting sqlalchemy's internal state)? How do you plan to even find them, given that you can't see the cancellation points in your code? How can your users tell which operations could raise a cancelled exception?

AFAICT you can't reasonably build systems that handle cancellation correctly without some explicit mechanism to track where the cancellation can happen. There's a ton of prior art here and you see this conclusion over and over.

tl;dr: I think switching from async/await -> greenlets would make it much easier to write programs that are 90% correct, and much harder to write programs that are 100% correct. That might be a good tradeoff in some situations, but it's a lot more complicated than it seems.
msg373241 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2020-07-07 19:37
> Yeah, writing a trivial "event loop" to drive actually-synchronous code is easy. Try it out:

This is exactly the approach I used in edgedb-python.

> I guess there's technically some overhead, but it's tiny.

Correct, the overhead isn't even detectable in microbenchmarks. In most async programs, regular function calls dominate awaits by a few orders of magnitude.

> I think dropping 'await' syntax has two major downsides:

For the extra context, in the case of using this approach for something like edgedb-python these downsides don't really apply, because we're adapting an async/await implementation to be sync. The async/await code can handle cancellation etc., whereas the sync code only needs to support the general protocol parsing flow. FWIW I don't think it would be possible to apply my approach to SQLA without a very invasive rewrite, which probably isn't worth it.

> tl;dr: I think switching from async/await -> greenlets would make it much easier to write programs that are 90% correct, and much harder to write programs that are 100% correct. That might be a good tradeoff in some situations, but it's a lot more complicated than it seems.

Yeah, this sums up my opinion on this topic.

Also, having spent a couple of years writing and debugging big and small greenlet-heavy code bases I wouldn't want to touch them ever again. Your mileage may vary, Mike.
msg373244 - (view) Author: mike bayer (zzzeek) * Date: 2020-07-07 20:12
> With greenlets OTOH, it becomes possible for another task to observe someobj.a == 1 without someobj.b == 2, in case someobj.__setattr__ internally invoked an await_(). Any operation can potentially invoke a context switch. So debugging greenlets code is roughly as hard as debugging full-on multithreaded code, and much harder than debugging async/await code.

I would invite you to look more closely at my approach.   The situation you describe above applies to a library like gevent, where IO means a context switch that can go anywhere.  My small recipe never breaks out of the asyncio event loop, and it only context switches within the scope of a single coroutine, not to any arbitrary coroutine.   So I don't think the above issue applies.

Additionally, we are here talking about *libraries* that are independently developed and tested, distinct from end-user code.    If there's a bug in SQLAlchemy, the end user isn't the person debugging it.   Arguments over "is async or sync easier to debug" are IMO pretty subjective, and at this point they are not relevant to what sync-based libraries should be doing.
msg373245 - (view) Author: mike bayer (zzzeek) * Date: 2020-07-07 20:26
> With greenlets OTOH, it becomes possible for another task to observe someobj.a == 1 without someobj.b == 2, in case someobj.__setattr__ internally invoked an await_().

Let me try this one more time.    Basically, if someone wrote this:

async def do_thing():
    someobj.a = 1
    await do_io_setattr(someobj, "b", 2)

then in the async approach, you can again say, assuming "someobj" is global, that another task can observe "someobj.a == 1" without "someobj.b == 2".    I suppose you are making the argument that because there's an "await" keyword there, now everything is OK because the reader of the code knows there's a context switch.

Whether or not one buys that, the point of my approach is that SQLAlchemy itself *will* publish async methods.  End-user code *will not* ever context switch to another task without explicitly using an await.  That SQLAlchemy internally is not using this coding style, whether or not that leads to new kinds of bugs (there are new kinds of bugs no matter what kind of code a library uses), I don't think hurts the user community.  The community is hurting *A LOT* right now because asyncio is intentionally non-compatible with the traditional blocking approach, which is not only still prevalent, it's one that a lot of us think is *easier* to work with.
msg373247 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2020-07-07 20:43
> The community is hurting *A LOT* right now because asyncio is intentionally non-compatible with the traditional blocking approach that is not only still prevalent it's one that a lot of us think is *easier* to work with.

Mike, I'm super happy to have you here and I encourage you to propose feature requests etc. That said, please don't use arguments like this here. Everyone has their own point of view and I, for example, haven't seen the "A LOT of community hurt" you're describing. I'm not implying that what you're saying is wrong, or that asyncio is perfect; the point is that it's just very subjective. The bug tracker is not the medium for these kinds of remarks.

> That SQLAlchemy internally is not using this coding style, whether or not that leads to new kinds of bugs, there are new kinds of bugs no matter what kind of code a library uses, I don't think this hurts the user community.

You're free to use whatever approach you want in SQLAlchemy. We're here to share our advice and perspective (if we have any) and/or to discuss concrete proposals for API improvements or changes.
msg373249 - (view) Author: Nathaniel Smith (njs) * (Python committer) Date: 2020-07-07 21:15
> Whether or not one buys that, the point of my approach is that SQLAlchemy itself *will* publish async methods.  End user code *will not* ever context switch to another task without them explicitly calling using an await.

Oh, I thought the primary problem for SQLAlchemy supporting async is that the ORM needs to do IO from inside __getattr__ methods. So I assumed that the reason you were so excited about greenlets was that it would let you use await_() from inside those __getattr__ calls, which would involve exposing your use of greenlets as part of your public API.

If you're just talking about using greenlets internally and then writing both sync and async shims to be your public API, then obviously that reduces the risks. Maybe greenlets will cause you problems, maybe not, but either way you know what you're getting into and the decision only affects you :-). But, if that's all you're using them for, then I'm not sure that they have a significant advantage over the edgedb-style synchronous wrapper or the unasync-style automatically generated sync code.
msg373270 - (view) Author: mike bayer (zzzeek) * Date: 2020-07-08 03:44
> Oh, I thought the primary problem for SQLAlchemy supporting async is that the ORM needs to do IO from inside __getattr__ methods. So I assumed that the reason you were so excited about greenlets was that it would let you use await_() from inside those __getattr__ calls, which would involve exposing your use of greenlets as part of your public API.


The primary problem is that people want to execute() a SQL statement using await, and then they want to use a non-blocking database driver (basically asyncpg, I'm not sure there are any others, maybe there's one for MySQL also) on the back end.    Tools like aiopg have provided partial SQLAlchemy-like front-ends to accomplish this, but they can't do ORM support: not because the ORM has lazy loading, but because even explicit operations like query.all() or session.flush() can sometimes require a lot of front-to-back database operations to complete, and it would be very involved to rewrite all that code using async/await.

Then there's the secondary problem of ORMs doing lazy loading, which is what you refer to as "IO inside __getattr__ methods".   SQLAlchemy is not actually as dependent on lazy loading as other ORMs, as we support a wide range of ways to "eagerly" load data up front.  With the SQLAlchemy 2.0-style ORM API that has a clear spot for "await" to occur, they can call "await session.execute(select(SomeObject))" and get a whole traversable graph of things loaded up front.    We even have a loader called "raiseload" that is specifically anti-lazy-loading; it's a loader that raises an error if you try to access something that wasn't explicitly loaded already.  So for a lot of cases we are already there.

But then, towards your example of "something.b = x", or more commonly in ORMs a get operation like "something.b" emitting SQL, the extension I'm building will very likely include some kind of feature so that they can do this with an explicit call.  At the moment, with the preliminary code that's in there, this might look like:

   await greenlet_spawn(getattr, something, "b")

Not very pretty at the moment, but that's the general idea.

But the thing is, greenlet_spawn() can naturally apply to anything.  So it remains to be seen both how I would want to present this notion, as well as if people are going to be interested in it or not, but as a totally extra thing beyond the "await session.execute()" API that is the main thing, someone could do something like this:

   await greenlet_spawn(my_business_orm_method)

and then in "my_business_orm_method()", all the blocking-style ORM things that async advocates warn against could be happening in there.     I'm certainly not going to tell people they have to be doing that, but I don't think I should discourage it either, because if the above business method is written "reasonably" (see next paragraph), there really is no problem introduced by implicit IO.

By "written reasonably" I'm referring to the fact that in this whole situation, 90% of everything people are doing here is in the context of HTTP services.   The problem of "something.a now creates state that other tasks might see" is not a real "problem" that is solved by using IO-only explicit context switching.  This is because in a CRUD-style application, "something" is not going to be a process-local yet thread-global object that had to be created specifically for the application (there are things like the database connection pool and some registries that the ORM uses, but those problems are taken care of and aren't specific to one particular application).     There is certainly going to be global mutable state with the CRUD/HTTP application, which is the database itself.  Event-based programming doesn't save you from concurrency issues here because any number of processes may be accessing the database at the same time.  There are well-established concurrency patterns one uses with relational databases, which include first and foremost transaction isolation, but also things like compare-and-swap, "select for update", ensuring MVCC is turned on (SQL Server), table locks, etc.  These techniques are independent of the concurrency pattern used within the application, and they are arguably better suited to blocking-style code anyway, because on the database side we must emit our commands within a transaction serially in any case.   The major convenience of "async", that we can fire off a bunch of web service requests in parallel, does not apply to the CRUD-style business methods within our web service request, because we can only do things in our ACID transaction one at a time.

The problem of "something.a" emitting IO needs to be made sane against other processes also viewing or altering "something.a", assuming "something" is a database-bound object like a row in a table, using traditional database concurrency constructs such as choosing an appropriate isolation mode, using atomically-composed SQL statements, things like that.   The problem of two greenlets or coroutines seeing "something" before it's been fully altered would happen across two processes in any case, but if "something" is a database row, that second greenlet would not see "something.a / something.b" in mid-flight because the isolation level is going to be at least "read committed".

In the realm of Python HTTP/CRUD applications, async is actually very popular however it is in the form of gevent and sometimes eventlet monkeypatching, often because people are using async web servers like gunicorn.    I don't see much explicit async at all because as mentioned before, there are very few async database drivers and there are also very few async database abstraction layers.   I've sort of made a side business at work out of helping people with the problems of gevent-enabled HTTP services.  There are two problems that I see: the main one is that they configure their workers for 1000 greenlets, they set their database connection pool to only allow 20 database connections, and then their processes get totally hung as all the requests pile up in one process that is advertising that it still has 980 more requests it can service.  The other one is that their application is completely CPU bound, and sometimes so badly that we see database timeouts because their greenlets can't respond to a database ping or authentication challenge within 30 seconds.   I have never seen any issues related to the fact that IO is implicit or that lazy loading confused someone.    Maybe this is a thing if they had some kind of microservice-parallel HTTP request spawning monster of some kind but we don't have that kind of thing in CRUD applications.

The two aforementioned problems with too many greenlets or coroutines vs. what their application can actually handle would occur just as much with an explicit async driver, and that's fine, I know how to debug these cases.  But in any case, people are already writing huge CRUD apps that run under gevent.   To my secondary idea that someone can run their app using asyncio and then on an *as needed* basis put some more CRUD-like methods into greenlets with blocking style code, this is an *improvement* over the current state of affairs where everything everywhere is implicit IO.  Not only that, but they can do this already common programming style and interact with a database driver that is *designed for async*.   Right now everyone uses pymysql because it is pure Python and therefore can have all the socket / IO related code monkeypatched by gevent.  It's bad.  Whether or not one thinks writing HTTP services using greenlets is a good idea or not, it is definitely better to do it using a database driver that is designed for async talking to the database without doing any monkeypatching.  My approach makes this possible where it has previously not been possible at all, so I think this represents a big improvement to an already popular programming pattern while at the same time introduces the notion of a single application using both explicit and implicit approaches simultaneously.

I think the notion that someone who really wants to use async/await in order to carefully schedule how they communicate with other web services and resources which often need to be loaded in parallel, but then for their transactional CRUD code which is necessarily serial in any case they can write those parts in blocking style, is a good thing.    This style of code is already prevalent and here we'd be giving an application the ability to use both styles simultaneously.   I had always hoped that Python's move towards asyncio would allow this programming paradigm to flourish as it seems inherently useful.  


> If you're just talking about using greenlets internally and then writing both sync and async shims to be your public API, then obviously that reduces the risks. Maybe greenlets will cause you problems, maybe not, but either way you know what you're getting into and the decision only affects you :-). But, if that's all you're using them for, then I'm not sure that they have a significant advantage over the edgedb-style synchronous wrapper or the unasync-style automatically generated sync code.

W.r.t. the issue of writing everything as async and then using the coroutine primitives to convert to "sync" as a means of maintaining both facades, I don't think that covers the fact that most DBAPI drivers are sync-only (and not monkeypatchable either, but I think we all agree here that monkeypatching is terrible in any case), and it doesn't suit the much more common use case of sync front end -> agnostic middle -> sync driver; to go from an async event loop to a blocking-IO database driver you need to use a thread executor of some kind.    The other way around, where the library code is written "sync" and you can attach "async" to both ends of it using greenlets in the middle, is a much more lightweight transition in that direction, vs. the transition from async internals out to a sync-only driver.
msg373273 - (view) Author: mike bayer (zzzeek) * Date: 2020-07-08 04:00
Slight correction: it is of course possible to use gevent with a database driver without monkeypatching, as I wrote my own gevent benchmarks using psycogreen.  I think what I'm getting at is that it's a good thing if async DBAPIs could target asyncio explicitly rather than having to write different gevent/eventlet-specific things, and that tools like SQLAlchemy can allow for greenlet-style coding against those DBAPIs without one having to install/run the whole gevent event loop.   Basically I like the greenlet style of coding, but I would be excited to skip the gevent part, never do any monkeypatching again, and also have other parts of the app doing asyncio work with other kinds of services.     This is about interoperability.
msg373280 - (view) Author: Nathaniel Smith (njs) * (Python committer) Date: 2020-07-08 06:26
> 90% of everything people are doing here are in the context of HTTP services.   The problem of, "something.a now creates state that other tasks might see" is not a real "problem" that is solved by using IO-only explicit context switching.  This is because in a CRUD-style application, "something" is not going to be a process-local yet thread-global object that had to be created specifically for the application (there's things like the database connection pool and some registries that the ORM uses, but those problems are taken care of and aren't specific to one particular application).

Yeah, in classic HTTP CRUD services the concurrency is just a bunch of stateless handlers running simultaneously. This is about the simplest possible kind of concurrency. There are times when async is useful here, but to me the main motivation for async is for building applications with more complex concurrency, that currently just don't get written because of the lack of good frameworks. So that 90% number might be accurate for right now, but I'm not sure it's accurate for the future.

> In the realm of Python HTTP/CRUD applications, async is actually very popular however it is in the form of gevent and sometimes eventlet monkeypatching, often because people are using async web servers like gunicorn.

A critical difference between gevent-style async and newer frameworks like asyncio and trio is that the newer frameworks put cancellation support much more in the foreground. To me cancellation is the strongest argument for 'await' syntax, so I'm not sure experience with gevent is representative.

I am a bit struck that you haven't mentioned cancellation handling at all in your replies. I can't emphasize enough how much cancellation requires care and attention throughout the whole ecosystem.

> w.r.t the issue of writing everything as async and then using the coroutine primitives to convert to "sync" as means of maintaining both facades, I don't think that covers the fact that most DBAPI drivers are sync only

I think I either disagree or am missing something :-). Certainly for both edgedb and urllib3, when they're running in sync mode, they end up using synchronous network APIs at the "bottom", and it works fine.

The greenlet approach does let you skip adding async/await annotations to your code, so it saves some work that way. IME this isn't particularly difficult (you probably don't need to change any logic at all, just add some extra annotations), and to me the benefits outweigh that, but I can see how you might prefer greenlets either temporarily as a transition hack or even in the long term.
msg373307 - (view) Author: mike bayer (zzzeek) * Date: 2020-07-08 13:28
As far as cancellation goes, I gather you're referring to what in gevent / greenlet is the GreenletExit exception.  Sure, that thing is a PITA.   Hence we're all working to provide asyncio frontends and networking backends, so that the effects of cancellation, I (handwavy handwavy) believe, would work smoothly as long as the middle part is done right.   Cancellation is likely a more prominent issue with HTTP requests and responses because users are hitting their browser stop buttons all the time.  With databases this is typically within the realm of network partitioning or service restarts, or the driver screwing up in some way (which is more likely with the monkeypatching approach), but "cancellation" from a database perspective is not the constant event that I think it is from an HTTP perspective.


> I think I either disagree or am missing something :-). Certainly for both edgedb and urllib3, when they're running in sync mode, they end up using synchronous network APIs at the "bottom", and it works fine.

OK, it took me a minute to understand what you're saying, which is: if we are doing the coroutine.send() thing you illustrated earlier, we're not in an event loop anyway, so we can just call blocking code.   OK, I did not understand that.  I haven't looked at the coroutine internals through all of this (which is part of my original assertion that I should not have been the person proposing this whole greenlet thing anyway :) ).

Why did urllib3 write unasync (https://pypi.org/project/unasync/)?    Strictly so they can have a Python 2 codebase, and that's it?

SQLAlchemy goes Python 3-only in version 2.0.  I did bench the coro example against a non-coro example and it's 3x slower, likely due to the StopIteration, but as mentioned earlier, if this happens only once per front-to-back call it would not amount to anything in context.   Still, the risk factor of a rewrite like that, where risk encompasses all the dumb mistakes and bugs that would be introduced by rewriting everything, does not seem worth it.
msg373311 - (view) Author: mike bayer (zzzeek) * Date: 2020-07-08 14:59
I tested "cancellation", shutting down the DB connection mid query.  Because the greenlet is only in the middle and not at the endpoints, it propagates the exception and there does not seem to be anything different except for the greenlet sequence in the middle, which is also clear:

https://gist.github.com/zzzeek/9e0d78eff14b3bbd5cf12fed8b02bce6

the first comment on the gist has the stack trace produced.
msg389487 - (view) Author: David Brochart (davidbrochart) Date: 2021-03-24 19:42
Regarding the initial message in this issue, and enabling recursive event loops, this has proved to be very useful when an event loop is already running and some non-async code needs to run async code. This situation is very frequent when e.g. a library is designed to be async-first, and also provides a blocking API which just wraps the async code by running it until complete.
The nest-asyncio library (https://github.com/erdewit/nest_asyncio) allows that by patching asyncio's event loop, but obviously this doesn't work with other event loops such as uvloop. I was wondering if things had changed since the original post of this issue, and if such a feature had any chance to make it into the standard library.
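A minimal sketch of that pattern with nest-asyncio (the function names here are illustrative, not from any particular library):

    import asyncio
    import nest_asyncio

    async def fetch_data():
        await asyncio.sleep(0)      # stand-in for real async work
        return 42

    def fetch_data_blocking():
        # blocking facade that wraps the async code by running it until
        # complete; without nest_asyncio this raises "This event loop is
        # already running" when called while the loop is active
        return asyncio.get_event_loop().run_until_complete(fetch_data())

    async def already_async_caller():
        # e.g. code running in a Jupyter cell, where a loop is running
        return fetch_data_blocking()

    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    nest_asyncio.apply(loop)        # patch the loop to allow re-entry
    print(loop.run_until_complete(already_async_caller()))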
msg398257 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2021-07-26 21:06
> nest-asyncio library (https://github.com/erdewit/nest_asyncio)

Seeing that the community actively wants to have support for nested loops I'm slowly changing my opinion on this.

Guido, maybe we should allow nested asyncio loops, disabled by default?
msg399920 - (view) Author: Douglas Raillard (douglas-raillard-arm) Date: 2021-08-19 14:22
Drive-by comment: I landed on this thread for the exact same reason:
> This situation is very frequent when e.g. a library is designed to be async-first, and also provides a blocking API which just wraps the async code by running it until complete.

The library in question is "devlib", which abstracts over SSH/adb/local shell. We cannot make a "full" switch to async as it would be a big breaking change. To work around that, I came up with a decorator that wraps a coroutine and "replaces" it such that:

    @asyncf
    async def f(...):
        ...

    # Blocking call under its "normal" name, for backward compat
    f()

    # Used in an async environment
    await f.asyn()

This allows converting bit by bit the whole library, with full backward compatibility for both users and internal calls.

On top of that, that library is heavily used in Jupyter notebooks, so all in all, nest-asyncio is impossible to avoid.
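For reference, a hedged sketch of what such a decorator might look like (asyncf is the name from the message above, but the body is an illustration, not devlib's actual implementation; the blocking path is exactly the part that needs nest-asyncio once a loop is already running):

    import asyncio
    import functools

    def asyncf(coro_fn):
        # expose a blocking call under the original name, and the coroutine
        # itself under .asyn, as described above
        @functools.wraps(coro_fn)
        def blocking(*args, **kwargs):
            # runs the coroutine to completion; raises RuntimeError if an
            # event loop is already running in this thread
            return asyncio.run(coro_fn(*args, **kwargs))
        blocking.asyn = coro_fn
        return blocking

    @asyncf
    async def f(x):
        await asyncio.sleep(0)
        return x * 2

    print(f(21))                      # blocking call, backward compatible
    print(asyncio.run(f.asyn(21)))    # async form: await f.asyn(21)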
History
Date                 User                  Action  Args
2022-04-11 14:58:07  admin                 set     github: 66435
2021-12-11 07:40:23  kumaraditya           set     nosy: + kumaraditya
2021-09-28 22:07:39  rob.moore             set     nosy: + rob.moore
2021-08-19 14:22:25  douglas-raillard-arm  set     nosy: + douglas-raillard-arm; messages: + msg399920
2021-07-26 21:06:24  yselivanov            set     messages: + msg398257
2021-03-24 19:42:11  davidbrochart         set     nosy: + davidbrochart; messages: + msg389487
2020-07-08 14:59:48  zzzeek                set     messages: + msg373311
2020-07-08 13:28:48  zzzeek                set     messages: + msg373307
2020-07-08 06:33:52  pmpp                  set     nosy: + pmpp
2020-07-08 06:26:34  njs                   set     messages: + msg373280
2020-07-08 04:00:14  zzzeek                set     messages: + msg373273
2020-07-08 03:44:22  zzzeek                set     messages: + msg373270
2020-07-07 21:15:39  njs                   set     messages: + msg373249
2020-07-07 20:43:04  yselivanov            set     messages: + msg373247
2020-07-07 20:26:12  zzzeek                set     messages: + msg373245
2020-07-07 20:12:53  zzzeek                set     messages: + msg373244
2020-07-07 19:37:37  yselivanov            set     messages: + msg373241
2020-07-07 19:20:11  njs                   set     nosy: + njs; messages: + msg373240
2020-07-07 15:37:59  jab                   set     nosy: + jab
2020-07-06 23:28:25  zzzeek                set     messages: + msg373192
2020-07-06 22:13:27  yselivanov            set     messages: + msg373186
2020-07-06 21:58:53  zzzeek                set     messages: + msg373183
2020-07-06 21:33:04  yselivanov            set     messages: + msg373177
2020-07-06 17:05:17  zzzeek                set     nosy: + zzzeek; messages: + msg373145
2019-06-19 11:04:14  crusaderky            set     nosy: + crusaderky
2019-02-22 15:54:10  gvanrossum            set     nosy: - gvanrossum
2019-02-22 13:47:44  vstinner              set     nosy: - vstinner
2019-02-21 22:53:58  jcea                  set     nosy: + jcea
2017-07-15 08:21:47  Rokas K. (rku)        set     nosy: + Rokas K. (rku); messages: + msg298388
2014-08-28 19:10:41  djarb                 set     files: + nested.patch; keywords: + patch; messages: + msg226037
2014-08-25 18:04:32  gvanrossum            set     status: open -> closed; resolution: wont fix; messages: + msg225883
2014-08-21 22:39:28  martin.panter         set     nosy: + martin.panter
2014-08-20 18:03:50  djarb                 create