call_at/call_later with Timer cancellation can result in (practically) unbounded memory usage. #66638
The core issue stems from the implementation of Timer cancellation (which features like asyncio.wait_for build upon). BaseEventLoop stores scheduled events in an array-backed heapq named _scheduled. Once an event has been scheduled with call_at, cancelling it only marks it as cancelled; it is not removed from the array-backed heap. It is only removed once it becomes the next scheduled event for the loop. In a system where many events with long timeout periods are scheduled and then cancelled, and there always exists at least one event scheduled for an earlier time, memory use is practically unbounded.

The attached program wait_for.py demonstrates a trivial example where memory use grows without bound over an hour of run time. This is the case even though the program only ever has two "uncancelled" events and two coroutines at any given time in its execution.

This could be fixed in a variety of ways:

a) Timer cancellation could result in the object being removed from the heap, like in the sched module. This would be at least O(N), where N is the number of scheduled events.

b) Timer cancellation could mark the event as cancelled (as is done now) while also tracking how many cancelled events remain in the heap; once that count grows too large relative to the heap's size, the cancelled events could be filtered out and the heap rebuilt.

c) The scheduled events could be kept in a different data structure, such as a balanced tree, from which cancelled events can be removed cheaply.

Given python's lack of a balanced tree structure in the standard library, I assume option c) is a non-starter. I would prefer option b) over option a): when there are a lot of scheduled events in the system (upwards of 50,000 - 100,000 in some of my use cases), the amortized complexity of cancelling an event trends towards O(1) (N/2 cancellations are handled by a single O(N) rebuild), at the cost of slightly more, but bounded relative to the number of events, memory.

I would be willing to take a shot at implementing this patch with the most agreeable option. Please let me know if that would be appreciated, or if someone else would rather tackle this issue. (First time bug report for python, not sure of the politics/protocols involved.) Disclaimer that I am by no means an asyncio expert; my understanding of the code base is based on my reading of it while debugging this memory leak.
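A minimal sketch of the behaviour (separate from the attached wait_for.py; the numbers here and the peek at the private _scheduled attribute are purely illustrative):

    import asyncio

    # One near-term timer keeps the head of the heap "live", so the lazy
    # "pop cancelled handles off the head" cleanup never reaches the
    # cancelled far-future timers queued behind it.
    loop = asyncio.new_event_loop()
    loop.call_later(0.01, lambda: None)
    handles = [loop.call_later(3600, lambda: None) for _ in range(100000)]
    for handle in handles:
        handle.cancel()              # only marks each TimerHandle as cancelled
    print(len(loop._scheduled))      # private attribute, shown for illustration:
                                     # ~100001 handles are still on the heap
    loop.close()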
Hi Joshua,

This is indeed a problem -- I just never expected that you would be having that many events and canceling the majority of them. I am sorry you had to debug this. :-(

This was anticipated by the author of pyftpdlib (Giampaolo Rodola'), who proposed an elegant solution: keep track of the number of cancelled events, and when the number gets too high (according to some measure) the heapq is simply rebuilt by filtering out cancelled events. I think this is similar to your proposal (b).

I would love it if you could implement this! Just make sure to add some tests and follow the PEP-8 style guide. You can contribute upstream to the Tulip project first. https://code.google.com/p/tulip/
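Roughly, the idea is something like the following sketch; the names, the threshold, and where the hook lives are illustrative, not the eventual patch:

    import heapq

    class SchedulerSketch:
        def __init__(self):
            self._scheduled = []         # heap of timer handles
            self._cancelled_count = 0    # how many of them have been cancelled

        def timer_cancelled(self, handle):
            handle._cancelled = True
            self._cancelled_count += 1
            # Once enough of the heap is dead weight, rebuild it without the
            # cancelled handles.  The O(N) rebuild is paid for by ~N/2
            # cancellations, so cancelling stays amortized O(1).
            if self._cancelled_count > max(50, len(self._scheduled) // 2):
                self._scheduled = [h for h in self._scheduled
                                   if not h._cancelled]
                heapq.heapify(self._scheduled)
                self._cancelled_count = 0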
Will I be writing a patch and tests for tulip, and then a separate patch and tests for python 3.4? Or will I submit to tulip, and then the changes will get merged from tulip into python by some other process? If possible, I would like to get this into python 3.4.2 (assuming all goes well).
We can merge the changes into 3.4 and 3.5 for you; it's just a simple copy.
By the way I just looked at wait_for.py; it has a bug where do_work() isn't using yield-from with the sleep() call. But that may well be the issue you were trying to debug, and this does not change my opinion about the issue -- I am still looking forward to your patch.
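The kind of mistake I mean is sketched below; the body is a made-up stand-in (3.4-era generator coroutine syntax), not the actual code of wait_for.py:

    import asyncio

    @asyncio.coroutine
    def do_work():
        asyncio.sleep(60)             # bug: builds a coroutine object that is never run
        yield from asyncio.sleep(60)  # correct: actually suspends for 60 seconds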
My patch is ready for review. If I followed the process correctly, I think you should have received an email: https://codereview.appspot.com/145220043

That was not intended, it was just a mistake.

(A quick aside on yield from, feel free to ignore, I don't expect to change anyone's opinion on this.) I prefer the greenlet style, where context switches are implicit and I only mark the regions where they must not happen, e.g.

    with assert_no_switchpoints():
        do_something()
        do_something_else()

I also find that it is less error prone (missing a yield from), but that is a minor point as I could write a static analyzer (on top of test cases, of course) to check for that. But that's just my opinion, and opinions evolve :)
I will try to review later tonight. One thing though:
That makes sense when using gevent, but not when using asyncio or Trollius.
Thanks!
Yes, I am aware of that. I have written a small custom library using fibers (a greenlet-like library) on top of asyncio so that I don't need to use yield from in my application(s). |
Hm. That sounds like you won't actually be interoperable with other asyncio code.
New changeset 2a868c9f8f15 by Yury Selivanov in branch '3.4':
New changeset a6aaacb2b807 by Yury Selivanov in branch 'default':
asyncio code can be interoperated with by spinning off an asyncio coroutine that, on completion, calls a callback that reschedules a non-asyncio coroutine. I assume we shouldn't be spamming an issue with unrelated chatter; I'd be happy to discuss more via email if you would like.
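Something along these lines (a rough sketch; reschedule_fiber is a hypothetical hook into the non-asyncio scheduler):

    import asyncio

    def bridge(loop, coro, reschedule_fiber):
        # Run the asyncio coroutine as a Task and, when it completes, hand
        # its result back to the non-asyncio scheduler from a done-callback.
        task = loop.create_task(coro)   # asyncio.async(coro) on older 3.4 releases
        task.add_done_callback(lambda fut: reschedule_fiber(fut.result()))
        return task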
Also - should I close this issue now that a patch has been committed?
Hm, strange, usually the roundup robot closes issues. Anyways, closed now. Thanks again, Joshua.
Sorry, I'm coming in late, after the review and the commit, but I worry about the performance of _run_once(), since it's the core of asyncio. Yury proposed to only iterate once on self._scheduled when removing delayed calls, and I have the same concern.

Here is a patch which changes _run_once() to only iterate once. IMO the change is obvious: the current code iterates twice and makes the same check twice (checking the _cancelled attribute of handles).
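Schematically, the two shapes being compared look like this (not the exact code of either the committed version or the patch):

    import heapq

    def cleanup_with_comprehension(scheduled):
        # Filter the cancelled handles out with a comprehension, then re-heapify.
        alive = [h for h in scheduled if not h._cancelled]
        heapq.heapify(alive)
        return alive

    def cleanup_with_single_loop(scheduled):
        # Walk the heap once, checking each handle's _cancelled flag exactly once.
        alive = []
        for handle in scheduled:
            if not handle._cancelled:
                alive.append(handle)
        heapq.heapify(alive)
        return alive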
Victor,

During the code review we tried the single loop approach. At the end Joshua wrote a small benchmark to test whether it's really faster to do it in one loop or not. It turned out that the single loop approach is not faster than loop+comprehension (but it's not much slower either; I'd say they are about the same in terms of speed). The one loop approach might work faster on PyPy, but unfortunately, they still don't support 3.3 to test.
IMO it makes the code simpler and easier to understand.
But it's a tad slower, like 2-3% ;) You can test it yourself; we only tested it on a huge list of 1M tasks. FWIW, I'm not opposed to your patch.
Victor,

I've done some additional testing. Here's a test that Joshua wrote for the code review: https://gist.github.com/1st1/b38ac6785cb01a679722

It appears that the single loop approach works a bit faster for smaller collections of tasks. On a list of 10000 tasks it's on average 2-3% faster; on a list of 1000000 tasks it's 2-3% slower. I'm not sure what the average number of tasks is for an "average" asyncio application, but something tells me it's not going to be in the range of millions.

I think you can fix the code to have a single loop.
"... on average faster 2-3% ... slower for 2-3%" 3% is meaningless. On a microbenchmark, you can say that it's faster or slower if the difference is at least 10%. |
New changeset b85ed8bb7523 by Victor Stinner in branch '3.4':
New changeset 8e9df3414185 by Victor Stinner in branch 'default':
Victor,

Here are the updated benchmark results:

2 loops is always about 30-40% slower. I've updated the benchmark I used: https://gist.github.com/1st1/b38ac6785cb01a679722 Now it incorporates a call to heapify, and should yield more stable results. Please check it out, as I may be doing something wrong there, but if it's alright, I think that you need to revert your commits.
typo:
2 loops is always about 30-40% faster.
Hum, you don't reset start between the two benchmarks.
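Schematically (with made-up stand-ins for the real benchmark bodies), the mistake is that the second measurement reuses the first start, so it also includes the first run:

    import time

    def run_single_loop_version(tasks):      # stand-in for the real benchmark body
        return [t for t in tasks if t % 2]

    def run_loop_plus_comprehension(tasks):  # stand-in for the real benchmark body
        return [t for t in list(tasks) if t % 2]

    tasks = list(range(1000000))

    start = time.monotonic()
    run_single_loop_version(tasks)
    print('one loop :', time.monotonic() - start)

    run_loop_plus_comprehension(tasks)       # bug: 'start' was never reset, so this
    print('two loops:', time.monotonic() - start)  # timing includes the first run too

    start = time.monotonic()                 # fix: re-take the reference point
    run_loop_plus_comprehension(tasks)
    print('two loops:', time.monotonic() - start)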
Eh, I knew something was wrong. Thanks.

NUMBER_OF_TASKS 100000

Please commit your change to the tulip repo too.
Oh sorry. In fact, I made the commit but I forgot to push my change :-p |