asyncio: Optimize get_event_loop and _get_running_loop #75531
Comments
get_event_loop() and _get_running_loop() can be made faster. I've been experimenting with different techniques to optimize get_event_loop and _get_running_loop. After discussing with Yuri, we decided not to inline _get_running_loop inside get_event_loop.
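For context, here is a simplified sketch (not the actual CPython source) of the two paths inside get_event_loop(): the fast path returns the currently running loop via _get_running_loop(), and the slow path falls back to the event-loop policy. The name get_event_loop_sketch is hypothetical, for illustration only.

```python
import asyncio

def get_event_loop_sketch():
    # Fast path: if a loop is already running in this thread, return it
    # directly, skipping the policy machinery entirely.
    current_loop = asyncio.events._get_running_loop()
    if current_loop is not None:
        return current_loop
    # Slow path: defer to the policy, which creates or returns the
    # per-thread default loop.
    return asyncio.get_event_loop_policy().get_event_loop()
```

The optimization discussed here targets the fast path, since that is the branch taken on every call made from inside a coroutine.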
Could you please provide the code for your benchmark?
Benchmark script (run 10 times to get the mean and stdev):

```python
import asyncio
import statistics
import time

async def async_get_loop():
    start_time = time.time()
    for _ in range(5000000):
        asyncio.get_event_loop()
    return time.time() - start_time

loop = asyncio.get_event_loop()
results = []
for _ in range(10):
    result = loop.run_until_complete(async_get_loop())
    results.append(result)

print("elapsed time: %.3f +- %.3f secs" % (statistics.mean(results), statistics.stdev(results)))
```
I suggest using my perf module to run the benchmark, especially when the tested function takes less than 1 ms, which is the case here. The attached benchmark script calls asyncio.get_event_loop(). Result on the master branch with PR 3347:

```
haypo@selma$ ./python ~/bench_asyncio.py --inherit=PYTHONPATH -o patch.json
Mean +- std dev: [ref] 881 ns +- 42 ns -> [patch] 859 ns +- 14 ns: 1.03x faster (-3%)
```

I'm not convinced that the PR is worth it. A 3% speedup on a micro benchmark is not interesting. Or is there an issue in my benchmark?
I found a small issue in the PR (left a comment there). I think using a tuple is still a good idea (even if the speedup is tiny) because, logically, both attributes on that threading.local() object are always set and read at the same time. Essentially, it's a (loop, pid) pair, so using a tuple makes the code easier to reason about.
If the motivation is correctness rather than performance, please adjust the issue and PR descriptions :-)
Yes. The correct benchmark would be to measure
According to Jimmy, asyncio.get_event_loop() behaves differently when it's called while an event loop is running, so my first benchmark was wrong. The attached bench_get_event_loop.py measures asyncio.get_event_loop() performance while an event loop is running. I get a different result:

```
haypo@selma$ ./python -m perf compare_to ref.json patch.json
```

Ok, now it's 10% faster :-)
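The behavioral difference mentioned above can be checked directly: inside a running loop, get_event_loop() returns the running loop itself rather than consulting the policy. A minimal check (not from the thread):

```python
import asyncio

results = {}

async def main():
    # Inside a coroutine, get_event_loop() takes the fast path and
    # returns the loop that is currently running.
    results["same"] = asyncio.get_event_loop() is asyncio.get_running_loop()

loop = asyncio.new_event_loop()
try:
    loop.run_until_complete(main())
finally:
    loop.close()
```

This is why a benchmark must call get_event_loop() from inside a running loop to exercise the code path the PR optimizes.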
This has been backported. |