Message384883
It's convenient to use @lru_cache on functions with no arguments to delay doing some work until the first time it is needed. Since @lru_cache is implemented in C, it is already faster than manually caching in a closure variable. However, it can be made even faster and more memory-efficient by not using the dict at all and caching just the one result that the function returns.
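The lazy-initialization pattern described above looks like this (a minimal sketch; the function name and the work it does are made up for illustration):

```python
from functools import lru_cache

@lru_cache()
def get_config():
    # The expensive work runs only on the first call;
    # every later call returns the cached result.
    print("loading config...")
    return {"debug": False}

get_config()  # does the work and caches the result
get_config()  # returns the same cached object, no recomputation
```

Since the function takes no arguments, the cache can only ever hold a single entry, which is what makes the single-slot optimization below possible.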
Here are my timing results. Before my changes:
$ ./python -m timeit -s "import functools; f = functools.lru_cache()(lambda: 1)" "f()"
5000000 loops, best of 5: 42.2 nsec per loop
$ ./python -m timeit -s "import functools; f = functools.lru_cache(None)(lambda: 1)" "f()"
5000000 loops, best of 5: 38.9 nsec per loop
After my changes:
$ ./python -m timeit -s "import functools; f = functools.lru_cache()(lambda: 1)" "f()"
10000000 loops, best of 5: 22.6 nsec per loop
So we get an improvement of about 80% compared to the default maxsize, and about 70% compared to maxsize=None.
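The actual change is in the C implementation of functools, but the idea can be sketched in pure Python: for a zero-argument function there is only one possible cache key, so the dict can be replaced by a single variable (cache_once and _UNSET below are hypothetical names, not part of functools):

```python
_UNSET = object()  # sentinel: distinguishes "not cached yet" from a cached None

def cache_once(func):
    """Cache the single result of a zero-argument function, no dict needed."""
    result = _UNSET

    def wrapper():
        nonlocal result
        if result is _UNSET:
            result = func()  # computed exactly once
        return result

    return wrapper
```

A sentinel is needed rather than None so that a function legitimately returning None is still only called once.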
Date | User | Action | Args
2021-01-12 04:31:59 | eltoder | set | recipients: + eltoder, vstinner, serhiy.storchaka
2021-01-12 04:31:59 | eltoder | set | messageid: <1610425919.12.0.56212738564.issue42903@roundup.psfhosted.org>
2021-01-12 04:31:59 | eltoder | link | issue42903 messages
2021-01-12 04:31:58 | eltoder | create |