
Author eltoder
Recipients eltoder, serhiy.storchaka, vstinner
Date 2021-01-12.04:31:58
Message-id <1610425919.12.0.56212738564.issue42903@roundup.psfhosted.org>
Content
It's convenient to use @lru_cache on functions with no arguments to delay doing some work until the first time it is needed. Since @lru_cache is implemented in C, it is already faster than manually caching in a closure variable. However, it can be made even faster and more memory-efficient by not using the dict at all and caching just the single result that the function returns.
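For context, here is a minimal sketch of the pattern in question, with @lru_cache next to the manual closure-variable approach it beats; the load_config name and its body are made up for illustration:

import functools

@functools.lru_cache(maxsize=None)
def load_config():
    # Runs only on the first call; every later call returns the
    # cached result without redoing the work.
    return {"debug": False}

# The hand-rolled closure equivalent (assumes the result is never
# None, since None doubles as the "not computed yet" marker here):
def make_load_config():
    cached = None
    def load_config():
        nonlocal cached
        if cached is None:
            cached = {"debug": False}
        return cached
    return load_config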

Here are my timing results. Before my changes:

$ ./python -m timeit -s "import functools; f = functools.lru_cache()(lambda: 1)" "f()"
5000000 loops, best of 5: 42.2 nsec per loop
$ ./python -m timeit -s "import functools; f = functools.lru_cache(None)(lambda: 1)" "f()"
5000000 loops, best of 5: 38.9 nsec per loop

After my changes:

$ ./python -m timeit -s "import functools; f = functools.lru_cache()(lambda: 1)" "f()"
10000000 loops, best of 5: 22.6 nsec per loop

So we get an improvement of about 80% compared to the default maxsize and about 70% compared to maxsize=None.
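To illustrate the idea (the actual change is in the C implementation of functools), here is a pure-Python sketch of the dict-free fast path for zero-argument functions; zero_arg_cache and the _UNSET sentinel are hypothetical names for this sketch, not what the patch uses:

import functools

_UNSET = object()  # sentinel: distinguishes "not computed yet" from a None result

def zero_arg_cache(func):
    # Hypothetical sketch: store the single result directly in a
    # closure cell, skipping the hash table lru_cache normally keeps.
    result = _UNSET
    @functools.wraps(func)
    def wrapper():
        nonlocal result
        if result is _UNSET:
            result = func()
        return result
    return wrapper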