> has the potential to fill the LRU cache with calls that
> will ultimately result in errors
How would a call result in an error? The worst that can happen is a cache miss and the underlying function gets called.
> Change `functools.lru_cache` to not cache calls that
> result in `NotImplemented`.
That would be a backwards incompatible change.
> Allow a user to easily extend `functools.lru_cache`
The implementation is intentionally closed off to support thread-safety, a C implementation, ease of use, and performance.
We've provided OrderedDict to make it relatively easy to roll your own cache variants. Essentially the only algorithmically interesting part of the lru_cache is the linked list. OrderedDict has that logic already built in (the move_to_end() method does the equivalent of the linked list update).
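For instance, a roll-your-own variant that declines to cache NotImplemented results might look like this (a minimal sketch, not stdlib code; the decorator name, the maxsize parameter, and the skip-on-NotImplemented check are all illustrative):

```python
from collections import OrderedDict
from functools import wraps

def lru_cache_skip_notimplemented(maxsize=128):
    """Illustrative LRU cache that declines to store NotImplemented results."""
    def decorator(func):
        cache = OrderedDict()
        @wraps(func)
        def wrapper(*args):
            if args in cache:
                cache.move_to_end(args)        # linked-list update equivalent
                return cache[args]
            result = func(*args)
            if result is not NotImplemented:   # don't cache the sentinel
                cache[args] = result
                if len(cache) > maxsize:
                    cache.popitem(last=False)  # evict least recently used
            return result
        return wrapper
    return decorator
```

Note this sketch omits the thread-safety locking and keyword-argument handling that the real lru_cache provides.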
> Add the ability to *throw* `NotImplemented` in binary dunder
> methods so the function call does not complete. This would
> work exactly like returning `NotImplemented` and be handled
> by the runtime.
Deep changes to the language would require a PEP.
> And my current solution...
> 4. Copy-paste `functools.lru_cache`
The usual technique is to cache only what you want cached.
Consider factoring out the expensive logic rather than caching the NotImplemented logic:
    def __add__(self, other):
        if not isinstance(other, type(self)):
            return NotImplemented
        return self._expensive_method(other)

    @lru_cache
    def _expensive_method(self, other):
        ...
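A self-contained version of that pattern (the class name and the arithmetic inside `_expensive_method` are made up for illustration; note the class must be hashable for lru_cache to work on its methods):

```python
from functools import lru_cache

class Money:
    """Toy class showing the factored-out caching pattern."""
    def __init__(self, amount):
        self.amount = amount

    def __hash__(self):
        return hash(self.amount)

    def __eq__(self, other):
        return isinstance(other, Money) and self.amount == other.amount

    def __add__(self, other):
        if not isinstance(other, type(self)):
            return NotImplemented          # never reaches the cache
        return self._expensive_method(other)

    @lru_cache
    def _expensive_method(self, other):
        return Money(self.amount + other.amount)  # stand-in for costly work
```

Only the expensive path is memoized; the NotImplemented return happens before the cached method is ever called.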