
Author serhiy.storchaka
Recipients rhettinger, serhiy.storchaka
Date 2019-01-21.20:11:16
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1548101476.21.0.39558375364.issue35780@roundup.psfhosted.org>
In-reply-to
Content
That was an explanation of the possible history, not a justification of the bugs. Of course the bugs should be fixed. Thank you for rechecking this code and for your fix.

As for the optimization in lru_cache_make_key(), consider the following example:

from functools import lru_cache

@lru_cache()
def f(x):
    return x

print(f(1))
print(f(1.0))

Currently the C implementation memoizes only one result, and f(1.0) returns 1. With the Python implementation, and with the proposed changes, it will return 1.0. I do not say that either answer is definitely wrong, but we should be aware of this, and it would be better if both implementations were consistent. I am sure this example was already discussed, but I cannot find it now.
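A minimal sketch of why the two implementations can disagree (assumption: make_key_bare and make_key_python below are simplified illustrative stand-ins for the C shortcut and for functools._make_key, not the actual CPython code):

```python
FASTTYPES = {int, str}

def make_key_bare(args):
    # Bare-argument shortcut: a lone argument becomes the key itself,
    # so 1 and 1.0 collide (equal values, equal hashes).
    return args[0] if len(args) == 1 else args

def make_key_python(args):
    # Type-restricted shortcut: only int/str keys are used bare; 1.0 stays
    # wrapped in a tuple, whose hash differs from hash(1), so no collision.
    if len(args) == 1 and type(args[0]) in FASTTYPES:
        return args[0]
    return args  # the real Python code wraps this in _HashedSeq

print(make_key_bare((1,)) == make_key_bare((1.0,)))      # True: one cache entry
print(make_key_python((1,)) == make_key_python((1.0,)))  # False: two entries
```

With the bare-argument key, f(1) and f(1.0) share one cache slot and the first cached result wins; with the type-restricted key they occupy two slots.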

I have not yet analyzed the new code for adding to the cache. I agree that the old code contains flaws.

I am not sure about the addition of _PyDict_GetItem_KnownHash(). Any dict operation can execute arbitrary code and re-enter bounded_lru_cache_wrapper(). Would an API that atomically checks and updates the dict (like getdefault()/setdefault()) not be useful here?
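A sketch of the re-entrancy hazard (assumption: Reentrant is an illustrative key type made up here, not a case from the issue): any key whose __hash__ or __eq__ runs Python code can call back into the cached function while the wrapper is in the middle of a dict lookup.

```python
from functools import lru_cache

reentered = False

@lru_cache(maxsize=4)
def f(x):
    return x

class Reentrant:
    """A key whose __eq__ re-enters the cached function."""
    def __hash__(self):
        return 42          # force a hash collision between instances
    def __eq__(self, other):
        global reentered
        reentered = True
        f(0)               # re-enters the wrapper during the dict lookup
        return self is other

f(Reentrant())
f(Reentrant())    # collides with the first key, so __eq__ fires mid-lookup
print(reentered)  # True
```

An atomic check-and-update such as dict.setdefault() narrows this window at the wrapper level, although user __hash__/__eq__ code can still run inside that single call.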