
Author: rhettinger
Recipients: AlexWaygood, martenlienen, rhettinger, serhiy.storchaka
Date: 2021-10-24 20:36:23
> In my use case, the objects hold references to large blocks 
> of GPU memory that should be freed as soon as possible. 

That makes sense.  I can see why you reached for weak references.

Would it be a workable alternative to have an explicit close() method to free the underlying resources?  We do this with StringIO instances because relying on reference counting or GC is a bit fragile.
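
A minimal sketch of that alternative (the DeviceBuffer name and the bytearray stand-in for GPU memory are illustrative, not from your report):

class DeviceBuffer:
    def __init__(self, nbytes):
        # A bytearray stands in for a large device allocation.
        self._block = bytearray(nbytes)

    def close(self):
        # Deterministic release -- no reliance on refcounting or GC.
        self._block = None

    # Context-manager support so a with-block frees promptly.
    def __enter__(self):
        return self

    def __exit__(self, *exc_info):
        self.close()
        return False

with DeviceBuffer(1 << 20) as buf:
    ...  # use buf; the resource is freed on exit from the block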


> Additionally, I use cached_method to cache expensive hash 
> calculations which the alternatives cannot do because they 
> would run into bottomless recursion.

This is a new concern.  Can you provide a minimal example that recurses infinitely with @lru_cache but not with @cached_method?  Offhand, I can't think of an example.
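
For reference, the mechanics in play (a sketch, not code from the report): with @lru_cache on a method, the instance itself is part of the cache key, so every lookup computes hash(self) and an unbounded cache holds a strong reference to the instance:

from functools import lru_cache

class Grid:
    @lru_cache(maxsize=None)
    def neighbors(self, i, j):
        # self is part of the key: hash(self) runs on every call,
        # and the unbounded cache keeps the instance alive.
        return [(i + di, j + dj)
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)]

g = Grid()
g.neighbors(0, 0)  # cached per (self, i, j)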


> Do you think, in the face of these ambiguities, that the
> stdlib should potentially just not cover this?

Perhaps this should be put on PyPI rather than in the standard library. 

The core problem is that having @cached_method in the standard library tells users that this is the recommended and preferred way to cache methods.  However, in most cases they would be worse off.  And it isn't easy to know when @cached_method would be a win, or to assess the loss of other optimizations like __slots__ and key-sharing dicts.
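
To make the __slots__ point concrete (a sketch using functools.cached_property, which has the same need for per-instance storage that a method cache would):

from functools import cached_property

class Point:
    __slots__ = ("x", "y")  # saves memory, but removes __dict__

    def __init__(self, x, y):
        self.x, self.y = x, y

    @cached_property
    def norm(self):
        return (self.x ** 2 + self.y ** 2) ** 0.5

p = Point(3, 4)
try:
    p.norm
except TypeError:
    # cached_property has no per-instance __dict__ to store the
    # result in, so the class must give up __slots__ to use it.
    pass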