
Author: ncoghlan
Recipients: Omer.Katz, alex, carljm, eric.araujo, madison.may, ncoghlan, pitrou, pydanny, r.david.murray, rhettinger, serhiy.storchaka, vstinner
Date: 2016-11-12.12:15:33
Message-id: <1478952934.0.0.294738982834.issue21145@psf.upfronthosting.co.za>
In-reply-to:
Content
I realised that PEP 487's __set_name__ can be used to detect the `__slots__` conflict at class definition time rather than on first lookup:

    def __set_name__(self, owner, name):
        try:
            slots = owner.__slots__
        except AttributeError:
            return
        if "__dict__" not in slots:
            msg = f"'__dict__' attribute required on {owner.__name__!r} instances to cache {name!r} property."
            raise TypeError(msg)
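
To illustrate the effect, here is a minimal sketch (hypothetical names such as _checked_property, Point and norm, not proposed stdlib code) of a descriptor carrying only that check; defining a slots-only class then fails while the class is being created rather than on first attribute access:

class _checked_property:
    """Hypothetical descriptor carrying only the definition-time check above."""

    def __init__(self, func):
        self.func = func

    def __set_name__(self, owner, name):
        try:
            slots = owner.__slots__
        except AttributeError:
            return
        if "__dict__" not in slots:
            msg = (f"'__dict__' attribute required on {owner.__name__!r} "
                   f"instances to cache {name!r} property.")
            raise TypeError(msg)

try:
    class Point:
        __slots__ = ("x", "y")  # no '__dict__' slot, so nowhere to cache the value

        @_checked_property
        def norm(self):
            return (self.x ** 2 + self.y ** 2) ** 0.5
except (TypeError, RuntimeError) as exc:
    # The failure happens during class creation, not on first lookup.
    # (CPython wraps exceptions raised by __set_name__ in a RuntimeError.)
    print(exc)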

It also occurred to me that, at the expense of one level of indirection in the runtime lookup, PEP 487's __set_name__ hook and a specified naming convention already permit a basic, albeit inefficient, "cached_slot" implementation:

from threading import RLock

class cached_slot:
    def __init__(self, func):
        self.func = func
        self.cache_slot = func.__name__ + "_cache"
        self.__doc__ = func.__doc__
        self._lock = RLock()

    def __set_name__(self, owner, name):
        try:
            slots = owner.__slots__
        except AttributeError:
            msg = f"cached_slot requires '__slots__' on {owner!r}"
            raise TypeError(msg) from None
        if self.cache_slot not in slots:
            msg = f"cached_slot requires {self.cache_slot!r} slot on {owner!r}"
            raise TypeError(msg)

    def __get__(self, instance, cls=None):
        if instance is None:
            return self
        try:
            return getattr(instance, self.cache_slot)
        except AttributeError:
            # Cache not initialised yet, so proceed to double-checked locking
            pass
        with self._lock:
            # check if another thread filled cache while we awaited lock
            try:
                return getattr(instance, self.cache_slot)
            except AttributeError:
                # Cache still not initialised yet, so initialise it
                setattr(instance, self.cache_slot, self.func(instance))
        return getattr(instance, self.cache_slot)

    def __set__(self, instance, value):
        setattr(instance, self.cache_slot, value)

    def __delete__(self, instance):
        delattr(instance, self.cache_slot)
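
For completeness, a usage sketch of the recipe above (Document, text and word_count are made-up names): the class has to declare a matching "<name>_cache" slot for the descriptor to store into:

class Document:
    __slots__ = ("text", "word_count_cache")  # cache slot follows the naming convention

    def __init__(self, text):
        self.text = text

    @cached_slot
    def word_count(self):
        return len(self.text.split())

doc = Document("the quick brown fox")
print(doc.word_count)   # first access: computed and stored in 'word_count_cache'
print(doc.word_count)   # later accesses: fetched from the slot via __get__
del doc.word_count      # clears the cached value; next access recomputes it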

It can't be done as a non-data descriptor though (and hence can't be done efficiently in pure Python), so I don't think it makes sense to try to make cached_property itself work implicitly with both normal attributes and slot entries. Instead, cached_property can handle the common case as simply and efficiently as possible, and the cached_slot case can either be handled separately or not at all.
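
For reference, the common (instance __dict__) case looks roughly like the well-known non-data descriptor recipe below (a sketch, not the exact code proposed on this issue): once the value lands in the instance __dict__, it shadows the descriptor, so later lookups never re-enter __get__:

class cached_property:
    def __init__(self, func):
        self.func = func
        self.__doc__ = func.__doc__

    def __get__(self, instance, cls=None):
        if instance is None:
            return self
        # Store the computed value in the instance __dict__; because there is
        # no __set__, the stored value shadows the descriptor from now on.
        value = instance.__dict__[self.func.__name__] = self.func(instance)
        return value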

The "don't offer cached_slot at all" argument would be that, given slots are used for memory-efficiency when handling large numbers of objects and lazy initialization is used to avoid unnecessary computations, a "lazily initialised slot" can be viewed as "64 bits of frequently wasted space", and hence we can expect demand for the feature to be incredibly low (and the community experience to date bears out that expectation).