Author loewis
Recipients gregory.p.smith, loewis, pitrou, rhettinger, stutzbach
Date 2008-12-18.23:17:18
Message-id <494AD9FD.6000205@v.loewis.de>
In-reply-to <1229592517.9355.7.camel@localhost>
Content
> But what counts is where tuples can be created in massive numbers or
> sizes: the eval loop, the tuple type's tp_new, and a couple of other
> places. We don't need to optimize every single tuple created by the
> interpreter or extension modules (and even then, one can simply call
> _PyTuple_Optimize()).

Still, I think this patch introduces too much code duplication. There
should be a single function that performs the optional untracking; that
one function then gets called from the various places.
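
Roughly, I am thinking of something like the sketch below. The name is
made up, and the GC macros (PyObject_IS_GC, _PyObject_GC_IS_TRACKED,
_PyObject_GC_UNTRACK) stand in for whatever the patch actually uses:

#include "Python.h"

/* Hypothetical shared helper (name made up): untrack an exact tuple
   from the cyclic GC if none of its elements can ever take part in a
   reference cycle.  Every creation site (tp_new, the eval loop, gc
   itself) would call this one function instead of duplicating the
   loop. */
static void
tuple_maybe_untrack(PyObject *op)
{
    Py_ssize_t i, n;

    if (!PyTuple_CheckExact(op) || !_PyObject_GC_IS_TRACKED(op))
        return;

    n = PyTuple_GET_SIZE(op);
    for (i = 0; i < n; i++) {
        PyObject *elt = PyTuple_GET_ITEM(op, i);
        /* A NULL slot means the tuple is not fully constructed yet. */
        if (elt == NULL)
            return;
        /* Be conservative: any GC-supporting element may become
           tracked again later (a dict, say), except an exact tuple
           that has itself already been untracked. */
        if (PyObject_IS_GC(elt) &&
            (!PyTuple_CheckExact(elt) || _PyObject_GC_IS_TRACKED(elt)))
            return;
    }
    _PyObject_GC_UNTRACK(op);
}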

> Also, this approach is more expensive

I'm skeptical. It could well be *less* expensive, namely when many
tuples get deleted before a gc pass even happens. With the current
approach you check at creation time whether you can untrack them, and
that check is wasted if the tuple gets deallocated quickly anyway.
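
Deferring the check to collection time means only the tuples that
actually survive until a gc pass pay for it, e.g. (a sketch only;
FROM_GC and the gc_next link are gcmodule.c internals assumed here,
and tuple_maybe_untrack is the made-up helper from above):

/* Sketch of the deferred check, run while gc scans a generation:
   only tuples that lived long enough to be seen by the collector are
   examined; tuples that died earlier never pay for the check. */
static void
untrack_surviving_tuples(PyGC_Head *head)
{
    PyGC_Head *gc = head->gc.gc_next;
    while (gc != head) {
        /* Save the link first: untracking unlinks the node. */
        PyGC_Head *next = gc->gc.gc_next;
        PyObject *op = FROM_GC(gc);
        if (PyTuple_CheckExact(op))
            tuple_maybe_untrack(op);
        gc = next;
    }
}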
History
Date                 User    Action  Args
2008-12-18 23:17:20  loewis  set     recipients: + loewis, rhettinger, gregory.p.smith, pitrou, stutzbach
2008-12-18 23:17:19  loewis  link    issue4688 messages
2008-12-18 23:17:19  loewis  create