
Author yselivanov
Recipients gvanrossum, josh.r, levkivskyi, ned.deily, python-dev, serhiy.storchaka, yselivanov
Date 2016-11-10.01:20:08
Message-id <1478740812.04.0.0226526644551.issue28649@psf.upfronthosting.co.za>
Content
So Ivan made an interesting observation: if we use the Python version of functools.lru_cache, the typing tests start to leak like crazy:

    beginning 6 repetitions
    123456
    ......
    test_typing leaked [24980, 24980, 24980] references, sum=74940

I experimented a bit, and I *think* I know what's happening: 

* typing uses an lru cache for types. 

* It looks like some types in the cache are being *reused* by different tests.

* Because typing doesn't set the size of the lru cache, it defaults to 128.

* When many typing tests execute, the cache evicts some entries, but because the types in test_typing are so complex, I believe the resulting graph of objects is too complex for the GC to go through (tens of thousands of cross-linked objects).
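The caching behavior described above can be sketched in isolation (this is a minimal illustration of lru_cache's default capacity and eviction, not typing's actual internal cache):

    from functools import lru_cache

    @lru_cache()  # no maxsize given, so the default of 128 applies
    def make_type(name):
        # Stand-in for an expensive type construction.
        return type(name, (), {})

    # Fill the cache past its default capacity; the oldest entries
    # are evicted and become garbage the GC has to collect.
    for i in range(200):
        make_type("T%d" % i)

    info = make_type.cache_info()
    # info.maxsize == 128, info.currsize == 128: 72 entries were evicted.
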

A simple fix that removes the refleak with the Python version of lru_cache is to add a 'tearDown' method to 'test_typing.BaseTestCase':

    class BaseTestCase(TestCase):
        def tearDown(self):
            # Run typing's internal cache-clearing callbacks so no
            # cached types survive from one test into the next.
            for f in typing._cleanups:
                f()
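For context, typing._cleanups is a CPython-internal list of zero-argument callables (essentially the cache_clear methods of typing's internal lru_caches), so the tearDown above just empties every cache. A standalone sketch of what it does (internal API, subject to change):

    import typing

    typing.List[int]  # parameterizing a generic populates the internal type cache

    # Each entry is a callable that clears one of typing's caches.
    for f in typing._cleanups:
        f()
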

Now, all of this is just my guess at what's going on here. It might be something that has to be fixed in typing itself, or maybe we should just add the tearDown.

Guido, Ivan, what do you think?