
Author rhettinger
Recipients multiks2200, rhettinger
Date 2021-04-23.07:36:40
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1619163400.78.0.271834283033.issue43911@roundup.psfhosted.org>
In-reply-to
Content
For a large amount of data, a list uses a single large contiguous block of memory while a deque uses many small discontiguous blocks.  In your demo, I suspect that some of the memory pages for deque's blocks are also being used for other small bits of data.  If any of those small bits survive (either in active use or held for future use by the small memory allocator), then the page cannot be reclaimed.  When memory fragments like this, it manifests as an increasing amount of process memory.

Also, the interaction between the C library allocation functions and the O/S isn't under our control.  Even when our code correctly calls PyMem_Free(), it isn't assured that total process memory goes back down.

As an experiment, try to recreate the effect by building a list of lists:

    class Queue(list):
        """List of lists that mimics deque's pattern of many small blocks."""
        def put(self, x):
            # Start a new block once the current one reaches 66 entries.
            if not self or len(self[-1]) >= 66:
                self.append([])
            self[-1].append(x)
        def get(self):
            if not self:
                raise IndexError
            block = self[0]
            x = block.pop(0)
            if not block:
                # Discard the exhausted block so its memory can be freed.
                self.pop(0)
            return x
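A quick way to sanity-check the experiment is to drive the class as a FIFO queue and confirm items come back in insertion order across block boundaries.  This sketch repeats the class definition so it runs standalone; the driver loop at the bottom is illustrative, not part of the original experiment:

```python
class Queue(list):
    """List of lists that mimics deque's pattern of many small blocks."""
    def put(self, x):
        # Start a new block once the current one reaches 66 entries.
        if not self or len(self[-1]) >= 66:
            self.append([])
        self[-1].append(x)
    def get(self):
        if not self:
            raise IndexError
        block = self[0]
        x = block.pop(0)
        if not block:
            # Discard the exhausted block so its memory can be freed.
            self.pop(0)
        return x

q = Queue()
for i in range(200):          # 200 items span four 66-entry blocks
    q.put(i)
print([q.get() for _ in range(3)])   # -> [0, 1, 2]
```

If process memory stays flat when churning items through this list-of-lists version but grows with the deque version, that points to page-level fragmentation rather than a leak in deque itself.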
History
Date                 User        Action           Args
2021-04-23 07:36:40  rhettinger  set              recipients: + rhettinger, multiks2200
2021-04-23 07:36:40  rhettinger  set              messageid: <1619163400.78.0.271834283033.issue43911@roundup.psfhosted.org>
2021-04-23 07:36:40  rhettinger  link             issue43911 messages
2021-04-23 07:36:40  rhettinger  create