
Author rhettinger
Recipients multiks2200, rhettinger
Date 2021-04-23.18:30:22
Message-id <1619202622.34.0.369350035385.issue43911@roundup.psfhosted.org>
Content
> I don't think this is necessarily specific to my local build

Replace "local" with "specific to a particular combination of C compiler and operating system".  On my Mac, the effect mostly doesn't occur as all, 0.05% before the run and 0.10% after the run.   This shows that Python source isn't at fault.

Also, since the same effect arises with a list of lists, we know that this isn't deque specific.

It isn't even Python specific.  For your OS and C compiler, it would happen to any C program that made the same pattern of calls to malloc() and free().  Memory allocators vary greatly in quality and in their response to particular load patterns.  Fragmentation is a perpetual nuisance.
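
To make the effect concrete, here is a minimal sketch (my construction, not code from this thread) that watches the resident set size while churning small allocations.  It assumes Linux, where VmRSS can be read from /proc/self/status, and the numbers it prints will vary with the platform allocator:

    def rss_kib():
        # Linux-specific: resident set size as reported by the kernel.
        with open('/proc/self/status') as f:
            for line in f:
                if line.startswith('VmRSS:'):
                    return int(line.split()[1])

    print('before:', rss_kib(), 'kB')
    blocks = [[0] * 8 for _ in range(2_000_000)]   # many small mallocs
    print('peak:  ', rss_kib(), 'kB')
    del blocks                                     # free every object
    # Depending on the allocator, the freed pages may or may not be
    # returned to the operating system, so this last figure can stay
    # surprisingly high even though Python can reuse the space.
    print('after: ', rss_kib(), 'kB')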


>  the alternative deque implementations don't seem 
> to suffer from the same issue

This is unsurprising.  Different patterns of memory access produce different random fragmentation artifacts.


> I will continue and try to adapt alternate deque 
> implementations and see if it solves Queues leaking 

For your own project, consider:

1) Use a PriorityQueue, adding each element with a timestamp so that retrieval order matches insertion order (a counter-based variant that avoids timestamp ties is sketched after this list):

      from queue import PriorityQueue
      from time import time

      mem = PriorityQueue()
      mem.put((time(), msg))
      ...
      _, msg = mem.get()

2) Build against a different memory allocator, such as dlmalloc.

3) Subclass Queue with some alternate structure that doesn't tickle the fragmentation issues on your OS.  Here's one to start with:

    from queue import Queue

    class MyQueue(Queue):
        # Keep items in a dict keyed by a monotonically increasing index.
        # A dict is backed by one large table rather than the deque's
        # chain of blocks, so it exercises the allocator differently.
        def _init(self, maxsize=0):
            assert not maxsize, 'this sketch only handles unbounded queues'
            self.start = self.end = 0   # next index to _get / _put
            self.q = {}
        def _qsize(self):
            return self.end - self.start
        def _put(self, x):
            self.q[self.end] = x
            self.end += 1
        def _get(self):
            x = self.q.pop(self.start)
            self.start += 1
            return x
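
A quick sanity check of the subclass (hypothetical usage, not from the thread):

    mem = MyQueue()
    for i in range(5):
        mem.put(i)
    assert mem.qsize() == 5
    assert [mem.get() for _ in range(5)] == [0, 1, 2, 3, 4]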
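
And the counter-based variant of suggestion 1): if two puts can land on the same clock tick, equal timestamps make the tuple comparison fall through to the messages themselves, which may not be comparable.  A small sketch, with itertools.count standing in for the timestamp:

    from itertools import count
    from queue import PriorityQueue

    tick = count()               # monotonically increasing sequence number
    mem = PriorityQueue()
    mem.put((next(tick), msg))   # msg as in the snippet above
    ...
    _, msg = mem.get()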


> and hanging my instances.

I suspect that the hanging is completely unrelated.  The fragmentation cases we've studied don't hang.  They just have the undesirable property that the process holds more memory blocks than expected.  Those blocks are still available to Python for reuse; they just aren't available to other processes.

Side note:  While Python supports large queues, for most applications a queue depth that reaches 20 million is indicative of some other design flaw.


> Thanks for your time and involvement in this.

You're welcome.  I wish the best for you.