Message391663
For a large amount of data, a list uses a single large contiguous block of memory, while a deque uses many small discontiguous blocks. In your demo, I suspect that some of the memory pages holding the deque's blocks are also being used for other small bits of data. If any of those small bits survive (either in active use or held for future use by the small-object allocator), then the page cannot be reclaimed. When memory fragments like this, it manifests as an increasing amount of process memory.
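That fragmentation effect can be illustrated with a small pure-Python toy model (the PAGE_SLOTS constant and the page layout here are illustrative assumptions, not the real allocator):

```python
# Toy model of page fragmentation: each "page" holds several small
# allocations, and a page can be returned to the OS only when every
# slot on it is free.
PAGE_SLOTS = 8

pages = [[object() for _ in range(PAGE_SLOTS)] for _ in range(100)]

# Free everything except one "survivor" per page.
for page in pages:
    del page[1:]

# 7 of every 8 objects are gone, yet every page still holds one
# live object, so none of the 100 pages can be reclaimed.
live_pages = sum(1 for page in pages if page)
assert live_pages == 100
```

A single long-lived object per page is enough to keep the whole page resident, which is why freeing most of the data need not shrink the process.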
Also, the interaction between the C library allocation functions and the OS isn't under our control. Even when our code correctly calls PyMem_Free(), there is no assurance that total process memory goes back down.
As an experiment, try to recreate the effect by building a list of lists:
class Queue(list):
    def put(self, x):
        if not self or len(self[-1]) >= 66:
            self.append([])
        self[-1].append(x)

    def get(self):
        if not self:
            raise IndexError
        block = self[0]
        x = block.pop(0)
        if not block:
            self.pop(0)
        return x
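A quick sketch of how that class behaves (the class is repeated here so the snippet runs standalone; the block count follows from the 66-item block size):

```python
# The Queue class from above, repeated so this snippet is self-contained.
class Queue(list):
    def put(self, x):
        if not self or len(self[-1]) >= 66:
            self.append([])
        self[-1].append(x)

    def get(self):
        if not self:
            raise IndexError
        block = self[0]
        x = block.pop(0)
        if not block:
            self.pop(0)
        return x

q = Queue()
for i in range(200):
    q.put(i)

# Items land in blocks of at most 66, so 200 items occupy 4 blocks.
assert len(q) == 4

# get() drains the front block first, preserving FIFO order, and
# drops each block object as soon as it empties.
assert [q.get() for _ in range(200)] == list(range(200))
assert not q
```

If the list-of-lists version shows the same growth in process memory as the deque, that would point at allocator/page fragmentation rather than anything deque-specific.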
Date | User | Action | Args
2021-04-23 07:36:40 | rhettinger | set | recipients: + rhettinger, multiks2200
2021-04-23 07:36:40 | rhettinger | set | messageid: <1619163400.78.0.271834283033.issue43911@roundup.psfhosted.org>
2021-04-23 07:36:40 | rhettinger | link | issue43911 messages
2021-04-23 07:36:40 | rhettinger | create |