
Author neologix
Recipients dmalcolm, flox, kaifeng, neologix, pitrou
Date 2011-05-02.16:57:54
I've had some time to look at this, and I've written a quick demo
patch that should - hopefully - fix this, and reduce memory
fragmentation.
 A little bit of background first:
 - a couple of years ago (probably true when pymalloc was designed and
merged), glibc's malloc used brk for small and medium allocations, and
mmap for large allocations, to reduce memory fragmentation (also,
because of the processes' VM layout in older Linux 32-bit kernels, you
couldn't have a heap bigger than 1GB). The threshold for routing
requests to mmap was fixed, with a default of 256KB (exactly the
size of a pymalloc arena). Thus, all arenas were allocated with mmap.
 - in 2006, a patch was merged to make this mmap threshold dynamic:
when a mmap'ed block is freed, the threshold is bumped up to that
block's size
 - as a consequence, with modern glibc/eglibc versions, the first
arenas will be allocated through mmap, but as soon as one of them is
freed, subsequent arenas will be allocated from the heap through brk,
and not mmap
 - imagine the following happens :
   1) program creates many objects
   2) to store those objects, many arenas are allocated from the heap
through brk
   3) program destroys all the objects created, except 1 which is in
the last allocated arena
   4) since the arena has at least one object in it, it's not
deallocated, and thus the heap doesn't shrink, and the memory usage
remains high (with a huge hole between the base of the heap and its
top)
 Note that 3) can be a single leaked reference, or just a variable
that doesn't get deallocated immediately. As an example, here's a demo
program that should exhibit this behaviour:

 import gc

 # allocate/de-allocate/re-allocate the array to make sure that arenas are
 # allocated through brk
 tab = []
 for i in range(1000000):
     tab.append(i)
 tab = []
 for i in range(1000000):
     tab.append(i)

 print('after allocation')
 input()

 # allocate a dict at the top of the heap (actually it works even without this)
 a = {}

 # deallocate the big array
 del tab
 print('after deallocation')
 input()

 # collect
 gc.collect()
 print('after collection')
 input()

 You should see that even after the big array has been deallocated and
collected, the memory usage doesn't decrease.
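To watch this happen directly, the demo can be instrumented with a small helper that reads the resident set size from /proc/self/status (Linux-specific; the helper name rss_kb is mine, not part of the patch):

```python
# Sketch (Linux-only): report the process's resident set size, so the demo's
# "after allocation" / "after deallocation" / "after collection" phases can
# be compared without an external ps loop.
def rss_kb():
    """Return VmRSS in kB, or None if /proc is not available."""
    try:
        with open('/proc/self/status') as f:
            for line in f:
                if line.startswith('VmRSS:'):
                    # line looks like "VmRSS:     12345 kB"
                    return int(line.split()[1])
    except OSError:
        return None
    return None

print('RSS:', rss_kb(), 'kB')
```

Calling rss_kb() right after each print() in the demo shows the same pattern as the ps output below: without the patch, RSS barely drops after the del and the collection.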

 Also, there's another factor coming into play, the linked list of
arenas (the "arenas" variable in Objects/obmalloc.c), which is expanded
when there are not enough arenas allocated: if this variable is
realloc()ed while the heap is really large and without a hole in it, it
will be allocated from the top of the heap, and since it's not resized
when the number of used arenas goes down, it will remain at the top of
the heap and will also prevent the heap from shrinking.
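This grow-only behaviour can be illustrated with a toy model (hypothetical code of mine, not the actual obmalloc.c logic):

```python
class ArenaTable:
    """Toy model of obmalloc's `arenas` vector: its capacity only grows."""

    def __init__(self):
        self.capacity = 16   # initial number of slots
        self.used = 0        # arenas currently in use

    def allocate_arena(self):
        if self.used == self.capacity:
            # realloc(): capacity doubles; if the heap is large and hole-free
            # at that moment, the new block lands at the top of the heap
            self.capacity *= 2
        self.used += 1

    def free_arena(self):
        # arenas are returned, but the table itself is never shrunk, so its
        # (possibly heap-top) allocation stays put
        self.used -= 1

t = ArenaTable()
for _ in range(100):
    t.allocate_arena()
for _ in range(100):
    t.free_arena()
# capacity stays at 128 even though used is back to 0
```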

 My demo patch (pymem.diff) thus does two things:
 1) use mallopt to fix the mmap threshold so that arenas are allocated
through mmap
 2) increase the maximum size of requests handled by pymalloc from
256B to 512B (as discussed above with Antoine). The reason is that if
a PyObject_Malloc request is not handled by pymalloc from an arena
(i.e. greater than 256B) and is less than the mmap threshold, then we
can't do anything if it's not freed and remains in the middle of the
heap. That's exactly what's happening in the OP's case: some
dictionaries aren't deallocated even after the collection (I couldn't
quite identify them, but there seem to be some UTF-8 codecs and other
objects among them).

 To sum up, this patch greatly increases the likelihood of Python
objects being allocated from arenas, which should reduce fragmentation
(and seems to speed up certain operations quite a bit), and ensures
that arenas are allocated from mmap so that a single dangling object
doesn't prevent the heap from being trimmed.
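The mallopt part can be approximated from pure Python via ctypes, which may be handy for testing the idea before patching the interpreter. This is a sketch assuming a glibc libc; M_MMAP_THRESHOLD is -3 in glibc's <malloc.h>, and the function name pin_mmap_threshold is mine:

```python
import ctypes
import ctypes.util

M_MMAP_THRESHOLD = -3      # mallopt parameter number, from glibc <malloc.h>
ARENA_SIZE = 256 * 1024    # pymalloc's arena size

def pin_mmap_threshold(size=ARENA_SIZE):
    """Ask glibc to serve any allocation >= `size` with mmap().

    Returns True on success, False on failure, or None when mallopt
    isn't available (non-glibc platforms).
    """
    name = ctypes.util.find_library("c")
    if name is None:
        return None
    libc = ctypes.CDLL(name)
    if not hasattr(libc, "mallopt"):
        return None
    # mallopt returns 1 on success, 0 on error
    return libc.mallopt(M_MMAP_THRESHOLD, size) == 1
```

Once the threshold is pinned at the arena size, it can no longer be raised by the dynamic-threshold heuristic, so every 256KB arena request goes through mmap and is returned to the OS as soon as the arena is freed.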

 I've tested it on RHEL6 64-bit and Debian 32-bit, but it'd be great
if someone else could try it - and of course comment on the above
explanation/proposed solution.
Here's the result on Debian 32-bit:

Without patch:

*** Python 3.3.0 alpha
  0  1843 pts/1    S+     0:00      1  1795   9892   7528  0.5 ./python
  1  1843 pts/1    S+     0:16      1  1795  63584  60928  4.7 ./python
  2  1843 pts/1    S+     0:33      1  1795 112772 109064  8.4 ./python /home/cf/
  3  1843 pts/1    S+     0:50      1  1795 162140 159424 12.3 ./python /home/cf/
  4  1843 pts/1    S+     1:06      1  1795 211376 207608 16.0 ./python /home/cf/
END  1843 pts/1    S+     1:25      1  1795 260560 256888 19.8 ./python /home/cf/
 GC  1843 pts/1    S+     1:26      1  1795 207276 204932 15.8 ./python /home/cf/

With patch:

*** Python 3.3.0 alpha
  0  1996 pts/1    S+     0:00      1  1795  10160   7616  0.5 ./python
  1  1996 pts/1    S+     0:16      1  1795  64168  59836  4.6 ./python
  2  1996 pts/1    S+     0:33      1  1795 114160 108908  8.4 ./python /home/cf/
  3  1996 pts/1    S+     0:50      1  1795 163864 157944 12.2 ./python /home/cf/
  4  1996 pts/1    S+     1:07      1  1795 213848 207008 15.9 ./python /home/cf/
END  1996 pts/1    S+     1:26      1  1795  68280  63776  4.9 ./python
 GC  1996 pts/1    S+     1:26      1  1795  12112   9708  0.7 ./python

Antoine: since increasing the pymalloc threshold is part of the
solution to this problem, I'm attaching a standalone patch here
(pymalloc_threshold.diff). It's included in pymem.diff.
I'll try to post some pybench results tomorrow.
File name Uploaded
pymalloc_threshold.diff neologix, 2011-05-02.16:57:54
pymem.diff neologix, 2011-05-02.16:57:53