This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

Classification
Title: Python memory allocator: Free memory
Components: Interpreter Core
Versions: Python 2.5

Process
Status: closed
Resolution: accepted
Assigned To: tim.peters
Nosy List: illume, tim.peters, vulturex
Priority: normal
Keywords: patch

Created on 2005-02-15 21:27 by vulturex, last changed 2022-04-11 14:56 by admin. This issue is now closed.

Files
obmalloc-free-arenas.diff (uploaded by vulturex, 2005-02-19 14:07)
Messages (11)
msg47788 - (view) Author: Evan Jones (vulturex) Date: 2005-02-15 21:27
This is the second version of my Python memory allocator patch. 
The first version was discussed on the python-dev mailing list 
here:

http://mail.python.org/pipermail/python-dev/2005-January/051255.html

This patch enables Python to actually return memory to the 
system. The current version's memory usage will only grow. This 
version maintains the same backwards compatibility guarantees 
as the previous version: Calling PyObject_Free with a pointer that 
was returned by malloc() while NOT holding the GIL will work, and 
will not corrupt the state of the memory allocator.

The patch modifies obmalloc.c. If it is accepted, other 
modifications to that file are required. In particular, I have not yet 
updated the WITH_MEMORY_LIMITS implementation, nor have I 
looked closely at the PYMALLOC_DEBUG code to see what changes 
(if any) are required.
msg47789 - (view) Author: Evan Jones (vulturex) Date: 2005-02-18 22:08
Please ignore this patch for the moment: I'm in the process of making 
some fixes.
msg47790 - (view) Author: Evan Jones (vulturex) Date: 2005-02-19 14:07
As per the discussion on python-dev, I've removed the concurrency hack. 
The routines in obmalloc.c now *must* be called while holding the GIL, 
even if the pointer was allocated with malloc(). I also finally fixed the 
PYMALLOC_DEBUG routines, so I believe this patch is now "complete."
msg47791 - (view) Author: Evan Jones (vulturex) Date: 2005-05-10 04:31
Whoops! I uploaded a "fixed" version a while ago, but I guess I didn't 
update the comments. The patch currently attached to this report is the 
most up-to-date version. Sorry about that.
msg47792 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2005-09-05 22:43
Assigned to me.
msg47793 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2006-02-23 01:25
The patch here is out of date, but that's OK.  I created
branch tim-obmalloc, with a working version of the patch,
extensively reformatted to Python's C style, and with some
minor changes to squash compiler warnings.  I plan to finish
this during PyCon.
msg47794 - (view) Author: Evan Jones (vulturex) Date: 2006-02-23 14:29
Great news! If you need any assistance, I would be more than
happy to help.
msg47795 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2006-03-16 01:17
The tim-obmalloc branch was merged to the trunk (for Python
2.5a1) in revision 43059.  Thank you again for your hard
work and patience, Evan!
msg47796 - (view) Author: Rene Dudfield (illume) Date: 2006-06-30 04:06
Note that this patch doesn't fix all memory leaks of this
type. For example, the code below doesn't release all of its
memory after it runs: the process starts at about 3 MB, goes
up to about 56 MB, and then only drops to 50 MB.

AFAIK restarting python processes is still needed to reduce
memory usage for certain types of processes.


<pre>

import random
import os

def fill(map):
    random.seed(0)
    for i in xrange(300000):
        k1 = (random.randrange(100),
              random.randrange(100),
              random.randrange(100))
        k2 = random.randrange(100)
        if k1 in map:
            d = map[k1]
        else:
            d = dict()
        d[k2] = d.get(k2, 0) + 1
        map[k1] = d

if __name__ == "__main__":
    os.system('ps v')
    d = dict()
    fill(d)
    print "-"* 50
    print "\n" * 3
    os.system('ps v')
    del d

    import gc
    gc.collect()

    print "-"* 50
    print "\n" * 3
    os.system('ps v')
</pre>
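(A present-day aside, not part of the original thread: on Linux, the resident set size that the `ps v` calls above report can also be read directly from `/proc/self/status`, which makes the before/after comparison scriptable. This is a minimal modern Python 3 sketch assuming a Linux /proc layout; the `current_rss_kb` helper name is made up for illustration.)

```python
def current_rss_kb():
    # Read the current resident set size (VmRSS) of this process, in kB.
    # Linux-specific: relies on the /proc filesystem.
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    raise RuntimeError("VmRSS not found; not a Linux /proc layout?")

before = current_rss_kb()
data = {i: str(i) * 10 for i in range(200_000)}   # allocate many small objects
peak = current_rss_kb()
del data                                          # free them again
after = current_rss_kb()
# With arena freeing, 'after' typically sits below 'peak'; fragmentation
# and type-specific free lists keep it above 'before'.
print(before, peak, after)
```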
msg47797 - (view) Author: Rene Dudfield (illume) Date: 2006-06-30 06:02
I have done some more tests... and it seems that
dictionaries do not release as much memory as lists do.


Here is a modification of the last example posted.

If you only let fill2() run, almost all of the memory is
freed; fill2() uses lists. However, if you let the others,
which use dicts, run, not all of the memory is freed: the
process is still at 38 MB after the data is del'ed.

It is still much better than Python 2.4, but I think
something fishy must be going on with dicts.

<pre>

import random
import os

def fill_this_one_doesnot_free_much_at_all(map):
    random.seed(0)
    for i in xrange(300000):
        k1 = (random.randrange(100),
              random.randrange(100),
              random.randrange(100))
        k2 = random.randrange(100)
        if k1 in map:
            d = map[k1]
        else:
            d = dict()
        d[k2] = d.get(k2, 0) + 1
        map[k1] = d


def fill(map):
    random.seed(0)
    for i in xrange(3000000):
        map[i] = "asdf"


class Obj:
    def __init__( self ):
        self.dumb = "hello"


def fill2(map):
    a = []
    for i in xrange(300000):
        o = Obj()
        a.append(o)
    return a


if __name__ == "__main__":
    import time
    import gc

    os.system('ps v | grep memtest')
    d = dict()
    a = fill2(d)
    #a2 = fill(d)
    a2 = fill_this_one_doesnot_free_much_at_all(d)
    print "-"* 50
    print "\n" * 3
    os.system('ps v | grep memtest')
    del d
    del a

    gc.collect()

    time.sleep(2)
    for x in xrange(100000):
        pass
    print "-"* 50
    print "\n" * 3
    os.system('ps v | grep memtest')
</pre>
msg47798 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2006-06-30 06:17
As the NEWS entry says,

"""
Patch #1123430: Python's small-object allocator now returns
an arena to the system ``free()`` when all memory within an
arena becomes unused again.  Prior to Python 2.5, arenas
(256KB chunks of memory) were never freed.  Some
applications will see a drop in virtual memory size now,
especially long-running applications that, from time to
time, temporarily use a large number of small objects.  Note
that when Python returns an arena to the platform C's
``free()``, there's no guarantee that the platform C library
will in turn return that memory to the operating system. 
The effect of the patch is to stop making that impossible,
and in tests it appears to be effective at least on
Microsoft C and gcc-based systems.
"""

An instrumented debug build of current trunk showed that
Python eventually returned 472 of 544 allocated arenas to
the platform free() in your program, leaving 72 still
allocated when Python shut down.  The latter is due to
fragmentation, and there will never be "a cure" for that
since CPython guarantees never to move objects.  The arena
highwater mark was 240 arenas, which is what Python would
have held on to forever before the patch.

A large part of the relatively disappointing result we see
here is due to the fact that the tuple implementation maintains its
own free lists too:  it never returns a few thousand of the
tuple objects to obmalloc before Python shuts down, and that
in turn keeps all the arenas from which those tuple objects
were originally obtained alive until Python shuts down. 
There's nothing obmalloc or gcmodule can do about that, and
the tuple free lists are _likely_ an important optimization
in real applications.
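(A present-day aside, not part of the original thread: modern CPython exposes the allocator's arena bookkeeping via `sys._debugmallocstats()`, a CPython-specific, underscore-private debugging helper, so the arena counts Tim quotes can be observed without an instrumented build. A minimal Python 3 sketch:)

```python
import sys

# Allocate a large number of small objects so obmalloc claims many arenas.
junk = [tuple(range(8)) for _ in range(100_000)]

# CPython-specific: dumps small-object allocator statistics to stderr,
# including "# arenas allocated current" and the arena highwater mark.
sys._debugmallocstats()

del junk
# After the objects are freed, wholly-unused arenas are returned to the
# platform free(); fragmentation (and type free lists) keep some alive.
sys._debugmallocstats()
```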
History
2022-04-11 14:56:09  admin     set     github: 41581
2005-02-15 21:27:11  vulturex  create