
high fragmentation of the memory heap on Windows #63445

Closed

ghost opened this issue Oct 13, 2013 · 23 comments

Labels
OS-windows performance Performance or resource usage

Comments

@ghost

ghost commented Oct 13, 2013

BPO 19246
Nosy @tim-one, @pitrou, @vstinner, @tjguk, @bitdancer
Files
  • uglyhack.c: test program in C
  • Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = None
    closed_at = <Date 2013-10-15.08:33:01.329>
    created_at = <Date 2013-10-13.12:00:01.484>
    labels = ['OS-windows', 'performance']
    title = 'high fragmentation of the memory heap on Windows'
    updated_at = <Date 2013-10-15.08:53:48.139>
    user = None

    bugs.python.org fields:

    activity = <Date 2013-10-15.08:53:48.139>
    actor = 'pitrou'
    assignee = 'none'
    closed = True
    closed_date = <Date 2013-10-15.08:33:01.329>
    closer = 'vstinner'
    components = ['Windows']
    creation = <Date 2013-10-13.12:00:01.484>
creator = 'Пётр.Дёмин'
    dependencies = []
    files = ['32110']
    hgrepos = []
    issue_num = 19246
    keywords = []
    message_count = 23.0
    messages = ['199698', '199730', '199813', '199814', '199815', '199817', '199857', '199866', '199936', '199940', '199941', '199943', '199944', '199945', '199950', '199958', '199960', '199961', '199967', '199968', '199982', '199983', '199984']
    nosy_count = 8.0
nosy_names = ['tim.peters', 'pitrou', 'vstinner', 'tim.golden', 'r.david.murray', 'sbt', 'Esa.Peuha', 'Пётр.Дёмин']
    pr_nums = []
    priority = 'normal'
    resolution = 'rejected'
    stage = None
    status = 'closed'
    superseder = None
    type = 'resource usage'
    url = 'https://bugs.python.org/issue19246'
    versions = ['Python 2.7', 'Python 3.4']

    @ghost
    Author

    ghost commented Oct 13, 2013

    Taken from http://stackoverflow.com/a/19287553/135079
    When I consume all memory:

        Python 2.7 (r27:82525, Jul  4 2010, 09:01:59) [MSC v.1500 32 bit (Intel)] on win32
        Type "help", "copyright", "credits" or "license" for more information.
        >>> a = {}
        >>> for k in xrange(1000000): a['a' * k] = k
        ...
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        MemoryError
        >>> len(a)
        64036

If we take the total length of the keys:

        >>> log(sum(xrange(64036)), 2)
        30.93316861532543

we get close to a 32-bit overflow: the total is 64036 · 64035 / 2 ≈ 2^31 bytes, so the keys alone nearly exhaust the 2 GB user address space of a 32-bit process. After that,

    >>> a = {}
    

will free all 2 GB of allocated memory (as shown in Task Manager), but executing:

    >>> for k in xrange(1000000): a[k] = k
    

will cause:

    MemoryError
    

and the dictionary length will be something like:

        >>> len(a)
        87382

Repository owner added the OS-windows and performance labels on Oct 13, 2013
    @bitdancer
    Member

    My guess would be you are dealing with memory fragmentation issues, but I'll let someone more knowledgeable confirm that before closing the issue :)

    @tim-one
    Member

    tim-one commented Oct 13, 2013

    Here on 32-bit Windows Vista, with Python 3:

    C:\Python33>python.exe
    Python 3.3.2 (v3.3.2:d047928ae3f6, May 16 2013, 00:03:43) [MSC v.1600 32 bit (Intel)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> a = {}
    >>> for k in range(1000000): a['a' * k] = k
    ...
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    MemoryError
    >>> del a

    And here too Task Manager shows that Python has given back close to 2GB of memory.

    >>> a = {}
    >>> for k in range(100000): a['a' * k] = k
    ...
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    MemoryError

    And here Task Manager shows that there's tons of memory still available. sys._debugmallocstats() shows nothing odd after another "a = {}" - only 7 arenas are allocated, less than 2 MB.

    Of course this has nothing to do with running in interactive mode. Same thing happens in a program (catching MemoryError, etc).
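For reference, a sketch of the same steps in script form (Python 3; the loop bounds match the sessions above, everything else is illustrative):

    import sys

    a = {}
    try:
        for k in range(1000000):
            a['a' * k] = k          # ever-larger strings until memory runs out
    except MemoryError:
        pass

    del a                           # Task Manager: ~2 GB handed back to the OS

    a = {}
    try:
        for k in range(1000000):
            a['a' * k] = k          # fails again, much earlier this time
    except MemoryError:
        print('second MemoryError at len(a) =', len(a))

    sys._debugmallocstats()         # only a handful of pymalloc arenas in use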

    So best guess is that Microsoft's allocators have gotten fatally fragmented, but I don't know how to confirm/refute that.

    It would be good to get some reports from non-Windows 32-bit boxes. If those are fine, then we can be "almost sure" it's a Microsoft problem.

    @pitrou
    Member

    pitrou commented Oct 13, 2013

    Works fine on a 32-bit Linux build (64-bit machine, though):

    >>> import sys
    >>> sys.maxsize
    2147483647
    >>> a = {}
    >>> for k in range(1000000): a['a' * k] = k
    ... 
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    MemoryError
    >>> del a
    >>> a = {}
    >>> for k in range(1000000): a[k] = k
    ... 
    >>> 

    Note that Linux says the process eats 4GB RAM.

    @vstinner
    Member

The int type of Python 2 uses an internal "free list" of unlimited size. Once you have had 1 million different integers alive at the same time, that memory is never released, even if the container storing all these integers is destroyed, because a reference is kept on the free list.

This is a known issue of Python 2, solved "indirectly" in Python 3, because the "int" type of Python 3 does not use a free list. The long type of Python 2 does not use a free list either.
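A minimal Python 2 sketch of the effect being described (watch the interpreter's size in Task Manager at each step):

    big = list(xrange(10 ** 7))   # ~10 million distinct int objects
    del big                       # the list itself is freed, but every int
                                  # went back to the int free list, so the
                                  # process barely shrinks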

    @tim-one
    Member

    tim-one commented Oct 13, 2013

    haypo, there would only be a million ints here even if the loop had completed. That's trivial in context (maybe 14 MB for the free list in Python 2?). And note that I did my example run under Python 3.

    Besides, the OP and I both reported that Task Manager showed that Python did release "almost all" of the memory back to the OS. While the first MemoryError occurs when available memory has been truly exhausted, the second MemoryError occurs with way over a gigabyte of memory still "free" (according to Task Manager). Best guess is that it is indeed free, but so fragmented that MS C's allocator can't deal with it. That would not be unprecedented on Windows ;-)

    @EsaPeuha
    Mannequin

    EsaPeuha mannequin commented Oct 14, 2013

    So best guess is that Microsoft's allocators have gotten fatally fragmented, but I don't know how to confirm/refute that.

    Let's test this in pure C. Compile and run the attached uglyhack.c on win32; if it reports something significantly less than 100%, it's probably safe to conclude that this has nothing to do with Python.
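For example, with the Visual C++ command-line tools (or MinGW gcc) in a 32-bit environment:

    cl uglyhack.c
    uglyhack.exe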

    @vstinner
    Member

Python uses an allocator called "pymalloc". For allocations smaller than 512 bytes, it uses arenas of 256 KB. If you allocate many small objects and later release most of them (but not all!), the memory is fragmented. For allocations larger than 512 bytes, Python falls back to malloc/free.
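A minimal sketch of that pinning effect (Python 3.3+, where sys._debugmallocstats() is available; the sizes are illustrative):

    import sys

    objs = [object() for _ in range(10 ** 6)]   # small objects -> pymalloc arenas
    objs = objs[::100]                          # free 99%, keep a scattered 1%
    sys._debugmallocstats()                     # many arenas stay allocated: one
                                                # surviving object pins its whole
                                                # 256 KB arena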

Replacing pymalloc with the Windows Low Fragmentation Heap allocator has been discussed.

@pitrou changed the title from "GC does not really free up memory in console" to "freeing then reallocating lots of memory fails under Windows" on Oct 14, 2013
    @tim-one
    Member

    tim-one commented Oct 14, 2013

    @Haypo, this has nothing to do with PyMalloc. As I reported in my first message, only 7 PyMalloc arenas are in use at the end of the program, less than 2 MB total. *All* other arenas ever used were released to the OS.

    And that's not surprising. The vast bulk of the memory used in the test case isn't in small objects, it's in *strings* of ever-increasing size. Those are gotten by many calls to the system malloc().

    @tim-one
    Member

    tim-one commented Oct 14, 2013

    @Esa.Peuha, fine idea! Alas, on the same box I used before, uglyhack.c displays (it varies a tiny amount from run to run):

    65198 65145 99.918709%

    So it's not emulating enough of Python's malloc()/free() behavior to trigger the same kind of problem :-(

    @pitrou
    Member

    pitrou commented Oct 14, 2013

By the way, in Python 3.4 arena allocation is done using VirtualAlloc and VirtualFree; that may make a difference too.

    @tim-one
    Member

    tim-one commented Oct 14, 2013

    @pitrou, maybe, but seems very unlikely. As explained countless times already ;-), PyMalloc allocates few arenas in the test program. "Small objects" are relatively rare here. Almost all the memory is consumed by strings of ever-increasing length. PyMalloc passes those large requests on to the system malloc().

    @pitrou
    Member

    pitrou commented Oct 14, 2013

    @pitrou, maybe, but seems very unlikely. As explained countless times
    already ;-),

    Indeed, a 32-bit counter would already have overflowed :-D
    You're right that's very unlikely.

    @tim-one
    Member

    tim-one commented Oct 14, 2013

    Just to be sure, I tried under current default (3.4.0a3+). Same behavior.

    @sbt
    Mannequin

    sbt mannequin commented Oct 14, 2013

After running ugly_hack(), trying to malloc a largish block (1 MB) fails:

    #include <assert.h>
    #include <stdlib.h>

    int ugly_hack(void);            /* defined in the attached uglyhack.c */

    int main(void)
    {
        void *ptr;

        ptr = malloc(1024*1024);
        assert(ptr != NULL);        /* succeeds */
        free(ptr);

        ugly_hack();

        ptr = malloc(1024*1024);
        assert(ptr != NULL);        /* fails */
        free(ptr);
        return 0;
    }

    @tim-one
    Member

    tim-one commented Oct 14, 2013

    @sbt, excellent! Happens for me too: trying to allocate a 1MB block fails after running ugly_hack() once. That fits the symptoms: lots of smaller, varying-sized allocations, followed by free()s, followed by a "largish" allocation. Don't know _exactly_ which largish allocation is failing. Could be the next non-trivial dict resize, or, because I'm running under Python 3, a largish Unicode string allocation.

    Unfortunately, using the current default-branch Python in a debug build, the original test case doesn't misbehave, so I can't be more specific. That could be because, in a debug build, Python does more of the memory management itself. Or at least it used to - everything got more complicated in my absence ;-)

    Anyway, since "the problem" has been produced with a simple pure C program, I think we need to close this as "wont fix".

    @vstinner
    Member

    Anyway, since "the problem" has been produced with a simple pure C
    program, I think we need to close this as "wont fix".

    Can someone try the low fragmentation allocator?

    @vstinner
    Member

I tried jemalloc on Linux; it behaves better than (g)libc for RSS and VMS memory. I know that Firefox uses it on Windows (and maybe also Mac OS X). It may be interesting to try it and/or provide something to use it easily.

    @tim-one
    Member

    tim-one commented Oct 14, 2013

    @Haypo, I'm not sure what you mean by "the low fragmentation allocator". If it's referring to this:

    http://msdn.microsoft.com/en-us/library/windows/desktop/aa366750(v=vs.85).aspx

    it doesn't sound all that promising for this failing case. But, sure, someone should try it ;-)

    @tim-one
    Member

    tim-one commented Oct 14, 2013

    BTW, everything I've read (including the MSDN page I linked to) says that the LFH is enabled _by default_ starting in Windows Vista (which I happen to be using). So unless Python does something to _disable_ it (I don't know), there's nothing to try here.

    @vstinner
    Member

    Tim> http://msdn.microsoft.com/en-us/library/windows/desktop/aa366750(v=vs.85).aspx

    Yes, this one.

    Tim> BTW, everything I've read (including the MSDN page I linked to)
    says that the LFH is enabled _by default_ starting in Windows Vista
    (which I happen to be using). So unless Python does something to
    _disable_ it (I don't know), there's nothing to try here.

An extract from the link:

    "To enable the LFH for a heap, use the GetProcessHeap function to
    obtain a handle to the default heap of the calling process, or use the
    handle to a private heap created by the HeapCreate function. Then call
    the HeapSetInformation function with the handle."

    It should be enabled explicitly.
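For what it's worth, that explicit opt-in (only relevant on Windows XP/Server 2003, as the next comment points out) can be done from Python via ctypes; a sketch, where class 0 is HeapCompatibilityInformation and the value 2 selects the LFH:

    import ctypes

    kernel32 = ctypes.WinDLL('kernel32', use_last_error=True)
    kernel32.GetProcessHeap.restype = ctypes.c_void_p

    heap = ctypes.c_void_p(kernel32.GetProcessHeap())
    lfh = ctypes.c_ulong(2)                    # 2 == low-fragmentation heap
    if not kernel32.HeapSetInformation(heap, 0, ctypes.byref(lfh),
                                       ctypes.sizeof(lfh)):
        raise ctypes.WinError(ctypes.get_last_error())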

    @pitrou
    Member

    pitrou commented Oct 15, 2013

    It should be enabled explicitly.

    Victor, please read your own link before posting:

    """The information in this topic applies to Windows Server 2003 and
    Windows XP. Starting with Windows Vista, the system uses the
    low-fragmentation heap (LFH) as needed to service memory allocation
    requests. Applications do not need to enable the LFH for their heaps.
    """

    @vstinner
    Member

    Victor, please read your own link before posting:

Oh, I missed that part; that's why I didn't understand Tim's remark.

So the issue comes from the Windows heap allocator. I don't see any obvious improvement Python can make to reduce the memory usage, so I'm closing the issue.

You have to modify your application to allocate objects differently, to manually limit heap fragmentation. Another option, maybe more complex, is to process the data in a subprocess and destroy that process to release the memory; multiprocessing helps to implement that (a sketch follows).
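A minimal sketch of that subprocess approach (the function name and argument are illustrative):

    import multiprocessing

    def crunch(path):
        # do the allocation-heavy work here; everything is returned to
        # the OS when this worker process exits
        ...

    if __name__ == '__main__':
        worker = multiprocessing.Process(target=crunch, args=('data.bin',))
        worker.start()
        worker.join()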

I may try jemalloc on Windows, but I'd rather open a new issue if I find something interesting.

@vstinner changed the title from "freeing then reallocating lots of memory fails under Windows" to "high fragmentation of the memory heap on Windows" on Oct 15, 2013
@ezio-melotti transferred this issue from another repository on Apr 10, 2022