high fragmentation of the memory heap on Windows #63445
Comments
Taken from http://stackoverflow.com/a/19287553/135079:
Python 2.7 (r27:82525, Jul 4 2010, 09:01:59) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> a = {}
>>> for k in xrange(1000000): a['a' * k] = k
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
MemoryError
>>> len(a)
64036
If we take the total length of the keys:
>>> log(sum(xrange(64036)), 2)
30.93316861532543
we get close to a 32-bit integer overflow. After that, del a will free all 2 GB of allocated memory (as shown in Task Manager), but executing the same loop again:
>>> a = {}
>>> for k in xrange(1000000): a['a' * k] = k
will cause:
MemoryError
and the dictionary length will be something like:
>>> len(a)
87382 |
My guess would be you are dealing with memory fragmentation issues, but I'll let someone more knowledgeable confirm that before closing the issue :) |
Here on 32-bit Windows Vista, with Python 3:
C:\Python33>python.exe
Python 3.3.2 (v3.3.2:d047928ae3f6, May 16 2013, 00:03:43) [MSC v.1600 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> a = {}
>>> for k in range(1000000): a['a' * k] = k
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
MemoryError
>>> del a
And here too Task Manager shows that Python has given back close to 2 GB of memory.
>>> a = {}
>>> for k in range(100000): a['a' * k] = k
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
MemoryError
And here Task Manager shows that there's tons of memory still available. sys._debugmallocstats() shows nothing odd after another "a = {}" - only 7 arenas are allocated, less than 2 MB.
Of course this has nothing to do with running in interactive mode. Same thing happens in a program (catching MemoryError, etc). So best guess is that Microsoft's allocators have gotten fatally fragmented, but I don't know how to confirm/refute that.
It would be good to get some reports from non-Windows 32-bit boxes. If those are fine, then we can be "almost sure" it's a Microsoft problem. |
Works fine on a 32-bit Linux build (64-bit machine, though):
>>> import sys
>>> sys.maxsize
2147483647
>>> a = {}
>>> for k in range(1000000): a['a' * k] = k
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
MemoryError
>>> del a
>>> a = {}
>>> for k in range(1000000): a[k] = k
...
>>>
Note that Linux says the process eats 4 GB of RAM. |
The int type of Python 2 uses an internal "free list" which has an unlimited size. Once you have had 1 million different integers alive at the same time, that memory is never released, even if the container storing all these integers is removed, because a reference is kept in the free list. This is a known issue of Python 2, solved "indirectly" in Python 3, because the "int" type of Python 3 does not use a free list. The long type of Python 2 does not use a free list either. |
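For illustration, the free-list pattern described above looks roughly like this in C (a sketch of the mechanism only, not CPython's actual Objects/intobject.c code):

#include <stdio.h>
#include <stdlib.h>

typedef struct node {
    struct node *next;  /* links free nodes together */
    long value;
} node;

/* Freed nodes are kept here forever; the list can only grow. */
static node *free_list = NULL;

static node *node_alloc(void)
{
    if (free_list != NULL) {      /* reuse a previously freed node */
        node *n = free_list;
        free_list = n->next;
        return n;
    }
    return malloc(sizeof(node));  /* grow; this memory is never free()d */
}

static void node_dealloc(node *n)
{
    n->next = free_list;          /* push onto the free list ... */
    free_list = n;                /* ... instead of calling free() */
}

int main(void)
{
    node *a = node_alloc();
    a->value = 42;
    node_dealloc(a);              /* memory retained, not released */
    node *b = node_alloc();       /* the same block is handed back */
    printf("reused: %s\n", a == b ? "yes" : "no");
    return 0;
}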
haypo, there would only be a million ints here even if the loop had completed. That's trivial in context (maybe 14 MB for the free list in Python 2?). And note that I did my example run under Python 3. Besides, the OP and I both reported that Task Manager showed that Python did release "almost all" of the memory back to the OS. While the first MemoryError occurs when available memory has been truly exhausted, the second MemoryError occurs with way over a gigabyte of memory still "free" (according to Task Manager). Best guess is that it is indeed free, but so fragmented that MS C's allocator can't deal with it. That would not be unprecedented on Windows ;-) |
Let's test this in pure C. Compile and run the attached uglyhack.c on win32; if it reports something significantly less than 100%, it's probably safe to conclude that this has nothing to do with Python. |
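The attachment itself is not reproduced in this thread; a minimal sketch of such a test, with an assumed block count and size pattern rather than the attachment's actual values, might look like:

#include <stdio.h>
#include <stdlib.h>

#define NBLOCKS 60000  /* assumed count; enough to exhaust a 32-bit heap */

static void *blocks[NBLOCKS];

static size_t grab_blocks(void)
{
    size_t i, grabbed = 0;

    /* Allocate blocks of steadily increasing size, mimicking the
       ever-longer key strings in the Python test case. */
    for (i = 0; i < NBLOCKS; i++) {
        blocks[i] = malloc(i + 1);
        if (blocks[i] != NULL)
            grabbed++;
    }
    /* Give everything back to the allocator. */
    for (i = 0; i < NBLOCKS; i++)
        free(blocks[i]);
    return grabbed;
}

int main(void)
{
    size_t first = grab_blocks();
    size_t second = grab_blocks();  /* same pattern on the "freed" heap */

    /* On a healthy heap the second pass should succeed about as often as
       the first; much less than 100% would suggest fragmentation. */
    printf("%lu %lu %f%%\n", (unsigned long)first, (unsigned long)second,
           100.0 * (double)second / (double)first);
    return 0;
}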
Python uses an allocator called "pymalloc" for allocations smaller than 512 bytes (256 bytes in older versions); larger requests are passed on to the system allocator. It was discussed to replace pymalloc with the Windows Low Fragmentation Heap allocator. |
@Haypo, this has nothing to do with PyMalloc. As I reported in my first message, only 7 PyMalloc arenas are in use at the end of the program, less than 2 MB total. *All* other arenas ever used were released to the OS. And that's not surprising. The vast bulk of the memory used in the test case isn't in small objects, it's in *strings* of ever-increasing size. Those are gotten by many calls to the system malloc(). |
@Esa.Peuha, fine idea! Alas, on the same box I used before, uglyhack.c displays (it varies a tiny amount from run to run):
65198 65145 99.918709%
So it's not emulating enough of Python's malloc()/free() behavior to trigger the same kind of problem :-( |
By the way, in Python 3.4 arena allocation is done using VirtualAlloc and VirtualFree, that may make a difference too. |
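For reference, direct OS-level arena management looks roughly like this sketch (256 KB is pymalloc's arena size; the error handling here is illustrative):

#include <windows.h>
#include <stdio.h>

#define ARENA_SIZE (256 * 1024)  /* pymalloc's arena size */

int main(void)
{
    /* Reserve and commit an arena-sized region directly from the OS,
       bypassing the CRT's malloc heap entirely. */
    void *arena = VirtualAlloc(NULL, ARENA_SIZE,
                               MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (arena == NULL) {
        printf("VirtualAlloc failed: %lu\n", GetLastError());
        return 1;
    }

    /* ... carve the arena into pools and blocks, as pymalloc does ... */

    /* VirtualFree returns the pages to the OS immediately, so a freed
       arena cannot contribute to CRT heap fragmentation. */
    VirtualFree(arena, 0, MEM_RELEASE);
    return 0;
}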
@pitrou, maybe, but seems very unlikely. As explained countless times already ;-), PyMalloc allocates few arenas in the test program. "Small objects" are relatively rare here. Almost all the memory is consumed by strings of ever-increasing length. PyMalloc passes those large requests on to the system malloc(). |
Indeed, a 32-bit counter would already have overflowed :-D |
Just to be sure, I tried under current default (3.4.0a3+). Same behavior. |
After running ugly_hack(), trying to malloc a largeish block (1 MB) fails:

#include <assert.h>
#include <stdlib.h>

int ugly_hack(void);  /* defined in the attached uglyhack.c */

int main(void)
{
    int first;
    void *ptr;

    ptr = malloc(1024 * 1024);
    assert(ptr != NULL);  /* succeeds */
    free(ptr);

    first = ugly_hack();

    ptr = malloc(1024 * 1024);
    assert(ptr != NULL);  /* fails */
    free(ptr);

    return 0;
} |
@sbt, excellent! Happens for me too: trying to allocate a 1 MB block fails after running ugly_hack() once.
That fits the symptoms: lots of smaller, varying-sized allocations, followed by free()s, followed by a "largish" allocation. Don't know _exactly_ which largish allocation is failing. Could be the next non-trivial dict resize, or, because I'm running under Python 3, a largish Unicode string allocation.
Unfortunately, using the current default-branch Python in a debug build, the original test case doesn't misbehave, so I can't be more specific. That could be because, in a debug build, Python does more of the memory management itself. Or at least it used to - everything got more complicated in my absence ;-)
Anyway, since "the problem" has been produced with a simple pure C program, I think we need to close this as "wont fix". |
Can someone try the low fragmentation allocator? |
I tried jemalloc on Linux, which behaves better than (g)libc in terms of RSS. |
@Haypo, I'm not sure what you mean by "the low fragmentation allocator". If it's referring to this: http://msdn.microsoft.com/en-us/library/windows/desktop/aa366750(v=vs.85).aspx it doesn't sound all that promising for this failing case. But, sure, someone should try it ;-) |
BTW, everything I've read (including the MSDN page I linked to) says that the LFH is enabled _by default_ starting in Windows Vista (which I happen to be using). So unless Python does something to _disable_ it (I don't know), there's nothing to try here. |
Tim> http://msdn.microsoft.com/en-us/library/windows/desktop/aa366750(v=vs.85).aspx

Yes, this one.

Tim> BTW, everything I've read (including the MSDN page I linked to) says that the LFH is enabled by default starting in Windows Vista

Extract of the link: "To enable the LFH for a heap, use the GetProcessHeap function to obtain a handle to the default heap of the calling process [...] Then call the HeapSetInformation function with the handle."

It should be enabled explicitly. |
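Enabling the LFH explicitly, per that MSDN page, would look roughly like the following sketch; _get_heap_handle() (from the MSVC CRT) targets the heap that malloc() uses, and the value 2 is the documented LFH setting:

#include <windows.h>
#include <malloc.h>   /* _get_heap_handle() */
#include <stdio.h>

int main(void)
{
    ULONG lfh = 2;  /* 2 selects the Low Fragmentation Heap */

    /* Enable the LFH on the CRT heap that malloc()/free() use. */
    if (!HeapSetInformation((HANDLE)_get_heap_handle(),
                            HeapCompatibilityInformation,
                            &lfh, sizeof(lfh)))
        printf("CRT heap: HeapSetInformation failed: %lu\n", GetLastError());

    /* And on the default process heap, for completeness. */
    if (!HeapSetInformation(GetProcessHeap(),
                            HeapCompatibilityInformation,
                            &lfh, sizeof(lfh)))
        printf("process heap: HeapSetInformation failed: %lu\n", GetLastError());

    return 0;
}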
Victor, please read your own link before posting:
"""The information in this topic applies to Windows Server 2003 and Windows XP.""" |
Oh, I missed this part; that's why I didn't understand Tim's remark. So the issue comes from the Windows heap allocator. I don't see any obvious improvement that Python can make to reduce the memory usage, so I'm closing the issue.
You have to modify your application to allocate objects differently, to limit the fragmentation of the heap manually. Another option, maybe more complex, is to create a subprocess to process the data and destroy the process to release the memory; multiprocessing helps to implement that.
I will maybe try jemalloc on Windows, but I prefer to open a new issue if I find something interesting. |