
Excessive peak memory consumption by the Python parser #70603

Closed

tyomitch mannequin opened this issue Feb 22, 2016 · 34 comments
Labels: 3.8 (only security fixes), interpreter-core (Objects, Python, Grammar, and Parser dirs), performance (Performance or resource usage)

Comments

tyomitch mannequin commented Feb 22, 2016

BPO 26415
Nosy @freddrake, @rhettinger, @pfmoore, @vstinner, @tiran, @tjguk, @benjaminp, @encukou, @methane, @berkerpeksag, @zware, @serhiy-storchaka, @eryksun, @zooba, @tyomitch, @stratakis, @pablogsal
PRs
  • bpo-26415: reduce peak memory consumption by the parser #10995

Files
  • crash.zip
  • python_memleak.py
  • patch
  • patch
  • patch
  • devguide_patch
  • patch

Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = 'https://github.com/serhiy-storchaka'
    closed_at = <Date 2020-05-23.00:28:30.620>
    created_at = <Date 2016-02-22.23:06:20.050>
    labels = ['interpreter-core', '3.8', 'performance']
    title = 'Excessive peak memory consumption by the Python parser'
    updated_at = <Date 2020-05-23.00:28:30.617>
    user = 'https://github.com/tyomitch'

    bugs.python.org fields:

    activity = <Date 2020-05-23.00:28:30.617>
    actor = 'pablogsal'
    assignee = 'serhiy.storchaka'
    closed = True
    closed_date = <Date 2020-05-23.00:28:30.620>
    closer = 'pablogsal'
    components = ['Interpreter Core']
    creation = <Date 2016-02-22.23:06:20.050>
    creator = 'A. Skrobov'
    dependencies = []
    files = ['42011', '42091', '42101', '43261', '43664', '43665', '44370']
    hgrepos = []
    issue_num = 26415
    keywords = ['patch']
    message_count = 34.0
    messages = ['260704', '260840', '260859', '260860', '260865', '260868', '260870', '260874', '260889', '260930', '261334', '261345', '261346', '261347', '261349', '261362', '261427', '261503', '265412', '267534', '267585', '267602', '267618', '267620', '267622', '270002', '272772', '272786', '274393', '299022', '330197', '331240', '331848', '369673']
    nosy_count = 18.0
    nosy_names = ['fdrake', 'rhettinger', 'paul.moore', 'vstinner', 'christian.heimes', 'tim.golden', 'benjamin.peterson', 'petr.viktorin', 'methane', 'berker.peksag', 'zach.ware', 'serhiy.storchaka', 'eryksun', 'steve.dower', 'xcombelle', 'A. Skrobov', 'cstratak', 'pablogsal']
    pr_nums = ['10995']
    priority = 'normal'
    resolution = 'rejected'
    stage = 'resolved'
    status = 'closed'
    superseder = None
    type = 'resource usage'
    url = 'https://bugs.python.org/issue26415'
    versions = ['Python 3.8']

    tyomitch mannequin commented Feb 22, 2016

    I have a one-line module that assigns a tuple->int dictionary:

    holo_table = {(0, 0, 0, 0, 0, 0, 1, 41, 61, 66, 89): 9, (0, 0, 0, 70, 88, 98, 103, 131, 147, 119, 93): 4, [35MB skipped], (932, 643, 499, 286, 326, 338, 279, 200, 280, 262, 115): 5}

    When I try to import this module, Python grinds at 100% CPU for about half an hour, then ultimately crashes with a MemoryError.

    How much memory does it need to parse 35 MB of data with such a simple structure?

    Attaching the module, zipped to 10MB.
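
    For readers without the attachment, here is a minimal sketch (my own illustration, not the attached crash.zip) that generates a module of roughly the same shape and size, i.e. a single dict literal mapping 11-element integer tuples to small ints:

        # Hypothetical generator for a crash.py of roughly the reported shape.
        # The key/value distributions are made up; only the overall size and
        # structure matter for reproducing the parser behaviour.
        import random

        random.seed(0)

        def make_entry():
            key = ", ".join(str(random.randrange(1000)) for _ in range(11))
            return "(%s): %d" % (key, random.randrange(10))

        with open("crash.py", "w") as f:
            f.write("holo_table = {")
            f.write(", ".join(make_entry() for _ in range(700000)))  # roughly 35 MB of source
            f.write("}\n")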

    @tyomitch tyomitch mannequin added interpreter-core (Objects, Python, Grammar, and Parser dirs) performance Performance or resource usage labels Feb 22, 2016
    @serhiy-storchaka serhiy-storchaka self-assigned this Feb 24, 2016
    tyomitch mannequin commented Feb 25, 2016

    A practical note: if, instead of importing crash.py, I do a json.loads, with a few extra transformations:

    with open("crash.py") as f: holo_table={tuple(int(z) for z in k.split(', ')):v for k,v in json.loads(f.readlines()[0][13:].replace('(','"').replace(')','"')).iteritems()}

    --the whole data structure loads in a jiffy.

    Makes me wonder why this roundabout approach is so much more efficient than the native parsing.
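
    For reference, a more readable Python 3 spelling of the same workaround (a sketch, assuming the module is a single "holo_table = {...}" assignment whose keys are parenthesized integer tuples):

        import json

        with open("crash.py") as f:
            literal = f.read().split("=", 1)[1].strip()

        # Turning "(1, 2, 3): 9" into '"1, 2, 3": 9' makes the literal valid JSON.
        as_json = literal.replace("(", '"').replace(")", '"')

        holo_table = {
            tuple(int(part) for part in key.split(", ")): value
            for key, value in json.loads(as_json).items()
        }

    json.loads builds the dict directly, without the parser's intermediate concrete syntax tree, which presumably explains the difference observed above.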

    tiran commented Feb 25, 2016

    It takes about 12 seconds to byte compile crash.py on my computer and less than half a second to import about 28 MB of byte code:

    $ rm -rf crash.pyc __pycache__/
    $ time python2.7 -c 'import crash'

    real 0m11.930s
    user 0m9.859s
    sys 0m2.085s
    $ time python2.7 -c 'import crash'

    real 0m0.484s
    user 0m0.401s
    sys 0m0.083s
    $ time python3.4 -c 'import crash'

    real 0m12.327s
    user 0m10.106s
    sys 0m2.236s
    $ time python3.4 -c 'import crash'

    real 0m0.435s
    user 0m0.367s
    sys 0m0.069s

    tiran commented Feb 25, 2016

    $ python2.7 --version
    Python 2.7.10
    $ python3.4 --version
    Python 3.4.3
    $ uname -r -o
    4.3.5-300.fc23.x86_64 GNU/Linux

    Intel(R) Core(TM) i7-4900MQ CPU @ 2.80GHz

    tyomitch mannequin commented Feb 25, 2016

    Mine is on Windows. I've now installed both 2.7.10 and 3.4.3 to reconfirm, and it's still the same on both of them, except that on 3.4.3 it crashes with a MemoryError much faster (within a couple of minutes).

    @tyomitch tyomitch mannequin added the OS-windows label Feb 25, 2016
    eryksun commented Feb 25, 2016

    I don't think this is Windows related. Are you using 32-bit Python? On Linux, if I limit the process address space to 2 gigs, it crashes almost immediately:

        $ ulimit -v 2000000
        $ python-dbg -c 'import crash'
        Segmentation fault

    It runs out of memory while parsing the file:

    Program received signal SIGSEGV, Segmentation fault.
    0x000000000048ecb4 in PyObject_Malloc (nbytes=72) at ../Objects/obmalloc.c:932
    932	../Objects/obmalloc.c: No such file or directory.
    (gdb) bt
    #0  0x000000000048ecb4 in PyObject_Malloc (nbytes=72) at ../Objects/obmalloc.c:932
    #1  0x000000000048f8be in _PyObject_DebugMallocApi (id=111 'o', nbytes=40) at ../Objects/obmalloc.c:1469
    #2  0x000000000048fa2b in _PyObject_DebugReallocApi (api=111 'o', p=0x0, nbytes=40) at ../Objects/obmalloc.c:1520
    #3  0x000000000048f83c in _PyObject_DebugRealloc (p=0x0, nbytes=40) at ../Objects/obmalloc.c:1441
    #4  0x0000000000418a02 in PyNode_AddChild (n1=0x7fff85cbffb8, type=318, str=0x0, lineno=1, col_offset=6446977)
        at ../Parser/node.c:98
    #5  0x0000000000418f53 in push (s=0xa6b680, type=318, d=0x8bfc70 <dfas+2480>, newstate=1, lineno=1, col_offset=6446977)
        at ../Parser/parser.c:126
    #6  0x000000000041946c in PyParser_AddToken (ps=0xa6b680, type=262144, str=0x7fff85cba720 "11", lineno=1, 
        col_offset=6446977, expected_ret=0x7fffffffd324) at ../Parser/parser.c:252
    #7  0x0000000000419f19 in parsetok (tok=0xa5f650, g=0x8c0ac0 <_PyParser_Grammar>, start=257, err_ret=0x7fffffffd300, 
        flags=0x7fffffffd2ec) at ../Parser/parsetok.c:198
    #8  0x0000000000419cb6 in PyParser_ParseFileFlagsEx (fp=0xa19b70, filename=0xa53b00 "crash.py", 
        g=0x8c0ac0 <_PyParser_Grammar>, start=257, ps1=0x0, ps2=0x0, err_ret=0x7fffffffd300, flags=0x7fffffffd2ec)
        at ../Parser/parsetok.c:106
    

    tyomitch mannequin commented Feb 25, 2016

    My Python is 64-bit, but my computer only has 2GB physical RAM.

    eryksun commented Feb 25, 2016

    My Python is 64-bit, but my computer only has 2GB physical RAM.

    That explains why it takes half an hour to crash. It's thrashing on page faults. Adding another paging file or increasing the size of your current paging file should allow this to finish parsing... eventually in maybe an hour or two.

    The design of the parser isn't something I've delved into very much, but possibly the dynamic nature of Python prevents optimizing the memory footprint here. Or maybe no one has seen the need to optimize parsing containers (dicts, sets, lists, tuples) that have constant literals. This is an inefficient way to store 35 MiB of data, as opposed to XML, JSON, or a binary format.
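
    A hedged sketch of that alternative (the helper names here are made up for illustration): keep the data out of the source entirely and serialize the tuple keys as strings, since JSON keys must be strings:

        import json

        def dump_table(table, path="holo_table.json"):
            # "(0, 0, 1)"-style tuple keys become plain "0, 0, 1" strings.
            with open(path, "w") as f:
                json.dump({", ".join(map(str, key)): value
                           for key, value in table.items()}, f)

        def load_table(path="holo_table.json"):
            with open(path) as f:
                return {tuple(int(part) for part in key.split(", ")): value
                        for key, value in json.load(f).items()}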

    tyomitch mannequin commented Feb 26, 2016

    Yes, I understand that this is a matter of memory consumption, which is why I submitted this ticket as "resource usage".
    What I don't understand is, what could possibly require gigabytes of memory for this task?

    serhiy-storchaka (Member) commented:

    I thought this might be a tokenizer issue, but it is just large memory consumption for the AST tree. A simpler example:

    ./python -c 'import ast; ast.parse("0,"*1000000, mode="eval")'

    It takes over 450 MB of memory on a 32-bit system, over 450 bytes per tuple item. Increasing the multiplier to 10000000 leads to swapping and failure with a MemoryError.

    Of course it would be nice to decrease memory consumption, but this looks more like a new feature.
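
    A rough way to reproduce this measurement from Python itself (a sketch using tracemalloc; it only counts allocations routed through the traced allocators, so the figure is a lower bound on the real peak RSS):

        import ast
        import tracemalloc

        source = "0," * 1000000  # the same degenerate input as above

        tracemalloc.start()
        tree = ast.parse(source, mode="eval")
        current, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()

        print("peak traced memory: %.1f MiB" % (peak / 2.0 ** 20))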

    @serhiy-storchaka serhiy-storchaka removed their assignment Feb 27, 2016
    tyomitch mannequin commented Mar 8, 2016

    OK, I've now looked into it with a fresh build of 3.6 trunk on Linux x64.

    Peak memory usage is about 3KB per node:

    $ /usr/bin/time -v ./python -c 'import ast; ast.parse("0,"*1000000, mode="eval")'
    	Command being timed: "./python -c import ast; ast.parse("0,"*1000000, mode="eval")"
    	...
    	Maximum resident set size (kbytes): 3015552
    	...

    Out of the 2945 MB total peak memory usage, only 330 MB are attributable to the heap use:

    $ valgrind ./python -c 'import ast; ast.parse("0,"*1000000, mode="eval")'
    ==21232== ...
    ==21232== HEAP SUMMARY:
    ==21232==     in use at exit: 3,480,447 bytes in 266 blocks
    ==21232==   total heap usage: 1,010,171 allocs, 1,009,905 frees, 348,600,304 bytes allocated
    ==21232== ...

    So, apparently, it's not the nodes themselves taking up a disproportionate amount of memory -- it's the heap getting so badly fragmented that 89% of its memory allocation is wasted.

    gprof confirms that there are lots of mallocs/reallocs going on, up to 21 per node:

    $ gprof python
    Flat profile:

    Each sample counts as 0.01 seconds.
      %    cumulative     self                self     total
    time     seconds     seconds     calls   s/call   s/call   name
    17.82 0.31 0.31 2000020 0.00 0.00 PyParser_AddToken
    13.79 0.55 0.24 2 0.12 0.16 freechildren
    12.64 0.77 0.22 21039125 0.00 0.00 _PyMem_RawMalloc
    6.32 0.88 0.11 17000101 0.00 0.00 PyNode_AddChild
    5.75 0.98 0.10 2837984 0.00 0.00 visit_decref
    5.75 1.08 0.10 1000004 0.00 0.00 ast_for_expr
    4.60 1.16 0.08 2867 0.00 0.00 collect
    4.02 1.23 0.07 20023405 0.00 0.00 _PyObject_Free
    2.30 1.27 0.04 3031305 0.00 0.00 _PyType_Lookup
    2.30 1.31 0.04 3002234 0.00 0.00 _PyObject_GenericSetAttrWithDict
    2.30 1.35 0.04 1 0.04 0.05 ast2obj_expr
    1.72 1.38 0.03 28366858 0.00 0.00 visit_reachable
    1.72 1.41 0.03 12000510 0.00 0.00 subtype_traverse
    1.72 1.44 0.03 3644 0.00 0.00 list_traverse
    1.44 1.47 0.03 3002161 0.00 0.00 _PyObjectDict_SetItem
    1.15 1.49 0.02 20022785 0.00 0.00 PyObject_Free
    1.15 1.51 0.02 15000763 0.00 0.00 _PyObject_Realloc

    So, I suppose what needs to be done is to try reducing the number of reallocs involved in handling an AST node; the representation of the nodes themselves doesn't need to change.

    vstinner commented Mar 8, 2016

    So, apparently, it's not the nodes themselves taking up a disproportionate amount of memory -- it's the heap getting so badly fragmented that 89% of its memory allocation is wasted.

    Yeah, the Python parser+compiler badly uses the memory allocator. See my "benchmark" for the memory allocator: python_memleak.py.

    The classical pattern of memory fragmentation is:

    • allocate a lot of small objects
    • allocate a few objects
    • allocate more small objects
    • free *all* small objects

    All objects must be allocated on the heap, not via mmap(), so the maximum size of a single object must be below 128 KB (the usual threshold at which malloc() switches between heap memory and mmap).

    We can try various hacks to reduce the fragmentation, but IMHO the only real fix is to use a different memory allocator for the compiler and then free everything allocated by the parser+compiler at once.

    We already have an "arena" memory allocator: Include/pyarena.h, Python/pyarena.c. It is already used by the parser+compiler, but it's only used for AST objects in practice. The parser uses the PyMem allocator API (ex: PyMem_Malloc).

    @vstinner vstinner changed the title Out of memory, trying to parse a 35MB dict Fragmentation of the heap memory in the Python parser Mar 8, 2016
    vstinner commented Mar 8, 2016

    My misc notes about memory fragmentation: https://haypo-notes.readthedocs.org/heap_fragmentation.html

    vstinner commented Mar 8, 2016

    Apache's libapr library is designed to group all memory allocations required to handle an HTTP request, so it is possible to release *all* memory allocations of the request at once.

    It looks like the "pool" object:

    serhiy-storchaka (Member) commented:

    On my computer, peak memory usage in a non-debug build is about 450 MB, and in a debug build about 730 MB. I'm not sure this is due to memory fragmentation.

    tyomitch mannequin commented Mar 8, 2016

    @serhiy: if your build is 32-bit, then every node is half the size, as it mostly consists of pointers.

    The amount of heap fragmentation can also depend on gcc/glibc version.

    tyomitch mannequin commented Mar 9, 2016

    The attached patch for the parser reduces "Maximum resident set size (kbytes)" threefold, for the degenerate example of 'import ast; ast.parse("0,"*1000000, mode="eval")', by eliminating many CST nodes that have a single child.

    According to the comment in node.c -- "89% of PyObject_REALLOC calls in PyNode_AddChild passed 1 for the size" -- the memory saving should be generally applicable, and not limited just to this degenerate case.

    Modules/parsermodule.c is not yet updated to match. Please tell me if you want me to do that, in case my proposed change to the parser is acceptable.
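
    To see the single-child chains the patch targets, the deprecated parser module (available up to Python 3.9, removed in 3.10) exposes the old parser's concrete syntax tree; this is only an illustration, not part of the patch:

        import parser  # deprecated; needs Python 3.9 or earlier

        def depth(node):
            # A CST node from st.tolist() is [type, child, child, ...];
            # measure the longest chain of nested nodes.
            children = [child for child in node[1:] if isinstance(child, list)]
            return 1 + max(map(depth, children), default=0)

        cst = parser.expr("0,").tolist()
        print(depth(cst))  # even this trivial input yields a long chain of mostly single-child nodes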

    tyomitch mannequin commented Mar 10, 2016

    I've now tried it with "perf.py -r -m", and the memory savings are as follows:

    ### 2to3 ###
    Mem max: 45976.000 -> 47440.000: 1.0318x larger

    ### chameleon_v2 ###
    Mem max: 436968.000 -> 401088.000: 1.0895x smaller

    ### django_v3 ###
    Mem max: 23808.000 -> 22584.000: 1.0542x smaller

    ### fastpickle ###
    Mem max: 10768.000 -> 9248.000: 1.1644x smaller

    ### fastunpickle ###
    Mem max: 10988.000 -> 9328.000: 1.1780x smaller

    ### json_dump_v2 ###
    Mem max: 10892.000 -> 10612.000: 1.0264x smaller

    ### json_load ###
    Mem max: 11012.000 -> 9908.000: 1.1114x smaller

    ### nbody ###
    Mem max: 8696.000 -> 7944.000: 1.0947x smaller

    ### regex_v8 ###
    Mem max: 12504.000 -> 9432.000: 1.3257x smaller

    ### tornado_http ###
    Mem max: 27636.000 -> 27608.000: 1.0010x smaller

    So, on these benchmarks the saving is not threefold, of course, but it is still quite substantial (up to 30%).

    The run-time difference on these benchmarks is between "1.04x slower" and "1.06x faster", for reasons beyond my understanding (variability of background load, possibly?).

    tyomitch mannequin commented May 12, 2016

    Ping? This patch is two months old now.

    tyomitch mannequin commented Jun 6, 2016

    Now that bpo-26526 landed (thanks to everybody involved!), I'm requesting a review on an updated version of my patch, which addresses the excessive memory consumption by the parser.

    The description of my original patch still applies:

    The attached patch for the parser reduces "Maximum resident set size (kbytes)" threefold, for the degenerate example of 'import ast; ast.parse("0,"*1000000, mode="eval")', by eliminating many CST nodes that have a single child.

    According to the comment in Parser/node.c -- "89% of PyObject_REALLOC calls in PyNode_AddChild passed 1 for the size" -- the memory saving should be generally applicable, and not limited just to this degenerate case.

    I've now tried it with "perf.py -r -m", and the memory savings are as follows:
    ...
    on these benchmarks, the saving is not threefold, of course; but still quite substantial (up to 30%).

    My new patch updates Modules/parsermodule.c to accept such "compressed" nodes, so that everything still builds cleanly and passes the tests.

    benjaminp (Contributor) commented:

    It seems to me a simpler solution would be to allocate all nodes for a parse tree in an arena.

    tyomitch mannequin commented Jun 7, 2016

    An arena might help reclaim the memory once the parsing is complete, but it wouldn't reduce the peak memory consumption by the parser, and so it wouldn't prevent a MemoryError when parsing a 35MB source on a PC with 2GB of RAM.

    vstinner commented Jun 7, 2016

    Benjamin Peterson: "It seems to me a simpler solution would be allocate all nodes for a parse tree in an arena."

    Exactly, that's the real fix. Just make sure that deallocating this arena doesn't punch holes in the "heap memory". For example, on Linux, that means using mmap() rather than sbrk() to allocate the memory.

    A. Skrobov: "An arena might help reclaim the memory once the parsing is complete, but it wouldn't reduce the peak memory consumption by the parser, and so it wouldn't prevent a MemoryError when parsing a 35MB source on a PC with 2GB of RAM."

    Parsing a 35 MB source doesn't seem like a good idea :-) I think that it's ok to have a memory peak, but it's not ok to not release the memory later.

    Do you have a solution that avoids the memory peak *and* doesn't create memory fragmentation?

    tyomitch mannequin commented Jun 7, 2016

    My current patch avoids the memory peak *and* doesn't add any memory fragmentation on top of whatever is already there.

    In other words, it makes the parser better in this one aspect, and it doesn't make it worse in any aspect.

    tyomitch mannequin commented Jun 7, 2016

    (Updating the issue title, to avoid confusion between two orthogonal concerns)

    @tyomitch tyomitch mannequin changed the title Fragmentation of the heap memory in the Python parser Excessive peak memory consumption by the Python parser Jun 7, 2016
    tyomitch mannequin commented Jul 8, 2016

    Fixing whitespace in the patch, and including an update for the docs

    xcombelle mannequin commented Aug 15, 2016

    It looks like there are two sub-bugs, each a partial solution to the main bug, which is to reduce the memory consumption of the parser:

    • the node compression
    • the reduction of heap fragmentation

    Could each sub-bug get its own tracker entry, marked as blocking the main bug?

    A. Skrobov, can you describe the big picture of your patch? Without it, I don't understand the logic of your modifications.

    tyomitch mannequin commented Aug 15, 2016

    Xavier, the big picture description of my patch is in http://bugs.python.org/file43665/devguide_patch

    The heap fragmentation was observed by Victor, not by myself.

    Victor, could you please create a new ticket for your python_memleak.py reproducer?

    tyomitch mannequin commented Sep 5, 2016

    Updated the comment for Init_ValidationGrammar()

    tyomitch mannequin commented Jul 24, 2017

    To bump this year-old issue, I have delivered a talk at EuroPython 2017, explaining what my patch does, how it does what it does, and why it's a good thing to do.

    You can watch my talk at https://www.youtube.com/watch?v=dyRDmcsTwhE&t=1h52m38s

    serhiy-storchaka (Member) commented:

    Would you mind creating a pull request for review? I cannot promise that it will be merged, but it will have a chance.

    For reference, the article about this patch (in Russian): https://habr.com/post/314062/

    @serhiy-storchaka serhiy-storchaka added the 3.8 only security fixes label Nov 21, 2018
    tyomitch mannequin commented Dec 6, 2018

    @serhiy: incredibly, this patch from 2.5 years ago required very minor changes to apply to the latest master. Shows how ossified the parser is :-)

    Now posted as #10995

    tyomitch mannequin commented Dec 14, 2018

    I've run pyperformance (0.7.0) with my updated patch, and posted results at the PR page. They look encouraging enough.

    @serhiy-storchaka serhiy-storchaka self-assigned this Dec 19, 2018
    pablogsal (Member) commented:

    Closing as we have a new parser in 3.9 and the current one will be removed in Python 3.10.

    @ezio-melotti ezio-melotti transferred this issue from another repository Apr 10, 2022