Author vstinner
Recipients A. Skrobov, benjamin.peterson, berker.peksag, brett.cannon, christian.heimes, eryksun, fdrake, giampaolo.rodola, paul.moore, rhettinger, serhiy.storchaka, steve.dower, tim.golden, vstinner, zach.ware
Date 2016-06-07.10:19:14
Message-id <1465294754.47.0.847635489272.issue26415@psf.upfronthosting.co.za>
In-reply-to
Content
Benjamin Peterson: "It seems to me a simpler solution would be allocate all nodes for a parse tree in an arena."

Exactly, that's the real fix. Just make sure that deallocating this arena does not punch holes in the "heap memory". For example, on Linux, it means using mmap() rather than sbrk() to allocate the memory, so that freeing the arena returns the pages to the OS instead of fragmenting the heap.


A. Skrobov: "An arena might help reclaim the memory once the parsing is complete, but it wouldn't reduce the peak memory consumption by the parser, and so it wouldn't prevent a MemoryError when parsing a 35MB source on a PC with 2GB of RAM."

Parsing a 35 MB source doesn't seem like a good idea :-) I think that it's ok to have a memory peak, but it's not ok to not release the memory later.

Do you have a solution that avoids the memory peak *and* doesn't create memory fragmentation?