Author eryksun
Recipients A. Skrobov, christian.heimes, eryksun, paul.moore, rhettinger, serhiy.storchaka, steve.dower, tim.golden, zach.ware
Date 2016-02-25.17:47:33
Message-id <1456422454.15.0.751411153529.issue26415@psf.upfronthosting.co.za>
Content
> My Python is 64-bit, but my computer only has 2GB physical RAM.

That explains why it takes half an hour to crash: the system is thrashing on page faults. Adding another paging file, or increasing the size of your current one, should allow the parse to finish... eventually, in maybe an hour or two.
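
If you want to watch how close the system is getting to its commit limit while the parse runs, something like the following ctypes sketch should work (my own illustration, not part of the issue; GlobalMemoryStatusEx and the MEMORYSTATUSEX layout are the documented Win32 API):

    import ctypes
    from ctypes import wintypes

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

    class MEMORYSTATUSEX(ctypes.Structure):
        # Field layout per the Win32 MEMORYSTATUSEX definition.
        _fields_ = (
            ("dwLength", wintypes.DWORD),
            ("dwMemoryLoad", wintypes.DWORD),
            ("ullTotalPhys", ctypes.c_ulonglong),
            ("ullAvailPhys", ctypes.c_ulonglong),
            ("ullTotalPageFile", ctypes.c_ulonglong),
            ("ullAvailPageFile", ctypes.c_ulonglong),
            ("ullTotalVirtual", ctypes.c_ulonglong),
            ("ullAvailVirtual", ctypes.c_ulonglong),
            ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
        )

    def memory_status():
        # dwLength must be set before the call, per the docs.
        status = MEMORYSTATUSEX(ctypes.sizeof(MEMORYSTATUSEX))
        if not kernel32.GlobalMemoryStatusEx(ctypes.byref(status)):
            raise ctypes.WinError(ctypes.get_last_error())
        return status

    s = memory_status()
    print("physical: %d / %d MiB free" % (s.ullAvailPhys >> 20, s.ullTotalPhys >> 20))
    print("commit:   %d / %d MiB free" % (s.ullAvailPageFile >> 20, s.ullTotalPageFile >> 20))

ullTotalPageFile here is the commit limit (physical RAM plus all paging files), so it shows directly how much headroom adding a paging file would buy.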

I haven't delved into the parser's design very much, but possibly Python's dynamic nature prevents optimizing the memory footprint here. Or maybe no one has seen the need to optimize the parsing of container literals (dicts, sets, lists, tuples) whose elements are constants. Either way, a Python source file is an inefficient way to store 35 MiB of data, compared to XML, JSON, or a binary format.
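
For example (just a sketch; the data.json filename and the DATA name are made up for illustration), keeping the data out of the module source and loading it with the json module sidesteps the tokenizer, parser, and compiler entirely:

    import json

    # Instead of a module that embeds a huge constant literal, e.g.
    #     DATA = { ... 35 MiB of entries ... }
    # store the data in a separate file and build the objects at runtime.
    with open("data.json", encoding="utf-8") as f:
        DATA = json.load(f)

json.load builds the dicts and lists directly, so peak memory is roughly the size of the resulting objects rather than the parse tree of a 35 MiB source file.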
History
Date                 User     Action  Args
2016-02-25 17:47:34  eryksun  set     recipients: + eryksun, rhettinger, paul.moore, christian.heimes, tim.golden, zach.ware, serhiy.storchaka, steve.dower, A. Skrobov
2016-02-25 17:47:34  eryksun  set     messageid: <1456422454.15.0.751411153529.issue26415@psf.upfronthosting.co.za>
2016-02-25 17:47:34  eryksun  link    issue26415 messages
2016-02-25 17:47:33  eryksun  create