Author pitrou
Recipients goddard, loewis, pitrou, vstinner
Date 2009-03-25.00:25:59
When compiling a source file to bytecode, Python first builds a syntax
tree in memory. It is very likely that the memory consumption you
observe is due to the size of that syntax tree. It is also unlikely that
anyone other than you will want to modify the parsing code to
accommodate such an extreme usage scenario :-)

For persistence of large data structures, I suggest using cPickle or a
similar mechanism. You can even embed the pickles in literal strings if
you still need your sessions to be Python source code:

>>> import cPickle
>>> f = open("test.py", "w")
>>> f.write("import cPickle\n")
>>> f.write("x = cPickle.loads(%s)" % repr(cPickle.dumps(range(5000000))))
>>> f.close()
>>> import test
>>> len(test.x)
5000000
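For what it's worth, the same trick works on Python 3, where cPickle was
folded into the pickle module and range() is lazy, so the sequence has to
be materialized with list() before pickling. A minimal sketch (the module
name "pickled_session" and the use of a temporary directory are just
illustrative choices, not anything the snippet above prescribes):

```python
import os
import pickle
import sys
import tempfile

# Python 3 sketch of the same approach: cPickle became pickle, and
# range() must be wrapped in list() before it can be pickled.
# The module name "pickled_session" is illustrative.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "pickled_session.py")
with open(path, "w") as f:
    f.write("import pickle\n")
    # repr() of the pickle bytes is a valid b"..." literal in Python 3,
    # so it can be embedded directly in generated source code.
    f.write("x = pickle.loads(%r)\n" % pickle.dumps(list(range(5000))))

# Make the generated module importable, then load it.
sys.path.insert(0, tmpdir)
import pickled_session
print(len(pickled_session.x))  # 5000
```

Loading the data this way goes through pickle rather than the compiler's
syntax tree, so the memory overhead of parsing a huge literal is avoided.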