Message84133
When compiling a source file to bytecode, Python first builds a syntax
tree in memory. It is very likely that the memory consumption you
observe is due to the size of that syntax tree. It is also unlikely that
anyone other than you will want to modify the parsing code to
accommodate such an extreme usage scenario :-)
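You can get a feel for how the tree grows with the `ast` module; a minimal sketch, assuming CPython, where `ast.parse` mirrors the compiler's parse closely enough for a size estimate:

```python
# Rough illustration: the syntax tree scales with the source, so a huge
# literal means a huge tree. (The 1000-element list here is arbitrary.)
import ast

# A module that is just one 1000-element list literal...
src = "x = [%s]" % ", ".join(map(str, range(1000)))
tree = ast.parse(src)

# ...produces at least one AST node per element (each number is a node).
node_count = sum(1 for _ in ast.walk(tree))
print(node_count)
```

Every element of the literal becomes its own node, which is why a multi-megabyte generated source file costs far more memory to compile than the data it encodes.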
For persistence of large data structures, I suggest using cPickle or a
similar mechanism. You can even embed the pickles in literal strings if
you still need your sessions to be Python source code:
>>> import cPickle
>>> f = open("test.py", "w")
>>> f.write("import cPickle\n")
>>> f.write("x = cPickle.loads(%s)" % repr(cPickle.dumps(range(5000000),
...     protocol=-1)))
>>> f.close()
>>> import test
>>> len(test.x)
5000000
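The same trick carries over to Python 3, where cPickle has been folded into pickle; a minimal sketch (the module name `session_dump` and the temp-dir handling are illustration only, not part of the original session):

```python
# Python 3 version of the session above: embed a pickle in generated source.
# Assumption: Python 3, where cPickle is simply the pickle module.
import importlib
import os
import pickle
import sys
import tempfile

data = list(range(5000))  # smaller than the 5,000,000 used above
payload = pickle.dumps(data, protocol=pickle.HIGHEST_PROTOCOL)

# %r on bytes yields a b'...' literal, so the generated file is plain ASCII.
outdir = tempfile.mkdtemp()
with open(os.path.join(outdir, "session_dump.py"), "w") as f:
    f.write("import pickle\n")
    f.write("x = pickle.loads(%r)\n" % payload)

sys.path.insert(0, outdir)
mod = importlib.import_module("session_dump")
print(len(mod.x))
```

The compiler only sees one string literal plus one call, so the syntax tree stays tiny no matter how large the pickled payload is.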
Date                | User   | Action | Args
2009-03-25 00:26:02 | pitrou | set    | recipients: + pitrou, loewis, vstinner, goddard
2009-03-25 00:26:02 | pitrou | set    | messageid: <1237940762.3.0.651393826784.issue5557@psf.upfronthosting.co.za>
2009-03-25 00:26:00 | pitrou | link   | issue5557 messages
2009-03-25 00:25:59 | pitrou | create |