Byte-code compilation uses excessive memory #49807
Comments
Byte-code compiling large Python files uses an unexpectedly large amount of memory. The application that creates similarly large Python files is Chimera, whose session files are Python code. Here is Python code to produce the test file test.py containing a list:

print >>open('test.py','w'), 'x = ', repr(range(5000000))

I tried importing the test.py file with Python 2.5, 2.6.1 and 3.0.1.
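For reference, a sketch of the same test-file generator in Python 3 syntax (not from the original report; the 5,000,000-element size is taken from it):

```python
# Sketch: write a module whose single statement is a huge list literal,
# reproducing the reporter's Python 2 one-liner in Python 3 syntax.
with open('test.py', 'w') as f:
    f.write('x = %r\n' % list(range(5000000)))
```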
It might be possible to make it more efficient. However, the primary use cases for the compiler don't involve files like this, so I'm lowering the priority. If you want this resolved, it might be best if you can propose a patch.
Python uses an inefficient memory structure for integers. You should use a more compact representation for data of this size.
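As an illustration of the point about integer storage (a sketch, not from the thread): each Python int is a full object with per-object overhead, while `array.array` packs the values into one contiguous C buffer.

```python
import array
import sys

n = 100000
as_list = list(range(n))                 # one Python int object per element
as_array = array.array('l', range(n))    # packed machine words, one buffer

# getsizeof(list) counts only the pointer table, so add the int objects too.
list_total = sys.getsizeof(as_list) + sum(sys.getsizeof(i) for i in as_list)
array_total = sys.getsizeof(as_array)
print(list_total, array_total)  # the array is several times smaller
```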
When compiling a source file to bytecode, Python first builds a syntax tree for the entire file in memory. For persistence of large data structures, I suggest using cPickle or a similar tool:

>>> import cPickle
>>> f = open("test.py", "w")
>>> f.write("import cPickle\n")
>>> f.write("x = cPickle.loads(%s)" % repr(cPickle.dumps(range(5000000), protocol=-1)))
>>> f.close()
>>> import test
>>> len(test.x)
5000000
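The same trick can be sketched in Python 3, where the module is simply `pickle` (the file name `test3.py` and the smaller list size here are illustrative choices, not from the thread; the point is that the payload becomes a single bytes literal, which compiles cheaply):

```python
import pickle
import sys

# Serialize the data once; embed it in the module as one bytes constant.
payload = pickle.dumps(list(range(1000000)), protocol=pickle.HIGHEST_PROTOCOL)
with open('test3.py', 'w') as f:
    f.write('import pickle\n')
    f.write('x = pickle.loads(%r)\n' % payload)

sys.path.insert(0, '.')  # make the generated module importable from the cwd
import test3
print(len(test3.x))
```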
I agree that having such large Python code files is a rare circumstance. Thanks for the cPickle suggestion; the Chimera session file Python code could use that approach.
If you want editable data, you could use json instead of pickle.
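A sketch of that json variant (not from the thread; file name and data are illustrative): the file stays human-editable text, and loading it avoids compiling a huge literal.

```python
import json

data = list(range(1000))

# Write the data as plain, editable JSON text.
with open('data.json', 'w') as f:
    json.dump(data, f)

# Read it back; no large Python literal ever gets compiled.
with open('data.json') as f:
    x = json.load(f)
```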
Closing, as without a specific issue to fix it is unlikely that this will change.