Message104522
There's a bug in Python's zlibmodule.c that makes it raise SystemError whenever it tries to decompress a chunk larger than 1GB in size. Here's an example of this in action:
dmitriev@...:~/moo/zlib_bug> cat zlib_bug.py
import zlib

def test_zlib(size_mb):
    print "testing zlib with a %dMB object" % size_mb
    c = zlib.compressobj(1)
    sm = c.compress(' ' * (size_mb*1024*1024)) + c.flush()
    d = zlib.decompressobj()
    dm = d.decompress(sm) + d.flush()

test_zlib(1024)
test_zlib(1025)
dmitriev@...:~/moo/zlib_bug> python2.6 zlib_bug.py
testing zlib with a 1024MB object
testing zlib with a 1025MB object
Traceback (most recent call last):
  File "zlib_bug.py", line 11, in <module>
    test_zlib(1025)
  File "zlib_bug.py", line 8, in test_zlib
    dm = d.decompress(sm) + d.flush()
SystemError: Objects/stringobject.c:4271: bad argument to internal function
dmitriev@...:~/moo/zlib_bug>
A similar issue was reported in issue1372; however, either this one is different, or the issue still exists in all versions of Python 2.6 that I tested, on both Solaris and Mac OS X. These are all 64-bit builds and have no problem manipulating multi-GB structures, so it's not an out-of-memory condition:
dmitriev@...:~/moo/zlib_bug> python2.6
Python 2.6.1 (r261:67515, Nov 18 2009, 12:21:47)
[GCC 4.3.3] on sunos5
Type "help", "copyright", "credits" or "license" for more information.
>>> len(' ' * (6000*1024*1024))
6291456000
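Until zlibmodule.c is fixed, one possible workaround is to decompress the stream in bounded pieces using the `max_length` argument of `Decompress.decompress()` together with `unconsumed_tail`, so that no single call has to build a string larger than 1GB. A minimal sketch (the helper name and chunk size below are illustrative, not part of this report):

```python
import zlib

def decompress_chunked(data, max_chunk=512 * 1024 * 1024):
    # Cap each decompress() call's output at max_chunk bytes; input that
    # could not be consumed yet reappears in unconsumed_tail and is fed
    # back in on the next iteration.
    d = zlib.decompressobj()
    parts = []
    while data:
        parts.append(d.decompress(data, max_chunk))
        data = d.unconsumed_tail
    parts.append(d.flush())
    return b''.join(parts)
```

With a small `max_chunk` this also bounds peak memory per call, at the cost of extra Python-level looping and a final join.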
>>>
Date                | User      | Action | Args
2010-04-29 15:35:57 | ddmitriev | set    | recipients: + ddmitriev
2010-04-29 15:35:57 | ddmitriev | set    | messageid: <1272555357.11.0.654496142988.issue8571@psf.upfronthosting.co.za>
2010-04-29 15:35:55 | ddmitriev | link   | issue8571 messages
2010-04-29 15:35:54 | ddmitriev | create |