
Author xiang.zhang
Recipients Klamann, martin.panter, xiang.zhang
Date 2016-05-27.04:17:52
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1464322672.87.0.829442227702.issue27130@psf.upfronthosting.co.za>
In-reply-to
Content
Quick and careless scanning at night led me to a wrong result; sorry.

> You would need to compress just under 4 GiB of data that requires 5 MB or more when compressed (i.e. not all the same bytes, or maybe try level=0).

With enough memory, compressing with level 0 does raise an error while the default level does not.

Apart from the overflow fix, does zlib have to support large data in one operation? For example, it would be acceptable for zlib.compress not to support data beyond 4 GiB, since we can split the data in the application, call zlib.compress on each part, and finally concatenate the results.
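A minimal sketch of the chunked approach described above. Note that concatenating independent zlib.compress outputs yields several back-to-back zlib streams, which a single zlib.decompress call will not fully decode; feeding the chunks through one zlib.compressobj instead produces a single valid stream while keeping each individual compress() call well under any 4 GiB per-call limit. The helper name and chunk size here are illustrative, not part of any proposed API.

```python
import zlib

def compress_chunks(data, chunk_size=1 << 20, level=-1):
    """Compress data incrementally, chunk_size bytes at a time.

    Using one compressobj keeps the output a single zlib stream,
    so it round-trips through a single zlib.decompress call.
    """
    comp = zlib.compressobj(level)
    parts = []
    for start in range(0, len(data), chunk_size):
        parts.append(comp.compress(data[start:start + chunk_size]))
    parts.append(comp.flush())
    return b"".join(parts)

# Round-trip check on a modest payload; the same pattern scales to
# inputs larger than 4 GiB without any single oversized call.
payload = b"not all the same bytes " * 100_000
compressed = compress_chunks(payload)
assert zlib.decompress(compressed) == payload
```

The same split-and-feed pattern applies on the decompression side with zlib.decompressobj, which is how applications can stay under per-call size limits today.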