This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

Author htrd
Recipients
Date 2001-02-12.16:10:51
SpamBayes Score
Marked as misclassified
Message-id
In-reply-to
Content
zlib's decompress method will allocate as much memory as is
needed to hold the decompressed output. The length of the output
buffer may be far larger than the length of the input buffer,
and the Python code calling the decompress method has no way
to limit how much memory is allocated.

In experimentation, I have seen decompress generate output that is
1000 times larger than its input.
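
The expansion is easy to reproduce. The sketch below compresses a
megabyte of zero bytes into roughly a kilobyte of input, which a single
decompress call then expands back into the full megabyte; the exact
ratio depends on the zlib version, but it is on the order of 1000:1.

```python
import zlib

# A moderately sized input that decompresses to something far larger:
# one megabyte of zero bytes compresses to roughly a kilobyte.
payload = zlib.compress(b"\x00" * 1000000)

# A single call allocates the full megabyte of output at once.
expanded = zlib.decompress(payload)

ratio = len(expanded) // len(payload)   # on the order of 1000
```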

These characteristics may make the decompress method unsuitable for
handling data obtained from untrusted sources (for example, in an
HTTP proxy which implements gzip encoding), since it may be
vulnerable to a denial of service attack. A malicious user could
construct a moderately sized input which forces 'decompress' to
try to allocate too much memory.

This patch adds a new method, decompress_incremental, which allows
the caller to specify the maximum size of the output. This method
returns the excess input, in addition to the decompressed output.
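
To illustrate the intended behaviour, here is a sketch using the
bounded-decompression interface that the zlib module's decompressor
objects expose: a max_length argument caps the size of the output, and
the compressed input that was not consumed is made available separately
(as unconsumed_tail), so the caller can decide whether to continue.

```python
import zlib

data = zlib.compress(b"x" * 1000000)

d = zlib.decompressobj()

# Cap the output at 64 KiB.  Input that would decompress past the cap
# is left unconsumed rather than expanded into memory.
out = d.decompress(data, 65536)          # at most 65536 bytes
leftover = d.unconsumed_tail             # compressed input not yet used

# The caller may later resume from the unconsumed input.
rest = d.decompress(leftover) + d.flush()
```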

It is possible to work around this problem without a patch:
if input is fed to the decompressor a few tens of bytes
at a time, memory usage will surge by (at most)
a few tens of kilobytes. Such a process is a kludge, and much
less efficient than the approach used in this patch.
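
That workaround can be sketched as follows. The helper name and chunk
size are illustrative, not part of the patch: compressed input is fed
to a decompressor object in small chunks, and the running output size
is checked after each step, so memory can overshoot the limit only by
whatever one small chunk expands to.

```python
import zlib

def careful_decompress(data, limit, chunk=50):
    # Kludge: feed the compressed input a few tens of bytes at a time
    # and check the accumulated output size after every step.
    d = zlib.decompressobj()
    pieces = []
    total = 0
    for i in range(0, len(data), chunk):
        piece = d.decompress(data[i:i + chunk])
        total += len(piece)
        if total > limit:
            raise MemoryError("output would exceed %d bytes" % limit)
        pieces.append(piece)
    pieces.append(d.flush())
    return b"".join(pieces)
```

A well-behaved input within the limit decompresses normally, while a
highly compressible "bomb" is rejected after expanding only one chunk.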

(I've not been able to test the documentation patch; I hope it's OK.)

(This patch also includes the change from Patch #103748)
History
Date User Action Args
2007-08-23 15:03:47  admin  link    issue403753 messages
2007-08-23 15:03:47  admin  create