This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: gzip.compress and gzip.decompress are sub-optimally implemented.
Type: performance Stage: resolved
Components: Library (Lib) Versions:
process
Status: closed Resolution: fixed
Dependencies: Superseder:
Assigned To: Nosy List: lukasz.langa, rhettinger, rhpvorderman
Priority: normal Keywords: patch

Created on 2021-03-24 06:27 by rhpvorderman, last changed 2022-04-11 14:59 by admin. This issue is now closed.

Pull Requests
URL Status Linked Edit
PR 27941 merged rhpvorderman, 2021-08-25 09:34
Messages (4)
msg389438 - (view) Author: Ruben Vorderman (rhpvorderman) * Date: 2021-03-24 06:27
When working on python-isal, which aims to provide faster drop-in replacements for the zlib and gzip modules, I found that gzip.compress and gzip.decompress are suboptimally implemented, which hurts performance.

gzip.compress and gzip.decompress both do the following things:
- Instantiate a BytesIO object to mimic a file
- Instantiate a GzipFile object to compress or read the file.

That means far more Python code is involved than strictly necessary. Also, the data is already fully in memory, yet it is streamed anyway. That is quite a waste.

I propose the following:
- The documentation should make it clear that zlib.decompress(..., wbits=31) and zlib.compress(..., wbits=31) (the latter after 43612 has been addressed) are both quicker but come with caveats: zlib.compress cannot set the mtime field, and zlib.decompress does not take multi-member gzip into account.
- For gzip.compress: the GzipFile._write_gzip_header method should be moved to a module-wide _gzip_header function that returns a bytes object. GzipFile._write_gzip_header can call this function, and gzip.compress can also call it to create the header. gzip.compress then calls zlib.compress(data, wbits=-15) (after 43612 has been fixed) to create a raw deflate block. A gzip trailer can easily be created by packing zlib.crc32(data) and len(data) & 0xffffffff into a struct. See this example implementation: https://github.com/pycompression/python-isal/blob/v0.8.0/src/isal/igzip.py#L242
- For gzip.decompress it is more involved. A read_gzip_header function can be created, but the current implementation raises EOFError, not BadGzipFile, when the header is incomplete due to a truncated file. This makes it harder to implement something that is not a major break from the current gzip.decompress. Apart from the header, the implementation is straightforward: in a while-true loop, validate the header and find where it ends; create a zlib.decompressobj(wbits=-15); decompress all data from the end of the header; flush; extract the crc and length from the first 8 bytes of the unused data; set data = decompobj.unused_data[8:]; if not data: break. For a reference implementation check here: https://github.com/pycompression/python-isal/blob/v0.8.0/src/isal/igzip.py#L300. Note that the decompress loop itself is quite straightforward; checking the header while maintaining backwards compatibility with gzip.decompress is not so simple.

And that brings me to another point: should non-descriptive EOFErrors be raised when reading the gzip header, or should informative BadGzipFile errors be thrown when parsing the header fails? I tend towards the latter: for example BadGzipFile("Truncated header") instead of EOFError, or at least EOFError("Truncated gzip header"). I am aware that this confounds this issue with another one, but these things are coupled in the implementation, so both need to be solved at the same time.

Given the headaches that gzip.decompress gives, it might be easier to tackle gzip.compress in a first PR and do gzip.decompress later.
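As an aside, the quicker zlib path from the first bullet, and both of its caveats, can be demonstrated today with the streaming objects, which already accept wbits (the one-shot zlib.compress only gains it with 43612); the variable names here are only illustrative:

```python
import zlib

data = b"example payload " * 64

# wbits=31 selects a gzip container around the deflate stream.
# Caveat 1: there is no way to set the mtime field this way;
# zlib writes a default header with mtime 0.
compressor = zlib.compressobj(9, zlib.DEFLATED, wbits=31)
gz = compressor.compress(data) + compressor.flush()

# Caveat 2: only a single gzip member is decoded; any concatenated
# members are left untouched in unused_data rather than decompressed.
decompressor = zlib.decompressobj(wbits=31)
out = decompressor.decompress(gz) + decompressor.flush()
assert out == data
```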
msg389495 - (view) Author: Ruben Vorderman (rhpvorderman) * Date: 2021-03-25 08:27
I created bpo-43621 for the error issue. There should only be BadGzipFile. Once that is fixed, having only one error type will make it easier to implement some functions that are shared across the gzip.py codebase.
msg400921 - (view) Author: Łukasz Langa (lukasz.langa) * (Python committer) Date: 2021-09-02 15:03
New changeset ea23e7820f02840368569db8082bd0ca4d59b62a by Ruben Vorderman in branch 'main':
bpo-43613: Faster implementation of gzip.compress and gzip.decompress (GH-27941)
https://github.com/python/cpython/commit/ea23e7820f02840368569db8082bd0ca4d59b62a
msg400985 - (view) Author: Ruben Vorderman (rhpvorderman) * Date: 2021-09-03 07:37
The issue was solved by moving code from _GzipReader into separate functions while maintaining the same error structure.
This solved the problem with maximum code reuse and full backwards compatibility.
History
Date User Action Args
2022-04-11 14:59:43  admin  set  github: 87779
2021-09-03 07:37:25  rhpvorderman  set  messages: + msg400985
2021-09-03 07:28:45  rhpvorderman  set  status: open -> closed
    resolution: fixed
    stage: patch review -> resolved
2021-09-02 15:03:11  lukasz.langa  set  nosy: + lukasz.langa
    messages: + msg400921
2021-08-25 09:34:17  rhpvorderman  set  keywords: + patch
    stage: patch review
    pull_requests: + pull_request26386
2021-03-25 08:27:13  rhpvorderman  set  messages: + msg389495
2021-03-24 18:36:17  iritkatriel  set  components: + Library (Lib)
2021-03-24 08:44:31  rhpvorderman  set  type: performance
2021-03-24 06:56:11  rhettinger  set  nosy: + rhettinger
2021-03-24 06:27:19  rhpvorderman  create