Limit decompressed data when reading from LZMAFile and BZ2File #67717
This is a follow-on from bpo-15955, which added low-level support for limiting the amount of data returned by the LZMA and bzip decompressors. The high-level LZMAFile and BZ2File reader APIs need to make use of the new low-level max_length parameter.

I am starting off with a patch for LZMAFile, based on a patch I posted to bpo-15955. I split out a _RawReader class that does the actual decompress() calls, and then wrapped that in a BufferedReader. The LZMAFile then just delegates the read methods to the BufferedReader. This avoids needing any special code to implement buffering, readline(), etc. (a rough sketch of this pattern follows after this comment). This involved some changes in the API though:
Once the LZMAFile patch sees a bit of review and looks like it might be acceptable, I plan to change the BZ2File class in a similar manner.
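For illustration only, here is a minimal sketch of the delegation pattern described above, using the LZMADecompressor max_length support added by bpo-15955. The body of _RawReader, the chunk size, and the error handling are assumptions, not the patch's actual code:

```python
import io
import lzma

class _RawReader(io.RawIOBase):
    """Sketch of a raw reader that feeds compressed chunks to a
    decompressor while capping how much is decompressed per call."""

    def __init__(self, fp):
        self._fp = fp
        self._decompressor = lzma.LZMADecompressor()

    def readable(self):
        return True

    def readinto(self, b):
        with memoryview(b) as view, view.cast("B") as byte_view:
            data = self.read(len(byte_view))
            byte_view[:len(data)] = data
        return len(data)

    def read(self, size=-1):
        # Simplified: the real reader would also handle size < 0 (read all).
        if size <= 0 or self._decompressor.eof:
            return b""
        while True:
            rawblock = b""
            if self._decompressor.needs_input:
                rawblock = self._fp.read(8192)
                if not rawblock:
                    raise EOFError("Compressed file ended unexpectedly")
            # max_length=size caps the amount decompressed in one call.
            data = self._decompressor.decompress(rawblock, size)
            if data or self._decompressor.eof:
                return data

# An LZMAFile-style class can then wrap this in a BufferedReader and simply
# delegate read(), read1(), readline(), peek(), etc.:
#     self._buffer = io.BufferedReader(_RawReader(fp))
```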
What do you mean by “useful data”? peek() should always return at least one byte (except on EOF or on non-blocking streams, of course).
## BufferedReader.peek() ##

## Buffer sizing ##
## Common raw decompression stream class ##
In bpo-23528, Nikolaus also pointed out that “GzipFile would need to additionally overwrite read() and write() in order to handle the CRC and gzip header.”
On Feb 27 2015, Martin Panter <report@bugs.python.org> wrote:

I'm not 100% sure what you mean by that, but I think the answer is Yes.

Yes, in that case the DecompressReader should simply be used without …

I don't think so. There's not much to be gained by buffering data before …
I am posting LZMAFile-etc.v3.patch, where I have implemented a “buffer_size” parameter for the buffered LZMAFile etc. classes. I have not implemented open(buffering=...) for the time being (it should probably delegate to the buffer_size parameter, or return a raw _DecompressReader object, at least for read mode). Other changes:
Still to do: Need to find a better home for the _DecompressReader and _BaseStream classes. Currently they live in “lzma”, but apparently it is possible for any of the gzip, bz2 and lzma modules to not be importable, so they would have to live elsewhere. Possible options are the io module, or a brand new internal module (e.g. Lib/_compression.py). Thoughts?

Also I am about to see if I can make GzipFile use the _DecompressReader class. I will have to add GzipFile(buffer_size=...) as a keyword-only parameter since the third parameter position is already taken. There are quite a few quirks with gzip and zlib compared to bz2 and lzma, so I will see how I go.
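As a side note on the keyword-only point, a purely hypothetical sketch of the signature (a buffer_size parameter was never actually added to GzipFile):

```python
class GzipFile:
    # The leading positional slots are already taken by filename, mode,
    # compresslevel, fileobj and mtime, so a hypothetical buffer_size
    # parameter would have to go after the bare "*", i.e. keyword-only.
    def __init__(self, filename=None, mode=None, compresslevel=9,
                 fileobj=None, mtime=None, *, buffer_size=8192):
        self._buffer_size = buffer_size  # illustration only
```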
I have decided not to reuse the same _DecompressReader for the “gzip” module. This is because the code for parsing the gzip header pulls in data by calling read(), and I would have to adapt this code to accept whatever data is pushed to it via a decompress() method (a rough illustration of this pull-style parsing follows after this comment). However, I have rewritten the “gzip” module to use BufferedReader, removing its own readline() etc. implementations.

I have split the BaseStream and DecompressReader classes into a new file called Lib/_compression.py. Let me know if there is anything else to be done to add a new file in the Lib/ directory, or if you have any better ideas where these classes should live. Other changes in the new patch (LZMAFile-etc.v4.patch):
I hope that the patch does not need any more major changes, so it should be ready for review. There are a few more suggested enhancements that I have not implemented. While they would be nice, I would prefer to handle them in separate bugs and patches. I believe strengthening Python against gzip bombs is important, and any extra refactoring of the code will probably make things harder to review, more likely to break, and less likely to make it into 3.5. Some possible enhancements I did not do:
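To illustrate the pull-versus-push point above: the gzip header parser asks the underlying file for exactly the fields it needs, which does not map cleanly onto chunks being pushed into a decompress() call. A rough sketch of that style (not the actual gzip.py code; field handling is abbreviated):

```python
import struct

def read_gzip_header(fp):
    """Pull-style parsing: every field is obtained by another read() call."""
    magic = fp.read(2)
    if magic != b"\x1f\x8b":
        raise OSError("Not a gzipped file")
    method, flag, mtime = struct.unpack("<BBIxx", fp.read(8))
    if method != 8:
        raise OSError("Unknown compression method")
    if flag & 4:                      # FEXTRA: skip the extra field
        extra_len, = struct.unpack("<H", fp.read(2))
        fp.read(extra_len)
    if flag & 8:                      # FNAME: skip the original file name
        while fp.read(1) not in (b"", b"\x00"):
            pass
    if flag & 16:                     # FCOMMENT: skip the comment field
        while fp.read(1) not in (b"", b"\x00"):
            pass
    if flag & 2:                      # FHCRC: skip the header CRC
        fp.read(2)
    return mtime
```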
On Mar 06 2015, Martin Panter <report@bugs.python.org> wrote:

Yes.

I think a new internal module would be the right choice, but I don't …
Well, my last patch just added the _compression.py file without doing anything special, and it seems to be installed properly with “make install”, so I will assume nothing else needs to be done.
Posting LZMAFile-etc.v5.patch with the following changes:
There is still Nikolaus’s concern about setting the buffer size to zero and doing short reads, discussed on Rietveld. Apart from that, I think this patch addresses the rest of the comments. Let me know if I missed something!
If you want to add support for buffer_size=0 in a separate patch/issue, I think that's fine. But in that case I would not add a buffer_size parameter now at all. IMO, not having it is better than having it but not supporting zero (even if it's documented, that's pretty surprising).
The patch has a huge block replacement in the gzip.py module starting at GzipFile.write, which makes the diff large and makes it hard to identify the actual changes. Could that be ameliorated, or is it too much work right now?
Wolfgang: Unfortunately, I think that block of changes is largely unavoidable. The write() method should not have changed in reality, but all the init and read methods around it have been split into two classes, and the diff has decided to compare the old GzipFile methods to the new _DecompressReader methods. If you like, I could try preparing a pseudo diff file that forces it to compare old GzipFile to new GzipFile to highlight the changes there.

Nikolaus: I will try to split my buffer_size parameter changes out and post them in a separate issue that can be enhanced further.
I am posting v6, which basically drops the buffer_size parameters. The change is also pushed as a series of revisions to a Bitbucket repository; see <https://bitbucket.org/vadmium/cpython/branches/compare/compression%0D@>. I thought that might help eliminate the giant blocks of changes in gzip.py, but it did not. The previous changes for the buffer_size parameters are in the “buffer_size” bookmark in that repository.
As discussed on Rietveld, here's an attempt to reuse more of DecompressReader for GzipFile. There is still an unexplained test failure (test_read_truncated).
Wolfgang: If the patch here is too massive, perhaps you might like to review my patch at bpo-23528 first. :) It only fixes the “gzip” module, but is definitely simpler, so might have more chance of making it into 3.4 or 3.5 on time.

Changes introduced by patch v7, if anyone’s interested: <https://bitbucket.org/vadmium/cpython/commits/b7c43f7>. A few problems and behaviour changes are introduced:
However, I think the general idea of inheriting the common reader class and overriding as necessary is sound. It just looks like the code got too messed around in the process.
Another behaviour change in v7 is that the seekable() return value is now inherited from the underlying file, instead of always being True. I mentioned on Rietveld why this could be a bad idea, but as it is undocumented and untested, I will leave the new behaviour unless someone complains. Posting v8, with these changes from v7:
Except for the pointless 'myfileobj' stuff in gzip.py, rev 8 of the patch looks good to me. (Btw, I'm not actually in favor of e.g. the seekable() change. The previous patch was intended as a proof of concept to see what would be necessary to inherit more from DecompressReader and whether it's actually worth it. Having thought about it for a while longer, I don't think there's a significant gain. But I'm happy to see that you were able to cherry-pick some useful pieces out of it nevertheless.)
Is it still a work in progress, or are you looking for a review?
I believe Martin's patch (v8) is ready for a core committer review. At least I can't find anything to criticize anymore :-).
Yes, my patch should be ready, unless we want to work on factoring more common logic out of the gzip read() method.
Patch v9:
Antoine highlighted the fact that BufferedReader.peek() does not guarantee how much data it returns, so I removed the code trying to return at least 100 bytes.

I revisited merging the gzip read() loop with the others, but gave up again. I tried adding calls to _start_member() and _end_member(), with more “if” statements and flags to account for the different edge cases. But then I remembered the “needs_input” flag of LZMA and bzip versus the “unconsumed_tail” of zlib. Handling this would probably require another delegation to a new _decompress_chunk() method, at which point I think the whole read() method will probably just be a mess of hacks trying to marry the two not-quite-compatible implementations. So I think it is better to override read() with a gzip-specific implementation, unless someone wants to prove me wrong with their own patch :)
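For context, a rough, hypothetical sketch of the two incompatible loop shapes mentioned above (function names, chunk size and error handling are made up): the LZMA/bz2 decompressors keep leftover input internally and expose needs_input, while zlib hands unconsumed input back to the caller via unconsumed_tail.

```python
import bz2
import zlib

CHUNK = 8192

def read_limited_bz2(decomp, fp, max_length):
    """bz2/lzma style: the decompressor remembers leftover input itself."""
    while not decomp.eof:
        chunk = b""
        if decomp.needs_input:
            chunk = fp.read(CHUNK)
            if not chunk:
                raise EOFError("Compressed stream ended prematurely")
        data = decomp.decompress(chunk, max_length)
        if data:
            return data
    return b""

def read_limited_zlib(decomp, fp, max_length, leftover=b""):
    """zlib style: unconsumed input comes back and must be fed in again."""
    while not decomp.eof:
        chunk = leftover or fp.read(CHUNK)
        if not chunk:
            raise EOFError("Compressed stream ended prematurely")
        data = decomp.decompress(chunk, max_length)
        leftover = decomp.unconsumed_tail
        if data:
            return data, leftover
    return b"", leftover

# e.g. read_limited_bz2(bz2.BZ2Decompressor(), fp, 64 * 1024)
#      read_limited_zlib(zlib.decompressobj(-zlib.MAX_WBITS), fp, 64 * 1024)
```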
New changeset 62723172412c by Antoine Pitrou in branch 'default':
Thank you very much for being so perseverant! The patch is now pushed into the default branch.