Author: martin.panter
Recipients: alanmcintyre, benjamin.peterson, martin.panter, nadeem.vawda, pitrou, serhiy.storchaka, stutzbach
Date: 2015-01-09.12:22:55
Content
For what it’s worth, it would be better if compressed streams did limit the amount of data they decompress, so that they are not susceptible to decompression bombs; see Issue 15955. But having a flexibly-sized buffer could be useful in other cases.
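
As a rough sketch of what such a limit could look like with the existing zlib API (the MAX_CHUNK name and cap value here are illustrative assumptions, not part of any proposal):

    import zlib

    # Illustrative cap (an assumption for this sketch): refuse to inflate
    # a compressed chunk beyond this many bytes.
    MAX_CHUNK = 64 * 1024

    def bounded_decompress(compressed):
        """Decompress at most MAX_CHUNK bytes, rejecting likely bombs."""
        d = zlib.decompressobj()
        # max_length bounds the size of the returned buffer; input that
        # would overflow it is left in d.unconsumed_tail instead.
        out = d.decompress(compressed, MAX_CHUNK)
        if d.unconsumed_tail:
            raise ValueError("decompressed data would exceed %d bytes" % MAX_CHUNK)
        return out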

I haven’t looked closely at the code, but I wonder if there is much difference from the existing BufferedReader. Perhaps the only difference is that the underlying raw stream in this case can deliver data in arbitrary-sized chunks, whereas BufferedReader expects its raw stream to deliver chunks of a limited size?
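
If the mismatch is as I describe, one way to bridge it might be a raw-stream adapter that bounds each chunk it delivers; the following is only a sketch under that assumption (the class name, chunk_size, and the choice of zlib are all illustrative):

    import io
    import zlib

    class DecompressingRawIO(io.RawIOBase):
        # Hypothetical adapter: bounds each delivered chunk so that
        # io.BufferedReader's limited-chunk expectation is satisfied.

        def __init__(self, fileobj, chunk_size=8192):
            self._fileobj = fileobj              # underlying compressed stream
            self._decomp = zlib.decompressobj()
            self._chunk_size = chunk_size

        def readable(self):
            return True

        def readinto(self, b):
            while True:
                # Feed leftover input first; otherwise read more raw data.
                data = (self._decomp.unconsumed_tail
                        or self._fileobj.read(self._chunk_size))
                if not data:
                    return 0                     # EOF on the underlying stream
                # max_length=len(b) keeps the chunk within the caller's buffer.
                out = self._decomp.decompress(data, len(b))
                if out:
                    b[:len(out)] = out
                    return len(out)

    reader = io.BufferedReader(DecompressingRawIO(open("example.z", "rb")))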

If the buffer were exposed, it could be used to do many things more efficiently (a rough sketch using peek() follows this list):

* readline() with custom newline or end-of-record codes, solving Issue 1152248 and Issue 17083
* scan the buffer using string operations, regular expressions, etc., e.g. to skip whitespace or read a run of unescaped symbols
* tentatively read data to see if a keyword is present, but roll back if the data doesn’t match the keyword
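
The last two points can already be approximated with BufferedReader.peek(), though only within the current buffer; a rough illustration (the BEGIN keyword and the stream contents are made up):

    import io
    import re

    reader = io.BufferedReader(io.BytesIO(b"BEGIN   record data"))

    # Tentative read: test for a keyword without consuming anything.
    if reader.peek(5)[:5] == b"BEGIN":
        reader.read(5)                  # keyword present: now consume it
        # Scan the buffered data with a regular expression, e.g. to skip
        # a run of whitespace, then consume exactly what matched.
        run = re.match(rb"\s*", reader.peek()).group()
        reader.read(len(run))
    else:
        pass                            # keyword absent: position unchanged

Because peek() only exposes whatever happens to be buffered, the roll-back window is limited to the buffer size, which is part of why direct access to the buffer would be more general.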