This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

Author martin.panter
Recipients llllllllll, martin.panter
Date 2016-02-18.21:16:20
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1455830181.1.0.167471866512.issue26379@psf.upfronthosting.co.za>
In-reply-to
Content
If you don’t know how large the buffer is when you call zlib.decompress(), there may be copying involved as the buffer is expanded. Have you found that copying the bytes result into a bytearray is significantly worse than expanding a bytearray (e.g. timing results)?
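A rough benchmark sketch of the comparison being asked about, assuming the two strategies are (a) decompressing to bytes and copying the result into a bytearray, versus (b) feeding chunks to a decompressobj and growing a bytearray incrementally:

```python
import timeit
import zlib

data = zlib.compress(b"x" * 100_000)

def copy_into_bytearray():
    # Decompress to an immutable bytes object, then copy it
    # into a bytearray (one extra full copy of the output).
    return bytearray(zlib.decompress(data))

def expand_bytearray():
    # Feed the compressed data in chunks to a decompressobj
    # and extend a growing bytearray (copies as it resizes).
    d = zlib.decompressobj()
    buf = bytearray()
    for i in range(0, len(data), 65536):
        buf += d.decompress(data[i:i + 65536])
    buf += d.flush()
    return buf

t1 = timeit.timeit(copy_into_bytearray, number=50)
t2 = timeit.timeit(expand_bytearray, number=50)
print(f"copy into bytearray: {t1:.3f}s, expand bytearray: {t2:.3f}s")
```

Actual numbers will depend on the data size and compression ratio; this only illustrates how such timings could be gathered.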

The trouble with this proposal, to me, is that there doesn’t seem to be high demand for it. That is why I suggested the more minimal, low-level decompress_into() version. I think I would like to hear other opinions.
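For illustration, a pure-Python emulation of what a decompress_into() might look like. The name comes from this proposal; it is not an existing zlib API, and the signature and semantics here are assumptions (a real C implementation would write into the buffer directly instead of creating a temporary bytes object):

```python
import zlib

def decompress_into(data, buf):
    """Hypothetical helper: decompress `data` into the writable
    buffer `buf` and return the number of bytes written."""
    # A real implementation would decompress directly into `buf`;
    # this sketch goes through a temporary bytes object.
    out = zlib.decompress(data)
    n = len(out)
    if n > len(buf):
        raise ValueError("output buffer too small")
    buf[:n] = out
    return n

buf = bytearray(1024)
n = decompress_into(zlib.compress(b"hello"), buf)
```

The caller supplies a preallocated buffer, which is the point of the low-level design: no intermediate allocation is needed when the output size is known in advance.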

As for updating GzipFile, that uses an intermediate BufferedReader, so we cannot completely avoid copying. But it might be worthwhile updating the internal readinto() method anyway. Possibly also with the zipfile module, though I am not so familiar with that.
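A short sketch of the readinto() path mentioned above. GzipFile inherits a readinto() from io.BufferedIOBase, so a caller can already fill its own buffer; the point of the suggested update would be to make that internal path avoid a copy:

```python
import gzip
import io

raw = gzip.compress(b"payload bytes")
buf = bytearray(64)

# Read decompressed data into a caller-supplied buffer.
with gzip.GzipFile(fileobj=io.BytesIO(raw)) as f:
    n = f.readinto(buf)
```

Today this goes through the intermediate BufferedReader, so some copying remains regardless.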
History
Date User Action Args
2016-02-18 21:16:21  martin.panter  set  recipients: + martin.panter, llllllllll
2016-02-18 21:16:21  martin.panter  set  messageid: <1455830181.1.0.167471866512.issue26379@psf.upfronthosting.co.za>
2016-02-18 21:16:21  martin.panter  link  issue26379 messages
2016-02-18 21:16:20  martin.panter  create