issue4428 - make io.BufferedWriter observe max_buffer_size limits #49261
Comments
Reviewers: Antoine Pitrou

Message: I'll get to the other comments in the code soon, with new unit tests.

http://codereview.appspot.com/12470/diff/1/2
http://codereview.appspot.com/12470/diff/1/2#newcode1055
Agreed. But since I want to merge this into release30-maint doing that ...

http://codereview.appspot.com/12470/diff/1/2#newcode1060
Same reason as above.

http://codereview.appspot.com/12470/diff/1/2#newcode1066
http://codereview.appspot.com/12470/diff/1/2#newcode1070
http://codereview.appspot.com/12470/diff/1/3
http://codereview.appspot.com/12470/diff/1/3#newcode496
hmm, in that case it might not be too large of a thing to break when ...

Mmmh, adding <report@bugs.python.org> in the Rietveld CC field didn't ...

Description: Please review this at http://codereview.appspot.com/12470

Affected files:
  Lib/io.py
  Lib/test/test_io.py

Index: Lib/io.py
===================================================================
--- Lib/io.py   (revision 68796)
+++ Lib/io.py (working copy)
@@ -1047,11 +1047,42 @@
                     self._flush_unlocked()
                 except BlockingIOError as e:
                     # We can't accept anything else.
-                    # XXX Why not just let the exception pass through?
+                    # Reraise this with 0 in the written field as none of the
+                    # data passed to this call has been written.
                     raise BlockingIOError(e.errno, e.strerror, 0)
             before = len(self._write_buf)
-            self._write_buf.extend(b)
-            written = len(self._write_buf) - before
+            bytes_to_consume = self.max_buffer_size - before
+            # b is an iterable of ints, it won't always support len().
+            if hasattr(b, '__len__') and len(b) > bytes_to_consume:
+                try:
+                    chunk = memoryview(b)[:bytes_to_consume]
+                except TypeError:
+                    # No buffer API? Make intermediate slice copies instead.
+                    chunk = b[:bytes_to_consume]
+                # Loop over the data, flushing it to the underlying raw IO
+                # stream in self.max_buffer_size chunks.
+                written = 0
+                self._write_buf.extend(chunk)
+                while chunk and len(self._write_buf) > self.buffer_size:
+                    try:
+                        self._flush_unlocked()
+                    except BlockingIOError as e:
+                        written += e.characters_written
+                        raise BlockingIOError(e.errno, e.strerror, written)
+                    written += len(chunk)
+                    assert not self._write_buf, "_write_buf should be empty"
+                    if isinstance(chunk, memoryview):
+                        chunk = memoryview(b)[written:
+                                              written + self.max_buffer_size]
+                    else:
+                        chunk = b[written:written + self.max_buffer_size]
+                    self._write_buf.extend(chunk)
+            else:
+                # This could go beyond self.max_buffer_size as we don't know
+                # the length of b. The alternative of iterating over it one
+                # byte at a time in python would be slow.
+                self._write_buf.extend(b)
+                written = len(self._write_buf) - before
             if len(self._write_buf) > self.buffer_size:
                 try:
                     self._flush_unlocked()
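The chunked-consume strategy in the patch above can be modeled in isolation. A minimal sketch, assuming a toy `ChunkedWriter` and `RawSink` (illustrative names and sizes, not the real `io.BufferedWriter` machinery): never let the internal buffer exceed `max_buffer_size`, and flush to the raw stream whenever it grows past `buffer_size`.

```python
class RawSink:
    """Toy raw stream that records every write it receives."""
    def __init__(self):
        self.chunks = []

    def write(self, data):
        self.chunks.append(bytes(data))


class ChunkedWriter:
    """Illustrative model of the patch: consume b in bounded chunks."""
    def __init__(self, raw, buffer_size=8, max_buffer_size=16):
        self.raw = raw
        self.buffer_size = buffer_size
        self.max_buffer_size = max_buffer_size
        self._write_buf = bytearray()

    def _flush(self):
        if self._write_buf:
            self.raw.write(self._write_buf)
            del self._write_buf[:]

    def write(self, b):
        written = 0
        view = memoryview(b)  # zero-copy slicing, as in the patch
        while written < len(b):
            # Only take as much as fits under max_buffer_size.
            room = self.max_buffer_size - len(self._write_buf)
            chunk = view[written:written + room]
            self._write_buf.extend(chunk)
            written += len(chunk)
            # Flush once the buffer exceeds the soft limit.
            if len(self._write_buf) > self.buffer_size:
                self._flush()
        return written


sink = RawSink()
w = ChunkedWriter(sink)
n = w.write(b"x" * 40)
# 40 bytes are consumed as two 16-byte flushes plus 8 bytes left buffered.
```

Writing 40 bytes against `buffer_size=8` / `max_buffer_size=16` produces two raw writes of 16 bytes each, with 8 bytes remaining in `_write_buf` until the next flush; no intermediate state ever exceeds `max_buffer_size`.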
Index: Lib/test/test_io.py
===================================================================
         self.assertEquals(b"abcdefghijkl", writer._write_stack[0])

+    def testWriteMaxBufferSize(self):
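The test_io.py hunk above is truncated in the migration. For illustration only, here is a self-contained sketch of a similar test against today's io module, using a hypothetical `MockRaw` stand-in for test_io's mock raw class (the `max_buffer_size` parameter is no longer part of the modern API, so this only exercises `buffer_size`-driven flushing):

```python
import io

class MockRaw(io.RawIOBase):
    """Hypothetical raw stream that records writes, like test_io's mocks."""
    def __init__(self):
        self._write_stack = []

    def writable(self):
        return True

    def write(self, b):
        data = bytes(b)
        self._write_stack.append(data)
        return len(data)


raw = MockRaw()
# buffer_size=8: flushing kicks in once more than 8 bytes are pending.
writer = io.BufferedWriter(raw, buffer_size=8)
writer.write(b"abcdefghijkl")  # 12 bytes, larger than the buffer
writer.flush()
```

After the explicit flush, all 12 bytes must have reached the raw stream, though the buffered layer is free to split them across multiple raw writes.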