Message387391
> In your code, huge data passed to .write(huge) may
> remain in the internal buffer.
If you mean the buffered writer, then I don't see the problem. A large bytes object in pending_bytes is temporarily referenced by _textiowrapper_writeflush(), and self->pending_bytes is cleared. If the buffer's write() method fails, the bytes object is simply deallocated.
If you mean pending_bytes in the text wrapper, then I also don't see the problem. It always gets flushed once pending_bytes_count exceeds the chunk size. If pending_bytes is a list, it never exceeds the chunk size, because it gets pre-flushed to avoid growing beyond it. If pending_bytes isn't a list, then it can only exceed the chunk size if it's a bytes object, never an ASCII str object, and _textiowrapper_writeflush() does not call PyBytes_AsStringAndSize() in that case.
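The flush behavior described above can be observed from Python. The sketch below is a rough illustration, not a spec: it relies on CPython implementation details (the internal pending_bytes cache and the _CHUNK_SIZE attribute of TextIOWrapper), so the exact numbers are not guaranteed across implementations.

```python
import io

# Wrap a BytesIO directly so we can inspect what has actually been
# written out of the text wrapper's internal pending_bytes cache.
raw = io.BytesIO()
tw = io.TextIOWrapper(raw, encoding="ascii")

tw.write("a")                         # small ASCII write: cached internally
cached = raw.getvalue()               # nothing reaches the buffer yet

tw.write("x" * (tw._CHUNK_SIZE * 2))  # pending_bytes_count exceeds the chunk
flushed = raw.getvalue()              # size, so everything pending is flushed

print(len(cached), len(flushed))
```

A write that stays under the chunk size leaves raw untouched, while the oversized write forces _textiowrapper_writeflush() to push the entire pending buffer through, without an explicit flush() call.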
Date | User | Action | Args
2021-02-20 05:20:02 | eryksun | set | recipients: + eryksun, paul.moore, tim.golden, methane, zach.ware, steve.dower, Ramin Farajpour Cami
2021-02-20 05:20:01 | eryksun | set | messageid: <1613798401.99.0.464984107092.issue43260@roundup.psfhosted.org>
2021-02-20 05:20:01 | eryksun | link | issue43260 messages
2021-02-20 05:20:01 | eryksun | create |