msg384565 |
Author: Andrew MacDonald (amacd31) |
Date: 2021-01-07 08:03 |
When attempting to read a large file (> 2 GB) over HTTPS, the read fails with "OverflowError: signed integer is greater than maximum".
This occurs with Python >= 3.8, and I've been able to reproduce the problem with the snippet below on Linux, Mac OS X, and Windows. The remote file can be any HTTPS-hosted file larger than 2 GB; e.g. a sparse 2 GB file generated with `dd if=/dev/zero of=2g.img bs=1 count=0 seek=2G` will also do the job.
```
import http.client
connection = http.client.HTTPSConnection("mirror.aarnet.edu.au")
connection.request("GET", "/pub/centos/8/isos/x86_64/CentOS-8.3.2011-x86_64-dvd1.iso")
response = connection.getresponse()
data = response.read()
```
A git bisect points to commit d6bf6f2d0c83f0c64ce86e7b9340278627798090 (https://github.com/python/cpython/commit/d6bf6f2d0c83f0c64ce86e7b9340278627798090) as the cause. Looking over the associated issue and commit message, this does not appear to have been an intended outcome of that change.
|
msg384566 |
Author: Christian Heimes (christian.heimes) *  |
Date: 2021-01-07 08:15 |
I cannot lift the overflow restriction until we drop support for OpenSSL 1.0.2. The functions SSL_write() and SSL_read() are limited to a signed 32-bit int. OpenSSL 1.1.1 has new SSL_write_ex() and SSL_read_ex() functions that take a size_t. Even so, size_t limits the maximum value to an unsigned 32-bit integer (~4 GB) on 32-bit systems.
|
msg384649 |
Author: Ronald Oussoren (ronaldoussoren) *  |
Date: 2021-01-08 09:52 |
The API documentation already implies that write might not write the entire buffer, because it returns the number of bytes actually written (just like os.write).
A possible workaround at the SSL layer is hence to clamp the number of bytes to write to INT_MAX (or, later, SSIZE_MAX).
That said, this does require checking that users of the SSL layer's write method in the stdlib actually check the number of bytes written; otherwise we'd exchange the exception for a silent error.
|
msg384658 |
Author: Christian Heimes (christian.heimes) *  |
Date: 2021-01-08 13:05 |
That's a good idea, Ronald! Modules/socketmodule.c:sock_send_impl() already clamps the input length on Windows:
```
#ifdef MS_WINDOWS
    if (ctx->len > INT_MAX)
        ctx->len = INT_MAX;
    ctx->result = send(s->sock_fd, ctx->buf, (int)ctx->len, ctx->flags);
#else
    ctx->result = send(s->sock_fd, ctx->buf, ctx->len, ctx->flags);
#endif
```
I could implement similar logic for SSLSocket. Applications have to check the return value of send() anyway, or use sendall(); the socket.send() method / send(2) libc function may also write fewer bytes than requested.
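As a hedged illustration of that caller-side contract (a minimal sketch, not stdlib code; the helper name `send_all` is made up here for illustration), a write loop only has to trust the documented return value of send(), so a clamped write in the C layer looks just like an ordinary short write:
```
import ssl

def send_all(sock: ssl.SSLSocket, data: bytes) -> None:
    """Keep calling send() until the whole buffer has been transmitted.

    send() returns the number of bytes actually written, so a partial
    (e.g. clamped) write is handled the same way as a short write on a
    plain socket.
    """
    view = memoryview(data)
    while view:
        sent = sock.send(view)   # may be fewer bytes than len(view)
        view = view[sent:]       # retry with the unsent remainder
```
sendall() already does this for plain sockets; the point is only that clamping stays invisible to callers that honor the return value.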
|
msg391359 |
Author: Christian Heimes (christian.heimes) *  |
Date: 2021-04-19 04:04 |
Python 3.10 will use SSL_write_ex() and SSL_read_ex(), which support > 2 GB data.
|
msg394947 |
Author: (jakirkham) |
Date: 2021-06-02 21:35 |
Would it be possible to check for these newer OpenSSL symbols during the builds of Python 3.8 & 3.9 (using them when available and falling back to the older API otherwise)? This would allow people building Python 3.8 & 3.9 with the newer OpenSSL to benefit from the fix.
That said, I'm not sure if there are other obstacles to using OpenSSL 1.1.1 with Python 3.8 & 3.9.
|
msg396454 |
Author: Matan Perelman (matan1008) * |
Date: 2021-06-24 06:51 |
A bit of extra context to save clicking through:
the PR which introduced the regression: https://github.com/python/cpython/pull/12698
and the bug: https://bugs.python.org/issue36050
Maybe someone has context on why we couldn't just roll back that PR and increase `MAXAMOUNT` to something > 1 MiB and < 2 GiB?
|
msg396458 |
Author: Inada Naoki (methane) *  |
Date: 2021-06-24 08:12 |
Because this is an SSL issue; HTTPS is not the only user of SSL.
So we should try to fix the SSL issue before reverting GH-12698.
|
msg396459 |
Author: Christian Heimes (christian.heimes) *  |
Date: 2021-06-24 08:28 |
The ssl module supports sending or receiving buffers of more than 2 GB in Python 3.10. It cannot be reliably fixed in Python 3.9 and earlier.
|
msg396460 |
Author: Inada Naoki (methane) *  |
Date: 2021-06-24 08:33 |
I see. But Python 3.8 is now in security-fix mode.
Let's revert the optimization in the 3.9 branch.
|
msg396461 |
Author: (jakirkham) |
Date: 2021-06-24 08:34 |
Right, with this change (https://github.com/python/cpython/pull/25468). Thanks for adding that, Christian :)
I guess what I'm wondering is whether, in older Python versions, we could do an `#ifdef` check and use `SSL_read_ex` & `SSL_write_ex` if the symbols are found at build time. This would give package maintainers the option to build with a newer OpenSSL to fix this issue (even on older Pythons).
|
msg396463 |
Author: Christian Heimes (christian.heimes) *  |
Date: 2021-06-24 08:54 |
No, it would be a backwards incompatible change and introduce inconsistent behavior. SSL_read_ex() is not available in LibreSSL, OpenSSL 1.0.2, and OpenSSL 1.1.0. Your code would work on CentOS 8, Debian 10, and Ubuntu 20.04, but break on CentOS 7, Debian 9, Ubuntu 18.04, and OpenBSD.
|
msg396509 |
Author: (jakirkham) |
Date: 2021-06-24 20:15 |
Not following. Why would it break? Presumably once one builds Python for a particular OS, it stays on that OS (as with system package managers, for example).
Alternatively, they build on a much older OS and then deploy to newer ones. The latter is what we do in conda-forge (Anaconda does the same thing). We also build our own OpenSSL, so we have control of that knob too. Though I've seen application developers do similar things.
Do you have an example where this wouldn't work? Maybe that would help, as then we could define a better solution.
|
msg396510 |
Author: Christian Heimes (christian.heimes) *  |
Date: 2021-06-24 20:39 |
Right now sending and receiving buffers >2 GB over TLS consistently fails on all platforms with Python 3.9. A backport of bf624032c12c763b72594e5f41ff8af309b85264 to Python 3.9 would make the behavior inconsistent. Your code would work on your laptop with a recent Fedora or Ubuntu version that has OpenSSL 1.1.1. But it would suddenly fail on your production system with Debian 9, because it has OpenSSL 1.1.0 and Python would have to fall back to SSL_read().
In my experience these kinds of inconsistencies cause headaches and frustrations. A consistent error gives you a chance to notice a problem early and to implement a workaround.
|
msg398377 |
Author: Łukasz Langa (lukasz.langa) *  |
Date: 2021-07-28 13:27 |
New changeset 153365d864c411f6fb523efa752ccb3497d815ca by Inada Naoki in branch '3.9':
[3.9] bpo-42853: Fix http.client fails to download >2GiB data over TLS (GH-27405)
https://github.com/python/cpython/commit/153365d864c411f6fb523efa752ccb3497d815ca
|
msg398388 |
Author: Bruce Merry (bmerry) * |
Date: 2021-07-28 14:15 |
This fix is going to cause a regression of bpo-36050. Would it not be possible to fix this in _ssl.c, by breaking a large read into multiple smaller calls to SSL_read? Fixing this at the SSL layer seems more appropriate than working around it at the HTTP layer, which impacts the performance of all HTTP fetches (whether using TLS or not, and whether >2 GB or not).
|
msg398408 |
Author: (jakirkham) |
Date: 2021-07-28 17:34 |
Agree with Bruce. It seems like we could support OpenSSL 1.1.1 at that level, with a compile-time fallback that breaks up the work for previous OpenSSL versions. I would hope this solution also yields something we can backport more easily.
|
msg398412 |
Author: Bruce Merry (bmerry) * |
Date: 2021-07-28 18:09 |
> It seems like we could support OpenSSL 1.1.1 at that level, with a compile-time fallback that breaks up the work for previous OpenSSL versions. I would hope this solution also yields something we can backport more easily.
I'd have to look at exactly how the SSL_read API works, but I think that once we're in C land and can read into regions of a buffer, reading in 2 GB chunks is unlikely to cause a performance hit (unlike the original bpo-36050, where Python had to read a bunch of separate buffers and then join them together). So trying to have 3.9 both support SSL_read_ex AND have a fallback sounds like adding complexity and risking inconsistency if the fallback doesn't perfectly mimic the SSL_read_ex path, for very little gain.
If no one else steps up sooner I can probably work on a patch, but before sinking time into it I'd like to hear whether there is agreement that this is a reasonable approach, and ideally have a volunteer to review it (hopefully someone who is familiar with OpenSSL, since I've only briefly dealt with it years ago and crypto isn't somewhere you want to make mistakes).
|
msg398436 |
Author: Christian Heimes (christian.heimes) *  |
Date: 2021-07-28 22:34 |
A patch would not land in Python 3.9 since this would be a new feature and out-of-scope for a released version.
Do you really want to store gigabytes of downloads in RAM instead of doing chunked reads and storing them on disk? If you cannot or don't want to write to disk, then there are easier and better ways to deal with large buffers. You could either use mmap() or a memoryview of a bytearray buffer:
```
buf = bytearray(4 * 1024**3)
view = memoryview(buf)
pos = 0
while True:
    read = conn.recv_into(view[pos:pos + 1048576])
    if not read:
        break
    pos += read
```
|
msg398464 |
Author: Bruce Merry (bmerry) * |
Date: 2021-07-29 07:30 |
> A patch would not land in Python 3.9 since this would be a new feature and out-of-scope for a released version.
I see it as a fix for this bug. While there is already a fix, it regresses another bug (bpo-36050), so this would be a better fix.
> Do you really want to store gigabytes of downloads in RAM instead of doing chunked reads and storing them on disk?
I work on HPC applications where large quantities of data are stored in an S3-compatible object store and fetched over HTTP at 25 Gb/s for processing. The data access layer tries very hard to avoid even making extra copies in memory (which is what caused me to file bpo-36050 in the first place), as it makes a significant difference at those speeds. Buffering to disk would be right out.
> then there are easier and better ways to deal with large buffers
Your example code is probably fine if one is working directly on an SSLSocket, but http.client wraps it in a buffered reader (via `socket.makefile`), and that implements `readinto` by reading into a temporary and copying it (https://github.com/python/cpython/blob/8d0647485db5af2a0f0929d6509479ca45f1281b/Modules/_io/bufferedio.c#L88), which would add overhead.
I appreciate that what I'm proposing is a relatively complex change for a released version. A less intrusive option would be to change MAXAMOUNT in http.client from 1 MiB to 2 GiB - 1 byte (as suggested by @matan1008). That would still leave 3.9 slower than 3.8 when reading >2 GiB responses over plain HTTP, but at least everything in the range [1 MiB, 2 GiB) would operate at full speed (which is the region I actually care about).
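For anyone stuck on an affected version in the meantime, the read can also be chunked on the caller's side so that each underlying SSL read stays far below the 2 GiB limit. This is only a minimal sketch (the 1 MiB chunk size, the preallocated buffer, and the reliance on a Content-Length header are illustrative assumptions, not part of any proposed patch); it reuses the URL from the original report:
```
import http.client

CHUNK = 1024 * 1024  # 1 MiB per read keeps every SSL read well under 2 GiB

connection = http.client.HTTPSConnection("mirror.aarnet.edu.au")
connection.request("GET", "/pub/centos/8/isos/x86_64/CentOS-8.3.2011-x86_64-dvd1.iso")
response = connection.getresponse()

# Preallocating one buffer avoids the repeated joins that bpo-36050 was about.
length = int(response.getheader("Content-Length"))
buf = bytearray(length)
view = memoryview(buf)
pos = 0
while pos < length:
    n = response.readinto(view[pos:pos + CHUNK])
    if not n:
        break
    pos += n
```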
|
msg398494 |
Author: Łukasz Langa (lukasz.langa) *  |
Date: 2021-07-29 16:08 |
> This fix is going to cause a regression of bpo-36050. Would it not be possible to fix this in _ssl.c, by breaking a large read into multiple smaller calls to SSL_read? Fixing this at the SSL layer seems more appropriate than working around it at the HTTP layer, which impacts the performance of all HTTP fetches (whether using TLS or not, and whether >2 GB or not).
Bruce, you're discussing the *revert* of the bpo-36050 change as if it were a new change. It isn't; it is literally returning to a state that was previously tested and had worked fine since Python 3.5. By reverting, we're choosing correctness over performance here.
Introducing a new implementation for bpo-36050 is now out of scope for 3.9 as it's a performance-oriented change which we cannot accept in a bugfix release.
|
|
Date | User | Action | Args |
2022-04-11 14:59:40 | admin | set | github: 87019 |
2021-07-29 16:08:26 | lukasz.langa | set | messages: + msg398494 |
2021-07-29 07:30:32 | bmerry | set | messages: + msg398464 |
2021-07-28 22:34:25 | christian.heimes | set | messages: + msg398436 |
2021-07-28 18:09:24 | bmerry | set | messages: + msg398412 |
2021-07-28 17:34:54 | jakirkham | set | messages: + msg398408 |
2021-07-28 14:15:45 | bmerry | set | nosy: + bmerry; messages: + msg398388 |
2021-07-28 13:28:20 | lukasz.langa | set | status: open -> closed; resolution: fixed; stage: patch review -> resolved |
2021-07-28 13:27:57 | lukasz.langa | set | nosy: + lukasz.langa; messages: + msg398377 |
2021-07-28 06:52:17 | methane | set | keywords: + patch; stage: patch review; pull_requests: + pull_request25938 |
2021-06-24 20:39:31 | christian.heimes | set | messages: + msg396510; versions: - Python 3.8 |
2021-06-24 20:15:36 | jakirkham | set | messages: + msg396509 |
2021-06-24 08:54:07 | christian.heimes | set | messages: + msg396463 |
2021-06-24 08:34:13 | jakirkham | set | messages: + msg396461 |
2021-06-24 08:33:00 | methane | set | messages: + msg396460 |
2021-06-24 08:28:40 | christian.heimes | set | messages: + msg396459 |
2021-06-24 08:12:08 | methane | set | messages: + msg396458 |
2021-06-24 06:51:09 | matan1008 | set | nosy: + matan1008; messages: + msg396454 |
2021-06-02 21:35:28 | jakirkham | set | nosy: + jakirkham; messages: + msg394947 |
2021-04-19 04:04:53 | christian.heimes | set | messages: + msg391359; versions: - Python 3.10 |
2021-02-05 11:01:02 | jan-xyz | set | nosy: + jan-xyz |
2021-01-08 13:05:25 | christian.heimes | set | messages: + msg384658 |
2021-01-08 09:52:17 | ronaldoussoren | set | nosy: + ronaldoussoren; messages: + msg384649 |
2021-01-07 08:15:02 | christian.heimes | set | messages: + msg384566 |
2021-01-07 08:07:32 | christian.heimes | set | assignee: christian.heimes -> methane; components: + Library (Lib), - SSL; nosy: + methane |
2021-01-07 08:03:09 | amacd31 | create | |