This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

Title: Make SSL suppress_ragged_eofs default more secure
Type: security
Stage: patch review
Components: Library (Lib), SSL
Versions: Python 3.7
Status: open
Resolution:
Dependencies:
Superseder:
Assigned To:
Nosy List: alex, cheryl.sabella, christian.heimes, jab, martin.panter, njs, vstinner
Priority: normal
Keywords: patch

Created on 2016-08-20 13:19 by martin.panter, last changed 2022-04-11 14:58 by admin.

Files
File name             Uploaded                         Description  Edit
                      martin.panter, 2016-08-20 13:19
ragged-eofs.patch     martin.panter, 2016-09-22 09:24  review
ragged-eofs.v2.patch  martin.panter, 2016-10-15 23:30  review
Pull Requests
URL Status Linked Edit
PR 5266 vstinner, 2018-01-22 10:46
Messages (17)
msg273208 - (view) Author: Martin Panter (martin.panter) * (Python committer) Date: 2016-08-20 13:19
In the SSL module, the wrap_socket() function (and corresponding SSLContext method) take a flag called “suppress_ragged_eofs”. It defaults to True, which makes me uncomfortable. The documentation says:

The parameter “suppress_ragged_eofs” specifies how the SSLSocket.recv() method should signal unexpected EOF from the other end of the connection. If specified as True (the default), it returns a normal EOF (an empty bytes object) in response to unexpected EOF errors raised from the underlying socket; if False, it will raise the exceptions back to the caller.

I understand the “unexpected EOF error” happens when the underlying socket indicates EOF but the connection was not shut down at the SSL protocol level. As well as an EOF from the other end of the connection, it can be caused by a proxy, or anything else on the network that can affect the connection. I think it may be better to report this error by default, just like other unsecured network-level errors such as connection reset or timeout. Otherwise it is too easy to treat this insecure EOF condition as if it were a secure EOF signal from the remote peer.

The flag was added as part of r64578, for Issue 1223, in Python 2.6. The reason given in that bug report was to help Python work with an HTTP server. However, my theory is that the server was closing the connection incorrectly; if so, the server should have been fixed rather than Python.

There is plenty of precedent for using suppress_ragged_eofs=True with HTTPS servers. As well as Issue 1223, there is Issue 494762 and Issue 500311. And in my experiments, Curl and Firefox both seem to treat the error the same as a secure EOF. Maybe there should be a way to keep supporting HTTPS servers that trigger the error (though I would rather it not be the default).

Attached is a HTTP proxy server that will let you break a connection after returning a set number of bytes (or any time by killing the server), which can trigger the error condition.
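
The attached script itself is not reproduced here, but the core of such a byte-limiting forwarder could be sketched as follows (a hypothetical sketch with made-up names, not the attached file). The key point is that the close happens below the TLS layer, so no close_notify alert is ever relayed and an SSL client downstream sees a “ragged” EOF:

```python
def forward_limited(src, dst, limit):
    """Copy at most `limit` bytes from socket `src` to socket `dst`,
    then hard-close both ends.  Because the close is a plain TCP close,
    no TLS close_notify is relayed to the client behind `dst`."""
    sent = 0
    while sent < limit:
        chunk = src.recv(min(4096, limit - sent))
        if not chunk:          # upstream finished before the limit
            break
        dst.sendall(chunk)
        sent += len(chunk)
    src.close()
    dst.close()
    return sent
```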

Example output of proxy server:
$ python --limit 12345
Proxy server at
Proxying connection to
Forwarded 12345 B to client

Python 2’s httplib module does not treat a short HTTP response as an error, so the following request gets truncated without much indication of a problem:
$ https_proxy= python2 -c 'import urllib2; print(urllib2.urlopen("").read())'
<!doctype html>
[. . .]
        <!--[if lt IE 8]>
        <div id="oldie-warning" class
msg277212 - (view) Author: Martin Panter (martin.panter) * (Python committer) Date: 2016-09-22 09:24
I have been experimenting with a patch that changes the default to suppress_ragged_eofs=False.

One disadvantage of this change is that it could make servers less robust. E.g. in the tests, I explicitly enabled suppress_ragged_eofs=True in a server, because otherwise I would have to add extra cleanup if an exception is raised on the server side. In another test server, based on socketserver, I added special code to silence logging an SSLEOFError exception (a side effect of a particular test case).

But ideally, real-world servers should already handle other exceptions that are triggered outside the server’s control, like ECONNRESET and TLSV1_ALERT_UNKNOWN_CA. Socketserver already logs all these kinds of errors. Plus, I think SSLEOFError has always been possible in the handshake phase.
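
That style of robustness can be sketched with a small wrapper around a per-connection handler (hypothetical helper, not code from the patch): peer-triggered failures are logged and swallowed rather than allowed to crash the server.

```python
import ssl

# Failures the server cannot prevent: the peer or the network causes
# them, so a robust server logs them and moves on.
PEER_ERRORS = (ConnectionResetError, ssl.SSLError, TimeoutError)

def handle_connection(conn, handler):
    """Run handler(conn); treat peer-triggered errors (including
    SSLEOFError, a subclass of SSLError) as a normal end of connection."""
    try:
        handler(conn)
    except PEER_ERRORS as exc:
        print("client error: %r" % (exc,))  # a real server would log this
    finally:
        conn.close()
```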

With HTTP I didn’t have to go further than Google to find a server that terminates the response with a non-SSL shutdown (although because truncated JSON is invalid, it is not a big deal). For the record, here is a stripped-down demo. The same problem also happens for a more complete request with valid parameters.

>>> import socket, ssl
>>> s = socket.create_connection(("", 443))
>>> ss = ssl.wrap_socket(s, suppress_ragged_eofs=False)
>>> ss.sendall(b"POST /o/oauth2/token HTTP/1.1\r\n"
...     b"Host:\r\n"
...     b"Content-Length: 0\r\n"
...     b"Connection: close\r\n"
...     b"\r\n")
>>> print("\n".join(map(repr, ss.recv(3000).splitlines(keepends=True))))
b'HTTP/1.1 400 Bad Request\r\n'
b'Content-Type: application/json; charset=utf-8\r\n'
b'Cache-Control: no-cache, no-store, max-age=0, must-revalidate\r\n'
b'Pragma: no-cache\r\n'
b'Expires: Mon, 01 Jan 1990 00:00:00 GMT\r\n'
b'Date: Thu, 22 Sep 2016 03:10:20 GMT\r\n'
b'X-Content-Type-Options: nosniff\r\n'
b'X-Frame-Options: SAMEORIGIN\r\n'
b'X-XSS-Protection: 1; mode=block\r\n'
b'Server: GSE\r\n'
b'Alt-Svc: quic=":443"; ma=2592000; v="36,35,34,33,32"\r\n'
b'Accept-Ranges: none\r\n'
b'Vary: Accept-Encoding\r\n'
b'Connection: close\r\n'
b'  "error" : "invalid_request",\n'
b'  "error_description" : "Required parameter is missing: grant_type"\n'
>>> ss.recv(3000)  # HTTP-level client does not know that this is the EOF yet
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/proj/python/cpython/Lib/", line 987, in recv
  File "/home/proj/python/cpython/Lib/", line 865, in read
    return, buffer)
  File "/home/proj/python/cpython/Lib/", line 627, in read
    v =
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:2176)

In this case, if the client does not send “Connection: close”, the server uses chunked encoding and there is no problem. So this is another instance of Issue 12849 (Python’s unusual request triggering a server bug).

I wonder if a solution would be to use suppress_ragged_eofs=False by default, but add a way to let the user of the http.client module explicitly allow SSLEOFError to signal a proper EOF.
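
One possible shape for that opt-in (a hypothetical helper for illustration, not an API from the attached patch) is a thin wrapper around a response’s binary file object that converts SSLEOFError back into a plain EOF:

```python
import ssl

class RaggedEOFTolerantFile:
    """Wrap a binary file-like object, reporting ssl.SSLEOFError as a
    normal EOF (b"").  Hypothetical helper sketching the opt-in idea."""

    def __init__(self, raw):
        self._raw = raw

    def read(self, amt=-1):
        try:
            return self._raw.read(amt)
        except ssl.SSLEOFError:
            # Peer closed TCP without close_notify; the caller has
            # explicitly opted in to treating that as a clean EOF.
            return b""

    def __getattr__(self, name):
        return getattr(self._raw, name)  # delegate everything else
```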
msg278737 - (view) Author: Martin Panter (martin.panter) * (Python committer) Date: 2016-10-15 23:30
Patch v2 also adds a new attribute to context objects. With this I can work around my Google server bug:

import ssl
import urllib.request

context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.suppress_ragged_eofs = True
handler = urllib.request.HTTPSHandler(context=context)
urlopen = urllib.request.build_opener(handler).open
urlopen(urllib.request.Request(url="", ...))
msg301711 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2017-09-08 18:13
I don't consider myself qualified enough to make a decision. Alex, Victor, what do you think?
msg301751 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-09-08 22:49
Martin: can you please create a pull request? It would be easier to review your change.
msg301752 - (view) Author: Alex Gaynor (alex) * (Python committer) Date: 2017-09-08 22:51
Mmmm, my understanding is that ignoring TCP-FIN/RST-without-TLS-closenotify is pretty common for a lot of different clients.

We should probably survey the landscape and see what both browsers and non-browser clients (e.g. curl) do before making a decision.
msg301996 - (view) Author: Martin Panter (martin.panter) * (Python committer) Date: 2017-09-12 21:37
Even if some use cases depend on suppress_ragged_eofs=True, I think it is best to avoid that as the default. There could be a deprecation period if necessary.

I tested some HTTP clients I had handy. In summary, most seemed to handle a truncation attack on the Set-Cookie field sensibly (but Python is vulnerable without my patch or other solution). On the other hand, all the clients I tried handled one case of an insecurely-truncated body the same as a normal EOF (while I propose to treat this as an error).

The clients I tested: Firefox/46.0, curl/7.43.0, Wget/1.13.4, Links 2.12, ELinks/0.13.GIT, Python-urllib/3.5 (unpatched), and Python-urllib/3.7 with my patch. I tried three test cases:

1. Truncate Set-Cookie field, with no terminating newline. The client should not accept the cookie, in case an attribute such as “Secure” was removed, like in <>.
>>> c.sendall(b"Set-Cookie: COOKIE=meant-to-be-Secure")
>>> c.shutdown(SHUT_RDWR)

Python (unpatched) treats the Set-Cookie field as valid. It appears in the HTTPResponse object, with no clue that it was truncated. Wget was also vulnerable. Firefox and Curl did not record the cookie, but did not indicate any error either. Links does not support cookies, while Elinks tried 3 times and reported an error.

2. Content-Length response with truncated text/html. IMO a client should inform the user that the response was cut off (with or without SSL), but sometimes the user may want to see the first half of the response anyway.
>>> c.sendall(b"Content-Length: 100\r\n\r\n" b"Truncat")
>>> c.shutdown(SHUT_RDWR)

Curl, Wget, Links and Elinks all output the incomplete response and reported an error. Firefox displayed the truncated page without indicating any error. In most cases, Python raised an IncompleteRead exception, but this depended on the pattern of read calls, and some or all of the truncated data was hidden in the undocumented “IncompleteRead.partial” attribute.

3. “Connection: close” response with truncated HTML:
>>> c.sendall(b"Connection: close\r\n\r\n" b"Truncat")
>>> c.shutdown(SHUT_RDWR)

This is the case where all the clients (other than my patched Python) treated this like a valid non-truncated response. But IMO in general it should be dealt with like the Content-Length case if the SSL level wasn’t shut down properly.

Victor: Sorry, I’m unlikely to make a GitHub pull request any time soon, but I don’t mind if someone else does.
msg310389 - (view) Author: Cheryl Sabella (cheryl.sabella) * (Python committer) Date: 2018-01-21 23:49
I converted the patch to a PR.  

It wouldn't merge cleanly, which meant I had to apply it manually, so please check it for errors.  Some issues I ran into:
1. The patch had a change to __slots__, but that line no longer existed and I didn't know if I needed to make a different change to accommodate it.
2. I converted the 'What's New' to a blurb.
msg310390 - (view) Author: Nathaniel Smith (njs) * (Python committer) Date: 2018-01-22 01:42
The current default is hard to defend -- it's a clear violation of the TLS specs. But I suspect that changing it will be pretty disruptive, because suppress_ragged_eofs=True is the de facto standard for HTTP-over-TLS (generally justified on the grounds that HTTP has its own framing, so the TLS-level close-notify stuff is redundant), and that means that if we flip it, cases that used to work correctly will start raising errors. We could avoid some of these issues by making http.client explicitly use suppress_ragged_eofs=True, but then that won't fix Martin's problems...

A discussion about changing it should also consider what to do on sslsock.close(), since right now Python always gives the other side a "ragged eof" unless you call "sslsock.unwrap()", and I've never seen any code that does this. So flipping the suppress_ragged_eofs default by itself will make Python's ssl module fail at interoperating with itself.

> 1. Truncate Set-Cookie field, with no terminating newline. [...] Python (unpatched) treats the Set-Cookie field as valid. It appears in the HTTPResponse object, with no clue that it was truncated.

This sounds like a bug in http.client – the lack of terminating newline should cause a parse failure regardless of what the SSL layer does (and regardless of whether we're even using SSL).

> 2. Content-Length response with truncated text/html. [...] In most cases, Python raised an IncompleteRead exception, but it depended on the pattern of read calls

Also a bug in http.client.

> 3. “Connection: close” response with truncated HTML:

This one's a little more interesting... First, "Connection: close" doesn't affect the framing of the HTTP body, it just says that this connection can't be reused for another request afterwards. The way HTTP works, if you have a response with neither a Content-Length header *nor* a Transfer-Encoding: chunked header, then it's assumed that the framing is determined by the connection being closed -- I assume that this is what you generated here. Now, this is a hack for backwards compatibility with HTTP/1.0; in theory, no modern server should ever generate such a response. In practice, who knows, people do all kinds of nonsense.

But... do we really think that servers who can't be bothered to implement HTTP/1.1, are also going to take the trouble to make sure their SSL/TLS handling is correct to the spec? If the server decides to use connection-close framing, that's their decision, and the client is kind of at their mercy.
msg310403 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2018-01-22 08:49
Before we can disable ragged EOF handling, we first have to fix ssl.SSLSocket()'s shutdown behavior. It should support at least a proper unidirectional shutdown besides the slow bidirectional shutdown. It might even be a good idea to default to unidirectional shutdown. Curl does it.

I wrote this in a mail to Cory about PEP 543 two weeks ago:

Now to the nasty session and shutdown issue. I discovered it just a
couple of weeks ago.

We have to revisit TLS socket shutdown in order to make TLS session
resumption work. OpenSSL discards and invalidates sessions when an SSL
socket isn't properly shut down, e.g. when the I/O layer is hard-closed.
Simply speaking, OpenSSL has four shutdown modes: bidirectional,
unidirectional, quiet, and hard. Python's ssl module supports the slow
bidirectional mode and hard mode.

* In bidirectional mode, both parties send a "close notify" alert and
wait for confirmation.
* Unidirectional means that one party just sends a "close notify" alert
and then closes the connection. It doesn't wait for ACK.
* Quiet shutdown doesn't send anything. It merely sets some internal
flags to mark the connection as closed and the session as valid.
* What I call 'hard mode' just kills the I/O layer and frees the SSL*.
Sessions are marked as invalid because OpenSSL rightly assumes that
something went wrong and the session cannot be reused safely.

We need an API to perform some sort of shutdown on __exit__(None, None,
None) / close().
msg310407 - (view) Author: Nathaniel Smith (njs) * (Python committer) Date: 2018-01-22 09:41
@Christian: I'm surprised and somewhat dismayed to hear that OpenSSL invalidates sessions on hard close -- that hasn't been part of the spec since 2006 when TLS 1.1 came out. I'm not a cryptographer, but the spec explicitly allows keeping the session, and I can't think of any particular reason why a network closure should imply that the secret material associated with the session has been compromised.

FWIW trio currently implements bidirectional mode (await ssl_stream.unwrap()),  unidirectional mode (default for await ssl_stream.aclose()), and what-I-thought-was-quiet-but-apparently-is-hard (await ssl_stream.aclose() if the stream has the https_compatible=True flag set, or if a call to aclose() times out) [1]. I guess I should make that last one soft [2]. You actually can do all these things with the current ssl module, but it's extremely tricky and confusing. In particular, the way you do 'unidirectional' is to put the socket in non-blocking mode or use SSLObject and then call unwrap(), and when you get SSLWantReadError you're done, you can close the socket.
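
The 'unidirectional via non-blocking unwrap()' trick described above could be sketched roughly like this (a hypothetical helper built on the current ssl module API; real code would need to handle partial writes and timeouts more carefully):

```python
import ssl

def unidirectional_close(ssl_sock):
    """Send our close_notify but don't wait for the peer's reply:
    in non-blocking mode, unwrap() raises SSLWantReadError once our
    close_notify is out and it would have to wait for the peer's."""
    ssl_sock.settimeout(0.0)          # switch the socket to non-blocking
    try:
        ssl_sock.unwrap()             # start the TLS shutdown handshake
    except ssl.SSLWantReadError:
        pass                          # our close_notify is sent; stop here
    except OSError:
        pass                          # peer already gone; nothing to do
    finally:
        ssl_sock.close()
```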

You might also be amused by this comment I left Cory in June and only just managed to track down again: (and the rest of the thread too, but that comment specifically gets into shutdown semantics)

msg310410 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2018-01-22 10:17
You have to tell OpenSSL that a hard-close is expected and fine. If you just SSL_free() the SSL connection, then OpenSSL removes the session from the SSL_CTX's session cache. It assumes that something went wrong and defaults to secure defaults. "Wrong" doesn't necessarily mean that an attacker has compromised a connection.

In order to flag a session as valid, you have to call SSL_set_quiet_shutdown(ssl, 1), SSL_shutdown(ssl), SSL_free(ssl). With quiet shutdown set, SSL_shutdown() neither sends nor waits for any data.

One-way shutdown with non-blocking trick is evil. Cool hack :)

SSLSocket and SSLObject should really support different shutdown modes, e.g. s.shutdown(mode=0) for quiet, mode=1 for unidirectional, and mode=2 for bidirectional.
msg310413 - (view) Author: Nathaniel Smith (njs) * (Python committer) Date: 2018-01-22 10:25
It doesn't help that Python spells SSL_shutdown as "unwrap".

I think in practice basically the only two things you want are unidirectional shutdown, or "soft" shutdown. Bidirectional shutdown has some extremely theoretical uses, and we have to provide it anyway because it's in the API already. I don't know of any reason to ever use "hard" shutdown, though I'd be interested to hear of them.
msg310415 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2018-01-22 10:46
I don't know why Cheryl's PR wasn't added to this issue:
msg310418 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2018-01-22 11:38
Perhaps a hard close is the right thing when SSLSocket.__exit__ encounters an exception?
msg310419 - (view) Author: Nathaniel Smith (njs) * (Python committer) Date: 2018-01-22 11:42
Why would an exception inside 'with ssl_sock' have anything to do with the validity of the session shared secret?

I mean, maybe it does, but I *really* don't think we should be waving our hands and guessing about this stuff.
msg310420 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2018-01-22 12:06
For example an invalid host name should invalidate the session until #31399 is resolved. Any TLS protocol violation should also invalidate the session. If somebody messes with the connection or the TLS protocol encounters a problem during MAC validation, the connection must be considered as tainted.

Some exceptions may be fine. IMO it's still safer to hard-close the connection on any exception.

I agree with you. Let's not guess and ask some experts. I'm having meetings with security engineers from GnuTLS and NSS next week. I'll ask them.
Date User Action Args
2022-04-11 14:58:35  admin             set     github: 72002
2021-04-06 04:26:08  jab               set     nosy: + jab
2018-01-22 12:06:35  christian.heimes  set     messages: + msg310420
2018-01-22 11:42:10  njs               set     messages: + msg310419
2018-01-22 11:38:31  christian.heimes  set     messages: + msg310418
2018-01-22 10:46:18  vstinner          set     stage: patch review; messages: + msg310415; pull_requests: + pull_request5113
2018-01-22 10:25:07  njs               set     messages: + msg310413
2018-01-22 10:17:13  christian.heimes  set     messages: + msg310410
2018-01-22 09:41:50  njs               set     messages: + msg310407
2018-01-22 08:49:15  christian.heimes  set     messages: + msg310403
2018-01-22 01:43:01  njs               set     nosy: + njs; messages: + msg310390
2018-01-21 23:50:00  cheryl.sabella    set     nosy: + cheryl.sabella; messages: + msg310389
2017-09-12 21:37:08  martin.panter     set     messages: + msg301996
2017-09-08 22:51:53  alex              set     messages: + msg301752
2017-09-08 22:49:08  vstinner          set     messages: + msg301751
2017-09-08 18:13:22  christian.heimes  set     assignee: christian.heimes -> ; messages: + msg301711; nosy: + vstinner, alex
2016-10-15 23:30:19  martin.panter     set     files: + ragged-eofs.v2.patch; messages: + msg278737
2016-09-22 09:24:50  martin.panter     set     files: + ragged-eofs.patch; keywords: + patch; messages: + msg277212; versions: + Python 3.7, - Python 3.6
2016-09-15 08:01:10  christian.heimes  set     assignee: christian.heimes; components: + SSL; nosy: + christian.heimes
2016-08-20 13:19:11  martin.panter     create