This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: Add API to intercept socket.close()
Type: enhancement
Stage: resolved
Components: asyncio, IO
Versions: Python 3.7

process
Status: closed
Resolution: rejected
Dependencies:
Superseder:
Assigned To:
Nosy List: Ben.Darnell, asvetlov, gvanrossum, pitrou, vstinner, yselivanov
Priority: normal
Keywords:

Created on 2017-11-15 19:00 by yselivanov, last changed 2022-04-11 14:58 by admin. This issue is now closed.

Messages (21)
msg306298 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2017-11-15 19:00
asyncio has a few APIs that accept a Python socket object: loop.sock_recv(), loop.sock_accept() etc.

asyncio (and event loops such as uvloop) expects to control the socket for the duration of the call.  However, it cannot prevent the socket object from being closed *while* it is working with it:

    loop = asyncio.get_event_loop()
    sock = socket.socket()
    sock.connect(('google.com', 80))
    sock.setblocking(0)
    f = loop.sock_recv(sock, 100)
    sock.close()
    await f

The above snippet makes asyncio try to read from 'sock' forever (a resource leak).  uvloop simply segfaults (at least on Linux), because libuv assumes that sockets can't be closed while it is working with them.

Same problem exists when a user calls `loop.create_server(sock=sock)` and later manually calls `sock.close()`.

My proposal is to add a new API: `socket.register_close(callback)`.  `callback` will be called whenever a socket object is about to be closed.

This will enable asyncio and uvloop to detect when sockets they are working with are about to be closed and to raise an error, log a debug line, and/or stop polling the socket.
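
A rough sketch of the intended semantics (emulated with a subclass here, since no such hook exists on socket.socket today):

    import socket

    class HookableSocket(socket.socket):
        # Emulates the proposed hook: run callbacks right before close().
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self._close_callbacks = []

        def register_close(self, callback):
            self._close_callbacks.append(callback)

        def close(self):
            for cb in self._close_callbacks:
                cb(self)  # e.g. the event loop stops polling the fd here
            super().close()

    sock = HookableSocket()
    sock.register_close(lambda s: print("about to close fd", s.fileno()))
    sock.close()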

See also: https://github.com/MagicStack/uvloop/issues/100
msg306299 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2017-11-15 19:01
Why not instead add `loop.sock_close()`?
msg306300 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2017-11-15 19:04
> Why not instead add `loop.sock_close()`?

It wouldn't help if the program calls `socket.close()` somewhere.  This can happen easily in big code bases.

Ideally (at least I think so) we need to guarantee that the socket object can't be closed at all (or is closed in a controlled manner) while it's being operated on by asyncio or another event loop.
msg306301 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2017-11-15 19:18
Ok, that sounds reasonable to me.  Also cc'ing Ben Darnell (maintainer of Tornado) in case he wants to chime in.
msg306335 - (view) Author: Ben Darnell (Ben.Darnell) * Date: 2017-11-16 04:14
It's worse than a resource leak - the same file descriptor number could be reused for a different file/socket, and then depending on the selector in use, you could see the data from a completely different connection. 

I did see a bug like this years ago (in libcurl), although it's not a common problem. I'd use the proposed hook if it existed, but it seems like an intrusive solution to a rare issue.
msg306368 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2017-11-16 16:18
> It's worse than a resource leak - the same file descriptor number could be reused for a different file/socket, and then depending on the selector in use, you could see the data from a completely different connection. 

I actually debugged a bug like this in asyncio code once.  Took me quite a bit of time to figure it out.

> I did see a bug like this years ago (in libcurl), although it's not a common problem. I'd use the proposed hook if it existed, but it seems like an intrusive solution to a rare issue.

I don't think the proposed solution is too intrusive.  If we don't like the "set a callback to intercept all socket.close()" idea, we can change it to: "add socket.add_close_callback() method to the socket object."
msg306370 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-11-16 16:24
The socket.socket (Python type) has a private _io_refs counter. close() does not really close the socket until _io_refs reaches zero.

Maybe we can develop an API to temporarily increase the _io_refs counter to prevent a real close?

Pseudo-code:

import contextlib

@contextlib.contextmanager
def dont_close_socket(sock):
    # Hold an extra io-ref so close() defers the real close.
    assert not sock._closed
    sock._io_refs += 1
    try:
        yield sock
    finally:
        sock._io_refs -= 1

It may be a context manager or a new object. It would probably be better to put it in the socket.socket type, since it uses private attributes.
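
For example (a usage sketch, assuming the helper above and a connected, non-blocking socket):

    async def guarded_recv(loop, sock, nbytes):
        # While the extra io-ref is held, a concurrent sock.close()
        # is deferred rather than yanking the fd out from under the loop.
        with dont_close_socket(sock):
            return await loop.sock_recv(sock, nbytes)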
msg306372 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2017-11-16 16:34
> The socket.socket (Python type) has a private _io_refs counter. close() does nothing until _io_refs reachs zero.

I didn't know about this.  Looks like I can use '_io_refs' to fix some problems in uvloop, thanks Victor!

The only problem with '_io_refs' and `dont_close_socket` approach is that the original intent to close the socket is "lost".  Consider this example:

  fut = loop.sock_recv(sock, 100)
  sock.close()
  await fut

Ideally, I'd expect `sock` to be closed right after `await fut` completes.  One way to make that possible is to initialize the socket object with 'self._io_refs = 1', not 0.  Whenever it drops to 0, the socket gets closed.  This is a slight change to the existing machinery, but it would make sock.close() more predictable.
msg306373 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-11-16 16:38
From my example: "finally: sock._io_refs -= 1"

Oh, sock._decref_socketios() must be used here to really close the socket if it was closed in the meantime:

    def _decref_socketios(self):
        if self._io_refs > 0:
            self._io_refs -= 1
        if self._closed:
            self.close()
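
Combined with the previous message, the helper would then look roughly like this (still a sketch built on private socket.socket internals):

    import contextlib

    @contextlib.contextmanager
    def dont_close_socket(sock):
        assert not sock._closed
        sock._io_refs += 1
        try:
            yield sock
        finally:
            # Drops the extra io-ref and really closes the socket
            # if close() was called in the meantime.
            sock._decref_socketios()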
msg306375 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2017-11-16 16:42
> Oh, sock._decref_socketios() must be used here to really close the socket if it was closed in the meantime:

There's also the 'sock._closed' attribute that sock.close() and sock._decref_socketios() interact with.  Alright, let me play with this in uvloop.  So far it looks like we don't need any new APIs :)
msg306378 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2017-11-16 16:50
Hmm... _decref_socketios() is not really a public API.  I think it would be nice to have an official way to deal with this, and a socket close callback sounds reasonable to me.
msg306379 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-11-16 16:51
> Hmm... _decref_socketios() is not really a public API.  I think it would be nice to have an official way to deal with this, and a socket close callback sounds reasonable to me.

I dislike the idea of an event when a socket is closed. It doesn't prevent a crash if you are using the socket with the GIL released.

I prefer to "hold" the socket while using it.
msg306380 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2017-11-16 16:52
> Hmm... _decref_socketios() is not really a public API.  I think it would be nice to have an official way to deal with this, and a socket close callback sounds reasonable to me.


We can just make it public (after some renames) :)  My first idea was to add a refcounting API to sockets; I just didn't know it was already there (I assumed that socketmodule.c implements the actual 'socket' object and never even looked at the socket.py module).

I'll experiment with this stuff in asyncio/uvloop and report back to this issue.
msg306383 - (view) Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2017-11-16 17:04
Perhaps you can just dup() the socket? That's what the ref counter is for
IIRC.
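
For reference, the dup() approach in a few lines (socket.dup() returns a new socket object around a duplicated file descriptor):

    import socket

    sock = socket.socket()
    own = sock.dup()   # independent fd referring to the same endpoint
    sock.close()       # the caller may now close its object freely
    own.close()        # the holder closes its private copy when done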
msg306384 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-11-16 17:05
> Perhaps you can just dup() the socket? That's what the ref counter is for IIRC.

Sure. That's another option.

But for asyncio with a lot of concurrent users, I'm not sure that it's ok to create a temporary file descriptor, since there is a risk of hitting the maximum number of open file descriptors :-(

Depending on the platform and the event loop implementation, the limit may be low.
msg306385 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2017-11-16 17:09
> Perhaps you can just dup() the socket? That's what the ref counter is for IIRC.

I already dup sockets for the loop.create_server(sock=sock) and loop.create_connection(sock=sock) cases.  Duping for sock_recv & friends would add too much overhead (they are relatively short operations).  Another argument against duping is that the number of open FDs is a limited OS resource.

So far Victor's idea of using '_io_refs' sounds like a perfect fit for both asyncio and uvloop.  After all, asyncio's use case is very similar to `socket.makefile()` -- keep the socket object alive while another API relies on it.
msg306386 - (view) Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2017-11-16 17:10
OK, then maybe socket objects should get a dup() method that just
increments the ref counter, and that would be the public API you're looking
for?
msg306388 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2017-11-16 17:17
> OK, then maybe socket objects should get a dup() method that just
> increments the ref counter, and that would be the public API you're looking
> for?

"dup()" returns a new socket object, but here we only want to protect the original one.

Maybe just 'sock.inc_io_ref()' and 'sock.dec_io_ref()'?  OTOH using the current private socket API in asyncio and uvloop doesn't really scare me as long as it works, so keeping the status quo is also OK.
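
As a sketch, the names floated here would be thin public wrappers over the existing private counter (hypothetical; they were never added):

    import socket

    class RefCountedSocket(socket.socket):
        # Hypothetical public spelling of the private _io_refs machinery.
        def inc_io_ref(self):
            self._io_refs += 1

        def dec_io_ref(self):
            # Really closes the socket if close() was deferred meanwhile.
            self._decref_socketios()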
msg306391 - (view) Author: Ben Darnell (Ben.Darnell) * Date: 2017-11-16 17:51
Note that a guard on socket objects can only solve part of the problem: in the case where I've seen this bug, it was with raw file descriptors from libcurl, and there's nothing Python can do about that because there are no (Python) socket objects.
msg306392 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2017-11-16 18:02
> Note that a guard on socket objects can only solve part of the problem: in the case where I've seen this bug, it was with raw file descriptors from libcurl, and there's nothing Python can do about that because there are no (Python) socket objects.

Well, in this case 'dup' is the only option.
msg317910 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2018-05-28 19:58
Closing this one. Using private socket.socket APIs works fine for uvloop/asyncio.
History
Date                 User         Action  Args
2022-04-11 14:58:54  admin        set     github: 76219
2018-05-28 19:58:25  yselivanov   set     status: open -> closed; type: enhancement; messages: + msg317910; resolution: rejected; stage: resolved
2017-11-16 18:02:25  yselivanov   set     messages: + msg306392
2017-11-16 17:51:36  Ben.Darnell  set     messages: + msg306391
2017-11-16 17:17:11  yselivanov   set     messages: + msg306388
2017-11-16 17:10:05  gvanrossum   set     messages: + msg306386
2017-11-16 17:09:27  yselivanov   set     messages: + msg306385
2017-11-16 17:05:36  vstinner     set     messages: + msg306384
2017-11-16 17:04:15  gvanrossum   set     messages: + msg306383
2017-11-16 16:52:53  yselivanov   set     messages: + msg306380
2017-11-16 16:51:43  vstinner     set     messages: + msg306379
2017-11-16 16:50:41  pitrou       set     messages: + msg306378
2017-11-16 16:42:07  yselivanov   set     messages: + msg306375
2017-11-16 16:41:12  yselivanov   set     messages: - msg306374
2017-11-16 16:40:25  yselivanov   set     messages: + msg306374
2017-11-16 16:38:50  vstinner     set     messages: + msg306373
2017-11-16 16:34:58  yselivanov   set     messages: + msg306372
2017-11-16 16:24:40  vstinner     set     nosy: + vstinner; messages: + msg306370
2017-11-16 16:18:01  yselivanov   set     messages: + msg306368
2017-11-16 04:14:47  Ben.Darnell  set     messages: + msg306335
2017-11-15 19:18:11  pitrou       set     nosy: + Ben.Darnell; messages: + msg306301
2017-11-15 19:04:38  yselivanov   set     messages: + msg306300
2017-11-15 19:01:35  pitrou       set     messages: + msg306299
2017-11-15 19:00:27  yselivanov   create