
Author njs
Recipients gregory.p.smith, ncoghlan, njs, pitrou, yselivanov
Date 2017-09-08.01:54:45
Content
Note also that adding a check to ceval is not sufficient -- there are also lots of calls to PyErr_CheckSignals scattered around the tree, e.g. next to every syscall that can return EINTR.
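
As a rough illustration of that point (a self-contained, Unix-only sketch, since it relies on signal.alarm; the SIGALRM handler just mimics the default SIGINT behaviour): a signal delivered while the main thread is blocked in time.sleep() surfaces from inside the C wrapper's own PyErr_CheckSignals call, not from a check in the eval loop.

import signal
import time

# Mimic the default SIGINT behaviour with SIGALRM so the demo needs no
# manual Ctrl-C: the handler raises KeyboardInterrupt.
def handler(signum, frame):
    raise KeyboardInterrupt

signal.signal(signal.SIGALRM, handler)
signal.alarm(1)  # deliver SIGALRM in ~1 second

try:
    # sleep() is interrupted by the signal (EINTR); the C implementation
    # calls PyErr_CheckSignals(), which runs the handler above, so the
    # exception propagates from inside this call rather than from a
    # check in the eval loop.
    time.sleep(60)
except KeyboardInterrupt:
    print("interrupted inside time.sleep(), not at a bytecode boundary")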

I've considered proposing something like this in the past, since it feels weird that currently "control-C actually works" is some proprietary advantage of trio instead of something that the interpreter supports. I haven't because I couldn't figure out how to do it in a clean way... there are some subtleties. But then I started to write a comment here saying why it's impossible, and realized maybe it's not so bad after all :-).

The main issue I was worried about is a tricky potential race condition in event loops: you want the core loop to have protection enabled in general, but you need to receive signals while the loop is blocked waiting for I/O, BUT you can't actually run the normal Python handlers at that point, because if they raise an exception you'll lose the I/O state.

The solution is to combine set_wakeup_fd, which handles the wakeup, with an explicit run-signal-handlers primitive that collects the exception in a controlled manner:

set_wakeup_fd(wakeup_fd)         # signal.set_wakeup_fd(): signals write here and wake the loop
block_for_io([wakeup_fd, ...])   # the loop's usual select()/epoll call, with wakeup_fd included
try:
    explicitly_run_signal_handlers_even_though_they_are_deferred()
except KeyboardInterrupt:
    # arrange to deliver KeyboardInterrupt to some victim task
    ...

So we should have a primitive like this.
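
For concreteness, here is a minimal sketch of what that pattern looks like with today's APIs (signal.set_wakeup_fd plus a selector). The run_deferred_signal_handlers() call is a stand-in for the proposed primitive; nothing like it exists yet.

import selectors
import signal
import socket

# Wakeup plumbing: the C-level signal handler writes the signal number
# to wakeup_w, which makes wakeup_r readable and wakes the selector.
wakeup_r, wakeup_w = socket.socketpair()
wakeup_r.setblocking(False)
wakeup_w.setblocking(False)
signal.set_wakeup_fd(wakeup_w.fileno())

sel = selectors.DefaultSelector()
sel.register(wakeup_r, selectors.EVENT_READ)
# ... register the loop's real I/O sources here ...

while True:
    for key, _ in sel.select():
        if key.fileobj is wakeup_r:
            wakeup_r.recv(4096)  # drain the pending signal-number bytes
            try:
                run_deferred_signal_handlers()  # the hypothetical primitive
            except KeyboardInterrupt:
                # arrange to deliver KeyboardInterrupt to some victim task
                ...
        else:
            ...  # handle a real I/O event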

The other issue is how to store the "are we deferring signals?" state. The main feature we want is for it to automatically vary with the execution context -- in particular, naively sticking it into a thread-local won't work, because if you use send/throw/next to switch contexts you want that to switch the deferred state as well.

I started writing some complicated scheme here involving new attributes on frames and functions, and rules for initializing one from the other or inheriting it from a parent frame, and then I realized there were some complications around generators... and actually I was just reinventing the same machinery we need for exc_info and PEP 550. So probably the answer is just "use a PEP 550 context var". But it will need a bit of special magic: we need to make it possible to atomically set this state to a particular value when invoking a function, and then restore it when exiting that function.

I'm imagining something like, you write:

@interrupts_deferred(state)
def __exit__(*args):
    ...

and it sets some magic flag on the function object that makes it act as if you wrote:

def __exit__(*args):
    with interrupts_deferred_cvar.assign(state):
        ...

except that the assignment happens atomically WRT interrupts.
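
For comparison, here is a rough pure-Python approximation using the contextvars module that eventually shipped (PEP 567, the successor to the PEP 550 API discussed here); the names are just illustrative. The gap it cannot close is exactly the one above: the set()/reset() calls are ordinary bytecode, so a signal can still slip in between entering the function and flipping the flag.

import contextvars
import functools

interrupts_deferred_cvar = contextvars.ContextVar(
    "interrupts_deferred", default=False)

def interrupts_deferred(state):
    """Approximation of the proposed decorator -- NOT atomic WRT signals."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            token = interrupts_deferred_cvar.set(state)
            try:
                return func(*args, **kwargs)
            finally:
                interrupts_deferred_cvar.reset(token)
        return wrapper
    return decorator

@interrupts_deferred(True)
def __exit__(*args):
    ...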

So yeah, I think the necessary and sufficient features are:
- flag stored in a cvar
- check it from PyErr_CheckSignals and ceval.c
- some way to explicitly trigger any deferred processing
- some way to atomically assign the cvar value when entering a function