classification
Title: asyncio: support fork
Type: Stage:
Components: asyncio Versions: Python 3.6
Status: open Resolution:
Dependencies: Superseder:
Assigned To: yselivanov Nosy List: Adam.Bishop, Christian H, christian.heimes, gvanrossum, haypo, martius, neologix, yselivanov
Priority: normal Keywords: patch

Created on 2014-07-17 16:10 by haypo, last changed 2015-12-23 00:22 by Adam.Bishop.

Files
File name Uploaded Description Edit
test2.py martius, 2014-11-28 16:51
close_self_pipe_after_selector.patch martius, 2014-12-08 11:20
at_fork.patch haypo, 2015-02-04 22:16
at_fork-2.patch haypo, 2015-02-05 11:03 review
at_fork-3.patch martius, 2015-02-17 21:30
Messages (43)
msg223344 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2014-07-17 16:10
It looks like asyncio does not support fork() currently: on fork(), the event loops of the parent and the child process share the same "self pipe".

Example:
---
import asyncio, os
loop = asyncio.get_event_loop()
pid = os.fork()
if pid:
    print("parent", loop._csock.fileno(), loop._ssock.fileno())
else:
    print("child", loop._csock.fileno(), loop._ssock.fileno())
---

Output:
---
parent 6 5
child 6 5
---

I'm not sure that it's relevant to use asyncio with fork, but I wanted to open an issue at least as a reminder.

I found the "self pipe" + fork issue while reading issue #12060, as I was working on a fix for the signal handling race condition on FreeBSD: issue #21645.
msg223431 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2014-07-18 20:37
Oh, I wanted to use the atfork module for that, but then I saw that it does not exist :-(
msg225973 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2014-08-27 11:02
A first step would be to document the issue in the "developer" section of asyncio documentation. Mention that the event loop should be closed and a new event loop should be created.
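A sketch of what such a documentation note could recommend (illustrative only, not from any attached patch): the child creates and uses a brand-new loop rather than the one inherited from the parent. Note that safely disposing of the *inherited* loop in the child is exactly the open problem of this issue, so this sketch simply leaves it alone.

```python
import asyncio
import os

parent_loop = asyncio.new_event_loop()   # a loop exists before the fork

pid = os.fork()
if pid == 0:
    # Child: do not reuse the inherited loop; create and use a fresh one.
    child_loop = asyncio.new_event_loop()
    child_loop.run_until_complete(asyncio.sleep(0))
    child_loop.close()
    os._exit(0)

_, status = os.waitpid(pid, 0)
parent_loop.run_until_complete(asyncio.sleep(0))  # parent loop still works
parent_loop.close()
```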
msg226861 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2014-09-14 04:31
It's not that it doesn't work after fork, right? Should we add a recipe with pid monitoring and self-pipe re-initialization?
msg226889 - (view) Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2014-09-14 23:45
Actually I expect that if you share an event loop across different processes via fork, everything's a mess -- whenever a FD becomes ready, won't both the parent and the child be woken up?  Then both would attempt to read from it.  One would probably get EWOULDBLOCK (assuming all FDs are actually in non-blocking mode) but it would still be a mess.  The specific mess for the self-pipe would be that the race condition it's intended to solve might come back.

It's possible that some polling syscall might have some kind of protection against forking, but the Python data structures that map FDs to handlers don't know that, so it would still be a mess.

Pretty much the only thing you should expect to be able to do safely after forking is closing the event loop -- and I'm not even 100% sure that that's safe (I don't know what happens to a forked executor).

Is there a use case for sharing an event loop across forking?
msg226892 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2014-09-15 01:28
> Is there a use case for sharing an event loop across forking?

I don't think so.  I use forking mainly for the following two use-cases:

- Socket sharing for web servers. Solution: if you want to have shared sockets between multiple child processes, just open them in the master process, fork as many times as you want, and start event loops in the child processes only.

- Spawning a child process "on demand". Solution: fork before any loop is running, and use the child process as a "template", i.e. when you need a new child process, just ask the first child to fork it.
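The first pattern can be sketched without asyncio at all (names are illustrative): open the listening socket in the master, fork, and only afterwards would each child create its own event loop, so no loop state is shared across the fork.

```python
import os
import socket

# Open the shared socket in the master, BEFORE any event loop exists.
server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 0))
server.listen()

pid = os.fork()
if pid == 0:
    # Child inherits the listening socket; this is where it would create
    # its own loop (asyncio.new_event_loop()) and serve on the socket.
    assert server.fileno() >= 0
    os._exit(0)

_, status = os.waitpid(pid, 0)
server.close()
```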

It would certainly be handy to have an ability to fork while the loop is running and safely close (and reopen) it in the forked child process. But now I can see that it's a non-trivial thing to do properly. Probably it's ~somewhat~ safe to re-initialize epoll (or whatever selector we use), re-open self-pipe etc, drop all queued callbacks and clear Task.all_tasks collection, but even then it's easy to miss something.

I think we just need to make sure that we have documented that asyncio loops are not fork safe, and forks in running loops should be avoided by all means.
msg226899 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2014-09-15 07:15
> Is there a use case for sharing an event loop across forking?

I don't know if asyncio must have a builtin support for this use case. The minimum is to document the behaviour, or maybe even suggest a recipe to support it.

For example, an event loop of asyncio is not thread-safe and we don't want to support this use case. But I wrote a short documentation with code snippets to show how to workaround this issue:
https://docs.python.org/dev/library/asyncio-dev.html#concurrency-and-multithreading

We need a similar section to explain how to use asyncio with the os.fork() function and the multiprocessing module.
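For reference, the multithreading section linked above boils down to letting only the loop's own thread run the loop, while other threads hand work over with call_soon_threadsafe(); a condensed sketch:

```python
import asyncio
import threading

# Only this (main) thread runs the loop; other threads submit callbacks
# with call_soon_threadsafe(), which wakes the loop via its self-pipe.
loop = asyncio.new_event_loop()
results = []

def from_other_thread():
    loop.call_soon_threadsafe(results.append, "scheduled from thread")
    loop.call_soon_threadsafe(loop.stop)

t = threading.Thread(target=from_other_thread)
t.start()
loop.run_forever()        # processes the queued callbacks, then stops
t.join()
loop.close()
```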
msg226932 - (view) Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2014-09-15 18:09
That sounds about right -- it's a doc issue. Let me propose a paragraph:

"""
NOTE: It is not safe to share an asyncio event loop between processes that are related by os.fork().  If an event loop exists in a process, and that process is forked, the only safe operation on the loop in the child process is to call its close() method.
"""

(I don't want to have to research what the various polling primitives do on fork(), so I don't want to suggest that it's okay to close the loop in the parent and use it in the child.)

A similar note should probably be added to the docs for the selectors module.
msg231819 - (view) Author: Martin Richard (martius) * Date: 2014-11-28 16:51
Hi,

Actually, closing and creating a new loop in the child doesn't work either, at least on Linux.

When, in the child, we call loop.close(), it performs:
self.remove_reader(self._ssock)
(in selector_events.py, _close_self_pipe() around line 85)

Both the parent and the child still refer to the same underlying epoll structure, at that moment, and calling remove_reader() has an effect on the parent process too (which will never watch the self-pipe again).

I attached a test case that demonstrates the issue (and the workaround, commented).
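The effect Martin describes can be reproduced without asyncio at all; this Linux-only sketch shows that epoll_ctl operations in the child affect the parent, because the epoll set is shared across fork():

```python
import os
import select

# Linux-only: an epoll object is shared across fork(), so the child's
# unregister() (EPOLL_CTL_DEL) also stops the parent from seeing events.
r, w = os.pipe()
ep = select.epoll()
ep.register(r, select.EPOLLIN)
os.write(w, b"x")
assert ep.poll(0)          # parent sees r as readable

pid = os.fork()
if pid == 0:
    ep.unregister(r)       # modifies the epoll set shared with the parent
    os._exit(0)
os.waitpid(pid, 0)
print(ep.poll(0))          # parent no longer sees the event: []
```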
msg231845 - (view) Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2014-11-29 01:12
Martin, what is the magic call to make in the child (or in the parent pre-fork???) to disable the epoll object in the child without disabling it in the parent?

(Perhaps just closing the selector and letting the unregister calls fail would work?)
msg231919 - (view) Author: Martin Richard (martius) * Date: 2014-12-01 09:23
Guido,

Currently in my program, I manually remove and then re-add the reader to the loop in the parent process right after the fork(). I also considered a dirty monkey-patching of remove_reader() and remove_writer() which would act as the original versions but without removing the fds from the epoll object (ensuring I don't get bitten by the same behavior for another fd).

The easiest fix, I think, is indeed to close the selector without unregistering the fds, but I don't know if doing so would have undesired side effects on platforms other than Linux (a resource leak, or the close call failing, maybe).
msg231921 - (view) Author: Martin Richard (martius) * Date: 2014-12-01 09:29
I said something wrong in my previous comment: removing and re-adding the reader callback right after the fork() is obviously subject to a race condition.

I'll go for the monkey patching.
msg231941 - (view) Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2014-12-01 15:35
How about not doing anything in the parent but, in the child, closing the selector and then the event loop?

msg232301 - (view) Author: Martin Richard (martius) * Date: 2014-12-08 11:20
Currently, this is what I do in the child after the fork:

>>> selector = loop._selector
>>> parent_class = selector.__class__.__bases__[0]
>>> selector.unregister = lambda fd: parent_class.unregister(selector, fd)

It replaces unregister() by _BaseSelectorImpl.unregister(), so "our" data structures are still cleaned (the dict _fd_to_key, for instance).

If a fix for this issue is desired in tulip, the first solution proposed by Guido (closing the selector and letting the unregister call fail, see the -trivial- patch attached) is probably good enough.
msg232308 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2014-12-08 15:47
I suggest splitting this issue: create a new issue focused on selectors.EpollSelector, which doesn't behave well with forking. If I understood correctly, you can work around this specific issue by forcing the selector to selectors.SelectSelector, for example, right?
msg235409 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2015-02-04 22:16
Attached at_fork.patch: detect fork and handle fork.

* Add _at_fork() method to asyncio.BaseEventLoop
* Add _detect_fork() method to asyncio.BaseEventLoop
* Add _at_fork() method to selectors.BaseSelector

I tried to minimize the number of calls to _detect_fork(): only when the self-pipe or the selector is used.

I only tried test2.py. More tests using two processes running two event loops should be done, and non-regression tests should be written.

The issue #22087 (multiprocessing) looks like a duplicate of this issue.
msg235410 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2015-02-04 22:18
close_self_pipe_after_selector.patch only fixes test2.py; it doesn't fix the general case: running the "same" event loop in two different processes.
msg235429 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2015-02-05 11:03
Updated patch:
- drop PollSelector._at_fork(): PollSelector is not shared with the parent process
- _at_fork() of BaseEventLoop and SelectorEventLoop now do nothing by default: only _UnixSelectorEventLoop._at_fork() handles the fork; nothing is needed on Windows
- patch written for Python 3.5, not for Tulip (different directories)
msg236065 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2015-02-15 20:16
Can someone review at_fork-2.patch?

Martin: Can you please try my patch?
msg236131 - (view) Author: Martin Richard (martius) * Date: 2015-02-17 11:38
I read the patch, it looks good to me for python 3.5. It will (obviously) not work with python 3.4 since self._selector won't have an _at_fork() method.

I ran the tests on my project with python 3.5a1 and the patch, it seems to work as expected: ie. when I close the loop of the parent process in the child, it does not affect the parent.

I don't have a case where the loop of the parent is still used in the child though.
msg236134 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2015-02-17 14:12
> It will (obviously) not work with python 3.4 since self._selector won't have an _at_fork() method.

asyncio doc contains:
"The asyncio package has been included in the standard library on a provisional basis. Backwards incompatible changes (up to and including removal of the module) may occur if deemed necessary by the core developers."

It's not the case for selectors. Even if it would be possible to implement selector._at_fork() in asyncio, it would make more sense to implement it in the selectors module.

@neologix: Would you be ok to add a *private* _at_fork() method to selectors classes in Python 3.4 to fix this issue?

I know that you are not a fan of fork, me neither, but users like to do crazy things with fork and then report bugs to asyncio :-)
msg236136 - (view) Author: Martin Richard (martius) * Date: 2015-02-17 15:10
In that case, I suggest a small addition to your patch that would do the trick:

in unix_events.py:
+    def _at_fork(self):
+        super()._at_fork()
+        self._selector._at_fork()
+        self._close_self_pipe()
+        self._make_self_pipe()
+

becomes:

+    def _at_fork(self):
+        super()._at_fork()
+        if not hasattr(self._selector, '_at_fork'):
+            return
+        self._selector._at_fork()
+        self._close_self_pipe()
+        self._make_self_pipe()
msg236145 - (view) Author: Charles-François Natali (neologix) * (Python committer) Date: 2015-02-17 19:16
> @neologix: Would you be ok to add a *private* _at_fork() method to selectors classes in Python 3.4 to fix this issue?

Not really: after fork(), you're hosed anyway:
"""

       Q6  Will closing a file descriptor cause it to be removed from
all epoll sets automatically?

       A6  Yes, but be aware of the following point.  A file
descriptor is a reference to an open file  description
           (see open(2)).  Whenever a descriptor is duplicated via
dup(2), dup2(2), fcntl(2) F_DUPFD, or fork(2), a
           new file descriptor referring to the same open file
description is created.  An  open  file  description
           continues  to  exist  until all file descriptors referring
to it have been closed.  A file descriptor is
           removed from an epoll set only after all the file
descriptors referring  to  the  underlying  open  file
           description  have  been  closed  (or  before  if the
descriptor is explicitly removed using epoll_ctl(2)
           EPOLL_CTL_DEL).  This means that even after a file
descriptor that is part of  an  epoll  set  has  been
           closed,  events may be reported for that file descriptor if
other file descriptors referring to the same
           underlying file description remain open.
"""

What would you do with the selector after fork(): register the FDs in
a new epoll, remove them?

There's no sensible default behavior, and I'd rather avoid polluting
the code for this.
If asyncio wants to support this, it can create a new selector and
re-register everything it wants manually: there's a Selector.get_map()
exposing all that's needed.
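Neologix's suggestion can be sketched as follows (clone_selector is a hypothetical helper, not part of the selectors module): build a fresh selector and re-register every (fileobj, events, data) triple exposed by get_map().

```python
import os
import selectors

def clone_selector(old):
    # Hypothetical helper: re-register everything from the old selector
    # into a brand-new one, using only the public get_map() API.
    new = selectors.DefaultSelector()
    for key in list(old.get_map().values()):
        new.register(key.fileobj, key.events, key.data)
    return new

r, w = os.pipe()
sel = selectors.DefaultSelector()
sel.register(r, selectors.EVENT_READ, data="reader")
fresh = clone_selector(sel)
print(fresh.get_map()[r].data)   # -> reader
```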
msg236149 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2015-02-17 21:27
2015-02-17 20:16 GMT+01:00 Charles-François Natali <report@bugs.python.org>:
> What would you do with the selector after fork(): register the FDs in
> a new epoll, remove them?

See the patch:

+        def _at_fork(self):
+            # don't unregister file descriptors: epoll is still shared with
+            # the parent process
+            self._epoll = select.epoll()
+            for key in self._fd_to_key.values():
+                self._register(key)

EpollSelector._at_fork() does nothing to the current epoll object; it
creates a new epoll object and registers all file descriptors again.

Hum, I should maybe close explicitly the old epoll object.

> There's no sensible default behavior, and I'd rrather avoid polluting
> the code for this.

What is wrong with the proposed patch?

> If asyncio wants to support this, it can create a new selector and
> re-register everything it wants manually: there's a Selector.get_map()
> exposing all that's needed.

If possible, I would prefer to implement "at fork" in the selectors
module directly, since the selectors module has a better knowledge of
selectors. For example, asyncio is not aware of the selector._epoll
attribute.
msg236150 - (view) Author: Martin Richard (martius) * Date: 2015-02-17 21:30
The goal of the patch is to create a duplicate selector (a new epoll() structure with the same watched fds as the original epoll). It allows fds watched in the child's loop to be removed without impacting the parent process.

Actually, it's true that with the current implementation of the selectors module (using get_map()), we can achieve the same result as with Victor's patch without touching the selectors module. I attached a patch doing that, also working with Python 3.4.

I thought about this at_fork() mechanism a bit more and I'm not sure what we want to achieve with it. In my opinion, most of the time we will want to recycle the loop in the child process (close it and create a new one), because we will not want the tasks and callbacks scheduled on the loop to run in both the parent and the child (it would probably result in double writes on sockets, or double reads, for instance).

With the current implementation of asyncio, I can't recycle the loop for a single reason: closing the loop calls _close_self_pipe(), which unregisters the pipe from the selector (hence breaking the loop in the parent). Since the self pipe is an object internal to the loop, I think it's safe to close the pipes without unregistering them from the selector. It is at least true with epoll() according to the documentation quoted by neologix, but I hope that we can expect it to be true on other Unix platforms too.
msg236151 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2015-02-17 21:38
How do other event loops handle fork? Twisted, Tornado, libuv, libev,
libevent, etc.
msg244111 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2015-05-26 16:20
> How do other event loops handle fork? Twisted, Tornado, libuv, libev,
libevent, etc.

It looks like using fork() while an event loop is running isn't recommended in any of the above.  If I understand the code correctly, libev & gevent reinitialize loops in the forked process (essentially, you have a new loop).

I think we have the following options:

1. Document that using fork() is not recommended.

2. Detect fork() and re-initialize event loop in the child process (cleaning-up callback queues, re-initializing selectors, creating new self-pipe).

3. Detect fork() and raise a RuntimeError.  Document that asyncio event loop does not support forking at all.

4. The most recent patch by Martin detects the fork() and reinitializes self-pipe and selector (although all FDs are kept in the new selector).  I'm not sure I understand this option.

I'm torn between 2 & 3.  Guido, Victor, Martin, what do you think?
msg244112 - (view) Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2015-05-26 16:49
I think only (3) is reasonable -- raise RuntimeError. There are too many use cases to consider and the behavior of the selectors seems to vary as well. Apps should ideally not fork with an event loop open; the only reasonable thing to do after a fork with an event loop open is to exec another binary (hopefully closing FDs using close-on-exec).

*Perhaps* it's possible to safely release some resources used by a loop after a fork but I'm skeptical even of that. Opportunistically closing the FDs used for the self-pipe and the selector seems fine (whatever is safe could be done the first time the loop is touched after the fork, just before raising RuntimeError).
msg244113 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2015-05-26 17:08
> I think only (3) is reasonable -- raise RuntimeError.

Just to be clear -- do we want to raise a RuntimeError in the parent, in the child, or both processes?
msg244114 - (view) Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2015-05-26 17:23
I was thinking only in the child. The parent should be able to continue to use the loop as if the fork didn't happen, right?
msg244116 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2015-05-26 17:51
> I was thinking only in the child. The parent should be able to continue to use the loop as if the fork didn't happen, right?

Yes, everything should be fine.

I'll rephrase my question: do you think there is a way (and need) to at least throw a warning in the master process that the fork has failed (without monkey patching os.fork() which is not an option)?
msg244117 - (view) Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2015-05-26 17:55
I don't understand. If the fork fails nothing changes right? I guess I'm missing some context or use case.
msg244118 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2015-05-26 18:01
> I don't understand. If the fork fails nothing changes right? I guess I'm missing some context or use case.

Maybe I'm wrong about this.  My line of thought is: a failed fork() call is a bug in the program.  Now, the master process will continue operating as it was, no warnings, no errors.  The child process will crash with a RuntimeError exception.  Will it be properly reported/logged?

I guess the forked child will share the stderr, so the exception won't pass completely unnoticed, right?
msg244119 - (view) Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2015-05-26 18:03
That's really the problem of the code that calls fork(), not directly of
the event loop. There are some very solid patterns around that (I've
written several in the distant past, and Unix hasn't changed that much :-).

msg244120 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2015-05-26 18:05
> That's really the problem of the code that calls fork(), not directly of
> the event loop. There are some very solid patterns around that (I've
> written several in the distant past, and Unix hasn't changed that much :-).

Alright ;)  I'll draft a patch sometime soon.
msg244121 - (view) Author: Martin Richard (martius) * Date: 2015-05-26 18:23
Hi,

My patch was a variation of haypo's patch. The goal was to duplicate the
loop and its internal objects (loop and self pipes) without changing its
state much from the outside (keeping callbacks and active tasks). I
wanted to be conservative with this patch, but it is not the option I
prefer.

I think that raising a RuntimeError in the child is fine, but may not be
enough:

Imho, saying "the loop can't be used anymore in the child" is fine, but "a
process in which an asyncio loop lives must not be forked" is too
restrictive (I'm not thinking of the fork+exec case, which is probably fine
anyway) because a library may rely on child processes, for instance.

Hence, we should allow a program to fork and eventually dispose of the
resources of the loop by calling loop.close() - or any other mechanism that
you see fit (clearing all references to the loop is tedious because of the
global default event loop and the cycles between futures/tasks and the
loop).

However, the normal loop.close() sequence will unregister all the fds
registered to the selector, which will impact the parent. Under Linux with
epoll, it's fine if we only close the selector.

I would therefore, in the child after a fork, close the loop without
breaking the selector state (closing without unregister()'ing fds), unset
the default loop so get_event_loop() would create a new loop, then raise
RuntimeError.

I can elaborate on the use case I care about, but in a nutshell, doing so
would allow spawning worker processes able to create their own loop without
requiring an idle "blank" child process that would be used as a base for
the workers. It adds the benefit, for instance, of allowing data to be
shared between the parent and the child, leveraging OS copy-on-write.
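The sequence proposed above could look roughly like this (fork_with_fresh_loop is a hypothetical helper; a real version would also dispose of the inherited loop's resources without unregistering fds, as discussed):

```python
import asyncio
import os

def fork_with_fresh_loop():
    # Hypothetical helper sketching the proposal: after fork(), the child
    # abandons the inherited loop and installs a brand-new one, instead of
    # running the normal loop.close() sequence (which would unregister fds
    # from the selector shared with the parent).
    pid = os.fork()
    child_loop = None
    if pid == 0:
        child_loop = asyncio.new_event_loop()
        asyncio.set_event_loop(child_loop)
    return pid, child_loop

parent_loop = asyncio.new_event_loop()
r, w = os.pipe()
pid, child_loop = fork_with_fresh_loop()
if pid == 0:
    # Child: its fresh loop works independently of the parent's.
    child_loop.run_until_complete(asyncio.sleep(0))
    os.write(w, b"ok")
    os._exit(0)
os.waitpid(pid, 0)
data = os.read(r, 2)                               # b"ok" from the child
parent_loop.run_until_complete(asyncio.sleep(0))   # parent loop unaffected
parent_loop.close()
```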

msg244123 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2015-05-26 18:40
> I would therefore, in the child after a fork, close the loop without 
> breaking the selector state (closing without unregister()'ing fds), unset 
> the default loop so get_event_loop() would create a new loop, then raise 
> RuntimeError. 
>
> I can elaborate on the use case I care about, but in a nutshell, doing so
> would allow to spawn worker processes able to create their own loop without
> requiring an idle "blank" child process that would be used as a base for
> the workers. It adds the benefit, for instance, of allowing to share data
> between the parent and the child leveraging OS copy-on-write.

The only solution to safely fork a process is to fix loop.close() to
check if it's called from a forked process and to close the loop in
a safe way (to avoid breaking the master process).  In this case
we don't even need to throw a RuntimeError.  But we won't have a 
chance to guarantee that all resources will be freed correctly (right?)

So the idea is (I guess it's the 5th option):

1. If the forked child doesn't call loop.close() immediately after
forking we raise RuntimeError on first loop operation.

2. If the forked child calls (explicitly) loop.close() -- it's fine, 
we just close it, the error won't be raised.  When we close, we only
close the selector (without unregistering or re-registering any FDs)
and clean up the callback queues without trying to close anything.

Guido, do you still think that raising a "RuntimeError" in a child
process in an unavoidable way is a better option?
msg244130 - (view) Author: Martin Richard (martius) * Date: 2015-05-26 19:28
> But we won't have a chance to guarantee that all resources will be
> freed correctly (right?)

If all the tasks are cancelled and loop's internal structures (callback
lists, tasks sets, etc) are cleared, I believe that the garbage collector
will eventually be able to dispose of everything.

However, it's indeed not enough: resources created by other parts of
asyncio may leak (transports, subprocess). For instance, I proposed to add
a "detach()" method for SubprocessTransport here:
http://bugs.python.org/issue23540 : in this case, I need to close stdin,
stdout, stderr pipes without killing the subprocess.

msg244135 - (view) Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2015-05-26 20:30
I don't actually know if the 5th option is possible. My strong requirement is that no matter what the child process does, the parent should still be able to continue using the loop. IMO it's better to leak a FD in the child than to close a resource owned by the parent. Within those constraints I'm okay with various solutions.
msg249722 - (view) Author: Larry Hastings (larry) * (Python committer) Date: 2015-09-04 07:06
Surely this is too late for 3.5?
msg249724 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2015-09-04 07:07
> Surely this is too late for 3.5?

I'm not 100% convinced that asyncio must support fork, so it's too late :-) Anyway, we don't care, asyncio will be under provisional status for one more cycle (3.5) :-p
msg249732 - (view) Author: Larry Hastings (larry) * (Python committer) Date: 2015-09-04 07:13
I've re-marked it as "normal" priority and moved it to 3.6.  Not my problem anymore!  :D
msg256889 - (view) Author: Adam Bishop (Adam.Bishop) Date: 2015-12-23 00:22
A note about this issue should really be added to the documentation - on OS X, it fails with the rather nonsensical "OSError: [Errno 9] Bad file descriptor", making this very hard to debug.

I don't have any specific requirement for fork support in asyncio as it's trivial to move loop creation after the fork, but having to run the interpreter through GDB to diagnose the problem is not a good state of affairs.
History
Date User Action Args
2015-12-23 00:22:53  Adam.Bishop  set  nosy: + Adam.Bishop
                                       messages: + msg256889
2015-10-27 14:53:22  Christian H  set  nosy: + Christian H
2015-09-04 07:13:50  larry  set  nosy: - larry
2015-09-04 07:13:43  larry  set  priority: deferred blocker -> normal
                                 versions: + Python 3.6, - Python 3.4, Python 3.5
                                 messages: + msg249732
                                 nosy: gvanrossum, haypo, larry, christian.heimes, neologix, yselivanov, martius
2015-09-04 07:07:36  haypo  set  messages: + msg249724
2015-09-04 07:06:22  larry  set  nosy: + larry
                                 messages: + msg249722
2015-05-26 20:30:26  gvanrossum  set  messages: + msg244135
2015-05-26 19:28:19  martius  set  messages: + msg244130
2015-05-26 18:40:52  yselivanov  set  messages: + msg244123
2015-05-26 18:23:46  martius  set  messages: + msg244121
2015-05-26 18:05:44  yselivanov  set  assignee: yselivanov
                                      messages: + msg244120
2015-05-26 18:03:16  gvanrossum  set  messages: + msg244119
2015-05-26 18:01:08  yselivanov  set  messages: + msg244118
2015-05-26 17:55:12  gvanrossum  set  messages: + msg244117
2015-05-26 17:51:24  yselivanov  set  messages: + msg244116
2015-05-26 17:27:25  christian.heimes  set  nosy: + christian.heimes
2015-05-26 17:23:26  gvanrossum  set  messages: + msg244114
2015-05-26 17:08:47  yselivanov  set  messages: + msg244113
2015-05-26 16:49:32  gvanrossum  set  messages: + msg244112
2015-05-26 16:20:12  yselivanov  set  messages: + msg244111
2015-05-26 03:29:03  yselivanov  set  priority: normal -> deferred blocker
2015-02-17 21:38:03  haypo  set  messages: + msg236151
2015-02-17 21:30:23  martius  set  files: + at_fork-3.patch
                                   messages: + msg236150
2015-02-17 21:27:13  haypo  set  messages: + msg236149
2015-02-17 19:16:32  neologix  set  messages: + msg236145
2015-02-17 15:10:13  martius  set  messages: + msg236136
2015-02-17 14:12:38  haypo  set  nosy: + neologix
                                 messages: + msg236134
2015-02-17 11:38:58  martius  set  messages: + msg236131
2015-02-15 20:16:12  haypo  set  messages: + msg236065
2015-02-15 20:13:49  haypo  set  title: asyncio: a new self-pipe should be created in the child process after fork -> asyncio: support fork
2015-02-05 11:03:50  haypo  set  files: + at_fork-2.patch
                                 messages: + msg235429
2015-02-04 22:18:46  haypo  set  messages: + msg235410
2015-02-04 22:16:22  haypo  set  files: + at_fork.patch
                                 messages: + msg235409
2014-12-08 15:47:57  haypo  set  messages: + msg232308
2014-12-08 11:20:24  martius  set  files: + close_self_pipe_after_selector.patch
                                   keywords: + patch
                                   messages: + msg232301
2014-12-01 15:35:02  gvanrossum  set  messages: + msg231941
2014-12-01 09:29:49  martius  set  messages: + msg231921
2014-12-01 09:23:06  martius  set  messages: + msg231919
2014-11-29 01:12:50  gvanrossum  set  messages: + msg231845
2014-11-28 16:51:51  martius  set  files: + test2.py
                                   nosy: + martius
                                   messages: + msg231819
2014-09-15 18:09:35  gvanrossum  set  messages: + msg226932
2014-09-15 07:15:50  haypo  set  messages: + msg226899
2014-09-15 01:28:23  yselivanov  set  messages: + msg226892
2014-09-14 23:45:33  gvanrossum  set  messages: + msg226889
2014-09-14 04:31:40  yselivanov  set  messages: + msg226861
2014-08-27 11:02:18  haypo  set  messages: + msg225973
2014-07-18 20:37:46  haypo  set  messages: + msg223431
2014-07-17 16:10:45  haypo  create