Author martius
Recipients christian.heimes, gvanrossum, martius, neologix, vstinner, yselivanov
Date 2015-05-26.19:28:19
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <>
In-reply-to <>
2015-05-26 20:40 GMT+02:00 Yury Selivanov <>:

> Yury Selivanov added the comment:
> The only solution to safely fork a process is to fix loop.close() to
> check if it's called from a forked process and to close the loop in
> a safe way (to avoid breaking the master process).  In this case
> we don't even need to throw a RuntimeError.  But we won't have a
> chance to guarantee that all resources will be freed correctly (right?)

If all the tasks are cancelled and the loop's internal structures (callback
lists, task sets, etc.) are cleared, I believe the garbage collector will
eventually be able to dispose of everything.
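To make the idea concrete, here is a minimal sketch of "cancel all pending tasks, let them unwind, then close the loop". The helper name `_shutdown_loop_after_fork` is illustrative, not part of asyncio's API; the point is only that once every task is cancelled and has run its cleanup, nothing prevents the GC from reclaiming the rest.

```python
import asyncio

def _shutdown_loop_after_fork(loop):
    # Hypothetical helper: cancel every not-yet-finished task of this loop,
    # give each one a chance to process its CancelledError, then close.
    pending = asyncio.all_tasks(loop)
    for task in pending:
        task.cancel()
    if pending:
        # return_exceptions=True swallows the CancelledErrors so that
        # run_until_complete() itself does not raise.
        loop.run_until_complete(
            asyncio.gather(*pending, return_exceptions=True))
    loop.close()

async def worker():
    await asyncio.sleep(3600)  # stands in for a long-lived task

loop = asyncio.new_event_loop()
task = loop.create_task(worker())
_shutdown_loop_after_fork(loop)
```

After this runs, `task.cancelled()` is true and the loop is closed; whatever the tasks referenced becomes collectable.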

However, that's indeed not enough: resources created by other parts of
asyncio may leak (transports, subprocesses). For instance, I proposed to add
a "detach()" method to SubprocessTransport: in this case, I need to close the
stdin, stdout and stderr pipes without killing the subprocess.
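To illustrate what such a detach would do, here is a sketch using the plain `subprocess` module (the helper `detach_pipes` is hypothetical; the proposed `SubprocessTransport.detach()` would do the transport-level equivalent): release our ends of the pipes without sending the child any signal.

```python
import subprocess
import sys

def detach_pipes(proc):
    # Hypothetical helper mirroring the proposed detach(): close the
    # parent's ends of stdin/stdout/stderr, but deliberately send no
    # signal, so the child process keeps running on its own.
    for pipe in (proc.stdin, proc.stdout, proc.stderr):
        if pipe is not None:
            pipe.close()
    # Note: no proc.terminate() / proc.kill() here -- that is the point.

proc = subprocess.Popen(
    [sys.executable, "-c", "pass"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
detach_pipes(proc)
proc.wait()  # only to avoid a zombie in this demo; a real detach would not wait
```

The child is unaffected by the pipe closure; only the parent's file descriptors go away.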

> So the idea is (I guess it's the 5th option):
> 1. If the forked child doesn't call loop.close() immediately after
> forking we raise RuntimeError on first loop operation.
> 2. If the forked child calls (explicitly) loop.close() -- it's fine,
> we just close it, the error won't be raised.  When we close, we only
> close the selector (without unregistering or re-registering any FDs),
> and we clean up the callback queues without trying to close anything.
> Guido, do you still think that raising a "RuntimeError" in a child
> process in an unavoidable way is a better option?
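A minimal sketch of that "5th option": the loop remembers the pid of the process that created it; after a fork, the child may still call close(), but any other loop operation raises RuntimeError. The class and method names below are illustrative, not asyncio's actual implementation, and the pid mismatch is simulated directly rather than via a real fork() so the example is deterministic.

```python
import os

class ForkAwareLoop:
    # Sketch of the proposed behaviour: record the creating pid and
    # refuse normal operation in a forked child unless close() is called.

    def __init__(self):
        self._pid = os.getpid()
        self._closed = False

    def _check_fork(self):
        if os.getpid() != self._pid:
            raise RuntimeError(
                "event loop used in a forked child without being closed")

    def run_once(self):
        self._check_fork()
        # ... a real loop iteration would go here ...

    def close(self):
        # Always allowed, even after fork: only close the selector and
        # clear the callback queues, without touching FDs shared with
        # the parent process.
        self._closed = True

loop = ForkAwareLoop()
loop.run_once()   # fine in the creating process

loop._pid -= 1    # simulate "we are now in a forked child"
try:
    loop.run_once()
    raised = False
except RuntimeError:
    raised = True

loop.close()      # explicit close is still permitted after the fork
```

The design choice here is that close() skips `_check_fork()`, which is exactly point 2 of the proposal: an explicit close in the child is fine and suppresses the error.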

> ----------
> _______________________________________
> Python tracker <>
> <>
> _______________________________________
Date                 User     Action  Args
2015-05-26 19:28:19  martius  set     recipients: + martius, gvanrossum, vstinner, christian.heimes, neologix, yselivanov
2015-05-26 19:28:19  martius  link    issue21998 messages
2015-05-26 19:28:19  martius  create