Message244123
> I would therefore, in the child after a fork, close the loop without
> breaking the selector state (closing without unregister()'ing fds), unset
> the default loop so get_event_loop() would create a new loop, then raise
> RuntimeError.
>
> I can elaborate on the use case I care about, but in a nutshell, doing so
> would allow spawning worker processes that can create their own loop,
> without requiring an idle "blank" child process to serve as a base for
> the workers. It has the added benefit, for instance, of allowing the
> parent and the child to share data by leveraging OS copy-on-write.
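The use case quoted above can be sketched as follows. This is a hypothetical pattern, not current asyncio behavior: the child discards whatever loop state it inherited and builds a fresh loop (`spawn_worker` and `coro_factory` are illustrative names, not real API):

```python
import asyncio
import os

def spawn_worker(coro_factory):
    """Fork a worker process that runs its own, freshly created event loop.

    The parent's loop is never touched in the child; the child simply
    creates a new loop, runs the given coroutine, and exits.
    """
    pid = os.fork()
    if pid == 0:
        # Child: build a brand-new loop instead of reusing the inherited one.
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        try:
            loop.run_until_complete(coro_factory())
        finally:
            loop.close()
            os._exit(0)  # skip atexit handlers inherited from the parent
    return pid
```

Because the fork happens from the live parent, any data structures built before the fork are visible in the child via copy-on-write, with no "blank" template process needed.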
The only way to fork a process safely is to fix loop.close() to detect
that it is being called from a forked process and to close the loop in
a way that doesn't break the master process. In that case we don't even
need to raise a RuntimeError, although we have no way to guarantee that
all resources will be freed correctly (right?).
So the idea is (I guess it's the 5th option):
1. If the forked child doesn't call loop.close() immediately after
forking, we raise RuntimeError on the first loop operation.
2. If the forked child explicitly calls loop.close(), that's fine:
we just close it and no error is raised. When closing, we only
close the selector (without unregistering or re-registering any FDs)
and we clean up the callback queues without trying to close anything.
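The two steps above can be sketched roughly as follows. This is a minimal hypothetical class for illustration only, not the real asyncio event loop: it records the owning PID at construction, raises RuntimeError on any operation in a forked child that hasn't closed the loop, and makes close() in a child only close the selector and drop queued callbacks:

```python
import os
import selectors

class ForkAwareLoop:
    """Minimal sketch of the proposed fork handling (hypothetical class)."""

    def __init__(self):
        self._selector = selectors.DefaultSelector()
        self._ready = []              # queued callbacks
        self._owner_pid = os.getpid() # PID of the process that made the loop
        self._closed = False

    def _check_fork(self):
        # Step 1: any loop operation in a forked child that has not
        # closed the loop raises RuntimeError.
        if not self._closed and os.getpid() != self._owner_pid:
            raise RuntimeError(
                "event loop inherited across fork; call loop.close() first")

    def call_soon(self, callback, *args):
        self._check_fork()
        self._ready.append((callback, args))

    def close(self):
        if self._closed:
            return
        # Step 2: in a forked child, close only the selector object
        # (no unregister/re-register of FDs, which could disturb the
        # parent) and drop queued callbacks without closing anything.
        self._selector.close()
        self._ready.clear()
        self._closed = True
```

A design note on the PID check: comparing os.getpid() against the PID recorded at construction is a cheap way to detect "we are in a forked child" without any fork hooks, at the cost of one syscall-backed check per operation.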
Guido, do you still think that unavoidably raising a RuntimeError in a
child process is the better option?
Date                | User       | Action | Args
2015-05-26 18:40:52 | yselivanov | set    | recipients: + yselivanov, gvanrossum, vstinner, christian.heimes, neologix, martius
2015-05-26 18:40:52 | yselivanov | set    | messageid: <1432665652.48.0.414557073893.issue21998@psf.upfronthosting.co.za>
2015-05-26 18:40:52 | yselivanov | link   | issue21998 messages
2015-05-26 18:40:52 | yselivanov | create |