Title: Allow registering after-fork initializers in multiprocessing
Type: enhancement Stage: needs patch
Components: Library (Lib) Versions: Python 3.7
Status: open Resolution:
Dependencies: Superseder:
Assigned To: Nosy List: davin, njs, pitrou, sbt, yselivanov
Priority: normal Keywords:

Created on 2017-03-16 16:45 by pitrou, last changed 2017-03-16 21:37 by yselivanov.

Messages (7)
msg289721 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2017-03-16 16:45
Currently, multiprocessing has hard-coded logic to re-seed the Python random generator (in the random module) whenever a process is forked.  This is present in two places: `Popen._launch` and `serve_one` (for the "fork" and "forkserver" start methods, respectively).

However, other libraries would like to benefit from this mechanism.  For example, Numpy has its own random number generator that would also benefit from re-seeding after fork().  Currently, this is solvable using multiprocessing.Pool, which has an `initializer` argument.  However, concurrent.futures' ProcessPoolExecutor does not offer such a facility; nor do other ways of launching child processes, such as (simply) instantiating a new Process object.

Therefore, I'd like to propose adding a new top-level function in multiprocessing (and also a new Context method) to register a new initializer function for use after fork().  That way, each library can add its own initializers if desired, freeing users from the burden of doing so in their applications.
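A minimal sketch of what such a registration mechanism might look like; the function names (`register_after_fork_initializer` and friends) are hypothetical, not an existing or proposed API surface:

```python
# Hypothetical registry of callables to run in the child after fork().
_after_fork_initializers = []

def register_after_fork_initializer(func):
    """Register a callable to be invoked in the child after fork()."""
    _after_fork_initializers.append(func)

def _run_after_fork_initializers():
    # multiprocessing would call this in the child process
    # immediately after fork(), in registration order.
    for func in _after_fork_initializers:
        func()

# A library could then register its own hook once, at import time:
calls = []
register_after_fork_initializer(lambda: calls.append("reseeded"))

_run_after_fork_initializers()   # simulating the post-fork call
assert calls == ["reseeded"]
```

Under this scheme, Numpy could register its own re-seeding callback at import time, and every fork-based start method would pick it up automatically.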
msg289726 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2017-03-16 18:24
Maybe a better way would be to proceed with
msg289728 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2017-03-16 19:01
That issue seems to have stalled because it focused on low-level APIs, and also because it proposes a new module, with all the API questions that entails.

Another possible stance is that os.fork() should be left as-is, as a low-level primitive, and this functionality should be provided by the higher-level multiprocessing module.
msg289730 - (view) Author: Davin Potts (davin) * (Python committer) Date: 2017-03-16 20:50
Having read through issue16500 and issue6721, I worry that this one could again become bogged down with similar concerns.

With the specific example of NumPy, I am not sure I would want its random number generator to be reseeded with each forked process.  There are many situations where I very much need to preserve the original seed and/or current PRNG state.

I do not yet see a clear, motivating use case even after reading those two older issues.  I worry that if it were added it would (almost?) never get used, either because the need is rare or because developers will more often solve the problem in their own target functions when they first start up.  The suggestion of a top-level function and Context method makes good sense to me as a place to offer such a thing, but is there a clearer use case?
msg289731 - (view) Author: Nathaniel Smith (njs) * Date: 2017-03-16 20:55
I think ideally on numpy's end we would reseed iff the RNG was unseeded. Now that I think about it I'm a little surprised that we haven't had more complaints about this, so I guess it's not a super urgent issue, but that would be an improvement over the status quo, I think.
msg289733 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2017-03-16 21:00
The use case is quite clear here.  The specific need to re-seed the Numpy PRNG has already come up in two different projects I work on: Numba and Dask.  I wouldn't be surprised if other libraries have similar needs.

If you want a reproducible RNG sequence, you should actually use a specific, explicit seed (and possibly instantiate a dedicated random state instead of using the default one).  When not using an explicit seed, people expect different random numbers regardless of whether a function is executed in one or several processes.
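The point about dedicated random states can be illustrated with the stdlib alone: reproducibility belongs to an explicitly seeded, private generator, while the unseeded global one is free to differ across processes.

```python
import random

# A dedicated generator with an explicit seed: reproducible by construction.
rng = random.Random(42)
a = [rng.random() for _ in range(3)]

rng = random.Random(42)          # same seed -> same sequence
b = [rng.random() for _ in range(3)]

assert a == b

# The module-level generator, left unseeded, carries no reproducibility
# promise -- so re-seeding it after fork() breaks nothing users rely on.
```

Numpy's `numpy.random.RandomState` plays the same role as `random.Random` here.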

Note that multiprocessing *already* re-seeds the stdlib PRNG after fork, so re-seeding the Numpy PRNG is consistent with current behaviour.

About it being rarely used: the aim is not use by application developers but by library authors; e.g. Numpy itself could register the re-seeding callback, which would free users from doing it themselves.  It doesn't have to be used a lot to be useful.
msg289735 - (view) Author: Yury Selivanov (yselivanov) * (Python committer) Date: 2017-03-16 21:37
BTW, why can't you use `pthread_atfork` in numpy?
Date                 User        Action  Args
2017-03-16 21:37:44  yselivanov  set     messages: + msg289735
2017-03-16 21:00:37  pitrou      set     messages: + msg289733
2017-03-16 20:55:04  njs         set     messages: + msg289731
2017-03-16 20:50:18  davin       set     messages: + msg289730
2017-03-16 19:01:00  pitrou      set     messages: + msg289728
2017-03-16 18:24:33  yselivanov  set     nosy: + yselivanov; messages: + msg289726
2017-03-16 16:46:23  pitrou      set     nosy: + njs
2017-03-16 16:45:39  pitrou      create