This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: Handle a non-importable __main__ in multiprocessing
Type: behavior Stage: resolved
Components: Library (Lib) Versions: Python 3.4
process
Status: closed Resolution: fixed
Dependencies: Superseder:
Assigned To: ncoghlan Nosy List: Olivier.Grisel, brett.cannon, christian.heimes, eric.snow, larry, ncoghlan, pitrou, python-dev, sbt, zach.ware
Priority: release blocker Keywords: patch

Created on 2013-12-10 13:08 by Olivier.Grisel, last changed 2022-04-11 14:57 by admin. This issue is now closed.

Files
File name Uploaded Description
issue19946.diff Olivier.Grisel, 2013-12-11 09:52
issue19946_pep_451_multiprocessing_v2.diff ncoghlan, 2013-12-16 12:39 Work in progress patch (Fork+, ForkServer-, Spawn--)
test_multiprocessing_main_handling.py ncoghlan, 2013-12-17 11:42 Final test case
issue19946.diff zach.ware, 2013-12-19 03:55
skip_forkserver.patch christian.heimes, 2013-12-19 21:35
Messages (47)
msg205810 - (view) Author: Olivier Grisel (Olivier.Grisel) * Date: 2013-12-10 13:08
Here is a simple Python program that uses the new forkserver feature introduced in 3.4b1:

name: check_forkserver.py
"""
import multiprocessing
import os


def do(i):
    print(i, os.getpid())


def test_forkserver():
    mp = multiprocessing.get_context('forkserver')
    mp.Pool(2).map(do, range(3))


if __name__ == "__main__":
    test_forkserver()
"""

When running this using the "python check_forkserver.py" command, everything works as expected.

When running this using the nosetests launcher ("nosetests -s check_forkserver.py"), I get:

"""
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/opt/Python-HEAD/lib/python3.4/multiprocessing/forkserver.py", line 141, in main
    spawn.import_main_path(main_path)
  File "/opt/Python-HEAD/lib/python3.4/multiprocessing/spawn.py", line 252, in import_main_path
    methods.init_module_attrs(main_module)
  File "<frozen importlib._bootstrap>", line 1051, in init_module_attrs
AttributeError: 'NoneType' object has no attribute 'loader'
"""

Indeed, the spec variable in multiprocessing/spawn.py's import_main_path
function is None, as the nosetests script is not a regular Python module: in particular, it does not have a ".py" extension.

If I copy, symlink, or rename the "nosetests" script as "nosetests.py" in the same folder, this works as expected. I am not familiar enough with the importlib machinery to suggest a fix for this bug.

Also there is a typo in the comment: "causing a psuedo fork bomb" => "causing a pseudo fork bomb".

Note: I am running CPython head updated today.
msg205831 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2013-12-10 15:47
This sounds related to the ModuleSpec changes.
msg205832 - (view) Author: Brett Cannon (brett.cannon) * (Python committer) Date: 2013-12-10 16:12
The returning of None means that importlib.find_spec() didn't find the spec/loader for the specified module. So the question is exactly what module is being passed to importlib.find_spec() and why isn't it finding a spec/loader for that module.
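
For illustration, a minimal sketch of the failing lookup (an assumption about what happens internally; the extension-less launcher script is what ends up being searched for by name):

    import importlib.util

    # No import-system finder claims a file without a recognized suffix,
    # so the name-based lookup comes up empty.
    spec = importlib.util.find_spec("nosetests")
    print(spec)  # None -- multiprocessing then trips over spec.loader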

Did this code work in Python 3.3?
msg205833 - (view) Author: Brett Cannon (brett.cannon) * (Python committer) Date: 2013-12-10 16:14
I see Oliver says he is testing a new forkserver feature from 3.4b1, so it might not necessarily be importlib's fault then. Does using the old importlib.find_loader() approach work?
msg205847 - (view) Author: Olivier Grisel (Olivier.Grisel) * Date: 2013-12-10 19:18
> So the question is exactly what module is being passed to importlib.find_spec() and why isn't it finding a spec/loader for that module.

The module is the `nosetests` python script. module_name == 'nosetests' in this case. However, nosetests is not considered an importable module because of the missing '.py' extension in the filename.

> Did this code work in Python 3.3?

This code did not exist in Python 3.3.
msg205854 - (view) Author: Brett Cannon (brett.cannon) * (Python committer) Date: 2013-12-10 20:27
So at the bare minimum, the multiprocessing code should raise an ImportError when it can't find the spec for the module to help debug this kind of thing. Also that typo should get fixed.

Second, there is no way that 'nosetests' will ever succeed as an import since, as Oliver pointed out, it doesn't end in '.py' or any other identifiable way for a finder to know it can handle the file. So this is not a bug and simply a side-effect of how import works. The only way around it would be to symlink nosetests to nosetests.py or to somehow pre-emptively set up 'nosetests' for supported importing.
msg205857 - (view) Author: Richard Oudkerk (sbt) * (Python committer) Date: 2013-12-10 20:39
I guess this is a case where we should not be trying to import the main module.  The code for determining the path of the main module (if any) is rather crufty.

What is sys.modules['__main__'] and sys.modules['__main__'].__file__ if you run under nose?
msg205885 - (view) Author: Eric Snow (eric.snow) * (Python committer) Date: 2013-12-11 06:06
There is definitely room for improvement relative to module specs and __main__ (that's the topic of issue #19701).  That issue is waiting for __main__ to get a proper spec (see issues #19700 and #19697).
msg205893 - (view) Author: Olivier Grisel (Olivier.Grisel) * Date: 2013-12-11 08:33
I agree that a failure to lookup the module should raise an explicit exception.

> Second, there is no way that 'nosetests' will ever succeed as an import since, as Oliver pointed out, it doesn't end in '.py' or any other identifiable way for a finder to know it can handle the file. So this is not a bug and simply a side-effect of how import works. The only way around it would be to symlink nosetests to nosetests.py or to somehow pre-emptively set up 'nosetests' for supported importing.

I don't agree that (unix) Python programs that don't end with ".py" should be modified to make multiprocessing work correctly. I think it should be multiprocessing's responsibility to transparently find out how to spawn the new process, independently of whether the program's filename ends in '.py' or not.

Note: the fork mode always works under unix (with or without the ".py" extension). The spawn mode always works under Windows as, AFAIK, there is no way to have Python programs that don't end in .py under Windows; furthermore, I think multiprocessing does re-execute __main__ under Windows (but I haven't tested whether that is still the case in Python HEAD).
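
For what it's worth, the start methods available on a given platform can be checked with the public API (a quick sketch):

    import multiprocessing

    # Typically ['fork', 'spawn', 'forkserver'] on Linux and ['spawn'] on Windows
    print(multiprocessing.get_all_start_methods())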
msg205894 - (view) Author: Olivier Grisel (Olivier.Grisel) * Date: 2013-12-11 08:41
> what is sys.modules['__main__'] and sys.modules['__main__'].__file__ if you run under nose?

$ cat check_stuff.py 
import sys

def test_main():
    print("sys.modules['__main__']=%r"
          % sys.modules['__main__'])
    print("sys.modules['__main__'].__file__=%r"
          % sys.modules['__main__'].__file__)


if __name__ == '__main__':
    test_main()
(pyhead) ogrisel@is146148:~/tmp$ python check_stuff.py 
sys.modules['__main__']=<module '__main__' from 'check_stuff.py'>
sys.modules['__main__'].__file__='check_stuff.py'
(pyhead) ogrisel@is146148:~/tmp$ nosetests -s check_stuff.py 
sys.modules['__main__']=<module '__main__' from '/volatile/ogrisel/envs/pyhead/bin/nosetests'>
sys.modules['__main__'].__file__='/volatile/ogrisel/envs/pyhead/bin/nosetests'
.
----------------------------------------------------------------------
Ran 1 test in 0.001s

OK
msg205895 - (view) Author: Olivier Grisel (Olivier.Grisel) * Date: 2013-12-11 08:44
Note however that the problem is not specific to nose. If I rename my initial 'check_forkserver.py' script to 'check_forkserver', add the '#!/usr/bin/env python' header and make it executable ('chmod +x'), I get the same crash.

So the problem is related to the fact that under posix, valid Python programs can be executable scripts without the '.py' extension.
msg205898 - (view) Author: Olivier Grisel (Olivier.Grisel) * Date: 2013-12-11 09:52
Here is a patch that uses `imp.load_source` when the first importlib name-based lookup fails.

Apparently it fixes the issue on my box but I am not sure whether this is the correct way to do it.
msg205904 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2013-12-11 12:34
Rerunning main in the subprocess has always been a slightly dubious feature
of multiprocessing, but IIRC it's a necessary hack to ensure pickle
compatibility for things defined in __main__. Using "runpy.run_path" would
be a better solution, but we'll need to add the "target" parameter that
missed the beta1 deadline.
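
A rough sketch of that direction (hypothetical; the eventual fix also has to handle packages, zipfiles and __main__.__spec__):

    import runpy

    # Hypothetical: re-execute the parent's main script in the child under an
    # alternate run name so its __main__-only guard does not fire a second time
    child_globals = runpy.run_path("/path/to/parent_main.py",
                                   run_name="__mp_main__")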
msg206115 - (view) Author: Roundup Robot (python-dev) (Python triager) Date: 2013-12-13 16:43
New changeset cea42629ddf5 by Brett Cannon in branch 'default':
Issue #19946: Raise ImportError when the main module cannot be found
http://hg.python.org/cpython/rev/cea42629ddf5
msg206118 - (view) Author: Brett Cannon (brett.cannon) * (Python committer) Date: 2013-12-13 16:45
Created http://bugs.python.org/issue19978 to track using runpy.run_path in 3.5.
msg206120 - (view) Author: Olivier Grisel (Olivier.Grisel) * Date: 2013-12-13 16:56
Why has this issue been closed? Won't the spawn and forkserver modes work in Python 3.4 for Python programs started by a Python script (which is probably the majority of programs written in Python under unix)?

Is there any reason not to use the `imp.load_source` code I put in my patch as a temporary workaround if the cleaner runpy.run_path solution is too tricky to implement for the Python 3.4 release time frame?
msg206126 - (view) Author: Brett Cannon (brett.cannon) * (Python committer) Date: 2013-12-13 17:29
Multiple questions from Oliver to answer.

> Why has this issue been closed?

Because the issue as decided for this bug -- raising an unhelpful AttributeError rather than an ImportError -- was fixed.

> Won't the spawn and forkserver mode work in Python 3.4 for Python program started by a Python script (which is probably the majority of programs written in Python under unix)?

The semantics are not going to change in Python 3.4 and will just stay as they were in Python 3.3.

> Is there any reason not to use the `imp.load_source` code I put in my patch as a temporary workaround if the cleaner runpy.run_path solution is too tricky to implement for the Python 3.4 release time frame?

There are two reasons. One is that the imp module is deprecated in Python 3.4 (http://docs.python.org/3.4/library/imp.html#module-imp). Two is that temporarily inserting for a single release a solution that will simply be ripped out in the following release is just asking for trouble. Someone will use the temporary fix in some way in production code and then be shocked when the better solution doesn't work in exactly the same way. It's best to simply wait until 3.5 has the proper solution available.

I know it's frustrating to either name your scripts with a .py until Python 3.5 comes out or just wait until 3.5, but we can't risk a hacky solution for a single release as users will be living with the hack for several years.
msg206129 - (view) Author: Olivier Grisel (Olivier.Grisel) * Date: 2013-12-13 17:40
> The semantics are not going to change in Python 3.4 and will just stay as they were in Python 3.3.

Well, the semantics do change: in Python 3.3 the spawn and forkserver modes did not exist as named start methods at all. The "spawn" behavior existed, but only implicitly and only under Windows.

So Python 3.4 is introducing a new feature for POSIX systems that will only work in the rare cases where the Python program is launched via a script whose name ends in ".py".

Would running the `imp.load_source` trick only if `sys.platform != "win32"` be a viable way to preserve the Python 3.3 semantics under Windows while not introducing a partially broken feature in Python 3.4?
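
Concretely, the hypothetical guard could look something like this inside import_main_path (sketch only; spec, main_name and main_path stand for the locals of that function):

    import imp
    import sys

    if spec is None and sys.platform != "win32":
        # Hypothetical fallback: load the extension-less script by file path
        main_module = imp.load_source(main_name, main_path)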
msg206133 - (view) Author: Brett Cannon (brett.cannon) * (Python committer) Date: 2013-12-13 18:05
I'm sorry, Oliver, you are simply going to have to wait for Python 3.5 at this point to get the new semantics you want.
msg206134 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2013-12-13 18:09
> I'm sorry, Oliver, you are simply going to have to wait for Python 3.5
> at this point to get the new semantics you want.

Side note: it's Olivier, not Oliver.
msg206137 - (view) Author: Olivier Grisel (Olivier.Grisel) * Date: 2013-12-13 19:18
I can wait (or monkey-patch the stuff I need as a temporary workaround in my code). My worry is that Python 3.4 will introduce a new feature that is very crash-prone.

Take this simple program that uses the newly introduced `get_context` function (the same problem happens with `set_start_method`):

filename: mytool
"""
#!/usr/bin/env python
from multiprocessing import freeze_support, get_context


def compute_stuff(i):
    # in real life you could use a lib that uses threads
    # like cuda and that would crash with the default 'fork'
    # mode under POSIX
    return i ** 2


if __name__ == "__main__":
    freeze_support()
    ctx = get_context('spawn')
    ctx.Pool(4).map(compute_stuff, range(8))

"""

If you chmod +x this file and run it with ./mytool, the user will get an infinitely running process that keeps printing the following on stderr:

"""
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/opt/Python-HEAD/lib/python3.4/multiprocessing/spawn.py", line 96, in spawn_main
    exitcode = _main(fd)
  File "/opt/Python-HEAD/lib/python3.4/multiprocessing/spawn.py", line 105, in _main
    prepare(preparation_data)
  File "/opt/Python-HEAD/lib/python3.4/multiprocessing/spawn.py", line 210, in prepare
    import_main_path(data['main_path'])
  File "/opt/Python-HEAD/lib/python3.4/multiprocessing/spawn.py", line 256, in import_main_path
    raise ImportError(name=main_name)
ImportError
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/opt/Python-HEAD/lib/python3.4/multiprocessing/spawn.py", line 96, in spawn_main
    exitcode = _main(fd)
  File "/opt/Python-HEAD/lib/python3.4/multiprocessing/spawn.py", line 105, in _main
    prepare(preparation_data)
  File "/opt/Python-HEAD/lib/python3.4/multiprocessing/spawn.py", line 210, in prepare
    import_main_path(data['main_path'])
  File "/opt/Python-HEAD/lib/python3.4/multiprocessing/spawn.py", line 256, in import_main_path
    raise ImportError(name=main_name)
ImportError
...
"""

until the user kills the process. Is there really nothing we can do to avoid releasing Python 3.4 with this bug?
msg206138 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2013-12-13 19:26
Let's reopen, shall we? If not for 3.4, at least for 3.5.

It's likely that multiprocessing needs a __main__ simply because it needs a way to replicate the parent process' state in the child (for example, the set of imported modules, the logging configuration, etc.). Perhaps Richard can elaborate.

But, AFAIU, the __main__ could be imported as a script rather than a "proper" module from sys.path.
msg206140 - (view) Author: Olivier Grisel (Olivier.Grisel) * Date: 2013-12-13 19:40
For Python 3.4:

Maybe rather than raising ImportError, we could issue a warning to notify the users that names from the __main__ namespace could not be loaded, and make init_module_attrs return early.

This way a multiprocessing program that only calls functions defined in non-main namespaces could still use the new "start method" feature introduced in Python 3.4, while not changing the Python 3.3 semantics for Windows programs and not relying on any deprecated hack.
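
A hypothetical shape for that softer failure mode (sketch; the names follow spawn.import_main_path):

    import warnings

    if spec is None:
        # Hypothetical: warn instead of raising, and skip re-importing __main__
        warnings.warn("main module %r is not importable; objects defined in "
                      "__main__ will not be picklable in child processes"
                      % main_path, RuntimeWarning)
        return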
msg206180 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2013-12-14 13:17
Long term fix: runpy.run_path and runpy.run_module need to accept a "target" parameter, multiprocessing needs to use the appropriate one based on whether or not __main__.__spec__ is None.

Short term (3.4) fix: we can expose a private API in runpy akin to the _run_module_as_main that we use to implement the -m switch that will do the right thing for multiprocessing.
msg206224 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2013-12-15 10:36
Issue 19700 isn't quite finished, but I believe it is finished enough to let me get this working properly.
msg206228 - (view) Author: Richard Oudkerk (sbt) * (Python committer) Date: 2013-12-15 12:19
So there are really two situations:

1) The __main__ module *should not* be imported.  This is the case if you use __main__.py in a package or if you use nose to call test_main().

This should really be detected in get_preparation_data() in the parent process so that import_main_path() does not get called in the child process.

2) The __main__ module *should* be imported but it does not have a .py extension.
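
For case 1, a hypothetical parent-side check (sketch only; the real get_preparation_data logic must also cope with interactive sessions and frozen executables):

    import os
    import sys

    def _main_should_be_reimported():
        # Hypothetical helper for get_preparation_data (runs in the parent)
        main = sys.modules['__main__']
        main_file = getattr(main, '__file__', None)
        if main_file is None:
            return False  # interactive session or embedded interpreter
        if os.path.basename(main_file) == '__main__.py':
            return False  # package __main__: re-importing it would re-run it
        return True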
msg206229 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2013-12-15 12:43
Bumping the priority on this, as multiprocessing is currently creating invalid child processes by failing to set __main__.__spec__ appropriately.

The attached patch is designed to get us started down that path. It's currently broken, but I need feedback from folks that know the multiprocessing code better than I do in order to know where best to start poking and prodding.

With the patch, invoking regrtest directly still works:

    ./python Lib/test/regrtest.py -v test_multiprocessing_spawn

But relying on module execution fails:
    ./python -m test -v test_multiprocessing_spawn

I appear to be somehow getting child processes where __main__.__file__ is set, but __main__.__spec__ is not.
msg206230 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2013-12-15 12:47
With the restructuring in my patch, it would be easy enough to move the "early return" cases from the _fixup_main_* functions to instead be "don't set the variable" cases in get_preparation_data.
msg206252 - (view) Author: Richard Oudkerk (sbt) * (Python committer) Date: 2013-12-15 20:11
> I appear to be somehow getting child processes where __main__.__file__ is
> set, but __main__.__spec__ is not.

That seems to be true for the __main__ module even when multiprocessing is not involved.  Running a file /tmp/foo.py containing

    import sys
    print(sys.modules['__main__'].__spec__, sys.modules['__main__'].__file__)

I get output

    None /tmp/foo.py

I am confused by why you would ever want to load by module name rather than file name.  What problem would that fix?  If the idea is just to support importing a main module without a .py extension, isn't __file__ good enough?
msg206260 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2013-12-15 21:21
Scripts (whether in source form or precompiled) work via direct execution,
but all the other execution paths (directories, zipfiles, -m) rely on the
import system (via runpy). multiprocessing has been broken for years in
that regard, hence my old comment about the way it derived the module name
from the file name being problematic (although it only outright *broke*
with submodule execution, and even then you would likely get away with it
if you didn't use relative imports).

Historically, it was a hard problem to solve, since even the parent process
forgot the original name of __main__, but PEP 451 has now fixed that
limitation.

I also have an idea as to what may be wrong with my patch - I'm going to
try adjusting the first early return from _fixup_main_from_name to ensure
that __main__.__spec__ is set correctly.
msg206296 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2013-12-16 12:29
I created a test suite to ensure that all the various cases were handled correctly by the eventual patch (it doesn't test some of the namespace package related edge cases, but they devolve to normal module execution in terms of the final state of __main__, and that's covered by these tests).
msg206297 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2013-12-16 12:39
Current work in progress patch. The existing multiprocessing tests all pass, but the new main handling tests fail.

The fork start method passes all the tests.
The forkserver and spawn start methods fail the directory, zipfile and package tests.
msg206298 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2013-12-16 12:44
Updated test that handles timeouts better.

I also realised the current test failures are due to an error in the test design - the "failing" cases are ones where we deliberately *don't* rerun __main__ because the entire __main__.py file is assumed to be inside an implicit __main__-only guard.

So the code changes should be complete, I just need to figure out a way to tweak the tests appropriately.
msg206301 - (view) Author: Olivier Grisel (Olivier.Grisel) * Date: 2013-12-16 12:56
I applied issue19946_pep_451_multiprocessing_v2.diff and I confirm that it fixes the problem that I reported initially.
msg206421 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2013-12-17 11:42
OK, fixed test case attached. Turns out the ipython workaround test was completely wrong and never even loaded multiprocessing, and hence always passed, even with the workaround disabled. So I fixed that test case, and used the same approach for the zipfile, directory and package tests. I also fixed the submodule test to check that explicit relative imports work properly from __mp_main__ in the child processes.

With this updated test case, the v2 patch handles everything correctly, but there are 4 failures on Linux without the patch. Specifically:

- test_basic_script_no_suffix fails for the spawn and forkserver start methods (the child processes fail to find a spec for __mp_main__)
- test_module_in_package fails for the spawn and forkserver start methods (the explicit relative import from __mp_main__ fails because the import system isn't initialised correctly in the child processes)

The former case is the one Olivier reported in this issue. It's a new case for 3.4, since the spawn start method was previously only available on Windows, where scripts always have an extension.

The latter edge case is the one my "XXX (ncoghlan): The following code makes several bogus assumptions regarding the relationship between __file__ and a module's real name." comment was about.

I believe we could actually adjust earlier versions to handle things as well as this new PEP 451 based approach (by using a combination of __package__ and __file__ rather than __spec__), but that's much harder for me to work on in those versions where the "spawn" start method is only available on Windows :)
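
For reference, the __mp_main__ alias in the child processes can be observed with a tiny script (a sketch against the patched behavior):

    # filename: check_alias.py -- run directly under a patched 3.4 build
    import multiprocessing

    def report(_):
        return __name__  # expected: '__mp_main__' in spawn/forkserver children

    if __name__ == '__main__':
        ctx = multiprocessing.get_context('spawn')
        print(ctx.Pool(2).map(report, range(2)))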
msg206424 - (view) Author: Roundup Robot (python-dev) (Python triager) Date: 2013-12-17 12:17
New changeset b6d6f3b4b100 by Nick Coghlan in branch 'default':
Close #19946: use runpy as needed in multiprocessing
http://hg.python.org/cpython/rev/b6d6f3b4b100
msg206478 - (view) Author: Richard Oudkerk (sbt) * (Python committer) Date: 2013-12-17 20:02
Thanks for your hard work Nick!
msg206559 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2013-12-18 23:01
The commit broke a couple of buildbots, including all the Windows bots and OpenIndiana.
msg206575 - (view) Author: Zachary Ware (zach.ware) * (Python committer) Date: 2013-12-19 03:55
The problem on Windows, at least, is that the skips for the 'fork' and 'forkserver' start methods aren't firing, due to setUpClass being improperly set up in MultiProcessingCmdLineMixin: it's not decorated as a classmethod, and the 'u' is lower-case instead of upper. Just fixing that makes for some unusual output ("skipped '"fork" start method not available'" with no indication of which test was skipped) and a variable number of tests depending on the available start methods, so a better fix is to just do the check in setUp.

Unrelated to the failure, but we're also in the process of moving away from using test_main(), preferring unittest.main().

The attached patch addresses both, passes on Windows and Linux, and I suspect should help on OpenIndiana as well judging by the tracebacks it's giving.
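
The per-test skip being described has roughly this shape (sketch; the names approximate the test mixin):

    def setUp(self):
        # Skipping here rather than in setUpClass reports each skip against a
        # concrete test and keeps the reported test count stable
        if self.start_method not in AVAILABLE_START_METHODS:
            self.skipTest("%r start method not available" % self.start_method)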
msg206607 - (view) Author: Roundup Robot (python-dev) (Python triager) Date: 2013-12-19 11:54
New changeset 460961e80e31 by Nick Coghlan in branch 'default':
Issue #19946: appropriately skip new multiprocessing tests
http://hg.python.org/cpython/rev/460961e80e31
msg206650 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2013-12-19 21:35
The OpenIndiana tests are still failing. OpenIndiana doesn't support forkserver because it doesn't implement the send handle feature. The patch skips the forkserver tests if HAVE_SEND_HANDLE is false.
msg206653 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2013-12-19 22:00
I think that needs to be fixed on the multiprocessing side rather than just
in the tests - we shouldn't create a concrete context for a start method
that isn't going to work on that platform. Finding that kind of discrepancy
was part of my rationale for basing the skips on the available contexts
(although my main motivation was simplicity).

There may also be docs implications in describing which methods are
supported on different platforms (although I haven't looked at how that is
currently documented).
msg206655 - (view) Author: Richard Oudkerk (sbt) * (Python committer) Date: 2013-12-20 00:38
On 19/12/2013 10:00 pm, Nick Coghlan wrote:
> I think that needs to be fixed on the multiprocessing side rather than just
> in the tests - we shouldn't create a concrete context for a start method
> that isn't going to work on that platform. Finding that kind of discrepancy
> was part of my rationale for basing the skips on the available contexts
> (although my main motivation was simplicity).
>
> There may also be docs implications in describing which methods are
> supported on different platforms (although I haven't looked at how that is
> currently documented).
>
If by "concrete context" you mean _concrete_contexts['forkserver'], then 
that is supposed to be private.  If you write

     ctx = multiprocessing.get_context('forkserver')

then this will raise ValueError if the forkserver method is not 
available.  You can also use

    'forkserver' in multiprocessing.get_all_start_methods()

to check if it is available.
msg206659 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2013-12-20 01:58
Ah, I should have looked more closely at the docs to see if there was a public API for that before poking around in the package internals.

In that case, I suggest we change this bit in the test:

    # We look inside the context module to find out which
    # start methods we can check
    from multiprocessing.context import _concrete_contexts

to use the appropriate public API:

    # Need to know which start methods we should test
    import multiprocessing
    AVAILABLE_START_METHODS = set(multiprocessing.get_all_start_methods())

And then adjust the skip check to look in AVAILABLE_START_METHODS rather than _concrete_contexts.

I'll make that change tonight if nobody beats me to it.
msg206677 - (view) Author: Roundup Robot (python-dev) (Python triager) Date: 2013-12-20 12:17
New changeset 00d09afb57ca by Nick Coghlan in branch 'default':
Issue #19946: use public API for multiprocessing start methods
http://hg.python.org/cpython/rev/00d09afb57ca
msg206678 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2013-12-20 12:19
Pending a clean bill of health from the stable buildbots :)
msg206683 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2013-12-20 14:33
Now passing on all the stable buildbots (the two red Windows bots are for other issues, such as issue 15599 for the threaded import test failure).
History
Date User Action Args
2022-04-11 14:57:55 admin set github: 64145
2015-02-24 09:59:58 ncoghlan set resolution: fixed
2013-12-20 14:33:20 ncoghlan set status: pending -> closed
    messages: + msg206683
2013-12-20 12:19:14 ncoghlan set status: open -> pending
    messages: + msg206678
2013-12-20 12:18:00 python-dev set messages: + msg206677
2013-12-20 01:58:02 ncoghlan set messages: + msg206659
2013-12-20 00:38:03 sbt set messages: + msg206655
2013-12-19 22:00:13 ncoghlan set messages: + msg206653
2013-12-19 21:35:05 christian.heimes set files: + skip_forkserver.patch
    messages: + msg206650
2013-12-19 11:54:34 python-dev set messages: + msg206607
2013-12-19 03:55:20 zach.ware set files: + issue19946.diff
    nosy: + zach.ware
    messages: + msg206575
2013-12-18 23:01:42 christian.heimes set status: closed -> open
    nosy: + christian.heimes
    messages: + msg206559
    resolution: fixed -> (no value)
2013-12-17 20:02:27 sbt set messages: + msg206478
2013-12-17 12:40:56 ncoghlan link issue19702 dependencies
2013-12-17 12:22:38 ncoghlan set files: - test_multiprocessing_main_handling.py
2013-12-17 12:17:42 python-dev set status: open -> closed
    resolution: fixed
    messages: + msg206424
    stage: patch review -> resolved
2013-12-17 11:43:31 ncoghlan set files: - test_multiprocessing_main_handling.py
2013-12-17 11:43:10 ncoghlan set files: - issue19946_pep_451_multiprocessing.diff
2013-12-17 11:42:50 ncoghlan set files: + test_multiprocessing_main_handling.py
    messages: + msg206421
2013-12-16 12:56:12 Olivier.Grisel set messages: + msg206301
2013-12-16 12:45:00 ncoghlan set stage: needs patch -> patch review
2013-12-16 12:44:53 ncoghlan set files: + test_multiprocessing_main_handling.py
    messages: + msg206298
2013-12-16 12:39:08 ncoghlan set files: + issue19946_pep_451_multiprocessing_v2.diff
    messages: + msg206297
2013-12-16 12:29:54 ncoghlan set files: + test_multiprocessing_main_handling.py
    messages: + msg206296
2013-12-15 21:21:03 ncoghlan set messages: + msg206260
2013-12-15 20:11:08 sbt set messages: + msg206252
2013-12-15 12:47:41 ncoghlan set messages: + msg206230
2013-12-15 12:43:44 ncoghlan set files: + issue19946_pep_451_multiprocessing.diff
    priority: low -> release blocker
    nosy: + larry
    messages: + msg206229
2013-12-15 12:19:14 sbt set messages: + msg206228
2013-12-15 10:36:46 ncoghlan set resolution: fixed -> (no value)
    dependencies: - Update runpy for PEP 451
    messages: + msg206224
2013-12-14 23:14:10 ncoghlan set assignee: ncoghlan
2013-12-14 13:18:47 ncoghlan link issue19978 dependencies
2013-12-14 13:18:29 ncoghlan set title: Have multiprocessing raise ImportError when spawning a process that can't find the "main" module -> Handle a non-importable __main__ in multiprocessing
2013-12-14 13:17:37 ncoghlan set dependencies: + Update runpy for PEP 451
    messages: + msg206180
2013-12-13 19:40:13 Olivier.Grisel set messages: + msg206140
2013-12-13 19:26:27 pitrou set status: closed -> open
    assignee: brett.cannon -> (no value)
    messages: + msg206138
2013-12-13 19:18:24 Olivier.Grisel set messages: + msg206137
2013-12-13 18:09:20 pitrou set messages: + msg206134
2013-12-13 18:05:24 brett.cannon set messages: + msg206133
2013-12-13 17:40:31 Olivier.Grisel set messages: + msg206129
2013-12-13 17:29:40 brett.cannon set messages: + msg206126
2013-12-13 16:56:12 Olivier.Grisel set messages: + msg206120
2013-12-13 16:45:49 brett.cannon set status: open -> closed
    resolution: fixed
    messages: + msg206118
2013-12-13 16:43:19 python-dev set nosy: + python-dev
    messages: + msg206115
2013-12-11 12:34:41 ncoghlan set messages: + msg205904
2013-12-11 09:52:56 Olivier.Grisel set files: + issue19946.diff
    keywords: + patch
    messages: + msg205898
2013-12-11 08:44:37 Olivier.Grisel set messages: + msg205895
2013-12-11 08:41:19 Olivier.Grisel set messages: + msg205894
2013-12-11 08:33:17 Olivier.Grisel set messages: + msg205893
2013-12-11 06:06:10 eric.snow set messages: + msg205885
2013-12-10 20:39:39 sbt set messages: + msg205857
2013-12-10 20:27:32 brett.cannon set priority: normal -> low
    title: multiprocessing crash with forkserver or spawn when run from a non ".py" ending script -> Have multiprocessing raise ImportError when spawning a process that can't find the "main" module
    messages: + msg205854
    assignee: brett.cannon
    type: crash -> behavior
    stage: needs patch
2013-12-10 19:18:35 Olivier.Grisel set messages: + msg205847
2013-12-10 16:14:10 brett.cannon set messages: + msg205833
2013-12-10 16:12:37 brett.cannon set messages: + msg205832
2013-12-10 15:47:51 pitrou set nosy: + brett.cannon, ncoghlan, eric.snow
    messages: + msg205831
2013-12-10 14:21:37 vstinner set nosy: + pitrou, sbt
2013-12-10 13:09:51 Olivier.Grisel set type: crash
2013-12-10 13:08:59 Olivier.Grisel create