
Handle a non-importable __main__ in multiprocessing #64145

Closed
ogrisel mannequin opened this issue Dec 10, 2013 · 47 comments
Assignees
Labels
release-blocker · stdlib (Python modules in the Lib dir) · type-bug (An unexpected behavior, bug, or error)

Comments

@ogrisel
Mannequin

ogrisel mannequin commented Dec 10, 2013

BPO 19946
Nosy @brettcannon, @ncoghlan, @pitrou, @larryhastings, @tiran, @ericsnowcurrently, @zware, @ogrisel
Files
  • issue19946.diff
  • issue19946_pep_451_multiprocessing_v2.diff: Work in progress patch (Fork+, ForkServer-, Spawn--)
  • test_multiprocessing_main_handling.py: Final test case
  • issue19946.diff
  • skip_forkserver.patch
  • Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = 'https://github.com/ncoghlan'
    closed_at = <Date 2013-12-20.14:33:20.194>
    created_at = <Date 2013-12-10.13:08:59.356>
    labels = ['type-bug', 'library', 'release-blocker']
    title = 'Handle a non-importable __main__ in multiprocessing'
    updated_at = <Date 2015-02-24.09:59:58.484>
    user = 'https://github.com/ogrisel'

    bugs.python.org fields:

    activity = <Date 2015-02-24.09:59:58.484>
    actor = 'ncoghlan'
    assignee = 'ncoghlan'
    closed = True
    closed_date = <Date 2013-12-20.14:33:20.194>
    closer = 'ncoghlan'
    components = ['Library (Lib)']
    creation = <Date 2013-12-10.13:08:59.356>
    creator = 'Olivier.Grisel'
    dependencies = []
    files = ['33091', '33162', '33175', '33201', '33222']
    hgrepos = []
    issue_num = 19946
    keywords = ['patch']
    message_count = 47.0
    messages = ['205810', '205831', '205832', '205833', '205847', '205854', '205857', '205885', '205893', '205894', '205895', '205898', '205904', '206115', '206118', '206120', '206126', '206129', '206133', '206134', '206137', '206138', '206140', '206180', '206224', '206228', '206229', '206230', '206252', '206260', '206296', '206297', '206298', '206301', '206421', '206424', '206478', '206559', '206575', '206607', '206650', '206653', '206655', '206659', '206677', '206678', '206683']
    nosy_count = 10.0
    nosy_names = ['brett.cannon', 'ncoghlan', 'pitrou', 'larry', 'christian.heimes', 'python-dev', 'sbt', 'eric.snow', 'zach.ware', 'Olivier.Grisel']
    pr_nums = []
    priority = 'release blocker'
    resolution = 'fixed'
    stage = 'resolved'
    status = 'closed'
    superseder = None
    type = 'behavior'
    url = 'https://bugs.python.org/issue19946'
    versions = ['Python 3.4']

    @ogrisel
    Mannequin Author

    ogrisel mannequin commented Dec 10, 2013

    Here is a simple python program that uses the new forkserver feature introduced in 3.4b1:

    name: check_forkserver.py
    """
    import multiprocessing
    import os

    def do(i):
        print(i, os.getpid())
    
    
    def test_forkserver():
        mp = multiprocessing.get_context('forkserver')
        mp.Pool(2).map(do, range(3))
    
    
    if __name__ == "__main__":
        test_forkserver()
    """

    When running this using the "python check_forkserver.py" command everything works as expected.

    When running this using the nosetests launcher ("nosetests -s check_forkserver.py"), I get:

    """
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/opt/Python-HEAD/lib/python3.4/multiprocessing/forkserver.py", line 141, in main
        spawn.import_main_path(main_path)
      File "/opt/Python-HEAD/lib/python3.4/multiprocessing/spawn.py", line 252, in import_main_path
        methods.init_module_attrs(main_module)
      File "<frozen importlib._bootstrap>", line 1051, in init_module_attrs
    AttributeError: 'NoneType' object has no attribute 'loader'
    """

    Indeed, the spec variable in multiprocessing/spawn.py's import_main_path
    function is None, as the nosetests script is not a regular Python module: in particular, it does not have a ".py" extension.

    If I copy, symlink, or rename the "nosetests" script as "nosetests.py" in the same folder, this works as expected. I am not familiar enough with the importlib machinery to suggest a fix for this bug.
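    For illustration (an editorial sketch, not part of the original report): the name-based lookup fails for a suffix-less script, while an explicit path-based load succeeds. The path below is hypothetical and the snippet uses the modern importlib API:

        import importlib.util
        from importlib.machinery import SourceFileLoader

        # Name-based lookup consults sys.path and the recognized suffixes
        # (.py, .pyc, extension modules), so a suffix-less script is invisible:
        print(importlib.util.find_spec("nosetests"))  # -> None

        # A path-based load with an explicit source loader ignores the suffix:
        path = "/usr/local/bin/nosetests"  # hypothetical location
        loader = SourceFileLoader("nosetests", path)
        spec = importlib.util.spec_from_file_location("nosetests", path, loader=loader)
        module = importlib.util.module_from_spec(spec)
        loader.exec_module(module)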

    Also there is a typo in the comment: "causing a psuedo fork bomb" => "causing a pseudo fork bomb".

    Note: I am running CPython head updated today.

    @ogrisel ogrisel mannequin added the stdlib (Python modules in the Lib dir) and type-crash (A hard crash of the interpreter, possibly with a core dump) labels Dec 10, 2013
    @pitrou
    Member

    pitrou commented Dec 10, 2013

    This sounds related to the ModuleSpec changes.

    @brettcannon
    Member

    The returning of None means that importlib.find_spec() didn't find the spec/loader for the specified module. So the question is exactly what module is being passed to importlib.find_spec() and why isn't it finding a spec/loader for that module.

    Did this code work in Python 3.3?

    @brettcannon
    Member

    I see Oliver says he is testing a new forkserver feature from 3.4b1, so it might not necessarily be importlib's fault then. Does using the old importlib.find_loader() approach work?

    @ogrisel
    Mannequin Author

    ogrisel mannequin commented Dec 10, 2013

    So the question is exactly what module is being passed to importlib.find_spec() and why isn't it finding a spec/loader for that module.

    The module is the nosetests python script. module_name == 'nosetests' in this case. However, nosetests is not considered an importable module because of the missing '.py' extension in the filename.

    Did this code work in Python 3.3?

    This code did not exist in Python 3.3.

    @brettcannon
    Member

    So at the bare minimum, the multiprocessing code should raise an ImportError when it can't find the spec for the module to help debug this kind of thing. Also that typo should get fixed.

    Second, there is no way that 'nosetests' will ever succeed as an import since, as Oliver pointed out, it doesn't end in '.py' or any other identifiable way for a finder to know it can handle the file. So this is not a bug and simply a side-effect of how import works. The only way around it would be to symlink nosetests to nosetests.py or to somehow pre-emptively set up 'nosetests' for supported importing.

    @brettcannon brettcannon self-assigned this Dec 10, 2013
    @brettcannon brettcannon changed the title from "multiprocessing crash with forkserver or spawn when run from a non '.py' ending script" to "Have multiprocessing raise ImportError when spawning a process that can't find the 'main' module" Dec 10, 2013
    @brettcannon brettcannon added the type-bug (An unexpected behavior, bug, or error) label and removed the type-crash (A hard crash of the interpreter, possibly with a core dump) label Dec 10, 2013
    @sbt
    Mannequin

    sbt mannequin commented Dec 10, 2013

    I guess this is a case where we should not be trying to import the main module. The code for determining the path of the main module (if any) is rather crufty.

    What is sys.modules['__main__'] and sys.modules['__main__'].__file__ if you run under nose?

    @ericsnowcurrently
    Member

    There is definitely room for improvement relative to module specs and __main__ (that's the topic of issue bpo-19701). That issue is waiting for __main__ to get a proper spec (see issues bpo-19700 and bpo-19697).

    @ogrisel
    Mannequin Author

    ogrisel mannequin commented Dec 11, 2013

    I agree that a failure to look up the module should raise an explicit exception.

    Second, there is no way that 'nosetests' will ever succeed as an import since, as Oliver pointed out, it doesn't end in '.py' or any other identifiable way for a finder to know it can handle the file. So this is not a bug and simply a side-effect of how import works. The only way around it would be to symlink nosetests to nosetests.py or to somehow pre-emptively set up 'nosetests' for supported importing.

    I don't agree that (unix) Python programs that don't end with ".py" should have to be modified for multiprocessing to work correctly. I think it should be multiprocessing's responsibility to transparently figure out how to spawn the new process, regardless of whether the program's filename ends in '.py'.

    Note: the fork mode always works under unix (with or without the ".py" extension). The spawn mode always works under Windows, as AFAIK there is no way to have Python programs that don't end in .py under Windows; furthermore, I think multiprocessing does execute the __main__ under Windows (but I haven't tested whether that's still the case in Python HEAD).

    @ogrisel
    Mannequin Author

    ogrisel mannequin commented Dec 11, 2013

    what is sys.modules['__main__'] and sys.modules['__main__'].__file__ if you run under nose?

    $ cat check_stuff.py 
    import sys
    def test_main():
        print("sys.modules['__main__']=%r"
              % sys.modules['__main__'])
        print("sys.modules['__main__'].__file__=%r"
              % sys.modules['__main__'].__file__)
    
    
    if __name__ == '__main__':
        test_main()

    (pyhead) ogrisel@is146148:~/tmp$ python check_stuff.py
    sys.modules['__main__']=<module '__main__' from 'check_stuff.py'>
    sys.modules['__main__'].__file__='check_stuff.py'
    (pyhead) ogrisel@is146148:~/tmp$ nosetests -s check_stuff.py 
    sys.modules['__main__']=<module '__main__' from '/volatile/ogrisel/envs/pyhead/bin/nosetests'>
    sys.modules['__main__'].__file__='/volatile/ogrisel/envs/pyhead/bin/nosetests'
    .

    Ran 1 test in 0.001s

    OK

    @ogrisel
    Mannequin Author

    ogrisel mannequin commented Dec 11, 2013

    Note however that the problem is not specific to nose. If I rename my initial 'check_forkserver.py' script to 'check_forkserver', add the '#!/usr/bin/env python' header and make it 'chmod +x', I get the same crash.

    So the problem is related to the fact that under posix, valid Python programs can be executable scripts without the '.py' extension.

    @ogrisel
    Mannequin Author

    ogrisel mannequin commented Dec 11, 2013

    Here is a patch that uses imp.load_source when the first importlib name-based lookup fails.

    Apparently it fixes the issue on my box but I am not sure whether this is the correct way to do it.
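    A minimal sketch of that fallback idea (illustrative only, not the attached patch; the function name and parameters are made up here):

        import importlib
        import imp  # deprecated in 3.4; shown only to mirror the patch's approach

        def import_main(main_name, main_path):
            # Import the parent's __main__; fall back to loading from its path.
            try:
                return importlib.import_module(main_name)
            except ImportError:
                # Name-based lookup failed (e.g. no .py suffix on the script),
                # so load the source directly from the file path instead.
                return imp.load_source(main_name, main_path)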

    @ncoghlan
    Contributor

    Rerunning main in the subprocess has always been a slightly dubious feature
    of multiprocessing, but IIRC it's a necessary hack to ensure pickle
    compatibility for things defined in __main__. Using "runpy.run_path" would
    be a better solution, but we'll need to add the "target" parameter that
    missed the beta1 deadline.
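    For reference, runpy.run_path executes a file without requiring an importable module name; a minimal illustration (the path is hypothetical):

        import runpy

        # Executes the file's source and returns the resulting globals dict;
        # no .py suffix or sys.path entry is required.
        main_globals = runpy.run_path("/usr/local/bin/nosetests",
                                      run_name="__mp_main__")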

    @python-dev
    Mannequin

    python-dev mannequin commented Dec 13, 2013

    New changeset cea42629ddf5 by Brett Cannon in branch 'default':
    Issue bpo-19946: Raise ImportError when the main module cannot be found
    http://hg.python.org/cpython/rev/cea42629ddf5

    @brettcannon
    Member

    Created http://bugs.python.org/issue19978 to track using runpy.run_path in 3.5.

    @ogrisel
    Mannequin Author

    ogrisel mannequin commented Dec 13, 2013

    Why has this issue been closed? Won't the spawn and forkserver modes work in Python 3.4 for Python programs started by a Python script (which is probably the majority of programs written in Python under unix)?

    Is there any reason not to use the imp.load_source code I put in my patch as a temporary workaround if the cleaner runpy.run_path solution is too tricky to implement for the Python 3.4 release time frame?

    @brettcannon
    Member

    Multiple questions from Oliver to answer.

    Why has this issue been closed?

    Because the issue as decided for this bug -- raising AttributeError instead of a proper ImportError -- was fixed.

    Won't the spawn and forkserver modes work in Python 3.4 for Python programs started by a Python script (which is probably the majority of programs written in Python under unix)?

    The semantics are not going to change in Python 3.4; they will just stay as they were in Python 3.3.

    Is there any reason not to use the imp.load_source code I put in my patch as a temporary workaround if the cleaner runpy.run_path solution is too tricky to implement for the Python 3.4 release time frame?

    There are two reasons. One is that the imp module is deprecated in Python 3.4 (http://docs.python.org/3.4/library/imp.html#module-imp). Two is that temporarily inserting for a single release a solution that will simply be ripped out in the following release is just asking for trouble. Someone will use the temporary fix in some way in production code and then be shocked when the better solution doesn't work in exactly the same way. It's best to simply wait until 3.5 has the proper solution available.

    I know it's frustrating to either name your scripts with a .py until Python 3.5 comes out or just wait until 3.5, but we can't risk a hacky solution for a single release as users will be living with the hack for several years.

    @ogrisel
    Mannequin Author

    ogrisel mannequin commented Dec 13, 2013

    The semantics are not going to change in Python 3.4; they will just stay as they were in Python 3.3.

    Well, the semantics do change: in Python 3.3 the spawn and forkserver modes did not exist as explicit options. The "spawn" behavior existed, but only implicitly and only under Windows.

    So Python 3.4 is introducing a new feature for POSIX systems that will only work in the rare cases where the Python program is launched by a ".py" ending script.

    Would running the imp.load_source trick only if sys.platform != "win32" be a viable way to preserve the Python 3.3 semantics under Windows while not introducing a partially broken feature in Python 3.4?

    @brettcannon
    Member

    I'm sorry, Oliver, you are simply going to have to wait for Python 3.5 at this point to get the new semantics you want.

    @pitrou
    Member

    pitrou commented Dec 13, 2013

    I'm sorry, Oliver, you are simply going to have to wait for Python 3.5
    at this point to get the new semantics you want.

    Side note: it's Olivier, not Oliver.

    @ogrisel
    Mannequin Author

    ogrisel mannequin commented Dec 13, 2013

    I can wait (or monkey-patch the stuff I need as a temporary workaround in my code). My worry is that Python 3.4 will introduce a new feature that is very crash-prone.

    Take this simple program that uses the newly introduced get_context function (the same problem happens with set_start_method):

    filename: mytool
    """
    #!/usr/bin/env python
    from multiprocessing import freeze_support, get_context

    def compute_stuff(i):
        # in real life you could use a lib that uses threads
        # like cuda and that would crash with the default 'fork'
        # mode under POSIX
        return i ** 2
    
    
    if __name__ == "__main__":
        freeze_support()
        ctx = get_context('spawn')
        ctx.Pool(4).map(compute_stuff, range(8))

    """

    If you chmod +x this file and run it with ./mytool, the user gets a process that runs forever, repeatedly printing to stderr:

    """
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/opt/Python-HEAD/lib/python3.4/multiprocessing/spawn.py", line 96, in spawn_main
        exitcode = _main(fd)
      File "/opt/Python-HEAD/lib/python3.4/multiprocessing/spawn.py", line 105, in _main
        prepare(preparation_data)
      File "/opt/Python-HEAD/lib/python3.4/multiprocessing/spawn.py", line 210, in prepare
        import_main_path(data['main_path'])
      File "/opt/Python-HEAD/lib/python3.4/multiprocessing/spawn.py", line 256, in import_main_path
        raise ImportError(name=main_name)
    ImportError
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/opt/Python-HEAD/lib/python3.4/multiprocessing/spawn.py", line 96, in spawn_main
        exitcode = _main(fd)
      File "/opt/Python-HEAD/lib/python3.4/multiprocessing/spawn.py", line 105, in _main
        prepare(preparation_data)
      File "/opt/Python-HEAD/lib/python3.4/multiprocessing/spawn.py", line 210, in prepare
        import_main_path(data['main_path'])
      File "/opt/Python-HEAD/lib/python3.4/multiprocessing/spawn.py", line 256, in import_main_path
        raise ImportError(name=main_name)
    ImportError
    ...
    """

    until the user kills the process. Is there really nothing we can do to avoid releasing Python 3.4 with this bug?

    @pitrou
    Member

    pitrou commented Dec 13, 2013

    Let's reopen, shall we? If not for 3.4, at least for 3.5.

    It's likely that multiprocessing needs a __main__ simply because it needs a way to replicate the parent process' state in the child (for example, the set of imported modules, the logging configuration, etc.). Perhaps Richard can elaborate.

    But, AFAIU, the __main__ could be imported as a script rather than a "proper" module from sys.path.
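    To illustrate why the child needs a usable __main__ at all (an editorial sketch, not from the thread): pickle serializes functions and classes by reference -- module name plus qualified name -- so anything defined in the parent's __main__ must be importable again in the child:

        import pickle

        def work(x):  # lives in __main__ when this file is run directly
            return x * x

        blob = pickle.dumps(work)      # records ('__main__', 'work'), not code
        restored = pickle.loads(blob)  # fine here; a child process would need
        assert restored is work        # __main__.work to be importable too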

    @ncoghlan ncoghlan changed the title from "Have multiprocessing raise ImportError when spawning a process that can't find the 'main' module" to "Handle a non-importable __main__ in multiprocessing" Dec 14, 2013
    @ncoghlan ncoghlan self-assigned this Dec 14, 2013
    @ncoghlan
    Contributor

    bpo-19700 isn't quite finished, but I believe it is finished enough to let me get this working properly.

    @sbt
    Mannequin

    sbt mannequin commented Dec 15, 2013

    So there are really two situations:

    1. The __main__ module *should not* be imported. This is the case if you use __main__.py in a package or if you use nose to call test_main().

    This should really be detected in get_preparation_data() in the parent process so that import_main_path() does not get called in the child process.

    2. The __main__ module *should* be imported but it does not have a .py extension.
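    A rough sketch of how a parent-side check might distinguish the two situations (names and logic are illustrative, not the eventual patch):

        import os
        import sys

        def main_path_for_child():
            # Decide what, if anything, the child should re-import as __main__.
            main = sys.modules['__main__']
            main_file = getattr(main, '__file__', None)
            if main_file is None:
                return None   # e.g. interactive session: nothing to re-import
            if os.path.basename(main_file) == '__main__.py':
                return None   # package __main__ (case 1): do not re-import
            return main_file  # plain script (case 2): re-import by path,
                              # even without a .py extension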

    @ncoghlan
    Contributor

    Bumping the priority on this, as multiprocessing is currently creating invalid child processes by failing to set __main__.__spec__ appropriately.

    The attached patch is designed to get us started down that path. It's currently broken, but I need feedback from folks that know the multiprocessing code better than I do in order to know where best to start poking and prodding.

    With the patch, invoking regrtest directly still works:

    ./python Lib/test/regrtest.py -v test_multiprocessing_spawn
    

    But relying on module execution fails:
    ./python -m test -v test_multiprocessing_spawn

    I appear to be somehow getting child processes where __main__.__file__ is set, but __main__.__spec__ is not.

    @ncoghlan
    Contributor

    With the restructuring in my patch, it would be easy enough to move the "early return" cases from the _fixup_main_* functions to instead be "don't set the variable" cases in get_preparation_data.

    @sbt
    Mannequin

    sbt mannequin commented Dec 15, 2013

    I appear to be somehow getting child processes where __main__.__file__ is
    set, but __main__.__spec__ is not.

    That seems to be true for the __main__ module even when multiprocessing is not involved. Running a file /tmp/foo.py containing

        import sys
        print(sys.modules['__main__'].__spec__, sys.modules['__main__'].__file__)

    I get output

    None /tmp/foo.py
    

    I am confused by why you would ever want to load by module name rather than file name. What problem would that fix? If the idea is just to support importing a main module without a .py extension, isn't __file__ good enough?

    @ncoghlan
    Contributor

    Scripts (whether in source form or precompiled) work via direct execution,
    but all the other execution paths (directories, zipfiles, -m) rely on the
    import system (via runpy). multiprocessing has been broken for years in
    that regard, hence my old comment about the way it derived the module name
    from the file name being problematic (although it only outright *broke*
    with submodule execution, and even then you would likely get away with it
    if you didn't use relative imports).

    Historically, it was a hard problem to solve, since even the parent process
    forgot the original name of __main__, but PEP-451 has now fixed that
    limitation.

    I also have an idea as to what may be wrong with my patch - I'm going to
    try adjusting the first early return from _fixup_main_from_name to ensure
    that __main__.__spec__ is set correctly.
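    A small illustration of the difference (hypothetical package layout): under "-m" the module's real name survives on __main__.__spec__, which PEP-451 now lets the parent pass to the child:

        # Run as:  python -m pkg.mod
        import sys

        main = sys.modules['__main__']
        print(main.__spec__.name)  # 'pkg.mod' -- package context preserved
        print(main.__file__)       # '.../pkg/mod.py' -- deriving a name from
                                   # this alone loses 'pkg' and breaks
                                   # explicit relative imports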

    @ncoghlan
    Contributor

    I created a test suite to ensure that all the various cases were handled correctly by the eventual patch (it doesn't test some of the namespace package related edge cases, but they devolve to normal module execution in terms of the final state of __main__, and that's covered by these tests).

    @ncoghlan
    Contributor

    Current work in progress patch. The existing multiprocessing tests all pass, but the new main handling tests fail.

    • The fork start method passes all the tests.
    • The forkserver and spawn start methods fail the directory, zipfile and package tests.

    @ncoghlan
    Contributor

    Updated test that handles timeouts better.

    I also realised the current test failures are due to an error in the test design - the "failing" cases are ones where we deliberately *don't* rerun __main__ because the entire __main__.py file is assumed to be inside an implicit __main__-only guard.

    So the code changes should be complete, I just need to figure out a way to tweak the tests appropriately.

    @ogrisel
    Mannequin Author

    ogrisel mannequin commented Dec 16, 2013

    I applied issue19946_pep_451_multiprocessing_v2.diff and I confirm that it fixes the problem that I reported initially.

    @ncoghlan
    Contributor

    OK, fixed test case attached. Turns out the ipython workaround test was completely wrong and never even loaded multiprocessing, and hence always passed, even with the workaround disabled. So I fixed that test case, and used the same approach for the zipfile, directory and package tests. I also fixed the submodule test to check that explicit relative imports work properly from __mp_main__ in the child processes.

    With this updated test case, the v2 patch handles everything correctly, but there are 4 failures on Linux without the patch. Specifically:

    • test_basic_script_no_suffix fails for the spawn and forkserver start methods (the child processes fail to find a spec for __mp_main__)
    • test_module_in_package fails for the spawn and forkserver start methods (the explicit relative import from __mp_main__ fails because the import system isn't initialised correctly in the child processes)

    The former case is the one Olivier reported in this issue. It's a new case for 3.4, since the spawn start method was previously only available on Windows, where scripts always have an extension.

    The latter edge case is the one my "XXX (ncoghlan): The following code makes several bogus assumptions regarding the relationship between __file__ and a module's real name." comment was about.

    I believe we could actually adjust earlier versions to handle things as well as this new PEP-451 based approach (by using a combination of __package__ and __file__ rather than __spec__), but that's much harder for me to work on in those versions where the "spawn" start method is only available on Windows :)

    @python-dev
    Mannequin

    python-dev mannequin commented Dec 17, 2013

    New changeset b6d6f3b4b100 by Nick Coghlan in branch 'default':
    Close bpo-19946: use runpy as needed in multiprocessing
    http://hg.python.org/cpython/rev/b6d6f3b4b100

    @python-dev python-dev mannequin closed this as completed Dec 17, 2013
    @sbt
    Mannequin

    sbt mannequin commented Dec 17, 2013

    Thanks for your hard work Nick!

    @tiran
    Member

    tiran commented Dec 18, 2013

    The commit broke a couple of buildbots, including all the Windows bots and OpenIndiana.

    @tiran tiran reopened this Dec 18, 2013
    @zware
    Member

    zware commented Dec 19, 2013

    The problem on Windows at least is that the skips for the 'fork' and 'forkserver' start methods aren't firing due to setUpClass being improperly set up in MultiProcessingCmdLineMixin: it's not decorated as a classmethod and the 'u' is lower-case instead of upper. Just fixing that makes for some unusual output ("skipped '"fork" start method not available'" with no indication of which test was skipped) and a variable number of tests depending on available start methods, so a better fix is to just do the check in setUp.

    Unrelated to the failure, but we're also in the process of moving away from using test_main(), preferring unittest.main().

    The attached patch addresses both, passes on Windows and Linux, and I suspect should help on OpenIndiana as well judging by the tracebacks it's giving.
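    A sketch of the two hooks involved (illustrative, not the attached patch): unittest only invokes the class-level hook when it is spelled setUpClass and decorated with @classmethod, so the misspelled hook was silently ignored; checking in setUp instead attributes each skip to a specific test:

        import multiprocessing
        import unittest

        AVAILABLE_START_METHODS = set(multiprocessing.get_all_start_methods())

        class CmdLineTestSketch(unittest.TestCase):
            start_method = 'forkserver'  # set per concrete class in the real suite

            # def setupClass(cls): ...   # wrong name and no @classmethod:
            #                            # unittest never calls this hook

            def setUp(self):
                # Reports the skip against each individual test name.
                if self.start_method not in AVAILABLE_START_METHODS:
                    self.skipTest('%r start method not available'
                                  % self.start_method)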

    @python-dev
    Mannequin

    python-dev mannequin commented Dec 19, 2013

    New changeset 460961e80e31 by Nick Coghlan in branch 'default':
    Issue bpo-19946: appropriately skip new multiprocessing tests
    http://hg.python.org/cpython/rev/460961e80e31

    @tiran
    Member

    tiran commented Dec 19, 2013

    The OpenIndiana tests are still failing. OpenIndiana doesn't support forkserver because it doesn't implement the send handle feature. The patch skips the forkserver tests if HAVE_SEND_HANDLE is false.

    @ncoghlan
    Contributor

    I think that needs to be fixed on the multiprocessing side rather than just
    in the tests - we shouldn't create a concrete context for a start method
    that isn't going to work on that platform. Finding that kind of discrepancy
    was part of my rationale for basing the skips on the available contexts
    (although my main motivation was simplicity).

    There may also be docs implications in describing which methods are
    supported on different platforms (although I haven't looked at how that is
    currently documented).

    @sbt
    Mannequin

    sbt mannequin commented Dec 20, 2013

    On 19/12/2013 10:00 pm, Nick Coghlan wrote:

    I think that needs to be fixed on the multiprocessing side rather than just
    in the tests - we shouldn't create a concrete context for a start method
    that isn't going to work on that platform. Finding that kind of discrepancy
    was part of my rationale for basing the skips on the available contexts
    (although my main motivation was simplicity).

    There may also be docs implications in describing which methods are
    supported on different platforms (although I haven't looked at how that is
    currently documented).

    If by "concrete context" you mean _concrete_contexts['forkserver'], then
    that is supposed to be private. If you write

        ctx = multiprocessing.get_context('forkserver')

    then this will raise ValueError if the forkserver method is not
    available. You can also use

    'forkserver' in multiprocessing.get_all_start_methods()
    

    to check if it is available.

    @ncoghlan
    Contributor

    Ah, I should have looked more closely at the docs to see if there was a public API for that before poking around in the package internals.

    In that case, I suggest we change this bit in the test:

        # We look inside the context module to find out which
        # start methods we can check
        from multiprocessing.context import _concrete_contexts

    to use the appropriate public API:

        # Need to know which start methods we should test
        import multiprocessing
        AVAILABLE_START_METHODS = set(multiprocessing.get_all_start_methods())

    And then adjust the skip check to look in AVAILABLE_START_METHODS rather than _concrete_contexts.

    I'll make that change tonight if nobody beats me to it.

    @python-dev
    Mannequin

    python-dev mannequin commented Dec 20, 2013

    New changeset 00d09afb57ca by Nick Coghlan in branch 'default':
    Issue bpo-19946: use public API for multiprocessing start methods
    http://hg.python.org/cpython/rev/00d09afb57ca

    @ncoghlan
    Contributor

    Pending a clean bill of health from the stable buildbots :)

    @ncoghlan
    Contributor

    Now passing on all the stable buildbots (the two red Windows bots are for other issues, such as bpo-15599 for the threaded import test failure)

    @ezio-melotti ezio-melotti transferred this issue from another repository Apr 10, 2022