
Classification
Title: multiprocessing.Pool.map() crashes Windows computer
Type: enhancement    Stage:
Components: Windows    Versions: Python 2.6

Process
Status: closed    Resolution: wont fix
Dependencies:    Superseder:
Assigned To: jnoller    Nosy List: ac.james, jnoller, valhallasw
Priority: normal    Keywords:

Created on 2009-05-30 00:04 by ac.james, last changed 2022-04-11 14:56 by admin. This issue is now closed.

Files
File name      Uploaded                      Description
test_a10.py    ac.james, 2009-05-30 00:04    test case
Messages (5)
msg88557 - (view) Author: Alex James (ac.james) Date: 2009-05-30 00:04
When calling multiprocessing.Pool().map() to distribute computational
load, I have recently started getting system crashes.

The attached minimal script exhibits the issue.
On Windows Vista Home Premium SP1, running Python 2.6.2 on a dual-core
laptop, the script stops executing at
threading.Condition(threading.Lock()).wait(), called from
multiprocessing.ApplyResult().wait(), called from
multiprocessing.ApplyResult().get(), called from
multiprocessing.Pool().map(); it then recompiles the original script and
starts it from the beginning twice simultaneously. The printed output
becomes interleaved, both new instances of the script hit the same
problem and spawn more instances, and all of the old processes remain
active in memory, so system resources end up fully consumed.

This behavior started recently, immediately after an attempt to install
a Python .egg package. I have uninstalled Python and all extensions,
restarted Windows, deleted all orphan files and registry keys I could
find, restarted Windows again, and then re-installed a fresh download of
2.6.2, but the problem remains.

The error output retrievable via keyboard interrupt (which only works
when the script is started from the command line) contains several
copies of """
Traceback:
file "<string>", line 1, in <module>
'import site' failed: use -v for traceback
script opening print statement
script opening print statement
'import site' failed: use -v for traceback
File "C:\Python26\lib\multiprocessing\forking.py", line 341, in main
prepare(preparation_data)
File "C:\Python26\lib\multiprocessing\forking.py", line 456, in prepare
'__parents_main__', file, path_name, etc
File "H:\builder26\test_a6.py", line 1, in <module>
Traceback:
from custom_module import *
"""
along with many fragments thereof interleaved byte-wise with one another.

Identical code runs just fine on Unix, and this very code worked on my
machine last week, so reproducibility of the error elsewhere is
doubtful. Repeatability on this machine, however, is perfect.

Any workarounds and/or insight into the root cause would be appreciated
for this rare but extreme error.
msg88560 - (view) Author: Jesse Noller (jnoller) * (Python committer) Date: 2009-05-30 01:44
Can you wrap the execution of the main code in an if __name__ ==
"__main__": block, as shown in the documentation? Failure to do so can
cause a fork bomb on Windows.
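
A minimal sketch of that guarded pattern, assuming a module-level worker
function (the function and script contents here are illustrative, not
taken from the attached test case):

import multiprocessing

def square(x):
    # Worker function; it must be importable by the child processes.
    return x * x

if __name__ == '__main__':
    # On Windows the children re-import this module, so anything that
    # spawns new processes has to stay inside this guard.
    pool = multiprocessing.Pool()
    print(pool.map(square, range(10)))
    pool.close()
    pool.join()
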
msg88632 - (view) Author: Alex James (ac.james) Date: 2009-06-01 02:43
OK Jesse, that did stop the fork-bomb problem.
Unfortunately, the real code lives in a distributable scientific-research
module and is called by another function in that module, with both
imported into the script that is actually run, so it isn't being called
from __main__ in the first place. We will therefore have to rely on our
colleagues running Windows to wrap their client scripts in the
if __name__ == '__main__' guard, which means I expect to receive this
same bug report from them.
I'll go through the docs again, but I didn't find any built-in way to
get the recursion level of an operation (other than level 0 being
__main__).

It looks like this just turned into a feature request.
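
One workaround sketch for the situation described above, using
hypothetical names (research_module, analyse, client_script.py): keep
the Pool creation inside an ordinary function of the distributable
module, so the only thing a Windows user has to remember is the guard in
their own entry script.

# research_module.py -- hypothetical distributable module
import multiprocessing

def _worker(sample):
    return sample * 2          # stand-in for the real computation

def analyse(samples):
    # The Pool is created only when analyse() is called, never as a
    # side effect of importing the module.
    pool = multiprocessing.Pool()
    try:
        return pool.map(_worker, samples)
    finally:
        pool.close()
        pool.join()

# client_script.py -- what a colleague on Windows would run
from research_module import analyse

if __name__ == '__main__':
    print(analyse(range(10)))
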
msg88633 - (view) Author: Jesse Noller (jnoller) * (Python committer) Date: 2009-06-01 03:05
Hey Alex; this isn't a bug or a feature request. On win32, the way
multiprocessing fakes a fork() is by creating a special subprocess that
essentially imports and executes the function/process to be run;
communication is handled through pickling and pipes.

For more information, see:
http://svn.python.org/view/python/trunk/Lib/multiprocessing/forking.py?view=markup

Search for "# Windows" in that file and you'll see the basic process we
use to "fork" on Windows. Without a completely different implementation,
this cannot be changed, and it probably will not change for some time,
so I'm going to close this issue.
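
A small experiment that makes the re-import visible; the printed process
and module names are expectations under that "fake fork" behaviour and
may differ between Python versions:

import multiprocessing

# This line runs once in the parent and again in every worker, because
# the Windows "fake fork" re-imports this module in each subprocess.
print('imported as %r in process %r'
      % (__name__, multiprocessing.current_process().name))

def work(x):
    return x + 1

if __name__ == '__main__':
    # Only the parent reaches this block; the workers stop after the
    # import, which is exactly why the guard prevents the fork bomb.
    pool = multiprocessing.Pool(2)
    print(pool.map(work, range(4)))
    pool.close()
    pool.join()
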
msg155338 - (view) Author: Merlijn van Deen (valhallasw) * Date: 2012-03-10 18:26
Two questions:
(1) can this at least be added as a big fat warning in the documentation?
(2) would it be a reasonable option to let either
  (a) the creation of a Pool
  (b) executing something using the Pool
cause an exception when it happens during the import of the function to run?

I think it makes sense to prevent any accidental fork bombs, especially when they are /this/ easy to create. Untested (for obvious reasons...), but this should be enough:

import multiprocessing

def x(val):
    return val

# The module-level Pool().map() call is exactly what gets re-executed on
# every import of this file by a Windows worker process.
multiprocessing.Pool().map(x, range(10))
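
As a rough sketch of option (b), a hypothetical helper along these lines
could refuse to build a Pool while a worker is still bootstrapping;
whether the process-name test covers every code path is an assumption,
not something verified against the forking machinery:

import multiprocessing

def _refuse_pool_during_bootstrap():
    # Hypothetical guard: a worker spawned on Windows is no longer the
    # original 'MainProcess', so creating another Pool while the parent
    # module is being re-imported would recurse into a fork bomb.
    if multiprocessing.current_process().name != 'MainProcess':
        raise RuntimeError(
            "Pool created while a worker process was bootstrapping; "
            "move the call under an 'if __name__ == \"__main__\":' guard")

Pool creation would then call such a helper before spawning any workers.
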
History
Date                 User               Action  Args
2022-04-11 14:56:49  admin              set     github: 50397
2012-03-10 18:26:45  valhallasw         set     nosy: + valhallasw
                                                messages: + msg155338
2009-06-01 03:09:25  benjamin.peterson  set     status: closed
2009-06-01 03:05:38  jnoller            set     resolution: wont fix
                                                messages: + msg88633
2009-06-01 02:43:03  ac.james           set     type: crash -> enhancement
                                                messages: + msg88632
2009-05-30 01:44:49  jnoller            set     messages: + msg88560
2009-05-30 00:39:50  benjamin.peterson  set     status: open -> (no value)
                                                assignee: jnoller
                                                nosy: + jnoller
2009-05-30 00:04:46  ac.james           create