
Author lesha
Recipients Giovanni.Bajo, avian, bobbyi, gregory.p.smith, jcea, lesha, neologix, nirai, pitrou, sbt, sdaoden, vinay.sajip, vstinner
Date 2012-06-01.01:51:38
Re "threading locks cannot be used to protect things outside of a single process":

The Python standard library already violates this: the "logging" module uses such a lock to protect the file, socket, or other stream it is writing to.
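The hazard being debated here can be reproduced without the logging module at all. A minimal sketch, assuming a POSIX system (os.fork) and Python 3: a lock held by another thread at fork() time is copied into the child in the locked state, and the thread that would have released it simply does not exist in the child.

```python
import os
import threading
import time

# A threading.Lock held by a worker thread when fork() happens is copied
# into the child still locked; no thread in the child will ever release it.

lock = threading.Lock()

def hold_lock_briefly():
    with lock:
        time.sleep(1.0)  # simulate an in-progress write under the lock

worker = threading.Thread(target=hold_lock_briefly)
worker.start()
time.sleep(0.1)  # make sure the worker owns the lock before we fork

pid = os.fork()
if pid == 0:
    # Child: the copied lock is locked and nobody here can release it.
    # A plain lock.acquire() would hang forever; use a timeout to show it.
    acquired = lock.acquire(timeout=0.2)
    os._exit(1 if acquired else 0)
else:
    _, status = os.waitpid(pid, 0)
    worker.join()
    child_deadlocked = os.WEXITSTATUS(status) == 0
    print("child would have deadlocked:", child_deadlocked)  # True
```

The logging module's handler locks behave exactly like this `lock` when a fork happens mid-write.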

If I had a magic wand that could fix all the places in the world where people do this, I'd accept your argument.

In practice, threading locks are abused in this way all the time.

Most people don't even think about the interaction of fork and threads until they hit a bug of this nature.

Right now, we are discussing a patch that will take broken code, and instead of having it deadlock, make it actually destroy data. 

I think this is a bad idea. That is all I am arguing.

I am glad that my processes deadlocked instead of corrupting files. A deadlock is easier to diagnose.

You are right: subprocess does do a hard exit on exceptions. However, the preexec_fn and os.fork() cases definitely happen in the wild. I've done both of these before.
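For illustration, a sketch of the preexec_fn pattern mentioned above (the `/bin/true` command and the logging call are my assumptions, not from the original report). preexec_fn runs in the forked child before exec, so anything it does under a lock, logging included, is exposed to exactly the hazard discussed here if some other thread held that lock at fork time.

```python
import logging
import subprocess

logging.basicConfig(level=logging.INFO)

def child_setup():
    # Runs in the forked child, before exec.  In a multithreaded parent,
    # this logging call can block forever if another thread held the
    # handler's lock when fork() happened.
    logging.info("configuring child before exec")

# Safe here only because this parent process is single-threaded.
rc = subprocess.call(["/bin/true"], preexec_fn=child_setup)
print("exit status:", rc)  # 0
```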

I'm arguing for a simple thing: let's not increase the price of error. A deadlock sucks, but corrupted data sucks much worse -- it's both harder to debug, and harder to fix.
2012-06-01 01:51:38 lesha link issue6721 messages