This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

Title: subprocess doesn't handle SIGPIPE
Type: enhancement Stage: test needed
Components: Library (Lib) Versions: Python 3.2
Status: closed Resolution: wont fix
Dependencies: Superseder:
Assigned To: gregory.p.smith Nosy List: ajaksu2, astrand, barry, diekhans, gregory.p.smith
Priority: normal Keywords:

Created on 2006-12-14 00:21 by diekhans, last changed 2022-04-11 14:56 by admin. This issue is now closed.

File name Uploaded Description Edit
diekhans, 2006-12-14 00:21 demo of issue
Messages (4)
msg54948 - (view) Author: Mark Diekhans (diekhans) Date: 2006-12-14 00:21
subprocess keeps the other side of the child's pipe open, making it
impossible to use SIGPIPE to terminate writers in a pipeline.

This is probably a matter of documentation or
providing a method to link up processes, as 
the parent end of the pipe must remain open
until it is connected to the next process in
the pipeline.

An option to enable SIGPIPE in the child would be useful.

Simple example attached.
msg54949 - (view) Author: Peter Åstrand (astrand) * (Python committer) Date: 2007-01-07 14:01
One easy solution is to simply close the pipe in the parent after starting both processes, before calling p1.wait():
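(The original snippet was not preserved in this archive; the following is a minimal sketch of the approach described, using a hypothetical `yes | head -n 3` pipeline in place of the reporter's attached demo.)

```python
import subprocess

# Hypothetical stand-in for the reporter's pipeline: yes | head -n 3
p1 = subprocess.Popen(["yes"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["head", "-n", "3"], stdin=p1.stdout,
                      stdout=subprocess.PIPE)

# Close the parent's copy of the read end of p1's stdout pipe: once
# head exits, no descriptor refers to the read side any more, so yes
# receives SIGPIPE and terminates instead of blocking forever.
p1.stdout.close()

out = p2.communicate()[0]
p1.wait()
```

If the parent skipped that `close()`, its own open descriptor would keep the pipe alive after `head` exits, and `yes` would never see SIGPIPE.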


It's not "perfect", though: p1 will execute for a while before receiving SIGPIPE. For a perfect solution, it would be necessary to close the pipe end in the parent after the fork but before the exec in the child. This would require some kind of synchronization. 

Moving to feature request. 
msg84596 - (view) Author: Daniel Diniz (ajaksu2) * (Python triager) Date: 2009-03-30 18:02
Confirmed on trunk and py3k.
msg128029 - (view) Author: Gregory P. Smith (gregory.p.smith) * (Python committer) Date: 2011-02-05 22:04
The need to call p1.stdout.close() has now been documented as part of issue7678.  Python 3.2's subprocess also has restore_signals=True as its default behavior, so SIGPIPE is restored by default.

I do not think it is appropriate to add the synchronization Peter suggested to the subprocess module just to optimize that close call.  The potential delay due to python having to call p1.stdout.close() is non-fatal and should be assumed to exist anyway, as you can't guarantee when an async event like a signal (in this case SIGPIPE) will actually reach the other process.
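The effect of restore_signals can be observed directly (a small sketch assuming a POSIX platform with coreutils; the pipeline and helper name are illustrative). CPython ignores SIGPIPE in the parent at startup; with restore_signals=True the child's disposition is reset to the default before exec, so a broken pipe kills the writer, whereas with restore_signals=False the writer inherits SIG_IGN and instead gets an EPIPE write error:

```python
import signal
import subprocess

def writer_status(restore):
    # Hypothetical demo: 'yes' feeds 'head -n 1', which exits at once,
    # breaking the pipe. How 'yes' ends depends on its SIGPIPE disposition.
    p1 = subprocess.Popen(["yes"], stdout=subprocess.PIPE,
                          restore_signals=restore)
    p2 = subprocess.Popen(["head", "-n", "1"], stdin=p1.stdout,
                          stdout=subprocess.DEVNULL)
    p1.stdout.close()  # the close documented via issue7678
    p2.wait()
    return p1.wait()

# Default (restore_signals=True): SIGPIPE is reset to SIG_DFL in the
# child, so the writer is killed by the signal (negative return code).
assert writer_status(True) == -signal.SIGPIPE

# restore_signals=False: the child inherits CPython's SIG_IGN, write()
# fails with EPIPE, and GNU 'yes' exits with a nonzero error status.
assert writer_status(False) > 0
```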
Date User Action Args
2022-04-11 14:56:21 admin set github: 44337
2011-02-05 22:04:54 gregory.p.smith set status: open -> closed
    nosy: barry, gregory.p.smith, astrand, diekhans, ajaksu2
    messages: + msg128029
    assignee: gregory.p.smith
    resolution: wont fix
2011-01-04 01:08:27 pitrou set assignee: astrand -> (no value)
    nosy: + gregory.p.smith
2010-09-27 15:33:07 barry set nosy: + barry
2010-08-09 03:45:03 terry.reedy set versions: + Python 3.2, - Python 3.1, Python 2.7
2009-03-30 18:02:01 ajaksu2 set versions: + Python 3.1, Python 2.7
    nosy: + ajaksu2
    messages: + msg84596
    stage: test needed
2006-12-14 00:21:37 diekhans create