This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: slow close file descriptors in subprocess, popen2, os.popen*
Type: performance Stage: patch review
Components: Library (Lib) Versions: Python 3.1, Python 3.2, Python 3.3, Python 2.7
process
Status: closed Resolution: duplicate
Dependencies: Superseder:
Assigned To: gregory.p.smith Nosy List: gregory.p.smith, loewis, neologix, pitrou, rosslagerwall, s7v7nislands, vstinner
Priority: high Keywords: patch

Created on 2011-02-22 09:38 by s7v7nislands, last changed 2022-04-11 14:57 by admin. This issue is now closed.

Messages (14)
msg129043 - (view) Author: s7v7nislands (s7v7nislands) Date: 2011-02-22 09:38
When using popen*() with close_fds=True, Python closes all unused fds, but MAXFD is not the real maximum number of open fds. On FreeBSD in particular, subprocess.MAXFD is 655000, so Python tries to close far too many fds and it's too slow: in my test on FreeBSD it takes about 3 seconds.

(Sorry for my poor English.)

patch for 2.7 trunk:
jiangxiaobing@s7v7nislands ~/source/svn/python27 $ svn diff
Index: Lib/subprocess.py
===================================================================
--- Lib/subprocess.py	(revision 88499)
+++ Lib/subprocess.py	(working copy)
@@ -1065,11 +1065,16 @@
 
 
         def _close_fds(self, but):
+            maxfd = MAXFD
+            try:
+                maxfd = os.dup(0) + 1
+            except:
+                pass
             if hasattr(os, 'closerange'):
                 os.closerange(3, but)
-                os.closerange(but + 1, MAXFD)
+                os.closerange(but + 1, maxfd)
             else:
-                for i in xrange(3, MAXFD):
+                for i in xrange(3, maxfd):
                     if i == but:
                         continue
                     try:
Index: Lib/popen2.py
===================================================================
--- Lib/popen2.py	(revision 88499)
+++ Lib/popen2.py	(working copy)
@@ -82,8 +82,13 @@
     def _run_child(self, cmd):
         if isinstance(cmd, basestring):
             cmd = ['/bin/sh', '-c', cmd]
-        os.closerange(3, MAXFD)
+        maxfd = MAXFD
         try:
+            maxfd = os.dup(0) + 1
+        except:
+            pass
+        os.closerange(3, maxfd)
+        try:
             os.execvp(cmd[0], cmd)
         finally:
             os._exit(1)



patch for 3.3 trunk:
diff --git a/Lib/subprocess.py b/Lib/subprocess.py
index c02fb52..98a25b3 100644
--- a/Lib/subprocess.py
+++ b/Lib/subprocess.py
@@ -1112,8 +1112,14 @@ class Popen(object):
                 if fd >= start_fd:
                     os.closerange(start_fd, fd)
                     start_fd = fd + 1
-            if start_fd <= MAXFD:
-                os.closerange(start_fd, MAXFD)
+            maxfd = MAXFD
+            try:
+                maxfd = os.dup(0) + 1
+            except:
+                pass
+
+            if start_fd <= maxfd:
+                os.closerange(start_fd, maxfd)
 
 
         def _execute_child(self, args, executable, preexec_fn, close_fds,
msg129045 - (view) Author: Charles-François Natali (neologix) * (Python committer) Date: 2011-02-22 09:49
dup(2) returns the lowest-numbered available file descriptor: if there is a gap in the fd allocation, this code will only close the fds below the first free one.
Imagine for example the following:
open("/tmp/foo") = 3
open("/tmp/bar") = 4
close(3)

Then dup(0) will return 3, and not the max FD.
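
A minimal sketch of that failure mode (the /dev/null opens are just placeholders):

import os

fd_a = os.open("/dev/null", os.O_RDONLY)   # say this is fd 3
fd_b = os.open("/dev/null", os.O_RDONLY)   # and this is fd 4
os.close(fd_a)                             # leaves a hole at 3

dup_fd = os.dup(0)
print(dup_fd)                              # prints 3 (the hole), not the highest open fd
# so maxfd = os.dup(0) + 1 would be 4, and fd_b (fd 4) would never be closed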
msg129118 - (view) Author: Charles-François Natali (neologix) * (Python committer) Date: 2011-02-22 20:16
To elaborate on this: to my knowledge, there's no portable and reliable way to close all open file descriptors.
Even with the current code, it's still possible that some fds aren't properly closed, since sysconf(_SC_OPEN_MAX) often returns the RLIMIT_NOFILE soft limit, which might have been lowered by the application after it had already opened a whole bunch of files.
Anyway, if the performance hit is too high, I think you only have two options:
- set your fds as CLOEXEC (note that it's already the case for the fds used by popen and friends), and don't pass the close_fds argument
- if you don't need many fds, you could explicitly set RLIMIT_NOFILE to a low value so that close_fds doesn't need to try too many fds, e.g.:
import resource
resource.setrlimit(resource.RLIMIT_NOFILE, (1024, 1024))
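
For the first option, a minimal sketch of marking a descriptor close-on-exec (the file name is only an example):

import fcntl
import os

fd = os.open("/tmp/app.log", os.O_WRONLY | os.O_CREAT, 0o644)
flags = fcntl.fcntl(fd, fcntl.F_GETFD)
fcntl.fcntl(fd, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)
# fd is now closed automatically across exec(), so the child never inherits it
# and close_fds=True isn't needed for this descriptor.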
msg129154 - (view) Author: s7v7nislands (s7v7nislands) Date: 2011-02-23 03:20
Thanks, neologix.

I think this hint should be put in the Python docs.
msg129891 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2011-03-02 13:22
As written by Charles-François Natali (msg129890), we can use closefrom(). Pseudo-code: find the biggest fd that must be kept open, call closefrom(highest+1), and then use close() (os.closerange) for the fds in 0..highest that have to be closed.

closefrom() is available on OpenBSD, Solaris and NetBSD. I don't know about FreeBSD:
http://unix.derkeiler.com/Mailing-Lists/FreeBSD/hackers/2007-07/msg00035.html

On Linux, we can use os.listdir("/proc/self/fd") to get the list of open files.
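
A rough sketch of that idea (keep_fds is just an example set, and this runs in the parent for simplicity):

import os

keep_fds = {0, 1, 2}
for name in os.listdir("/proc/self/fd"):
    fd = int(name)
    if fd not in keep_fds:
        try:
            os.close(fd)
        except OSError:
            pass  # e.g. the descriptor listdir() itself was using has already gone away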
msg129892 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2011-03-02 13:25
The patch is wrong, as explained by Charles-François.
If someone wants subprocess to use closefrom(), a patch is required :)

But I think at least a mention in the documentation is warranted.
msg129894 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2011-03-02 13:49
Benchmark extracted from #11314, Python 3.2 on Linux: subprocess("/bin/false", close_fds=True) is 22% slower than subprocess("/bin/false", close_fds=False).
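
Something along these lines shows the gap (a rough sketch, not the exact benchmark from #11314):

import subprocess
import time

for close_fds in (False, True):
    start = time.time()
    for _ in range(100):
        subprocess.call(["/bin/false"], close_fds=close_fds)
    print("close_fds=%s: %.2fs" % (close_fds, time.time() - start))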
msg129902 - (view) Author: Charles-François Natali (neologix) * (Python committer) Date: 2011-03-02 15:13
Attached is a patch adding os.closefrom.
If closefrom(2) is available, it's used.
Otherwise, two options:
- if sysconf() and _SC_OPEN_MAX are defined, we close each file descriptor up to _SC_OPEN_MAX
- if not, we choose a default value (256) and close every fd up to this value
subprocess has been converted to use it, and a test has been added in test_os.
Unfortunately, I only have Linux boxes, so I can't really test it.

Remarks:
- is it OK to provide posix_closefrom even though the underlying platform doesn't support it?
- no error code is returned (since this wouldn't make much sense when closing every fd manually), even though closefrom(2) does return one
- for the test, I only close fds > 7 to avoid closing stdin/stdout/stderr, but you might have a better idea
- this won't fix the problem for Linux, which doesn't have closefrom(2). Is it worth using the /proc/self/fd interface?
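
In Python terms, the fallback order described above is roughly (256 being the arbitrary default mentioned):

import os

def _guess_maxfd(default=256):
    try:
        return os.sysconf("SC_OPEN_MAX")
    except (AttributeError, ValueError):
        return default

def _close_from(lowfd):
    # without closefrom(2), just close everything from lowfd up to the guess
    os.closerange(lowfd, _guess_maxfd())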
msg129904 - (view) Author: Charles-François Natali (neologix) * (Python committer) Date: 2011-03-02 17:01
Attached is a new version falling back to /proc/self/fd when closefrom(2) is not available (on Unix), working on Linux.
It's indeed much faster than the current approach.
Note that it's only used if _posixsubprocess is not available, because otherwise the fds are closed from _posixsubprocess.c:child_exec.
To make it available to _posixsubprocess, I was thinking of putting the closefrom code in a helper function, which would then be called from both posix_closefrom and _posixsubprocess.
Is this the proper way to do it?
If so, which file would be a good place for it?
msg129912 - (view) Author: Gregory P. Smith (gregory.p.smith) * (Python committer) Date: 2011-03-02 19:11
POSIX in general provides no OS API call that offers a *safe* way to get a list of all open file descriptors and that can be called after the fork() and before the exec(): it must be async-signal-safe. The closefrom() call available in Solaris and recent BSDs looks useful; is it async-signal-safe? I still want to find a way to do this nicely on Linux (even if it means me going and implementing a closefrom syscall to be added to 2.6.39).

Your posix_closefrom() implementation as written today is not safe to call between fork() and exec() due to the opendir/readdir implementation.  It can and will hang processes at unexpected times.


Another thought, which I've seen done in other subprocess implementations as a compromise:

Offer a way to use the hacky *not guaranteed to close everything if the process is multithreaded* version that has the parent get the list of fds before the fork() and the child close those before exec(). That leaves the race condition of new fds being opened between the creation of that list and the fork(), but it would be fast. If we can detect whether any other threads exist in the program (they may have been created by C extension modules or by a C program that is embedding Python within it), we could conditionally use that approach vs. the close-everything-possible approach, so that people using this with generic programs that don't involve threads are not so heavily impacted.
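
A rough sketch of that compromise (Linux-only fd listing; the names are illustrative):

import os

def _open_fds():
    return sorted(int(n) for n in os.listdir("/proc/self/fd"))

def spawn(argv):
    fds_to_close = [fd for fd in _open_fds() if fd > 2]  # snapshot taken before fork()
    pid = os.fork()
    if pid == 0:
        for fd in fds_to_close:        # the child closes the snapshot, not "everything"
            try:
                os.close(fd)
            except OSError:
                pass
        try:
            os.execvp(argv[0], argv)
        finally:
            os._exit(1)
    return pid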
msg129913 - (view) Author: Gregory P. Smith (gregory.p.smith) * (Python committer) Date: 2011-03-02 19:14
According to http://www.unix.com/man-page/All/3c/closefrom/ closefrom() is not async-signal-safe. :(
msg129916 - (view) Author: Charles-François Natali (neologix) * (Python committer) Date: 2011-03-02 19:54
> Your posix_closefrom() implementation as written today is not safe to call between fork() and exec() due to the opendir/readdir implementation.  It can and will hang processes at unexpected times.

Yeah, I removed the patch when I realized that.

> According to http://www.unix.com/man-page/All/3c/closefrom/ closefrom() is not async-signal-safe. :(

Strange. I was sure closefrom was implemented with fcntl.

> I still want to find a way to do this nicely on Linux (even if it means me going and implementing a closefrom syscall to be added to 2.6.39).

Well, arguably, CLOEXEC is designed to cope with this kind of situation.
closefrom is more like a hack (and mostly useless if it's really not async-safe).
msg132296 - (view) Author: Ross Lagerwall (rosslagerwall) (Python committer) Date: 2011-03-27 09:11
Closing this as a duplicate of #8052
msg132668 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2011-03-31 13:32
test_subprocess.test_leaking_fds_on_error takes more than 5 minutes on the x86 FreeBSD 7.2 build slave! This test creates 1,024 subprocesses, and subprocess has to close 655,000 file descriptors just to create one child process (whereas there are only something like 10 open files...).
History
Date User Action Args
2022-04-11 14:57:13  admin  set  github: 55493
2011-03-31 13:32:28  vstinner  set  messages: + msg132668
2011-03-27 09:11:31  rosslagerwall  set  status: open -> closed; nosy: + rosslagerwall; messages: + msg132296; resolution: duplicate
2011-03-02 19:54:51  neologix  set  nosy: loewis, gregory.p.smith, pitrou, vstinner, s7v7nislands, neologix; messages: + msg129916
2011-03-02 19:14:34  gregory.p.smith  set  nosy: loewis, gregory.p.smith, pitrou, vstinner, s7v7nislands, neologix; messages: + msg129913
2011-03-02 19:11:32  gregory.p.smith  set  assignee: gregory.p.smith; messages: + msg129912; nosy: loewis, gregory.p.smith, pitrou, vstinner, s7v7nislands, neologix
2011-03-02 18:20:31  neologix  set  files: - py3k_closefrom.diff; nosy: loewis, gregory.p.smith, pitrou, vstinner, s7v7nislands, neologix
2011-03-02 17:03:11  s7v7nislands  set  files: - python27.patch; nosy: loewis, gregory.p.smith, pitrou, vstinner, s7v7nislands, neologix
2011-03-02 17:03:05  s7v7nislands  set  files: - py3k.patch; nosy: loewis, gregory.p.smith, pitrou, vstinner, s7v7nislands, neologix
2011-03-02 17:01:07  neologix  set  nosy: loewis, gregory.p.smith, pitrou, vstinner, s7v7nislands, neologix; messages: + msg129904
2011-03-02 16:52:10  neologix  set  files: + py3k_closefrom.diff; nosy: loewis, gregory.p.smith, pitrou, vstinner, s7v7nislands, neologix
2011-03-02 16:51:34  neologix  set  files: - py3k_closefrom.diff; nosy: loewis, gregory.p.smith, pitrou, vstinner, s7v7nislands, neologix
2011-03-02 15:13:47  neologix  set  files: + py3k_closefrom.diff; nosy: loewis, gregory.p.smith, pitrou, vstinner, s7v7nislands, neologix; messages: + msg129902
2011-03-02 13:49:40  vstinner  set  nosy: loewis, gregory.p.smith, pitrou, vstinner, s7v7nislands, neologix; messages: + msg129894; stage: patch review
2011-03-02 13:25:53  pitrou  set  priority: normal -> high; nosy: + pitrou, gregory.p.smith; messages: + msg129892; stage: patch review -> (no value)
2011-03-02 13:22:41  vstinner  set  nosy: + vstinner; messages: + msg129891
2011-02-25 19:45:40  terry.reedy  set  nosy: + loewis; title: slow close file descriptors in subprocess, popen2, os.pepen* -> slow close file descriptors in subprocess, popen2, os.popen*; stage: patch review; versions: - Python 2.6, Python 2.5
2011-02-23 03:20:45  s7v7nislands  set  messages: + msg129154
2011-02-22 20:16:49  neologix  set  messages: + msg129118
2011-02-22 09:49:12  neologix  set  nosy: + neologix; messages: + msg129045
2011-02-22 09:39:36  s7v7nislands  set  files: + python27.patch
2011-02-22 09:38:09  s7v7nislands  create