
Classification
Title: Popen does not close fds when fds have been inherited from a process with a higher resource limit
Type: behavior Stage: commit review
Components: Versions: Python 3.4, Python 3.5
Process
Status: closed Resolution: fixed
Dependencies: Superseder:
Assigned To: gregory.p.smith Nosy List: akira, gregory.p.smith, larry, python-dev, sstewartgallus
Priority: normal Keywords: patch

Created on 2014-05-31 04:24 by sstewartgallus, last changed 2022-04-11 14:58 by admin. This issue is now closed.

Files
File name                 Uploaded
python.patch              sstewartgallus, 2014-05-31 05:06
issue21618-34-gps01.diff  gregory.p.smith, 2014-06-01 07:18
Messages (13)
msg219440 - (view) Author: Steven Stewart-Gallus (sstewartgallus) * Date: 2014-05-31 04:24
The sysconf(_SC_OPEN_MAX) approach to closing fds is flawed. It is hacky and slow (see http://bugs.python.org/issue1663329), and it is also incorrect: fds can be inherited from previous processes that had higher resource limits, so open fds may sit above the current limit. That matters because it is a possible security problem. I would recommend using the closefrom system call on the BSDs, or the /dev/fd directory on the BSDs and /proc/self/fd on Linux (take care not to close fds while reading entries from the fd directory: you would be modifying the directory while iterating over it, which gives weird results). A C program that illustrates the problem of inheriting fds past lowered resource limits is shown below.

#include <errno.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void) {
    /* Lower both the soft and hard fd limits to zero. */
    struct rlimit const limit = {
        .rlim_cur = 0,
        .rlim_max = 0
    };
    if (-1 == setrlimit(RLIMIT_NOFILE, &limit)) {
        fprintf(stderr, "error: %s\n", strerror(errno));
        exit(EXIT_FAILURE);
    }

    /* stdout (fd 1) was inherited from before the limit was lowered,
       so it still works even though fd 1 now exceeds RLIMIT_NOFILE. */
    puts("Printing to standard output even though the resource limit is lowered past standard output's number!");

    return EXIT_SUCCESS;
}
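
For contrast, here is a minimal sketch of the directory-based approach recommended above, assuming Linux's /proc/self/fd (the BSDs expose /dev/fd instead). It collects all entries before closing anything, which avoids the concurrent read/modify problem mentioned above. close_fds_from is a hypothetical helper, not CPython code, and a real post-fork implementation could not use opendir/readdir because they are not async-signal-safe:

#include <dirent.h>
#include <stdlib.h>
#include <unistd.h>

static void close_fds_from(int lowfd) {
    DIR *dir = opendir("/proc/self/fd");
    if (dir == NULL)
        return;  /* a real implementation would fall back to another strategy */

    /* Pass 1: collect fd numbers without closing anything. */
    int fds[1024];
    size_t count = 0;
    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL && count < sizeof fds / sizeof *fds) {
        if (entry->d_name[0] < '0' || entry->d_name[0] > '9')
            continue;  /* skip "." and ".." */
        int fd = atoi(entry->d_name);
        if (fd >= lowfd && fd != dirfd(dir))
            fds[count++] = fd;
    }
    closedir(dir);

    /* Pass 2: close them, now that the directory is no longer being read. */
    for (size_t i = 0; i < count; i++)
        close(fds[i]);
}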
msg219442 - (view) Author: Steven Stewart-Gallus (sstewartgallus) * Date: 2014-05-31 05:06
Okay, here's a stub patch that addresses FreeBSD, NetBSD and Linux. I'm not sure how to address the other platforms.
msg219443 - (view) Author: Steven Stewart-Gallus (sstewartgallus) * Date: 2014-05-31 05:20
Oh right! I forgot a possible problem with my proposed patch: it is incompatible with Valgrind (see https://bugs.kde.org/show_bug.cgi?id=331311). Either this patch won't be applied, or Valgrind compatibility will be judged not essential, or the Valgrind developers will have to start emulating /proc/self/fd.
msg219455 - (view) Author: Gregory P. Smith (gregory.p.smith) * (Python committer) Date: 2014-05-31 16:00
Are you aware that the subprocess module does use /proc/self/fd in Python 3.2 and later?  The fd closing is not done from Python code.

See Modules/_posixsubprocess.c - http://hg.python.org/cpython/file/53fa2c9523d4/Modules/_posixsubprocess.c
msg219460 - (view) Author: Gregory P. Smith (gregory.p.smith) * (Python committer) Date: 2014-05-31 16:44
Regardless, the current C code does limit itself to the sysconf(_SC_OPEN_MAX) max_fd captured at module import time when closing fds found in /proc/self/fd, so it still has a bug: fds higher than that value remain unclosed. (At that point your Valgrind issue would come into play, unless we can detect that we are running under Valgrind and obey the max in that case.)
msg219477 - (view) Author: Gregory P. Smith (gregory.p.smith) * (Python committer) Date: 2014-05-31 22:23
There appear to be two bugs here, depending on which platform subprocess is being used on.

1) On systems where it uses /proc/self/fd, /dev/fd or similar:

 It should not pay attention to end_fd at all. It knows the list of actual open fds and should use that. If possible, consider detecting and avoiding closing Valgrind fds; but that is a special case for a Valgrind bug and likely not worth it.

2) On systems without a way to get the list of open file descriptors:

 The sysconf("SC_OPEN_MAX") value is only saved at module import time, but the process may raise or lower it at runtime via the setrlimit(RLIMIT_NOFILE, ...) libc call. What sysconf returns matches the current rlim_cur setting (at least on Linux, where this code path wouldn't actually be taken because #1 is available).

 Possible solution: call getrlimit(RLIMIT_NOFILE) and use rlim_max instead of sysconf("SC_OPEN_MAX") at module import time (a sketch follows below).
 Caveat: rlim_max can be raised by processes granted that capability. It is impossible to do anything about that in this scenario, given we're operating without a way to get a list of open fds.
 Impact: only OSes that lack a way to enumerate open fds as in #1 above. So... nothing anyone really uses, unless they choose to come contribute support for that themselves. (Linux, BSD and OS X all fall into #1 above.)
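
A minimal sketch of that possible solution; highest_possible_fd is a hypothetical helper under the assumption that only this no-fd-listing fallback path uses it, not the actual CPython code:

#include <sys/resource.h>
#include <unistd.h>

static long highest_possible_fd(void) {
    struct rlimit limit;
    /* rlim_max survives later lowering of rlim_cur, so it is a safer bound. */
    if (getrlimit(RLIMIT_NOFILE, &limit) == 0 && limit.rlim_max != RLIM_INFINITY)
        return (long)limit.rlim_max;
    return sysconf(_SC_OPEN_MAX);  /* last resort */
}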

Neither of these is a likely scenario, so I wouldn't consider this a high priority to fix, but it should be done. Most code never touches its OS resource limits. getrlimit and setrlimit are not exposed in the os module; you must use ctypes or an extension module to call them from Python:

import ctypes

class StructRLimit(ctypes.Structure):
    # struct rlimit: soft (rlim_cur) and hard (rlim_max) limits.
    _fields_ = [('rlim_cur', ctypes.c_ulong), ('rlim_max', ctypes.c_ulong)]

libc = ctypes.cdll.LoadLibrary('libc.so.6')
RLIMIT_NOFILE = 7  # Linux
limits = StructRLimit()
assert libc.getrlimit(RLIMIT_NOFILE, ctypes.byref(limits)) == 0
print(limits.rlim_cur, limits.rlim_max)
limits.rlim_cur = limits.rlim_max  # raise the soft limit to the hard limit
assert libc.setrlimit(RLIMIT_NOFILE, ctypes.byref(limits)) == 0
msg219478 - (view) Author: Steven Stewart-Gallus (sstewartgallus) * Date: 2014-06-01 02:11
I agree that this is not a likely scenario, but I can think of one mildly plausible case. Suppose a web server runs a Python CGI script but has a bug that leaks a file descriptor into the script. The web server sandboxes the script a little with resource limits, so the leaked descriptor is higher than the script's file descriptor maximum. The Python CGI script then runs a sandboxed utility (perhaps as a different user) and leaks the descriptor again, because it is above the resource limit. That utility is then exploited by an attacker over the internet by being fed bad input. Thanks to the doubly leaked descriptor, the attacker could break out of a chroot or feed bad input through a sensitive file descriptor. Anyway, the bug should be fixed regardless.

Thanks for correcting me on the location of the fd closing code. Some
observations.

Strangely, there seems to be a _close_fds method in the Python subprocess module that is not used anywhere. It should either be removed or fixed similarly; for understandability, if it is fixed it should simply delegate to the C code.

The bug I mentioned earlier, about modifying the fd directory while concurrently reading from it, occurs in _close_open_fd_range_safe, and it is a genuine security issue (although I don't know how likely it is to happen in practice). Because _close_open_fd_range_safe can't allocate memory, the code there will be pretty ugly, but oh well.

There doesn't seem to be any point to caching max_fd in a variable on
module load. Why not just use sysconf every time it is needed? Is
there some need for really fast performance? Does sysconf allocate
memory or something?

Anyway, the code should be refactored not to use max_fd at all on the platforms that can enumerate their open fds.

Thank you for your thoughts. Also, should I keep discussion of some of
the bugs I observed here or raise them in other issues so they don't
get lost?
msg219479 - (view) Author: Akira Li (akira) * Date: 2014-06-01 02:28
> getrlimit and setrlimit are not exposed in the os module; you must use ctypes or an extension module to call them from Python:

There is `resource` module:

  >>> import resource
  >>> resource.getrlimit(resource.RLIMIT_NOFILE) 
  (1024, 4096)
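
(For completeness: the module also exposes resource.setrlimit, so the ctypes snippet above can be replaced by resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard)), using the hard value returned by getrlimit.)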
msg219480 - (view) Author: Steven Stewart-Gallus (sstewartgallus) * Date: 2014-06-01 02:29
I found another problem with _close_open_fd_range_safe. POSIX leaves the state of a file descriptor passed to close undefined if the close fails with EINTR. I believe, but haven't double-checked, that close should not be retried on EINTR on any of our supported platforms. If you must have absolute portability, block all signals so that close can't fail with EINTR, and unblock them after the close. This isn't an actual problem here, because the code will just close an extra time, but it's still bothersome.
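
A minimal sketch of that signal-blocking workaround; close_no_eintr is a hypothetical helper name. With every signal blocked in the calling thread, close() cannot be interrupted, so the EINTR ambiguity never arises:

#include <signal.h>
#include <unistd.h>

static int close_no_eintr(int fd) {
    sigset_t all, saved;
    sigfillset(&all);
    sigprocmask(SIG_BLOCK, &all, &saved);   /* no signal can interrupt this thread now */
    int result = close(fd);
    sigprocmask(SIG_SETMASK, &saved, NULL); /* restore the caller's signal mask */
    return result;
}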
msg219491 - (view) Author: Gregory P. Smith (gregory.p.smith) * (Python committer) Date: 2014-06-01 07:18
Here's a patch with a unittest that reproduces the problem, plus fixes to stop using any end_fds. The max fd is now only used in the absolute fallback situation where no way to get a list of open fds is available; in that case it is obtained from sysconf() at the time it is needed, rather than at module load time, as sysconf() is async-signal-safe.

I'm not worried about calling close() an additional time on EINTR in the single-threaded child process prior to exec(). The most that will happen is one extra call that fails with a different error if the fd is in a bad state from the previous call. That is better than any chance of an fd being left open.
msg219509 - (view) Author: Steven Stewart-Gallus (sstewartgallus) * Date: 2014-06-01 16:24
Thank you for the very quick patch, Gregory P. Smith.

It's fair enough if you don't bother to fix the EINTR issue.

One small note:

> +        """Confirm that issue21618 is fixed (mail fail under valgrind)."""

That's a typo right? Shouldn't it be may instead of mail?
msg219523 - (view) Author: Roundup Robot (python-dev) (Python triager) Date: 2014-06-01 20:22
New changeset 5453b9c59cd7 by Gregory P. Smith in branch '3.4':
Don't restrict ourselves to a "max" fd when closing fds before exec()
http://hg.python.org/cpython/rev/5453b9c59cd7

New changeset 012329c8c4ec by Gregory P. Smith in branch 'default':
Don't restrict ourselves to a "max" fd when closing fds before exec()
http://hg.python.org/cpython/rev/012329c8c4ec
msg219525 - (view) Author: Gregory P. Smith (gregory.p.smith) * (Python committer) Date: 2014-06-01 21:04
Backported to subprocess32 in https://code.google.com/p/python-subprocess32/source/detail?r=1c27bfe7e98f78e6aaa746b5c0a4d902a956e2a5
History
Date User Action Args
2022-04-11 14:58:04  admin            set     github: 65817
2014-06-01 21:04:03  gregory.p.smith  set     status: open -> closed; versions: + Python 3.5; messages: + msg219525; resolution: fixed; stage: patch review -> commit review
2014-06-01 20:22:24  python-dev       set     nosy: + python-dev; messages: + msg219523
2014-06-01 16:24:22  sstewartgallus   set     messages: + msg219509
2014-06-01 07:18:19  gregory.p.smith  set     files: + issue21618-34-gps01.diff; type: security -> behavior; messages: + msg219491; stage: patch review
2014-06-01 02:29:05  sstewartgallus   set     messages: + msg219480
2014-06-01 02:28:09  akira            set     nosy: + akira; messages: + msg219479
2014-06-01 02:11:12  sstewartgallus   set     messages: + msg219478
2014-05-31 22:23:09  gregory.p.smith  set     messages: + msg219477
2014-05-31 16:44:38  gregory.p.smith  set     assignee: gregory.p.smith; messages: + msg219460
2014-05-31 16:33:02  gregory.p.smith  set     nosy: - gps
2014-05-31 16:00:29  gregory.p.smith  set     nosy: + gregory.p.smith; messages: + msg219455
2014-05-31 05:53:16  ned.deily        set     nosy: + larry, gps
2014-05-31 05:20:46  sstewartgallus   set     messages: + msg219443
2014-05-31 05:06:37  sstewartgallus   set     files: + python.patch; keywords: + patch; messages: + msg219442
2014-05-31 04:25:06  sstewartgallus   set     type: security; versions: + Python 3.4
2014-05-31 04:24:41  sstewartgallus   create