
Author sstewartgallus
Recipients sstewartgallus
Date 2014-05-31.04:24:39
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1401510281.2.0.0561893965896.issue21618@psf.upfronthosting.co.za>
In-reply-to
Content
The sysconf(_SC_OPEN_MAX) approach to closing fds is flawed: it is hacky and slow (see http://bugs.python.org/issue1663329), and it is also incorrect, because fds can be inherited from previous processes that ran with higher resource limits. This matters because it is a possible security problem. I would recommend using the closefrom system call on BSDs, or reading the /dev/fd directory on BSDs and /proc/self/fd on Linux (remember not to close fds while you are still reading entries from the fd directory; concurrently reading and modifying the directory gives weird results, so collect the fd numbers first and close them afterwards). A C program that illustrates the problem of inheriting fds past lowered resource limits is shown below.

#include <errno.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void) {
    /* Drop the fd resource limit to zero, below stdout's fd number (1). */
    struct rlimit const limit = {
        .rlim_cur = 0,
        .rlim_max = 0
    };
    if (-1 == setrlimit(RLIMIT_NOFILE, &limit)) {
        fprintf(stderr, "error: %s\n", strerror(errno));
        exit(EXIT_FAILURE);
    }

    /* fd 1 remains open and usable even though the limit is now 0. */
    puts("Printing to standard output even though the resource limit is lowered past standard output's number!");

    return EXIT_SUCCESS;
}
History
Date User Action Args
2014-05-31 04:24:41 sstewartgallus set recipients: + sstewartgallus
2014-05-31 04:24:41 sstewartgallus set messageid: <1401510281.2.0.0561893965896.issue21618@psf.upfronthosting.co.za>
2014-05-31 04:24:41 sstewartgallus link issue21618 messages
2014-05-31 04:24:39 sstewartgallus create