
Author serhiy.storchaka
Recipients Gumnos, Roger Erens, docs@python, gvanrossum, r.david.murray, roysmith, serhiy.storchaka, steven.daprano
Date 2020-05-31.17:50:02
Content
Yes, for the pattern 'a*/b*/c*' you will have an open file descriptor for every pattern component that contains metacharacters:

    from fnmatch import fnmatch
    from os import scandir

    def glob_abc():
        # Each nested scandir() keeps a directory fd open until its
        # loop is exhausted, so three fds are open at the innermost level.
        for a in scandir('.'):
            if fnmatch(a.name, 'a*'):
                for b in scandir(a.path):
                    if fnmatch(b.name, 'b*'):
                        for c in scandir(b.path):
                            if fnmatch(c.name, 'c*'):
                                yield c.path

You can have hundreds of nested directories. That may not look bad, because by default the limit on the number of open file descriptors is 1024 on Linux. But imagine you run a server that handles tens of requests simultaneously. Some or all of them will fail, and not just return an error: they will return an incorrect result, because every OSError, including "Too many open files", is silenced in glob().
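A minimal sketch of how you could observe the silencing (a hypothetical demo: the directory layout and the limit value 16 are assumptions, and it requires the Unix-only resource module):

    import glob
    import resource

    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    resource.setrlimit(resource.RLIMIT_NOFILE, (16, hard))  # artificially low fd limit
    try:
        # Any OSError from the nested scandir() calls, including EMFILE
        # ("Too many open files"), is swallowed inside glob(), so the
        # result may be silently incomplete rather than an exception.
        matches = glob.glob('a*/b*/c*')
        print(len(matches))
    finally:
        resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))  # restore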

Also, none of these file descriptors will be closed until you finish the iteration, or, in case of an error, until the garbage collector closes them (because interrupted generators tend to create reference cycles).
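A minimal sketch of the cleanup side (assuming CPython, where explicitly closing the generator finalizes its frame so reference counting can reclaim the abandoned scandir iterators immediately):

    import glob

    it = glob.iglob('a*/b*/c*')
    first = next(it, None)  # starting the generator opens the nested fds
    # ... the iteration is abandoned here ...
    it.close()  # finalize the frame now instead of waiting for the GC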

So it is vital to close each file descriptor before opening other file descriptors in the recursion.
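A minimal sketch of that discipline (the helpers glob_level and glob_abc are hypothetical illustrations, not the actual glob() implementation): list each directory level completely, let the with-block close its descriptor, and only then descend.

    from fnmatch import fnmatch
    from os import scandir

    def glob_level(dirpath, pattern):
        # The with-block closes the directory fd on exit, so no
        # descriptor stays open across the recursive calls below.
        with scandir(dirpath) as it:
            return [e.path for e in it if fnmatch(e.name, pattern)]

    def glob_abc(root='.'):
        for a in glob_level(root, 'a*'):
            for b in glob_level(a, 'b*'):
                yield from glob_level(b, 'c*')

With this shape, at most one directory descriptor is open at any moment, regardless of nesting depth.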