This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

Author Chris Langton
Recipients Chris Langton, Henrique Andrade, davin, pablogsal, pitrou
Date 2019-02-01.01:29:59
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1548984599.6.0.785921805558.issue33081@roundup.psfhosted.org>
In-reply-to
Content
Interestingly, while one would expect Process or Queue to close its resource file descriptors, it doesn't, because the developers chose to defer garbage-collection management to the user. If you 'upgrade' your code to use a Pool, however, the process file descriptors do get closed, because the pool destroys its worker objects (so they are garbage-collected more often).
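The deferred cleanup is easy to observe directly. A minimal, Linux-specific sketch (it reads /proc/self/fd, which exists only on Linux) using a multiprocessing.Pipe to show that the descriptors stay open until you close them explicitly rather than being cleaned up for you:

```python
import multiprocessing
import os


def open_fds():
    # Linux-specific: enumerate this process's open file descriptors
    return set(os.listdir('/proc/self/fd'))


baseline = open_fds()
reader, writer = multiprocessing.Pipe()  # allocates one descriptor per end
# expect at least two new descriptors relative to the baseline
assert len(open_fds() - baseline) >= 2

# nothing closes these automatically while the objects are alive;
# close them deterministically instead of waiting for gc
reader.close()
writer.close()
```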

Say you're limited to a little over 1000 fd in your o/s you can do this

#######################################################################

import multiprocessing
import json


def process(data):
    # each call opens (and promptly closes) one file descriptor
    with open('/tmp/fd/%d.json' % data['name'], 'w') as f:
        f.write(json.dumps(data))
    return 'processed %d' % data['name']


if __name__ == '__main__':
    pool = multiprocessing.Pool(1000)
    try:
        for i in range(10000000):
            x = {'name': i}
            pool.apply(process, args=(x,))
    finally:
        pool.close()
        pool.join()  # wait for workers to exit so their fds are released
        del pool

#######################################################################

only the pool's fd hangs around longer than it should, which is a huge improvement, and you are unlikely to hit a scenario where you need many pool objects.
History
Date User Action Args
2019-02-01 01:30:01  Chris Langton  set     recipients: + Chris Langton, pitrou, davin, Henrique Andrade, pablogsal
2019-02-01 01:29:59  Chris Langton  set     messageid: <1548984599.6.0.785921805558.issue33081@roundup.psfhosted.org>
2019-02-01 01:29:59  Chris Langton  link    issue33081 messages
2019-02-01 01:29:59  Chris Langton  create