
Author primal
Recipients aeros, asvetlov, primal, yselivanov
Date 2019-11-01.09:35:52
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1572600953.17.0.227970841584.issue32309@roundup.psfhosted.org>
In-reply-to
Content
I don't think changing the default executor is a good approach. What happens if two or more thread pools are running at the same time? In that case they would all use the same default executor anyway, so creating a new executor each time seems like a waste.
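To illustrate the sharing: every `run_in_executor(None, ...)` call on a loop goes through that loop's single default executor, so any number of pool wrappers built on top of it would compete for the same worker threads. A minimal sketch of that behaviour:

```python
import asyncio
import threading

async def main():
    loop = asyncio.get_running_loop()
    # Passing None as the executor routes both calls through the loop's
    # one shared default ThreadPoolExecutor -- there is only one per loop,
    # no matter how many call sites (or wrapper "pools") use it.
    t1 = await loop.run_in_executor(None, threading.current_thread)
    t2 = await loop.run_in_executor(None, threading.current_thread)
    return t1, t2

t1, t2 = asyncio.run(main())
```

Both calls execute on default-executor worker threads rather than the main thread.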

Shutting down the default executor seems unnecessary and could impact lower-level code that is using it. The default executor is shut down at the end of asyncio.run anyway.

I also think it would be good to have a synchronous entry point rather than requiring a context manager. Having a ThreadPool per class instance would be a common pattern.


import asyncio
import concurrent.futures
import functools


class ThreadPool:
    def __init__(self, timeout=None):
        self.timeout = timeout
        self._loop = asyncio.get_event_loop()
        self._executor = concurrent.futures.ThreadPoolExecutor()

    async def close(self):
        # ThreadPoolExecutor.shutdown() is blocking and has no timeout
        # parameter, so run it in the default executor and bound the
        # wait with asyncio.wait_for.
        await asyncio.wait_for(
            self._loop.run_in_executor(None, self._executor.shutdown),
            timeout=self.timeout)

    async def __aenter__(self):
        return self

    async def __aexit__(self, *args):
        await self.close()

    def run(self, func, *args, **kwargs):
        call = functools.partial(func, *args, **kwargs)
        return self._loop.run_in_executor(self._executor, call)


I'm not sure a new ThreadPoolExecutor really needs to be created for each ThreadPool, though.
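For illustration, the per-class-instance pattern mentioned above might look like this. The `Downloader`/`fetch` names are purely hypothetical, and the snippet carries its own tiny pool handling so it runs standalone:

```python
import asyncio
import concurrent.futures
import functools


class Downloader:
    """Hypothetical class that owns its own thread pool."""

    def __init__(self):
        # Synchronous entry point: no event loop is needed at
        # construction time, so this works in plain __init__.
        self._executor = concurrent.futures.ThreadPoolExecutor()

    def fetch(self, n):
        # Stand-in for a blocking call (e.g. a synchronous HTTP request).
        return n * 2

    async def fetch_async(self, n):
        loop = asyncio.get_running_loop()
        call = functools.partial(self.fetch, n)
        return await loop.run_in_executor(self._executor, call)

    def close(self):
        self._executor.shutdown(wait=True)


async def main():
    d = Downloader()
    try:
        # gather preserves argument order in its result list.
        results = await asyncio.gather(*(d.fetch_async(i) for i in range(3)))
    finally:
        d.close()
    return results

print(asyncio.run(main()))  # prints [0, 2, 4]
```

The pool lives for the lifetime of the instance instead of a single `async with` block, which is the synchronous-entry-point shape argued for above.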
History
Date                 User    Action  Args
2019-11-01 09:35:53  primal  set     recipients: + primal, asvetlov, yselivanov, aeros
2019-11-01 09:35:53  primal  set     messageid: <1572600953.17.0.227970841584.issue32309@roundup.psfhosted.org>
2019-11-01 09:35:53  primal  link    issue32309 messages
2019-11-01 09:35:52  primal  create