Author sophia2
Recipients aeros, asvetlov, dralley, sophia2, yselivanov
Date 2020-10-30.02:22:37
While not calling executor.shutdown() may leave some resources in use, the amount should be small and bounded. Requiring users to regularly call executor.shutdown() and then instantiate a new ThreadPoolExecutor in order to run an asyncio program does not seem like a good API to me.
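For illustration (not from the original program; `run_once` and the summed list are made up), the per-run pattern in question could be sketched roughly like this:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

def run_once():
    # A fresh pool for every run of the asyncio program...
    executor = ThreadPoolExecutor()

    async def main():
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(executor, sum, [1, 2, 3])

    try:
        return asyncio.run(main())
    finally:
        # ...followed by an explicit shutdown every time.
        executor.shutdown(wait=True)

print(run_once())  # 6
```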

You mention there appear to be both an event loop leak and a futures leak -- I think I have a good test case for the futures, without using threads at all. It seems to leak the `future._result`s somehow, even though their `__del__` is called.

import asyncio
from concurrent.futures import Executor, Future
import gc

result_gcs = 0
suture_gcs = 0

class ResultHolder:
    def __init__(self, mem_size):
        self.mem = list(range(mem_size)) # so we can see the leak
    def __del__(self):
        global result_gcs
        result_gcs += 1

class Suture(Future):
    def __del__(self):
        global suture_gcs
        suture_gcs += 1

class SimpleExecutor(Executor):
    def submit(self, fn):
        future = Suture()
        # the result object; its __del__ bumps result_gcs when collected
        future.set_result(ResultHolder(10000))
        return future

async def function():
    loop = asyncio.get_running_loop()
    for i in range(10000):
        loop.run_in_executor(SimpleExecutor(), lambda x: x)

def run():
    asyncio.run(function())
    gc.collect()  # collect anything still reachable only through cycles
    print(suture_gcs, result_gcs)

>>> run()
10000 10000
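As a sanity check on the "__del__ runs but memory isn't freed" claim, here is a hypothetical way (not part of the test case above) to confirm that freeing objects normally does lower traced memory, using tracemalloc:

```python
import gc
import tracemalloc

tracemalloc.start()

# Allocate something large enough to show up in the trace.
data = [list(range(10000)) for _ in range(100)]
before, _ = tracemalloc.get_traced_memory()

del data
gc.collect()
after, _ = tracemalloc.get_traced_memory()

# Normally, dropping the last reference returns the memory.
print(after < before)  # True
```

If the leaked `_result`s behave differently, the same kind of before/after comparison around the run_in_executor loop should show traced memory failing to drop.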

Both result_gcs and suture_gcs are 10000 every time. My best guess for why this happens (for me it doesn't seem to happen without loop.run_in_executor) is the conversion from a concurrent.futures.Future to an asyncio.Future, which involves callbacks to check on the result. But that doesn't make sense either, because the result itself has __del__ called on it, yet somehow the memory isn't freed!
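The conversion I mean can be seen in isolation with asyncio.wrap_future, which is roughly what run_in_executor uses internally to chain the two future types (a minimal sketch, not from the report):

```python
import asyncio
from concurrent.futures import Future

async def main():
    cf = Future()                 # a plain concurrent.futures.Future
    af = asyncio.wrap_future(cf)  # asyncio.Future mirroring cf via callbacks
    cf.set_result("done")         # completing cf resolves af
    print(await af)               # done

asyncio.run(main())
```

The callback that copies the result across is the place where a stray reference to `_result` could in principle be retained.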
Linked issue: issue41699