Message379907
While not calling executor.shutdown() may leave some resources in use, the amount should be small and fixed. Requiring users to call executor.shutdown() and then instantiate a new ThreadPoolExecutor every time they want to run an asyncio program does not seem like a good API to me.
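For context, the per-run pattern being criticized would look something like this (a minimal sketch; `do_work` is a hypothetical stand-in for blocking work, not from the original report):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

def do_work(n):
    # Hypothetical blocking function offloaded to the executor.
    return n * 2

async def main(executor):
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(executor, do_work, 21)

# The awkward pattern: create a fresh executor around each asyncio.run()
# and shut it down afterwards just to release its resources.
executor = ThreadPoolExecutor()
try:
    result = asyncio.run(main(executor))
finally:
    executor.shutdown()
print(result)  # 42
```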
You mention there appear to be both an event loop leak and a futures leak -- I think I have a good test case for the futures, without using threads at all. It seems to leak each `future._result` somehow, even though its __del__ is called.
```
import asyncio
from concurrent.futures import Executor, Future
import gc

result_gcs = 0
suture_gcs = 0

class ResultHolder:
    def __init__(self, mem_size):
        self.mem = list(range(mem_size))  # so we can see the leak

    def __del__(self):
        global result_gcs
        result_gcs += 1

class Suture(Future):
    def __del__(self):
        global suture_gcs
        suture_gcs += 1

class SimpleExecutor(Executor):
    def submit(self, fn):
        future = Suture()
        future.set_result(ResultHolder(1000))
        return future

async def function():
    loop = asyncio.get_running_loop()
    for i in range(10000):
        loop.run_in_executor(SimpleExecutor(), lambda x: x)

def run():
    asyncio.run(function())
    print(suture_gcs, result_gcs)
```
Memory usage before calling run(): 10MB
```
>>> run()
10000 10000
```
Memory usage after: 100MB
Both result_gcs and suture_gcs are 10000 every time. My best guess for why this happens (it doesn't seem to happen without loop.run_in_executor) is the conversion from a concurrent.futures.Future to an asyncio.Future, which involves callbacks to check on the result. But that doesn't quite make sense either, because __del__ is called on the result itself, yet somehow the memory is not freed.
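For reference, the conversion in question is performed by asyncio.wrap_future(), which chains the two futures together with callbacks that copy the result across. A minimal sketch of that mechanism (not from the original report):

```python
import asyncio
from concurrent.futures import Future

async def main():
    cf = Future()                 # a concurrent.futures.Future
    af = asyncio.wrap_future(cf)  # asyncio.Future chained to cf via callbacks
    cf.set_result("hello")        # the chaining callback copies this into af
    return await af

print(asyncio.run(main()))  # hello
```

Those chaining callbacks are a plausible place for a reference to the result to linger, which is why they are the first suspect here.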
Date | User | Action | Args
2020-10-30 02:22:37 | sophia2 | set | recipients: + sophia2, asvetlov, yselivanov, aeros, dralley
2020-10-30 02:22:37 | sophia2 | set | messageid: <1604024557.29.0.0191396556386.issue41699@roundup.psfhosted.org>
2020-10-30 02:22:37 | sophia2 | link | issue41699 messages
2020-10-30 02:22:37 | sophia2 | create |