
classification
Title:        ThreadPoolExecutor with wait=True shuts down too early
Type:
Stage:
Components:   Library (Lib)
Versions:     Python 3.8

process
Status:       open
Resolution:
Dependencies:
Superseder:
Assigned To:
Nosy List:    fireattack, gaoxinge
Priority:     normal
Keywords:

Created on 2020-03-27 23:21 by fireattack, last changed 2022-04-11 14:59 by admin.

Messages (7)
msg365194 - Author: fireattack (fireattack) Date: 2020-03-27 23:21
Example

```
from concurrent.futures import ThreadPoolExecutor
from time import sleep

def wait_on_future():
    sleep(1)
    print(f.done()) # f is not done obviously
    f2 = executor.submit(pow, 5, 2)
    print(f2.result())
    sleep(1)

executor = ThreadPoolExecutor(max_workers=100)
f = executor.submit(wait_on_future)
executor.shutdown(wait=True)
```

When debugging, it shows "cannot schedule new futures after shutdown":

    Exception has occurred: RuntimeError
    cannot schedule new futures after shutdown
    File "test2.py", line 7, in wait_on_future
        f2 = executor.submit(pow, 5, 2)

According to https://docs.python.org/3/library/concurrent.futures.html, `shutdown(wait=True)` "[s]ignal[s] the executor that it should free any resources that it is using when the currently pending futures are done executing". But when f2 is being submitted, f is not done yet, so the executor shouldn't have shut down yet.
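
For reference, a minimal self-contained sketch (not part of the original report) of the documented contract: futures submitted before `shutdown(wait=True)` still run to completion, while any `submit()` made after shutdown has been called raises a RuntimeError.

```
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=2)
f = executor.submit(pow, 5, 2)   # submitted before shutdown: will complete
executor.shutdown(wait=True)     # blocks until f is done, then frees resources
print(f.result())                # 25

try:
    executor.submit(pow, 2, 3)   # submitted after shutdown
except RuntimeError as e:
    print(e)                     # cannot schedule new futures after shutdown
```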
msg365259 - Author: gaoxinge (gaoxinge) Date: 2020-03-29 15:18
```
from concurrent.futures import ThreadPoolExecutor
from time import sleep

def wait_on_future():
    sleep(1)
    print(f.done()) # f is not done obviously
    f2 = executor.submit(pow, 5, 2)
    print(f2.result())
    sleep(1)

executor = ThreadPoolExecutor(max_workers=100)
f = executor.submit(wait_on_future)
executor.shutdown(wait=True)
print(f.done())         # True
print(f.result())       # raises RuntimeError: cannot schedule new futures after shutdown
# print(f.exception())
```

Actually, `executor.shutdown(wait=True)` works: it really does wait for f to be done.
msg365261 - Author: gaoxinge (gaoxinge) Date: 2020-03-29 15:25
The workflow is as follows:

- the executor submits wait_on_future and returns the future f
- the main thread calls executor.shutdown(wait=True)
- inside wait_on_future, the executor is asked to submit pow; because shutdown has already been called, this submit fails and raises a RuntimeError
- that failure makes the worker thread's run of wait_on_future fail
- the worker thread then sets both the done signal and the exception on f, the future for wait_on_future (a short verification sketch follows)
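
A minimal verification sketch (not from the thread, assuming the same script as in msg365259) for the last step: the RuntimeError raised inside wait_on_future is stored on f, so f.exception() exposes it without re-raising the way f.result() does.

```
from concurrent.futures import ThreadPoolExecutor
from time import sleep

def wait_on_future():
    sleep(1)
    executor.submit(pow, 5, 2)  # shutdown was already called: raises RuntimeError

executor = ThreadPoolExecutor(max_workers=100)
f = executor.submit(wait_on_future)
executor.shutdown(wait=True)    # really does wait for f to finish

print(f.done())                 # True
print(f.exception())            # cannot schedule new futures after shutdown
```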
msg365269 - Author: fireattack (fireattack) Date: 2020-03-29 17:29
Hi gaoxinge, thanks for the reply.

I assume what you mean is that while shutdown will wait, it won't accept any new job/future after it is called.

That makes sense, but this still doesn't work:

```
from concurrent.futures import ThreadPoolExecutor
from time import sleep

def wait_on_future():
    sleep(1)
    f2 = executor.submit(pow, 5, 2)    
    print(f2.result())
    executor.shutdown(wait=True) # commenting this out makes no difference

executor = ThreadPoolExecutor(max_workers=100)
f = executor.submit(wait_on_future)
```

This shows no error, but f2.result() is still not printed (`result()` blocks until the future is done). In fact, f2 is not executed at all (you can replace pow with another function to see that).

Note that this example is adapted from an example in the official documentation: https://docs.python.org/3/library/concurrent.futures.html

In that example, the comment says "This will never complete because there is only one worker thread and it is executing this function.", which is correct. But it is somewhat misleading, because it implies the example would work if you increased the number of workers. However, it will NOT work with more workers if you also add a small delay, as shown above.

The point is, I failed to find a way to keep an executor created in the main thread alive so that it can accept futures/jobs from a child thread when there is a delay. I tried shutdown, doing nothing, the with statement, and as_completed; they all fail in a similar fashion.
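
One possible workaround (a sketch, not something proposed in this thread; the Event and its name are illustrative): have the child thread signal the main thread once it has submitted everything it needs, and only call shutdown after that signal.

```
from concurrent.futures import ThreadPoolExecutor
from time import sleep
import threading

done_submitting = threading.Event()  # illustrative name, not an Executor API

def wait_on_future():
    sleep(1)
    f2 = executor.submit(pow, 5, 2)  # executor is still open at this point
    done_submitting.set()            # tell the main thread it may shut down now
    print(f2.result())               # 25

executor = ThreadPoolExecutor(max_workers=100)
f = executor.submit(wait_on_future)
done_submitting.wait()               # keep the main thread alive until f2 is queued
executor.shutdown(wait=True)         # now waits for both f and f2 to finish
print(f.done())                      # True
```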
msg365270 - Author: fireattack (fireattack) Date: 2020-03-29 17:44
Here is another, more bizarre showcase of the issue that I came up with.

If you do:

```
import concurrent.futures
import time

def test():    
    time.sleep(3)    
    print('test')

ex = concurrent.futures.ThreadPoolExecutor(max_workers=10)
ex.submit(test)
```

This will print "test" after 3 seconds just fine.

Now, if you do:

```
import concurrent.futures
import time

def test():    
    time.sleep(3)
    ex.submit(print, 'ex-print')
    print('test') #this is not printed

ex = concurrent.futures.ThreadPoolExecutor(max_workers=10)
ex.submit(test)
```

Not only does it not print "ex-print", it does NOT even print "test" any more. And there is no error.
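
A diagnostic sketch (not from the thread) that makes the swallowed error visible: keep the outer future and call shutdown(wait=True) so the main thread stays around to inspect it. The inner submit is what fails, which is why neither "ex-print" nor "test" appears; note that the explicit shutdown changes the exact error message compared with simply letting the interpreter exit.

```
import concurrent.futures
import time

def test():
    time.sleep(3)
    ex.submit(print, 'ex-print')   # raises RuntimeError: pool already shut down
    print('test')                  # never reached, because the line above raised

ex = concurrent.futures.ThreadPoolExecutor(max_workers=10)
outer = ex.submit(test)
ex.shutdown(wait=True)             # wait for test() to run (and fail)
print(outer.exception())           # cannot schedule new futures after shutdown
```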
msg365298 - Author: gaoxinge (gaoxinge) Date: 2020-03-30 05:27
> I assume what you mean is that while shutdown will wait, it won't accept any new job/future after it is called.

Yes, you are right. This is a feature of the ThreadPoolExecutor.
msg366582 - Author: fireattack (fireattack) Date: 2020-04-16 08:29
> Yes, you are right. This is a feature of the ThreadPoolExecutor.

So is there any way to make the executor actually wait and accept new job(s) after a while? I tried as_completed() and wait(); neither seems to work.
History
Date                 User        Action  Args
2022-04-11 14:59:28  admin       set     github: 84274
2020-04-16 08:29:21  fireattack  set     messages: + msg366582
2020-03-30 05:27:21  gaoxinge    set     messages: + msg365298
2020-03-29 17:44:29  fireattack  set     messages: + msg365270
2020-03-29 17:29:10  fireattack  set     messages: + msg365269
2020-03-29 15:25:36  gaoxinge    set     messages: + msg365261
2020-03-29 15:18:22  gaoxinge    set     nosy: + gaoxinge; messages: + msg365259
2020-03-27 23:21:42  fireattack  create