This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: subprocess.Popen._cleanup() "The handle is invalid" error when some old process is gone
Type: crash
Stage: resolved
Components: Windows
Versions: Python 3.9, Python 3.8, Python 3.7

process
Status: closed
Resolution: fixed
Dependencies:
Superseder:
Assigned To:
Nosy List: Chip Lynch, efiop, emanuel, eryksun, gregory.p.smith, paul.moore, steve.dower, sylvain.corlay, tim.golden, vstinner, zach.ware
Priority: normal
Keywords: patch

Created on 2019-06-23 18:42 by efiop, last changed 2022-04-11 14:59 by admin. This issue is now closed.

Files
File name | Uploaded | Description
invalid_handle.py | emanuel, 2019-07-21 08:07 | A small reproduction that triggers the 'invalid handle' error

Pull Requests
URL | Status | Linked
PR 14327 | closed | python-dev, 2019-06-23 19:13
PR 14360 | merged | efiop, 2019-06-25 01:09
PR 15706 | merged | miss-islington, 2019-09-05 16:00
PR 15707 | merged | miss-islington, 2019-09-05 16:00
Messages (36)
msg346332 - (view) Author: Ruslan Kuprieiev (efiop) * Date: 2019-06-23 18:42
subprocess keeps the list of active processes in its `_active` list. Whenever you try to Popen a new process, it immediately (in the first line of Popen.__init__ [1]) tries to clean up the old ones. If one of the processes in that list is gone, then

```
_WaitForSingleObject(self._handle, 0)
```

[2] will raise `OSError: [WinError 6] The handle is invalid` and will prevent you from creating any new processes with Popen after that, because the line where that handle is removed from the _active list is never reached. On *nix [3], _internal_poll wraps waitpid in try/except and catches OSError without re-raising it, so if the process is gone, it is simply reported as dead and successfully removed from the _active list. It seems like we should do the same thing on Windows: if we catch the `The handle is invalid` error, just report the process as dead and remove it from the _active list.


[1] https://github.com/python/cpython/blob/v3.8.0b1/Lib/subprocess.py#L715

[2] https://github.com/python/cpython/blob/v3.8.0b1/Lib/subprocess.py#L1310

[3] https://github.com/python/cpython/blob/v3.8.0b1/Lib/subprocess.py#L1710
msg346333 - (view) Author: Eryk Sun (eryksun) * (Python triager) Date: 2019-06-23 19:04
See issue 36067 for a related discussion. The _active list and _cleanup function are not required in Windows. When a process exits, it gets rundown to free its handle table and virtual memory. Only the kernel object remains, which is kept alive by pointer and handle references to it that can query information such as the exit status code. As soon as the last reference is closed, the Process object is automatically reaped. It doesn't have to be waited on.
msg346338 - (view) Author: Ruslan Kuprieiev (efiop) * Date: 2019-06-23 19:46
Hi Eryk!

Thanks for a swift reply! So the current plan for fixing it is:

1) disable _cleanup, _active, and remove _internal_poll() for Windows
2) ignore EBADF (OSError: [WinError 6] The handle is invalid) in terminate() and probably some other methods

Please let me know if I'm missing something. Also, is anyone working on this, or should I prepare a PR?

Thanks,
Ruslan
msg346343 - (view) Author: Eryk Sun (eryksun) * (Python triager) Date: 2019-06-23 21:24
> 1) disable _cleanup, _active, and remove _internal_poll() for windows

_active only gets appended to in __del__. We can skip the entire body of __del__. Also, calling _cleanup can be skipped in __init__.

_internal_poll is required for poll().

> 2) ignore EBADF (OSError: [WinError 6] The handle is invalid) in terminate() and probably some other methods

ERROR_INVALID_HANDLE should not be ignored. 

terminate() is doing the right thing by not masking ERROR_INVALID_HANDLE. The only concern is the case where our handle has been closed elsewhere (not by subprocess) and the handle value was subsequently reused as a handle for another process to which we have terminate access. This is a bad handle at a logical, application level, but it's valid and accessible for TerminateProcess. It was suggested to initially get the pid and process startup time in order to validate the call, but Alexey Izbyshev is right that it's not worth complicating the code in subprocess to address an uncommon bug in particular application or library code.
msg346461 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-06-25 00:14
> 2) ignore EBADF(OSError: [WinError 6] The handle is invalid) in terminate() and probably some other methods

That sounds like a bad idea. We should avoid using a handle if it can become invalid out of our control. A handle might be recycled, no?
msg346470 - (view) Author: Ruslan Kuprieiev (efiop) * Date: 2019-06-25 01:20
Thanks for pointing that out! I've submitted a tiny patch to skip ``_cleanup()`` and ``__del__``. Please find it in the attached GitHub PR. Looking forward to your feedback!
msg346568 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-06-25 22:49
> See issue 36067 for a related discussion. The _active list and _cleanup function are not required in Windows. When a process exits, it gets rundown to free its handle table and virtual memory. Only the kernel object remains, which is kept alive by pointer and handle references to it that can query information such as the exit status code. As soon as the last reference is closed, the Process object is automatically reaped. It doesn't have to be waited on.

Linux (for example) has the same design: the kernel doesn't keep a "full process" alive, but a lightweight structure just for its parent process which gets the exit status. That's the concept of "zombie process".

One issue on Linux is that the zombie process keeps the pid used until the parent reads the child exit status, and Linux pids are limited to 32768 by default.
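The default pid ceiling mentioned above can be checked directly on Linux (the value is tunable, so distributions may raise it well above 32768):

```python
# Read the kernel's pid ceiling (Linux only). 32768 is the historical
# default; a zombie occupies one of these slots until it is reaped.
with open("/proc/sys/kernel/pid_max") as f:
    pid_max = int(f.read())
```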

Now I'm not sure that I understand what it means on Windows. The subprocess module uses a magic Handle object which calls CloseHandle(handle) in its __del__() method. I dislike relying on destructors. If an object is kept alive by a reference cycle, it's never released: CloseHandle() isn't called.

So code spawning processes should wait until the subprocesses complete, to ensure that CloseHandle() is called, no?

Except that Popen._internal_poll() doesn't clear the handle even after the process completes.

If Popen.__del__() doesn't add active processes to the _active list, it should at least explicitly call CloseHandle(), no?
msg346578 - (view) Author: Steve Dower (steve.dower) * (Python committer) Date: 2019-06-26 01:56
The handle can deliberately live beyond the process, but it does not have to. If it is closed early then the kernel object will be freed when no handles remain, which will be at process exit.

So it's a classic __exit__/__del__ case, where both are needed if you want deterministic resource disposal. But it is in no way tied to the life of the child process, so waiting first is optional.
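The deterministic-disposal option described above is the with statement: Popen.__exit__ closes the pipes and waits on the child, so cleanup does not depend on __del__ ever running:

```python
import subprocess
import sys

# Using Popen as a context manager: on exit, stdout is closed and the
# child is waited on, independent of garbage collection timing.
with subprocess.Popen([sys.executable, "-c", "print('hi')"],
                      stdout=subprocess.PIPE) as p:
    out = p.stdout.read()

# After the block, p.returncode is set and all handles are released.
```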
msg346581 - (view) Author: Eryk Sun (eryksun) * (Python triager) Date: 2019-06-26 03:43
> One issue on Linux is that the zombie process keeps the pid used until 
> the parent reads the child exit status, and Linux pids are limited to
> 32768 by default.

Windows allocates Process and Thread IDs out of a kernel handle table, which can grow to about 2**24 entries (more than 16 million). So the practical resource limit for inactive Process and Thread objects is available memory, not running out of PID/TID values.

> Linux (for example) has the same design: the kernel doesn't keep a 
> "full process" alive, but a lightweight structure just for its parent
> process which gets the exit status. That's the concept of "zombie 
> process".

In Unix, the zombie remains visible in the task list (marked as <defunct> in Linux), but in Windows an exited process is removed from the Process Manager's active list, so it's no longer visible to users. Also, a Process object is reaped as soon as the last reference to it is closed, since clearly no one needs it anymore. 

> The subprocess module uses a magic Handle object which calls 
> CloseHandle(handle) in its __del__() method. I dislike relying on 
> destructors. If an object is kept alive by a reference cycle, it's
> never released: CloseHandle() isn't called.

We could call self._handle.Close() in _wait(), right after calling GetExitCodeProcess(self._handle). With this change, __exit__ will ensure that _handle gets closed in a deterministic context. Code that needs the handle indefinitely can call _handle.Detach() before exiting the with-statement context, but that should rarely be necessary.

I don't understand emitting a resource warning in Popen.__del__ if a process hasn't been waited on until completion beforehand (i.e. self.returncode is None). If a script wants to be strict about this, it can use a with statement, which is documented to wait on the process. 

I do understand emitting a resource warning in Popen.__del__ if self._internal_poll() is None. In this case, in Unix only now, the process gets added to the _active list to try to avoid leaking a zombie. The list gets polled in subprocess._cleanup, which is called in Popen.__init__. Shouldn't _cleanup also be set as an atexit function?

There should be a way to indicate a Popen instance is intended to continue running detached from our process, so scripts don't have to ignore an irrelevant resource warning.
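The atexit question above could be sketched like this (an illustration of the suggestion, not current stdlib behavior; `_cleanup` is a private helper and may not exist on all platforms or versions):

```python
import atexit
import subprocess

# Register the private _cleanup helper to run at interpreter exit, so
# any processes still tracked in _active get one final poll before
# shutdown. This is a sketch of the suggestion in the message above,
# not something subprocess does itself.
if hasattr(subprocess, "_cleanup"):
    atexit.register(subprocess._cleanup)
```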
msg346606 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-06-26 12:07
> In Unix, the zombie remains visible in the task list (marked as <defunct> in Linux), but in Windows an exited process is removed from the Process Manager's active list, so it's no longer visible to users. Also, a Process object is reaped as soon as the last reference to it is closed, since clearly no one needs it anymore. 
> (...)
> We could call self._handle.Close() in _wait(), right after calling GetExitCodeProcess(self._handle). With this change, __exit__ will ensure that _handle gets closed in a deterministic context.

I created bpo-37410 and PR 14391 to close the handle when the process completes. IMHO this change is easy and safe, it can be backported to Python 3.7 and 3.8.

> Code that needs the handle indefinitely can call _handle.Detach() before exiting the with-statement context, but that should rarely be necessary.
> (...)
> There should be a way to indicate a Popen instance is intended to continue running detached from our process, so scripts don't have to ignore an irrelevant resource warning.

My point is that if you want to "detach" a process, it must be *explicit*. In my experience, processes are "leaked" implicitly: by mistake. In multiprocessing it can be subtle, since you don't manipulate subprocess.Popen directly.

Users must not access private attributes (Popen._handle). After I added the ResourceWarning, I created bpo-27068, "Add a detach() method to subprocess.Popen". But I only created it in reply to a request from Martin Panter; I didn't use the feature myself. At some point, I decided to just close the issue. Maybe it's time to reopen it.


> I don't understand emitting a resource warning in Popen.__del__ if a process hasn't been waited on until completion beforehand (i.e. self.returncode is None). If a script wants to be strict about this, it can use a with statement, which is documented to wait on the process.

The purpose is to help developers find bugs in their programs. Many are not aware that processes continue to run in the background and that leaking zombie processes is an issue on Unix.

The process is not polled, to be able to emit the warning in more cases; again, to ease debugging.

The warning is ignored by default.
msg346635 - (view) Author: Eryk Sun (eryksun) * (Python triager) Date: 2019-06-26 15:28
> The process is not polled to be able to emit the warning in more 
> cases, again, to ease debug.

It emits an incorrect warning if the process has already exited: "subprocess %s is still running". This can be rectified. Here's my current understanding:

    def __del__(self, _maxsize=sys.maxsize, _warn=warnings.warn):
        if not self._child_created or self.returncode is not None:
            return
        # In Unix, not reading the subprocess exit status creates a zombie
        # process, which is only destroyed at the parent Python process exit.
        # In Windows, no one else should have a reference to our _handle, so
        # it should get finalized and thus closed, but we use the same warning
        # in order to consistently educate developers.
        if self._internal_poll(_deadstate=_maxsize) is not None:
            _warn("subprocess %s was implicitly finalized" % self.pid,
                  ResourceWarning, source=self)
        else:
            _warn("subprocess %s is still running" % self.pid,
                  ResourceWarning, source=self)
            # Keep this instance alive until we can wait on it, if needed.
            if _active is not None:
                _active.append(self)
msg346638 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-06-26 15:36
Sorry, I'm lost :-( More and more different issues are being discussed here. Would it be possible, please, to keep this issue focused on the _active list and _cleanup() on Windows? From what I read, removing/disabling _active and _cleanup() makes sense on Windows.

I already created bpo-37410 to call CloseHandle() earlier.

Eryk: Please open a separate issue if you want to change the ResourceWarning message.
msg346695 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-06-26 23:13
Here is a concrete example of ResourceWarning when debugging/stressing multiprocessing code, I interrupt (CTRL+c) test_resource_tracker while it's running:

vstinner@apu$ ./python -m test test_multiprocessing_spawn --fail-env-changed -v -m test_resource_tracker
== CPython 3.9.0a0 (heads/master:689830ee62, Jun 26 2019, 23:07:31) [GCC 9.1.1 20190503 (Red Hat 9.1.1-1)]
== Linux-5.1.11-300.fc30.x86_64-x86_64-with-glibc2.29 little-endian
== cwd: /home/vstinner/prog/python/master/build/test_python_18573
== CPU count: 8
== encodings: locale=UTF-8, FS=utf-8
Run tests sequentially
0:00:00 load avg: 0.32 [1/1] test_multiprocessing_spawn
test_resource_tracker (test.test_multiprocessing_spawn.TestResourceTracker) ... 

^C

/home/vstinner/prog/python/master/Lib/subprocess.py:917: ResourceWarning: subprocess 18576 is still running
  _warn("subprocess %s is still running" % self.pid,
ResourceWarning: Enable tracemalloc to get the object allocation traceback
/home/vstinner/prog/python/master/Lib/test/libregrtest/runtest.py:283: ResourceWarning: unclosed file <_io.BufferedReader name=6>
  return INTERRUPTED
ResourceWarning: Enable tracemalloc to get the object allocation traceback

== Tests result: INTERRUPTED ==
Test suite interrupted by signal SIGINT.

1 test omitted:
    test_multiprocessing_spawn

Total duration: 304 ms
Tests result: INTERRUPTED
msg346827 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-06-28 16:12
New changeset 042821ae3cf537e01963c9ec85d1a454d921e826 by Victor Stinner (Ruslan Kuprieiev) in branch 'master':
bpo-37380: subprocess: don't use _active on win (GH-14360)
https://github.com/python/cpython/commit/042821ae3cf537e01963c9ec85d1a454d921e826
msg346828 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-06-28 16:13
I merged PR 14360 into master. Are Python 2.7, 3.7 and 3.8 impacted by the bug as well? Should we backport the change, or would it be too dangerous?
msg346889 - (view) Author: Ruslan Kuprieiev (efiop) * Date: 2019-06-29 17:11
The `_active` code is pretty old and has not changed since 2.7, so it makes sense to backport the fix for now (it should also be trivial to do). To me, it doesn't seem dangerous to backport, but I would love to hear what Eryk and others think :)
msg346973 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-07-01 08:19
> To me, it doesn't seem dangerous to backport that, but I would love to hear what Eryk and others think :)

In my experience, any change is dangerous :-)

The initial bug doesn't contain any reproducer. I'm talking about:

> If one of the processes in that list is gone, then (...) "The handle is invalid" (...) will prevent you from creating any new processes with Popen after that

It's the first time that I have seen such a bug report. It seems like subprocess works for most people :-) Can you elaborate on how to reproduce this issue, to justify a backport?
msg346985 - (view) Author: Eryk Sun (eryksun) * (Python triager) Date: 2019-07-01 09:30
>> If one of the processes in that list is gone, then (...) "The handle 
>> is invalid" (...) will prevent you from creating any new processes 
>> with Popen after that
>
> It's the first time that I see such bug report.

IIRC it's the second time I've seen that issue reported. It should be rare. It should only occur if some other code in our process or another process closes our _handle value. It's not our bug. Here's an example in 3.7:

    >>> p = subprocess.Popen('python -c "input()"', stdin=subprocess.PIPE)
    >>> del p
    >>> subprocess._active
    [<subprocess.Popen object at 0x0000015E5C94E160>]
    >>> subprocess._active[0]._handle.Close()

The process is active, but something closed our handle. The next time Popen.__init__ calls _cleanup, _internal_poll raises OSError due to the invalid handle:

    >>> subprocess.call('python -V')
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "C:\Program Files\Python37\lib\subprocess.py", line 323, in call
        with Popen(*popenargs, **kwargs) as p:
      File "C:\Program Files\Python37\lib\subprocess.py", line 664, in __init__
        _cleanup()
      File "C:\Program Files\Python37\lib\subprocess.py", line 228, in _cleanup
        res = inst._internal_poll(_deadstate=sys.maxsize)
      File "C:\Program Files\Python37\lib\subprocess.py", line 1216, in _internal_poll
        if _WaitForSingleObject(self._handle, 0) == _WAIT_OBJECT_0:
    OSError: [WinError 6] The handle is invalid

I wouldn't want _internal_poll to silence this error, but maybe it could be translated into a warning. That said, since there's no need to wait on a process in Windows, there's no need for __del__ to insert a running process into the list of active processes, in which case there's nothing for _cleanup to do. Given that we're in __del__, our private _handle is about to be closed, at least in CPython. Other implementations should use wait() or a with statement in order to explicitly close the process handle. The latter isn't implemented currently, but you opened issue 37410 to address this.
msg346994 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-07-01 10:41
> subprocess._active[0]._handle.Close()

Why would you do that? You should not access the private _active list, nor access the private _handle attribute. I understand that it's a way to trigger such bug, but is it possible to trigger this bug without accessing any private attribute?


> I wouldn't want _internal_poll to silence this error, but maybe it could be translated into a warning

I disagree with that. It's very bad if the handle suddenly becomes invalid for no obvious reason. It's better to get a hard error (exception) in such a case.
msg347006 - (view) Author: Eryk Sun (eryksun) * (Python triager) Date: 2019-07-01 11:44
>> subprocess._active[0]._handle.Close()
>
> Why would you do that? You should not access the private _active list, 
> nor access the private _handle attribute. I understand that it's a way 
> to trigger such bug, but is it possible to trigger this bug without 
> accessing any private attribute?

The example is just emulating a problem from someone else's code that's closing our handle. Typically this situation occurs because the code is holding onto a handle value for a kernel object (File, Section, Job, Process, Thread, Event, etc) that got closed. The handle value eventually gets reused, such as for our _handle. 

>> I wouldn't want _internal_poll to silence this error, but maybe it 
>> could be translated into a warning
>
> I disagree with that. It's very bad is suddenly the handle becomes 
> invalid for no obvious reason. It's better to get an hard error 
> (exception) in such case.

I agree, but the exception breaks Popen.__init__. It would have to be ignored or translated to a warning somewhere if we continue to poll a list of active processes every time __init__() is called. But since the latter is unnecessary in Windows, I suggested just skipping it.
msg347016 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-07-01 13:17
> The example is just emulating a problem from someone else's code that's closing our handle. Typically this situation occurs because the code is holding onto a handle value for a kernel object (File, Section, Job, Process, Thread, Event, etc) that got closed.

Without accessing private attributes, I don't see how someone can discover the private handle. So for me, it's more of a serious bug in the application, no? Blindly closing random handles doesn't sound like a good idea to me.

> The handle value eventually gets reused, such as for our _handle.

That's a side effect of blindly closing random handles.
msg347019 - (view) Author: Eryk Sun (eryksun) * (Python triager) Date: 2019-07-01 13:49
> Without accessing private attributes, I don't see how someone can 
> discover the private handle. So for me, it's more a serious bug in an 
> application, no? Blindly closing random handles doesn't sound like a 
> good idea to me.

Say a library calls CreateEventW and gets handle 32. It passes this handle to some other library, which uses the event and closes the handle when it no longer needs it. But due to a miscommunication in the documentation, the first library thinks the handle remains open. Now handle 32 is free for reuse, but the library doesn't know this. subprocess.Popen subsequently calls CreateProcessW and gets handle 32. Later on, the library closes handle 32, making it invalid, at least until it gets assigned to some other kernel object.
msg347024 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-07-01 14:44
> Say a library calls CreateEventW and gets handle 32. It passes this handle to some other library, which uses the event and closes the handle when it no longer needs it. But due to a miscommunication in the documentation, the first library thinks the handle remains open. Now handle 32 is free for reuse, but the library doesn't know this. subprocess.Popen subsequently calls CreateProcessW and gets handle 32. Later on, the library closes handle 32, making it invalid, at least until it gets assigned to some other kernel object.

So yeah, it's a severe bug in an application. An application should not close the wrong handle by mistake :-) 

Thanks for the explanation. So it would be "nice" to backport the change to reduce the impact of such application bugs, but it's not really a bug in Python itself. Python cannot fully protect developers from this class of bugs.

On Unix, there is a similar bug with applications trying to close a file descriptor which is already closed. It can lead to very severe bugs (crashes):
https://bugs.python.org/issue18748
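The Unix analogue of the handle-reuse hazard can be shown in a few lines with file descriptors (illustrative, single-threaded; POSIX guarantees open() returns the lowest available descriptor):

```python
import os

# "Library A" opens a descriptor, then closes it...
fd = os.open(os.devnull, os.O_RDONLY)
os.close(fd)

# ...but keeps the stale number around. The kernel reuses the number
# for the very next open (lowest available descriptor).
fd2 = os.open(os.devnull, os.O_RDONLY)
reused = (fd2 == fd)

# If library A now "closes" its stale fd again, it silently closes an
# unrelated object -- the bug class described in bpo-18748.
os.close(fd2)
```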
msg348231 - (view) Author: Emanuel Zephir (emanuel) Date: 2019-07-21 08:07
I believe this to be a bug in the standard library instead of solely being the result of an application error.

When a Popen instance is finalized by the garbage collector, the internal handle is also finalized and closed despite the instance being put on the active list. This results in _cleanup throwing because the handle can no longer be used.

I've attached a small reproduction.
msg351205 - (view) Author: Chip Lynch (Chip Lynch) Date: 2019-09-05 15:22
I see there is a merged patch for this (https://github.com/python/cpython/pull/14360); is it possible to get it tagged for backport to 3.7? I haven't seen the 3.7.5 tag drop yet, so I wanted to recommend that it be included; this bug is giving us fits. On our Windows 10 dev boxes and Windows 2016 servers, emanuel's repro fails on 3.5.4, 3.6.8, 3.7.4, and 3.8.0b4, but apparently not on 2.7 (I think we tested 2.7.12). Per vstinner's request, I agree and would like to see it get 3.7 and 3.8 backports.

Many thanks!
msg351209 - (view) Author: Steve Dower (steve.dower) * (Python committer) Date: 2019-09-05 16:01
I triggered automatic backports to see how they go. Doesn't necessarily mean we'll merge them without looking more closely, but may as well check out what our automated tools suggest.
msg351235 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-09-06 09:13
> When a Popen instance is finalized by the garbage collector, the internal handle is also finalized and closed despite the instance being put on the active list. This results in _cleanup throwing because the handle can no longer be used.

Right, that's a more practical case where this bug still occurs on Python 3.8 and older. I would suggest fixing the applications, which should get a ResourceWarning in this case.

Chip Lynch wrote me an email saying that his team is impacted by the bug on Windows with Python 3.7.

I tried to write a simpler patch ignoring ERROR_INVALID_HANDLE, but then I read this issue's discussion again: Eryk Sun and I agree that ignoring ERROR_INVALID_HANDLE is a bad idea; it can make things worse by silencing a real bug.

So I now agree to backport the fix from master to 3.7 and 3.8 branches.

I prefer to leave 2.7 unchanged even if it's affected. I don't want to take the risk of introducing another regression in 2.7. Moreover, subprocess.Popen has a __del__() method. Python 2.7 handles reference cycles differently than Python 3: it doesn't implement finalizers (PEP 442). While Python 2.7 might be affected, it should be affected differently.
msg351236 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-09-06 09:14
New changeset 1e2707d7e82aedf73c59772bc7aa228286323c3c by Victor Stinner (Miss Islington (bot)) in branch '3.7':
bpo-37380: subprocess: don't use _active on win (GH-14360) (GH-15706)
https://github.com/python/cpython/commit/1e2707d7e82aedf73c59772bc7aa228286323c3c
msg351237 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-09-06 09:14
New changeset 4d1abedce9422473af2ac78047e55cde73208208 by Victor Stinner (Miss Islington (bot)) in branch '3.8':
bpo-37380: subprocess: don't use _active on win (GH-14360) (GH-15707)
https://github.com/python/cpython/commit/4d1abedce9422473af2ac78047e55cde73208208
msg351238 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-09-06 09:20
Thanks Ruslan Kuprieiev for the bug report and the fix. Thanks Eryk Sun (as usual!) for the great analysis of this issue.

Ok, the bug should now be fixed in 3.7, 3.8 and master branches.

I chose to not fix Python 2.7: see my previous comment.

See bpo-37410 for the follow-up: "[Windows] subprocess: close the handle when the process completes".
msg351240 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2019-09-06 09:29
Steve Dower: "I triggered automatic backports to see how they go. Doesn't necessarily mean we'll merge them without looking more closely, but may as well check out what our automated tools suggest."

Thanks, I merged the backports.
msg351261 - (view) Author: Eryk Sun (eryksun) * (Python triager) Date: 2019-09-06 16:48
> When a Popen instance is finalized by the garbage collector, the 
> internal handle is also finalized and closed despite the instance 
> being put on the active list. This results in _cleanup throwing 
> because the handle can no longer be used.

Thanks. I hadn't considered that. It can lead to unusual behavior like this when __del__ keeps an object alive. The object can't assume that referenced objects haven't already been finalized. All of the subsequently required instance methods should take this into account. For example, Popen._internal_poll could set the return code to -1 if self._handle.closed is true. That's at a higher level, and much more reasonable, than trying to handle ERROR_INVALID_HANDLE, which could mask bugs.
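The suggestion above could look something like this (a self-contained sketch with fake stand-ins for the _winapi calls and subprocess's Handle wrapper; not the stdlib implementation):

```python
import sys

WAIT_OBJECT_0 = 0
WAIT_TIMEOUT = 258

class FakeHandle:
    # Stand-in for subprocess's Handle wrapper.
    def __init__(self, closed=False, exited=False, code=0):
        self.closed = closed
        self._exited = exited
        self._code = code

def wait_for_single_object(handle, timeout_ms):
    # Stand-in for _winapi.WaitForSingleObject.
    if handle.closed:
        raise OSError(6, "The handle is invalid")
    return WAIT_OBJECT_0 if handle._exited else WAIT_TIMEOUT

class Proc:
    def __init__(self, handle):
        self._handle = handle
        self.returncode = None

    def _internal_poll(self, _deadstate=None):
        # The proposed check: a handle already finalized during gc is
        # treated as a dead process instead of letting the wait raise.
        if self.returncode is None:
            if self._handle.closed:
                self.returncode = _deadstate
            elif wait_for_single_object(self._handle, 0) == WAIT_OBJECT_0:
                self.returncode = self._handle._code
        return self.returncode

p = Proc(FakeHandle(closed=True))
rc = p._internal_poll(_deadstate=sys.maxsize)  # no OSError raised
```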
msg370325 - (view) Author: Sylvain Corlay (sylvain.corlay) * Date: 2020-05-29 16:49
Hello,

Is there a way to work around this bug for Python 3.6? It seems that the fix was only backported as far as 3.7, and we were not planning on dropping Python 3.6 support quite yet in our project.
msg370326 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2020-05-29 17:29
Sadly, 3.6 no longer gets bug fixes, only security fixes:
https://devguide.python.org/#status-of-python-branches
msg370328 - (view) Author: Sylvain Corlay (sylvain.corlay) * Date: 2020-05-29 17:47
Yes, I understand that.

I was wondering if there was a workaround that we could use to not drop 3.6
support right away!

msg370334 - (view) Author: Steve Dower (steve.dower) * (Python committer) Date: 2020-05-29 19:00
> import subprocess
> subprocess._cleanup = lambda: None
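Steve's two-line workaround, in context (for 3.6, where the fix was never backported): Popen.__init__ looks up _cleanup as a module global, so replacing it with a no-op means the stale _active entries are never polled:

```python
import subprocess
import sys

# Apply the workaround before creating any processes: from now on,
# Popen.__init__ calls this no-op instead of polling _active.
subprocess._cleanup = lambda: None

# New processes can be created even if _active holds a Popen whose
# handle was invalidated elsewhere.
p = subprocess.Popen([sys.executable, "-c", "pass"])
p.wait()
```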
History
Date | User | Action | Args
2022-04-11 14:59:17 | admin | set | github: 81561
2020-05-29 19:00:45 | steve.dower | set | messages: + msg370334
2020-05-29 17:47:48 | sylvain.corlay | set | messages: + msg370328
2020-05-29 17:29:04 | vstinner | set | messages: + msg370326
2020-05-29 16:49:45 | sylvain.corlay | set | nosy: + sylvain.corlay; messages: + msg370325
2019-09-06 16:48:09 | eryksun | set | messages: + msg351261
2019-09-06 09:29:20 | vstinner | set | messages: + msg351240
2019-09-06 09:20:35 | vstinner | set | status: open -> closed; resolution: fixed; messages: + msg351238; stage: backport needed -> resolved
2019-09-06 09:14:41 | vstinner | set | messages: + msg351237
2019-09-06 09:14:41 | vstinner | set | messages: + msg351236
2019-09-06 09:13:59 | vstinner | set | messages: + msg351235
2019-09-05 16:01:19 | steve.dower | set | stage: patch review -> backport needed; versions: + Python 3.8, Python 3.9
2019-09-05 16:01:11 | steve.dower | set | messages: + msg351209
2019-09-05 16:00:56 | miss-islington | set | pull_requests: + pull_request15361
2019-09-05 16:00:49 | miss-islington | set | pull_requests: + pull_request15360
2019-09-05 15:22:12 | Chip Lynch | set | nosy: + Chip Lynch; messages: + msg351205
2019-07-21 08:07:31 | emanuel | set | files: + invalid_handle.py; nosy: + emanuel; messages: + msg348231
2019-07-01 14:44:50 | vstinner | set | messages: + msg347024
2019-07-01 13:49:36 | eryksun | set | messages: + msg347019
2019-07-01 13:17:17 | vstinner | set | messages: + msg347016
2019-07-01 11:44:01 | eryksun | set | messages: + msg347006
2019-07-01 10:41:39 | vstinner | set | messages: + msg346994
2019-07-01 09:30:39 | eryksun | set | messages: + msg346985
2019-07-01 08:19:56 | vstinner | set | messages: + msg346973
2019-06-29 17:11:18 | efiop | set | messages: + msg346889
2019-06-28 16:13:20 | vstinner | set | messages: + msg346828
2019-06-28 16:12:20 | vstinner | set | messages: + msg346827
2019-06-26 23:13:03 | vstinner | set | messages: + msg346695
2019-06-26 15:36:42 | vstinner | set | messages: + msg346638
2019-06-26 15:28:33 | eryksun | set | messages: + msg346635
2019-06-26 12:07:35 | vstinner | set | messages: + msg346606
2019-06-26 03:43:46 | eryksun | set | messages: + msg346581
2019-06-26 01:56:21 | steve.dower | set | messages: + msg346578
2019-06-25 22:49:47 | vstinner | set | messages: + msg346568
2019-06-25 01:20:33 | efiop | set | messages: + msg346470
2019-06-25 01:09:41 | efiop | set | pull_requests: + pull_request14176
2019-06-25 00:14:04 | vstinner | set | nosy: + vstinner; messages: + msg346461
2019-06-23 21:24:26 | eryksun | set | messages: + msg346343
2019-06-23 19:46:35 | efiop | set | messages: + msg346338
2019-06-23 19:13:45 | python-dev | set | keywords: + patch; stage: patch review; pull_requests: + pull_request14147
2019-06-23 19:04:59 | eryksun | set | nosy: + eryksun; messages: + msg346333
2019-06-23 18:47:26 | xtreak | set | nosy: + gregory.p.smith
2019-06-23 18:42:24 | efiop | create |