This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: test_multiprocessing: memory leaks
Type: resource usage
Stage: resolved
Components: Extension Modules
Versions: Python 3.3

process
Status: closed
Resolution: wont fix
Dependencies:
Superseder:
Assigned To:
Nosy List: neologix, pitrou, sbt, skrah, vstinner
Priority: normal
Keywords:

Created on 2011-10-02 09:10 by skrah, last changed 2022-04-11 14:57 by admin. This issue is now closed.

Messages (13)
msg144764 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2011-10-02 09:10
Seen in test_multiprocessing:

==31662== 37 bytes in 1 blocks are definitely lost in loss record 629 of 10,548
==31662==    at 0x4C2154B: malloc (vg_replace_malloc.c:236)
==31662==    by 0x53BBE9: PyBytes_FromStringAndSize (bytesobject.c:98)
==31662==    by 0x4E2363: posix_read (posixmodule.c:6980)
==31662==    by 0x5759D8: PyCFunction_Call (methodobject.c:81)
==31662==    by 0x48E294: call_function (ceval.c:3980)
==31662==    by 0x4895E1: PyEval_EvalFrameEx (ceval.c:2605)
==31662==    by 0x48C54F: PyEval_EvalCodeEx (ceval.c:3355)
==31662==    by 0x48E786: fast_function (ceval.c:4078)
==31662==    by 0x48E3C7: call_function (ceval.c:4001)
==31662==    by 0x4895E1: PyEval_EvalFrameEx (ceval.c:2605)
==31662==    by 0x48C54F: PyEval_EvalCodeEx (ceval.c:3355)
==31662==    by 0x48E786: fast_function (ceval.c:4078)
msg153383 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2012-02-15 01:36
I have a hard time believing posix_read is the culprit. Perhaps, on the other hand, that object is "lost" by test_multiprocessing without being decref'ed enough?
msg153441 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2012-02-15 21:56
Yes, the leak does not seem to be in posix_read() itself. Sadly
enough, the number of leaks in test_multiprocessing has grown to
five in e9d01c5c92ed (the four additional ones were definitely not
present when I reported this).


==3047== 32 bytes in 1 blocks are definitely lost in loss record 340 of 6,704
==3047==    at 0x4C27878: malloc (vg_replace_malloc.c:236)
==3047==    by 0x4D5C35: PyThread_allocate_lock (thread_pthread.h:280)
==3047==    by 0x4A817C: _PyImport_AcquireLock (import.c:292)
==3047==    by 0x4ADE7A: PyImport_ImportModuleLevelObject (import.c:2903)
==3047==    by 0x4ADF64: PyImport_ImportModuleLevel (import.c:2922)
==3047==    by 0x4AF8BB: PyImport_Import (import.c:3510)
==3047==    by 0x4AD8AF: PyImport_ImportModule (import.c:2728)
==3047==    by 0x5567E9: _PyExc_Init (exceptions.c:2345)
==3047==    by 0x4BBA8F: Py_InitializeEx (pythonrun.c:277)
==3047==    by 0x4BBCA8: Py_Initialize (pythonrun.c:341)
==3047==    by 0x4D875C: Py_Main (main.c:595)
==3047==    by 0x41ABFF: main (python.c:63)
==3047== 


==3047== 37 bytes in 1 blocks are definitely lost in loss record 390 of 6,704
==3047==    at 0x4C27878: malloc (vg_replace_malloc.c:236)
==3047==    by 0x53F997: PyBytes_FromStringAndSize (bytesobject.c:98)
==3047==    by 0x4E5AB8: posix_read (posixmodule.c:7010)
==3047==    by 0x57B4AD: PyCFunction_Call (methodobject.c:81)
==3047==    by 0x490397: call_function (ceval.c:4051)
==3047==    by 0x48B826: PyEval_EvalFrameEx (ceval.c:2674)
==3047==    by 0x48E509: PyEval_EvalCodeEx (ceval.c:3426)
==3047==    by 0x4908D7: fast_function (ceval.c:4149)
==3047==    by 0x4904EA: call_function (ceval.c:4072)
==3047==    by 0x48B826: PyEval_EvalFrameEx (ceval.c:2674)
==3047==    by 0x48E509: PyEval_EvalCodeEx (ceval.c:3426)
==3047==    by 0x4908D7: fast_function (ceval.c:4149)
==3047== 


==3047== 72 bytes in 1 blocks are definitely lost in loss record 2,859 of 6,704
==3047==    at 0x4C27878: malloc (vg_replace_malloc.c:236)
==3047==    by 0x4DA93E: _PyObject_GC_Malloc (gcmodule.c:1455)
==3047==    by 0x4DAA5A: _PyObject_GC_NewVar (gcmodule.c:1487)
==3047==    by 0x427A77: PyTuple_New (tupleobject.c:91)
==3047==    by 0x427F3A: PyTuple_Pack (tupleobject.c:195)
==3047==    by 0x494607: compiler_add_o (compile.c:1024)
==3047==    by 0x4947EA: compiler_addop_o (compile.c:1056)
==3047==    by 0x49B1BB: compiler_visit_expr (compile.c:3341)
==3047==    by 0x49AEBB: compiler_visit_expr (compile.c:3293)
==3047==    by 0x498B3A: compiler_visit_stmt (compile.c:2411)
==3047==    by 0x494B32: compiler_body (compile.c:1247)
==3047==    by 0x494C33: compiler_mod (compile.c:1267)


==3047== 120 bytes in 3 blocks are definitely lost in loss record 3,649 of 6,704
==3047==    at 0x4C27878: malloc (vg_replace_malloc.c:236)
==3047==    by 0x4DC686: thread_PyThread_start_new_thread (_threadmodule.c:1052)
==3047==    by 0x57B4AD: PyCFunction_Call (methodobject.c:81)
==3047==    by 0x490397: call_function (ceval.c:4051)
==3047==    by 0x48B826: PyEval_EvalFrameEx (ceval.c:2674)
==3047==    by 0x4907CB: fast_function (ceval.c:4139)
==3047==    by 0x4904EA: call_function (ceval.c:4072)
==3047==    by 0x48B826: PyEval_EvalFrameEx (ceval.c:2674)
==3047==    by 0x48E509: PyEval_EvalCodeEx (ceval.c:3426)
==3047==    by 0x56263C: function_call (funcobject.c:669)
==3047==    by 0x532B87: PyObject_Call (abstract.c:2150)
==3047==    by 0x549EF6: method_call (classobject.c:320)


==3047== 120 bytes in 3 blocks are definitely lost in loss record 3,650 of 6,704
==3047==    at 0x4C27878: malloc (vg_replace_malloc.c:236)
==3047==    by 0x4DC686: thread_PyThread_start_new_thread (_threadmodule.c:1052)
==3047==    by 0x57B4AD: PyCFunction_Call (methodobject.c:81)
==3047==    by 0x490397: call_function (ceval.c:4051)
==3047==    by 0x48B826: PyEval_EvalFrameEx (ceval.c:2674)
==3047==    by 0x4907CB: fast_function (ceval.c:4139)
==3047==    by 0x4904EA: call_function (ceval.c:4072)
==3047==    by 0x48B826: PyEval_EvalFrameEx (ceval.c:2674)
==3047==    by 0x48E509: PyEval_EvalCodeEx (ceval.c:3426)
==3047==    by 0x4908D7: fast_function (ceval.c:4149)
==3047==    by 0x4904EA: call_function (ceval.c:4072)
==3047==    by 0x48B826: PyEval_EvalFrameEx (ceval.c:2674)
msg153442 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2012-02-15 21:59
Stefan Krah <report@bugs.python.org> wrote:
> enough, the number of leaks in test_multiprocessing has grown to
> five in e9d01c5c92ed (the four additional ones were definitely not

To be sure, e9d01c5c92ed is completely innocent. I just tested
with that revision. :)
msg153443 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2012-02-15 22:04
The first one seems to be the import lock. Two others seem to be the internal "bootstate" structure used to run thread objects (daemon threads?).

Perhaps you could try to find whether a single test is responsible.
msg153701 - (view) Author: Charles-François Natali (neologix) * (Python committer) Date: 2012-02-19 12:39
> Two others seem to be the internal "bootstate" structure used to
> run thread objects (daemon threads?).

Even for daemon threads, since the bootstate structure is freed when
the thread's run() method returns, this shouldn't show up as a
"definitely lost" leak. It wouldn't be freed on exit, but it should
still be reachable.

However, I have a stupid question: are those logs for the main
process, or for child processes?
Because if fork() is called while other threads are running, since
only the main thread exists in the child process, all the blocks only
reachable from the other threads' stacks at the time of the fork -
among which the bootstate structure, and probably the buffer allocated
in posix_read() while a thread is blocked on the read() syscall - are
effectively leaked in the child process since they're not referenced
anymore.
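
A minimal, POSIX-only Python sketch of the scenario described above - not part of the original message; the pipe, the reader() helper, the 37-byte read size and the sleep are purely illustrative:

    import os
    import threading
    import time

    r, w = os.pipe()

    def reader():
        # posix_read() allocates its result buffer with
        # PyBytes_FromStringAndSize() before blocking in the read() syscall.
        os.read(r, 37)

    t = threading.Thread(target=reader)
    t.start()
    time.sleep(0.5)          # let the reader thread block inside read()

    pid = os.fork()          # only the forking thread exists in the child
    if pid == 0:
        # In the child, nothing references the reader's pending read buffer
        # (or its bootstate) any more, so it can show up to valgrind as
        # "definitely lost" for that process.
        os._exit(0)
    os.waitpid(pid, 0)
    os.write(w, b"x" * 37)   # unblock the reader in the parent
    t.join()
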
msg153720 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2012-02-19 19:27
Charles-François Natali <report@bugs.python.org> wrote:
> However, I have a stupid question: are those logs for the main
> process, or for child processes?

Valgrind was run with --trace-children=no. However, the option is a bit
tricky, since it only affects tracing of sub-processes initiated via the
exec system call.

Even with --trace-children=no Valgrind *does* trace into the child of a fork:

http://valgrind.org/docs/manual/manual-core.html#manual-core.basicopts

> Because if fork() is called while other threads are running, since
> only the main thread exists in the child process, all the blocks only
> reachable from the other threads' stacks at the time of the fork -
> among which the bootstate structure, and probably the buffer allocated
> in posix_read() while a thread is blocked on the read() syscall - are
> effectively leaked in the child process since they're not referenced
> anymore.

All lines of the log I posted are prefixed with ==3047==, which should
be the PID of the main process.

I don't know if Valgrind reports leaks in a forked-without-exec child
as part of the parent process's log, but that could be the case here.

Regarding the posix_read leak: Running with --num-callers=50 shows
that it's also thread related:

==8736== 37 bytes in 1 blocks are definitely lost in loss record 383 of 10,841
==8736==    at 0x4C2154B: malloc (vg_replace_malloc.c:236)
==8736==    by 0x5476A5: PyBytes_FromStringAndSize (bytesobject.c:98)
==8736==    by 0x4EC94B: posix_read (posixmodule.c:7010)
==8736==    by 0x58417C: PyCFunction_Call (methodobject.c:81)
==8736==    by 0x496AE0: call_function (ceval.c:4051)
==8736==    by 0x491D87: PyEval_EvalFrameEx (ceval.c:2674)
==8736==    by 0x494D5D: PyEval_EvalCodeEx (ceval.c:3426)
==8736==    by 0x496FD2: fast_function (ceval.c:4149)
==8736==    by 0x496C13: call_function (ceval.c:4072)
==8736==    by 0x491D87: PyEval_EvalFrameEx (ceval.c:2674)
==8736==    by 0x494D5D: PyEval_EvalCodeEx (ceval.c:3426)
==8736==    by 0x496FD2: fast_function (ceval.c:4149)
==8736==    by 0x496C13: call_function (ceval.c:4072)
==8736==    by 0x491D87: PyEval_EvalFrameEx (ceval.c:2674)
==8736==    by 0x494D5D: PyEval_EvalCodeEx (ceval.c:3426)
==8736==    by 0x496FD2: fast_function (ceval.c:4149)
==8736==    by 0x496C13: call_function (ceval.c:4072)
==8736==    by 0x491D87: PyEval_EvalFrameEx (ceval.c:2674)
==8736==    by 0x494D5D: PyEval_EvalCodeEx (ceval.c:3426)
==8736==    by 0x56A637: function_call (funcobject.c:669)
==8736==    by 0x53A203: PyObject_Call (abstract.c:2150)
==8736==    by 0x497B32: ext_do_call (ceval.c:4366)
==8736==    by 0x49213B: PyEval_EvalFrameEx (ceval.c:2715)
==8736==    by 0x496EC7: fast_function (ceval.c:4139)
==8736==    by 0x496C13: call_function (ceval.c:4072)
==8736==    by 0x491D87: PyEval_EvalFrameEx (ceval.c:2674)
==8736==    by 0x496EC7: fast_function (ceval.c:4139)
==8736==    by 0x496C13: call_function (ceval.c:4072)
==8736==    by 0x491D87: PyEval_EvalFrameEx (ceval.c:2674)
==8736==    by 0x494D5D: PyEval_EvalCodeEx (ceval.c:3426)
==8736==    by 0x56A637: function_call (funcobject.c:669)
==8736==    by 0x53A203: PyObject_Call (abstract.c:2150)
==8736==    by 0x551FE1: method_call (classobject.c:320)
==8736==    by 0x53A203: PyObject_Call (abstract.c:2150)
==8736==    by 0x49638A: PyEval_CallObjectWithKeywords (ceval.c:3931)
==8736==    by 0x4E347F: t_bootstrap (_threadmodule.c:997)
==8736==    by 0x4E2DFC6: start_thread (in /lib/libpthread-2.7.so)
==8736==    by 0x579C64C: clone (in /lib/libc-2.7.so)
==8736==
msg153721 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2012-02-19 19:38
> Regarding the posix_read leak: Running with --num-callers=50 shows
> that it's also thread related:

Would be nice to know if it's a daemon thread. Daemon threads are simply
killed by the OS at shutdown.
msg203259 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2013-11-18 08:45
If the result of os.read() was stored in a Python daemon thread, the memory should be released since the following changeset. Can someone check if this issue still exists?

changeset:   87070:c2a13acd5e2b
user:        Victor Stinner <victor.stinner@gmail.com>
date:        Tue Nov 12 16:37:55 2013 +0100
files:       Lib/test/test_threading.py Misc/NEWS Python/pythonrun.c
description:
Close #19466: Clear the frames of daemon threads earlier during the Python
shutdown to call objects destructors. So "unclosed file" resource warnings are
now correctly emitted for daemon threads.
msg203281 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2013-11-18 11:30
The leaks still exist here.
msg203293 - (view) Author: Richard Oudkerk (sbt) * (Python committer) Date: 2013-11-18 14:05
> If the result of os.read() was stored in a Python daemon thread, the 
> memory should be released since the following changeset. Can someone 
> check if this issue still exists?

If a daemon thread is killed while it is blocking on os.read() then the bytes object used as the read buffer will never be decrefed.
msg203294 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2013-11-18 14:07
> If a daemon thread is killed while it is blocking on os.read()
> then the bytes object used as the read buffer will never be decrefed.

Ah yes, so another reason to ensure that daemon threads exit normally at Python shutdown :-)
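
A hedged illustration of that case - again not part of the original discussion; the pipe, the blocked_reader() helper and the 37-byte read are made up:

    import os
    import threading
    import time

    r, w = os.pipe()          # nothing is ever written to w

    def blocked_reader():
        # posix_read() allocates a bytes object as the read buffer; the
        # daemon thread never returns from read(), so it is never decref'ed.
        os.read(r, 37)

    t = threading.Thread(target=blocked_reader, daemon=True)
    t.start()
    time.sleep(0.5)           # let the daemon thread enter the read() syscall
    # Normal interpreter exit: the daemon thread is abandoned while still
    # blocked in os.read(), and under valgrind its buffer can show up as
    # definitely lost.
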
msg229303 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2014-10-14 15:27
Well, since I was the one who opened this: it seems difficult, if
not impossible, to fix the problem, so we can probably close the
issue.

[Please just reopen if you disagree.]
History
Date                 User      Action  Args
2022-04-11 14:57:22  admin     set     github: 57299
2014-10-14 15:27:43  skrah     set     status: open -> closed
2014-10-14 15:27:32  skrah     set     resolution: wont fix
                                       messages: + msg229303
                                       stage: needs patch -> resolved
2013-11-18 14:07:37  vstinner  set     messages: + msg203294
2013-11-18 14:05:54  sbt       set     nosy: + sbt
                                       messages: + msg203293
2013-11-18 11:30:14  skrah     set     messages: + msg203281
2013-11-18 08:45:45  vstinner  set     messages: + msg203259
2012-02-19 19:38:54  pitrou    set     messages: + msg153721
2012-02-19 19:27:13  skrah     set     messages: + msg153720
2012-02-19 12:39:45  neologix  set     messages: + msg153701
2012-02-15 22:04:56  pitrou    set     nosy: + neologix
                                       messages: + msg153443
2012-02-15 21:59:52  skrah     set     messages: + msg153442
2012-02-15 21:56:42  skrah     set     messages: + msg153441
                                       title: posix_read: memory leak -> test_multiprocessing: memory leaks
2012-02-15 01:36:38  pitrou    set     nosy: + pitrou
                                       messages: + msg153383
2011-10-02 16:07:55  pitrou    set     nosy: + vstinner
2011-10-02 09:10:29  skrah     create