This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in the Python Developer's Guide.

classification
Title: Ignored ResourceWarning warnings leak memory in warnings registries
Type: resource usage Stage: resolved
Components: IO Versions: Python 3.7
process
Status: open Resolution:
Dependencies: Superseder:
Assigned To: vstinner Nosy List: Decorater, jbfzs, ncoghlan, r.david.murray, rhettinger, serhiy.storchaka, thehesiod, vstinner, Александр Карпинский
Priority: low Keywords: patch

Created on 2016-07-17 01:53 by Александр Карпинский, last changed 2022-04-11 14:58 by admin.

Files
File name Uploaded Description Edit
memory.py Александр Карпинский, 2016-07-17 01:53
memory2.py vstinner, 2017-11-21 15:15
bench_ignore_warn.py vstinner, 2017-11-21 23:54
bench_c_warn.patch vstinner, 2017-11-24 21:17
bench_ignore_warn_c.py vstinner, 2017-11-24 21:17
Pull Requests
URL Status Linked Edit
PR 4489 merged vstinner, 2017-11-21 15:09
PR 4502 closed vstinner, 2017-11-22 16:34
PR 4508 merged vstinner, 2017-11-22 21:56
PR 4516 merged vstinner, 2017-11-23 09:33
PR 4587 closed python-dev, 2017-11-27 16:04
PR 4588 closed vstinner, 2017-11-27 16:08
Messages (39)
msg270600 - (view) Author: Александр Карпинский (Александр Карпинский) Date: 2016-07-17 01:53
Actually, this issue is related to the warnings module. The test script creates a lot of files with different names and deletes them. On the first pass the script calls the `f.close()` method for every file. On the second it doesn't.

As a result, on the first pass, the memory consumption is constant, about 3.9 MB in my environment. On the second pass, memory consumption grows steadily, up to 246 MB for 1 million files, i.e. the leak is about 254 bytes per opened file.

This happens because of the warnings about unclosed files. The memory is consumed by storing every warning message ever generated (one per file name) in order to suppress further warnings with the same message — which never helps here, because every file has a new name.

While not calling f.close() **might** lead to memory issues, the warning which notifies about it is **absolutely guaranteed** to lead to one. Although f.close() is highly recommended, this is still an issue for code which doesn't call it for some reason and doesn't experience other problems with unclosed files, because files are in fact always closed in IOBase.__del__.

This issue was discovered in this report: https://github.com/python-pillow/Pillow/issues/2019
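The behaviour can be reproduced with a short script in the spirit of the attached memory.py (a sketch with hypothetical file names; it assumes CPython, where dropping the last reference runs IOBase.__del__ immediately):

```python
import os
import tempfile
import warnings

tmpdir = tempfile.mkdtemp()
with warnings.catch_warnings():
    # Force the "default" action so each unique message text is cached.
    warnings.simplefilter("default", ResourceWarning)
    warnings.showwarning = lambda *args, **kwargs: None  # silence the output
    for i in range(200):
        path = os.path.join(tmpdir, "__%d.tmp" % i)
        f = open(path, "wb")
        f = None  # drop the last reference: __del__ warns "unclosed file ..."
        os.remove(path)
os.rmdir(tmpdir)

# Count the (text, category, lineno) keys left in this module's registry.
leaked = sum(isinstance(key, tuple)
             for key in globals().get("__warningregistry__", {}))
```

Each unique "unclosed file <... name='__N.tmp' ...>" message leaves one key behind in `__warningregistry__`, which is exactly the growth described above.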


The best way to fix this is to exclude the file name from the warning message. As a result, the message will be the same for every file and the warnings registry will not grow. Currently, the message looks like this:

./memory.py:23: ResourceWarning: unclosed file <_io.FileIO name='__999.tmp' mode='wb' closefd=True>
  open_files(number, close)

It might look like this instead:

./memory.py:23: ResourceWarning: unclosed file <FileIO mode='wb' closefd=True>
  open_files(number, close)
msg270608 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2016-07-17 05:38
The file name is helpful for determining the source of the leak. If you don't want to close files in your script, set an appropriate warning filter to silence this kind of warnings.
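For reference, the suggested silencing looks like this (a minimal sketch; as discussed later in this issue, before the fix an ignored warning was still recorded in the warnings registry):

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    # Silence this kind of warnings, as suggested above.
    warnings.simplefilter("ignore", ResourceWarning)
    warnings.warn("unclosed file <_io.FileIO name='x.tmp'>", ResourceWarning)

assert caught == []  # the warning was filtered out and never shown
```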
msg270609 - (view) Author: Александр Карпинский (Александр Карпинский) Date: 2016-07-17 05:46
@serhiy.storchaka

Filters don't solve the problem, because the warnings module SAVES EVERY WARNING MESSAGE for later duplication checks.

Yes, the file name is helpful for determining the source of a POSSIBLE leak. But the file name in the warning message IS ITSELF THE SOURCE OF A LEAK. When you are aware of unclosed files, you can log the file names — you have a way to do that. But if you want to ignore the warning, the memory leaks anyway: not because of unclosed files, but because of the warning itself.

I do want to close files in my script, but in Pillow, it is not so easy due to backward compatibility.
msg270644 - (view) Author: R. David Murray (r.david.murray) * (Python committer) Date: 2016-07-17 15:11
I recommend rejecting this.  Properly closing files is the correct programming habit, which is why the ResourceWarning exists.  The Pillow example seems to just indicate that people using Pillow need to pay attention to the ResourceWarning and close the files they open with Pillow.
msg270647 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2016-07-17 15:32
Thus, the problem is that even ignored warnings are saved. If there is a simple fix for this, we can apply it. But if there is no simple and straightforward way, it may not be worth complicating the code for such a case. I'll look at the code.
msg270666 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2016-07-17 20:15
I would like to see this fixed.  It is rather unfortunate that a tool designed to help find possible resource leaks is itself a certain resource leak.
msg306636 - (view) Author: Jonathan Bastien-Filiatrault (jbfzs) Date: 2017-11-21 13:35
We just got hit by this. We had one specific case where files with unique names were not being closed and Python was leaking a lot of memory over a few days.
msg306645 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-11-21 14:46
The problem is not the function displaying the warning, but the warnings registries (__warningregistry__), no?

The behaviour depends on the action of the filter matching the ResourceWarning warning:

* always: don't touch the registry => no leak
* ignore: add the key to registry => leak!
* another action: add the key to registry => leak!

The key is the tuple (text, category, lineno).

I understand why "always" doesn't touch the registry. But why does the "ignore" action touch it? Not only does the user not see the message, but memory slowly grows in the background.
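The per-action behaviour described above can be checked from Python with a small experiment (a sketch; `registry_growth` is a hypothetical helper, and passing registry= to warn_explicit makes the caching observable):

```python
import warnings

def registry_growth(action, n=100):
    """Emit n warnings with unique texts under `action` and count the
    (text, category, lineno) keys left behind in the registry."""
    registry = {}
    with warnings.catch_warnings():
        warnings.simplefilter(action)
        warnings.showwarning = lambda *args, **kwargs: None  # silence output
        for i in range(n):
            warnings.warn_explicit("unclosed file __%d.tmp" % i,
                                   ResourceWarning, "demo.py", 1,
                                   registry=registry)
    # Skip the non-tuple 'version' bookkeeping key.
    return sum(isinstance(key, tuple) for key in registry)

print(registry_growth("always"))   # 0   -- "always" never touches the registry
print(registry_growth("default"))  # 100 -- one key per unique message text
# registry_growth("ignore") was 100 before PR 4489 and is 0 after it
```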
msg306648 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-11-21 15:15
I created PR 4489 to fix the memory leak for the "ignore" action. IMHO it's worthwhile to modify the "ignore" action because Python ignores many warnings by default:

haypo@selma$ python3 -c 'import pprint, warnings; pprint.pprint(warnings.filters)'
[('ignore', None, <class 'DeprecationWarning'>, None, 0),
 ('ignore', None, <class 'PendingDeprecationWarning'>, None, 0),
 ('ignore', None, <class 'ImportWarning'>, None, 0),
 ('ignore', None, <class 'BytesWarning'>, None, 0),
 ('ignore', None, <class 'ResourceWarning'>, None, 0)]


I created memory2.py (based on the attached memory.py) to measure the leak.

Currently, the "always" action doesn't leak, but the "ignore" and "default" actions do:

haypo@selma$ ./python -W always memory2.py 
Memory peak grow: +3.2 kB
Warning filters:
[('always',
  re.compile('', re.IGNORECASE),
  <class 'Warning'>,
  re.compile(''),
  0),
 ('ignore', None, <class 'BytesWarning'>, None, 0),
 ('always', None, <class 'ResourceWarning'>, None, 0)]

haypo@selma$ ./python -W ignore memory2.py 
Memory peak grow: +26692.3 kB
Warning filters:
[('ignore',
  re.compile('', re.IGNORECASE),
  <class 'Warning'>,
  re.compile(''),
  0),
 ('ignore', None, <class 'BytesWarning'>, None, 0),
 ('always', None, <class 'ResourceWarning'>, None, 0)]

haypo@selma$ ./python -W default memory2.py 
Memory peak grow: +26693.2 kB
Warning filters:
[('default',
  re.compile('', re.IGNORECASE),
  <class 'Warning'>,
  re.compile(''),
  0),
 ('ignore', None, <class 'BytesWarning'>, None, 0),
 ('always', None, <class 'ResourceWarning'>, None, 0)]


With my PR 4489, the "ignore" action doesn't leak anymore:

haypo@selma$ ./python -W ignore memory2.py 
Memory peak grow: +2.4 kB
Warning filters:
[('ignore',
  re.compile('', re.IGNORECASE),
  <class 'Warning'>,
  re.compile(''),
  0),
 ('ignore', None, <class 'BytesWarning'>, None, 0),
 ('always', None, <class 'ResourceWarning'>, None, 0)]
msg306649 - (view) Author: Jonathan Bastien-Filiatrault (jbfzs) Date: 2017-11-21 15:17
@vstinner Yes, from what I saw, the leak was from the registry / deduplication logic.
msg306652 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-11-21 15:31
> The best way to fix this is to exclude the file name from the warning message. As a result, the message will be the same for every file and the warnings registry will not grow.

Your warning comes from the io module which uses the message: "unclosed file %r" % file.

I dislike the idea of removing the filename from this warning message, since the filename is very useful to identify where the bug comes from.

Different things are discussed here:

* First of all, if an application is correctly written, ResourceWarning is not emitted, and so Python doesn't leak memory

* Using the "always" action, ResourceWarning doesn't leak either.

* Using a different action like "default", ResourceWarning does leak memory in hidden warning registries.

While I don't see how we can avoid "leaking memory" (growing the registry) for actions like "once", I think that it's OK not to touch the registry for the "ignore" action. So at least, an application ignoring ResourceWarning will not leak anymore.

But it will still "leak" when you display ResourceWarning warnings with an action different from "always". In that case, IMHO the root issue lies more with the code that doesn't close the resource than with Python itself.
msg306661 - (view) Author: Jonathan Bastien-Filiatrault (jbfzs) Date: 2017-11-21 15:54
> But it will still "leak" when you display ResourceWarning warnings with an action different from "always". In that case, IMHO the root issue lies more with the code that doesn't close the resource than with Python itself.

Not closing a file is a bug, but under normal circumstances it causes no leak by itself. The fact that the warnings module leaks in this case seems a problem. Had I logged warnings correctly, I would have found the bug by looking at the application log rather than by investigating the cause of the OOM killer invocation.

IMHO, the warnings module should have upper bounds on memory consumption to avoid DOSing itself.
msg306664 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2017-11-21 16:08
> While I don't see how we can avoid "leaking memory" (growing the registry) for actions like "once", I think that it's OK not to touch the registry for the "ignore" action. So at least, an application ignoring ResourceWarning will not leak anymore.

This is caching for speed. If the same line of code emits the same warning, it is enough to test the registry instead of applying all warning filters again.

I think we should find a better compromise between performance (when all messages are equal) and memory usage (when all messages are different). Since caching for the "ignore" action affects only performance, not semantics, we could check whether too many "ignore" warnings are cached for the same line. The problem is how to do this efficiently. And what counts as "too many"?
msg306695 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-11-21 23:54
Attached bench_ignore_warn.py: a microbenchmark (using the perf module) to measure the cost of emitting a warning when the warning is ignored.

I added two basic optimizations to my PR 4489. With these optimizations, the slowdown is +17% on this microbenchmark:

$ ./python -m perf compare_to ref.json patch.json 
Mean +- std dev: [ref] 903 ns +- 70 ns -> [patch] 1.06 us +- 0.06 us: 1.17x slower (+17%)

The slowdown was much larger without optimizations, +42%:

$ ./python -m perf compare_to ref.json ignore.json 
Mean +- std dev: [ref] 881 ns +- 59 ns -> [ignore] 1.25 us +- 0.08 us: 1.42x slower (+42%)


About the memory vs CPU tradeoff: we are talking about roughly 1000 ns. IMHO 1000 ns is "cheap" (fast), and emitting warnings is a rare event in Python. I prefer making warnings slower to "leaking" memory (the current behaviour: the warnings registry grows without limit).
msg306696 - (view) Author: Decorater (Decorater) * Date: 2017-11-22 00:23
If it were me, I would store the warning registry in an error log or something in the current directory that Python was run from (maybe something like ``[main script name].log``?). This way it generates the warnings as usual and does not eat up memory. And the log could be turned off when warnings aren't wanted (when ignored).
msg306712 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2017-11-22 10:04
Touching the filesystem implicitly can have all sorts of unintentional side effects (especially in read-only environments), so we strongly prefer not to do that.

The most general solution here would be to turn the warnings registry into an LRU cache - short-lived applications wouldn't see any changes, while long-lived applications with changing warning messages would hit the upper bound and then stay there.

I also like Victor's change to have the "ignore" option skip touching the registry. If we decide we still want to optimise out the linear filter scan for those cases, then I'd suggest lazily building a dispatch table with separate per-category warnings filter lists (updating the dispatch table on the next emitted warning after the filter list changes), and then only falling back to scanning the full filter list for warning categories that aren't an exact match for anything in the dispatch table.
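The LRU-cache idea could be sketched like this (a hypothetical class, not CPython code; the key layout is the (text, category, lineno) tuples discussed above):

```python
from collections import OrderedDict

class LRURegistry:
    """A bounded warnings registry: remembers at most `maxsize` keys,
    evicting the least recently used one when the bound is exceeded."""

    def __init__(self, maxsize=1000):
        self.maxsize = maxsize
        self._seen = OrderedDict()

    def already_warned(self, key):
        if key in self._seen:
            self._seen.move_to_end(key)  # refresh recency
            return True
        return False

    def mark(self, key):
        self._seen[key] = True
        self._seen.move_to_end(key)
        while len(self._seen) > self.maxsize:
            self._seen.popitem(last=False)  # drop the oldest key

reg = LRURegistry(maxsize=2)
for i in range(5):
    reg.mark(("unclosed file __%d.tmp" % i, ResourceWarning, 23))

assert len(reg._seen) == 2  # bounded: only the two newest keys remain
```

Short-lived scripts never hit the bound, while a long-lived process with ever-changing warning texts plateaus instead of growing without limit.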
msg306716 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2017-11-22 10:23
I think it is common that filters don't depend on the warning text. In that case we can test and set the key (None, category, lineno) before (text, category, lineno).
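This two-level lookup might be sketched as follows (a hypothetical helper, not the actual _warnings implementation): when a whole category is already handled for a line regardless of text, a text-free key short-circuits the lookup, so unique message texts never reach the registry.

```python
def already_handled(registry, text, category, lineno):
    # Check the text-independent key first, then the per-message key.
    return bool(registry.get((None, category, lineno))
                or registry.get((text, category, lineno)))

# The whole ResourceWarning category is marked for line 23:
registry = {(None, ResourceWarning, 23): True}
assert already_handled(registry, "unclosed file __1.tmp", ResourceWarning, 23)
assert already_handled(registry, "unclosed file __2.tmp", ResourceWarning, 23)
assert len(registry) == 1  # no per-text keys were needed
```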
msg306721 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-11-22 11:09
> $ ./python -m perf compare_to ref.json patch.json 
> Mean +- std dev: [ref] 903 ns +- 70 ns -> [patch] 1.06 us +- 0.06 us: 1.17x slower (+17%)

We are talking about a difference of 157 nanoseconds. On the same laptop, a Python function call which does nothing already takes 76.6 ns +- 4.1 ns. So the overhead of my PR 4489 is the cost of two Python function calls which do nothing. Do you think that's an unacceptable overhead?
msg306723 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-11-22 11:29
The performance bottleneck of warnings.warn() is the dance between the C _warnings module and the Python warnings module. The C code retrieves many attributes, like warnings.filters, at each call, and does conversion from Python objects to C objects.

There is already a mechanism to invalidate a "cache" in the C module: warnings._filters_mutated() is called by warnings.filterwarnings(), for example.

Maybe the C module could convert all filters at once into an efficient C structure, and throw it away on cache invalidation.

The problem is that I'm not sure it's OK to implement such a deeper cache at the C level. Is it part of the warnings "semantics" to allow users to modify warnings.filters directly? Must the C module always look in sys.modules to check that the 'warnings' module didn't change?

Outside test_warnings, do you know a use case where the lookup of the 'warnings' module and 'warnings.filters' must be done at *each* warnings.warn() call?
msg306725 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-11-22 12:23
I implemented a cache for warnings.filters in C. With my WIP patch which doesn't touch the registry for the ignore action, warnings.warn() is now faster than the current code ;-)

haypo@selma$ ./python -m perf compare_to ref.json patch.json 
Mean +- std dev: [ref] 938 ns +- 72 ns -> [patch] 843 ns +- 57 ns: 1.11x faster (-10%)

There is a single test in test_warnings which modifies directly warnings.filters:

    @support.cpython_only
    def test_issue31416(self):
        # warn_explicit() shouldn't cause an assertion failure in case of a
        # bad warnings.filters or warnings.defaultaction.
        wmod = self.module
        with original_warnings.catch_warnings(module=wmod):
            wmod.filters = [(None, None, Warning, None, 0)]
            with self.assertRaises(TypeError):
                wmod.warn_explicit('foo', Warning, 'bar', 1)

            wmod.filters = []
            with support.swap_attr(wmod, 'defaultaction', None), \
                 self.assertRaises(TypeError):
                wmod.warn_explicit('foo', Warning, 'bar', 1)

I don't think that it's common to modify warnings.filters directly. Maybe we can make this sequence immutable in the public API to prevent misuse of the warnings API? To force users to use warnings.simplefilter() and warnings.filterwarnings()?

IMHO the main use of modifying warnings.filters directly is to save/restore filters. But there is already a helper for that: warnings.catch_warnings().
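For reference, a minimal demonstration that catch_warnings() snapshots the filter list on entry and restores it on exit, so callers don't have to copy warnings.filters themselves:

```python
import warnings

before = list(warnings.filters)
with warnings.catch_warnings():
    warnings.simplefilter("ignore")          # scoped change only
    assert warnings.filters[0][0] == "ignore"
after = list(warnings.filters)

assert before == after  # the original filters were restored on exit
```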
msg306735 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-11-22 16:35
I published my WIP work on optimizing warnings: PR 4502, it's based on PR 4489. The code should be cleaned up, especially the "_PyRuntime" part. The default action caching should be fixed.
msg306758 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-11-22 22:51
New changeset 82656276caf4cb889193572d2d14dbc5f3d2bdff by Victor Stinner in branch 'master':
bpo-27535: Optimize warnings.warn() (#4508)
https://github.com/python/cpython/commit/82656276caf4cb889193572d2d14dbc5f3d2bdff
msg306773 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-11-23 00:43
My PR 4502 implements all optimizations, but also converts warnings.filters to a tuple, to detect attempts to modify warnings.filters directly rather than through function calls.

Problem: test.libregrtest module uses a pattern like this:

  old_filters = warnings.filters[:]  # in function 1
  (...)
  warnings.filters[:] = old_filters  # in function 2

While this pattern is perfectly fine when filters is a list, "warnings.filters[:] = old_filters" fails since a tuple is immutable.

Again, the question is whether it's OK to break such code. I'm no longer sure that catch_warnings() completely handles this use case, and a context manager is not convenient in the specific case of test.libregrtest.
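A small sketch of the pattern in question, and of why an immutable tuple breaks it:

```python
import warnings

# Save/restore by slice assignment works only while filters is a list:
old_filters = warnings.filters[:]   # snapshot (function 1)
warnings.simplefilter("ignore")     # something mutates the filters
warnings.filters[:] = old_filters   # in-place restore (function 2)
assert warnings.filters == old_filters

# With an immutable tuple, the same restore raises TypeError:
frozen = tuple(old_filters)
try:
    frozen[:] = old_filters
except TypeError:
    pass  # tuples do not support slice assignment
```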
msg306828 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-11-23 16:13
New changeset b98f1715a35d2cbd1e9e45e1e7ae51a39e00dc4a by Victor Stinner in branch 'master':
bpo-27535: Cleanup create_filter() (#4516)
https://github.com/python/cpython/commit/b98f1715a35d2cbd1e9e45e1e7ae51a39e00dc4a
msg306919 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-11-24 21:17
New benchmark on emitting a warning which is ignored, benchmarking PR 4489.


Warning emitted in Python, warnings.warn():

vstinner@apu$ ./python -m perf compare_to master.json ignore.json 
Mean +- std dev: [master] 705 ns +- 24 ns -> [ignore] 838 ns +- 18 ns: 1.19x slower (+19%)

==> +133 ns


Warning emitted in C, PyErr_WarnEx():

vstinner@apu$ python3 -m perf compare_to master2.json ignore2.json 
Mean +- std dev: [master2] 702 ns +- 9 ns -> [ignore2] 723 ns +- 9 ns: 1.03x slower (+3%)

==> +21 ns


C benchmark, attached files:

* bench_c_warn.patch
* bench_ignore_warn_c.py
msg306920 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2017-11-24 21:22
These results look strange to me. I expected the same difference but a smaller base time.
msg306923 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-11-24 21:37
> These results look strange to me. I expected the same difference but a smaller base time.

Honestly, I was also surprised.

I checked the whole benchmark twice. I also rebased the PR locally to make sure that it's not a side effect of a recent change in master.

Results are reliable and reproducible.

FYI I ran the C benchmark on my laptop "apu" using CPU isolation.

--

I reproduced the benchmark on my other "selma" laptop, without CPU isolation (so less reliable).

C benchmark:

haypo@selma$ python3 -m perf compare_to master.json ignore.json 
Mean +- std dev: [master] 932 ns +- 66 ns -> [ignore] 1.01 us +- 0.05 us: 1.09x slower (+9%)

==> +78 ns
msg307057 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-11-27 15:57
New changeset c9758784eb321fb9771e0bc7205b296e4d658045 by Victor Stinner in branch 'master':
bpo-27535: Fix memory leak with warnings ignore (#4489)
https://github.com/python/cpython/commit/c9758784eb321fb9771e0bc7205b296e4d658045
msg307058 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-11-27 16:00
https://github.com/python/cpython/pull/4489#issuecomment-346673704

Serhiy: "Could you please make a benchmark for warnings emitted in a tight loop in the C code. It would be nice to know what is the worst case. If it is less than 2x slower I will prefer this simpler PR."

I merged my PR since it's not 2x slower. I also prefer the simple PR 4489, rather than the complex PR 4502.

I rejected my PR 4502 since it's backward incompatible. I don't think that it's worth it, even if it's faster.
msg307059 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2017-11-27 16:04
Thank you Victor!
msg307061 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-11-27 16:09
I consider that PR 4489 is a bugfix, so I proposed backports to Python 2.7 (PR 4588) and 3.6 (PR 4587).
msg307133 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-11-28 15:45
Victor: "I consider that PR 4489 is a bugfix, so I proposed backports to Python 2.7 (PR 4588) and 3.6 (PR 4587)."

Serhiy: Are you OK with backporting the change to the stable branches?
msg307135 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2017-11-28 15:54
I'm not sure. Ask the RM for 3.6.

As for 2.7, does this problem exist in 2.7?
msg307137 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-11-28 15:56
Serhiy: "As for 2.7, does this problem exist in 2.7?"

Yes, see my PR 4588. If you run the test without the fix, the test fails.
msg307139 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2017-11-28 16:01
I meant the original issue. AFAIK ResourceWarning is not raised in 2.7, and other warnings are less likely to have unique messages. It seems the problem is less critical in 2.7.
msg307161 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-11-28 20:38
Serhiy: "I meant the original issue. AFAIK ResourceWarning is not raised in 2.7, and other warnings are less likely to have unique messages. It seems the problem is less critical in 2.7."

Oh yes, sorry, I forgot your comment and I forgot that Python 2.7 doesn't have ResourceWarning. I closed my PR for Python 2.7: https://github.com/python/cpython/pull/4588
msg307165 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2017-11-28 20:48
I ran a quick benchmark on Python 3.6: current code (3.6 branch, ref) => PR 4587 (patch):

vstinner@apu$ python3 -m perf compare_to ref.json patch.json 
Mean +- std dev: [ref] 597 ns +- 10 ns -> [patch] 830 ns +- 15 ns: 1.39x slower (+39%)

I don't want to backport an optimization to a stable branch, so I prefer not to backport the warnings change for the ignore action to Python 3.6. I rejected my PR 4587.

The bug was fixed in master. I don't want to backport the change to 2.7 or 3.6, so I'm closing the issue.
msg317421 - (view) Author: Alexander Mohr (thehesiod) * Date: 2018-05-23 16:16
Not fixing this means that 3.6 slowly leaks for many people in production. It's not always possible to fix all the warnings in large dynamic applications, so I highly suggest finding a way to get this into 3.6. I bet there are a lot of frustrated people out there who don't know why their applications slowly leak.
msg317460 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2018-05-23 21:26
Alexander Mohr reported a memory leak in his code using botocore (the Python client for Amazon S3): bpo-33565. His code emitted ResourceWarning warnings, but these warnings are ignored by default. The link between ignored warnings and a memory leak is non-obvious.

Serhiy: what do you think of backporting my _warnings change for the ignore action?
History
Date User Action Args
2022-04-11 14:58:33  admin  set  github: 71722
2018-05-23 21:26:16  vstinner  set  status: closed -> open; resolution: fixed -> ; messages: + msg317460
2018-05-23 16:16:29  thehesiod  set  nosy: + thehesiod; messages: + msg317421
2017-11-28 20:48:10  vstinner  set  status: open -> closed; resolution: fixed; messages: + msg307165; stage: patch review -> resolved
2017-11-28 20:38:18  vstinner  set  messages: + msg307161
2017-11-28 16:01:56  serhiy.storchaka  set  messages: + msg307139
2017-11-28 15:56:48  vstinner  set  messages: + msg307137
2017-11-28 15:54:51  serhiy.storchaka  set  messages: + msg307135
2017-11-28 15:45:59  vstinner  set  messages: + msg307133
2017-11-27 16:09:32  vstinner  set  messages: + msg307061
2017-11-27 16:08:28  vstinner  set  pull_requests: + pull_request4511
2017-11-27 16:04:29  python-dev  set  pull_requests: + pull_request4510
2017-11-27 16:04:05  serhiy.storchaka  set  assignee: serhiy.storchaka -> vstinner; messages: + msg307059; versions: + Python 3.7, - Python 3.6
2017-11-27 16:00:28  vstinner  set  messages: + msg307058
2017-11-27 15:57:14  vstinner  set  messages: + msg307057
2017-11-24 21:37:01  vstinner  set  messages: + msg306923
2017-11-24 21:22:12  serhiy.storchaka  set  messages: + msg306920
2017-11-24 21:17:36  vstinner  set  files: + bench_ignore_warn_c.py
2017-11-24 21:17:28  vstinner  set  files: + bench_c_warn.patch; messages: + msg306919
2017-11-23 16:13:46  vstinner  set  messages: + msg306828
2017-11-23 09:33:08  vstinner  set  pull_requests: + pull_request4453
2017-11-23 00:43:39  vstinner  set  messages: + msg306773
2017-11-22 22:51:45  vstinner  set  messages: + msg306758
2017-11-22 21:56:09  vstinner  set  pull_requests: + pull_request4445
2017-11-22 16:35:27  vstinner  set  messages: + msg306735
2017-11-22 16:34:31  vstinner  set  pull_requests: + pull_request4440
2017-11-22 12:23:34  vstinner  set  messages: + msg306725
2017-11-22 11:29:35  vstinner  set  messages: + msg306723
2017-11-22 11:09:52  vstinner  set  messages: + msg306721
2017-11-22 10:23:57  serhiy.storchaka  set  messages: + msg306716
2017-11-22 10:04:00  ncoghlan  set  nosy: + ncoghlan; messages: + msg306712
2017-11-22 00:23:10  Decorater  set  nosy: + Decorater; messages: + msg306696
2017-11-21 23:54:27  vstinner  set  files: + bench_ignore_warn.py; messages: + msg306695
2017-11-21 16:08:13  serhiy.storchaka  set  messages: + msg306664
2017-11-21 15:54:01  jbfzs  set  messages: + msg306661
2017-11-21 15:31:14  vstinner  set  messages: + msg306652
2017-11-21 15:24:40  vstinner  set  title: Memory leaks when opening tons of files -> Ignored ResourceWarning warnings leak memory in warnings registries
2017-11-21 15:17:45  jbfzs  set  messages: + msg306649
2017-11-21 15:15:14  vstinner  set  files: + memory2.py; messages: + msg306648
2017-11-21 15:09:14  vstinner  set  keywords: + patch; stage: patch review; pull_requests: + pull_request4427
2017-11-21 14:46:11  vstinner  set  nosy: + vstinner; messages: + msg306645
2017-11-21 13:35:16  jbfzs  set  nosy: + jbfzs; messages: + msg306636
2016-07-17 20:15:07  rhettinger  set  nosy: + rhettinger; messages: + msg270666
2016-07-17 15:32:58  serhiy.storchaka  set  priority: normal -> low; assignee: serhiy.storchaka; messages: + msg270647; versions: + Python 3.6, - Python 3.4, Python 3.5
2016-07-17 15:11:14  r.david.murray  set  nosy: + r.david.murray; messages: + msg270644
2016-07-17 05:46:24  Александр Карпинский  set  messages: + msg270609
2016-07-17 05:38:04  serhiy.storchaka  set  nosy: + serhiy.storchaka; messages: + msg270608
2016-07-17 01:53:43  Александр Карпинский  create