Message306695
Attached bench_ignore_warn.py: a microbenchmark (using the perf module) that measures the cost of emitting a warning when the warning is ignored.
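For readers without the attachment, here is a minimal sketch of what such a microbenchmark could look like with the perf module's Runner API; the benchmark name, warning category, and exact statement are assumptions, not the contents of bench_ignore_warn.py:

import perf  # third-party perf module (later renamed pyperf)

runner = perf.Runner()
# Time warnings.warn() while a filter ignores the warning category,
# so only the "emit an ignored warning" path is measured.
runner.timeit(
    "ignored warning",
    stmt="warnings.warn('bench', DeprecationWarning)",
    setup=(
        "import warnings; "
        "warnings.simplefilter('ignore', DeprecationWarning)"
    ),
)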
I added two basic optimizations to my PR 4489. With these optimizations, the slowdown is +17% on this microbenchmark:
$ ./python -m perf compare_to ref.json patch.json
Mean +- std dev: [ref] 903 ns +- 70 ns -> [patch] 1.06 us +- 0.06 us: 1.17x slower (+17%)
Without these optimizations, the slowdown was much larger, +42%:
$ ./python -m perf compare_to ref.json ignore.json
Mean +- std dev: [ref] 881 ns +- 59 ns -> [ignore] 1.25 us +- 0.08 us: 1.42x slower (+42%)
About the memory vs CPU tradeoff: we are talking about roughly 1000 ns. IMHO 1000 ns is "cheap" (fast), and emitting warnings is a rare event in Python. I prefer making warnings slower over "leaking" memory (the current behaviour: the warnings registry grows with no limit).
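As a hedged illustration of that "leak" (this is not from the attachment): on interpreters that still record ignored warnings in the hidden registry, every distinct ignored message adds an entry to the caller's __warningregistry__ dict, so unique messages make it grow without bound; after the fix it should stay empty.

import warnings

warnings.simplefilter("ignore", UserWarning)

for i in range(1000):
    # A unique message each time defeats the (text, category, lineno) key,
    # so each call can add a new registry entry on unpatched interpreters.
    warnings.warn("message #%d" % i, UserWarning)

registry = globals().get("__warningregistry__", {})
print("registry entries:", len(registry))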