Author: methane
Recipients: methane, rhettinger, serhiy.storchaka, xiang.zhang
Date: 2017-01-26 18:54:31
dict-refactoring-3.patch:

$ ../python.default -m perf compare_to default.json patched2.json -G --min-speed=2
Slower (7):
- scimark_lu: 422 ms +- 35 ms -> 442 ms +- 11 ms: 1.05x slower (+5%)
- logging_silent: 736 ns +- 7 ns -> 761 ns +- 21 ns: 1.03x slower (+3%)
- scimark_sor: 482 ms +- 8 ms -> 494 ms +- 7 ms: 1.03x slower (+3%)
- meteor_contest: 200 ms +- 2 ms -> 205 ms +- 2 ms: 1.02x slower (+2%)
- unpickle: 32.2 us +- 0.4 us -> 32.9 us +- 0.5 us: 1.02x slower (+2%)
- unpickle_pure_python: 829 us +- 13 us -> 848 us +- 14 us: 1.02x slower (+2%)
- scimark_sparse_mat_mult: 8.71 ms +- 0.32 ms -> 8.89 ms +- 0.13 ms: 1.02x slower (+2%)

Faster (8):
- unpack_sequence: 132 ns +- 2 ns -> 123 ns +- 2 ns: 1.07x faster (-7%)
- call_simple: 14.3 ms +- 0.5 ms -> 13.4 ms +- 0.3 ms: 1.07x faster (-6%)
- call_method: 15.1 ms +- 0.1 ms -> 14.5 ms +- 0.2 ms: 1.04x faster (-4%)
- mako: 40.7 ms +- 0.5 ms -> 39.6 ms +- 0.5 ms: 1.03x faster (-3%)
- scimark_monte_carlo: 266 ms +- 7 ms -> 258 ms +- 6 ms: 1.03x faster (-3%)
- chameleon: 30.4 ms +- 0.4 ms -> 29.6 ms +- 0.4 ms: 1.03x faster (-3%)
- xml_etree_parse: 319 ms +- 11 ms -> 312 ms +- 15 ms: 1.02x faster (-2%)
- pickle_pure_python: 1.28 ms +- 0.03 ms -> 1.26 ms +- 0.02 ms: 1.02x faster (-2%)
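
(For reference, default.json and patched2.json above are full runs of the performance suite on the unpatched and patched builds. Assuming the suite's default options, they would have been produced with something like the following; exact invocation assumed:)

$ ../python.default -m performance run -o default.json
$ ./python -m performance run -o patched2.json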

# microbench

$ ./python -m perf timeit --compare-to=`pwd`/python.default -s 'r=range(1000)' -- '{k:k for k in r}'                                                                               
python.default: ..................... 60.0 us +- 0.3 us
python: ..................... 61.7 us +- 0.4 us

Median +- std dev: [python.default] 60.0 us +- 0.3 us -> [python] 61.7 us +- 0.4 us: 1.03x slower (+3%)

$ ./python -m perf timeit --compare-to=`pwd`/python.default -s 'ks=[str(k) for k in range(1000)]; d={k:k for k in ks}' -- 'for k in ks: d[k]'                                      
python.default: ..................... 37.1 us +- 0.9 us
python: ..................... 37.7 us +- 0.9 us

Median +- std dev: [python.default] 37.1 us +- 0.9 us -> [python] 37.7 us +- 0.9 us: 1.02x slower (+2%)
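
(If perf isn't available, roughly equivalent numbers can be obtained with the stdlib timeit CLI; these are illustrative equivalents, not the commands measured above:)

$ ./python -m timeit -s 'r=range(1000)' '{k:k for k in r}'
$ ./python -m timeit -s 'ks=[str(k) for k in range(1000)]; d={k:k for k in ks}' 'for k in ks: d[k]'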

Hmm, 3% slower?
I'll rerun the benchmarks with a PGO+LTO build.
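
(That is, an optimized build along these lines; exact configure options assumed:)

$ ./configure --enable-optimizations --with-lto
$ make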