Message277146
Maciej Fijalkowski also sent me the following article a few months ago; it explains indirectly why using the minimum for benchmarks is not reliable:
"Virtual Machine Warmup Blows Hot and Cold"
http://arxiv.org/pdf/1602.00602.pdf
Even though the article focuses on JIT compilers, it shows that benchmarks are not straightforward but are always full of bad surprises.
A benchmark doesn't have a single value but a *distribution*. The real question is how to summarize the full distribution without losing too much information.
In the perf module I decided not to decide: a JSON file stores *all* data :-D But by default, perf displays mean +- std dev.
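To illustrate why mean +- std dev is a more honest summary than the minimum, here is a minimal sketch using the standard library's statistics module. The sample values are made up for the example; a real perf run stores all raw timings in its JSON output.

```python
import statistics

# Hypothetical timings in seconds per run; the last value simulates
# a slow outlier such as an unwarmed-up run.
samples = [0.102, 0.098, 0.105, 0.101, 0.097, 0.250]

# Summarizing with the minimum hides the distribution entirely:
# it reports the single fastest run and ignores the outlier.
fastest = min(samples)
print("min: %.3f" % fastest)

# Mean +- standard deviation keeps a sense of the spread, so noisy
# or unstable runs remain visible in the summary.
mean = statistics.mean(samples)
stdev = statistics.stdev(samples)
print("mean +- std dev: %.3f +- %.3f" % (mean, stdev))
```

With these made-up samples the minimum looks perfectly stable, while the large standard deviation immediately reveals that something disturbed the benchmark.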
Date | User | Action | Args
2016-09-21 14:20:43 | vstinner | set | recipients: + vstinner, tim.peters, brett.cannon, pitrou, ned.deily, fijall, serhiy.storchaka, yselivanov
2016-09-21 14:20:43 | vstinner | set | messageid: <1474467643.8.0.892269378533.issue28240@psf.upfronthosting.co.za>
2016-09-21 14:20:43 | vstinner | link | issue28240 messages
2016-09-21 14:20:43 | vstinner | create |