This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

Author vstinner
Recipients gvanrossum, mark.dickinson, methane, ned.deily, skrah, vstinner, yselivanov
Date 2018-01-26.23:16:30
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1517008590.46.0.467229070634.issue32630@psf.upfronthosting.co.za>
In-reply-to
Content
> Likewise, on the same builds, running _decimal/tests/bench.py does not show a significant difference: https://gist.github.com/elprans/fb31510ee28a3aa091aee3f42fe65e00

Note: it may be interesting to rewrite this benchmark with my perf module, to be able to easily check whether a benchmark result is significant.

http://perf.readthedocs.io/en/latest/cli.html#perf-compare-to

"perf determines whether two samples differ significantly using a Student’s two-sample, two-tailed t-test with alpha equals to 0.95."

=> https://en.wikipedia.org/wiki/Student's_t-test
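To illustrate the check perf performs, here is a minimal sketch of a two-sample, two-tailed Student's t-test on timing samples. The `is_significant` helper, the `t_critical` threshold, and the sample values are my own illustrations, not perf's actual implementation:

```python
# Sketch of a two-sample, two-tailed Student's t-test on timing
# samples, similar in spirit to what perf compare_to does.
from math import sqrt
from statistics import mean, stdev

def is_significant(samples_a, samples_b, t_critical=2.0):
    """Return True if the difference in means looks significant.

    t_critical ~ 2.0 roughly approximates the two-tailed 5%
    threshold for moderately large sample sizes.
    """
    na, nb = len(samples_a), len(samples_b)
    ma, mb = mean(samples_a), mean(samples_b)
    # Pooled standard deviation (equal-variance t-test)
    sp = sqrt(((na - 1) * stdev(samples_a) ** 2 +
               (nb - 1) * stdev(samples_b) ** 2) / (na + nb - 2))
    t = abs(ma - mb) / (sp * sqrt(1 / na + 1 / nb))
    return t > t_critical

# Made-up timings (seconds): a ~10% gap with little noise
fast = [1.00, 1.02, 0.99, 1.01, 1.00, 1.02]
slow = [1.10, 1.12, 1.09, 1.11, 1.10, 1.13]
print(is_significant(fast, slow))  # True: gap dwarfs the noise
print(is_significant(fast, fast))  # False: identical samples
```

In practice you would not hand-roll this: perf compare_to runs the test for you, and scipy.stats offers ttest_ind with exact p-values.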


Usually, I consider that anything between 5% slower and 5% faster is not significant. But it depends on how the benchmark was run, on the type of benchmark, etc. Here I don't know bench.py, so I cannot judge.

For example, I'm more interested in an optimization making a benchmark 10% faster ;-)
History
Date User Action Args
2018-01-26 23:16:30  vstinner  set  recipients: + vstinner, gvanrossum, mark.dickinson, ned.deily, methane, skrah, yselivanov
2018-01-26 23:16:30  vstinner  set  messageid: <1517008590.46.0.467229070634.issue32630@psf.upfronthosting.co.za>
2018-01-26 23:16:30  vstinner  link  issue32630 messages
2018-01-26 23:16:30  vstinner  create