
Author: vstinner
Recipients: rbcollins, serhiy.storchaka, vstinner
Date: 2016-06-09.23:07:32
Hi,

I am developing a new implementation of timeit that should be more reliable:
http://perf.readthedocs.io/en/latest/

* Run 25 processes instead of just 1
* Compute average and standard deviation rather than the minimum
* Don't disable the garbage collector
* Skip the first timing to "warm up" the benchmark (see the sketch after this list)
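
As an illustration (this is not perf's actual implementation, just a sketch of the idea using only the standard library; the statement and process count are made-up):

# Sketch of the approach above: run the same micro-benchmark in several
# fresh Python processes, drop the first "warmup" run, and report the
# average +/- standard deviation instead of the minimum.
import statistics
import subprocess
import sys

STMT = "sorted(range(1000))"   # hypothetical statement to benchmark
NPROC = 25                     # one timing per fresh process

# "gc.enable()" in the setup keeps the garbage collector running,
# since timeit disables it by default during timing.
CHILD = (
    "import timeit;"
    "print(timeit.timeit({stmt!r}, setup='import gc; gc.enable()', number=1000))"
).format(stmt=STMT)

timings = []
for _ in range(NPROC + 1):     # +1 for the warmup run
    out = subprocess.check_output([sys.executable, "-c", CHILD])
    timings.append(float(out))

timings = timings[1:]          # skip the first timing (warmup)
print("mean: %.6f s, stdev: %.6f s"
      % (statistics.mean(timings), statistics.stdev(timings)))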

Using the minimum and disabling the garbage collector are bad practices; they are not reliable:

* multiple processes are needed to test different random hash functions, since Python's hash function is randomized by default in Python 3
* Linux also randomizes the address space by default (ASLR), so the exact timing of memory accesses differs in each process
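
The hash randomization point is easy to demonstrate (a throwaway snippet; the string is arbitrary):

# Each fresh Python 3 interpreter hashes the same string differently,
# so dict/set layouts -- and hence micro-benchmark timings -- vary
# from one process to the next.
import subprocess
import sys

for _ in range(3):
    out = subprocess.check_output([sys.executable, "-c", "print(hash('timeit'))"])
    print(out.decode().strip())
# Typically prints three different values, unless PYTHONHASHSEED is fixed.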

My blog post "My journey to stable benchmark, part 3 (average)" explains in depth the problems with using the minimum:
https://haypo.github.io/journey-to-stable-benchmark-average.html

My perf module is very young and still a work in progress, but it should already be more reliable than timeit. It works on Python 2.7 and 3 (I tested 3.4).

We could pick the best ideas from it for the timeit module.
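
For example, the "average and standard deviation instead of minimum" idea already works with the stdlib today (a sketch, not a proposed patch; single process only, so it does not cover the multi-process part):

# Report mean and standard deviation over timeit.repeat() runs
# instead of taking min().
import statistics
import timeit

runs = timeit.repeat("sorted(range(1000))",          # hypothetical statement
                     setup="import gc; gc.enable()",  # keep GC enabled
                     repeat=10, number=1000)

runs = runs[1:]                                       # drop the warmup run
print("mean: %.6f s, stdev: %.6f s"
      % (statistics.mean(runs), statistics.stdev(runs)))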

See also my article explaining how to tune Linux to reduce operating system "noise" on microbenchmarks:
https://haypo.github.io/journey-to-stable-benchmark-system.html
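
One such technique, CPU pinning, can even be done from Python itself (a sketch of the general idea, not taken from the article; the CPU number is arbitrary):

# Pin the benchmark process to a single CPU so the scheduler does not
# migrate it between cores mid-run (Linux-only API, Python 3.3+).
import os

os.sched_setaffinity(0, {3})   # pid 0 = current process; CPU 3 is arbitrary
print("running on CPUs:", os.sched_getaffinity(0))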