Author haypo
Recipients amaury.forgeotdarc, haypo, merwok, ncoghlan, pitrou, rhettinger, scott_daniels, tshepang
Date 2012-06-28.00:46:07
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1340844369.41.0.55095615737.issue6422@psf.upfronthosting.co.za>
In-reply-to
Content
Hi, I recently wrote a similar function because timeit is not reliable by default: results look random, and you have to run the same benchmark 3 times or more on the command line.

https://bitbucket.org/haypo/misc/src/tip/python/benchmark.py

By default, the benchmark takes at least 5 measures, each measure should take longer than 100 ms, and the whole benchmark should not take longer than 1 second. I chose these parameters to get reliable results on microbenchmarks like "abc".encode("utf-8").
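The stop conditions above (at least five measures, a total budget of about one second) could be sketched like this; the names, defaults and exact stopping rule here are my guesses for illustration, not the script's actual code:

```python
import time

def take_measures(func, loops, min_repeat=5, max_total=1.0,
                  timer=time.perf_counter):
    # Take at least min_repeat measures, each timing `loops` calls of
    # func(), and keep adding measures only while the whole benchmark
    # has run for less than max_total seconds.
    measures = []
    start = timer()
    while len(measures) < min_repeat or timer() - start < max_total:
        t0 = timer()
        for _ in range(loops):
            func()
        measures.append(timer() - t0)
    return measures

# Usual practice is to report the minimum, the least noisy measure:
measures = take_measures(lambda: "abc".encode("utf-8"),
                         loops=1000, max_total=0.1)
best = min(measures)
```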

The calibration function also takes the precision of the timer into account: the user may request a minimum time (per measure) smaller than the timer precision, and calibration tries to compensate for that. Calibration computes both the number of loops and the number of repetitions.
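A minimal sketch of that calibration idea (estimate the timer precision, raise the minimum time if needed, then double the loop count until one measure is long enough); the thresholds and names are mine, not taken from benchmark.py:

```python
import time

def timer_precision(timer=time.perf_counter, runs=10):
    # Rough estimate of the smallest time difference the timer reports.
    best = float('inf')
    for _ in range(runs):
        t1 = timer()
        t2 = timer()
        while t2 == t1:
            t2 = timer()
        best = min(best, t2 - t1)
    return best

def calibrate_loops(func, min_time=0.1, timer=time.perf_counter):
    # If the requested minimum is below what the timer can resolve,
    # raise it so one measure stays well above the timer's precision.
    min_time = max(min_time, timer_precision(timer) * 100)
    # Double the number of loops until a single measure is long enough.
    loops = 1
    while True:
        t0 = timer()
        for _ in range(loops):
            func()
        if timer() - t0 >= min_time:
            return loops
        loops *= 2
```

For example, `calibrate_loops(lambda: "abc".encode("utf-8"), min_time=0.01)` returns a power of two large enough that one measure exceeds 10 ms.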

See BenchmarkRunner.calibrate_timer() and BenchmarkRunner.run_benchmark():
https://bitbucket.org/haypo/misc/src/bfacfb9a1224/python/benchmark.py#cl-362
History
Date User Action Args
2012-06-28 00:46:09hayposetrecipients: + haypo, rhettinger, scott_daniels, amaury.forgeotdarc, ncoghlan, pitrou, merwok, tshepang
2012-06-28 00:46:09hayposetmessageid: <1340844369.41.0.55095615737.issue6422@psf.upfronthosting.co.za>
2012-06-28 00:46:08haypolinkissue6422 messages
2012-06-28 00:46:07haypocreate