
Author vstinner
Recipients amaury.forgeotdarc, eric.araujo, ncoghlan, pitrou, rhettinger, scott_daniels, tshepang, vstinner
Date 2012-06-28.00:46:07
Hi, I recently wrote a similar function because timeit is not reliable by default. Results look random, and getting a trustworthy number requires running the same benchmark three or more times on the command line.

By default, the benchmark takes at least 5 measures, each measure should be longer than 100 ms, and the whole benchmark should not take longer than 1 second. I chose these parameters to get reliable results on microbenchmarks like "abc".encode("utf-8").

The calibration function also takes the precision of the timer into account. The user may set a minimum time (for one measure) that is smaller than the timer precision, so the calibration function detects and corrects this case. Calibration computes the number of loops and the number of repetitions.

Look at BenchmarkRunner.calibrate_timer() and BenchmarkRunner.run_benchmark().