Message 164216
Hi, I recently wrote a similar function because timeit is not reliable by default: results look random, and you have to run the same benchmark 3 or more times on the command line.
https://bitbucket.org/haypo/misc/src/tip/python/benchmark.py
By default, the benchmark takes at least 5 measures, each measure should take longer than 100 ms, and the whole benchmark should not take longer than 1 second. I chose these parameters to get reliable results for microbenchmarks like "abc".encode("utf-8").
The calibration function also takes the timer precision into account: the user may ask for a minimum time (per measure) smaller than the timer precision, and the calibration function tries to cope with that. Calibration computes both the number of loops and the number of repetitions.
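For example, a minimal sketch of the precision handling could look like this (my own illustration, not the code of benchmark.py; estimate_timer_precision() and clamp_min_time() are made-up names, and time.perf_counter() needs Python 3.3+):

import time

def estimate_timer_precision(timer=time.perf_counter, samples=10):
    # Estimate the smallest time difference the timer can measure:
    # spin until the clock value changes and keep the smallest delta.
    precision = float("inf")
    for _ in range(samples):
        t1 = timer()
        t2 = timer()
        while t2 == t1:
            t2 = timer()
        precision = min(precision, t2 - t1)
    return precision

def clamp_min_time(min_time, timer=time.perf_counter):
    # If the requested minimum time per measure is below (or too close
    # to) the timer precision, raise it so measures stay meaningful.
    precision = estimate_timer_precision(timer)
    return max(min_time, precision * 100)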
Look at BenchmarkRunner.calibrate_timer() and BenchmarkRunner.run_benchmark().
https://bitbucket.org/haypo/misc/src/bfacfb9a1224/python/benchmark.py#cl-362
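For the calibration itself, here is a rough sketch of the general technique, using the defaults quoted above (at least 5 measures, one measure >= 100 ms, whole benchmark <= 1 second); again my own illustration, not the linked code:

import time

def calibrate_loops(func, min_time=0.1, timer=time.perf_counter):
    # Double the number of inner loops until a single measure
    # (one timed run of the loop) takes at least min_time seconds.
    loops = 1
    while True:
        t0 = timer()
        for _ in range(loops):
            func()
        if timer() - t0 >= min_time:
            return loops
        loops *= 2

def run_benchmark(func, min_repeat=5, min_time=0.1, max_total=1.0,
                  timer=time.perf_counter):
    loops = calibrate_loops(func, min_time, timer)
    measures = []
    total = 0.0
    while True:
        t0 = timer()
        for _ in range(loops):
            func()
        dt = timer() - t0
        measures.append(dt / loops)
        total += dt
        # Stop once we have enough measures and one more measure
        # would push the benchmark past its total time budget.
        if len(measures) >= min_repeat and total + dt > max_total:
            break
    # Report the best (smallest) time per iteration, as timeit does.
    return min(measures)

print(run_benchmark(lambda: "abc".encode("utf-8")))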