Message259582
What would happen if we shifted to counting the number of executions within a set amount of time instead of timing how fast a single execution occurred? I believe some JavaScript benchmarks started doing this about a decade ago when they realized CPUs had gotten so fast that older benchmarks were completing too quickly to be reliably measured. It would also give a very strong notion of how long a benchmark run will take, based on the number of iterations and which time-length bucket a benchmark is placed in (e.g., for microbenchmarks we could say a second, while for longer-running benchmarks we could increase that threshold). And it wouldn't hurt benchmark comparisons, since we have always done relative comparisons rather than absolute ones.
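The idea above can be sketched as a small timing loop. This is a hypothetical helper, not code from any existing benchmark suite; the function name and the `budget` parameter are assumptions for illustration:

```python
import time

def count_iterations(func, budget=1.0):
    """Run *func* repeatedly until *budget* seconds have elapsed.

    Returns the number of completed iterations: a throughput-style
    score where higher means faster, and the total wall-clock cost
    of the run is bounded by *budget* regardless of how fast *func* is.
    """
    iterations = 0
    deadline = time.perf_counter() + budget
    while time.perf_counter() < deadline:
        func()
        iterations += 1
    return iterations

# Comparisons stay relative: the ratio of the two counts is the speedup,
# just as with per-execution timings.
fast = count_iterations(lambda: sum(range(100)), budget=0.2)
slow = count_iterations(lambda: sum(range(100_000)), budget=0.2)
```

With a scheme like this, placing each benchmark in a time bucket (say, one second for microbenchmarks) makes the total suite runtime predictable up front: it is roughly the sum of the buckets, independent of how fast the machine is.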
Date | User | Action | Args
2016-02-04 17:47:37 | brett.cannon | set | recipients: + brett.cannon, pitrou, vstinner, skrah, serhiy.storchaka, yselivanov, zbyrne, florin.papa
2016-02-04 17:47:37 | brett.cannon | set | messageid: <1454608057.77.0.557218400017.issue26275@psf.upfronthosting.co.za>
2016-02-04 17:47:37 | brett.cannon | link | issue26275 messages
2016-02-04 17:47:37 | brett.cannon | create |