
Author brett.cannon
Recipients brett.cannon, florin.papa, pitrou, serhiy.storchaka, skrah, vstinner, yselivanov, zbyrne
Date 2016-02-04.17:47:37
What would happen if we shifted to counting the number of executions within a set amount of time instead of measuring how fast a single execution occurred? I believe some JavaScript benchmarks started doing this about a decade ago, when CPUs became fast enough that older benchmarks were completing too quickly to be reliably measured. This would also give a very strong notion of how long a benchmark run will take, based on the number of iterations and the time-length bucket a benchmark is placed in (e.g., for microbenchmarks we could say a second, while for longer-running benchmarks we could increase that threshold). And it wouldn't hurt benchmark comparisons, since we have always done relative comparisons rather than absolute ones.
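The idea can be sketched in a few lines: instead of timing one execution, run the workload repeatedly until a fixed time budget is exhausted and report the count. This is a minimal illustration, not code from the tracker; the function name `count_executions` and the example workloads are hypothetical.

```python
import time

def count_executions(func, budget=1.0):
    """Count how many times func() completes within a fixed time budget.

    Higher counts mean faster code; comparing two implementations
    stays a relative comparison, just as with per-run timings.
    """
    count = 0
    deadline = time.perf_counter() + budget
    while time.perf_counter() < deadline:
        func()
        count += 1
    return count

# Hypothetical example: compare two ways of building a list of squares.
baseline = count_executions(lambda: [i * i for i in range(1000)])
candidate = count_executions(lambda: list(map(lambda i: i * i, range(1000))))
print(f"baseline: {baseline} runs, candidate: {candidate} runs")
```

The total wall-clock cost of a run is then bounded by (number of benchmarks) × (budget per bucket), which is what makes the run time predictable regardless of how fast the hardware is.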
Date User Action Args
2016-02-04 17:47:37 brett.cannon set recipients: + brett.cannon, pitrou, vstinner, skrah, serhiy.storchaka, yselivanov, zbyrne, florin.papa
2016-02-04 17:47:37 brett.cannon set messageid: <>
2016-02-04 17:47:37 brett.cannon link issue26275 messages
2016-02-04 17:47:37 brett.cannon create