Rietveld Code Review Tool

Side by Side Diff: Tools/pybench/README

Issue 15550: Trailing white spaces
Patch Set: Created 7 years, 6 months ago
________________________________________________________________________

    PYBENCH - A Python Benchmark Suite
________________________________________________________________________

Extendable suite of low-level benchmarks for measuring
the performance of the Python implementation
(interpreter, compiler or VM).

pybench is a collection of tests that provides a standardized way to
measure the performance of Python implementations. It takes a very
close look at different aspects of Python programs and lets you
decide which factors are more important to you than others, rather
than wrapping everything up in one number, like other performance
tests do (e.g. pystone, which is included in the Python Standard
Library).

pybench has been used in the past by several Python developers to
track down performance bottlenecks or to demonstrate the impact of
optimizations and new features in Python.

The command line interface for pybench is the file pybench.py. Run
this script with option '--help' to get a listing of the possible
options. Without options, pybench will simply execute the benchmark
and then print out a report to stdout.


Micro-Manual
------------

Run 'pybench.py -h' to see the help screen. Run 'pybench.py' to run
the benchmark suite using default settings and 'pybench.py -f <file>'
to have it store the results in a file too.

It is usually a good idea to run pybench.py multiple times to see
whether the environment, timers and benchmark run-times are suitable
for doing benchmark tests.
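The stability check suggested above can be sketched outside of pybench with the standard library's timer. This is an illustrative sketch only, not pybench code; pybench has its own calibration machinery, and the workload here is an arbitrary counting loop:

```python
import time

def time_workload(repeat=5, loops=100_000):
    """Time a trivial workload several times and report (min, mean).

    If the mean is much larger than the minimum, background load is
    likely disturbing the measurements. Illustrative sketch only.
    """
    timings = []
    for _ in range(repeat):
        t0 = time.perf_counter()
        a = 1
        for _ in range(loops):
            a += 1
        timings.append(time.perf_counter() - t0)
    return min(timings), sum(timings) / len(timings)

minimum, average = time_workload()
# On a quiet machine, the average should be close to the minimum.
```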

You can use the comparison feature of pybench.py ('pybench.py -c
<file>') to check how well the system behaves in comparison to a
reference run.

If the differences are well below 10% for each test, then you have a
system that is good for doing benchmark testing. If you get random
differences of more than 10%, or significant differences between the
values for minimum and average time, then you likely have some
background processes running which cause the readings to become
inconsistent. Examples include: web browsers, email clients, RSS
readers, music players, backup programs, etc.

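The 10% rule of thumb can be expressed as a small helper. This is a sketch only; pybench's '-c' option implements its own, more detailed comparison, and the dict-of-timings layout here is hypothetical:

```python
def flag_unstable(reference, current, threshold=0.10):
    """Return names of tests whose timing differs from the reference
    run by more than `threshold` (10% by default).

    `reference` and `current` map test names to timings in seconds.
    Hypothetical data layout, for illustration only.
    """
    unstable = []
    for name, ref_time in reference.items():
        cur_time = current.get(name)
        if cur_time is None:
            continue
        if abs(cur_time - ref_time) / ref_time > threshold:
            unstable.append(name)
    return unstable

reference = {"IntegerCounting": 0.100, "StringConcat": 0.200}
current   = {"IntegerCounting": 0.104, "StringConcat": 0.260}
print(flag_unstable(reference, current))  # ['StringConcat']
```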
If you are only interested in a few tests of the whole suite, you can

[... 173 lines skipped ...]
------------------

from pybench import Test

class IntegerCounting(Test):

    # Version number of the test as float (x.yy); this is important
    # for comparisons of benchmark runs - tests with unequal version
    # number will not get compared.
    version = 1.0

    # The number of abstract operations done in each round of the
    # test. An operation is the basic unit of what you want to
    # measure. The benchmark will output the amount of run-time per
    # operation. Note that in order to raise the measured timings
    # significantly above noise level, it is often required to repeat
    # sets of operations more than once per test round. The measured
    # overhead per test round should be less than 1 second.
    operations = 20

    # Number of rounds to execute per test run. This should be
[... 11 lines skipped ...]
        """
        # Init the test
        a = 1

        # Run test rounds
        #
        for i in range(self.rounds):

            # Repeat the operations per round to raise the run-time
            # per operation significantly above the noise level of the
            # for-loop overhead.

            # Execute 20 operations (a += 1):
            a += 1
            a += 1
            a += 1
            a += 1
            a += 1
            a += 1
            a += 1
            a += 1
[... 73 lines skipped ...]
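The bookkeeping described for `operations` and `rounds` above boils down to dividing the measured time by the total number of operations performed; a minimal sketch, assuming each round performs `operations` repetitions of the measured operation:

```python
def time_per_operation(total_seconds, rounds, operations):
    """Average run-time per abstract operation, given the total
    measured time of a test run (sketch, not pybench's own code)."""
    return total_seconds / (rounds * operations)

# E.g. a 2-second run of 1000 rounds with 20 operations per round:
print(time_per_operation(2.0, 1000, 20))  # 0.0001 seconds/operation
```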
          overhead more accurately
        - modified the tests to each give a run-time of between
          100-200ms using warp 10
        - changed default warp factor to 10 (from 20)
        - compared results with timeit.py and confirmed measurements
        - bumped all test versions to 2.0
        - updated platform.py to the latest version
        - changed the output format a bit to make it look
          nicer
        - refactored the APIs somewhat
  1.3+: Steve Holden added the NewInstances test and the filtering
        option during the NeedForSpeed sprint; this also triggered a
        long discussion on how to improve benchmark timing and finally
        resulted in the release of 2.0
  1.3:  initial checkin into the Python SVN repository


Have fun,
--
Marc-Andre Lemburg
mal@lemburg.com