
Author: lemburg
Recipients: lemburg, pitrou
Date: 2008-12-22.10:05:50
SpamBayes Score: 4.1108578e-10
Marked as misclassified: No
Message-id: <494F667D.6000301@egenix.com>
In-reply-to: <1229895457.43.0.758313096043.issue4714@psf.upfronthosting.co.za>
Content
On 2008-12-21 22:37, Antoine Pitrou wrote:
> New submission from Antoine Pitrou <pitrou@free.fr>:
> 
> This patch prints opcode statistics at the end of a pybench run if
> DYNAMIC_EXECUTION_PROFILE has been enabled when compiling the interpreter.
> 
> Is it ok? Is it better to add it to the Benchmark class?

I don't think it's worth doing this for low-level and highly
artificial benchmarks like the ones run by pybench.

Opcode statistics are much better gathered from real-life
applications, e.g. let Django, Zope, etc. run for a day or two
and then look at the collected counts.

If anything, opcode statistics should be an optional feature
enabled by a command-line switch. I'd then add new methods
bench.start_opcode_stats(), bench.stop_opcode_stats() and
bench.get_opcode_stats().
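
A rough sketch of what such an interface could look like (the
method names are the ones above; the mixin structure and attribute
names are just placeholders, and sys.getdxp() only exists in a
DYNAMIC_EXECUTION_PROFILE build):

import sys

class OpcodeStatsMixin:
    # Only usable when the interpreter was compiled with
    # DYNAMIC_EXECUTION_PROFILE; sys.getdxp() does not exist otherwise.
    # Assumes getdxp() returns a flat list of per-opcode counters
    # (i.e. the build does not also define DXPAIRS).

    def start_opcode_stats(self):
        if not hasattr(sys, 'getdxp'):
            raise RuntimeError('interpreter not built with '
                               'DYNAMIC_EXECUTION_PROFILE')
        self._opcode_baseline = sys.getdxp()

    def stop_opcode_stats(self):
        # Difference between the counters at stop and start time
        self._opcode_stats = [new - old
                              for new, old in zip(sys.getdxp(),
                                                  self._opcode_baseline)]

    def get_opcode_stats(self):
        return getattr(self, '_opcode_stats', None)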

Also note that this code will produce wrong results:

+            if opstats:
+                opstats = [new - old
+                    for new, old in zip(sys.getdxp(), opstats)]

It should be:

start_opstats = sys.getdxp()
...
stop_opstats = sys.getdxp()
opstats = [new_value - old_value
           for new_value, old_value in zip(stop_opstats, start_opstats)]
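
Put together, the corrected pattern looks like this (again
assuming a DYNAMIC_EXECUTION_PROFILE build where sys.getdxp()
returns a flat list of counters; the helper name is only for
illustration):

import sys

def collect_opcode_stats(func, *args, **kwargs):
    # Hypothetical helper: runs func() and returns its result together
    # with the per-opcode execution counts recorded during the call.
    if not hasattr(sys, 'getdxp'):
        raise RuntimeError('sys.getdxp() not available in this build')
    start_opstats = sys.getdxp()
    result = func(*args, **kwargs)
    stop_opstats = sys.getdxp()
    opstats = [new_value - old_value
               for new_value, old_value in zip(stop_opstats, start_opstats)]
    return result, opstats
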
History
Date                 User     Action  Args
2008-12-22 10:05:53  lemburg  set     recipients: + lemburg, pitrou
2008-12-22 10:05:52  lemburg  link    issue4714 messages
2008-12-22 10:05:50  lemburg  create