Author lemburg
Recipients docs@python, flox, lemburg, pitrou, python-dev, vstinner
Date 2016-09-15.09:00:27
On 14.09.2016 15:20, STINNER Victor wrote:
> STINNER Victor added the comment:
>> I'd also like to request that you reword this dismissive line in the performance package's readme: (...)
> Please report issues of the performance module on its own bug tracker:
> Can you please propose a new description? You might even create a pull
> request ;-)

I'll send a PR.

> Note: I'm not sure that we should keep pybench, this benchmark really
> looks unreliable. But I should still try at least to use the same
> number of iterations for all worker child processes. Currently the
> calibration is done in each child process.

Well, pybench is not just one benchmark: it's a whole collection of
benchmarks covering various aspects of the CPython VM, and by design
it calibrates itself per benchmark, since each benchmark has a
different overhead.
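To illustrate the idea of per-benchmark calibration (this is only a
minimal sketch of the concept, not pybench's actual code, and the
function names are hypothetical): the overhead of an "empty" version of
the test loop is measured separately and subtracted from the measured
test time.

```python
import time

def run_timed(func, rounds):
    # Time `rounds` executions of func, returning elapsed wall-clock time.
    t0 = time.perf_counter()
    for _ in range(rounds):
        func()
    return time.perf_counter() - t0

def calibrated_time(test, empty, rounds, calib_runs=10):
    # Per-benchmark calibration: measure the overhead of an "empty"
    # version of the test loop several times, and subtract its minimum
    # from the measured test time. Each benchmark supplies its own
    # empty variant, since each has a different overhead.
    overhead = min(run_timed(empty, rounds) for _ in range(calib_runs))
    return run_timed(test, rounds) - overhead

# Hypothetical example: benchmark integer addition against an empty loop.
result = calibrated_time(lambda: 1 + 1, lambda: None, rounds=100000)
```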

The number of iterations per benchmark will not change between
runs, since this number is fixed in each benchmark. These numbers
do need an update, though, since CPUs were a lot less powerful
when pybench was written than they are today.

Here's the comment with the guideline for the number of rounds
to use per benchmark:

    # Number of rounds to execute per test run. This should be
    # adjusted to a figure that results in a test run-time of between
    # 1-2 seconds.
    rounds = 100000
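One way to update those figures per the guideline above would be to
probe for a rounds value that lands in the 1-2 second window. A rough
sketch (hypothetical helper, not part of pybench):

```python
import time

def find_rounds(func, target=1.0):
    # Keep doubling the number of rounds until one test run takes at
    # least `target` seconds, per the 1-2 second run-time guideline.
    rounds = 1000
    while True:
        t0 = time.perf_counter()
        for _ in range(rounds):
            func()
        if time.perf_counter() - t0 >= target:
            return rounds
        rounds *= 2
```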

BTW: Why would you want to run benchmarks in child processes
and in parallel? This will usually dramatically affect the
results of the benchmark runs. Ideally, the pybench process
should be the only CPU-intensive workload on the entire machine
to get reasonable results.
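For contrast, running each benchmark in its own child process *one at
a time* avoids that contention entirely. A minimal sketch (the snippet
strings below are hypothetical stand-ins for real benchmark scripts):

```python
import subprocess
import sys

def run_sequentially(snippets):
    # Run each benchmark snippet in a fresh child process, one at a
    # time, so the runs never compete with each other for CPU time.
    for code in snippets:
        subprocess.run([sys.executable, "-c", code], check=True)

# Hypothetical stand-ins for real benchmark workloads:
run_sequentially(["sum(range(1000))", "str(2**64)"])
```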