
Author vstinner
Recipients brett.cannon, florin.papa, pitrou, serhiy.storchaka, skrah, vstinner, yselivanov, zbyrne
Date 2016-02-04.13:33:05
Message-id <CAMpsgwY1VbJ9kDJhSeaes8FjHXca5iUUaCpkm6LUrCqaQPksaQ@mail.gmail.com>
In-reply-to <1454584080.54.0.844985385662.issue26275@psf.upfronthosting.co.za>
Content
Florin Papa added the comment:
> I ran perf to use calibration and there is no difference in stability
> compared to the unpatched version.

Sorry, what do you mean by "stability"? For me, stability means that
you run the same benchmark 3, 5 or 10 times and the results are as
close as possible: see the variance and standard deviation in my
previous message.

I'm not talking about the variance/deviation of the N runs of the
bm_xxx.py scripts, but about the variance/deviation of the mean value
displayed by perf.py.
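
To make that concrete, here is a minimal sketch of what I mean (the
command line and the output parsing are assumptions for illustration,
not perf.py's real interface): invoke the benchmark runner several
times and look at the spread of the reported means.

    import statistics
    import subprocess

    def perf_mean(benchmark):
        # Hypothetical: run a perf.py-style command and parse the mean it prints.
        out = subprocess.run(
            ["python", "perf.py", benchmark],
            capture_output=True, text=True, check=True,
        ).stdout
        return float(out.split()[-1])  # assumes the mean is the last token

    def stability(benchmark, invocations=5):
        # A small standard deviation across invocations means a stable result.
        means = [perf_mean(benchmark) for _ in range(invocations)]
        return statistics.mean(means), statistics.stdev(means)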

perf_calibration.patch is a proof-of-concept. I changed the number of
runs from 50 to 10 to test my patch more easily. You should modify the
patch to keep 50 runs if you want to compare the stability.
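
For reference, the calibration idea in the proof of concept boils down
to something like the following sketch (simplified, not the actual
patch): keep increasing the number of loops until a single run lasts at
least a minimum duration, then reuse that loop count for every run.

    import time

    def calibrate_loops(func, min_duration=0.1):
        # Double the loop count until one run takes at least min_duration seconds.
        loops = 1
        while True:
            start = time.perf_counter()
            for _ in range(loops):
                func()
            if time.perf_counter() - start >= min_duration:
                return loops
            loops *= 2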

By the way, the --fast/--rigorous options should not only change the
minimum duration of a single run used to calibrate the number of loops;
they should also change the "maximum" total duration of perf.py by
using a different number of runs.
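
Concretely, I imagine something like the following mapping (the numbers
are made up, only to illustrate the idea): each mode would set both the
calibration target for a single run and the number of runs, so the total
duration of perf.py scales with the requested rigour.

    # Hypothetical settings, not the current perf.py behaviour.
    SETTINGS = {
        # mode:      (min_run_duration_s, num_runs)
        "fast":      (0.05, 10),
        "default":   (0.10, 50),
        "rigorous":  (0.25, 100),
    }

    def settings_for(mode):
        return SETTINGS.get(mode, SETTINGS["default"])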