Author vstinner
Recipients Yury.Selivanov, casevh, josh.r, lemburg, mark.dickinson, pitrou, rhettinger, serhiy.storchaka, skrah, vstinner, yselivanov, zbyrne
Date 2016-02-10.10:02:13
Message-id <1455098535.34.0.464930020923.issue21955@psf.upfronthosting.co.za>
In-reply-to
Content
> The test suite can be run directly from the source tree. The test suite includes timing information for individual tests and for the entire test run. Sample invocation:

I extracted the slowest test (test_polyroots_legendre) and put it in a loop of 5 iterations: see attached mpmath_bench.py. I ran this benchmark on Linux with 4 isolated CPUs (/sys/devices/system/cpu/isolated=2-3,6-7).
http://haypo-notes.readthedocs.org/misc.html#reliable-micro-benchmarks
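
For reference, here is a rough sketch of how such a benchmark script can be structured. This is not the attached mpmath_bench.py, and the run_test() workload below is only a stand-in for the extracted test body:

import time
from mpmath import mp, polyroots

def run_test():
    # Stand-in workload (assumption: not the real test_polyroots_legendre
    # body): high-precision polynomial root finding, which exercises
    # arbitrary-precision arithmetic on Python ints when GMP is absent.
    mp.dps = 200
    polyroots([1, 0, 0, 0, 0, -1], maxsteps=200)  # roots of x^5 - 1

NRUNS = 5
timings = []
for i in range(NRUNS):
    start = time.perf_counter()
    run_test()
    timings.append(time.perf_counter() - start)
    print("Run #%d/%d: %.2f sec" % (i + 1, NRUNS, timings[-1]))
print("min of %d runs: %.2f sec" % (NRUNS, min(timings)))

The process is then pinned to the isolated CPUs, e.g. with "taskset -c 2,3 python mpmath_bench.py".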

On such a setup, the benchmark looks stable. Example:

Run #1/5: 12.28 sec
Run #2/5: 12.27 sec
Run #3/5: 12.29 sec
Run #4/5: 12.28 sec
Run #5/5: 12.30 sec

test_polyroots_legendre (min of 5 runs):

* Original: 12.51 sec
* fastint5_4.patch: 12.27 sec (-1.9%)
* fastint6.patch: 12.21 sec (-2.4%)

I ran the tests without GMP to stress the Python int type.
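
To double check that the pure-Python backend is really in use, mpmath exposes the selected backend (attribute name as in current mpmath releases):

import mpmath.libmp
print(mpmath.libmp.BACKEND)  # 'python' when gmpy/GMP is not installed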

I guess that the benchmark is dominated by the CPU time spent computing operations on large Python ints, not by the time spent in ceval.c, so the speedup is low (about 2%). Such a use case doesn't seem to benefit from the micro-optimizations discussed in this issue.
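
As a rough illustration (numbers not from this issue): with huge operands, the cost of the arithmetic itself dwarfs the bytecode dispatch overhead that the patches optimize.

import timeit

small = 7
big = 7 ** 7000   # an int with thousands of digits, like an mpmath mantissa
print(timeit.timeit('a + a', globals={'a': small}, number=10**6))
print(timeit.timeit('a + a', globals={'a': big},   number=10**6))

The second timing is dominated by long_add() working on hundreds of 30-bit digits, so shaving a few nanoseconds off the eval loop only changes it by a few percent.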

mpmath is an arbitrary-precision floating-point arithmetic library implemented on top of Python ints (or GMP if available).
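
For example (illustrative only, internal format as documented by mpmath): every mpf value is stored as a (sign, mantissa, exponent, bitcount) tuple, and the mantissa is a plain Python int when gmpy is not available:

from mpmath import mp, mpf, sqrt

mp.dps = 50                # 50 decimal digits of working precision
x = sqrt(mpf(2))
print(x)
print(x._mpf_)             # internal (sign, man, exp, bc) tuple
print(type(x._mpf_[1]))    # <class 'int'> on the pure-Python backend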