Author elliot.gorokhovsky
Recipients elliot.gorokhovsky, mdk, tim.peters, vstinner
Date 2016-11-14.22:01:09
On Mon, Nov 14, 2016 at 10:32 AM STINNER Victor <> wrote:

> You can use perf timeit --compare-to to check if the result is significant
> or not, and it displays you the "N.NNx faster" or "N.NNx slower" if it's
> significant.

Will do -- I'm writing this up as a paper since this is my science fair
project, so I'll redo the measurements that way and upload a pdf here.
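For reference, a sketch of the kind of invocation Victor is describing (the interpreter paths and the benchmarked statement are placeholders, not taken from this thread):

```shell
# Compare the patched interpreter against a reference build; perf only
# prints "N.NNx faster" / "N.NNx slower" when the difference is significant.
./python-patched -m perf timeit \
    --compare-to=./python-reference \
    -s "import random; L = [random.random() for _ in range(1000)]" \
    "sorted(L)"
```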

> About benchmarks, I also would like to see a benchmark on the bad case,
> when specialization is not used. And not only on an empty list :-) For
> example, sort 1000 objects which implement compare operators and/or a sort
> function.

The worst case is the third benchmark from the top: a list of floats with
a single, sad, solitary long at the end. That disables the optimization because
keys_are_all_same_type gets set to 0 when assign_compare_function finds the
long (since key_type == &PyFloat_Type). That benchmark is the *absolute*
worst case for two reasons:
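In pure-Python terms, the pre-scan behaves roughly like this (a sketch, not the actual C code in listobject.c; the function name mirrors the flag discussed above):

```python
# Sketch of the type-homogeneity pre-scan: if every key has the exact
# same type as the first key, a specialized compare can be used.  One
# stray element of a different type disables it for the whole sort.
def keys_are_all_same_type(keys):
    if not keys:
        return True
    key_type = type(keys[0])          # e.g. float for a list of floats
    for k in keys[1:]:
        if type(k) is not key_type:   # a lone int among floats fails here
            return False
    return True

assert keys_are_all_same_type([1.0, 2.0, 3.0])
assert not keys_are_all_same_type([1.0, 2.0, 3])  # the worst-case shape
```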

1. Float compares are really cheap, so the ratio doesn't get messed up by
the common term "time spent actually comparing floats" (since the total
time is "time spent on overhead + time spent actually comparing floats").
2. The list is of size 10. I guess I could've used size 3 or 4, but it
shouldn't be too far off... smaller lists give worse ratios because the
overhead is O(n) while sorting is O(n log n).
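The asymptotic argument above can be illustrated with a toy cost model (constants and function name are made up for illustration):

```python
import math

# Toy model: the pre-scan overhead grows like n, while comparison work
# grows like n*log2(n), so the overhead's share of the total shrinks
# as the list gets longer -- i.e. small lists show the worst ratios.
def overhead_share(n, per_item_overhead=1.0, per_compare=1.0):
    overhead = per_item_overhead * n
    compares = per_compare * n * math.log2(max(n, 2))
    return overhead / (overhead + compares)

assert overhead_share(10) > overhead_share(1000)
```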

So, again, the absolute worst possible case is the third benchmark, which
suffers a 10% slowdown. That's certainly a reasonable price to pay considering
how rare that case is in practice, and considering the 40-75% gains we get on
the common cases.
Date User Action Args
2016-11-14 22:01:10elliot.gorokhovskysetrecipients: + elliot.gorokhovsky, tim.peters, vstinner, mdk
2016-11-14 22:01:10elliot.gorokhovskylinkissue28685 messages
2016-11-14 22:01:09elliot.gorokhovskycreate