The notion of cost is that merging runs of lengths A and B has "cost" A+B, period. Nothing to do with logarithms. Merge runs of lengths 1 and 1000, and it has cost 1001.
They don't care about galloping, only about how the order in which merges are performed affects that notion of cost. Example: suppose we have runs of lengths 1, 2, and 3. Merge 1 and 2 first for a cost of 1+2 = 3, and we're left with runs of length 3 and 3. Merge those for a cost of 3+3 = 6, and add to the earlier cost of 3 for a total cost of 9. But if 2 and 3 are merged first that has cost 5, then merging runs of length 1 and 5 has a cost of 6, added to the earlier 5 gives a total cost of 11. So it's cheaper to merge 1 and 2 first.
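That arithmetic is mechanical enough to sketch in a few lines. A hypothetical helper (the name `merge_cost` and the order-as-indices encoding are my own, not from any paper): given a list of run lengths, charge A+B for each merge of adjacent runs and sum:

```python
def merge_cost(runs, order):
    """Total A+B cost of merging adjacent runs in the given order.

    runs:  list of run lengths
    order: for each step, the index i of the current run list
           saying "merge runs[i] with runs[i+1] next"
    """
    runs = list(runs)  # don't mutate the caller's list
    total = 0
    for i in order:
        cost = runs[i] + runs[i + 1]
        total += cost
        runs[i:i + 2] = [cost]  # replace the pair with the merged run
    return total

print(merge_cost([1, 2, 3], [0, 0]))  # merge 1&2 first -> 9
print(merge_cost([1, 2, 3], [1, 0]))  # merge 2&3 first -> 11
```

Running both orders reproduces the 9-vs-11 comparison above.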
Galloping has nothing to do with this measure, nor does the binary insertion sort portion of timsort. And Java itself doesn't use timsort for sorting integers anyway (the overheads are too high when comparisons are dirt cheap). They're trying to quantify the effects of their merge-ordering strategies, relative to timsort's merge-ordering strategy.
Which could have been done more clearly by ignoring Java's timsort implementation entirely and just changing the parts of their code that implement _their_ merge-ordering strategy. timsort's strategy is hard to analyze, but is trivially easy to _implement_. As is, they're inadvertently timing all sorts of stuff that's orthogonal to the merge-ordering strategy.
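To back up "trivially easy to implement": here's a minimal sketch of timsort's classic merge-ordering rule, counting only the A+B cost (the function name `timsort_merge_cost` is mine; production implementations differ in details, and this ignores everything orthogonal to merge ordering). Runs are pushed on a stack as discovered; after each push, keep merging until the top three lengths A, B, C (C newest) satisfy A > B + C and B > C:

```python
def timsort_merge_cost(run_lengths):
    """A+B cost charged by timsort's classic merge-ordering rule.

    Sketch only: pushes each run length on a stack, restores the
    stack invariants after each push, then collapses what's left.
    """
    stack, total = [], 0

    def merge_at(i):
        nonlocal total
        total += stack[i] + stack[i + 1]
        stack[i:i + 2] = [stack[i] + stack[i + 1]]

    for run in run_lengths:
        stack.append(run)
        while len(stack) > 1:
            n = len(stack) - 2  # index of B, second from the top
            if n > 0 and stack[n - 1] <= stack[n] + stack[n + 1]:
                # A > B + C violated: merge B with its smaller neighbor
                merge_at(n - 1 if stack[n - 1] < stack[n + 1] else n)
            elif stack[n] <= stack[n + 1]:
                merge_at(n)  # B > C violated: merge B and C
            else:
                break  # invariants hold
    while len(stack) > 1:  # final collapse of the pending runs
        merge_at(len(stack) - 2)
    return total

print(timsort_merge_cost([1, 2, 3]))  # -> 9, the cheaper order above
```

Swapping in a different merge-ordering strategy means changing only the invariant checks in the loop, which is exactly the isolated comparison the papers could have made.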