
Author tixxit
Recipients LeWiemann, gvanrossum, mark.dickinson, rhettinger, tixxit
Date 2009-12-07.15:46:20
Message-id <853aa53f0912070746q705d00d3o2743d47f1d0ff93e@mail.gmail.com>
In-reply-to <1260120452.15.0.355798430029.issue1771@psf.upfronthosting.co.za>
Content
If the equal min y-coords are handled, I think it'd be quicker too. As Guido
noted, O(n) function calls are better than O(n log n) =] Though the general
case is still unhandled. And, though it doesn't help my case, the Graham
scan can also be performed on points sorted lexicographically, by
constructing the upper and lower hulls separately, as sketched below.
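
For concreteness, here is a minimal sketch of that lexicographic variant
(often called the monotone chain method). The names (`cross`, `convex_hull`)
are purely illustrative and not from any patch attached to this issue:

def cross(o, a, b):
    """Z-component of the cross product of vectors OA and OB."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Convex hull of 2D points given as (x, y) tuples, in CCW order."""
    pts = sorted(set(points))          # lexicographic sort: by x, then by y
    if len(pts) <= 1:
        return pts
    lower = []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # The last point of each half-hull repeats the start of the other half.
    return lower[:-1] + upper[:-1]

Note that the only comparisons used are the ordinary tuple comparisons done
inside sorted(), so no cmp or key argument is needed for this variant.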

Now, I understand that cmp on the whole was removed from the language, and
using __lt__, __eq__, etc. really is more natural. However, having an explicit
cmp function for sorting still makes sense to me. At the very least, it is
more obvious and natural for some problems - though I respect that using a key
func. is often faster. In some rare (though "rare" is very subjective) cases
it is required; packing a cmp function into __cmp__ in a wrapper object
(sketched below) is really just a hard-to-read cmp function and highlights the
need for cmp. I would love to see it added for min/max too, since I find I
often use a simple reduce function in place of a min(lst, cmp=...).
Enforcing proper comparisons (<, >, ==, etc.) makes sense, but would having
the cmp function live, so to speak, in sorting really be that bad? Just
inform the user in the docs that key is preferred and often faster.
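
Here is a rough illustration of the two workarounds I mean; the names
(CmpWrapper, sort_with_cmp, cmp_min) are made up for this example and the
cmp function is assumed to return a negative, zero, or positive number:

from functools import reduce

class CmpWrapper:
    """Wrap a value so rich comparisons delegate to a cmp-style function."""
    def __init__(self, value, cmpfunc):
        self.value = value
        self.cmpfunc = cmpfunc
    def __lt__(self, other):
        return self.cmpfunc(self.value, other.value) < 0
    def __eq__(self, other):
        return self.cmpfunc(self.value, other.value) == 0

def sort_with_cmp(lst, cmpfunc):
    """Sort using a cmp-style function by wrapping every element."""
    return [w.value for w in sorted(CmpWrapper(x, cmpfunc) for x in lst)]

def cmp_min(lst, cmpfunc):
    """The reduce-based stand-in for min(lst, cmp=...)."""
    return reduce(lambda a, b: a if cmpfunc(a, b) <= 0 else b, lst)

The wrapper class is exactly the kind of boilerplate I mean: it exists only
to smuggle a cmp function past an API that no longer accepts one.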