
Author: mark.dickinson
Recipients: LeWiemann, gvanrossum, mark.dickinson, rhettinger, tixxit
Date: 2009-12-06.17:27:30
Message-id: <1260120452.15.0.355798430029.issue1771@psf.upfronthosting.co.za>
Content
Ah.  Thanks for the explanation;  I see your point.  I guess you do just 
have to use a CmpToKey-type wrapper for this sort of comparison.
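For what it's worth, the wrapper in question can be sketched like this (the class name, pivot `p`, and the `ccw_cmp` comparison below are just illustrative, not from the issue):

```python
class CmpToKey:
    """Wrap a value so that sorting compares via a three-way cmp function."""
    def __init__(self, obj, cmp):
        self.obj = obj
        self.cmp = cmp
    def __lt__(self, other):
        # list.sort only needs __lt__; delegate to the cmp function.
        return self.cmp(self.obj, other.obj) < 0

def cmp_to_key(cmp):
    """Return a key function that sorts according to cmp."""
    return lambda obj: CmpToKey(obj, cmp)

# Example: order points counterclockwise about a pivot p using the
# sign of the cross product (the usual exact Graham-scan comparison).
p = (0, 0)
def ccw_cmp(a, b):
    cross = ((a[0] - p[0]) * (b[1] - p[1])
             - (a[1] - p[1]) * (b[0] - p[0]))
    return (cross < 0) - (cross > 0)  # -1 if a comes before b

points = [(1, 1), (2, 0), (0, 2)]
points.sort(key=cmp_to_key(ccw_cmp))  # → [(2, 0), (1, 1), (0, 2)]
```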

Though for this *particular* example, it seems to me that you could still
use a key function:

lambda q: (q[0] - p[0]) / (q[1] - p[1])

which would be even more efficient.  This is assuming that your extreme 
point p has minimal second coordinate amongst points being sorted, which 
as I understand it is how the Graham Scan typically starts.  There's the 
minor difficulty of dealing with points with the *same* second coordinate 
as p, where I guess the key value should be some form of +Infinity.  I can 
also see that this might be problematic if you're working with a type 
that's exact for addition and multiplication but not for division (e.g., 
Decimal with unbounded precision).
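Putting those pieces together, the key-function version might look something like this (a sketch only: the function name is mine, the pivot is taken as the lowest-then-leftmost point so that ties on the second coordinate all lie to its right, and it uses float division, so it inherits the inexactness caveat above):

```python
import math

def graham_sort(points):
    """Sort points by angle about the lowest point, using a key function."""
    # Lowest point, breaking ties by first coordinate, so any other
    # point level with p lies strictly to its right (angle 0).
    p = min(points, key=lambda q: (q[1], q[0]))
    rest = [q for q in points if q != p]  # sketch: drops duplicates of p
    # (qx - px)/(qy - py) is the cotangent of the angle at p, which
    # decreases as the angle increases from 0 to pi; points level with
    # p get +infinity, so a *descending* sort gives ascending angle.
    def key(q):
        dy = q[1] - p[1]
        return (q[0] - p[0]) / dy if dy else math.inf
    return [p] + sorted(rest, key=key, reverse=True)

pts = [(0, 1), (1, 1), (1, 0), (-1, 1), (0, 0)]
print(graham_sort(pts))
# → [(0, 0), (1, 0), (1, 1), (0, 1), (-1, 1)]
```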

It would be interesting to see some relative timings for the sort stage of 
the Graham scan (on a million random points, say): key function versus cmp 
function.
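A rough benchmark along those lines might look like this (hypothetical setup, scaled down from a million points so it runs quickly, with functools.cmp_to_key standing in for the CmpToKey wrapper; the slope key and cross-product cmp are the illustrative versions discussed above):

```python
import random
import timeit
from functools import cmp_to_key

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(10_000)]
p = min(pts, key=lambda q: (q[1], q[0]))  # pivot: lowest, then leftmost

def slope_key(q):
    # Cotangent of the angle at p; descending order = ascending angle.
    dy = q[1] - p[1]
    return (q[0] - p[0]) / dy if dy else float("inf")

def ccw_cmp(a, b):
    # Exact cross-product comparison, as a cmp-style function.
    cross = ((a[0] - p[0]) * (b[1] - p[1])
             - (a[1] - p[1]) * (b[0] - p[0]))
    return (cross < 0) - (cross > 0)

t_key = timeit.timeit(lambda: sorted(pts, key=slope_key, reverse=True),
                      number=5)
t_cmp = timeit.timeit(lambda: sorted(pts, key=cmp_to_key(ccw_cmp)),
                      number=5)
print(f"key: {t_key:.3f}s  cmp: {t_cmp:.3f}s")
```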