Author kristjan.jonsson
Recipients beazley, dabeaz, flox, kristjan.jonsson, loewis, pitrou, r.david.murray, techtonik, torsten
Date 2010-04-17.10:53:13
SpamBayes Score 7.59234e-08
Marked as misclassified No
Message-id <1271501596.32.0.47591511012.issue8299@psf.upfronthosting.co.za>
In-reply-to
Content
>I'm not trying to be a pain here, but do you have any explanation as to why, with fair scheduling, the observed execution time of multiple CPU-bound threads is substantially worse than with unfair scheduling?
Yes.  This is because the GIL yield now actually succeeds most of the time, every 100 opcodes (the default check interval).  Profiling indicates that on multicore machines this causes significant instruction-cache misses as the new thread is scheduled on the other core.  Try raising the check interval (sys.setcheckinterval()) to 1000 and watch the performance difference fall away.
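
The effect described above can be probed with a small sketch.  This is only an illustration, not part of the patch under discussion: it assumes the Python 2-era sys.setcheckinterval() knob (on Python 3.2+ the analogous control is sys.setswitchinterval(), measured in seconds, not opcodes), and the workload and thread count are arbitrary choices:

```python
# Sketch: time CPU-bound threads after raising the GIL handoff interval.
# Assumes sys.setcheckinterval() (Python 2 era); falls back to
# sys.setswitchinterval() where the old API is gone (Python 3.9+).
import sys
import threading
import time

def cpu_bound(n=100000):
    # Pure-Python arithmetic loop; competes for the GIL.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed_run(num_threads=2):
    threads = [threading.Thread(target=cpu_bound) for _ in range(num_threads)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start

# Hand off the GIL less often, reducing cross-core cache churn.
if hasattr(sys, "setcheckinterval"):
    sys.setcheckinterval(1000)   # opcodes between yield attempts
else:
    sys.setswitchinterval(0.05)  # seconds (Python 3.2+ equivalent)

elapsed = timed_run()
```

Comparing `elapsed` at the default interval versus the raised one should show the gap between fair and unfair scheduling shrinking, as claimed above.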

>If so, why would I want fair locking?  Wouldn't I want the solution that offers the fastest overall execution time?
Because of IO latency.  Your IO thread, waiting to wake up when something happens, also has to compete unfairly with the CPU threads.
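
That latency can be made visible with a rough probe.  This is a hypothetical sketch, not the fair.py test from this issue: it simulates the IO wait with time.sleep() and measures the extra delay before the sleeping thread gets the GIL back while a CPU hog runs:

```python
# Sketch: worst-case extra wakeup delay of an "IO" thread while a
# CPU-bound thread competes for the GIL.
import threading
import time

def cpu_hog(stop):
    # Pure-Python busy loop; rarely releases the GIL voluntarily.
    x = 0
    while not stop.is_set():
        x += 1

def io_latency(samples=20, sleep_s=0.001):
    worst = 0.0
    for _ in range(samples):
        t0 = time.time()
        time.sleep(sleep_s)               # simulated IO wait
        lag = time.time() - t0 - sleep_s  # extra delay = re-acquire latency
        worst = max(worst, lag)
    return worst

stop = threading.Event()
hog = threading.Thread(target=cpu_hog, args=(stop,))
hog.start()
worst = io_latency()
stop.set()
hog.join()
```

With an unfair lock, `worst` can reach many multiples of the nominal sleep time, which is exactly the IO responsiveness problem a fair lock addresses.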

"a fair" lock allows you to balance the cost of switching (on multicore) vs thread latency using sys.checkinterval (if we are based on opcodes).  The unfair lock all but disables this control.

>One other comment.  Running the modified fair.py file on my Linux system using Python compiled with semaphores shows they are *definitely* not fair.  Here's the relevant part of your test:
Interesting.  Let me stress that I am using Windows and making assumptions about how something like sem_wait() behaves.  The POSIX standard does not require "fair" behaviour of the semaphore functions; see http://www.opengroup.org/onlinepubs/009695399/functions/sem_wait.html.

The kernel-based Windows synchronization functions achieve a degree of "fairness" through WaitForSingleObject() and friends:
http://msdn.microsoft.com/en-us/magazine/cc163642.aspx#S2

Are you absolutely sure that USE_SEMAPHORES is defined?  There could be a latent config problem somewhere.
History
Date User Action Args
2010-04-17 10:53:17kristjan.jonssonsetrecipients: + kristjan.jonsson, loewis, beazley, pitrou, techtonik, r.david.murray, flox, dabeaz, torsten
2010-04-17 10:53:16kristjan.jonssonsetmessageid: <1271501596.32.0.47591511012.issue8299@psf.upfronthosting.co.za>
2010-04-17 10:53:15kristjan.jonssonlinkissue8299 messages
2010-04-17 10:53:13kristjan.jonssoncreate