
Author dabeaz
Recipients beazley, dabeaz, flox, kristjan.jonsson, loewis, pitrou, r.david.murray, techtonik, torsten
Date 2010-04-17.03:18:08
SpamBayes Score 0.0023365445
Marked as misclassified No
Message-id <1271474291.1.0.921601209556.issue8299@psf.upfronthosting.co.za>
In-reply-to
Content
I'm not trying to be a pain here, but do you have any explanation as to why, with fair scheduling, the observed execution time of multiple CPU-bound threads is substantially worse than with unfair scheduling?

From your own benchmarks, consider this result (Fair scheduling)

Threaded, balanced execution:
fast A: 0.973000 (0 left)
fast C: 0.992000 (0 left)
fast B: 1.013000 (0 left)

Versus this result with unfair scheduling:

Threaded, balanced execution:
fast A: 0.362000 (0 left)
fast B: 0.464000 (0 left)
fast C: 0.549000 (0 left)

If I'm reading this right, the three threads with fair locking take almost twice as long to complete (1.01s) as the three threads with unfair locking (0.55s). If so, why would I want fair locking? Wouldn't I want the solution that offers the fastest overall execution time?
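For context, the timings being compared come from a benchmark that runs several CPU-bound threads and reports each thread's completion time. A minimal sketch of that kind of test is below; the countdown workload, thread names, and iteration count are illustrative assumptions, not the actual benchmark script. Under the GIL, only one thread executes Python bytecode at a time, so per-thread completion times reflect how the lock's scheduling policy hands execution between threads.

```python
import threading
import time

def countdown(n, name, results):
    # CPU-bound loop: holds the GIL except when the interpreter
    # periodically forces a release, so scheduling policy dominates.
    start = time.time()
    while n > 0:
        n -= 1
    results[name] = time.time() - start

def run_benchmark(count=10_000_000):
    # Launch three identical CPU-bound threads and record how long
    # each one takes to finish ("balanced execution" case).
    results = {}
    threads = [
        threading.Thread(target=countdown, args=(count, name, results))
        for name in ("A", "B", "C")
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    for name in sorted(results):
        print(f"fast {name}: {results[name]:.6f}")
    return results

if __name__ == "__main__":
    run_benchmark()
```

With fair (round-robin) handoff, the threads make progress in lockstep and all finish near the total elapsed time; with unfair acquisition, one thread can run long stretches uninterrupted, so individual threads finish earlier and total turnaround can be shorter.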
History
Date User Action Args
2010-04-17 03:18:11 dabeaz set recipients: + dabeaz, loewis, beazley, pitrou, kristjan.jonsson, techtonik, r.david.murray, flox, torsten
2010-04-17 03:18:11 dabeaz set messageid: <1271474291.1.0.921601209556.issue8299@psf.upfronthosting.co.za>
2010-04-17 03:18:09 dabeaz link issue8299 messages
2010-04-17 03:18:08 dabeaz create