
Author nirai
Recipients DazWorrall, aconrad, alex, andrix, brian.curtin, carljm, coderanger, cool-RR, dabeaz, djc, donaldjeo, durin42, eric.araujo, eric.smith, flox, gregory.p.smith, jcea, jhylton, karld, kevinwatters, konryd, larry, loewis, mahmoudimus, movement, neologix, nirai, pitrou, rcohen, rh0dium, tarek, thouis, ysj.ray
Date 2010-04-27.15:00:55
Message-id <1272380460.56.0.644911408925.issue7946@psf.upfronthosting.co.za>
Content
On Tue, Apr 27, 2010 at 12:23 PM, Charles-Francois Natali wrote:

> @nirai
> I have some more remarks on your patch:
> - /* Diff timestamp capping results to protect against clock differences
>  * between cores. */
> _LOCAL(long double) _bfs_diff_ts(long double ts1, long double ts0) {
>
> I'm not sure I understand. You can have problems with multiple cores when reading the TSC 
> register directly, but that doesn't affect gettimeofday. gettimeofday should be reliable and accurate 
> (unless the OS is broken, of course); the only issue is that since it's wall-clock time, if a process 
> like ntpd is running, then you'll run into problems.

I think gettimeofday() might return different results on different cores as a result of kernel/hardware problems or clock-drift issues in VM environments:
http://kbase.redhat.com/faq/docs/DOC-7864
https://bugzilla.redhat.com/show_bug.cgi?id=461640

On Windows, the high-precision counter might return different results on different cores in some hardware configurations (older multi-core processors). I attempted to alleviate these problems by capping timestamp diffs and by using a "python time" counter constructed from accumulated slices, on the assumption that IO-bound threads are unlikely to be migrated between cores often while running. I will add references to the patch docs.
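The capping idea can be sketched in Python (the patch's actual _bfs_diff_ts is C; the function name, the 50 ms bound, and the accumulation loop below are my assumptions for illustration, not the patch's real values):

```python
import time

MAX_SLICE = 0.050  # assumed cap on a single measured interval, in seconds

def bfs_diff_ts(ts1, ts0):
    """Clamp ts1 - ts0 so that a clock stepping backwards or jumping
    wildly (e.g. after a migration between cores with skewed clocks)
    cannot corrupt an accumulated time counter."""
    diff = ts1 - ts0
    if diff < 0.0:
        # Clock went backwards: count no elapsed time rather than a
        # negative interval.
        return 0.0
    # Cap implausibly large jumps to the slice bound.
    return min(diff, MAX_SLICE)

# Accumulating only capped slices yields a monotonic "python time"
# counter even when the underlying wall clock misbehaves.
python_time = 0.0
last = time.time()
now = time.time()
python_time += bfs_diff_ts(now, last)
```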

> - did you experiment with the time slice ? I tried some higher values and got better results, 
> without penalizing the latency. Maybe it could be interesting to look at it in more detail (and 
> on various platforms).

Can you post more details on your findings? It is possible that by using a bigger slice you helped the OS classify CPU-bound threads as such, improving the "synchronization" between BFS and the OS scheduler.
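For reference, a rough version of this experiment can be run on a stock (unpatched) py3k interpreter, where the GIL switch interval is adjustable via sys.setswitchinterval(); this only approximates the patch's slice parameter, and the workload below is a made-up stand-in for a real benchmark:

```python
import sys
import threading
import time

def spin(n):
    # CPU-bound loop to keep the GIL contended.
    while n:
        n -= 1

def throughput(interval, n=200_000, nthreads=2):
    """Run CPU-bound threads under the given switch interval and
    return elapsed wall time (lower is better)."""
    old = sys.getswitchinterval()
    sys.setswitchinterval(interval)
    try:
        threads = [threading.Thread(target=spin, args=(n,))
                   for _ in range(nthreads)]
        t0 = time.perf_counter()
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return time.perf_counter() - t0
    finally:
        # Restore the interpreter-wide setting.
        sys.setswitchinterval(old)
```

Comparing, say, `throughput(0.005)` against `throughput(0.05)` gives a feel for how slice length trades context-switch overhead against latency.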

Your notes on code optimization are taken, thanks.