Author: stutzbach
Recipients: collinwinter, eric.smith, pitrou, rhettinger, stutzbach, terry.reedy, tim.peters
Date: 2010-09-24.22:02:16
Message-id: <1285365738.52.0.0031136682724.issue9915@psf.upfronthosting.co.za>
In-reply-to:
Content:
> Does this help any?

No :-)

The problem is that the random data you generate in interpreter 1 won't be the same as the random data you generate in interpreter 2, so the results are not directly comparable: one of the two data sets may happen to be more easily sortable than the other.

That leaves two options:
1. save the random data to a file and use it in both interpreters, or
2. run a sufficiently large number of tests, with new random data for each test, so that you get a good estimate of the average time required to sort random data.

I have been using approach #2.
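
To make that concrete, here is a minimal sketch of approach #2 in plain Python. This is not the actual harness used for these measurements; the function name, list size, and trial count are illustrative.

    import random
    import statistics
    import time

    def mean_sort_time(n=100_000, trials=20):
        """Time list.sort() on fresh random data for each trial and
        return the mean, so that one unusually easy or hard input
        doesn't skew the comparison between interpreters."""
        timings = []
        for _ in range(trials):
            data = [random.random() for _ in range(n)]
            start = time.perf_counter()
            data.sort()
            timings.append(time.perf_counter() - start)
        return statistics.mean(timings)

    if __name__ == "__main__":
        print(f"mean sort time: {mean_sort_time():.4f} s")

Running the same script under both interpreters gives numbers that are comparable in aggregate, even though no single trial sorts identical data.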
History
Date                 User       Action  Args
2010-09-24 22:02:20  stutzbach  set     recipients: + stutzbach, tim.peters, collinwinter, rhettinger, terry.reedy, pitrou, eric.smith
2010-09-24 22:02:18  stutzbach  set     messageid: <1285365738.52.0.0031136682724.issue9915@psf.upfronthosting.co.za>
2010-09-24 22:02:17  stutzbach  link    issue9915 messages
2010-09-24 22:02:16  stutzbach  create