Author pitrou
Recipients alex, carljm, coderanger, dabeaz, eric.smith, flox, loewis, mahmoudimus, pitrou
Date 2010-02-17.03:00:35
SpamBayes Score 5.91898e-06
Marked as misclassified No
Message-id <1266375639.71.0.245501264547.issue7946@psf.upfronthosting.co.za>
In-reply-to
Content
Just a quick test under Linux (on a dual quad core machine):
- with iotest.py and echo_client.py both running Python 2.7: 25.562 seconds (410212.450 bytes/sec)
- with iotest.py and echo_client.py both running Python 3.2: 28.160 seconds (372362.459 bytes/sec)
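Both reported rates are consistent with a fixed transfer size of about 10 MiB per run; that size is inferred from time × rate and is not stated explicitly above. A quick sanity check:

```python
# Sanity check: both runs are consistent with transferring ~10 MiB
# (an inference from seconds * bytes/sec, not a figure from the results above).
runs = [(25.562, 410212.450),   # 2.7
        (28.160, 372362.459)]   # 3.2
for seconds, rate in runs:
    total = seconds * rate
    assert abs(total - 10 * 1024 * 1024) < 1024  # within 1 KiB of 10 MiB
```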

As already said, the "spinning endlessly" loop is a best case for thread switching latency in 2.x, because the opcodes are very short. If each opcode in the loop has an average duration of 20 ns, then with the default check interval of 100 the GIL gets speculatively released every 2 us (yes, microseconds). That's why I suggested trying more "realistic" workloads, such as those in ccbench.
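To spell out the arithmetic, a minimal sketch (the 20 ns average opcode duration is the illustrative figure from above, not a measurement):

```python
# Under 2.x, the eval loop speculatively releases the GIL every
# `check interval` bytecode instructions (default 100, adjustable
# with sys.setcheckinterval()).
CHECK_INTERVAL = 100   # 2.x default
OPCODE_NS = 20         # assumed average opcode duration, in nanoseconds

# Time between speculative GIL releases in a tight loop:
release_period_ns = CHECK_INTERVAL * OPCODE_NS
print(release_period_ns)  # 2000 ns, i.e. 2 microseconds

def spin():
    # The "spinning endlessly" case: each iteration is a handful of
    # very short opcodes, so release points come around constantly.
    i = 0
    while True:
        i += 1
```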

Also, as I told you, there might be interactions with the various timing heuristics that the kernel's TCP stack applies. It would be nice to test with UDP.
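For a UDP variant, an echo pair could look roughly like this (a sketch only; the address, port, and buffer size are arbitrary choices, not taken from iotest.py or echo_client.py):

```python
import socket

HOST, PORT = "127.0.0.1", 50307  # arbitrary local test address

def serve_once():
    # UDP echo server: receive a single datagram and send it back.
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind((HOST, PORT))
    data, addr = srv.recvfrom(4096)
    srv.sendto(data, addr)
    srv.close()

def echo_once(payload):
    # UDP echo client: send a datagram and wait for the echoed copy.
    cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    cli.settimeout(5.0)
    cli.sendto(payload, (HOST, PORT))
    data, _ = cli.recvfrom(4096)
    cli.close()
    return data
```

Since datagrams bypass Nagle-style coalescing and delayed ACKs, a UDP run would separate interpreter latency from the TCP stack's timing heuristics.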

That said, the observations are interesting.