Message80181
My results don't match yours (8 cores, Mac OS X):
-------- testing multiprocessing on 8 cores ----------
100000 elements map() time 0.0444118976593 s
100000 elements pool.map() time 0.0366489887238 s
100000 elements pool.apply_async() time 24.3125801086 s
Now, this could be for a variety of reasons: more cores, a different OS
(which means a different speed at which processes can be forked), and so
on. As Antoine and Amaury point out, you really need a use case that is
large enough to offset the cost of forking the processes in the first place.
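Timings like the ones above come from a benchmark along these lines; this is a minimal Python 3 sketch of the comparison, not the exact script from the issue (the trivial `square` task and the element count are illustrative):

```python
import time
from multiprocessing import Pool

def square(x):
    # Trivial per-element work: far cheaper than the cost of
    # starting worker processes and shipping arguments to them.
    return x * x

def bench(n=100000):
    """Time the serial built-in map() against pool.map()."""
    data = range(n)

    t0 = time.time()
    serial_result = list(map(square, data))
    t_serial = time.time() - t0

    with Pool() as pool:
        t0 = time.time()
        parallel_result = pool.map(square, data)
        t_parallel = time.time() - t0

    # Both paths must compute the same answers.
    assert serial_result == parallel_result
    return t_serial, t_parallel

if __name__ == "__main__":
    t_serial, t_parallel = bench()
    print("map():      %.4f s" % t_serial)
    print("pool.map(): %.4f s" % t_parallel)
```

With work this cheap, the serial `map()` often wins outright, which is exactly the point about needing a workload big enough to amortize the fork cost.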
I also ran this on an 8-core Ubuntu box with kernel 2.6.22.19, Python 2.6.1,
and 16 GB of RAM:
-------- testing multiprocessing on 8 cores ----------
100000 elements map() time 0.0258889198303 s
100000 elements pool.map() time 0.0339770317078 s
100000 elements pool.apply_async() time 11.0373139381 s
OS X is pretty snappy when it comes to forking.
Now, if you switch from the example you provided to Amaury's example, you
see a significant difference:
OS X, 8 cores:
-------- testing multiprocessing on 8 cores ----------
100000 elements map() time 30.704061985 s
100000 elements pool.map() time 4.95880293846 s
100000 elements pool.apply_async() time 23.6090102196 s
Ubuntu, kernel 2.6.22.19, Python 2.6.1:
-------- testing multiprocessing on 8 cores ----------
100000 elements map() time 38.3818569183 s
100000 elements pool.map() time 5.65878105164 s
100000 elements pool.apply_async() time 14.1757941246 s
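The persistent gap between pool.map() and pool.apply_async() in these runs comes down to task granularity: pool.map() splits the input into chunks, so many elements travel per IPC message, while apply_async() submits one task and one result round trip per element. A hedged sketch of that difference, with an illustrative `busy()` task standing in for the heavier workload:

```python
import time
from multiprocessing import Pool

def busy(x):
    # CPU-bound stand-in for a heavier per-element task
    # (the loop size here is purely illustrative).
    total = 0
    for i in range(1000):
        total += i * x
    return total

def compare(n=1000):
    data = list(range(n))
    with Pool() as pool:
        # pool.map() batches the input into chunks, so the
        # per-message IPC overhead is amortized over many elements.
        t0 = time.time()
        mapped = pool.map(busy, data)
        t_map = time.time() - t0

        # apply_async() creates one pending task and one result
        # object per element; each .get() is a separate round trip,
        # which dominates when the per-element work is cheap.
        t0 = time.time()
        handles = [pool.apply_async(busy, (x,)) for x in data]
        asynced = [h.get() for h in handles]
        t_async = time.time() - t0

    # Both submission styles must produce identical results.
    assert mapped == asynced
    return t_map, t_async

if __name__ == "__main__":
    t_map, t_async = compare()
    print("pool.map():         %.4f s" % t_map)
    print("pool.apply_async(): %.4f s" % t_async)
```

If you must use apply_async(), submitting batches of elements per call rather than one element per call recovers most of the difference.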
Date: 2009-01-19 15:32:54
User: jnoller
Link: issue5000 messages