
Author amaury.forgeotdarc
Recipients 0x666, amaury.forgeotdarc, jnoller, pitrou
Date 2009-01-19.15:28:19
SpamBayes Score 3.426867e-05
Marked as misclassified No
Message-id <1232378900.04.0.630291269187.issue5000@psf.upfronthosting.co.za>
In-reply-to
Content
The multiprocessing module indeed has some overhead:

- the processes are spawned lazily, when first needed. Before you perform
performance timings, you should "warm up" the Pool with a line like
    pool.map(f, range(mul.cpu_count()))
(starting a process is a slowish operation, especially on Windows).
This alone reduces the timings by a factor of two.

- the dispatch overhead of multiprocessing is certainly greater than a
single multiplication. multiprocessing is for CPU-bound functions!
And do not forget that you have *three* processes here: two from the
Pool, and your main program.
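
The warm-up step above can be sketched as a runnable example (a minimal
sketch: the two-worker Pool size and the trivial `f` are illustrative,
and `mul` is assumed to be an alias for the `multiprocessing` module,
as in the snippet above):

```python
import multiprocessing as mul

def f(x):
    return x * x

if __name__ == "__main__":
    pool = mul.Pool(2)
    # Warm up: worker processes are only spawned lazily, so force them
    # to start (and import the module) before measuring anything.
    pool.map(f, range(mul.cpu_count()))
    # ... real timings would go here ...
    pool.close()
    pool.join()
```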

As Antoine said, try with this function instead:

def f(x):
    for i in range(10):
        x = x * x
    return x

And the timings are much better...
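
Putting both points together, a complete timing sketch might look like
this (the input size, Pool size, and printed format are illustrative,
not from the original report; which side wins depends on the machine):

```python
import multiprocessing as mul
import time

def f(x):
    # CPU-bound: repeated squaring, so each call does real work
    # instead of a single cheap multiplication.
    for i in range(10):
        x = x * x
    return x

if __name__ == "__main__":
    data = list(range(2, 100))
    pool = mul.Pool(2)
    # Warm up the workers so process startup is not part of the timing.
    pool.map(f, range(mul.cpu_count()))

    start = time.time()
    serial = list(map(f, data))
    serial_time = time.time() - start

    start = time.time()
    parallel = pool.map(f, data)
    pool_time = time.time() - start

    assert serial == parallel
    print("serial: %.4fs  pool: %.4fs" % (serial_time, pool_time))
    pool.close()
    pool.join()
```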
History
Date                 User                Action         Args
2009-01-19 15:28:20  amaury.forgeotdarc  setrecipients  + amaury.forgeotdarc, pitrou, jnoller, 0x666
2009-01-19 15:28:20  amaury.forgeotdarc  setmessageid   <1232378900.04.0.630291269187.issue5000@psf.upfronthosting.co.za>
2009-01-19 15:28:19  amaury.forgeotdarc  link           issue5000 messages
2009-01-19 15:28:19  amaury.forgeotdarc  create