
Author: jwuttke
Recipients: docs@python, jwuttke
Date: 2017-02-15.22:59:20
SpamBayes Score: -1.0
Marked as misclassified: Yes
Message-id: <1487199560.37.0.441891248359.issue29575@psf.upfronthosting.co.za>
In-reply-to:
Content:
The »basic example of data parallelism using Pool« is too basic. It demonstrates the syntax, but otherwise makes no sense, and is therefore potentially confusing: it is blatant nonsense to run 5 processes when there are only 3 data items to be processed.
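For context, the current example reads roughly as follows (quoted from memory; the exact wording in the docs may differ slightly):

from multiprocessing import Pool

def f(x):
    return x*x

if __name__ == '__main__':
    with Pool(5) as p:
        print(p.map(f, [1, 2, 3]))

Let me suggest the following instead: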


from multiprocessing import Pool
import time

def f(x):
    time.sleep(1)    # stands in for a computation that takes noticeable time
    return x*x

if __name__ == '__main__':
    start_time = time.time()
    with Pool(4) as p:                      # 4 worker processes
        print(p.map(f, range(20)))          # 20 calls of f, distributed over the workers
    print("elapsed wall time: ", time.time()-start_time)

The sleep command makes f representative of a function that takes significant time to execute. Printing the elapsed wall time shows the user that the 20 calls of f have indeed run in parallel.
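For comparison, here is a sequential baseline (a minimal sketch; the timing figures are approximate and depend on the machine). With 4 worker processes and 20 one-second tasks, p.map needs roughly 20/4 = 5 seconds of wall time, whereas running the same calls one after another takes about 20 seconds:

import time

def f(x):
    time.sleep(1)
    return x*x

if __name__ == '__main__':
    start_time = time.time()
    print([f(x) for x in range(20)])                       # sequential: one call after another
    print("elapsed wall time: ", time.time()-start_time)   # roughly 20 s, vs. about 5 s with Pool(4)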
History
Date                 User     Action  Args
2017-02-15 22:59:20  jwuttke  set     recipients: + jwuttke, docs@python
2017-02-15 22:59:20  jwuttke  set     messageid: <1487199560.37.0.441891248359.issue29575@psf.upfronthosting.co.za>
2017-02-15 22:59:20  jwuttke  link    issue29575 messages
2017-02-15 22:59:20  jwuttke  create