Message206334
Nice to see you, Jurjen! Been a long time :-)
I'd like to see changes here too. It's unclear what "a lazy version" is intended to mean, exactly, but I agree the actual behavior is surprising, and that mpool.py is a lot less surprising in several ways.
I got bitten by this just last week, when running a parallelized search over a massive space _expected_ to succeed after exploring a tiny fraction of the search space. It ran out of system resources because imap_unordered() tried to queue up countless millions of work descriptions. I had hoped/expected that it would interleave generating and queueing "a few" inputs with retrieving outputs, much as mpool.py behaves.
In that case I switched to using apply_async() instead, interposing my own bounded queue (a collections.deque used only in the main program) to throttle the main program. I'm still surprised it was necessary ;-)
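The workaround described above can be sketched roughly as follows. This is only an illustration, not the code from the issue: `work`, the pool size, and the `limit` value are hypothetical stand-ins, and unlike imap_unordered() this simple version yields results in submission order.

```python
import collections
import multiprocessing


def work(x):
    # Hypothetical worker; stands in for one expensive search step.
    return x * x


def throttled_map(pool, func, inputs, limit=8):
    """Yield results while keeping at most `limit` tasks in flight.

    apply_async() plus a bounded collections.deque in the main process,
    so the input generator is consumed only about as fast as results
    are retrieved -- instead of imap_unordered() eagerly queueing the
    entire (possibly enormous) input space.
    """
    pending = collections.deque()
    for item in inputs:
        pending.append(pool.apply_async(func, (item,)))
        if len(pending) >= limit:
            # Block on the oldest outstanding task before submitting more.
            yield pending.popleft().get()
    # Drain whatever is still in flight.
    while pending:
        yield pending.popleft().get()


if __name__ == "__main__":
    with multiprocessing.Pool(2) as pool:
        results = list(throttled_map(pool, work, range(10), limit=4))
    print(results)  # prints [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Waiting on the *oldest* task keeps the bookkeeping trivial at the cost of head-of-line blocking; a fancier version could scan `pending` for any ready result to recover imap_unordered()-style completion order.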
Date                | User       | Action | Args
2013-12-16 17:25:37 | tim.peters | set    | recipients: + tim.peters, jneb
2013-12-16 17:25:37 | tim.peters | set    | messageid: <1387214737.38.0.685132557079.issue19993@psf.upfronthosting.co.za>
2013-12-16 17:25:37 | tim.peters | link   | issue19993 messages
2013-12-16 17:25:36 | tim.peters | create |