classification
Title: itertools.product with infinite iterator causes MemoryError.
Type: behavior Stage: resolved
Components: Library (Lib) Versions: Python 3.2, Python 3.3, Python 2.7
process
Status: closed Resolution: not a bug
Dependencies: Superseder:
Assigned To: rhettinger Nosy List: Lucas Wiman, Sumudu.Fernando, eric.araujo, falsetru, nneonneo, rhettinger, terry.reedy, tim.peters, yegle
Priority: normal Keywords:

Created on 2010-10-15 00:45 by falsetru, last changed 2016-09-18 07:10 by terry.reedy. This issue is now closed.

Files
File name Uploaded Description Edit
product.py Sumudu.Fernando, 2012-01-18 11:01 alt. implementation of cartesian product generator
Messages (15)
msg118735 - (view) Author: Jeong-Min Lee (falsetru) Date: 2010-10-15 00:44
According to the documentation, itertools.product is equivalent to nested for-loops in a generator expression.
But, itertools.product(itertools.count(2010)) is not.

>>> import itertools
>>> (year for year in itertools.count(2010))
<generator object <genexpr> at 0x026367D8>
>>> itertools.product(itertools.count(2010))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
MemoryError
msg118831 - (view) Author: Terry J. Reedy (terry.reedy) * (Python committer) Date: 2010-10-15 21:18
The input to itertools.product must be a finite sequence of finite iterables. itertools.count(startvalue) produces an infinite sequence of ints (which are not themselves iterable). Passing a count object to product causes product's internal tuple() call to consume it until memory is exhausted. You should get the same result from tuple(itertools.count()), which is what happens inside itertools.product.

Your generator example does not show the same problem because it is not nested and never runs.

If you are unfamiliar with Python and its stdlib, please ask questions on python-list (mirror on comp.lang.python and gmane.comp.python.general) before reporting what you think might be an error.
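Terry's point can be seen directly in the pure-Python equivalent given in the itertools documentation: every input iterable is turned into a tuple before any output is produced, so an unbounded input exhausts memory inside the tuple() call itself. A sketch of that documented equivalent (the name product_equivalent is used here to avoid shadowing the real function):

```python
import itertools

def product_equivalent(*iterables, repeat=1):
    # Pure-Python equivalent from the itertools docs: each input is
    # materialized as a tuple before the first result is yielded, which
    # is where product(count()) runs out of memory.
    pools = [tuple(pool) for pool in iterables] * repeat
    result = [[]]
    for pool in pools:
        result = [x + [y] for x in result for y in pool]
    for prod in result:
        yield tuple(prod)

assert list(product_equivalent(range(2), 'ab')) == \
    list(itertools.product(range(2), 'ab'))
```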
msg151533 - (view) Author: Sumudu Fernando (Sumudu.Fernando) Date: 2012-01-18 11:01
I don't agree with the response to this.

It is true that as implemented (at least in 2.7, I don't have 3.x handy to check) itertools.product requires finite iterables.  However this seems to be simply a consequence of the implementation and not part of the "spirit" of the function, which as falsetru pointed out is stated to be "equivalent to nested for-loops in a generator expression".

Indeed, implementing product in Python (in a recursive way) doesn't have this problem.

Perhaps a more convincing set of test cases to show why this could be considered a problem:

>>> import itertools
>>> itertools.product(xrange(100))
<itertools.product object at 0xb7ed334c>
>>> itertools.product(xrange(1000000))
<itertools.product object at 0xb7ed620c>
>>> itertools.product(xrange(1000000000))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
MemoryError

Note that I'm not even using an infinite iterable, just a really big one.  The issue is that creating the iterator fails with a MemoryError, before I've even asked for any values.  Consider the following:

for (i, v) in enumerate(itertools.product(a, b, c)):
    if i < 1000:
        print v
    else:
        break

When a, b, and c are relatively small, finite iterables, this code works fine.  However, if *any* of them are too large (or infinite), we see a MemoryError before the loop even starts, even though only 1000 elements are required.  I think it's conceivable that we might want something like "a = itertools.cycle(xrange(5))", and even that will break this loop.

That said, in all such cases I could think of, we can always either truncate big iterators before passing them to product, or use zip/comprehensions to add their values into the tuple (or some combination of those).  So maybe it isn't a huge deal.

I've attached my implementation of product which deals with infinite iterators by leveraging enumerate and itertools.cycle, and is pretty much a direct translation of the "odometer" idea.  This doesn't support the "repeat" parameter (but probably could using itertools.tee).  One thing that should be changed is that itertools.cycle shouldn't be called / doesn't need to be called on infinite iterators, but I couldn't figure out how to do that.  Maybe there is some way to handle it in the C implementation?

In summary: the attached implementation of product can accept any mix of infinite / finite iterators, returning a generator intended for partial consumption.  The existing itertools.product doesn't work in this case.
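The truncation workaround mentioned above (cutting a large or infinite iterator down to size before handing it to product) can be sketched with itertools.islice:

```python
import itertools

# Truncate an unbounded iterator before passing it to product.
years = itertools.islice(itertools.count(2010), 5)  # first five years only
pairs = list(itertools.product(years, range(2)))

assert pairs[0] == (2010, 0)
assert pairs[-1] == (2014, 1)
assert len(pairs) == 10
```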
msg151575 - (view) Author: Terry J. Reedy (terry.reedy) * (Python committer) Date: 2012-01-18 22:55
Proposing an expansion of the definition of product() is a *completely* different issue from the validity of count() as an input. I answered correctly given the current definition of product(): it is not valid input. It is also not valid input to your proposed revision:
>>> tuple(itertools.cycle(enumerate(it)) for it in itertools.count())
...
TypeError: 'int' object is not iterable
-- just as I said.

If you want to propose an enhancement, either open a new enhancement issue or post to python-ideas. Since new features can only go in 3.3+, post 3.x code, not 2.x. And please do not quibble about the difference between 'infinite' and 'too large to fit in memory'.
msg151579 - (view) Author: Sumudu Fernando (Sumudu.Fernando) Date: 2012-01-18 23:13
>>> tuple(itertools.cycle(enumerate(it)) for it in itertools.count())
  ...
  TypeError: 'int' object is not iterable

That is not what happens in the function, though!  That would correspond to doing product(*itertools.count(2010)), but if you try that you won't even get past argument expansion (obviously).  Doing product(*xrange(10)) gives the error you're talking about, for example.

product(itertools.count(2010)) works perfectly well with the version I posted, though it is a bit silly to do it that way since it produces the same values as count itself (which is what "cartesian product" should do), while saving extra bookkeeping along the way.

Anyway, I'm pretty new to python and I don't think this is quite relevant enough to warrant opening a new ticket.  I'm happy to leave it here for the education of the next neophyte who stumbles across this idiosyncrasy of itertools.product.
msg151605 - (view) Author: Terry J. Reedy (terry.reedy) * (Python committer) Date: 2012-01-19 03:44
A relatively simple change would be to allow the first iterable to be 'infinite', when repeat==1, by not calling tuple() on it. The reason for turning the iterables into concrete sequences is because they might not be reiterable. (cycle() does the same for the same reason.) But since the first iterable is only iterated once, this does not apply to it.

    if repeat == 1:
        pools = [args[0]] + [tuple(pool) for pool in args[1:]]
    else:
        pools = [tuple(pool) for pool in args] * repeat

The counter argument to this or any generalized proposal is that one can expand the product() into enough loops to avoid infinite (or very large) args. For example, the following produces '1AA', '1AB', ..., '1EE', '2AA', ... indefinitely.

naa=(''.join((str(n),)+s) for n in itertools.count(1)
     for s in itertools.product(string.ascii_uppercase[0:5], repeat=2))

RAYMOND: Do you think the doc should specify that each iterable must be finite, and that explicit loops are the alternative if not?
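The 'lazy first iterable' idea above can also be sketched in Python as a wrapper over the existing function (product_lazy_first is a hypothetical name, not a proposed API, and it handles only the repeat=1 case):

```python
import itertools

def product_lazy_first(first, *rest):
    # Only the trailing iterables are materialized; the first is
    # streamed one element at a time, so it may be unbounded.
    pools = [tuple(pool) for pool in rest]
    for head in first:
        for tail in itertools.product(*pools):
            yield (head,) + tail

# Works with an infinite first argument:
it = product_lazy_first(itertools.count(2010), range(2))
assert next(it) == (2010, 0)
assert next(it) == (2010, 1)
assert next(it) == (2011, 0)
```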
msg228151 - (view) Author: yegle (yegle) Date: 2014-10-02 03:08
Found another example that shows horrible performance when using itertools.product:

def gen():
  l = [itertools.permutations(range(10)) for _ in range(10)]
  g = itertools.product(*l)
  for i in g:
    yield i

A simple next() to this generator takes 16 seconds on my desktop.

I use this recursive product() instead and the performance is acceptable:

def product(*args):
    if len(args) == 1:
        for i in args[0]:
            yield [i]
    else:
        for i in args[0]:
            for j in product(*args[1:]):
                j.append(i)
                yield j
msg266774 - (view) Author: Lucas Wiman (Lucas Wiman) Date: 2016-05-31 20:17
I realize this is an old (and closed!) thread, but here are a few notes from running into this issue recently, working on a search problem with infinite iterators.

First, note that yegle's recursive solution is not correct, since it exhausts the iterators:
>>> def product(*args):
...     if len(args) == 1:
...         for i in args[0]:
...             yield [i]
...     else:
...         for i in args[0]:
...             for j in product(*args[1:]):
...                 j.append(i)
...                 yield j
... 
>>> 
>>> assert len(list(itertools.product(*[iter(range(2)) for _ in range(10)]))) == 2 ** 10
>>> assert len(list(product(*[iter(range(2)) for _ in range(10)]))) == 2 ** 10
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AssertionError

This is fairly tricky to get right, and even a correct fully-lazy solution requires each iterator to eventually be fully stored in memory. For that reason, Sumudu's MemoryError is not something that can be easily escaped, though it can be delayed.

For infinite iterators, the naive "lexicographic" solution to product(count(0), count(0)) would never get to 1 on the first iterator, yielding (0, 0), (0, 1), (0, 2), ...  To fully explore the space, the user needs to think about how to iterate over that space, so IMO keeping infinite iterators as invalid arguments to itertools.product makes sense.
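For reference, the Cantor-style zig-zag enumeration mentioned here might look like the following sketch (diagonal_product is a hypothetical name; both inputs are assumed infinite, and the output order differs completely from itertools.product):

```python
import itertools

def diagonal_product(a, b):
    # Cantor-style zig-zag over two infinite iterators: cache the values
    # seen so far so that every pair (a[i], b[j]) is eventually reached.
    a_seen, b_seen = [], []
    for n in itertools.count():
        a_seen.append(next(a))
        b_seen.append(next(b))
        for i in range(n + 1):
            yield (a_seen[i], b_seen[n - i])

it = diagonal_product(itertools.count(), itertools.count())
assert [next(it) for _ in range(6)] == \
    [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```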
msg276836 - (view) Author: Robert Xiao (nneonneo) * Date: 2016-09-17 20:12
I think this _is_ a bug. Most of the itertools are quite thrifty with memory, and exceptions are explicitly documented.

The itertools generally only require memory proportional to the number of iterations consumed. However, `itertools.product` requires an enormous up-front memory cost even if only a few iterations are performed. There are good reasons to extract only a few iterations from a `product` - for example, a search procedure where only the first item in the product is infinite:

    for year, day in product(count(2010), range(365)):
        if rare_event(year, day):
            break

Or, in an application I recently tried, to count occurrences of something up to a limit:

    prod = product(count(), range(n))
    filtered = ifilter(pred, prod)
    want = list(islice(filtered, 101))
    if len(want) > 100:
        print "Too many matches"

Having `product` greedily consume all of its input iterators is rather antithetical to the notion of itertools being lazy generators, and it breaks efficient composability with the other itertools.

The fix should be fairly straightforward: like tee, maintain lists of items already generated as the input iterators are exhausted, and add a note that "product"'s memory usage may slowly grow over time (as opposed to growing to maximum size immediately).
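A sketch of that tee-like approach: cache each input's items as they are first consumed, so memory grows only as far as the iteration has actually progressed (lazy_product is a hypothetical name, and the repeat parameter is omitted):

```python
import itertools

def lazy_product(*iterables):
    # Cache each input's items as they are first consumed instead of
    # materializing everything up front.  Lexicographic order is
    # preserved; memory grows as iteration progresses (and, as noted
    # above, eventually holds every item of every input).
    iters = [iter(it) for it in iterables]
    caches = [[] for _ in iterables]

    def pool(k):
        # Replay input k from its cache, extending the cache on demand.
        i = 0
        while True:
            if i == len(caches[k]):
                try:
                    caches[k].append(next(iters[k]))
                except StopIteration:
                    return
            yield caches[k][i]
            i += 1

    def rec(k):
        if k == len(iterables):
            yield ()
            return
        for head in pool(k):
            for tail in rec(k + 1):
                yield (head,) + tail

    return rec(0)

# An infinite *first* input no longer fails up front:
it = lazy_product(itertools.count(), range(2))
assert [next(it) for _ in range(4)] == [(0, 0), (0, 1), (1, 0), (1, 1)]
```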
msg276843 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2016-09-17 20:43
I see nothing wrong with combinatorial generators materializing their inputs before generation.  Perhaps it should be documented clearly.  It's certainly not limited to `product()`.  For example,

>>> for i in itertools.combinations(itertools.count(), 2):
...	print(i)

dies with MemoryError too.  But I expected that ;-)
msg276853 - (view) Author: Robert Xiao (nneonneo) * Date: 2016-09-17 21:47
It wouldn't be that complicated to have the combinatorial functions be lazy too - I feel like there's some value to be had there.

What's the major objection to making all of the itertools functions "maximally lazy"?
msg276855 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2016-09-17 22:06
> What's the major objection to making all of the itertools functions "maximally lazy"?

It complicates the code and provides nearly zero benefit in practical use cases of the combinatoric functions.
msg276857 - (view) Author: Lucas Wiman (Lucas Wiman) Date: 2016-09-17 23:28
It is quite thrifty with memory compared to the size of the search space: O(n*k) memory for a search space of size O(n**k).

I think a reasonable expectation for itertools.product is that it should _eventually_ reach any element in the search space. The only way to maintain that expectation for infinite iterators would be to use a Cantor-style zig-zag enumeration. Even for your example of searching over a single infinite iterator, you could end up exploring dates in the order (2010, 0), (2011, 0), (2012, 0), ... by switching the ordering of inputs. You'd never hit the rare date unless it happened on the first day of a year. 

There's no way to have special behavior _only_ for infinite iterators, since an infinite iterator is indistinguishable from a long-finite one due to the halting problem, so this would require changing the behavior for finite iterators too. While the output ordering is not explicitly documented, the current search order is probably depended on by plenty of existing programs, and has less surprise than a Cantor enumeration.

So the only use case where the lazy enumeration matters is if:
1. You have one or more _very_ large iterators.
2. You _do not care_ what order they are searched in, or even if all possible tuples appear in the output.
3. Despite not caring about the output ordering, you still only want a few examples.

I'm struggling to think of cases where that would come up.
msg276860 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2016-09-18 00:09
Lucas, I largely agree, but it is documented that the various combinatorial generators emit items in a particular lexicographic order.  So that is documented, and programs definitely rely on it.

That's why, in an earlier comment, Terry suggested that perhaps `product()` could make a special case of its (and only its) first argument (and only when repeat=1).  Each element of the first iterable is needed only once (although it may be copied into any number of outputs), so there's no actual need to make a tuple of it first.  The implementation is just simpler and clearer by treating all arguments alike.

Which is a good enough reason for me - and the "use cases" for an unbounded first argument look exceptionally weak to me.
msg276872 - (view) Author: Terry J. Reedy (terry.reedy) * (Python committer) Date: 2016-09-18 07:10
Recipes for handling an infinite first iterator for product, or an infinite iterator for combinations (they could be similar), that use and build on the current functions, without the output order constraint, might be candidates for the recipe section.  I will probably write at least one of them for my own purposes someday.
History
Date User Action Args
2016-09-18 07:10:49terry.reedysetmessages: + msg276872
2016-09-18 00:09:15tim.peterssetmessages: + msg276860
2016-09-17 23:28:29Lucas Wimansetmessages: + msg276857
2016-09-17 22:06:47rhettingersetmessages: + msg276855
2016-09-17 21:47:24nneonneosetmessages: + msg276853
2016-09-17 20:43:38tim.peterssetnosy: + tim.peters
messages: + msg276843
2016-09-17 20:12:01nneonneosetnosy: + nneonneo
messages: + msg276836
2016-05-31 20:17:43Lucas Wimansetnosy: + Lucas Wiman
messages: + msg266774
2014-10-02 03:08:02yeglesetnosy: + yegle
messages: + msg228151
2012-01-19 03:44:25terry.reedysetmessages: + msg151605
2012-01-18 23:13:38Sumudu.Fernandosetmessages: + msg151579
2012-01-18 22:55:13terry.reedysetstage: resolved
messages: + msg151575
versions: + Python 3.3, - Python 3.1
2012-01-18 11:01:33Sumudu.Fernandosetfiles: + product.py
nosy: + Sumudu.Fernando
messages: + msg151533

2010-10-15 21:18:43terry.reedysetstatus: open -> closed
versions: - Python 2.6
nosy: + terry.reedy

messages: + msg118831

resolution: not a bug
2010-10-15 14:19:44eric.araujosetnosy: + eric.araujo
2010-10-15 01:25:41benjamin.petersonsetassignee: rhettinger

nosy: + rhettinger
2010-10-15 00:45:00falsetrucreate