Message149493
I just noticed this problem as well.
I don't know the code well enough to determine if Brian's patch is the
right thing to do. The documentation claims that maxtries is used to
put a limit on recursion: http://docs.python.org/dev/library/urllib.request.html#urllib.request.FancyURLopener.
As written, the code does limit recursion, but the recursion count is
*not* reset when an exception is thrown from 'redirect_internal'.
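To make the failure mode concrete, here is a minimal sketch (not the actual
urllib code; the class and method names are illustrative) of a per-instance
retry counter that is reset only on the success path, so an escaping
exception leaves it incremented:

```python
# Illustrative sketch of the bug pattern: the counter is reset only on
# the success path, so exceptions leave it permanently incremented.
class Opener:
    maxtries = 3

    def __init__(self):
        self.tries = 0

    def open_http(self, fail):
        self.tries += 1
        if self.tries >= self.maxtries:
            raise OSError("redirect error - too many redirects")
        if fail:
            # Simulates an exception escaping from redirect_internal;
            # note the reset below is never reached.
            raise ValueError("simulated redirect failure")
        self.tries = 0  # reset happens only on the success path
        return "ok"

opener = Opener()
for _ in range(2):
    try:
        opener.open_http(fail=True)
    except ValueError:
        pass
# The counter was never reset, so a third call on the same cached
# instance trips the maxtries limit even though the call itself is new.
try:
    opener.open_http(fail=False)
except OSError as exc:
    print("hit limit:", exc)  # → hit limit: redirect error - too many redirects
```

Each failing call leaves `tries` one higher than before, so a reused
instance eventually hits `maxtries` on a request that would otherwise
succeed.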
I can see why this is a problem in our test code. 'Lib/test/test_urllib.py'
has the following helper function:
    _urlopener = None
    def urlopen(url, data=None, proxies=None):
        """urlopen(url [, data]) -> open file-like object"""
        global _urlopener
        if proxies is not None:
            opener = urllib.request.FancyURLopener(proxies=proxies)
        elif not _urlopener:
            opener = urllib.request.FancyURLopener()
            _urlopener = opener
        else:
            opener = _urlopener
        if data is None:
            return opener.open(url)
        else:
            return opener.open(url, data)
Notice that the 'FancyURLopener' instance is cached in a global variable.
Because the same instance is reused from run to run, maxtries is eventually
overrun. If resetting maxtries on the exception path isn't safe, then we
can just remove the caching from the tests. The more I think about it,
though, the more Brian's patch seems correct.
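Reworking the sketch above in the spirit of the fix being discussed
(resetting the counter on the exception path as well; this is a hypothetical
illustration, not Brian's actual patch), a `try`/`finally` guarantees the
reset on every exit:

```python
# Hypothetical sketch of resetting the retry counter on all exit paths;
# names are illustrative, not the real urllib internals.
class Opener:
    maxtries = 3

    def __init__(self):
        self.tries = 0

    def open_http(self, fail):
        self.tries += 1
        try:
            if self.tries >= self.maxtries:
                raise OSError("redirect error - too many redirects")
            if fail:
                raise ValueError("simulated redirect failure")
            return "ok"
        finally:
            self.tries = 0  # reset even when an exception propagates

opener = Opener()
for _ in range(5):
    try:
        opener.open_http(fail=True)
    except ValueError:
        pass
print(opener.open_http(fail=False))  # → ok; the counter never accumulates
```

With the reset in `finally`, a cached instance can be reused across any
number of failing calls without the stale count tripping the limit.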
Senthil, can you chime in?
History:
2011-12-15 03:38:14 | meador.inge | set    | recipients: + meador.inge, orsenthil, skrah, bbrazil
2011-12-15 03:38:14 | meador.inge | set    | messageid: <1323920294.08.0.244175490159.issue12923@psf.upfronthosting.co.za>
2011-12-15 03:38:13 | meador.inge | link   | issue12923 messages
2011-12-15 03:38:12 | meador.inge | create |