
Author jacques
Recipients jacques
Date 2009-01-29.22:34:58
Message-id <1233268501.09.0.849179242722.issue5102@psf.upfronthosting.co.za>
Content
When urllib2 fetches a URL that results in a redirect, the request to
the redirect target does not inherit the timeout that was passed to the
original opener.  The redirected fetch (which is a new request) therefore
gets the default socket timeout instead of the timeout the user
originally asked for.  This is obviously a bug.
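
For instance, a rough sketch of how this bites in practice (the URL is
made up, and assumed to answer with a 302 pointing at another, slower
host):

    import urllib2

    # The 5-second timeout applies to the first request only; the
    # follow-up request built by HTTPRedirectHandler is opened without
    # it and runs with the default socket timeout (often "no timeout"),
    # so it can hang well past the limit the caller asked for.
    urllib2.urlopen("http://www.example.com/old-location", timeout=5)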

So we have in urllib2.py in 2.6.1:

def http_error_302(self, req, fp, code, msg, headers):
    ...
    return self.parent.open(new)

This should be:

    return self.parent.open(new, timeout=req.timeout)

or something in that vein.
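
In the meantime, a workaround sketch (the URL is again made up; the
point is only that the redirected request falls back to the module-level
default, so setting that default keeps redirects bounded too):

    import socket
    import urllib2

    # Make the module-level default match the per-request timeout so the
    # follow-up request created by the redirect handler is also limited.
    socket.setdefaulttimeout(10)
    response = urllib2.urlopen("http://www.example.com/old-location", timeout=10)
    print response.geturl()  # final URL after any redirects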


Of course, to be 100% correct, you should probably keep track of how
much time has elapsed since the original URL fetch went out and reduce
the remaining timeout accordingly, but I'm not asking for miracles :-)


Jacques