Message80787
When urllib2 fetches a URL that results in a redirect, the connection
for the redirect does not inherit the timeout given to the original
opener call. The result is that the redirected fetch (which is a new
request) gets the default socket timeout instead of the timeout the
user originally requested. This is clearly a bug.
So in urllib2.py in 2.6.1 we have:

    def http_error_302(self, req, fp, code, msg, headers):
        .....
        return self.parent.open(new)

This should be:

        return self.parent.open(new, timeout=req.timeout)

or something in that vein.
Of course, to be 100% correct, you should probably track how much time
has elapsed since the original fetch went out and reduce the timeout
accordingly, but I'm not asking for miracles :-)
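That "reduce the timeout by the elapsed time" refinement can be sketched as a small helper. The remaining_timeout name and its interface are illustrative, not part of urllib2: the idea is that the opener records a start timestamp and passes the remaining budget, rather than the full original timeout, to each redirected request.

```python
import time


def remaining_timeout(original_timeout, start_time, now=None):
    """Timeout to pass to a redirected request, given the overall budget.

    original_timeout: seconds the caller allowed for the whole fetch,
        or None for "no timeout at all".
    start_time: time.monotonic() value captured when the first request
        was issued.
    now: current time, defaulting to time.monotonic(); overridable for
        testing.
    """
    if original_timeout is None:
        return None  # caller asked for no timeout; keep it that way
    if now is None:
        now = time.monotonic()
    remaining = original_timeout - (now - start_time)
    if remaining <= 0:
        # The overall budget is already spent; fail rather than issue a
        # redirected request with a zero or negative timeout.
        raise TimeoutError("timeout exhausted before following redirect")
    return remaining
```

A redirect handler following this scheme would then call something like `self.parent.open(new, timeout=remaining_timeout(original_timeout, start_time))`.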
Jacques
Date                | User    | Action | Args
2009-01-29 22:35:01 | jacques | set    | recipients: + jacques
2009-01-29 22:35:01 | jacques | set    | messageid: <1233268501.09.0.849179242722.issue5102@psf.upfronthosting.co.za>
2009-01-29 22:34:59 | jacques | link   | issue5102 messages
2009-01-29 22:34:58 | jacques | create |