
Author chrisl
Recipients cameron, chrisl, djc, doko, dpeterson, facundobatista, kxroberto, matb, mihalis68, mjpieters, mwilck, nfl, orsenthil, poeml, vila
Date 2008-07-10.21:59:53
SpamBayes Score 0.0747749
Marked as misclassified No
Message-id <>
In-reply-to <>

After taking a closer look at your test program, I believe it is not
correct. That may be the urllib way of doing things, but it is not the
urllib2 way. You should add a proxy handler rather than calling
set_proxy directly on request objects.

The reason is that you modify req directly. urllib2 will not handle
this correctly if the request needs to be rebuilt and forwarded to
other handlers. For example, if the request is redirected, I believe
your test program will not work.

So the right way of doing things, IMHO, is simply to add a proxy
handler that contains the proxy you want. That handler will then
automatically call set_proxy() on every outgoing request, and it still
works if the request needs to be rebuilt.
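A minimal sketch of the handler-based approach described above; the
proxy URL is a placeholder of my own, not one taken from the original
report:

```python
# Compatibility shim so the sketch runs on Python 3 too, where the
# same API lives in urllib.request.
try:
    import urllib2  # Python 2
except ImportError:
    import urllib.request as urllib2

# A ProxyHandler applies the proxy to every request the opener
# processes, including requests rebuilt after a redirect.
# 'http://proxy.example.com:3128' is a hypothetical proxy address.
proxy_handler = urllib2.ProxyHandler(
    {'https': 'http://proxy.example.com:3128'})
opener = urllib2.build_opener(proxy_handler)

# Either use the opener directly:
#   response = opener.open(targeturl)
# ...or install it so plain urllib2.urlopen() uses it globally:
urllib2.install_opener(opener)
```

Because the proxy lives in the handler chain rather than on one
request object, every request built by this opener gets the proxy
applied, which is what the patch relies on.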

Then my patch should just work in that case. Trying to make it work
for your test script would make things unnecessarily complicated, and
that defeats the benefit of the handler call chain in urllib2.

Does it make sense to you?



On Mon, Jun 30, 2008 at 1:49 AM, Nagy Ferenc László
<> wrote:
> The test program was:
> import urllib2
> targeturl = ''
> proxyhost = ''
> req = urllib2.Request(targeturl)
> req.set_proxy(proxyhost, 'https')
> response = urllib2.urlopen(req)
> print
Date	User	Action	Args
2008-07-10 22:04:56	chrisl	set	spambayes_score: 0.0747749 -> 0.0747749
		recipients: + chrisl, doko, facundobatista, mjpieters, orsenthil, kxroberto, vila, djc, mwilck, mihalis68, dpeterson, poeml, cameron, matb, nfl
2008-07-10 21:59:56	chrisl	link	issue1424152 messages
2008-07-10 21:59:54	chrisl	create