Author apocalyptech
Recipients Alex Quinn, apocalyptech, laurento.frittella, martin.panter, msornay, orsenthil, pitrou, raylu, serhiy.storchaka
Date 2017-02-12.18:42:20
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1486924941.48.0.37492621278.issue14044@psf.upfronthosting.co.za>
In-reply-to
Content
I've just encountered this problem on Python 3.6, on a different URL.  The difference is that it's not triggered by every page load, though I'd say it happens at least half the time:

import urllib.request

# Intermittently raises http.client.IncompleteRead partway through read()
html = urllib.request.urlopen('http://www.basicinstructions.net/').read()
print('Succeeded!')

I realize that the root problem here may be an HTTP server doing something improper, but I've got no way of fixing someone else's webserver.  It'd be really nice if there were a reasonable way of handling this in Python itself.  As mentioned in the original report, other methods of retrieving this URL work without fail (curl/wget/etc).  As it is, the only way for me to be sure of retrieving the entire page contents is by looping until I don't get an IncompleteRead, which is hardly ideal.
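For reference, the retry loop I'm resorting to looks roughly like the sketch below.  The function name and the injectable `opener` parameter are my own (the latter only so the loop can be exercised without hitting the network); the relevant library behavior is that `http.client.IncompleteRead` carries whatever bytes did arrive in its `partial` attribute:

```python
import http.client
import urllib.request

def read_all(url, max_attempts=5, opener=urllib.request.urlopen):
    """Retry the fetch until read() completes without IncompleteRead.

    If every attempt fails, fall back to the partial body from the
    last attempt rather than returning nothing.
    """
    partial = b''
    for _ in range(max_attempts):
        try:
            return opener(url).read()
        except http.client.IncompleteRead as e:
            # e.partial holds the bytes received before the server
            # closed the connection early
            partial = e.partial
    return partial
```

This papers over the problem but obviously wastes bandwidth re-fetching the page, which is why a fix (or at least an option) in urllib itself would be preferable.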
History
Date                 User          Action  Args
2017-02-12 18:42:21  apocalyptech  set     recipients: + apocalyptech, orsenthil, pitrou, Alex Quinn, martin.panter, serhiy.storchaka, msornay, raylu, laurento.frittella
2017-02-12 18:42:21  apocalyptech  set     messageid: <1486924941.48.0.37492621278.issue14044@psf.upfronthosting.co.za>
2017-02-12 18:42:21  apocalyptech  link    issue14044 messages
2017-02-12 18:42:20  apocalyptech  create