
Author Lukasa
Recipients Arfrever, Lukasa, christian.heimes, r.david.murray
Date 2013-12-16.15:16:48
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1387207009.23.0.842589492686.issue19996@psf.upfronthosting.co.za>
In-reply-to
Content
Maybe. If we do it, we have to apply that timeout to all the socket actions on that HTTP connection, which would change the default value of the timeout parameter on the HTTPConnection object from socket._GLOBAL_DEFAULT_TIMEOUT to whatever value was chosen. We could do this for reads only and avoid applying the timeout to connect() calls, but that's kind of weird.

We hit the same problem, though: by default, HTTPConnections block indefinitely on all socket calls, so we'd be changing that default to some finite timeout instead. Does that sound like a good way to go? The reads-only variant would look roughly like the sketch below.
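If we went that way, I'd picture something along these lines. This is a minimal sketch only: the ReadTimeoutHTTPConnection name, the read_timeout parameter, and the 60-second default are illustrative, not a proposed API.

    import http.client

    class ReadTimeoutHTTPConnection(http.client.HTTPConnection):
        # Hypothetical subclass: put a finite timeout on reads only.
        def __init__(self, host, port=None, read_timeout=60, **kwargs):
            super().__init__(host, port, **kwargs)
            self._read_timeout = read_timeout

        def connect(self):
            # connect() keeps whatever timeout the constructor was given
            # (the global default unless the caller overrode it)...
            super().connect()
            # ...then the established socket gets a finite timeout, so
            # recv() during getresponse() can't block forever.
            self.sock.settimeout(self._read_timeout)

    conn = ReadTimeoutHTTPConnection('example.com', read_timeout=30)
    conn.request('GET', '/')
    response = conn.getresponse()

The point is just that the connect phase is left alone while everything read after the connection is established is bounded.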

As for curl's error recovery strategy, I'm pretty sure it just keeps parsing the header block. That can definitely be done here. We do have an error reporting mechanism as well (sort of): we set the HTTPMessage.status field to some error string. We could do that and continue to parse the header block; that's probably the least destructive way to fix this. The rough shape of that parsing loop is sketched below.
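To make that concrete, here's roughly the loop I have in mind. The function name, the iso-8859-1 decode, and the malformed_lines attribute are all illustrative, not the actual http.client internals.

    import email.parser

    def parse_headers_tolerantly(fp):
        # Sketch of the "keep going" approach: collect well-formed header
        # lines, remember the ones we can't make sense of, and parse the
        # rest instead of aborting the whole response.
        good, bad = [], []
        while True:
            line = fp.readline()
            if line in (b'\r\n', b'\n', b''):
                break
            if b':' in line or line.startswith((b' ', b'\t')):
                good.append(line)
            else:
                bad.append(line)   # malformed field: note it and move on
        msg = email.parser.Parser().parsestr(
            b''.join(good).decode('iso-8859-1'))
        # Hypothetical reporting hook, in the spirit of stashing an error
        # string on the message rather than raising.
        msg.malformed_lines = bad
        return msg

Callers that care can inspect the recorded malformed lines; everyone else gets the headers that did parse, which matches what curl appears to do.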
History
Date                 User    Action  Args
2013-12-16 15:16:49  Lukasa  set     recipients: + Lukasa, christian.heimes, Arfrever, r.david.murray
2013-12-16 15:16:49  Lukasa  set     messageid: <1387207009.23.0.842589492686.issue19996@psf.upfronthosting.co.za>
2013-12-16 15:16:49  Lukasa  link    issue19996 messages
2013-12-16 15:16:48  Lukasa  create