This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

Title: Making urlparse WHATWG conformant
Type: behavior Stage: needs patch
Components: Versions:
Status: open Resolution:
Dependencies: Superseder:
Assigned To: orsenthil Nosy List: Mike.Lissner, gregory.p.smith, orsenthil, serhiy.storchaka, vstinner, xtreak
Priority: normal Keywords:

Created on 2021-04-18 19:43 by orsenthil, last changed 2022-04-11 14:59 by admin.

Messages (4)
msg391344 - (view) Author: Senthil Kumaran (orsenthil) * (Python committer) Date: 2021-04-18 19:43
Mike Lissner reported a set of test suites that exercise extreme conditions with URLs, in conformance with the WHATWG URL standard, maintained here:

These test cases were run against the urlparse and urljoin methods.

Quoting verbatim:

The basic idea is to iterate over the test cases and try joining and parsing them. The script wound up messier than I wanted b/c there's a fair bit of normalization you have to do (e.g., the test cases expect blank paths to be '/', while urlparse returns an empty string), but you'll get the idea.
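The loop described above can be sketched roughly as follows. The test-case records and expected values here are hypothetical placeholders (the real WHATWG suite uses JSON records with more fields); the blank-path normalization is the one the message mentions.

```python
from urllib.parse import urljoin, urlparse

# Hypothetical test-case shape, for illustration only; the real WHATWG
# suite carries "input", "base", and per-component expectations.
cases = [
    {"base": "http://example.com", "input": "/a/b", "expected_path": "/a/b"},
    {"base": "http://example.com/a/", "input": "b", "expected_path": "/a/b"},
    {"base": "http://example.com", "input": "", "expected_path": "/"},
]

passed = 0
for case in cases:
    joined = urljoin(case["base"], case["input"])
    parsed = urlparse(joined)
    # Normalization the message describes: the WHATWG cases expect a
    # blank path to be '/', while urlparse returns an empty string.
    path = parsed.path or "/"
    if path == case["expected_path"]:
        passed += 1

print(f"Done. {passed}/{len(cases)} successes.")
```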

The bad news is that of the roughly 600 test cases fewer than half pass. Some more normalization would fix some more of this, and I don't imagine all of these have security concerns (I haven't thought through it, honestly, but there are issues with domain parsing too that look meddlesome). For now I've taken it as far as I can, and it should be a good start, I think.

The final numbers the script cranks out are:

Done. 231/586 successes. 1 skipped.
msg391347 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2021-04-18 22:01
It would be interesting to test also with the yarl module. It is based on urlparse and urljoin, but does extra normalization of %-encoding.
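yarl's exact behavior is not reproduced here, but a stdlib-only sketch of the kind of %-encoding normalization meant (decode, then re-encode, so equivalent escapes compare equal) might look like:

```python
from urllib.parse import urlparse, quote, unquote

def normalize_percent(url: str) -> str:
    # Illustrative only: decode %-escapes in the path, then re-encode
    # with urllib's quoting rules, so e.g. '%7e' and '~' compare equal.
    # (yarl does considerably more than this.)
    parts = urlparse(url)
    path = quote(unquote(parts.path), safe="/")
    return parts._replace(path=path).geturl()

print(normalize_percent("http://example.com/%7euser"))
# prints: http://example.com/~user
```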
msg391427 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2021-04-20 10:41
See also bpo-43882.
msg392969 - (view) Author: Gregory P. Smith (gregory.p.smith) * (Python committer) Date: 2021-05-05 01:34
FWIW, rather than implementing our own URL parsing at all... wrapping a library extracted from a compatibly licensed major browser (Chromium or Firefox) and keeping it updated would avoid disparities.

Unfortunately, I'm not sure how feasible this really is.  Do all of the API surfaces we must support in the stdlib for compatibility's sake with urllib line up with such a browser core URL parsing library?

Something to ponder.  Unlikely something we'll actually do.
Date User Action Args
2022-04-11 14:59:44adminsetgithub: 88049
2021-05-05 01:34:26gregory.p.smithsetmessages: + msg392969
2021-04-23 19:38:04gregory.p.smithsetnosy: + gregory.p.smith
2021-04-20 10:41:16vstinnersetnosy: + vstinner
messages: + msg391427
2021-04-19 20:24:35Mike.Lissnersetnosy: + Mike.Lissner
2021-04-19 03:24:56xtreaksetnosy: + xtreak
2021-04-18 22:01:16serhiy.storchakasetnosy: + serhiy.storchaka
messages: + msg391347
2021-04-18 19:43:37orsenthilcreate