This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

Author dalke
Recipients
Date 2006-11-06.10:37:58
SpamBayes Score
Marked as misclassified
Message-id
In-reply-to
Content
Logged In: YES 
user_id=190903

# new
>>> uriparse.urljoin("http://spam/", "foo/bar")
'http://spam//foo/bar'
>>> 

# existing
>>> urlparse.urljoin("http://spam/", "foo/bar")
'http://spam/foo/bar'
>>> 

The new code should not produce the doubled "/" here.


>>> import urlparse
>>> import uriparse
>>> urlparse.urljoin("http://blah", "/spam/")
'http://blah/spam/'
>>> uriparse.urljoin("http://blah", "/spam/")
'http://blah/spam'
>>> 
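For reference, the existing urlparse behaviour in both transcripts above is the expected one. A minimal regression check (written against `urllib.parse`, the modern spelling of the old `urlparse` module, so it runs on Python 3):

```python
# Minimal regression check for the two urljoin cases shown above,
# using urllib.parse (the Python 3 spelling of the old urlparse module).
from urllib.parse import urljoin

# Relative path joined onto a base ending in "/": no doubled slash.
assert urljoin("http://spam/", "foo/bar") == "http://spam/foo/bar"

# Absolute path with a trailing slash: the trailing slash is preserved.
assert urljoin("http://blah", "/spam/") == "http://blah/spam/"
```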

join 'http://www.guardian.co.uk/' u' '
urlparse: u'http://www.guardian.co.uk/ ' !=
uriparse: u'http://www.guardian.co.uk// '

join 'http://boingboing.net/' u' http://www.newsalloy.com/subrss4.gif'
  (yes, with a leading space in the relative URL)
urlparse: u'http://boingboing.net/ http://www.newsalloy.com/subrss4.gif' !=
uriparse: u' http://www.newsalloy.com/subrss4.gif'

I'll add a script that tests wild web pages and compares urlparse's and
uriparse's respective urljoin methods.
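A sketch of what such a comparison harness could look like (the harness itself is hypothetical, and it takes both urljoin implementations as arguments, since only the stdlib one is available here; the `buggy_join` stand-in just mimics the doubled-slash bug reported above):

```python
# Hypothetical comparison harness: feed two urljoin implementations the
# same (base, relative) pairs and collect every case where they disagree.
from urllib.parse import urljoin as stdlib_urljoin

def compare_urljoin(join_a, join_b, cases):
    """Return a list of (base, rel, result_a, result_b) mismatches."""
    mismatches = []
    for base, rel in cases:
        a, b = join_a(base, rel), join_b(base, rel)
        if a != b:
            mismatches.append((base, rel, a, b))
    return mismatches

# Stand-in for the buggy implementation: naive concatenation, which
# doubles the slash exactly as in the uriparse output shown above.
buggy_join = lambda base, rel: base + "/" + rel

cases = [("http://spam/", "foo/bar"), ("http://blah", "/spam/")]
```

Running `compare_urljoin(stdlib_urljoin, buggy_join, cases)` flags both pairs as mismatches, while comparing the stdlib against itself reports none.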

ALSO: Need an __all__ that excludes those *URIParser classes.
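One mechanical way to build such an __all__ (just a sketch; the naming convention of classes ending in "URIParser" is taken from the comment above, and the helper name is made up):

```python
# Sketch: derive __all__ so the *URIParser helper classes stay private.
# The predicate assumes the helper classes all end in "URIParser";
# adjust it to match the real module's naming.
def public_names(namespace):
    """Names fit for __all__: no leading underscore, no *URIParser class."""
    return sorted(
        name for name in namespace
        if not name.startswith("_") and not name.endswith("URIParser")
    )

# Inside uriparse itself this would be:  __all__ = public_names(dir())
```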
History
Date User Action Args
2007-08-23 15:47:33  admin  link  issue1462525 messages
2007-08-23 15:47:33  admin  create