This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

Title: urllib.request.urlopen does not handle non-ASCII characters
Type: behavior Stage: patch review
Components: Extension Modules Versions: Python 3.8, Python 3.7, Python 3.6, Python 3.4, Python 3.5
Status: open Resolution:
Dependencies: 9679 Superseder:
Assigned To: Nosy List: Graham.Oliver, a.badger, ajaksu2, ezio.melotti, janssen, martin.panter, orsenthil, r.david.murray, remi.lapeyre, thezulk, vstinner
Priority: normal Keywords: easy, patch

Created on 2008-09-28 18:47 by a.badger, last changed 2022-04-11 14:56 by admin.

File name Uploaded Description
non_ascii_path.diff ajaksu2, 2009-02-08 21:50 Calls quote() on the request path if path.encode('ascii') fails
issue3991.diff thezulk, 2013-02-23 17:58
issue3991_2017-01-27.diff thezulk, 2017-01-27 21:58 Diff against 3.7.0a0
Messages (14)
msg73982 - (view) Author: Toshio Kuratomi (a.badger) * Date: 2008-09-28 18:47
Tested on python-3.0rc1 -- Linux Fedora 9

I wanted to make sure that python3.0 would handle URLs in different
encodings.  So I created two files on an apache server which were named
½ñ.html.  One of the filenames was encoded in utf-8 and the other in
latin-1.  Then I tried the following:

from urllib.request import urlopen
url = 'http://localhost/u/½ñ.html'

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.0/urllib/request.py", line 122, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib/python3.0/urllib/request.py", line 350, in open
    req.timeout = timeout
AttributeError: 'bytes' object has no attribute 'timeout'

The same thing happens if I give None for the two optional arguments
(data and timeout).

Next I tried using a raw Unicode string:

>>> urlopen(url).read()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.0/urllib/request.py", line 122, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib/python3.0/urllib/request.py", line 359, in open
    response = self._open(req, data)
  File "/usr/lib/python3.0/urllib/request.py", line 377, in _open
    '_open', req)
  File "/usr/lib/python3.0/urllib/request.py", line 337, in _call_chain
    result = func(*args)
  File "/usr/lib/python3.0/urllib/request.py", line 1082, in http_open
    return self.do_open(http.client.HTTPConnection, req)
  File "/usr/lib/python3.0/urllib/request.py", line 1068, in do_open
    h.request(req.get_method(), req.get_selector(), req.data, headers)
  File "/usr/lib/python3.0/http/client.py", line 843, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib/python3.0/http/client.py", line 860, in _send_request
    self.putrequest(method, url, **skips)
  File "/usr/lib/python3.0/http/client.py", line 751, in putrequest
UnicodeEncodeError: 'ascii' codec can't encode characters in position
7-8: ordinal not in range(128)

So, in python-3.0rc1, this method is badly broken.
msg74046 - (view) Author: Bill Janssen (janssen) * (Python committer) Date: 2008-09-29 20:47
As I read RFC 2396,

1.5:  "A URI is a sequence of characters from a very
   limited set, i.e. the letters of the basic Latin alphabet, digits,
   and a few special characters."

2.4:  "Data must be escaped if it does not have a representation using an
   unreserved character; this includes data that does not correspond to
   a printable character of the US-ASCII coded character set, or that
   corresponds to any US-ASCII character that is disallowed, as
   explained below."

So your URL string is invalid.  You need to escape the characters properly.

(RFC 2396 is what the HTTP RFC cites as its authority on URLs.)
msg74053 - (view) Author: Toshio Kuratomi (a.badger) * Date: 2008-09-29 22:27
Possibly.  This is a change from python-2.x's urlopen() which escaped
the URL automatically, though.  I can see the case for having the user
call an escape function themselves instead of having urlopen() perform
the escape for them.  However, that function would need to be written.
(The present parse.quote() method only quotes correctly if only the path
component is passed; there's no function to take a full URL and quote it.)

Without such a function, a whole lot of code bases will have to reinvent
the wheel creating functions to parse the path out, run it through
urllib.parse.quote() and then pass the result to urllib.urlopen().
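The reinvented wheel described above could be sketched as follows. This is a hypothetical helper (quote_url_path is not a stdlib function), shown only to illustrate the split-quote-reassemble dance each codebase would have to repeat:

```python
from urllib.parse import urlsplit, urlunsplit, quote

def quote_url_path(url, encoding='utf-8'):
    # Hypothetical helper, not part of urllib: percent-encode only
    # the path component, leaving scheme and host untouched.
    parts = urlsplit(url)
    path = quote(parts.path, encoding=encoding)
    return urlunsplit((parts.scheme, parts.netloc, path,
                       parts.query, parts.fragment))

print(quote_url_path('http://localhost/u/½ñ.html'))
# http://localhost/u/%C2%BD%C3%B1.html
```

Note this only handles the path; query strings, fragments, and non-ASCII hostnames would each need their own treatment.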
msg74085 - (view) Author: Bill Janssen (janssen) * (Python committer) Date: 2008-09-30 17:43
It's not immediately clear to me how an auto-quote function can be
written; as you say (and as the URI spec points out), you have to take
a URL apart before quoting it, and you can't parse an invalid URL,
which is what the input is.

Best to think of this as a difference from 2.x.
msg74088 - (view) Author: Toshio Kuratomi (a.badger) * Date: 2008-09-30 18:11
The purpose of such a function would be to take something that is not a
valid URI but 1) is a common way of expressing how to get to the
resource and 2) follows certain rules, and turn it into something that
is a valid URI.  Non-ASCII strings in the path are a good example of
this, since there is a well-defined method for encoding them into the
URL once you are given a character encoding to apply.

My first, naive thought is that if the input can be parsed by
urlparse(), then there is a very good chance that we have the ability to
escape the string properly.  Looking at the invalid URI that I gave, for
instance, if you additionally specified an encoding for the path element
there's no reason a function couldn't do the escaping.

What are example inputs that you are concerned about?  I'll see if I can
come up with code that works with them.
msg74110 - (view) Author: Bill Janssen (janssen) * (Python committer) Date: 2008-10-01 03:24
I'm not concerned about any example inputs.  I was just trying to
explain why this isn't a bug.

On the other hand, the IRI spec (RFC 3987) is another thing we might
try to implement for Python.
msg74117 - (view) Author: Toshio Kuratomi (a.badger) * Date: 2008-10-01 06:39
Oh, that's cool.  I'm fine with treating this as a request for a needed
function to quote and unquote full URLs rather than a bug in urlopen().

I think IRIs are a distraction here, though.  The RFC for IRIs even
says that specifications that call for URIs and do not mention IRIs
should not take IRIs.  So there's definitely a need for a function to
quote a full URI.
msg81423 - (view) Author: Daniel Diniz (ajaksu2) * (Python triager) Date: 2009-02-08 21:50
I think Toshio's use case is important enough to deserve a fix (patch
attached) or a special-cased error message. IMO, newbies trying to fix
failures from urlopen may have a hard time figuring out the maze:

urlopen -> _opener -> open -> _open -> _call_chain -> http_open -> 
do_open (and that's before leaving urllib!).

>>> from urllib.request import urlopen
>>> url = 'http://localhost/ñ.html'
>>> urlopen(url).read()
Traceback (most recent call last):
UnicodeEncodeError: 'ascii' codec can't encode character '\xf1' in
position 5: ordinal not in range(128)

If the newbie isn't completely lost by then, how about:
>>> from urllib.parse import quote
>>> urlopen(quote(url)).read()
Traceback (most recent call last):
ValueError: unknown url type: http%3A//localhost/%C3%B1.html
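The failure above comes from quote() escaping the ':' after the scheme (its default safe set is just '/'). As a rough illustration, widening the safe set avoids that particular error, though it is not a general solution, since a literal '?' or '#' in the path would still be left alone:

```python
from urllib.parse import quote

url = 'http://localhost/ñ.html'

# Default safe='/' escapes ':', yielding the invalid 'http%3A//...'
# form rejected above:
print(quote(url))            # http%3A//localhost/%C3%B1.html

# Keeping ':' in the safe set preserves the scheme delimiter:
print(quote(url, safe=':/')) # http://localhost/%C3%B1.html
```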
msg182785 - (view) Author: Andreas Åkerlund (thezulk) * Date: 2013-02-23 17:58
This is a patch against 3.2 adding urllib.parse.quote_uri

It splits the URI into five parts (protocol, authentication, hostname, port and path), then runs urllib.parse.quote on the path and encodes the hostname to punycode if it's not ASCII.

It's not perfect, but should be usable in most cases.
I created some test cases as well.
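The approach the patch describes (the real implementation is in the attached diff) can be sketched roughly like this; this is my own simplified reconstruction, not the patch itself, and it ignores credentials, queries and fragments:

```python
from urllib.parse import urlsplit, urlunsplit, quote

def quote_iri(iri):
    # Simplified sketch of the patch's strategy: percent-encode the
    # path, and IDNA-encode (punycode) the hostname when it is not
    # pure ASCII.
    parts = urlsplit(iri)
    host = parts.hostname or ''
    try:
        host.encode('ascii')
    except UnicodeEncodeError:
        host = host.encode('idna').decode('ascii')
    netloc = host
    if parts.port:
        netloc += ':%d' % parts.port
    return urlunsplit((parts.scheme, netloc, quote(parts.path),
                       parts.query, parts.fragment))

print(quote_iri('http://bücher.example/½ñ.html'))
# http://xn--bcher-kva.example/%C2%BD%C3%B1.html
```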
msg219325 - (view) Author: Graham Oliver (Graham.Oliver) Date: 2014-05-29 01:21
I came across this bug when using 'ā' in a URL.
To get around the problem I used the 'URL encoded' version '%C4%81' instead of 'ā'.
See this page.
I tried using the 'puny code' for 'ā', 'xn--yda', but that didn't work.
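Percent-encoding is indeed the right mechanism here: punycode only applies to DNS labels in the hostname, which is why 'xn--yda' did not work in the path. A quick check of the encoding mentioned above:

```python
from urllib.parse import quote, unquote

# 'ā' is U+0101, which is two bytes (0xC4 0x81) in UTF-8,
# hence the two percent-escapes:
print(quote('ā'))          # %C4%81
print(unquote('%C4%81'))   # ā
```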
msg285721 - (view) Author: Martin Panter (martin.panter) * (Python committer) Date: 2017-01-18 11:40
Issue 9679: Focusses on encoding just the DNS name
Issue 20559: Maybe a duplicate, or opportunity for better documentation or error message as a bug fix?

Andreas’s patch just proposes a new function called quote_uri(). It would need documentation. We already have a quote() and quote_plus() function. Since it sounds like this is for IRIs, would it be more appropriate to call it quote_iri()?

See revision cb09fdef19f5, especially the quote(safe=...) parameter, for how I avoided the double encoding problem.
msg286386 - (view) Author: Andreas Åkerlund (thezulk) * Date: 2017-01-27 21:58
Changed the patch after pointers from vadmium.
And quote_uri is changed to quote_iri as martin.panter thought it was more appropriate.
msg286423 - (view) Author: Martin Panter (martin.panter) * (Python committer) Date: 2017-01-29 01:06
I’m not really an expert on non-ASCII URLs / IRIs. Maybe it is obvious to other people that this is a good general implementation, but for me to thoroughly review it I would need time to research the relevant RFCs, other implementations, suitability for the URL schemes listed at <>, security implications, etc.

One problem with using urlunsplit() is it would strip empty URL components, e.g. quote_iri("http://example/file#") -> "http://example/file". See Issue 22852. This is highlighted by the file:///[. . .] → file:/[. . .] test case.

FYI Martin Panter and vadmium are both just me, no need to get too excited. :) I just updated my settings for Rietveld (code review), so hopefully that is more obvious now.
msg286444 - (view) Author: R. David Murray (r.david.murray) * (Python committer) Date: 2017-01-29 14:09
I believe the last time this subject was discussed the conclusion was that we really needed a full IRI module that conformed to the relevant RFCs, and that putting something on PyPI would be one way to get there.

Someone should research the existing packages.  It might be that we need something simpler than what exists, but whatever we do should be informed by what exists, I think.
Date User Action Args
2022-04-11 14:56:39adminsetgithub: 48241
2019-02-20 20:02:12remi.lapeyresetnosy: + remi.lapeyre
2019-02-20 19:38:11dfrojassettype: enhancement -> behavior
components: + Extension Modules, - Library (Lib), Unicode
versions: + Python 3.4, Python 3.5, Python 3.6, Python 3.8
2017-01-29 14:09:42r.david.murraysetnosy: + r.david.murray
messages: + msg286444
2017-01-29 01:06:54martin.pantersetmessages: + msg286423
2017-01-27 21:58:31thezulksetfiles: + issue3991_2017-01-27.diff

messages: + msg286386
2017-01-18 11:53:01martin.panterlinkissue29305 superseder
2017-01-18 11:40:38martin.pantersetversions: + Python 3.7, - Python 3.2, Python 3.3, Python 3.4
nosy: + martin.panter

messages: + msg285721

dependencies: + unicode DNS names in urllib, urlopen
stage: test needed -> patch review
2014-05-29 01:21:55Graham.Oliversetnosy: + Graham.Oliver
messages: + msg219325
2013-02-25 18:41:05vstinnersetnosy: + vstinner
2013-02-23 17:58:20thezulksetfiles: + issue3991.diff
nosy: + thezulk
messages: + msg182785

2012-10-02 06:01:47ezio.melottisetversions: + Python 3.2, Python 3.3, Python 3.4, - Python 3.0
2009-04-22 18:47:58ajaksu2setpriority: normal
2009-02-12 18:33:27ajaksu2setnosy: + orsenthil
2009-02-12 02:37:16ajaksu2setkeywords: + easy
components: + Library (Lib)
stage: test needed
2009-02-09 07:27:01ezio.melottisetnosy: + ezio.melotti
2009-02-08 21:50:19ajaksu2setfiles: + non_ascii_path.diff
keywords: + patch
messages: + msg81423
nosy: + ajaksu2
2008-10-01 06:39:58a.badgersetmessages: + msg74117
2008-10-01 03:24:17janssensettype: enhancement
messages: + msg74110
2008-09-30 18:11:23a.badgersetmessages: + msg74088
2008-09-30 17:44:00janssensetmessages: + msg74085
2008-09-29 22:27:44a.badgersetmessages: + msg74053
2008-09-29 20:47:31janssensetnosy: + janssen
messages: + msg74046
2008-09-28 18:47:16a.badgercreate