Clearly document the use of PYTHONIOENCODING to set surrogateescape #62913
Comments
One problem with Unicode in 3.x is that surrogateescape isn't normally enabled on stdin and stdout. This means the following code will fail with UnicodeEncodeError in the presence of invalid filesystem metadata:

    print(os.listdir())

We don't really want to enable surrogateescape on sys.stdin or sys.stdout unilaterally, as it increases the chance of data corruption errors when the filesystem encoding and the IO encodings don't match.

Last night, Toshio and I thought of a possible solution: enable surrogateescape by default for sys.stdin and sys.stdout on non-Windows systems if (and only if) they're using the same codec as that returned by sys.getfilesystemencoding() (allowing for codec aliases rather than doing a simple string comparison).

This means that for full UTF-8 systems (which includes most modern Linux installations), roundtripping will be enabled by default between the standard streams and OS-facing APIs, while systems where the encodings don't match will still fail noisily.

A more general alternative is also possible: default to errors='surrogateescape' for *any* text stream that uses the filesystem encoding. It's primarily the standard streams we're interested in fixing, though. |
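The alias-aware codec comparison in this proposal could be sketched as follows. This is an editor's illustration, not an actual CPython API: `should_use_surrogateescape` is a hypothetical helper name.

```python
import codecs
import sys


def should_use_surrogateescape(stream):
    """Return True when the stream's codec matches the filesystem
    encoding, resolving codec aliases via codecs.lookup().

    Hypothetical helper illustrating the proposal above.
    """
    if sys.platform == "win32":
        # The proposal is limited to non-Windows systems.
        return False
    try:
        stream_codec = codecs.lookup(stream.encoding)
        fs_codec = codecs.lookup(sys.getfilesystemencoding())
    except (LookupError, TypeError):
        # Unknown codec name, or a stream with no encoding set.
        return False
    # codecs.lookup() canonicalises names, so "UTF-8", "utf8" and
    # "utf_8" all compare equal here, as the proposal requires.
    return stream_codec.name == fs_codec.name
```

On a typical UTF-8 Linux system this would return True for sys.stdout, so surrogateescape would be enabled; with a mismatched locale it returns False and the stream would keep strict errors.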
My gut reaction to this is that it feels dangerous. That doesn't mean my gut is right, I'm just reporting my reaction :) |
Everything about surrogateescape is dangerous - we're trying to work |
Nick and I had talked about this at a recent conference and came to it from different directions. On the one hand, Nick made the point that any encoding of surrogateescape'd text to bytes via a different encoding corrupts the data as a whole. On the other hand, I made the point that raising an exception when doing something as basic as printing something that's of text type was reintroducing the issues that python2 had wrt unicode, bytes, and encodings -- particularly with the exception being raised far from the source of the problem (where the data is introduced into the program).

After some thought, Nick came up with this solution. The idea is that surrogateescape was originally accepted to allow roundtripping data from the OS and back when the OS considers it to be a "string" but python does not consider it to be "text". When that's the case, we know what encoding was used to attempt to construct the text in python. If that same encoding is used to re-encode the data on the way back to the OS, then we're successfully roundtripping the data we were given in the first place. So this is just applying the original goal to another API. |
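The roundtripping described here can be demonstrated with the error handler directly (a minimal sketch; the byte 0xd0 stands in for arbitrary invalid metadata):

```python
# Bytes as the OS might hand them over; b"\xd0" is not valid UTF-8.
raw = b"caf\xd0"

# Strict decoding would fail, but surrogateescape smuggles the bad
# byte through as the lone surrogate U+DCD0.
text = raw.decode("utf-8", "surrogateescape")
assert text == "caf\udcd0"

# Re-encoding with the *same* codec and handler restores the original
# bytes exactly -- the roundtrip described above.
assert text.encode("utf-8", "surrogateescape") == raw
```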
Which reminds me: I'm curious what "ls" currently does for malformed |
On Linux, the locale encoding is usually UTF-8. If a filename cannot [...]

IMO Python must raise an error here because I want to generate a valid [...]

So using surrogateescape error handler if the encoding is [...]

What is your use case where you need to display a filename? Is it [...] |
2013/8/21 Nick Coghlan <report@bugs.python.org>:
The "ls" command works on bytes, not on characters. You can [...]

os.fsencode() does exactly the opposite of os.fsdecode(). There is a [...]

I ensured that all OS functions can be used directly with bytes [...] |
Think sysadmins running scripts on Linux, writing to the console or a pipe.

Specifically, if a system is properly configured to use UTF-8 for all [...]

If the bytes oriented os tools like ls don't fall over on it, then neither [...] |
I think that outlook is a bit naïve. The text source is not always the [...]

I'm myself quite partial to the "round-tripping" use case, but I'm not [...] |
I think the essential use case is using a python program in a unix pipeline. I'm very sympathetic to that use case, despite my unease. |
Currently, Python 3 fails miserably when it gets a non-ASCII [...]

It works fine when OS data can be decoded from and encoded to the [...]

$ mkdir test
$ cd test
$ touch héhé.txt
$ ls
héhé.txt
$ python3 -c 'import os; print(", ".join(os.listdir()))'
héhé.txt
$ echo "héhé"|python3 -c 'import sys; sys.stdout.write(sys.stdin.read())'|cat
héhé

It fails miserably when OS data cannot be decoded from or encoded to [...]

$ mkdir test
$ cd test
$ touch héhé.txt
$ export LANG= # switch to ASCII locale encoding
$ ls
h??h??.txt
$ python3 -c 'import os; print(", ".join(os.listdir()))'
Traceback (most recent call last):
File "<string>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode characters in position
1-2: ordinal not in range(128)
$ echo "héhé"|LANG= python3 -c 'import sys;
sys.stdout.write(sys.stdin.read())'|cat
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/vstinner/prog/python/default/Lib/encodings/ascii.py",
line 26, in decode
return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position
1: ordinal not in range(128)

The ls output is not the expected "héhé" string, but it is an issue [...]

$ ls|hexdump -C
00000000 68 c3 a9 68 c3 a9 2e 74 78 74 0a |h..h...txt.|
0000000b

("héhé" encoded to UTF-8 gives b'h\xc3\xa9h\xc3\xa9')

I agree that we can do something to improve the situation on standard [...]

$ LANG= PYTHONIOENCODING=utf-8:surrogateescape python3 -c 'import os;
print(", ".join(os.listdir()))'
héhé.txt

Something similar can be done in Python. For example:

sys.stdout = open(sys.stdout.fileno(), 'w',
                  encoding=sys.stdout.encoding,
                  errors="backslashreplace",
                  closefd=False,
                  newline='\n') |
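A self-contained variant of this wrapper idea (editor's sketch: `rewrap` is a made-up helper name, and `detach()` is used so the old text wrapper cannot close the underlying buffer when it is discarded):

```python
import io
import sys


def rewrap(stream, errors):
    """Return a new text wrapper over *stream*'s buffer with a
    different error handler (hypothetical helper)."""
    encoding = stream.encoding   # read before detach() invalidates the wrapper
    stream.flush()
    return io.TextIOWrapper(stream.detach(), encoding=encoding,
                            errors=errors, line_buffering=True)


# Example: make stdout degrade readably instead of raising:
#     sys.stdout = rewrap(sys.stdout, "backslashreplace")
```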
Attached patch changes the error handler of stdin, stdout and stderr to surrogateescape by default. It can still be changed explicitly using the PYTHONIOENCODING environment variable. |
The surrogateescape error handler works only with UTF-8. As a side effect of this change, input from stdin will in general be incompatible with extensions which implicitly encode strings to bytes with UTF-8 (e.g. tkinter, XML parsers, sqlite3, datetime, locale, curses, etc.). |
"The surrogateescape error handler works with any codec." The surrogatepass only works with utf-8 if I remember correctly.

"As a side effect of this change an input from stdin will be incompatible [...]"

Correct, but it's not something new: os.listdir(), sys.argv, os.environ and [...] |
Ah, sorry. You are correct.
I'm only saying that this will increase the number of cases where an exception is raised in an unexpected place. Perhaps it would be safer to leave the default error handler as "strict" and make the errors attribute of text streams modifiable. |
The print() instruction is much more common than input(). IMO changing [...]

Python functions decoding OS data from the filesystem encoding with [...]
|
Wouldn't it be safer to use surrogateescape for output and strict for input? |
Serhiy Storchaka added the comment:
Nick wrote "Think sysadmins running scripts on Linux, writing to the [...]"

See my message msg195769: Python3 cannot be simply used as a pipe [...]

Hum, I realized that the subprocess should also be patched to be [...]

Serhiy Storchaka also noticed (in the review of my patch) that errors [...] |
If somebody doesn't care about unicode, they can use sys.stdin.buffer.

Note: enabling surrogateescape on stdin enables precisely the [...] |
I don't understand what you say. Could you rephrase? |
With my patch, sys.stdin.errors is "surrogateescape" by default, but |
Is it a bug in your patch, or is it deliberate? |
It was not deliberate, and I think that it would be more consistent to |
The surrogateescape error handler is dangerous with utf-16/32. It can produce globally invalid output. |
I don't understand, can you give an example? surrogateescape generates an invalid encoded string with any encoding. Example with UTF-8:

>>> b"a\xffb".decode("utf-8", "surrogateescape")
'a\udcffb'
>>> 'a\udcffb'.encode("utf-8", "surrogateescape")
b'a\xffb'
>>> b'a\xffb'.decode("utf-8")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 1: invalid start byte

So str.encode("utf-8", "surrogateescape") produces an invalid UTF-8 sequence. |
>>> ('\udcff' + 'qwerty').encode('utf-16le', 'surrogateescape')
b'\xff\xdcq\x00w\x00e\x00r\x00t\x00y\x00'
>>> ('\udcff' + 'qwerty').encode('utf-16le', 'surrogateescape').decode('utf-16le', 'surrogateescape')
'\udcff\udcdcqwerty'
>>> ('\udcff' + 'qwerty').encode('utf-16le', 'surrogateescape').decode('utf-16le', 'surrogateescape').encode('utf-16le', 'surrogateescape')
b'\xff\xdc\xdc\xdcq\x00w\x00e\x00r\x00t\x00y\x00'
>>> ('\udcff' + 'qwerty').encode('utf-16le', 'surrogateescape').decode('utf-16le', 'surrogateescape').encode('utf-16le', 'surrogateescape').decode('utf-16le', 'surrogateescape')
'\udcff\udcdc\udcdc\udcdcqwerty' |
Note that the specific case I'm really interested in is printing on systems that are properly configured to use UTF-8, but are getting bad metadata from an OS API. I'm OK with the idea of *only* changing it for UTF-8 rather than for arbitrary encodings, as well as restricting it to sys.stdout when the codec used matches the default filesystem encoding.

To double check the current behaviour, I created a directory to tinker with this. Filenames were created with the following:

>>> open("ℙƴ☂ℌøἤ".encode("utf-8"), "w")
>>> open("basic_ascii".encode("utf-8"), "w")
>>> b"\xd0\xd1\xd2\xd3".decode("latin-1")
'ÐÑÒÓ'
>>> open(b"\xd0\xd1\xd2\xd3", "w")

That last generates an invalid UTF-8 filename. "ls" actually degrades less gracefully than I thought, and just prints question marks for the bad file:

$ ls -l
total 0
-rw-rw-r--. 1 ncoghlan ncoghlan 0 Aug 23 00:04 ????
-rw-rw-r--. 1 ncoghlan ncoghlan 0 Aug 23 00:01 basic_ascii
-rw-rw-r--. 1 ncoghlan ncoghlan 0 Aug 23 00:01 ℙƴ☂ℌøἤ

Python 2 & 3 both work OK if you just print the directory listing directly, since repr() happily displays the surrogate-escaped string:

$ python -c "import os; print(os.listdir('.'))"
['basic_ascii', '\xd0\xd1\xd2\xd3', '\xe2\x84\x99\xc6\xb4\xe2\x98\x82\xe2\x84\x8c\xc3\xb8\xe1\xbc\xa4']
$ python3 -c "import os; print(os.listdir('.'))"
['basic_ascii', '\udcd0\udcd1\udcd2\udcd3', 'ℙƴ☂ℌøἤ']

Where it falls down is when you try to print the strings directly in Python 3:

$ python3 -c "import os; [print(fname) for fname in os.listdir('.')]"
basic_ascii
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "<string>", line 1, in <listcomp>
UnicodeEncodeError: 'utf-8' codec can't encode character '\udcd0' in position 0: surrogates not allowed

While setting the IO encoding produces behaviour closer to that of the native tools: [...]

On the other hand, setting PYTHONIOENCODING as shown provides an environmental workaround, and http://bugs.python.org/issue15216 will provide an improved programmatic workaround (which tools like http://code.google.com/p/pyp/ could use to configure surrogateescape by default).

So perhaps pursuing bpo-15216 further would be a better approach than selectively changing the default behaviour? And better documentation for ways to handle the surrogate escape error when it arises? |
If you pipe the ls (eg: ls >temp) the bytes are preserved.

Since setting the escape handler via PYTHONIOENCODING sets it for both stdin and stdout, it sounds like that solves the sysadmin use case. The sysadmin can just put that environment variable setting in their default profile, and python will once again work like the other unix shell tools. (I double checked, and this does indeed work...doing the equivalent of ls >temp via python preserves the bytes with that PYTHONIOENCODING setting. I don't quite understand, however, why I get the � chars if I don't redirect the output.)

I'd be inclined to consider the above as reason enough to close this issue. As usual with Python, explicit is better than implicit. |
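Concretely, the profile-based setup described here would look something like this (a sketch assuming a UTF-8 system; adjust the encoding to match the system locale):

```shell
# In ~/.profile (or similar): make Python's standard streams roundtrip
# OS data the way other Unix tools do.
export PYTHONIOENCODING=utf-8:surrogateescape

# Python now preserves arbitrary filename bytes on stdout, so this is
# byte-for-byte equivalent to `ls > temp`:
python3 -c 'import os; print("\n".join(sorted(os.listdir())))' > temp
```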
>>> ('\udcff' + 'qwerty').encode('utf-16le', 'surrogateescape')
b'\xff\xdcq\x00w\x00e\x00r\x00t\x00y\x00'

Oh, this is a bug in the UTF-16 encoder: it should not encode surrogate characters => see issue bpo-12892

I read that it's possible to set a standard stream like stdout to UTF-16 mode on Windows. I don't know if it's commonly used, nor how it would impact Python. I have never seen a platform using UTF-16 or UTF-32 for standard streams. |
On 23 Aug 2013 01:40, "R. David Murray" <report@bugs.python.org> wrote: [...]

I assume the terminal window is doing the substitution for the improperly [...]

Regarding the issue, perhaps we should convert this to a docs bug? Attempt [...] |
I think it would be great to have a "Unicode/bytes" howto with information like this included. |
Note: I created bpo-18814 to cover some additional tools for working with surrogate-escaped strings.

For this issue, we currently have http://docs.python.org/3/howto/unicode.html, which aims to be a more comprehensive guide to understanding Unicode issues. I'm thinking we may want a "Debugging Unicode Errors" document, which defers to the existing howto guide for those that really want to understand Unicode, and instead focuses on quick fixes for resolving various problems that may present themselves. Application developers will likely want to read the longer guide, while the debugging document would be aimed at getting script writers past their immediate hurdle, without necessarily gaining a full understanding of Unicode.

The goal would be for this page to become the top hit for "python surrogates not allowed", rather than the current top hit, which is a rejected bug report about it (http://bugs.python.org/issue13717). For example:

================================

Operating system metadata on POSIX based systems like Linux and Mac OS X may include improperly encoded text values. To cope with this, Python uses the "surrogateescape" error handler to store those arbitrary bytes inside a Unicode object. When converted back to bytes using the same encoding and error handler, the original byte sequence is reproduced exactly. This allows operations like opening a file based on a directory listing to work correctly, even when the metadata is not properly encoded according to the system settings.

The "surrogates not allowed" error appears when a string from one of these operating system interfaces contains an embedded arbitrary byte sequence, but an attempt is made to encode it using the default "strict" error handler rather than the "surrogateescape" handler. This commonly occurs when printing improperly encoded operating system data to the console, or writing it to a file, database or other serialised interface.

The [...]

$ python3 -c "import sys; print(sys.getfilesystemencoding())"
utf-8

This can then be used to specify an appropriate setting for [...]

$ export PYTHONIOENCODING=utf-8:surrogateescape

For other interfaces, there is no such general solution. If allowing the invalid byte sequence to propagate further is acceptable, then enabling the surrogateescape handler may be appropriate. Alternatively, it may be better to track these corrupted strings back to their point of origin, and either fix the underlying metadata, or else filter them out early on.

If bpo-18814 is implemented, then it could point to those tools. Similarly, bpo-15216 could be referenced if that is implemented. |
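The failure and the fixes such a document would describe can be condensed into a few lines (illustrative sketch):

```python
# A filename as os.listdir() would return it on a UTF-8 system when the
# on-disk name contains the invalid byte 0xd0.
name = b"h\xd0.txt".decode("utf-8", "surrogateescape")

# What print() effectively does with a strict-errors stream:
try:
    name.encode("utf-8")
except UnicodeEncodeError as exc:
    assert "surrogates not allowed" in str(exc)

# Fix 1: re-encode with the same handler for an exact roundtrip.
assert name.encode("utf-8", "surrogateescape") == b"h\xd0.txt"

# Fix 2: degrade readably instead, e.g. for log output.
assert name.encode("utf-8", "backslashreplace") == b"h\\udcd0.txt"
```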
With the new subject, this issue looks like a duplicate of (or is tightly related to) bpo-12832. |
The first thing that would come to my mind when reading Nick's proposed document (without first reading this bug report) is "So why the heck is this not the default?". It would probably save a lot of people a lot of anger if there was also a brief explanation addressing this obvious first response :-). |