Title: Change invalid unicode characters to replacement characters in argv
Type: crash
Stage:
Components: Interpreter Core
Versions: Python 3.7, Python 3.6
Status: open
Resolution:
Dependencies:
Superseder:
Assigned To:
Nosy List: Neui, SilentGhost, eryksun, ncoghlan
Priority: normal
Keywords:

Created on 2019-02-01 16:55 by Neui, last changed 2019-02-01 23:49 by eryksun.

Messages (5)
msg334703 - (view) Author: (Neui) Date: 2019-02-01 16:55
When an invalid Unicode character is passed in argv (the command-line arguments), Python abort()s with a fatal error about a character not being in range (ValueError: character U+7fffbeba is not in range [U+0000; U+10ffff]).

I am wondering whether this behaviour should change so that such characters are replaced with U+FFFD REPLACEMENT CHARACTER (like .decode(..., 'replace')), or with something similar or better (see )

The reason is that other applications can pass such invalid characters along, since to them it is just data (GDB does, for example, when forwarding an argument to the program being debugged), whereas in Python this becomes a limitation, because the script (if one is specified) never runs.

My main motivation is the command-not-found Debian package, which receives the mistyped command as a command-line argument. If that argument contains an invalid Unicode character, the tool simply fails rather than saying it couldn't find the command (or suggesting a similar one). If this doesn't change, the package has to either accept the limitation, pass the command some other way, or be rewritten in something other than Python.
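To illustrate the suggested behaviour (a sketch using bytes.decode, which is not literally what CPython's startup code calls, but shows the 'replace' handler applied to the byte sequence from this report):

```python
# Overlong 6-byte UTF-8 encoding of 0x7fffbeba (the sequence behind
# the error message above).
data = bytes([0xFD, 0xBF, 0xBF, 0xBB, 0xBA, 0xBA])

# A strict decode rejects the sequence...
try:
    data.decode('utf-8')
except UnicodeDecodeError as exc:
    print('rejected:', exc.reason)

# ...while the 'replace' error handler substitutes U+FFFD
# REPLACEMENT CHARACTER for each undecodable byte.
print(ascii(data.decode('utf-8', 'replace')))
```

With this handler the argument survives as a (mangled) string instead of aborting the interpreter, which is the behaviour proposed above.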

# Requires bash 4.2+
# Specifying a script omits the first two lines
$ python3.6 $'\U7fffbeba'
Failed checking if argv[0] is an import path entry
ValueError: character U+7fffbeba is not in range [U+0000; U+10ffff]
Fatal Python error: no mem for sys.argv
ValueError: character U+7fffbeba is not in range [U+0000; U+10ffff]

Current thread 0x00007fd212eaf740 (most recent call first):
Aborted (core dumped)

$ python3.6 --version
Python 3.6.7

$ uname -a
Linux nopea 4.15.0-39-generic #42-Ubuntu SMP Tue Oct 23 15:48:01 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

$ lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 18.04.1 LTS
Release:	18.04
Codename:	bionic
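For reference, the invalid argument can also be constructed from Python itself, without relying on bash's $'...' quoting (a sketch; /bin/echo stands in for python3.6 here, and the assumption is that bash encodes $'\U7fffbeba' as the overlong 6-byte sequence shown below):

```python
import subprocess

# Overlong UTF-8-style encoding of 0x7fffbeba.
arg = bytes([0xFD, 0xBF, 0xBF, 0xBB, 0xBA, 0xBA])

# On POSIX, subprocess accepts bytes argv entries and passes them to
# the child verbatim, so the child sees the raw invalid sequence.
result = subprocess.run([b'/bin/echo', b'-n', arg],
                        stdout=subprocess.PIPE)
print(result.stdout)
```

Replacing /bin/echo with python3.6 reproduces the abort described above.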

GDB backtrace just before the error is raised (note that argc=2, since the first argument is a script):
#0  find_maxchar_surrogates (begin=begin@entry=0xa847a0 L'\x7fffbeba' <repeats 12 times>, end=end@entry=0xa847d0 L"", maxchar=maxchar@entry=0x7fffffffde94, 
    num_surrogates=num_surrogates@entry=0x7fffffffde98) at ../Objects/unicodeobject.c:1626
#1  0x00000000004cee4b in PyUnicode_FromUnicode (u=u@entry=0xa847a0 L'\x7fffbeba' <repeats 12 times>, size=12) at ../Objects/unicodeobject.c:2017
#2  0x00000000004db856 in PyUnicode_FromWideChar (w=0xa847a0 L'\x7fffbeba' <repeats 12 times>, size=<optimized out>, size@entry=-1) at ../Objects/unicodeobject.c:2502
#3  0x000000000043253d in makeargvobject (argc=argc@entry=2, argv=argv@entry=0xa82268) at ../Python/sysmodule.c:2145
#4  0x0000000000433228 in PySys_SetArgvEx (argc=2, argv=0xa82268, updatepath=1) at ../Python/sysmodule.c:2264
#5  0x00000000004332c1 in PySys_SetArgv (argc=<optimized out>, argv=<optimized out>) at ../Python/sysmodule.c:2277
#6  0x000000000043a5bd in Py_Main (argc=argc@entry=3, argv=argv@entry=0xa82260) at ../Modules/main.c:733
#7  0x0000000000421149 in main (argc=3, argv=0x7fffffffe178) at ../Programs/python.c:69

Similar issues: "Segmentation fault with invalid Unicode command-line arguments in embedded Python" (actually 'fixed', since it now abort()s) and "sys.argv is wrong for unicode strings".
msg334705 - (view) Author: SilentGhost (SilentGhost) * (Python triager) Date: 2019-02-01 17:10
I'm on 4.15.0-44-generic and I cannot reproduce the crash. I get "python3: can't open file '������': [Errno 2] No such file or directory"

Could you try this on a different machine / installation?
msg334707 - (view) Author: SilentGhost (SilentGhost) * (Python triager) Date: 2019-02-01 17:22
Hm, this actually seems to depend on how the terminal emulator handles those special characters. I can reproduce it in another terminal.
msg334712 - (view) Author: (Neui) Date: 2019-02-01 19:33
I'd say the terminal is not really relevant here; what matters is the locale settings, because the wide-string conversion functions are used. Prefixing the command with LC_ALL=C produces, on my Ubuntu machine, the same output you got. I also get that output when running under Cygwin (and MSYS2), although there setting LC_ALL seems to have no effect.
msg334732 - (view) Author: Eryk Sun (eryksun) * (Python triager) Date: 2019-02-01 23:49
On Unix, Python 3.6 decodes the char * command-line arguments via mbstowcs. On Linux, I see the following misbehavior from mbstowcs when it decodes an overlong UTF-8 sequence:

    >>> import ctypes
    >>> mbstowcs = ctypes.CDLL(None, use_errno=True).mbstowcs
    >>> # Overlong 6-byte sequence: FD BF BF BB BA BA
    >>> arg = bytes(x + 128 for x in [1 + 124, 63, 63, 59, 58, 58])
    >>> mbstowcs(None, arg, 0)  # glibc accepts it as one wide character
    1
    >>> buf = (ctypes.c_int * 2)()  # wchar_t is a 32-bit int on Linux
    >>> mbstowcs(buf, arg, 2)
    1
    >>> hex(buf[0])  # a code point far beyond U+10FFFF
    '0x7fffbeba'

This shouldn't be an issue in 3.7, at least not with the default UTF-8 mode configuration. With this mode, Py_DecodeLocale calls _Py_DecodeUTF8Ex using the surrogateescape error handler [1].
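A rough illustration of what that means at the Python level (the surrogateescape handler, applied here via bytes.decode rather than the C-level _Py_DecodeUTF8Ex):

```python
# The overlong sequence from this report.
data = bytes([0xFD, 0xBF, 0xBF, 0xBB, 0xBA, 0xBA])

# surrogateescape maps each undecodable byte b to the lone
# surrogate U+DC00+b, so no information is lost...
text = data.decode('utf-8', 'surrogateescape')
print([hex(ord(c)) for c in text])

# ...and encoding with the same handler round-trips the raw bytes.
assert text.encode('utf-8', 'surrogateescape') == data
```

So under 3.7's UTF-8 mode the argument becomes a string of escaped surrogates rather than triggering the abort.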

Date                 User         Action  Args
2019-02-01 23:49:22  eryksun      set     nosy: + eryksun; messages: + msg334732
2019-02-01 19:48:23  SilentGhost  set     nosy: + ncoghlan; versions: + Python 3.7
2019-02-01 19:33:53  Neui         set     messages: + msg334712
2019-02-01 17:22:34  SilentGhost  set     messages: + msg334707
2019-02-01 17:10:26  SilentGhost  set     type: behavior -> crash; messages: + msg334705; nosy: + SilentGhost
2019-02-01 16:55:38  Neui         create