Crash during decoding using UTF-16/32 and custom error handler #76764
The CPython interpreter gets SIGSEGV or SIGABRT during the run. The script attempts to decode a binary file using the UTF-16-LE encoding and a custom error handler. The error handler is poorly built and does not respect the Unicode standard: it miscalculates the position from which the decoder should continue. This interferes with the internal C code doing memory allocation, and the result is invalid writes outside the allocated block. Here is how it looks with Python 3.7.0a4+ (heads/master:44a70e9, Jan 17 2018, 12:18:45) run under Valgrind 3.11.0; please see the full Valgrind output in the attached valgrind.log:

==24836== Invalid write of size 4
As written, decode_crash.py crashes on Windows as well. Passing 'replace' instead of 'w3lib_replace' results in no crash, just lots of boxes and blanks.
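For context, a codecs error handler receives the UnicodeDecodeError and must return a (replacement, resume_position) tuple; the crashes in this issue come from handlers that misuse the resume position. A minimal well-behaved handler looks like this (a sketch; the name hex_escape is made up for illustration):

```python
import codecs

def hex_escape(exc):
    # Replace the undecodable byte range with hex escapes and resume
    # decoding at exc.end, as the error-handler contract expects.
    if not isinstance(exc, UnicodeDecodeError):
        raise exc
    bad = exc.object[exc.start:exc.end]
    return (''.join('\\x%02x' % b for b in bad), exc.end)

codecs.register_error('hex_escape', hex_escape)

# A lone high surrogate unit (0xd8d8) followed by 'A' (unit 0x0041):
print(b'\xd8\xd8A\x00'.decode('utf-16-le', 'hex_escape'))
```

Because the handler resumes exactly at exc.end, the decoder's assumption of at most one output character per two input bytes (plus the handler's replacement) holds, and no over-long output is produced.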
The problem is that the UTF-16 decoder almost always assumes two bytes decode to one Unicode character, so when allocating memory it assumes (bytes_number+1)/2 Unicode slots are enough; there is even a comment in the code saying so. And unicode_decode_call_errorhandler_writer only allocates more memory when the error handler returns a string longer than one character; it does not account for a handler that paces forward by one byte at a time, in which case one input byte maps to one output character. So it's possible for the decoder to write out of bounds. This example reliably crashes on my Mac with a debug build; it writes across the bound of the internal Unicode buffer:

>>> import codecs
>>> def pace_by_one(exc):
... return ('\ufffd', exc.start+1)
...
>>> codecs.register_error('pace_by_one', pace_by_one)
>>> b'\xd8\xd8\xd8\xd8\xd8\xd8\x00\x00\x00'.decode('utf-16-le', 'pace_by_one')
Debug memory block at address p=0x10210c260: API 'o'
100 bytes originally requested
The 7 pad bytes at p-7 are FORBIDDENBYTE, as expected.
The 8 pad bytes at tail=0x10210c2c4 are not all FORBIDDENBYTE (0xfb):
at tail+0: 0x00 *** OUCH
at tail+1: 0x00 *** OUCH
at tail+2: 0xfb
at tail+3: 0xfb
at tail+4: 0xfb
at tail+5: 0xfb
at tail+6: 0xfb
at tail+7: 0xfb
The block was made by call #74857 to debug malloc/realloc.
Data at p: 00 00 00 00 00 00 00 00 ... fd ff fd ff fd ff d8 00
Fatal Python error: bad trailing pad byte

Current thread 0x00007fffab9b4340 (most recent call first):

I'll try to make a fix tomorrow.
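To quantify the under-allocation described above: with a handler that advances one byte per error, N undecodable bytes can expand to N replacement characters, roughly double the (N+1)/2 slots the decoder reserves. A sketch of the arithmetic (the handler name is made up; on builds containing the fix this completes instead of corrupting the heap):

```python
import codecs

def pace_by_one_demo(exc):
    # Legal per the handler contract, but advances only one byte,
    # so each input byte can yield a full replacement character.
    return ('\ufffd', exc.start + 1)

codecs.register_error('pace_by_one_demo', pace_by_one_demo)

data = b'\xd8' * 20   # every 16-bit unit is a lone high surrogate
out = data.decode('utf-16-le', 'pace_by_one_demo')
print(len(data), (len(data) + 1) // 2, len(out))  # 20 input bytes, 10 slots reserved, 20 chars out
```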
Another way to crash:

>>> import codecs
>>> def replace_with_longer(exc):
... exc.object = b'\xa0\x00' * 100
... return ('\ufffd', exc.end)
...
>>> codecs.register_error('replace_with_longer', replace_with_longer)
>>> b'\xd8\xd8'.decode('utf-16-le', 'replace_with_longer')
Debug memory block at address p=0x10b3b8c40: API 'o'
92 bytes originally requested
The 7 pad bytes at p-7 are FORBIDDENBYTE, as expected.
The 8 pad bytes at tail=0x10b3b8c9c are not all FORBIDDENBYTE (0xfb):
at tail+0: 0xa0 *** OUCH
at tail+1: 0x00 *** OUCH
at tail+2: 0xa0 *** OUCH
at tail+3: 0x00 *** OUCH
at tail+4: 0xa0 *** OUCH
at tail+5: 0x00 *** OUCH
at tail+6: 0xa0 *** OUCH
at tail+7: 0x00 *** OUCH
The block was made by call #11529390970613309440 to debug malloc/realloc.
Data at p: 00 00 00 00 00 00 00 00 ... 00 00 00 00 fd ff a0 00
Fatal Python error: bad trailing pad byte

Current thread 0x00007fffab9b4340 (most recent call first):
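This second crash exploits a different freedom the handler contract allows: the handler may replace exc.object entirely, so the decoder resumes inside a buffer longer than the one its output allocation was sized for. A sketch of the same trick (handler name made up; on a build with the fix it decodes cleanly instead of overrunning the buffer):

```python
import codecs

def swap_object_demo(exc):
    # Replace the input with a much longer buffer; the decoder must
    # re-check its output allocation when it resumes at exc.end.
    exc.object = b'\xa0\x00' * 100   # 200 bytes: 100 units of U+00A0
    return ('\ufffd', exc.end)

codecs.register_error('swap_object_demo', swap_object_demo)

out = b'\xd8\xd8'.decode('utf-16-le', 'swap_object_demo')
print(len(out))   # one replacement char plus the U+00A0 units decoded after exc.end
```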
I wrote a draft patch, without tests yet; I'll add them later. Reviews are appreciated. I also checked the Windows code page equivalent and the encoders; they look to me like they don't suffer from this problem.