Message358995
lib2to3.tokenize should accept 'utf8' and 'utf-8' interchangeably, for consistency with the rest of the Python library. (I looked through the library source: there seems to be no consistent preference, and many, but not all, checks for 'utf-8' also check for 'utf8'.) In particular, tokenize.detect_encoding should handle both forms, since the encoding can be set by the user. The code should also allow the uppercase spellings 'UTF8' and 'UTF-8'.
See also https://bugs.python.org/issue39154
(This is probably a larger issue than just lib2to3, as a quick grep through /usr/lib/python3.7 showed; but I'm not sure how best to address that.)
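As a minimal sketch (not the lib2to3 fix itself): the stdlib's own alias machinery already canonicalizes all four spellings, and the stdlib tokenize.detect_encoding accepts any of them in a coding cookie, so comparing encodings via codecs.lookup() rather than by string equality is one way to treat them interchangeably:

```python
import codecs
import io
import tokenize

# codecs.lookup() resolves encoding aliases case-insensitively, so
# 'utf8', 'utf-8', 'UTF8', and 'UTF-8' all map to the canonical
# name 'utf-8'.
names = {codecs.lookup(n).name for n in ("utf8", "utf-8", "UTF8", "UTF-8")}
print(names)  # → {'utf-8'}

# The stdlib tokenize.detect_encoding accepts any of these spellings
# in a PEP 263 coding cookie.  Whatever spelling it returns, the
# canonical codec name is 'utf-8'.
src = b"# -*- coding: UTF8 -*-\nx = 1\n"
enc, _ = tokenize.detect_encoding(io.BytesIO(src).readline)
assert codecs.lookup(enc).name == "utf-8"
```

Normalizing both sides with codecs.lookup(...).name before comparing would sidestep the spelling question entirely, at the cost of a codec-registry lookup per check.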
Date | User | Action | Args
2019-12-29 17:48:02 | Peter Ludemann | set | recipients: + Peter Ludemann, vstinner, ezio.melotti
2019-12-29 17:48:02 | Peter Ludemann | set | messageid: <1577641682.29.0.16123079146.issue39154@roundup.psfhosted.org>
2019-12-29 17:48:02 | Peter Ludemann | link | issue39154 messages
2019-12-29 17:48:02 | Peter Ludemann | create |