Message355827
Tested on 3.6.
tokenize.untokenize does not round-trip whitespace before backslash-newlines outside of strings:
from io import BytesIO
import tokenize

# Round-tripping fails on the second string.
table = (
    r'''
print\
("abc")
''',
    r'''
print \
("abc")
''',
)
for s in table:
    tokens = list(tokenize.tokenize(
        BytesIO(s.encode('utf-8')).readline))
    result = tokenize.untokenize(tokens).decode('utf-8')
    print(result == s)
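To illustrate why this is hard to fix (a minimal sketch, not part of the original report): a backslash-newline is not a token at all, so the space before it is not represented anywhere in the token stream. The only trace of the line continuation is the row jump between adjacent tokens, which is why untokenize has nothing to reconstruct the whitespace from.

```python
import io
import tokenize

# The continuation 'print \' produces no token containing the backslash;
# untokenize must reconstruct it from row offsets alone.
src = 'print \\\n("abc")\n'
toks = list(tokenize.generate_tokens(io.StringIO(src).readline))
for tok in toks:
    print(tok)

# No token string contains the backslash or the space before it.
assert not any('\\' in tok.string for tok in toks)
```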
I have an important use case that would benefit from a correct untokenize. After considerable study, I have not found a proper fix for tokenize.Untokenizer.add_whitespace.
I would be happy to work with anyone to rewrite tokenize.untokenize so that the unit tests pass without the fudges in TestRoundtrip.check_roundtrip.
History:
Date | User | Action | Args
2019-11-01 17:31:35 | edreamleo | set | recipients: + edreamleo
2019-11-01 17:31:35 | edreamleo | set | messageid: <1572629495.93.0.152221972505.issue38663@roundup.psfhosted.org>
2019-11-01 17:31:35 | edreamleo | link | issue38663 messages
2019-11-01 17:31:35 | edreamleo | create |