Message192454
untokenize also has some other problems, especially when it takes the compat path: it will skip the first significant token if an ENCODING token is not present in the input.
For example, for input like this (code simplified; the tokens are truncated to 2-tuples, which is what sends untokenize down the compat path):
>>> import io
>>> from tokenize import tokenize, untokenize
>>> tokens = [tok[:2] for tok in tokenize(io.BytesIO(b"1 + 2").readline)]
>>> untokenize(tokens[1:])
'+2 '
The leading NUMBER token "1" is silently dropped.
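The culprit seems to be Untokenizer.compat(): it receives the first token as a separate argument and only uses it to prime its local state, so the token's text never reaches the output. Here is a heavily simplified paraphrase of that pre-fix control flow (a sketch only, not the verbatim stdlib source; the name compat_sketch is mine, and the ENCODING/INDENT/DEDENT/STRING handling is omitted):

from tokenize import NAME, NUMBER

def compat_sketch(token, iterable):
    """Paraphrase of the pre-fix Untokenizer.compat() control flow."""
    tokens = []
    toknum, tokval = token    # the first token is unpacked only to prime
                              # internal state; its text is never appended
                              # to the output list below
    for tok in iterable:      # only tokens seen by this loop are emitted
        toknum, tokval = tok[:2]
        if toknum in (NAME, NUMBER):
            tokval += ' '     # keep names/numbers from running together
        tokens.append(tokval)
    return ''.join(tokens)

That is harmless when the first sequence happens to be the ENCODING token (which should be dropped anyway), but otherwise real source text disappears.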
It also doesn't adhere to another documentation item:
"The iterable must return sequences with at least two elements. [...] Any additional sequence elements are ignored."
In the current implementation the sequences must be either 2 or 5 elements long, and in the 5-element variant the last 3 elements are not ignored: they are used to reconstruct the source code with its original whitespace.
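Both points can be demonstrated like this (a sketch, assuming the usual io.BytesIO-based invocation; the exact ValueError wording differs between versions):

import io
from tokenize import tokenize, untokenize

toks = list(tokenize(io.BytesIO(b"1  +  2\n").readline))

# 5-element tuples: the "additional" elements are not ignored; the
# start/end positions are used to restore the original spacing.
print(untokenize(toks))                  # b'1  +  2\n'

# 2-element tuples: the compat path; the original spacing is lost.
print(untokenize(t[:2] for t in toks))

# 3-element tuples: neither accepted nor ignored; the 5-tuple branch
# fails to unpack them.
try:
    untokenize(t[:3] for t in toks)
except ValueError as exc:
    print("ValueError:", exc)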
I'm trying to prepare a patch for those issues.
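One possible direction for the compat part (purely a sketch, not the actual patch; compat_fix_sketch is a made-up name): feed the explicit first token through the same loop as the rest, e.g. with itertools.chain, so nothing is skipped and ENCODING is filtered uniformly:

import itertools
from tokenize import ENCODING, NAME, NUMBER

def compat_fix_sketch(token, iterable):
    tokens = []
    for tok in itertools.chain([token], iterable):
        toknum, tokval = tok[:2]
        if toknum == ENCODING:    # dropped wherever it appears
            continue
        if toknum in (NAME, NUMBER):
            tokval += ' '
        tokens.append(tokval)
    return ''.join(tokens)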
Date                | User   | Action | Args
2013-07-06 15:40:24 | kurazu | set    | recipients: + kurazu, eric.snow
2013-07-06 15:40:24 | kurazu | set    | messageid: <1373125224.33.0.51813085596.issue16223@psf.upfronthosting.co.za>
2013-07-06 15:40:24 | kurazu | link   | issue16223 messages
2013-07-06 15:40:24 | kurazu | create |