
Author kurazu
Recipients eric.snow, kurazu
Date 2013-07-06.15:40:24
untokenize also has some other problems, especially when it uses compat mode: it will skip the first significant token if the ENCODING token is not present in the input.

For example, for input like this (code simplified):
>>> tokens = tokenize(b"1 + 2")
>>> untokenize(tokens[1:])
'+2 '
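For reference, here is a runnable version of the simplified snippet above using the real API (tokenize() takes a readline callable, not a bytes object). Note that on recent Pythons, where this bug has since been fixed, dropping the ENCODING token no longer loses the first token:

```python
import io
import tokenize

# tokenize() wants a readline callable, not raw bytes
tokens = list(tokenize.tokenize(io.BytesIO(b"1 + 2").readline))

# Drop the leading ENCODING token, as in the report above.
# On the buggy version this produced '+2 '; after the fix the
# remaining 5-tuples still reconstruct the source correctly.
print(tokenize.untokenize(tokens[1:]))  # '1 + 2' on fixed versions
```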

It also fails to adhere to another statement in the documentation:
"The iterable must return sequences with at least two elements. [...] Any additional sequence elements are ignored."

In the current implementation, sequences can be either 2 or 5 elements long, and in the 5-element variant the last 3 elements are not ignored but are used to reconstruct the source code with its original whitespace.
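A small sketch of the two accepted sequence shapes (my own illustration, not from the report): with full 5-tuples the trailing (start, end, line) elements let untokenize restore the original whitespace exactly, while 2-tuples fall back to compat mode, which regenerates spacing heuristically:

```python
import io
import tokenize

src = b"1  +   2"   # deliberately odd spacing
toks = list(tokenize.tokenize(io.BytesIO(src).readline))

# 5-tuples: (type, string, start, end, line); the positions let
# untokenize reproduce the original spacing exactly. Because the
# ENCODING token is present, the result comes back as bytes.
assert tokenize.untokenize(toks) == src

# 2-tuples: positions are dropped, so compat mode guesses the
# whitespace; the result tokenizes the same but spacing differs.
print(tokenize.untokenize((t.type, t.string) for t in toks))
```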

I'm trying to prepare a patch for these issues.
Date User Action Args
2013-07-06 15:40:24 kurazu set recipients: + kurazu, eric.snow
2013-07-06 15:40:24 kurazu set messageid: <>
2013-07-06 15:40:24 kurazu link issue16223 messages
2013-07-06 15:40:24 kurazu create