
Author terry.reedy
Recipients eric.snow, kurazu, terry.reedy
Date 2014-02-18.04:27:35
The no-encoding issue was mentioned in #12691, but needed to be opened as a separate issue, which is this one. The doc, as opposed to the docstring, says "Converts tokens back into Python source code". Python 3.3 source code is defined in the reference manual as a sequence of unicode characters. The doc also says "The reconstructed script is returned as a single string." In 3.x, that also means unicode, not bytes. On the other hand, tokenize does not currently accept actual Python source code (unicode), only encoded code (bytes). I think that should change, but that is a different issue (literally).
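A minimal sketch of the current input behavior described above (the variable names are mine): tokenize.tokenize() only accepts a readline callable that yields bytes, and it reports the detected encoding as its first token, while a str readline fails during encoding detection.

```python
import io
import tokenize

# tokenize.tokenize() wants a readline callable that yields *bytes*.
source = b"x = 1\n"
tokens = list(tokenize.tokenize(io.BytesIO(source).readline))
# The first token is ENCODING ('utf-8' here, the default).

# A str readline fails, because encoding detection works on bytes.
raised = False
try:
    list(tokenize.tokenize(io.StringIO("x = 1\n").readline))
except TypeError:
    raised = True
```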

For this issue, I think the doc and docstring should change to match current behavior: output a str unless the tokens (which contain unicode strings, not bytes) start with a non-empty ENCODING token, in which case output encoded bytes. Changing the behavior instead would break code that trusts the current code and doc (as opposed to the docstring).
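The current behavior that the doc and docstring should describe can be sketched like this (variable names are mine): untokenize() returns a str, unless the stream starts with an ENCODING token, in which case it returns bytes encoded accordingly.

```python
import io
import tokenize

source = b"x = 1\n"
tokens = list(tokenize.tokenize(io.BytesIO(source).readline))

# With the leading ENCODING token, the output is encoded bytes.
with_encoding = tokenize.untokenize(tokens)
# With the ENCODING token stripped, the output is a str.
without_encoding = tokenize.untokenize(tokens[1:])
```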

Since tokenize will only emit ENCODING as the first token, I would be inclined to ignore ENCODING tokens thereafter, but that might be seen as an impermissible change in behavior.
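A quick check of the claim that tokenize emits ENCODING only as the first token (a sketch; variable names are mine):

```python
import io
import tokenize

source = b"x = 1\ny = 2\n"
tokens = list(tokenize.tokenize(io.BytesIO(source).readline))
# Collect every position where an ENCODING token appears.
encoding_indexes = [i for i, tok in enumerate(tokens)
                    if tok.type == tokenize.ENCODING]
# ENCODING appears exactly once, at position 0.
```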

The dropped token issue is the subject of #8478, with patch1. It was mentioned again in #12691, among several other issues, and is the subject again of duplicate issue #16224 (now closed) with patch2.

The actual problem there is that the first token of iterator input gets dropped, but not the first token of list input. The fix is reported in #8478, so the dropped token is not part of this issue.
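The list-versus-iterator difference can be sketched like this (variable names are mine; note that on a Python with the #8478 fix applied, both results agree, whereas in affected versions the iterator input lost its first token):

```python
import io
import tokenize

source = "x = 1\n"
# 2-tuples of (type, string) force untokenize() into its "compat"
# mode, which is where the iterator-input bug lived.
pairs = [(tok.type, tok.string)
         for tok in tokenize.generate_tokens(io.StringIO(source).readline)]

from_list = tokenize.untokenize(pairs)        # list input: complete
from_iter = tokenize.untokenize(iter(pairs))  # iterator input: first token
                                              # dropped in affected versions
```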