
Recipients benjamin.peterson, daniel.urban, docs@python, eric.snow, ezio.melotti, meador.inge, sandro.tosi, serhiy.storchaka, takluyver, yselivanov
Date 2014-02-06.15:50:03
I did some research into the cause of this issue. The assertion was
added in this change by Jeremy Hylton in August 2006. (The
corresponding Mercurial commit is here.)

At that point I believe the assertion was reasonable. I think it would
have been triggered by backslash-continued lines, but otherwise it
held.

But in this change <> in
March 2008, Trent Nelson applied this patch by Michael Foord
<> to implement PEP
263 and fix issue719888. The patch added an ENCODING token to the
output of tokenize.tokenize(). The ENCODING token is always generated
with row number 0, while the first actual token is generated with row
number 1. So now every token stream from tokenize.tokenize() sets off
the assertion.
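The off-by-one is easy to see in the token stream itself. A minimal
sketch (standard library only) that prints the start rows of the first
two tokens:

```python
import io
import tokenize

# Tokenize a one-line module and inspect the start positions of the
# first two tokens in the stream.
source = b"x = 1\n"
tokens = list(tokenize.tokenize(io.BytesIO(source).readline))

first, second = tokens[0], tokens[1]
print(tokenize.tok_name[first.type], first.start)    # ENCODING (0, 0)
print(tokenize.tok_name[second.type], second.start)  # NAME (1, 0)
```

The ENCODING token starts at row 0, while the first real token of the
source starts at row 1, which is the gap that trips the assertion in
untokenize().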

The lack of a test case for tokenize.untokenize() in "full" mode meant
that it was (and is) all too easy for someone to accidentally break it
like this.
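A regression test along these lines would have caught the breakage.
This is a sketch, not the test actually added to CPython; at the time
of this message the round trip raised AssertionError, whereas on an
interpreter where the bug is fixed it succeeds:

```python
import io
import tokenize

def roundtrip(source: bytes) -> bytes:
    """Tokenize source in "full" mode (5-tuples, including the
    ENCODING token) and feed the result back to untokenize()."""
    tokens = list(tokenize.tokenize(io.BytesIO(source).readline))
    # With an ENCODING token present, untokenize() returns bytes
    # encoded using that encoding.
    return tokenize.untokenize(tokens)

source = b"x = 1\ny = 2\n"
assert roundtrip(source) == source
```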