
Author gregory.p.smith
Recipients ammar2, gregory.p.smith, meador.inge, taleinat, terry.reedy
Date 2018-10-29.22:26:38
The behavior change introduced in 3.6.7 and 3.7.1 has further consequences:

>>> tokenize.untokenize(tokenize.generate_tokens(io.StringIO('#').readline))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File ".../cpython/cpython-upstream/Lib/", line 332, in untokenize
    out = ut.untokenize(iterable)
  File ".../cpython/cpython-upstream/Lib/", line 266, in untokenize
  File ".../cpython/cpython-upstream/Lib/", line 227, in add_whitespace
    raise ValueError("start ({},{}) precedes previous end ({},{})"
ValueError: start (1,1) precedes previous end (2,0)

The same failure occurs when using the documented tokenize API (`generate_tokens` itself is not documented):

ValueError: start (1,1) precedes previous end (2,0)
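A reproduction sketch using the documented bytes-based API. On the affected releases (3.6.7, 3.7.1) `untokenize()` raises the ValueError above because the input lacks a trailing newline; on releases without the behavior change it round-trips cleanly, so the outcome below is version-dependent:

```python
import io
import tokenize

# Tokenize a comment with no trailing newline via the documented
# bytes-based API, then try to untokenize the result.
source = b'#'
try:
    result = tokenize.untokenize(
        tokenize.tokenize(io.BytesIO(source).readline))
    outcome = 'round-tripped'
except ValueError:
    # Raised by Untokenizer.add_whitespace on affected versions.
    outcome = 'ValueError'
print(outcome)
```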


`untokenize()` is no longer able to process the output of `generate_tokens()` when the input passed to `generate_tokens()` did not end in a newline.

Today's workaround: if the last line returned by the readline callable passed to `tokenize` or `generate_tokens` is missing a trailing newline, append one. This is very annoying to implement.
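One possible sketch of that workaround: wrap the readline callable so the final line always ends in a newline before tokenize sees it. The helper name `ensure_trailing_newline` is ours, not part of the stdlib:

```python
import io
import tokenize

def ensure_trailing_newline(readline):
    """Wrap a readline callable so every non-empty line it yields
    ends with a newline (hypothetical helper, not in the stdlib)."""
    def wrapped():
        line = readline()
        if line and not line.endswith('\n'):
            line += '\n'
        return line
    return wrapped

source = '#'  # a comment with no trailing newline
tokens = tokenize.generate_tokens(
    ensure_trailing_newline(io.StringIO(source).readline))
result = tokenize.untokenize(tokens)
print(repr(result))  # the round-tripped source, newline appended
```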
Linked to issue35107 (2018-10-29 22:26:38, gregory.p.smith).