Author ucodery
Recipients lys.nikolaou, pablogsal, ucodery
Date 2022-01-05.23:12:47
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1641424367.42.0.632530824135.issue46274@roundup.psfhosted.org>
In-reply-to
Content
A source consisting of one or more backslash-escaped newlines followed by one final newline is not tokenized the same as a source where those lines are joined manually.

The source
```
\
\
\

```
produces the tokens NEWLINE, ENDMARKER when piped to the tokenize module.

Whereas the source
```

```
produces the tokens NL, ENDMARKER.
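The comparison above can be reproduced with a short script (a sketch; the exact token stream may differ across Python versions, since this inconsistency may have been changed since the report):

```python
import io
import tokenize

def token_names(source):
    """Tokenize a source string and return the token type names."""
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    return [tokenize.tok_name[tok.type] for tok in tokens]

# Three backslash-continued lines followed by a final blank line,
# i.e. the first source above:
print(token_names("\\\n\\\n\\\n\n"))

# A single blank line, i.e. the second source above:
print(token_names("\n"))
```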

I expect to receive a single NL token from both sources. Per the documentation: "Two or more physical lines may be joined into logical lines using backslash characters" ... "A logical line that contains only spaces, tabs, formfeeds and possibly a comment, is ignored (i.e., no NEWLINE token is generated)"

Moreover, because these logical lines are not being ignored, leading spaces or tabs also unexpectedly produce INDENT and DEDENT tokens.

The source
```
    \

```
produces the tokens INDENT, NEWLINE, DEDENT, ENDMARKER.

Whereas the source (a line containing only spaces)
```
    
```
produces the tokens NL, ENDMARKER.
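The indentation variant can be checked the same way (again a sketch; the INDENT/DEDENT stream is what the report describes and may differ on other Python versions):

```python
import io
import tokenize

def token_names(source):
    """Tokenize a source string and return the token type names."""
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    return [tokenize.tok_name[tok.type] for tok in tokens]

# Four spaces, a backslash continuation, then a blank line:
print(token_names("    \\\n\n"))

# Four spaces followed directly by a newline:
print(token_names("    \n"))
```

A whitespace-only physical line yields only NL, so the backslash-continued version producing INDENT/DEDENT is the surprising part.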