
tokenize.py emits spurious NEWLINE if file ends on a comment without a newline #88833

Closed
MatthieuDartiailh mannequin opened this issue Jul 18, 2021 · 5 comments
Labels
3.9 only security fixes 3.10 only security fixes 3.11 only security fixes stdlib Python modules in the Lib dir type-bug An unexpected behavior, bug, or error

Comments

@MatthieuDartiailh

MatthieuDartiailh mannequin commented Jul 18, 2021

BPO 44667
Nosy @terryjreedy, @ambv, @MatthieuDartiailh, @pablogsal, @miss-islington
PRs
  • bpo-44667: Treat correctly lines ending with comments and no newlines in the Python tokenizer #27499
  • [3.10] bpo-44667: Treat correctly lines ending with comments and no newlines in the Python tokenizer (GH-27499) #27500
  • [3.9] bpo-44667: Treat correctly lines ending with comments and no newlines in the Python tokenizer (GH-27499) #27501
Files
  • no_newline_at_end_of_file_with_comment.py

Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = None
    closed_at = <Date 2021-07-31.01:17:51.161>
    created_at = <Date 2021-07-18.13:50:13.546>
    labels = ['type-bug', 'library', '3.9', '3.10', '3.11']
    title = 'tokenize.py emits spurious NEWLINE if file ends on a comment without a newline'
    updated_at = <Date 2022-01-19.20:53:27.027>
    user = 'https://github.com/MatthieuDartiailh'

    bugs.python.org fields:

    activity = <Date 2022-01-19.20:53:27.027>
    actor = 'terry.reedy'
    assignee = 'none'
    closed = True
    closed_date = <Date 2021-07-31.01:17:51.161>
    closer = 'pablogsal'
    components = ['Library (Lib)']
    creation = <Date 2021-07-18.13:50:13.546>
    creator = 'mdartiailh'
    dependencies = []
    files = ['50157']
    hgrepos = []
    issue_num = 44667
    keywords = ['patch']
    message_count = 5.0
    messages = ['397750', '398615', '398738', '398739', '410980']
    nosy_count = 5.0
    nosy_names = ['terry.reedy', 'lukasz.langa', 'mdartiailh', 'pablogsal', 'miss-islington']
    pr_nums = ['27499', '27500', '27501']
    priority = 'normal'
    resolution = 'fixed'
    stage = 'resolved'
    status = 'closed'
    superseder = None
    type = 'behavior'
    url = 'https://bugs.python.org/issue44667'
    versions = ['Python 3.9', 'Python 3.10', 'Python 3.11']

    @MatthieuDartiailh

    MatthieuDartiailh mannequin commented Jul 18, 2021

    Using tokenize.py to tokenize the attached file yields:
    0,0-0,0: ENCODING 'utf-8'
    1,0-1,2: NAME 'if'
    1,3-1,4: NAME 'a'
    1,4-1,5: OP ':'
    1,5-1,7: NEWLINE '\r\n'
    2,0-2,4: INDENT '    '
    2,4-2,5: NAME 'b'
    2,6-2,7: OP '='
    2,8-2,9: NUMBER '1'
    2,9-2,11: NEWLINE '\r\n'
    3,0-3,2: NL '\r\n'
    4,0-4,6: COMMENT '# test'
    4,6-4,6: NL ''
    4,6-4,7: NEWLINE ''
    5,0-5,0: DEDENT ''
    5,0-5,0: ENDMARKER ''

    This output is wrong in that it adds two newline tokens: one as an NL, which is correct, and one as a NEWLINE, which is not, since there is no preceding code on the line.
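The behavior can be reproduced without the attached file by tokenizing an equivalent source string directly. A minimal sketch (using `\n` line endings for simplicity rather than the `\r\n` shown above):

```python
# Minimal reproduction sketch for the report above: tokenize a source
# string whose last line is a comment with no trailing newline, and print
# the token stream so the trailing NL/NEWLINE tokens can be inspected.
import io
import tokenize

# Equivalent to the attached file, but with '\n' endings and no final newline.
source = "if a:\n    b = 1\n\n# test"

tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))
for tok in tokens:
    print(f"{tok.start}-{tok.end}: {tokenize.tok_name[tok.type]} {tok.string!r}")
```

On an affected interpreter the listing ends with both an NL and a NEWLINE token before DEDENT/ENDMARKER; with the bpo-44667 fix applied, only the NL remains.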

    If a newline is added at the end of the file, one gets:

    0,0-0,0: ENCODING 'utf-8'
    1,0-1,2: NAME 'if'
    1,3-1,4: NAME 'a'
    1,4-1,5: OP ':'
    1,5-1,7: NEWLINE '\r\n'
    2,0-2,4: INDENT '    '
    2,4-2,5: NAME 'b'
    2,6-2,7: OP '='
    2,8-2,9: NUMBER '1'
    2,9-2,11: NEWLINE '\r\n'
    3,0-3,2: NL '\r\n'
    4,0-4,6: COMMENT '# test'
    4,6-4,8: NL '\r\n'
    5,0-5,0: DEDENT ''
    5,0-5,0: ENDMARKER ''

    Similarly, if code is added before the comment, a single NEWLINE token is generated (with an empty string, since it is synthetic).
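That case can be checked directly with a small snippet (a hypothetical one-line source, not the attached file):

```python
# Sketch of the code-before-comment case: tokenize a one-line source with
# a trailing comment and no final newline. Exactly one synthetic NEWLINE
# token is emitted for the logical line, and no NL token appears.
import io
import tokenize

source = "b = 1  # test"  # no trailing newline
tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))
names = [tokenize.tok_name[t.type] for t in tokens]
print(names)
```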

    The extra NEWLINE found when tokenizing the attached file can cause issues when parsing it. The problem was found in we-like-parsers/pegen#11 (comment), where a pure Python parser based on pegen is being built. The extra NEWLINE is a problem because the grammar, which uses the same rules as the Python grammar, does not accept a NEWLINE at the end of a block, so parsing fails even though the CPython parser handles this file without any issue.

    I believe this issue stems from https://github.com/python/cpython/blob/3.9/Lib/tokenize.py#L605, where the check does not account for a last line consisting only of a comment. Adding a check for whether the line starts with a # should be sufficient to avoid emitting the extra NEWLINE.
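The idea behind the proposed check can be sketched as a standalone predicate. This is illustrative only, not the actual patch to Lib/tokenize.py, and the function name is hypothetical:

```python
# Illustrative sketch of the proposed guard: before emitting a synthetic
# NEWLINE for a file lacking a final newline, skip it when the last line
# is blank or contains only a comment, since a comment-only line already
# produced its own NL token. The function name is hypothetical.
def needs_synthetic_newline(last_line: str) -> bool:
    stripped = last_line.strip()
    return bool(stripped) and not stripped.startswith("#")

print(needs_synthetic_newline("b = 1"))    # code: synthetic NEWLINE needed
print(needs_synthetic_newline("# test"))   # comment only: no NEWLINE
```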

    @MatthieuDartiailh MatthieuDartiailh mannequin added 3.8 only security fixes stdlib Python modules in the Lib dir type-bug An unexpected behavior, bug, or error labels Jul 18, 2021
    @pablogsal

    New changeset b6bde9f by Pablo Galindo Salgado in branch 'main':
    bpo-44667: Treat correctly lines ending with comments and no newlines in the Python tokenizer (GH-27499)
    b6bde9f

    @ambv

    ambv commented Aug 2, 2021

    New changeset 33a4010 by Miss Islington (bot) in branch '3.10':
    bpo-44667: Treat correctly lines ending with comments and no newlines in the Python tokenizer (GH-27499) (GH-27500)
    33a4010

    @ambv

    ambv commented Aug 2, 2021

    New changeset 2d11797 by Miss Islington (bot) in branch '3.9':
    bpo-44667: Treat correctly lines ending with comments and no newlines in the Python tokenizer (GH-27499) (GH-27501)
    2d11797

    @terryjreedy

    This appears to have been a duplicate of bpo-35107, where the failing example was '#' and the NL, NEWLINE pair was noted. So this either predates 3.9 or was re-introduced. In any case, thanks for the fix.

    @terryjreedy terryjreedy added 3.9 only security fixes 3.10 only security fixes 3.11 only security fixes and removed 3.8 only security fixes labels Jan 19, 2022
    @ezio-melotti ezio-melotti transferred this issue from another repository Apr 10, 2022
    Sachaa-Thanasius added a commit to Sachaa-Thanasius/jishaku that referenced this issue Apr 14, 2024
    - Fix test break, since `tokenize.tokenize` has buggy behavior that wasn't backported.
      - See python/cpython#79288 and python/cpython#88833.
    - Adjust typing usage so everything from typing is prepended with `typing.`.