
Author gvanrossum
Recipients eric.smith, gvanrossum, lys.nikolaou, pablogsal
Date 2020-02-06.00:48:42
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1580950124.53.0.781176372311.issue39564@roundup.psfhosted.org>
In-reply-to
Content
This is indeed a duplicate of issue35212 (the second message there also shows a problem with implicit f-string concatenation). There's not much information in issue34364 (just a merged PR, and a mention of a PR of yours from 2 years ago that apparently never materialized).

FWIW I meant "pegen" (https://github.com/gvanrossum/pegen/) -- not sure why my mobile spell corrector turned that into "oefen" (maybe it's becoming multilingual :-).

And the reason I called this "urgent" is that until this is fixed in CPython 3.8 we have to disable the part of our tests that compares the AST generated by pegen to the one generated by ast.parse. (We've found several other issues with incorrect column offsets in ast.parse this way, and always managed to fix them upstream. :-)

If we came up with a hacky fix, would you oppose incorporating it?

I'm guessing your alternate approach is modifying the tokenizer to treat f"x{y}z{a}b" as several tokens:

- f"x{
- y (or multiple tokens if there's a complex expression here)
- }z{
- a (same)
- }b"

(This has been proposed before by a few others, and IIRC we did it this way in ABC, in the early '80s.)
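The token split described above can be sketched in a few lines of Python. This is only an illustration of the idea, not the CPython tokenizer: the hypothetical `split_fstring` helper assumes a single flat f-string with no nested braces, no escaped `{{`/`}}` pairs, and no quotes inside the embedded expressions, and it emits each expression as one token rather than re-tokenizing it.

```python
def split_fstring(source: str) -> list[str]:
    """Split a simple f-string literal into the tokens sketched above.

    Hypothetical illustration only: assumes no nested braces, no
    escaped {{ }} pairs, and no string quotes inside the expressions.
    """
    tokens = []
    start = 0
    in_expr = False
    for i, ch in enumerate(source):
        if ch == "{" and not in_expr:
            # Emit the literal part up to and including the opening brace.
            tokens.append(source[start : i + 1])
            start = i + 1
            in_expr = True
        elif ch == "}" and in_expr:
            # The expression between the braces is its own token.
            tokens.append(source[start:i])
            start = i  # keep the closing brace with the next literal part
            in_expr = False
    tokens.append(source[start:])
    return tokens


print(split_fstring('f"x{y}z{a}b"'))
# → ['f"x{', 'y', '}z{', 'a', '}b"']
```

A real implementation would then feed each expression token back through the ordinary tokenizer, which is what makes the column offsets come out right.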
History
Date User Action Args
2020-02-06 00:48:44  gvanrossum  set     recipients: + gvanrossum, eric.smith, lys.nikolaou, pablogsal
2020-02-06 00:48:44  gvanrossum  set     messageid: <1580950124.53.0.781176372311.issue39564@roundup.psfhosted.org>
2020-02-06 00:48:44  gvanrossum  link    issue39564 messages
2020-02-06 00:48:42  gvanrossum  create