Author eric.smith
Recipients eric.smith, gvanrossum, lys.nikolaou, pablogsal
Date 2020-02-06.09:23:19
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1580980999.2.0.913112384097.issue39564@roundup.psfhosted.org>
In-reply-to
Content
I'll see if I can dig up the patch today. If I can find it, I'll attach it to issue 34364.

This is really the first time I've tried to write down all of the issues related to tokenizing f-strings. It does seem a little daunting, but I'm not done noodling it through. At first blush it looks like the tokenizer would need to remember whether it's inside an f-string and switch to a different set of rules if so. That's not how your average tokenizer works, and I'm not sure how Python's tokenizer would need to be changed to deal with it, or how messy that change would be.
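To make the idea concrete, here is a toy sketch (not CPython's tokenizer, and the token names are purely illustrative) of the mode switch described above: the tokenizer tracks whether it is inside an f-string and, if so, applies different rules, collecting literal text until a '{' and then tokenizing the replacement field until the matching '}'.

```python
# Toy sketch of a mode-switching tokenizer for f-strings.
# This is NOT CPython's tokenizer; token names and behavior are
# illustrative only (no nesting, escapes, or format specs handled).

def toy_tokenize(src):
    tokens = []
    i, n = 0, len(src)
    while i < n:
        c = src[i]
        if c == 'f' and i + 1 < n and src[i + 1] in '"\'':
            quote = src[i + 1]
            i += 2
            tokens.append(('FSTRING_START', 'f' + quote))
            # f-string mode: different rules than the outer tokenizer
            text = ''
            while i < n and src[i] != quote:
                if src[i] == '{':
                    if text:
                        tokens.append(('FSTRING_TEXT', text))
                        text = ''
                    i += 1
                    expr = ''
                    while i < n and src[i] != '}':
                        expr += src[i]
                        i += 1
                    i += 1  # skip the closing '}'
                    tokens.append(('FSTRING_EXPR', expr))
                else:
                    text += src[i]
                    i += 1
            if text:
                tokens.append(('FSTRING_TEXT', text))
            i += 1  # skip the closing quote
            tokens.append(('FSTRING_END', quote))
        elif c.isspace():
            i += 1
        else:
            # outer mode: everything else becomes a crude NAME token
            j = i
            while j < n and not src[j].isspace():
                j += 1
            tokens.append(('NAME', src[i:j]))
            i = j
    return tokens

print(toy_tokenize('f"x={x} done"'))
# → [('FSTRING_START', 'f"'), ('FSTRING_TEXT', 'x='),
#    ('FSTRING_EXPR', 'x'), ('FSTRING_TEXT', ' done'),
#    ('FSTRING_END', '"')]
```

Even this toy version shows why the change is invasive: the tokenizer needs per-string state, which real tokenizers generally don't carry.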

I should probably write an informational PEP about parsing f-strings. I should also include the reason I went with the "just a regular string which is later hand-parsed" approach: at the time, f-strings were a controversial topic (there were any number of reddit threads predicting doom and gloom if they were added). Parsing them as regular strings with one simple added string prefix let existing tooling (editors, syntax highlighters, etc.) easily skip over them just by recognizing 'f' as an additional string prefix.
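The "one more string prefix" point can be illustrated with a deliberately simplified highlighter pattern (an assumption for illustration, not any real tool's regex): once 'f' is added to the set of recognized prefixes, an f-string matches as a single string token just like a raw or byte string would.

```python
import re

# Simplified string-matching regex of the kind a syntax highlighter
# might use: an optional one-or-two-character prefix, then a quoted
# literal. Adding 'f'/'F' to the prefix class is the only change
# needed for f-strings to be skipped over as ordinary strings.
# (Illustrative only: no escapes or triple quotes handled.)
STRING = re.compile(r"""[fFrRbBuU]{0,2}(?:'[^']*'|"[^"]*")""")

m = STRING.search('x = f"hello {name}"')
print(m.group(0))  # the whole f-string matches as one string token
```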
History
Date User Action Args
2020-02-06 09:23:19  eric.smith  set  recipients: + eric.smith, gvanrossum, lys.nikolaou, pablogsal
2020-02-06 09:23:19  eric.smith  set  messageid: <1580980999.2.0.913112384097.issue39564@roundup.psfhosted.org>
2020-02-06 09:23:19  eric.smith  link  issue39564 messages
2020-02-06 09:23:19  eric.smith  create