
Author serhiy.storchaka
Recipients adelfino, eric.smith, serhiy.storchaka
Date 2018-05-04.15:25:19
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1525447519.77.0.682650639539.issue33422@psf.upfronthosting.co.za>
In-reply-to
Content
I don't think we need to support prefixes without quotes or with triple quotes. 'ur' is not a valid prefix. Using simplified code from tokenize:

        _strprefixes = [''.join(u) + q
                        for t in ('b', 'r', 'u', 'f', 'br', 'rb', 'fr', 'rf')
                        for u in itertools.product(*[(c, c.upper()) for c in t])
                        for q in ("'", '"')]

Or you can use tokenize._all_string_prefixes() directly:

        _strprefixes = [p + q
                        for p in tokenize._all_string_prefixes()
                        for q in ("'", '"')]
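Expanded for comparison (a quick sketch; `_all_string_prefixes()` is a private CPython helper and may not exist in every version, so it is looked up defensively here), the two variants differ only in that the tokenize-based one also yields the bare quote characters, because the helper includes the empty prefix:

```python
import itertools
import tokenize

# Manual expansion: every case variant of each prefix, times both quotes.
manual = {''.join(u) + q
          for t in ('b', 'r', 'u', 'f', 'br', 'rb', 'fr', 'rf')
          for u in itertools.product(*[(c, c.upper()) for c in t])
          for q in ("'", '"')}
assert len(manual) == 48  # 24 case variants x 2 quote characters

# tokenize-based expansion, guarded since the helper is private.
helper = getattr(tokenize, '_all_string_prefixes', None)
if helper is not None:
    from_tokenize = {p + q for p in helper() for q in ("'", '"')}
    # The helper also returns the empty prefix '', so the bare quote
    # characters appear in this variant but not in the manual one.
    assert from_tokenize - manual <= {"'", '"'}
```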

But it may be simpler to just convert the string to lower case before looking it up in the symbols dict. Then:

        _strprefixes = [p + q
                        for p in ('b', 'r', 'u', 'f', 'br', 'rb', 'fr', 'rf')
                        for q in ("'", '"')]
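The lowercase-lookup idea can be sketched like this (`is_string_prefix` is a hypothetical helper name for illustration, not the actual code):

```python
_strprefixes = [p + q
                for p in ('b', 'r', 'u', 'f', 'br', 'rb', 'fr', 'rf')
                for q in ("'", '"')]

def is_string_prefix(s):
    # Lowercasing first collapses all 48 prefix+quote case variants
    # onto the 16 lowercase entries, so the table stays small.
    return s.lower() in _strprefixes

# Variants like RB' or F" match; invalid combinations like ur' do not.
assert is_string_prefix("RB'")
assert is_string_prefix('F"')
assert not is_string_prefix("ur'")
```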