This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in the Python Developer's Guide.

classification
Title: Fix comment in tokenizer.c
Type: behavior Stage: resolved
Components: Documentation Versions: Python 3.7, Python 3.6
process
Status: closed Resolution: fixed
Dependencies: Superseder:
Assigned To: docs@python Nosy List: berker.peksag, docs@python, eric.smith, matrixise, python-dev, refi64, serhiy.storchaka
Priority: normal Keywords: patch

Created on 2016-10-20 17:23 by refi64, last changed 2022-04-11 14:58 by admin. This issue is now closed.

Files
File name Uploaded Description Edit
0001-Fix-comment-in-tokenizer.c.patch refi64, 2016-10-20 17:23
0001-Fix-comment-in-tokenizer.c.patch refi64, 2016-10-20 20:13
Messages (14)
msg279056 - (view) Author: Ryan Gonzalez (refi64) * Date: 2016-10-20 17:23
Attached is a little fix for a comment in tokenizer.c. I noticed that it was never updated for the inclusion of f-strings.
msg279057 - (view) Author: Stéphane Wirtel (matrixise) * (Python committer) Date: 2016-10-20 17:25
Thank you for this patch.
msg279059 - (view) Author: Eric V. Smith (eric.smith) * (Python committer) Date: 2016-10-20 17:47
I'd actually change this to be something like:

Process b"", r"", u"", and the various legal combinations.

There are 24 total combinations when you add upper case. I actually wrote a script in Lib/tokenize.py to generate them all. And when I add binary f-strings, that number climbs to 80.
msg279060 - (view) Author: Eric V. Smith (eric.smith) * (Python committer) Date: 2016-10-20 17:55
FWIW, in Lib/tokenize.py, it's _all_string_prefixes():

>>> _all_string_prefixes()
set(['', 'FR', 'rB', 'rF', 'BR', 'Fr', 'RF', 'rf', 'RB', 'fr', 'B', 'rb', 'F', 'Br', 'R', 'U', 'br', 'fR', 'b', 'f', 'Rb', 'Rf', 'r', 'u', 'bR'])

My basic point is that trying to list them all is hard and a maintenance problem. So as long as we're not being exhaustive, the comment should just state the gist of what the code does.
msg279076 - (view) Author: Ryan Gonzalez (refi64) * Date: 2016-10-20 20:13
@eric.smith How's this instead?
msg279077 - (view) Author: Ryan Gonzalez (refi64) * Date: 2016-10-20 20:14
Ugh, hit Submit too soon. I meant to say the patch that has 20:13 as the date (I should've probably given them different names...).
msg279085 - (view) Author: Eric V. Smith (eric.smith) * (Python committer) Date: 2016-10-20 21:51
That's great. Thanks!
msg285945 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2017-01-21 09:20
There are just 8 legal combinations if we ignore case:

>>> import tokenize
>>> sorted({x.lower() for x in tokenize._all_string_prefixes() if x})
['b', 'br', 'f', 'fr', 'r', 'rb', 'rf', 'u']
msg285947 - (view) Author: Eric V. Smith (eric.smith) * (Python committer) Date: 2017-01-21 11:19
Right, that's basically what _all_string_prefixes() does: it starts with the 6 prefixes that are unique when case and order are ignored ('b', 'r', 'u', 'f', 'br', 'fr'), and adds the cased and ordered versions.

If you're saying that we should list those 8, and say "with all combinations of case", then I think we'd be better off listing the 6 and saying "with all combinations of case and order". That's mainly because if "fbr" gets added, the list of ordered combinations gets larger.

But it's just a comment. I think we should just commit Ryan's last patch.
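The generation scheme described in the message above can be sketched as follows. This is a minimal illustrative reimplementation, not the actual code in Lib/tokenize.py: start from the 6 case- and order-independent prefixes and expand every ordering and casing.

```python
from itertools import permutations, product

def all_string_prefixes():
    # The 6 prefixes that are unique ignoring case and order.
    base = ['b', 'r', 'u', 'f', 'br', 'fr']
    result = {''}  # the empty prefix is also valid
    for prefix in base:
        # All orderings of the characters, e.g. 'br' and 'rb'.
        for perm in permutations(prefix):
            # All upper/lower casings of each ordering, e.g. 'bR', 'Br', ...
            for cased in product(*[(c, c.upper()) for c in perm]):
                result.add(''.join(cased))
    return result

print(len(all_string_prefixes()))  # 25: the 24 non-empty combinations plus ''
```

This reproduces the set shown earlier in the thread, which is why the comment in tokenizer.c is better off describing the rule than enumerating the results.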
msg285950 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2017-01-21 11:24
No, I'm just saying that the list of combinations is not so large. Ryan's patch LGTM.
msg287029 - (view) Author: Roundup Robot (python-dev) (Python triager) Date: 2017-02-05 01:28
New changeset 8b20ed083a94 by Berker Peksag in branch '3.6':
Issue #28489: Fix comment in tokenizer.c
https://hg.python.org/cpython/rev/8b20ed083a94

New changeset 72ec2c599301 by Berker Peksag in branch 'default':
Issue #28489: Merge from 3.6
https://hg.python.org/cpython/rev/72ec2c599301
msg287031 - (view) Author: Berker Peksag (berker.peksag) * (Python committer) Date: 2017-02-05 01:30
I've applied Ryan's patch since it's been sitting in the 'commit review' queue. Thanks!
msg287032 - (view) Author: Roundup Robot (python-dev) (Python triager) Date: 2017-02-05 02:00
New changeset bc2b2193520cea1adc44d27c9f7721fc6ad37293 by Berker Peksag in branch '3.6':
Issue #28489: Fix comment in tokenizer.c
https://github.com/python/cpython/commit/bc2b2193520cea1adc44d27c9f7721fc6ad37293
msg287033 - (view) Author: Roundup Robot (python-dev) (Python triager) Date: 2017-02-05 02:00
New changeset bc2b2193520cea1adc44d27c9f7721fc6ad37293 by Berker Peksag in branch 'master':
Issue #28489: Fix comment in tokenizer.c
https://github.com/python/cpython/commit/bc2b2193520cea1adc44d27c9f7721fc6ad37293

New changeset 9babb676a3c143099ab548ccc72c1ee897335e84 by Berker Peksag in branch 'master':
Issue #28489: Merge from 3.6
https://github.com/python/cpython/commit/9babb676a3c143099ab548ccc72c1ee897335e84
History
Date User Action Args
2022-04-11 14:58:38adminsetgithub: 72675
2017-02-05 02:00:25python-devsetmessages: + msg287033
2017-02-05 02:00:23python-devsetmessages: + msg287032
2017-02-05 01:30:32berker.peksagsetstatus: open -> closed

type: behavior

nosy: + berker.peksag
messages: + msg287031
resolution: fixed
stage: commit review -> resolved
2017-02-05 01:28:01python-devsetnosy: + python-dev
messages: + msg287029
2017-01-21 11:24:46serhiy.storchakasetmessages: + msg285950
2017-01-21 11:19:11eric.smithsetmessages: + msg285947
2017-01-21 09:20:12serhiy.storchakasetnosy: + serhiy.storchaka
messages: + msg285945
2016-10-20 23:32:26eric.smithsetstage: commit review
2016-10-20 21:51:50eric.smithsetmessages: + msg279085
2016-10-20 20:14:13refi64setmessages: + msg279077
2016-10-20 20:13:30refi64setfiles: + 0001-Fix-comment-in-tokenizer.c.patch

messages: + msg279076
2016-10-20 17:55:12eric.smithsetmessages: + msg279060
2016-10-20 17:47:49eric.smithsetnosy: + eric.smith
messages: + msg279059
2016-10-20 17:25:14matrixisesetnosy: + matrixise

messages: + msg279057
versions: + Python 3.7
2016-10-20 17:23:36refi64create