Author vstinner
Recipients georg.brandl, josephgordon, martin.panter, meador.inge, serhiy.storchaka, vstinner
Date 2017-05-16.21:02:54
Message-id <1494968574.68.0.617507108233.issue25324@psf.upfronthosting.co.za>
In-reply-to
Content
> I would fix this by making tokenize.tok_name a copy. It looks like this behaviour dates back to 1997 (see revision 1efc4273fdb7).

token.tok_name is part of the Python public API:
https://docs.python.org/dev/library/token.html#token.tok_name

whereas tokenize.tok_name isn't documented. So I dislike having two disconnected mappings. I would prefer to add the tokenize-specific tokens directly in Lib/token.py, and then look COMMENT, NL and ENCODING up by name in tok_name.
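
To illustrate the point above, here is a minimal sketch of the two mappings being discussed. It assumes a modern CPython where the tokenize-only tokens (COMMENT, NL, ENCODING) were eventually folded into Lib/token.py, as proposed; on older versions, tokenize mutated the shared dict itself.

```python
import token
import tokenize

# token.tok_name is the documented public API: a dict mapping
# token numbers to their names.
print(token.tok_name[token.NAME])  # -> 'NAME'

# tokenize exposes the same mapping (tokenize.tok_name), extended
# with the tokenize-specific tokens COMMENT, NL and ENCODING.
print(tokenize.tok_name[tokenize.COMMENT])  # -> 'COMMENT'

# A reverse (name -> number) lookup, since dicts have no .index();
# this is one way to recover the COMMENT/NL/ENCODING constants
# from the names stored in tok_name.
name_to_number = {name: number for number, name in tokenize.tok_name.items()}
print(name_to_number['COMMENT'] == tokenize.COMMENT)  # -> True
```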
History
Date User Action Args
2017-05-16 21:02:54  vstinner  set   recipients: + vstinner, georg.brandl, meador.inge, martin.panter, serhiy.storchaka, josephgordon
2017-05-16 21:02:54  vstinner  set   messageid: <1494968574.68.0.617507108233.issue25324@psf.upfronthosting.co.za>
2017-05-16 21:02:54  vstinner  link  issue25324 messages
2017-05-16 21:02:54  vstinner  create