Author pablogsal
Recipients Anthony Sottile, BTaskaya, cstratak, gvanrossum, lys.nikolaou, ned.deily, pablogsal, serhiy.storchaka, vstinner
Date 2020-04-27.22:15:32
> The parser generator imports modules token and tokenize. It is not correct, because they are relevant to the Python version used to run the parser generator, and not to the Python version for which the parser is generated. It works currently only because there is no differences between 3.8 and 3.9, but it will fail when you add a new token or change/remove an old one.
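To make Serhiy's point concrete, here is a minimal illustration (not from the issue itself) of why importing these modules is tied to the running interpreter: the data in `token` describes the grammar of whichever Python executes the generator, not the target version.

```python
# Illustration: these imports resolve against the Python *running* the
# parser generator, not the Python version the parser is generated for.
import sys
import token

# tok_name maps token numbers to names for this interpreter's grammar;
# if 3.10 adds or removes a token, running the generator under 3.9
# would silently use the wrong token set.
print(sys.version_info[:2])
print(token.tok_name[token.NAME])
```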

Very good point, Serhiy! Thanks for catching that.

So there are two parts of the parser generator where we use these modules:

- For the grammar parser and the Python-based generator (the one that generates the grammar parser), we are fine: we only need these modules to parse the grammars and generate the metaparser, so they do not need to match the target version. Running the grammar parser (the one generated with the meta-grammar) is also fine.

- For the C generator we need the current set of "exact_token_types", and in this case they must be synchronized with the Tokens file. I think we can do as pgen does: take the path to that file as a command-line argument and parse it.
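A minimal sketch of what parsing the Tokens file could look like (the function name and the exact handling of the file format are my assumptions, not the actual PR):

```python
def parse_tokens_file(path):
    """Build (all_tokens, exact_token_types) from a CPython
    Grammar/Tokens-style file instead of importing the running
    interpreter's token module.

    Assumed line format: a token name, optionally followed by its
    exact string in single quotes, e.g.  LPAR '('
    """
    all_tokens = []
    exact_token_types = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments.
            if not line or line.startswith('#'):
                continue
            parts = line.split()
            name = parts[0]
            all_tokens.append(name)
            if len(parts) > 1:
                # e.g. "LPAR '('" -> exact_token_types["("] = "LPAR"
                exact_token_types[parts[1].strip("'")] = name
    return all_tokens, exact_token_types
```

This keeps the generated parser tied to the target version's token definitions, since the Tokens file lives in the source tree being built rather than in the interpreter running the generator.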

I will make a PR for this soon.