
Author: gvanrossum
Recipients: benjamin.peterson, ethan smith, gregory.p.smith, gvanrossum, lukasz.langa, njs, serhiy.storchaka, zsol
Date: 2018-04-25.21:37:44
SpamBayes Score: -1.0
Marked as misclassified: Yes
Message-id: <CAP7+vJKL1LMGJULNy5XKa8cZasN9dwqxJnQUCufPY=wEABbrvg@mail.gmail.com>
In-reply-to: <1524690909.26.0.682650639539.issue33337@psf.upfronthosting.co.za>
Content
I think merging the tokenizers still makes sense. We can then document
the top-level tokenize.py (in 3.8 and later) as guaranteed to be able to
tokenize anything going back to Python 2.7. And since lib2to3/pgen2 is
undocumented, I presume removing lib2to3/pgen2/tokenize.py isn't going to
break anything -- but if we're worried about that, it could be made into a
trivial wrapper around the top-level tokenize.py.
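
For illustration only, a minimal sketch of what such a wrapper could look
like, assuming it merely needs to re-export the top-level module's public
API (a hypothetical shim, not an actual patch):

    # lib2to3/pgen2/tokenize.py -- hypothetical compatibility shim.
    # Delegate everything to the documented top-level tokenize module so
    # existing "from lib2to3.pgen2 import tokenize" imports keep working.
    from tokenize import *  # token constants, TokenInfo, tokenize(), detect_encoding(), ...
    from tokenize import generate_tokens, untokenize  # not always covered by __all__

Callers that rely on pgen2-specific behaviour (e.g. tokenizing Python 2
syntax such as print statements) would of course still need the merged
tokenizer underneath -- which is the point of the merge.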

Still, the improvements you're planning for lib2to3 (no matter how
compatible) will benefit more people sooner if you extract it into its own
PyPI package. Not everybody can upgrade to 3.7 as soon as Instagram. :-)
History
Date                 User        Action  Args
2018-04-25 21:37:44  gvanrossum  set     recipients: + gvanrossum, gregory.p.smith, benjamin.peterson, njs, lukasz.langa, serhiy.storchaka, ethan smith, zsol
2018-04-25 21:37:44  gvanrossum  link    issue33337 messages
2018-04-25 21:37:44  gvanrossum  create