Message383688
I am with Lysandros and Batuhan. The parser is tightly coupled with the C tokenizer, and the only way to reuse *the parser* is to make it flexible enough to receive a token stream of Python objects as input. That would not only have a performance impact on normal parsing but would also raise the complexity of this task considerably, especially taking into account that the use case is quite restricted and is something you can already achieve by transforming the token stream into text and using ast.parse.
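As a minimal sketch of the workaround mentioned above, assuming the token stream comes from the stdlib tokenize module: the tokens can be round-tripped back to source text with tokenize.untokenize and the result fed to ast.parse, without the parser itself ever needing to accept Python token objects.

```python
import ast
import io
import tokenize

# Hypothetical example input: any Python source would do here.
source = "x = 1 + 2\n"

# Produce a token stream of Python objects (TokenInfo tuples).
tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))

# Transform the token stream back into text...
text = tokenize.untokenize(tokens)

# ...and parse the reconstructed text with the regular parser.
tree = ast.parse(text)
```

This achieves the restricted use case (parse a token stream you may have filtered or rewritten) with existing public APIs, at the cost of a text round-trip.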
There is considerable tension between exposing parts of the compiler pipeline for introspection and other capabilities, and our ability to do optimizations. Given how painful it has been in the past to deal with this, my view is to avoid exposing anything in the compiler pipeline as much as possible, so we don't shoot ourselves in the foot in the future if we need to change things around.
Date | User | Action | Args
2020-12-24 13:14:59 | pablogsal | set | recipients: + pablogsal, pfalcon, serhiy.storchaka, lys.nikolaou, BTaskaya
2020-12-24 13:14:59 | pablogsal | set | messageid: <1608815699.4.0.477576677928.issue42729@roundup.psfhosted.org>
2020-12-24 13:14:59 | pablogsal | link | issue42729 messages
2020-12-24 13:14:59 | pablogsal | create |