Author martin.panter
Recipients brett.cannon, martin.panter, thomaslee
Date 2016-11-22.10:16:14
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <>
I’m not sure I understand your questions, Brett (which tokenizer? what rebuilding?), but I will try to explain the relevant parts.

My main goal is to add a makefile dependency of parsetok.o on $(GRAMMAR_H). In Python 3 that is easy, and (I realize now) it would solve my problem without changing any other file. However, my changes in parsetok.c eliminate an unnecessary bootstrapping problem, so I still think they are worthwhile.
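In makefile terms, the Python 3 fix is roughly a one-line extra dependency. This is only a sketch: the target path and the $(GRAMMAR_H) variable are assumptions about how Makefile.pre.in names things, not a quote of the actual rules.

```make
# Sketch only; names assumed, not taken from Makefile.pre.in.
GRAMMAR_H= Include/graminit.h

# Rebuild parsetok.o whenever the generated grammar header changes.
Parser/parsetok.o: $(GRAMMAR_H)
```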

For Python 3, parsetok.c is first compiled into parsetok_pgen.o for the pgen program, and then, after pgen has run, compiled again into a separate parsetok.o. Rebuilding the code is not a problem there.

In Python 2, the same parsetok.o object is part of both the pgen program and the eventual python binary. To add my desired makefile dependency, I have to split the compilation of parsetok.c into two makefile rules (parsetok.o and parsetok_pgen.o). So you could say I am breaking a circle _by_ rebuilding the code.
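The split could look roughly like the sketch below. The rule bodies, flags, and the -DPGEN define are assumptions for illustration (CPython's pgen build does define PGEN for some sources, but the exact rules here are not copied from the real Makefile):

```make
# Sketch only: two objects built from the same source file.
GRAMMAR_H= Include/graminit.h

# Bootstrap copy, linked into pgen; must not depend on the
# generated header, or the build would be circular.
Parser/parsetok_pgen.o: Parser/parsetok.c
	$(CC) -c $(PY_CFLAGS) -DPGEN -o $@ Parser/parsetok.c

# Final copy, linked into python; can now safely depend on
# the header that pgen generates.
Parser/parsetok.o: Parser/parsetok.c $(GRAMMAR_H)
	$(CC) -c $(PY_CFLAGS) -o $@ Parser/parsetok.c
```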

Dependencies between files with my patch applied:

Parser/parsetok_pgen.o -> Parser/pgen -> Include/graminit.h -> Parser/parsetok.o -> python
Issue issue4347