Message149491
Sure, but what does that have to do with anything? tokenize isn't a general purpose tokenizer, it's specifically for tokenizing Python source code.
The *problem* is that it doesn't currently fully tokenize everything (operators are all lumped together under the generic OP token type), but doesn't explicitly say that in the module documentation.
Hence my proposed two-fold fix: document the current behaviour explicitly and also add a separate "exact_type" attribute for easy access to the detailed tokenization without doing your own string comparisons.
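(For illustration: a minimal sketch of the behaviour being discussed, assuming a Python version where the proposed `exact_type` attribute is available, as it eventually was from Python 3.3 onward. `type` still reports the generic OP for operators, while `exact_type` distinguishes them.)

```python
import io
import tokenize

# Tokenize a small expression. generate_tokens() takes a readline
# callable returning source lines as strings.
source = "a + b"
tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))

for tok in tokens:
    if tok.type == tokenize.OP:
        # The generic type is OP, so string comparisons would otherwise
        # be needed to tell "+" from "-", "*", etc.
        print(tok.string)                          # +
        print(tok.type == tokenize.OP)             # True
        print(tok.exact_type == tokenize.PLUS)     # True
```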
Date | User | Action | Args
2011-12-15 03:16:53 | ncoghlan | set | recipients: + ncoghlan, akuchling, terry.reedy, gpolo, docs@python
2011-12-15 03:16:53 | ncoghlan | set | messageid: <1323919013.83.0.854406512576.issue2134@psf.upfronthosting.co.za>
2011-12-15 03:16:53 | ncoghlan | link | issue2134 messages
2011-12-15 03:16:52 | ncoghlan | create |