This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

Author ron_adam
Recipients christian.heimes, gvanrossum, ron_adam
Date 2007-11-16.00:36:11
SpamBayes Score 0.043700725
Marked as misclassified No
Message-id <1195173371.36.0.215266785157.issue1720390@psf.upfronthosting.co.za>
In-reply-to
Content
It looks like the disabling of \u and \U in raw strings is done.  Does
tokenize.py need to be fixed to match?
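For context, a minimal sketch of the behavior being discussed, assuming current Python 3 semantics: raw string literals no longer interpret \u and \U escape sequences, so the backslash survives verbatim.

```python
# Raw strings leave \u sequences untouched; ordinary strings decode them.
raw = r"\u0041"     # six characters: backslash, u, 0, 0, 4, 1
cooked = "\u0041"   # one character: "A"

assert len(raw) == 6
assert raw == "\\u0041"
assert cooked == "A"
```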

While working on this I was able to clean up the string parsing parts of
tokenize.c, and have a separate patch with just that.

I also have an updated patch with both the cleaned-up tokenize.c and the
no-escapes-in-raw-strings change, in case it is desired after all.
Files
File name Uploaded
tokenize_cleanup_patch.diff ron_adam, 2007-11-16.00:36:11
History
Date User Action Args
2007-11-16 00:36:11  ron_adam  set   spambayes_score: 0.0437007 -> 0.043700725;
                                     recipients: + ron_adam, gvanrossum, christian.heimes
2007-11-16 00:36:11  ron_adam  set   spambayes_score: 0.0437007 -> 0.0437007;
                                     messageid: <1195173371.36.0.215266785157.issue1720390@psf.upfronthosting.co.za>
2007-11-16 00:36:11  ron_adam  link  issue1720390 messages
2007-11-16 00:36:11  ron_adam  create