Message149608
> Can this be fixed?
More or less.
The following patch does the trick, but it isn't particularly elegant:
"""
--- a/Parser/tokenizer.c 2011-06-01 02:39:38.000000000 +0000
+++ b/Parser/tokenizer.c 2011-12-16 08:48:45.000000000 +0000
@@ -1574,6 +1576,10 @@
}
}
tok_backup(tok, c);
+ if (is_potential_identifier_start(c)) {
+ tok->done = E_TOKEN;
+ return ERRORTOKEN;
+ }
*p_start = tok->start;
*p_end = tok->cur;
return NUMBER;
"""
"""
> python -c "1and 0"
File "<string>", line 1
1and 0
^
SyntaxError: invalid token
"""
Note that there are other, although less bothersome, limitations:
"""
> python -c "1 and@ 2"
File "<string>", line 1
1 and@ 2
^
SyntaxError: invalid syntax
"""
This should be caught by the lexer, not the parser (i.e. it should raise an "invalid token" error).
That's a limitation of the ad-hoc scanner.
History:
Date | User | Action | Args
2011-12-16 10:21:39 | neologix | set | recipients: + neologix, benjamin.peterson, Jean-Michel.Fauth
2011-12-16 10:21:39 | neologix | set | messageid: <1324030899.48.0.846463737238.issue13610@psf.upfronthosting.co.za>
2011-12-16 10:21:38 | neologix | link | issue13610 messages
2011-12-16 10:21:38 | neologix | create |