
Author neologix
Recipients Jean-Michel.Fauth, benjamin.peterson, neologix
Date 2011-12-16.10:21:38
Message-id <1324030899.48.0.846463737238.issue13610@psf.upfronthosting.co.za>
Content
> Can this be fixed?

More or less.
The following patch does the trick, but it is not really elegant:
"""
--- a/Parser/tokenizer.c        2011-06-01 02:39:38.000000000 +0000
+++ b/Parser/tokenizer.c        2011-12-16 08:48:45.000000000 +0000
@@ -1574,6 +1576,10 @@
             }
         }
         tok_backup(tok, c);
+        if (is_potential_identifier_start(c)) {
+            tok->done = E_TOKEN;
+            return ERRORTOKEN;
+        }
         *p_start = tok->start;
         *p_end = tok->cur;
         return NUMBER;
"""

"""
> python -c "1and 0"
  File "<string>", line 1
    1and 0
    ^
SyntaxError: invalid token
"""

Note that there are other, although less bothersome, limitations:
"""
> python -c "1 and@ 2"
  File "<string>", line 1
    1 and@ 2
       ^
SyntaxError: invalid syntax
"""

This should be caught by the lexer, not the parser (i.e. it should raise an "invalid token" error).
That's a limitation of the ad-hoc scanner.
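You can see the split by running the pure-Python tokenize module (used here as a stand-in for the C tokenizer): "@" is a perfectly legal token on its own, so tokenization of "1 and@ 2" succeeds, and the error only appears once the parser looks at the token sequence:

```python
import io
import tokenize

# The tokenizer happily emits NUMBER, NAME, OP('@'), NUMBER...
ops = [t.string
       for t in tokenize.generate_tokens(io.StringIO("1 and@ 2").readline)
       if t.type == tokenize.OP]
print(ops)  # ['@']

# ...and only the parser rejects the sequence:
try:
    compile("1 and@ 2", "<string>", "eval")
except SyntaxError:
    print("rejected by the parser, not the tokenizer")
```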