
Author RJ722
Recipients Esa.Peuha, Mark.Shannon, RJ722, Rosuav, cheryl.sabella, lys.nikolaou, mbussonn, ncoghlan, pablogsal, r.david.murray, terry.reedy
Date 2020-06-29.17:34:51
Message-id <1593452091.58.0.611767925345.issue19335@roundup.psfhosted.org>
In-reply-to
Content
> That may actually be another alternative: instead of doing the "try
> appending newlines and see if it works or generates different errors",
> we may be able to switch to the tokenizer if the initial compilation
> fails and check for hanging INDENT tokens (i.e. INDENTS without a
> corresponding DEDENT). That would get us much closer to what the real
> eval loop is doing.

From what I understand, "checking for two or more hanging INDENTs" and "hardcoding a check for nonlocal SyntaxErrors in codeop._maybe_compile" are two different solutions, right?  If so, do we have a sense of which one is cleaner, and therefore preferable?

I personally like the idea of checking INDENTs, mainly because of its reduced specificity, but I'm in no position to comment on this (I already kinda did ':D), and you folks know better! For all we know, we should be optimizing for specificity instead.
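
Just to check my own understanding, here is a rough sketch of what the tokenizer-based check might look like.  The name _has_hanging_indent is just a placeholder of mine, and I'm not sure how closely the pure-Python tokenize module matches the C tokenizer's behaviour on incomplete input, so please treat this as an illustration rather than a proposed patch:

    import io
    import tokenize

    def _has_hanging_indent(source):
        """Return True if `source` opens more blocks (INDENT) than it closes (DEDENT)."""
        depth = 0
        try:
            for tok in tokenize.generate_tokens(io.StringIO(source).readline):
                if tok.type == tokenize.INDENT:
                    depth += 1
                elif tok.type == tokenize.DEDENT:
                    depth -= 1
        except (tokenize.TokenError, IndentationError):
            # Tokenizing often fails on incomplete input (e.g. an open
            # bracket or an unterminated block); treat that as "still open".
            return True
        return depth > 0

As Nick described, codeop._maybe_compile would presumably only fall back to something like this after the initial compilation fails.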

Also, reading Nick's comments and the codeop code gives me the feeling that a fix for this wouldn't require drastic changes.  I'm slowly starting my journey with CPython, and I'd like to contribute a patch if that is the case. Thanks!