Author tzickel
Recipients benjamin.peterson, brett.cannon, eric.snow, gregory.p.smith, meador.inge, ncoghlan, tzickel
Date 2015-11-02.19:10:55
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <>
1. You are correct: the issue I am talking about is in parsing source files (although, because Python caches the result as a .pyc file, the situation is worse).
2. The example you give is EINTR handling (which mostly means retrying I/O operations that were interrupted by signals); the real I/O error checking in get_line is, I believe, in the ferror check that follows. It might be nice to have EINTR checking (and retrying) when reading the source file as well, but that is a secondary issue.
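To illustrate the EINTR point above: a read that is interrupted by a signal fails with errno == EINTR and can simply be retried, whereas a genuine I/O error should be reported. This is a standalone sketch; fgets_retry_eintr is a hypothetical helper name, not a CPython function.

```c
#include <errno.h>
#include <stdio.h>

/* Hypothetical helper: fgets() that retries when interrupted by a signal.
   fgets() returns NULL on both EOF and error, so we consult ferror() and
   errno to decide whether a retry is appropriate. */
static char *
fgets_retry_eintr(char *buf, int size, FILE *fp)
{
    for (;;) {
        errno = 0;
        char *result = fgets(buf, size, fp);
        if (result != NULL || !ferror(fp) || errno != EINTR)
            return result;          /* success, EOF, or a real I/O error */
        clearerr(fp);               /* clear the sticky error flag, retry */
    }
}
```

On an uninterrupted stream this behaves exactly like fgets(); the retry loop only engages for the EINTR case.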
3. As for your recommendation to change Py_UniversalNewlineFgets: its documentation says "Note that we need no error handling: fgets() treats error and eof identically.", it looks like a low-level function with no Python machinery such as exception handling, and with its current signature it cannot return an error (it simply returns the buffer, or NULL if nothing was read).
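This is the crux of the signature problem: since fgets() returns NULL for both EOF and error, a caller that wants to distinguish them must check ferror() itself. A minimal standalone illustration (read_line_checked and read_status are hypothetical names, not CPython code):

```c
#include <stdio.h>

/* fgets() collapses EOF and I/O error into a single NULL return; the
   caller recovers the distinction by asking ferror() afterwards. */
typedef enum { READ_OK, READ_EOF, READ_ERROR } read_status;

static read_status
read_line_checked(char *buf, int size, FILE *fp)
{
    if (fgets(buf, size, fp) != NULL)
        return READ_OK;
    if (ferror(fp))
        return READ_ERROR;   /* real I/O failure, not end of file */
    return READ_EOF;
}
```

A function with Py_UniversalNewlineFgets's current buffer-or-NULL signature cannot convey this three-way result without an API change.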
4. As for why I put the check in that position: there are several possible I/O paths besides Py_UniversalNewlineFgets, such as codec decoding in fp_readl (via decoding_fgets), that can also fail with an I/O error. Looking at the code again (though I have not tested this), the check could perhaps be moved to the end of decoding_fgets in tokenizer.c, i.e. if ferror is set on tok->fp at the end of decoding_fgets, return error_ret(tok) there; but then careful review is needed to be sure that no other code path can hit an I/O error. I am not an expert on the tokenizer's layout (I read it mostly to track down this bug), so it can be moved there if that is better.
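The alternative placement suggested above could be sketched roughly as follows. This is a standalone approximation: the decode step is a stand-in (plain fgets), and decoding_fgets, tok->fp, and error_ret are CPython names from the discussion, not reproduced here.

```c
#include <stdio.h>

/* Sketch: perform the (possibly multi-path) read/decode step, then check
   ferror() on the underlying stream once at the end, so any I/O failure
   along the way is reported instead of being mistaken for EOF. */
static char *
decode_line_then_check(char *buf, int size, FILE *fp)
{
    char *result = fgets(buf, size, fp);   /* stand-in for the decode step */
    if (ferror(fp)) {
        clearerr(fp);
        return NULL;   /* caller treats this as an error, as error_ret would */
    }
    return result;
}
```

The appeal of this placement is that a single ferror check covers every path that read from the file, instead of sprinkling checks into each one.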
Date User Action Args
2015-11-02 19:10:55tzickelsetrecipients: + tzickel, brett.cannon, gregory.p.smith, ncoghlan, benjamin.peterson, meador.inge, eric.snow
2015-11-02 19:10:55tzickelsetmessageid: <>
2015-11-02 19:10:55tzickellinkissue25083 messages
2015-11-02 19:10:55tzickelcreate