
Author p-ganssle
Recipients belopolsky, eric.smith, matrixise, miss-islington, mjsaah, p-ganssle, pablogsal, terry.reedy, thatiparthy, vstinner, xdegaye, xtreak
Date 2019-03-18.13:34:39
> No please, don't do that :-(

Interesting. I don't feel terribly strongly about it, but I would have thought you'd be more in favor of that solution; maybe we have different definitions of "expected failure"?

Usually in my projects, I use xfail if I have *tests* for a bug but no fix for it yet. The xfail-ing test serves two purposes:

1. It notifies me if the bug is incidentally fixed (so that I can remove the xfail, close the bug report, and keep the test as a regression test).
2. It allows me to encode the acceptance criteria for fixing the bug directly into the test suite.
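For concreteness, here is a minimal sketch of that workflow using the standard library's unittest (CPython's own suite uses `unittest.expectedFailure` rather than pytest's `xfail`; the function and the bug are hypothetical):

```python
import unittest

def buggy_add(a, b):
    # Hypothetical function with a known, not-yet-fixed bug.
    return a - b  # bug: should be a + b

class TestBuggyAdd(unittest.TestCase):
    @unittest.expectedFailure
    def test_add(self):
        # The acceptance criterion is encoded directly in the suite:
        # while the bug exists this is reported as an expected failure;
        # once it is fixed, unittest reports an "unexpected success",
        # signalling that the decorator can be dropped and the test
        # becomes a regression test.
        self.assertEqual(buggy_add(2, 3), 5)
```

Running this reports one expected failure and an overall passing suite; fixing `buggy_add` flips it to an unexpected success.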

I do personally like the idea of separate tests for "is this consistent across platforms" and "does this raise an error", but it is true that once the consistency test can pass, it *also* serves as a test that no errors are raised.
Linked issue: issue35066