
Author ronaldoussoren
Recipients ezio.melotti, louielu, michael.foord, rbcollins, ronaldoussoren
Date 2017-07-23.12:50:56
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1500814256.71.0.852290658332.issue30997@psf.upfronthosting.co.za>
In-reply-to
Content
That's correct, I have a test like:


def test_something(self):
    for info in CASES:
        with self.subTest(info):
            self.assert_something(info)

For some values of 'info' the test is known to fail, and I want to mark those as expectedFailure somehow; there doesn't appear to be a way to do so right now.

I'm currently doing:
    with self.subTest(info):
        try:
            ... # actual test
        except Exception:
            if info in KNOWN_FAILURES:
                self.skipTest("known failure")
            raise

That suppresses the test failures, but uncleanly, and without any warning when the test case starts working again.
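A variant of the workaround that does flag a known failure when it starts passing again could wrap the failing cases in assertRaises; the names CASES, KNOWN_FAILURES and assert_something are from my example above, and the concrete values here are just placeholders:

```python
import unittest

# Placeholder data: case 2 is the "known failure".
CASES = [1, 2, 3]
KNOWN_FAILURES = {2}

class SomethingTests(unittest.TestCase):
    def assert_something(self, info):
        # Stand-in assertion that fails for case 2.
        self.assertNotEqual(info, 2)

    def test_something(self):
        for info in CASES:
            with self.subTest(info):
                if info in KNOWN_FAILURES:
                    # Fails loudly once the case unexpectedly passes,
                    # because assertRaises then raises a failure itself.
                    with self.assertRaises(AssertionError):
                        self.assert_something(info)
                else:
                    self.assert_something(info)
```

This still conflates "known failure" with a positive assertion, though, and doesn't show up as an expected failure in the test report.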

I could generate test cases manually the old-fashioned way, without using subTest, but that results in more complicated test code and requires rewriting a number of tests.
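For reference, the old-fashioned approach would look roughly like this: generate one test method per case and apply the existing unittest.expectedFailure decorator to the known failures (CASES, KNOWN_FAILURES and assert_something as in my example above, with placeholder values):

```python
import unittest

# Placeholder data: case 2 is the "known failure".
CASES = [1, 2, 3]
KNOWN_FAILURES = {2}

class SomethingTests(unittest.TestCase):
    def assert_something(self, info):
        # Stand-in assertion that fails for case 2.
        self.assertNotEqual(info, 2)

def _make_test(info):
    def test(self):
        self.assert_something(info)
    if info in KNOWN_FAILURES:
        # Known failures get the regular decorator, so they are
        # reported as expected failures (and as unexpected successes
        # once they start passing).
        test = unittest.expectedFailure(test)
    return test

# Attach one generated test method per case.
for info in CASES:
    setattr(SomethingTests, 'test_case_%d' % info, _make_test(info))
```

This gets the right reporting semantics, but at the cost of the generation boilerplate that subTest was meant to avoid.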

One possible design for marking subtests as known failures is to have the subTest context manager return a value with a method for marking the subtest:

    with self.subTest(...) as tc:
        if ...:
            tc.expectedFailure(...)

        ... # actual test

I don't know how invasive such a change would be.