
Author chris.jerdonek
Recipients Julian, Yaroslav.Halchenko, abingham, bfroehle, borja.ruiz, brian.curtin, chris.jerdonek, eric.araujo, eric.snow, exarkun, ezio.melotti, fperez, hpk, kynan, michael.foord, nchauvat, ncoghlan, pitrou, r.david.murray, santoso.wijaya, serhiy.storchaka, spiv
Date 2013-01-19.00:33:30
Message-id <1358555610.83.0.527134617854.issue16997@psf.upfronthosting.co.za>
Content
Nice/elegant idea.  A couple of comments:

(1) What will be the semantics of TestCase/subtest failures?  Currently, it looks like each subtest failure registers as an additional failure, meaning that the number of reported failures can exceed the number of test cases.  For example, with a single test case containing two subtests:

$ ./python.exe test_subtest.py 
FF
======================================================================
FAIL: test (__main__.MyTests) (i=0)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_subtest.py", line 9, in test
    self.assertEqual(0, 1)
AssertionError: 0 != 1

======================================================================
FAIL: test (__main__.MyTests) (i=1)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_subtest.py", line 9, in test
    self.assertEqual(0, 1)
AssertionError: 0 != 1

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=2)
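
For reference, the test module for this run looks roughly like the following (a minimal sketch matching the traceback above; exact line numbers may differ):

    import unittest

    class MyTests(unittest.TestCase):
        def test(self):
            for i in range(2):
                with self.subTest(i=i):
                    # Always fails, so each subtest (i=0 and i=1) is reported separately.
                    self.assertEqual(0, 1)

    if __name__ == '__main__':
        unittest.main()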

The way I understand it, a subtest failure should register as a failure of the TestCase as a whole, unless the subtests are meant to be enumerated and considered tests in their own right (in which case the total test count should reflect this).

(2) Related to (1), it doesn't seem that decorators like expectedFailure are handled correctly.  For example:

    @unittest.expectedFailure
    def test(self):
        for i in range(2):
            with self.subTest(i=i):
                self.assertEqual(i, 0)

This results in:

    Ran 1 test in 0.001s

    FAILED (failures=1, unexpected successes=1)

In other words, it seems like the decorator is being applied to each subtest rather than to the test case as a whole (though actually, I think the first of those counts should read "expected failures=1").  It seems to me that a single failing subtest should qualify as an expected failure; or is the intent that expectedFailure means every subtest must fail?
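
For comparison, with a plain test method and no subtests (a minimal sketch, not taken from the run above), the decorator behaves as I would expect, and the run ends with "OK (expected failures=1)":

    import unittest

    class MyTests(unittest.TestCase):
        @unittest.expectedFailure
        def test(self):
            # The assertion fails, so unittest records one expected failure
            # and the summary line reads: OK (expected failures=1)
            self.assertEqual(0, 1)

    if __name__ == '__main__':
        unittest.main()

Whatever semantics are chosen for subtests, I'd expect the subtest version above to be reported as either an expected failure or an unexpected success, not both at once.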