Issue 35327
Created on 2018-11-27 15:24 by p-ganssle, last changed 2022-04-11 14:59 by admin.
Messages (6)
msg330527 - Author: Paul Ganssle (p-ganssle) | Date: 2018-11-27 15:24
It seems that if you call skipTest *anywhere* in a test function, even in a subTest, the *entire* function gets marked as "skipped", even if only one sub-test was skipped. Example:

```python
import unittest

class SomeTest(unittest.TestCase):
    def test_something(self):
        for i in range(1, 3):
            with self.subTest(i):
                if i > 1:
                    self.skipTest('Not supported')
                self.assertEqual(i, 1)
```

If you run `python3.7 -m unittest -v` on this, you get:

```
$ python -m unittest -v
test_something (test_mod.SomeTest) ... skipped 'Not supported'

----------------------------------------------------------------------
Ran 1 test in 0.000s

OK (skipped=1)
```

despite the fact that the test *was* run in the `i == 1` case. Similarly, pytest marks this as a single skipped test:

```
========= test session starts =======
platform linux -- Python 3.7.1, pytest-4.0.1, py-1.7.0, pluggy-0.8.0
rootdir: /tmp/test_mod, inifile:
collected 1 item

test_mod.py s                        [100%]

===== 1 skipped in 0.00 seconds =====
```

The solution to this is not obvious, unfortunately. One way to solve it would be to treat each subtest as a separate test. Another would be to detect which subtests were skipped and expose only the *skipped* subtests as separate tests. You could also add a "partially skipped" marker that says, in effect, "some parts of this test were skipped."

I suspect the right answer is some combination of these three - possibly adding an extra-verbose mode that splits out all subtests, with skipped subtests displayed separately by default.

Somewhat related to issue #30997.
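[Editor's note: as a hedged illustration of the first option above, not part of the original report, plain unittest can emulate pytest-style parametrization by generating one test method per parameter at class-definition time, so the runner naturally reports each case, and each skip, on its own line. The helper `_make_test` and the `test_something_<i>` naming scheme are invented for this sketch.]

```python
# Sketch only: emulate "each subtest is a separate test" by generating
# one test method per parameter instead of looping inside a subTest.
import unittest


class SomeTest(unittest.TestCase):
    pass


def _make_test(i):
    # Hypothetical helper: closes over the parameter value i.
    def test(self):
        if i > 1:
            self.skipTest('Not supported')
        self.assertEqual(i, 1)
    return test


for i in range(1, 3):
    # One method per case, so `python -m unittest -v` prints one
    # result line (ok / skipped) per value of i.
    setattr(SomeTest, f'test_something_{i}', _make_test(i))

if __name__ == '__main__':
    unittest.main(verbosity=2)
```

Run with `python -m unittest -v`, this reports `test_something_1 ... ok` and `test_something_2 ... skipped 'Not supported'`, the per-case behavior that the pytest output in the next message exhibits natively.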
msg330528 - Author: Paul Ganssle (p-ganssle) | Date: 2018-11-27 15:29
As "prior art" the way that pytest does this is to have parametrized tests show up as separate tests: import pytest @pytest.mark.parametrize("i", range(1, 3)) def test_something(i): if i > 1: pytest.skip('Not supported') assert i == 1 @pytest.mark.parametrize("i", range(1, 3)) def test_something_else(i): assert 3 > i >= 1 Running `pytest -v` for this gives: ======================================= test session starts ======================================== platform linux -- Python 3.7.1, pytest-4.0.1, py-1.7.0, pluggy-0.8.0 cachedir: .pytest_cache rootdir: /tmp/test_mod, inifile: collected 4 items test_mod.py::test_something[1] PASSED [ 25%] test_mod.py::test_something[2] SKIPPED [ 50%] test_mod.py::test_something_else[1] PASSED [ 75%] test_mod.py::test_something_else[2] PASSED [100%] =============================== 3 passed, 1 skipped in 0.01 seconds ================================ |
msg330529 - Author: Rémi Lapeyre (remi.lapeyre) | Date: 2018-11-27 15:36
Did you notice that `skipped 'Not supported'` is displayed once per skipped subtest? Changing your `for i in range(1, 3):` to `for i in range(1, 5):` shows:

```
python3 -m unittest -v
test_something (test2.SomeTest) ... skipped 'Not supported'
skipped 'Not supported'
skipped 'Not supported'
```

Do you think it should show something like this instead?

```
python3 -m unittest -v
test_something (test2.SomeTest) ... SubTest skipped 'Not supported'
SubTest skipped 'Not supported'
SubTest skipped 'Not supported'
```
msg330530 - Author: Paul Ganssle (p-ganssle) | Date: 2018-11-27 15:41
@Rémi Interesting. Your suggested output does look clearer than the existing one, but it still doesn't indicate that anything *passed*. I think I like the way pytest does it the best, but if we can't expose the subtests as separate tests, I'd probably want it to be more like this:

```
test_something (test2.SomeTest) ... ok (3 subtests skipped)
    test_something [2] skipped 'Not supported'
    test_something [3] skipped 'Not supported'
    test_something [4] skipped 'Not supported'
```
msg330531 - Author: Rémi Lapeyre (remi.lapeyre) | Date: 2018-11-27 15:47
I think this is nice output. Taking a quick look at the unittest source, all the information needed to display this is saved in the TestResult object; showing skipped tests is done here: https://github.com/python/cpython/blob/master/Lib/unittest/runner.py#L85 It seems to me that the change would not be very hard to make.
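[Editor's note: a minimal sketch of what such a change could look like from outside the standard library, assuming one is willing to rely on the private `unittest.case._SubTest` type, which is what `addSkip` receives when `skipTest()` is called inside a `subTest` in CPython of this thread's vintage. The class name `SubTestAwareResult` is invented for this example; this is not the patch discussed in the thread.]

```python
# Hedged sketch, not the actual patch: a result class that labels
# subtest skips distinctly (per msg330529) instead of folding them
# into the whole-test result. Relies on the private
# unittest.case._SubTest type; illustration only.
import unittest
from unittest.case import _SubTest  # private API


class SubTestAwareResult(unittest.TextTestResult):
    def addSkip(self, test, reason):
        if isinstance(test, _SubTest):
            # Report the subtest skip on its own line. Deliberately
            # does not call super().addSkip(), so the subtest skip is
            # not recorded as a skip of the whole test.
            if self.showAll:
                self.stream.writeln(f"SubTest skipped {reason!r}")
            elif self.dots:
                self.stream.write('s')
        else:
            super().addSkip(test, reason)


if __name__ == '__main__':
    runner = unittest.TextTestRunner(resultclass=SubTestAwareResult,
                                     verbosity=2)
    unittest.main(testRunner=runner)
```

Against the example from msg330527, this prints one `SubTest skipped 'Not supported'` line per skipped subtest; a real fix would also have to decide how the enclosing test is counted in the summary, which is the crux of the discussion above.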
msg330770 - Author: Martin Panter (martin.panter) | Date: 2018-11-30 11:12
Sounds very similar to Issue 25894, which discusses how to deal with tests where different subtests errored, failed, skipped, and passed.
History

Date | User | Action | Args
---|---|---|---
2022-04-11 14:59:08 | admin | set | github: 79508
2018-11-30 11:12:16 | martin.panter | set | nosy: + martin.panter; messages: + msg330770
2018-11-27 15:47:18 | remi.lapeyre | set | messages: + msg330531
2018-11-27 15:41:40 | p-ganssle | set | messages: + msg330530
2018-11-27 15:36:00 | remi.lapeyre | set | nosy: + remi.lapeyre; messages: + msg330529
2018-11-27 15:29:45 | p-ganssle | set | messages: + msg330528
2018-11-27 15:24:27 | p-ganssle | create |