"unittest.TestCase.debug" should honour "skip" (and other test controls) #80855

Closed
d-maurer mannequin opened this issue Apr 20, 2019 · 15 comments
Labels
3.9 (only security fixes), 3.10 (only security fixes), 3.11 (only security fixes), stdlib (Python modules in the Lib dir), type-bug (An unexpected behavior, bug, or error)

Comments

@d-maurer (mannequin) commented Apr 20, 2019

BPO 36674
Nosy @terryjreedy, @d-maurer, @rbtcollins, @ezio-melotti, @voidspace, @serhiy-storchaka, @lisroach, @miss-islington
PRs
  • bpo-36674: Stops skipped tests from running in debug mode. #13180
  • bpo-36674: Honour the skipping decorators in TestCase.debug() #28446
  • [3.10] bpo-36674: Honour the skipping decorators in TestCase.debug() (GH-28446) #28447
  • [3.9] bpo-36674: Honour the skipping decorators in TestCase.debug() (GH-28446) #28448
Dependencies
  • bpo-41620: Python Unittest does not return results object when the test is skipped
Files
  • utest.py

Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = None
    closed_at = <Date 2021-12-16.11:58:24.474>
    created_at = <Date 2019-04-20.09:20:54.728>
    labels = ['type-bug', 'library', '3.9', '3.10', '3.11']
    title = '"unittest.TestCase.debug" should honour "skip" (and other test controls)'
    updated_at = <Date 2021-12-16.11:58:24.474>
    user = 'https://github.com/d-maurer'

    bugs.python.org fields:

    activity = <Date 2021-12-16.11:58:24.474>
    actor = 'serhiy.storchaka'
    assignee = 'none'
    closed = True
    closed_date = <Date 2021-12-16.11:58:24.474>
    closer = 'serhiy.storchaka'
    components = ['Library (Lib)']
    creation = <Date 2019-04-20.09:20:54.728>
    creator = 'dmaurer'
    dependencies = ['41620']
    files = ['48287']
    hgrepos = []
    issue_num = 36674
    keywords = ['patch']
    message_count = 15.0
    messages = ['340569', '340954', '340976', '341122', '341135', '348576', '400560', '401105', '402127', '402128', '402129', '402130', '402131', '402133', '402134']
    nosy_count = 8.0
    nosy_names = ['terry.reedy', 'dmaurer', 'rbcollins', 'ezio.melotti', 'michael.foord', 'serhiy.storchaka', 'lisroach', 'miss-islington']
    pr_nums = ['13180', '28446', '28447', '28448']
    priority = 'normal'
    resolution = 'fixed'
    stage = 'resolved'
    status = 'closed'
    superseder = None
    type = 'behavior'
    url = 'https://bugs.python.org/issue36674'
    versions = ['Python 3.9', 'Python 3.10', 'Python 3.11']

@d-maurer (mannequin, author) commented Apr 20, 2019

    Currently, "TestCase.run" supports several features to control testing - among others, a test can be skipped via the attribute "__unittest_skip__". "TestCase.debug" ignores all those controls and calls the test method unconditionally.

    I am using "zope.testrunner" to run test suites. Its "-D" option switches from "TestCase.run" to "TestCase.debug" in order to allow the analysis of the state of a failing test in the Python debugger. "-D" is typically used if a test in a larger suite failed and a detailed analysis is required to determine the failure's cause. It is important that this second run executes the same tests as the first run; it is not helpful when the second run fails in a test skipped in the first run. Therefore, "TestCase.debug" should honour all test controls supported by "TestCase.run".

    One could argue that the testsuite runner should implement this logic. However, this would force the runner to duplicate the test control logic using internal implementation details of "unittest". Conceptually, it is much nicer to have the test control encapsulated by "unittest".
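A minimal sketch (names and skip messages here are illustrative, not from the issue) of the behaviour the report relies on: TestCase.run() honours the skipping decorator and records the test as skipped without executing its body.

```python
import unittest

class Example(unittest.TestCase):
    @unittest.skip("dependency missing")
    def test_guarded(self):
        self.fail("only fails if actually executed")

# Passing an explicit TestResult sidesteps the run()-returns-None
# question tracked separately in bpo-41620.
result = unittest.TestResult()
Example("test_guarded").run(result)
print(result.skipped)    # the test is recorded as skipped
print(result.failures)   # empty: the body never ran
```

At the time of the report, `Example("test_guarded").debug()` would instead have executed the body unconditionally and raised AssertionError; that inconsistency is what this issue asks to fix.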

    @d-maurer d-maurer mannequin added type-bug An unexpected behavior, bug, or error stdlib Python modules in the Lib dir labels Apr 20, 2019
@terryjreedy (Member)

    3.6 only gets security fixes. This appears to be an enhancement request. SkipTest is an exception and according to the doc it should be propagated to the caller. __unittest_skip__ is an undocumented internal name, so you probably should not be using it.

    If this request is not rejected, we will need test code demonstrating the issue *that does not use zope*.

    @terryjreedy terryjreedy added 3.8 only security fixes type-feature A feature request or enhancement and removed type-bug An unexpected behavior, bug, or error labels Apr 27, 2019
@d-maurer (mannequin, author) commented Apr 27, 2019

    Terry J. Reedy wrote:

    SkipTest is an exception and according to the doc it should be propagated to the caller. __unittest_skip__ is an undocumented internal name, so you probably should not be using it.

    If this request is not rejected, we will need test code demonstrating the issue *that does not use zope*.

The "skip" I am speaking about is a decorator for tests or test classes telling a "runner" to skip tests (likely because conditions for their success are not met - usually, that required packages are not installed). This "skip" (decorator) uses __unittest_skip__ to tell the "unittest runner" (that is, unittest.TestCase.run) that the test should be skipped.

The "unittest runner" is extremely rudimentary and lacks many important features. That is why I use the much more feature-rich zope.testrunner. I could implement the __unittest_skip__ logic there - but as you wrote, this should remain encapsulated inside unittest (that is why I opened this issue).

    I attach a test script - which likely is not convincing as the feature is mainly important for a sophisticated test runner, not for explicit calls of TestCase.debug.

@lisroach (Contributor)

    +1, I think tests should run the same way in debug() and run(), the difference being limited only to how the result is handled.

A test marked as skipped should still be skipped while debugging; otherwise debugging executes tests that will never run under run(), polluting the results.

@terryjreedy (Member)

    1. NO SKIP utest.py with debug prints added and @skipIf initially commented out.
      ---
    from unittest import TestCase, skipIf
    
    #@skipIf(True, "Skip Testing")
    class Tests(TestCase):
      def test_skip(self):
        "this test will fail - if not skipped"
        print('asserting')
        self.assertEqual(0, 1)

    print(Tests("test_skip").run())
    print(Tests("test_skip").debug())
    ---

    test_skip is run twice, with the output difference being as documented.

    asserting
    <unittest.result.TestResult run=1 errors=0 failures=1>
    asserting
    Traceback (most recent call last):
    ...
    AssertionError: 0 != 1
2. SKIPTEST Adding "self.skipTest('why')" either in a new setUp() or at the top of test_skip(), and adding another print

    print(Tests("test_skip").run().skipped)

    results in the output I expect from the doc.

    <unittest.result.TestResult run=1 errors=0 failures=0>
    [(<__main__.Tests testMethod=test_skip>, 'why')]
    Traceback (most recent call last):
    ...
    unittest.case.SkipTest: why
3. SKIPIF CLASS Uncommenting @skipIf (the OP's case) instead results in

    None
    asserting
    Traceback ...
    AssertionError: 0 != 1

    Since .run does not run the test, I agree that debug() running the test and raising AssertionError is a bug. The test should not be run. In my original comments, I expected SkipTest, as in cases 2 above and 4 below. I have not yet checked the current tests.

    4. SKIPIF FUNC Moving the skipIf decorator to test_skip results in
    None
    Traceback (most recent call last):
    ..
    unittest.case.SkipTest: Skip Testing

    I don't know why run() returns None for skipIf cases instead of returning a TestResult with non-empty skipped, as it does for skipTest, or if the None is a separate bug.

@voidspace (Contributor)

    "I don't know why run() returns None for skipIf cases instead of returning a TestResult with non-empty skipped, as it does for skipTest, or if the None is a separate bug."

    That does sound like a bug.

@serhiy-storchaka (Member)

    See bpo-41620 for run() returning None.

@serhiy-storchaka (Member)

There are too many issues with TestCase.debug(). It does not call tearDown() and doCleanups() if the test is skipped, fails, or raises any other exception (including errors in cleanup). It does not support the skipping decorators. It does not support the expectedFailure decorator. It does not work at all in IsolatedAsyncioTestCase.

@serhiy-storchaka (Member)

PR 28446 makes TestCase.debug() honour the skipping decorators.

    The difference with PR 13180:

1. It does not run setUp() and tearDown() for tests decorated with the skipping decorator, matching TestCase.run().
    2. It has the same behavior for tests decorated with the skipping decorator and tests using skipTest() or raising a SkipTest -- raises a SkipTest.
    3. Tests actually test the skipping decorator.

    @serhiy-storchaka serhiy-storchaka added 3.9 only security fixes 3.10 only security fixes 3.11 only security fixes type-bug An unexpected behavior, bug, or error and removed 3.8 only security fixes type-feature A feature request or enhancement labels Sep 18, 2021
@serhiy-storchaka (Member)

    Since the behavior of debug() was different for the following cases:

    @skip('skipping')
    def test1(self):
        pass
    @otherdecorator
    @skip('skipping')
    def test2(self):
        pass
    def test3(self):
        self.skipTest('skipping')
    

I consider it a bug and will backport the fix.
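On interpreters that include the fix from GH-28446 (3.11, or a 3.9/3.10 release with the backport), the three cases behave uniformly: debug() raises SkipTest for each. A sketch, where otherdecorator is an illustrative stand-in for any well-behaved extra decorator:

```python
import functools
import unittest

def otherdecorator(func):
    # A hypothetical pass-through decorator; functools.wraps copies the
    # function's __dict__, so the skip markers set by @skip survive it.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

class Skips(unittest.TestCase):
    @unittest.skip("skipping")
    def test1(self):
        pass

    @otherdecorator
    @unittest.skip("skipping")
    def test2(self):
        pass

    def test3(self):
        self.skipTest("skipping")

for name in ("test1", "test2", "test3"):
    try:
        Skips(name).debug()
    except unittest.SkipTest as exc:
        print(name, "->", exc)   # each case now raises SkipTest
```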

@serhiy-storchaka (Member)

    New changeset dea59cf by Serhiy Storchaka in branch 'main':
    bpo-36674: Honour the skipping decorators in TestCase.debug() (GH-28446)

@miss-islington (Contributor)

    New changeset 753f7af by Miss Islington (bot) in branch '3.10':
    bpo-36674: Honour the skipping decorators in TestCase.debug() (GH-28446)

@miss-islington (Contributor)

    New changeset 7e465a6 by Miss Islington (bot) in branch '3.9':
    bpo-36674: Honour the skipping decorators in TestCase.debug() (GH-28446)

@serhiy-storchaka (Member)

    See bpo-45238 for debug() in IsolatedAsyncioTestCase.

@d-maurer (mannequin, author) commented Sep 18, 2021

    Thank you for working on this issue!
