
Additional Flag For Unit-Test Module: There Can Be Only One (Error) #46494

Closed
bcwhite mannequin opened this issue Mar 5, 2008 · 7 comments
Labels
stdlib (Python modules in the Lib dir), type-feature (A feature request or enhancement)

Comments


bcwhite mannequin commented Mar 5, 2008

BPO 2241
Nosy @amauryfa
Files
  • unittest-diff25.py: diff -u unittest.py
  • Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = None
    closed_at = <Date 2008-03-06.08:27:27.770>
    created_at = <Date 2008-03-05.16:24:30.883>
    labels = ['type-feature', 'library']
    title = 'Additional Flag For Unit-Test Module: There Can Be Only One (Error)'
    updated_at = <Date 2008-03-08.15:21:53.186>
    user = 'https://bugs.python.org/bcwhite'

    bugs.python.org fields:

    activity = <Date 2008-03-08.15:21:53.186>
    actor = 'bcwhite'
    assignee = 'purcell'
    closed = True
    closed_date = <Date 2008-03-06.08:27:27.770>
    closer = 'purcell'
    components = ['Library (Lib)']
    creation = <Date 2008-03-05.16:24:30.883>
    creator = 'bcwhite'
    dependencies = []
    files = ['9612']
    hgrepos = []
    issue_num = 2241
    keywords = []
    message_count = 7.0
    messages = ['63285', '63313', '63321', '63322', '63333', '63334', '63403']
    nosy_count = 3.0
    nosy_names = ['purcell', 'amaury.forgeotdarc', 'bcwhite']
    pr_nums = []
    priority = 'normal'
    resolution = 'rejected'
    stage = None
    status = 'closed'
    superseder = None
    type = 'enhancement'
    url = 'https://bugs.python.org/issue2241'
    versions = ['Python 2.5']


    bcwhite mannequin commented Mar 5, 2008

    The attached diff adds a "-o" ("--one") option to the "unittest" module
    that causes the run to abort on the first error encountered. I name my
    tests so that the lowest-level tests run first, so stopping at the
    first error prevents a cascade of dependent failures and speeds up
    debugging. During development, I typically run the tests I'm writing
    with "-ov". During a full test run, I omit both of those flags.

    bcwhite mannequin added the stdlib (Python modules in the Lib dir) and type-feature (A feature request or enhancement) labels Mar 5, 2008

    purcell mannequin commented Mar 6, 2008

    Hi Brian;

    The module is intended for test suites where the unit tests are written
    to be independent of each other, which is the "standard" way to do
    things. Note, for instance, that there is no convenient support for
    changing the order in which tests run.

    When tests are written like that, you can interrupt a bulk test run at
    any point, and you can run a single test to reproduce and then debug a
    failure.

    Given your test suite, I can see how this '--one' option is helpful to
    you, but I don't believe it should be made standard. (I've never seen
    it in any XP-inspired test framework or related IDE UI.) However, you
    can easily write a custom TestRunner that provides this "fast abort"
    behaviour that you want, and then hook it into unittest as follows:

       unittest.main(testRunner=MyCustomTestRunner())

    (BTW, regarding the implementation, it's not ideal to pass the 'onlyOne'
    parameter down and through to the run() method; much better would be to
    initialise a TestResult subclass with the 'onlyOne' option, so that it
    could then abort when 'addError' or 'addFailure' is called. You can use
    that trick in any custom test runner you might write.)
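
    A minimal sketch of that trick (the class names are hypothetical, and
    the code targets the modern API, where TextTestResult and resultclass
    are public as of Python 2.7):

       import unittest

       class StopOnFirstErrorResult(unittest.TextTestResult):
           # Ask the suite to stop as soon as any error or failure is
           # recorded; stop() sets shouldStop, which TestSuite.run()
           # checks before starting each remaining test.
           def addError(self, test, err):
               super().addError(test, err)
               self.stop()

           def addFailure(self, test, err):
               super().addFailure(test, err)
               self.stop()

       class StopOnFirstErrorRunner(unittest.TextTestRunner):
           resultclass = StopOnFirstErrorResult   # used by _makeResult()

       if __name__ == '__main__':
           unittest.main(testRunner=StopOnFirstErrorRunner())

    On Python 2.7 and later the same behaviour ships built in: pass
    failfast=True to unittest.main(), or use "python -m unittest -f" on
    the command line.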

    Best wishes,

    -Steve

    purcell mannequin closed this as completed Mar 6, 2008
    purcell mannequin self-assigned this Mar 6, 2008

    amauryfa commented Mar 6, 2008

    Actually, py.test and nose both have the -x option for this purpose.
    I use it very often during development, mostly during a refactoring
    phase: failures are easy to correct, and I don't want to wait for the
    whole suite to finish and display tons of tracebacks.

    Even while developing on core python, this option would have helped me a
    couple of times. Please, reopen this item!
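
    For reference, both tools spell the flag the same way on the command
    line (long forms: --exitfirst for py.test, --stop for nose); the
    tests/ directory here is a hypothetical example:

       py.test -x tests/
       nosetests -x tests/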


    purcell mannequin commented Mar 6, 2008

    I guess I don't completely agree with the rationale, because I've never
    wanted this feature; when running tests en masse after refactoring, I
    want an overview of what was broken. If the codebase is in good shape,
    the test failures will be few and close together, and then I can usually
    re-run one of the individual test cases to debug the error.

    However, this isn't a big issue for me, and if someone's willing to
    prepare a new patch with the different implementation I described
    previously, I'm happy to re-open this ticket and sign off on it. I'd
    suggest using the same argument names as nose and py.test in this case.


    bcwhite mannequin commented Mar 6, 2008

    Having tests run independently of each other is not the same as having
    tests be completely independent. I'd argue that the latter is
    impossible. You're never going to test the entire system in a single
    test case and thus the tests work together (i.e. not independently) to
    test everything.

    If one function under test calls another function that is also tested,
    then it makes sense to test the lower-level function first and report
    any problems there, since it is easier and faster to find the root of
    the trouble at that level than when the error surfaces as unexpected
    results in the higher-level function.

    To make things easier, I simply name my tests so that lower-level
    functions are tested first. Each individual test still runs
    independently, of course.
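
    Since unittest's default TestLoader sorts test methods by name, a
    numbering scheme like the following hypothetical example is enough to
    get that ordering:

       import unittest

       def tokenize(text):
           return text.split()

       def parse(text):
           return {"tokens": tokenize(text)}

       class ParserTest(unittest.TestCase):
           # Test methods run in alphabetical order, so the numeric
           # prefixes make the low-level test run first.
           def test_01_tokenize(self):
               self.assertEqual(tokenize("a b"), ["a", "b"])

           def test_02_parse(self):          # builds on tokenize()
               self.assertEqual(parse("a b")["tokens"], ["a", "b"])

       if __name__ == '__main__':
           unittest.main()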

    The point of the "--one" option is just to have it stop when the first
    test fails, allowing me to fix the lowest level error. If that same
    error causes a dozen other tests to also fail and I just pick one
    failure randomly to start debugging, it's going to take me longer,
    perhaps a lot longer, to track down the problem.

    As for the method of implementation, I'm sure there are better ways to
    do it. Though I can write fully functional programs in Python, I by no
    means consider myself an expert in the language. I did it this way
    because the only other solution I saw was a global variable and figured
    that would be a poor way to do it. As such, I'd appreciate help on
    exactly how it should "properly" be done. :-)

    I'll let somebody else actually re-open this issue if it's a desired
    item, since I'm not knowledgeable enough to implement the solution you
    propose.

    Thanks!


    purcell mannequin commented Mar 6, 2008

    Hi Brian - thanks for going into some details of your rationale!

    You might be surprised to hear that it's indeed possible to make all of
    your unit tests mutually independent; check out the area of 'mock
    objects'. It turns out to be possible, and indeed desirable once the
    "zen" of the technique clicks, to test every class in isolation without
    referring to other neighbouring classes. I was surprised by the
    enormous effectiveness of this somewhat hardcore technique when I was
    forced into it by working with one of the original Mock Object paper
    authors. Having already spent years coaching developers in XP
    techniques, I thought I was already a testing whiz.
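
    For readers new to the idea, a hand-rolled mock can be as small as
    the following hypothetical sketch (today one would usually reach for
    unittest.mock, added in Python 3.3):

       import unittest

       class MockStream(object):
           # Stand-in for a real stream: it records what was written
           # so the test can assert on the interaction alone.
           def __init__(self):
               self.written = []

           def write(self, data):
               self.written.append(data)

       def save_greeting(stream, name):
           # The unit under test talks only to the stream it is given.
           stream.write("hello, %s\n" % name)

       class SaveGreetingTest(unittest.TestCase):
           def test_writes_greeting(self):
               stream = MockStream()
               save_greeting(stream, "world")
               self.assertEqual(stream.written, ["hello, world\n"])

       if __name__ == '__main__':
           unittest.main()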

    In most real-world cases, though, a class under test will be tested
    using its interaction with other separately-tested classes in the
    system, and the associated unit tests therefore bear some relation to
    one another. It's usually not helpful to divide those classes into
    layers that can be tested in order from the lowest layer to the highest,
    because classes tend to form clumps rather than layers. When a big
    suite of tests is run, failures therefore form clumps too, and often the
    underlying programming error is easier to see by looking at the clump
    rather than just the first failure. I think this explains why most
    people get by without an option like '-o'.

    Of course, it often makes sense to have separate test suites for
    different areas of the system under test so that they can be run in
    isolation. Rather than relying on test naming, you might consider
    explicitly building TestSuites that run your test cases in the desired
    order.
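
    A sketch of that approach (the test classes here are hypothetical
    placeholders):

       import unittest

       class LowLevelTest(unittest.TestCase):
           def test_helper(self):
               self.assertTrue(True)        # placeholder low-level check

       class HighLevelTest(unittest.TestCase):
           def test_feature(self):
               self.assertTrue(True)        # placeholder high-level check

       def ordered_suite():
           # Build the suite explicitly so the low-level tests always
           # run before the tests that build on the same code.
           loader = unittest.defaultTestLoader
           suite = unittest.TestSuite()
           suite.addTest(loader.loadTestsFromTestCase(LowLevelTest))
           suite.addTest(loader.loadTestsFromTestCase(HighLevelTest))
           return suite

       if __name__ == '__main__':
           unittest.TextTestRunner().run(ordered_suite())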

    As for preparing an updated patch, I'll get to it if I get a few
    minutes.

    All the best from this Brit in Germany.


    bcwhite mannequin commented Mar 8, 2008

    I am somewhat new to mock objects. I'm coding up my first one now (in
    D) to simulate a "stream" for other objects I want to write.

    Even within a single module, I typically have many tests for the methods
    within that module. And since a module's methods make use of each
    other, there is again a case for the tests of the lower-level functions
    to be executed first.

    Anyway, this is something that works for me, but I understand that not
    everybody operates this way.

    All the best from this Canadian in Switzerland. :-)

    ezio-melotti transferred this issue from another repository Apr 10, 2022