This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: Support parametrized tests in unittest
Type: enhancement Stage: needs patch
Components: Tests Versions: Python 3.6
process
Status: open Resolution:
Dependencies: Superseder:
Assigned To: michael.foord Nosy List: Julian, Miklós.Fazekas, Yaroslav.Halchenko, abingham, bfroehle, borja.ruiz, chris.jerdonek, eric.araujo, eric.snow, exarkun, ezio.melotti, fperez, hpk, kynan, martin.panter, michael.foord, moormaster, nchauvat, ncoghlan, pconnell, pitrou, r.david.murray, santoso.wijaya, spiv, terry.reedy, zach.ware
Priority: normal Keywords:

Created on 2010-02-10 02:21 by fperez, last changed 2022-04-11 14:56 by admin.

Messages (60)
msg99149 - (view) Author: Fernando Perez (fperez) Date: 2010-02-10 02:21
IPython has unittest-based parametric testing (something nose has but
which produces effectively undebuggable tests, while this approach
gives perfectly debuggable ones).  The code lives here for 2.x and
3.x:

http://bazaar.launchpad.net/~ipython-dev/ipython/trunk/annotate/head%3A/IPython/testing/_paramtestpy2.py

http://bazaar.launchpad.net/~ipython-dev/ipython/trunk/annotate/head%3A/IPython/testing/_paramtestpy3.py

we import them into our public decorators module for actual use:

http://bazaar.launchpad.net/~ipython-dev/ipython/trunk/annotate/head%3A/IPython/testing/decorators.py

Simple tests showing them in action are here:

http://bazaar.launchpad.net/%7Eipython-dev/ipython/trunk/annotate/head%3A/IPython/testing/tests/test_decorators.py#L45

The code is all BSD and we'd be more than happy to see it used
upstream; the less we have to carry the better.

If there is interest in this code, I'm happy to sign a PSF contributor agreement; the code is mostly my own authorship. I received help with the 3.x version on the Testing in Python mailing list, so I would ask for permission on-list if there is interest in including it.
msg99215 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2010-02-11 15:17
I'm not sure what this brings. It is easy to write a loop iterating over test data.
What parametric testing could bring is precise progress and error display (displaying the parameters for each run), but this doesn't seem to be the case here, since you are not making the parameters available to unittest. Your examples just boil down to writing "yield is_smaller(x, y)" instead of "self.assertLess(x, y)"...
msg99216 - (view) Author: Michael Foord (michael.foord) * (Python committer) Date: 2010-02-11 15:18
With parameterized tests *all* the tests are run and *all* failures reported. With testing in a loop the tests stop at the first failure.
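A minimal sketch of the contrast being drawn here (illustrative only; the test and its values are made up): with a plain loop, the first failing value aborts the test method, so the remaining values are never exercised or reported.

import unittest

class LoopedTest(unittest.TestCase):
    def test_values(self):
        # A plain loop: the first failing value (3) ends the whole test,
        # so 4 and 5 are never checked, let alone reported.
        for value in (1, 2, 3, 4, 5):
            self.assertLess(value, 3)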
msg99217 - (view) Author: Michael Foord (michael.foord) * (Python committer) Date: 2010-02-11 15:22
By the way - I have no opinion on whether or not using yield is the right way to support parameterized tests. It may be better for the test method to take arguments and be decorated as a parameterized test, with the decorator providing the parameters. When I come to implement it I will look at how py.test and nose do it and solicit advice on the Testing in Python list. We had a useful discussion there previously that would be good to refer to.
msg99218 - (view) Author: Jean-Paul Calderone (exarkun) * (Python committer) Date: 2010-02-11 15:22
> With parameterized tests *all* the tests are run and *all* failures reported. With testing in a loop the tests stop at the first failure.

+1 to this justification.  Parameterized tests are a big win over a simple for loop in a test.

(However, I haven't looked at the IPython code at all, and Antoine's objection seemed to have something in particular to do with the IPython code?)
msg99222 - (view) Author: Brian Curtin (brian.curtin) * (Python committer) Date: 2010-02-11 15:33
> It may be better for the test method to take arguments, and 
> be decorated as a parameterized test, with the decorator 
> providing the parameters.

+1 on something like this. That's also how NUnit supports parameterized tests.
msg99223 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2010-02-11 15:34
> > With parameterized tests *all* the tests are run and *all* failures reported. With testing in a loop the tests stop at the first failure.
> 
> +1 to this justification.  Parameterized tests are a big win over a
> simple for loop in a test.

Ah, thank you. Looks better indeed.

> (However, I haven't looked at the IPython code at all, and Antoine's
> objection seemed to have something in particular to do with the
> IPython code?)

No, it has to do with the fact that you need to be able to distinguish the
different runs of your parameterized test (the same way you distinguish
between different test methods by their names, or line numbers, in the
test report).

If I have 500 runs in my parameterized (*) test, and the only failure
reported is that "2 is not greater than 3", I don't know where to start
looking because the traceback won't give me the information of *which*
parameters were currently in use.

(*) (this is horrible to type)
msg99224 - (view) Author: Michael Foord (michael.foord) * (Python committer) Date: 2010-02-11 15:38
Antoine: the failure message would include a repr of the parameters used in the particular test that failed. So you can tell which test failed and with what parameters.
msg99226 - (view) Author: Jean-Paul Calderone (exarkun) * (Python committer) Date: 2010-02-11 15:53
Something else I think it would be nice to consider is what the id() (and shortDescription(), heh) of the resulting tests will be.

It would be great if the id were sufficient to identify a particular test *and* data combination.

In trial, we're trying to use test ids to support distributed test running.  If the above property holds for parameterized tests, then we'll be able to automatically distribute them to different processes or hosts to be run.
msg99243 - (view) Author: Fernando Perez (fperez) Date: 2010-02-11 23:48
I should probably have clarified better our reasons for using this type of code.  The first is the one Michael pointed out, where such parametric tests all execute; it's very common in scientific computing to have algorithms that only fail for certain values, so it's important to identify these points of failure easily while still running the entire test suite.  

The second is that, on failure, the approach nose uses produces the nose stack, not the stack of the test. Nose consumes the test generators at test discovery time, and then simply calls the stored assertions at test execution time.  If a test fails, you see a nose traceback which is effectively useless for debugging, and with which --pdb interactive debugging doesn't help much (all you can do is print the values, as your own stack is gone).  This code, in contrast, evaluates the full test at execution time, so a failure can be inspected 'live'.  In practice this makes an enormous difference to a test suite being actively useful for ongoing development, where changes may send you into debugging often.

I hope this helps clarify the intent of the code better, I'd be happy to provide further details.
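A hedged sketch of the generator ("yield") style under discussion, in the form nose consumes it (the helper names are illustrative, not IPython's actual API): the test function yields a callable plus its arguments, and the runner stores and later invokes each pair as a separate test.

def check_smaller(x, y):
    # The actual assertion; when nose calls the stored pair later, a failure
    # here surfaces through nose's own call stack, not the generator's.
    assert x < y, "%r is not smaller than %r" % (x, y)

def test_pairs():
    # nose collects these (callable, args) pairs at discovery time and runs
    # each one as a separate test afterwards.
    for x, y in [(1, 2), (3, 4), (5, 4)]:
        yield check_smaller, x, y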
msg102667 - (view) Author: Yaroslav Halchenko (Yaroslav.Halchenko) Date: 2010-04-09 01:53
In PyMVPA we have our little decorator as an alternative to Fernando's generators, which is closer, I think, to what Michael was wishing for:
@sweepargs

http://github.com/yarikoptic/PyMVPA/blob/master/mvpa/testing/sweepargs.py

NB it has some minor PyMVPA specificity which could easily be wiped out, and since at most four eyes have looked at it and it bears "evolutionary" changes, it is far from being the cleanest/best piece of code, BUT:

* it is very easy to use: just decorate a test method/function and give it an argument to vary within the function call, e.g. something like

@sweepargs(arg=range(5))
def test_sweepargs_demo(arg):
    ok_(arg < 5)
    ok_(arg < 3)
    ok_(arg < 2)

For nose/unittest it would still look like a single test

* if failures occur, sweepargs groups the failures by their type/location and spits out a backtrace for one of the failures plus a summary (instead of detailed backtraces for each failure) specifying which arguments led to which error... here is the output for the example above:

$> nosetests -s test_sweepargs_demo.py
F
======================================================================
FAIL: mvpa.tests.test_sweepargs_demo.test_sweepargs_demo
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/pymodules/python2.5/nose/case.py", line 183, in runTest
    self.test(*self.arg)
  File "/usr/lib/pymodules/python2.5/nose/util.py", line 630, in newfunc
    return func(*arg, **kw)
  File "/home/yoh/proj/pymvpa/pymvpa/mvpa/tests/test_sweepargs_demo.py", line 11, in test_sweepargs_demo
    ok_(arg < 2)
  File "/usr/lib/pymodules/python2.5/nose/tools.py", line 25, in ok_
    assert expr, msg
AssertionError: 
 Different scenarios lead to failures of unittest test_sweepargs_demo (specific tracebacks are below):
  File "/home/yoh/proj/pymvpa/pymvpa/mvpa/tests/test_sweepargs_demo.py", line 10, in test_sweepargs_demo
    ok_(arg < 3)
    File "/usr/lib/pymodules/python2.5/nose/tools.py", line 25, in ok_
    assert expr, msg
  on
    arg=3 
    arg=4 

  File "/home/yoh/proj/pymvpa/pymvpa/mvpa/tests/test_sweepargs_demo.py", line 11, in test_sweepargs_demo
    ok_(arg < 2)
    File "/usr/lib/pymodules/python2.5/nose/tools.py", line 25, in ok_
    assert expr, msg
  on
    arg=2 

----------------------------------------------------------------------
Ran 1 test in 0.003s

FAILED (failures=1)

* obviously multiple decorators could be attached to the same test, to test all combinations of more than one argument, but the output atm is a bit cryptic ;-)
msg102740 - (view) Author: Fernando Perez (fperez) Date: 2010-04-09 19:25
Hey Yarick,

On Thu, Apr 8, 2010 at 18:53, Yaroslav Halchenko <report@bugs.python.org> wrote:
> In PyMVPA we have our little decorator as an alternative to Fernando's generators, which is closer, I think, to what Michael was wishing for:
> @sweepargs
>
> http://github.com/yarikoptic/PyMVPA/blob/master/mvpa/testing/sweepargs.py
>
> NB it has some minor PyMVPA specificity which could easily be wiped out, and since at most four eyes have looked at it and it bears "evolutionary" changes, it is far from being the cleanest/best piece of code, BUT:
>
> * it is very easy to use: just decorate a test method/function and give it an argument to vary within the function call, e.g. something like
>
> @sweepargs(arg=range(5))
> def test_sweepargs_demo(arg):
>     ok_(arg < 5)
>     ok_(arg < 3)
>     ok_(arg < 2)
>
> For nose/unittest it would still look like a single test

Thanks for the post; I obviously defer to Michael on the final
decision, but I *really* would like a solution that reports an
'argument sweep' as multiple tests, not as one.  They are truly
multiple tests (since they can pass/fail independently), so I think
they should be treated as such.

On the other hand, your code does have nifty features that could be
used as well, so perhaps the best of both can be used in the end.

Cheers,

f
msg102741 - (view) Author: Yaroslav Halchenko (Yaroslav.Halchenko) Date: 2010-04-09 19:42
Fernando, I agree... somewhat ;-)

At some point (whenever everything works fine and no unittests fail) I wanted to marry sweepargs to nose and make it spit out a dot (or animate a spinning wheel ;)) for every passed unittest, so instead of 300 dots I got a picturesque field of thousands of dots and Ss, and also saw how many were skipped for some parametrizations.  But I became unsure of such a feature, since the field became quite large and hard to "grasp" visually, although it did give me a better idea of the total number of "testings" that were done and skipped.  So maybe it would be helpful to separate the notions of tests and testings and give the user the ability to control the level of verbosity (1 -- tests, 2 -- testings, 3 -- verbose listing of testings (test(parametrization))).

But I have blessed sweepargs every time something goes nuts and a test starts failing for (nearly) all parametrizations at the same point.  That is where I really enjoy the concise summary.
Also I observe that often a bug causing an ERROR reveals itself through multiple tests.  So maybe it would be worth developing a generic 'summary' output which would collect all tracebacks and then group them by the location of the actual failure and the tests/testings which hit it?
msg102750 - (view) Author: Fernando Perez (fperez) Date: 2010-04-09 21:19
Yarick: Yes, I do actually see the value of the summary view.  When I have a parametric test that fails, I tend to just run nose with -x so it stops at the first error, and with the --pdb option to study it, so I simply ignore all the other failures.  To me, test failures are quite often like compiler error messages: if there are a lot of them, it's best to look only at the first few, fix those and try again, because the rest could be coming from the same cause.

I don't know if Michael has plans/bandwidth to add the summary support as well, but I agree it would be very nice to have, just not at the expense of individual reporting.
msg102754 - (view) Author: Michael Foord (michael.foord) * (Python committer) Date: 2010-04-09 21:44
If we provide builtin support for parameterized tests it will have to report each test separately otherwise there is no point. You can already add support for running tests with multiple parameters yourself - the *only* advantage of building support into unittest (as I see it) is for better reporting of individual tests.
msg105508 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2010-05-11 13:07
I agree with Michael - one test that covers multiple settings can easily be done by collecting results within the test itself and then checking at the end that no failures were detected (e.g. I've done this myself with a test that needed to be run against multiple input files - the test knew the expected results and maintained lists of filenames where the result was incorrect. At the end of the test, if any of those lists contained entries, the test was failed, with the error message giving details of which files had failed and why).

What parameterised tests could add which is truly unique is for each of those files to be counted and tracked as a separate test. Sometimes the single-test-with-internal-failure-recording will still make more sense, but tracking the tests individually will typically give a better indication of software health (e.g. in my example above, the test ran against a few dozen files, but the only way to tell if it was one specific file that was failing or all of them was to go look at the error message).
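A minimal sketch of the "collect failures, fail once at the end" pattern described above (process() and the expected values are placeholders, not real stdlib code):

import unittest

def process(name):
    # Placeholder for the real code under test.
    return len(name)

class MultiInputTest(unittest.TestCase):
    expected = {'ab': 2, 'abc': 3, 'abcd': 5}  # last entry deliberately wrong

    def test_all_inputs(self):
        failures = []
        for name, want in self.expected.items():
            got = process(name)
            if got != want:
                failures.append('%s: got %r, expected %r' % (name, got, want))
        # One test, but the failure message still names every offending input.
        self.assertFalse(failures, 'failures:\n' + '\n'.join(failures))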
msg105520 - (view) Author: Yaroslav Halchenko (Yaroslav.Halchenko) Date: 2010-05-11 14:10
Hi Nick,

Am I reading you right?  Are you suggesting implementing this manual
looping/collecting/reporting separately in every unittest which needs it?

On Tue, 11 May 2010, Nick Coghlan wrote:
> Nick Coghlan <ncoghlan@gmail.com> added the comment:

> I agree with Michael - one test that covers multiple settings can easily be done by collecting results within the test itself and then checking at the end that no failures were detected (e.g. I've done this myself with a test that needed to be run against multiple input files - the test knew the expected results and maintained lists of filenames where the result was incorrect. At the end of the test, if any of those lists contained entries, the test was failed, with the error message giving details of which files had failed and why).
msg105554 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2010-05-11 22:35
No, I'm saying I don't see summarising the parameterised tests separately from the overall test run as a particularly important feature, since you can test multiple parameters in a single test manually now.

The important part is for the framework to be able to generate multiple tests from a single test function with a selection of arguments.  Doing a separate run with just those tests will give you any information that a summary would give you.
msg113113 - (view) Author: (nchauvat) Date: 2010-08-06 17:02
In case it could be useful, here is how generative/parametrized tests are handled in logilab.common.testlib http://hg.logilab.org/logilab/common/file/a6b5fe18df99/testlib.py#l1137
msg140729 - (view) Author: Austin Bingham (abingham) Date: 2011-07-20 13:48
Has a decision been made to implement some form of parametric tests? Is work being done?

Along with parameterizing individual test methods, I'd also like to throw out a request for parametric TestCases. In some cases I find that I want the behavior of setUp() or tearDown() to vary based on some set of parameters, but the individual tests are not parametric per se.
msg140730 - (view) Author: Brian Curtin (brian.curtin) * (Python committer) Date: 2011-07-20 13:54
The existence of this issue is effectively the decision that we should probably do this, and I think the discussion shows we agree it should happen. How it's done is another matter, and we have roughly a year to get it figured out before 3.3 gets close.

I have a patch that implements this as a decorator, but it's out of date by now and not feature complete. It's on my todo list of patches to revive.
msg140732 - (view) Author: R. David Murray (r.david.murray) * (Python committer) Date: 2011-07-20 14:03
Brian, if you don't have time to work on it in the next little while, maybe you could post your partial patch in case someone else wants to work on it?  Might be a good project for someone on the mentoring list.

Unless someone sees a clever way to implement both with the same mechanism, parameterizing test cases should probably be a separate issue.  Doing it via subclassing is straightforward, which also argues for a separate issue, since an 'is this worth it' discussion should be done separately for that feature.
msg140733 - (view) Author: Michael Foord (michael.foord) * (Python committer) Date: 2011-07-20 14:03
*If* we add this to unittest then we need to decide between test load time parameterised tests and test run time parameterisation. 

Load time is more backwards compatible / easier (all tests can be generated at load time and the number of tests can be known). Run time is more useful. (With load time parameterisation the danger is that test generation can fail so we need to have the test run not bomb out in this case.)

A hack for run time parameterisation is to have all tests represented by a single test but generate a single failure that represents all the failures. I think this would be an acceptable approach. It could still be done with a decorator.
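A hedged sketch of that hack as a decorator (nothing here is an existing unittest API; the name is made up): the wrapped test body runs once per parameter set and a single combined failure is raised at the end.

import functools
import traceback

def with_params(param_sets):
    # Hypothetical decorator: run the test once per parameter tuple and
    # report all failures in one combined assertion at the end.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self):
            failures = []
            for params in param_sets:
                try:
                    func(self, *params)
                except self.failureException:
                    failures.append((params, traceback.format_exc()))
            if failures:
                self.fail('\n'.join('params=%r\n%s' % f for f in failures))
        return wrapper
    return decorator

Usage would look like @with_params([(1, 2), (3, 4)]) above an ordinary test method that takes the extra arguments.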
msg140735 - (view) Author: Michael Foord (michael.foord) * (Python committer) Date: 2011-07-20 14:06
And yes, parameterising test cases is a different issue. bzr does this IIRC. This is easier in some ways, and can be done through load_tests, or any other test load time mechanism.
msg140736 - (view) Author: R. David Murray (r.david.murray) * (Python committer) Date: 2011-07-20 14:07
Michael, would your "single test" clearly indicate all the individual failures by name?  If not, then I would not find it useful.  I can already easily "parameterize" inside a single test using a loop, it's the detailed reporting piece that I want support for.
msg140737 - (view) Author: R. David Murray (r.david.murray) * (Python committer) Date: 2011-07-20 14:12
The reporting piece, and ideally being able to use the arguments to unittest to run a single one of the parameterized tests.  (I can get the reporting piece now using the locals() hack, but that doesn't support test selection.)  Does test selection support require load time parameterization?
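For context, one common form of the "modify locals()" hack (an assumption about what is meant here, not necessarily the actual code): test methods are generated in a loop inside the class body and injected into the class namespace, so each parameter set gets its own named, reported test.

import unittest

def _make_test(value):
    def test(self):
        self.assertGreater(value, 0)
    return test

class GeneratedTests(unittest.TestCase):
    # The hack: build each test method in a loop inside the class body and
    # inject it into the class namespace via locals(), one method per value.
    # (Relies on the class body namespace being a writable dict, hence "hack".)
    for _i, _value in enumerate([3, 2, 0]):
        locals()['test_value_%d' % _i] = _make_test(_value)
    del _i, _value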
msg140738 - (view) Author: Michael Foord (michael.foord) * (Python committer) Date: 2011-07-20 14:16
Test selection would require load time parameterisation - although the current test selection mechanism is through importing which would probably *not* work without a specific fix. Same for run time parameterisation.

Well how *exactly* you generate the names is an open question, and once you've solved that problem it should be no harder to show them clearly in the failure message with a "single test report" than with multiple test reports.

The way to generate the names is to number each test *and* show the parameterised data as part of the name (which can lead to *huge* names if you're not careful - or just object reprs in names which isn't necessarily useful). I have a decorator example that does runtime parameterisation, concatenating failures to a single report but still keeping the generated name for each failure.

Another issue is whether or not parameterised tests share a TestCase instance or have separate ones. If you're doing load time generation it makes sense to have a separate test case instance, with setUp and tearDown run individually. This needs to be clearly documented as the parameter generation would run against an uninitialised (not setUp) testcase.

Obviously reporting multiple test failures separately (run time parameterisation) is a bit nicer, but runtime test generation doesn't play well with anything that works with test suites - where you expect all tests to be represented by a single test case instance in the suite. I'm not sure that's a solvable problem.
msg140739 - (view) Author: Michael Foord (michael.foord) * (Python committer) Date: 2011-07-20 14:19
Dammit, I've reversed my thinking in some of those messages. Load time parameterisation *does* give you separate test reporting. It is run time parameterisation that doesn't.

Depending on how you do it (i.e. if the decorator generates the tests and attaches them as TestCase methods) you can do it at import time. And then normal test selection works if you know the test name.
msg140740 - (view) Author: Michael Foord (michael.foord) * (Python committer) Date: 2011-07-20 14:28
So if we do import time *or* test load time parameterisation then we can do separate failure reporting. We may still want to improve "test selection" for parameterised tests.

There are use cases for run time parameterisation (for example generating tests from a database that is only available during the test run was one use case someone has that can't be met by early parameterisation).

If we do run time parameterisation then I think we can only maintain compatibility with existing uses of test suites by having a single test case instance per parameterised test, and single combined failure report.

I would prefer *not* to add run time parameterisation.
msg140766 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2011-07-20 23:07
Load time parameterisation seems more of a worthwhile addition to me, too. As David noted, runtime parameterisation is pretty easy to do by looping and consolidating failures into the one error message via self.fail().

For test naming, trying to get too clever seems fraught with peril. I'd be happy with something like:
1. Parameterised tests are just numbered sequentially by default
2. The repr of the test parameters is included when displaying detailed error messages
3. A hook is provided to allow customisation of the naming scheme and test parameters. This hook receives an iterable of test parameters and is expected to create a new iterable producing 2-tuples of test names and parameters.

The default behaviour of the parameterised test generator could then be something like:

def parameterised_test_info(name, cases):
    for idx, case in enumerate(cases, start=1):
        yield ("{}_{}".format(name, idx), case)

The new machinery (however provided) would then take care of checking the names are unique, adding those methods to the test case, and storing the parameters somewhere convenient (likely as attributes of the created methods) for use when running the test and when displaying error messages.
msg140771 - (view) Author: Michael Foord (michael.foord) * (Python committer) Date: 2011-07-20 23:57
That all sounds good to me Nick. Some notes / questions.

How should parameterised tests be marked? I'm happy with a unittest.parameterized decorator (it would do no work other than mark the test method, with the parameterisation being done in the TestLoader).

What could the "name customisation hook" look like? A module level hook (yuck) / a class method hook on the TestCase that must handle every parameterised test on that TestCase / a decorator for the parameterised test method?

If we do it at load time should we require parameterised methods to be class methods? The alternative is to instantiate the test case when loading and collecting the tests. Class methods won't play well with the other unittest decorators as you can't attach attributes to classmethods. (So I guess instance methods it is!)

If collecting tests fails part way through we should generate a failing test that reports the exception. Should the tests collected "so far" be kept?

Should skip / expectedFail decorators still work with them? If so they'll need custom support I expect.

If we're generating test names and then attaching these tests to TestCase instances (so that normal test case execution will run them), should we check for name clashes? What should we do if there is a name clash? (The generated name would shadow an existing name.) We could prevent that by using a non-identifier name - e.g. "${}_{}".format(name, idx)
msg140772 - (view) Author: Michael Foord (michael.foord) * (Python committer) Date: 2011-07-21 00:01
Oh, and if we're not going to get clever with naming, how is the TestResult going to include the parameter repr in the failure report? That information will have to be stored on the TestCase. 

I would prefer this feature not to touch the TestResult - so any failure message cleverness should be done in the TestCase. It will be harder to maintain if this feature is spread out all the way through unittest.
msg140776 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2011-07-21 02:23
Here's a sketch for a possible decorator factory:

def parameters(params, *, builder=_default_param_builder):
    def make_parameterized_test(f):
        return ParameterizedTest(f, params, builder)
    return make_parameterized_test

(default builder would be the one I sketched earlier)

Example usage:

ExampleParams = namedtuple('ExampleParams', 'x y')
all_cases = [ExampleParams(x, y) for x in range(3) for y in range(3)]

def x_y_naming(name, cases):
    for idx, case in enumerate(cases, start=1):
        x, y = case
        yield ("{}_{}_{}_{}".format(name, idx, x, y), case)

class ExampleTestCase(TestCase):
  @parameters(all_cases)
  def test_example_auto_naming(self, x, y):
      self.assertNotEqual(x, y) # fails sometimes :)

  @parameters(all_cases, builder=x_y_naming)
  def test_example_custom_naming(self, x, y):
      self.assertNotEqual(x, y) # fails sometimes :)

Once defined, you're far better equipped than I am to decide how TestLoader and TestCase can best cooperate to generate appropriate test operations and error messages.
msg140777 - (view) Author: R. David Murray (r.david.murray) * (Python committer) Date: 2011-07-21 02:34
Personally I would be happy if I could pass in a dictionary that maps names to argument tuples, or an iterator yielding (name, (argtuple)) pairs, and just have the failure report the name.  That is, make me responsible for generating meaningful names, don't autogenerate them.  Then someone could provide a name autogenerate helper on top of that base interface. 

Checking for name clashes would be nice, but it would be nice to have that for non-parameterized test cases, too (I've been bitten by that a number of times ;)
msg140785 - (view) Author: Austin Bingham (abingham) Date: 2011-07-21 05:40
OK, I created issue 12600 for dealing with parameterized TestCases as
opposed to individual tests.

On Wed, Jul 20, 2011 at 4:03 PM, R. David Murray <report@bugs.python.org> wrote:
> Unless someone sees a clever way to implement both with the same mechanism, parameterizing test cases should probably be a separate issue.  Doing it via subclassing is straightforward, which also argues for a separate issue, since an 'is this worth it' discussion should be done separately for that feature.
msg140804 - (view) Author: Michael Foord (michael.foord) * (Python committer) Date: 2011-07-21 10:36
Well, pyflakes will tell you about name clashes within a TestCase (unless you're shadowing a test on a base class which I guess is rarely the case)...

When we generate the tests we could add the parameter reprs to the docstring. A decorator factory that takes arguments and an optional name builder seems like a reasonable API to me.

Actually, name clashes *aren't* a problem - as we're generating TestCase instances these generated tests won't shadow existing tests (you'll just have two tests with the same name - does this mean id() may not be unique - worth checking).
msg140809 - (view) Author: Michael Foord (michael.foord) * (Python committer) Date: 2011-07-21 11:44
Note that name clashes *would* result in non-unique testcase ids, so we need to prevent that.
msg140811 - (view) Author: R. David Murray (r.david.murray) * (Python committer) Date: 2011-07-21 11:51
Please implement name+argtuple first and build auto-naming on top of that.  Nick's approach would not allow me to specify a custom (hand coded) name for each set of arguments, which is my normal use case.  I also would not like the arguments auto-generated into the docstring unless that was optional, since I often have quite substantial argument tuples and it would just clutter the output to have them echoed in the docstring.  In my use cases the custom name is more useful than seeing the actual parameters.
msg140818 - (view) Author: Michael Foord (michael.foord) * (Python committer) Date: 2011-07-21 15:26
David, I don't understand - it looks like Nick's suggestion would allow you to create a name per case, that's the point of it!

You could even do this:

def _name_from_case(name, cases):
    for idx, case in enumerate(cases, start=1):
        test_name = case[0]
        params = case[1:]
        yield (test_name, params)
msg140837 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2011-07-21 23:41
Sorry, misclicked and removed this comment from David:

Oh, I see.  Make the name the first element of the argtuple and then strip it off.  Well, that will work, it just seems bass-ackwards to me :)

And why is it 'case'?  I thought we were talking about tests.
msg140838 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2011-07-21 23:44
In my example, I needed a word to cover each entry in the collection of parameter tuples. 'case' fit the bill.

The reason I like the builder approach is that it means the simplest usage is to just create a list (or other iterable) of parameter tuples to be tested, then pass that list to the decorator factory. The sequential naming will then let you find the failing test cases in the sequence.

Custom builders then cover any cases where better naming is possible and desirable (such as explicitly naming each case as part of the parameters).

One refinement that may be useful is for the builders to produce (name, description, parameters) 3-tuples rather than 2-tuples, though. Then the default builder could just insert repr(params) as the description, while David's custom builder could either leave the description blank, or include a relevant subset of the parameters.
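A quick sketch of what such a default 3-tuple builder might look like (names are illustrative):

def default_builder(name, cases):
    # Yield (test name, description, parameters); the description is just
    # repr(case), as suggested above.
    for idx, case in enumerate(cases, start=1):
        yield ("{}_{}".format(name, idx), repr(case), case)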
msg140843 - (view) Author: Éric Araujo (eric.araujo) * (Python committer) Date: 2011-07-22 00:10
> Well, pyflakes will tell you about name clashes within a TestCase
Seen that :)

> (unless you're shadowing a test on a base class which I guess is
> rarely the case)...
That too, in our own test suite (test_list or test_tuple, I have that change in a Subversion or old hg clone somewhere).  I agree about “rarely”, however.
msg140844 - (view) Author: Andrew Bennetts (spiv) Date: 2011-07-22 00:13
You may be interested in an existing, unittest-compatible library that provides this: http://pypi.python.org/pypi/testscenarios
msg147236 - (view) Author: Éric Araujo (eric.araujo) * (Python committer) Date: 2011-11-07 16:29
Another nice API: http://feldboris.alwaysdata.net/blog/unittest-template.html
msg153450 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2012-02-15 23:41
I just remembered that many of the urllib.urlparse tests are guilty of only reporting the first case that fails, instead of testing everything and reporting all failures:

http://hg.python.org/cpython/file/default/Lib/test/test_urlparse.py

IMO, it would make a good guinea pig for any upgrade to the stdlib support for parameterised unit tests.
msg153452 - (view) Author: Michael Foord (michael.foord) * (Python committer) Date: 2012-02-16 01:40
FWIW I think nose2 is going to have "test load time" parameterized tests rather than "run time" parameterized tests, which is what I think we should do for unittest. The API should be as simple as possible for basic cases, but suitable for more complex cases too.

I quite liked Nick's idea of allowing a custom "case builder" but providing a sane default one.
msg161979 - (view) Author: R. David Murray (r.david.murray) * (Python committer) Date: 2012-05-31 01:57
People interested in this issue might be interested in changeset e6a33938b03f.  I use parameterized unit tests in email a lot, and was annoyed by the fact that I couldn't run the tests individually using the unittest CLI.  The fix for that turned out to be trivial, but by the time I figured it out, I'd already written most of the metaclass.  So since the metaclass reduces the boilerplate (albeit at the cost of looking like black magic), I decided to go with it.  And the metaclass at least avoids the rather questionable use of the "modify locals()" hack I was using before.
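For readers who have not seen the changeset, a hedged illustration of the general metaclass idea (this is not the actual test_email code; the naming convention here is made up): parameter dicts sitting next to a template method are expanded into ordinary, individually selectable test methods at class creation time.

import unittest

class Parameterize(type):
    # Illustrative only.  Any method named 'param_<name>' with a matching
    # '<name>_params' dict is expanded into one ordinary test method per
    # entry ('test_<name>_<key>'), so each parameter set can be selected
    # and reported on its own.
    def __new__(mcls, clsname, bases, ns):
        for attr in [a for a in ns if a.startswith('param_')]:
            base = attr[len('param_'):]
            template = ns[attr]
            for key, args in ns.get(base + '_params', {}).items():
                ns['test_{}_{}'.format(base, key)] = (
                    lambda self, _f=template, _a=args: _f(self, *_a))
        return super().__new__(mcls, clsname, bases, ns)

class AdditionTests(unittest.TestCase, metaclass=Parameterize):
    lower_params = {'small': (1, 2, 3), 'big': (10, 20, 30)}

    def param_lower(self, a, b, expected):
        self.assertEqual(a + b, expected)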
msg163809 - (view) Author: Éric Araujo (eric.araujo) * (Python committer) Date: 2012-06-24 17:08
I like test_email’s decorator.  It looks like https://github.com/wolever/nose-parameterized, which I’m using.  The implementation and generation of test method names may be less nice than what was discussed here, but the API (a decorator plus a list of arguments) is something I like, much better than nose’s built-in parametrization.
msg164511 - (view) Author: Borja Ruiz Castro (borja.ruiz) Date: 2012-07-02 11:23
Hi Murray!

I use a lot of parametrized tests. I usually use the ENV to pass these
parameters and/or a custom configuration file.

What is your approach to parametrizing all the test stuff?

Regards,

Borja.

msg164512 - (view) Author: Borja Ruiz Castro (borja.ruiz) Date: 2012-07-02 11:42
Sorry, I failed to mention that I use Testify to launch all my tests!

msg164967 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2012-07-08 06:42
As another in-the-standard-library use case: my additions to the ipaddress test suite are really crying out for parameterised test support.

My current solution is adequate for coverage and debugging purposes (a custom assert applied to multiple values in a test case), but has the known limitations of that approach (specifically, only the first failing case gets reported rather than all failing cases, which can sometimes slow down the debugging process).
msg181507 - (view) Author: Miklós Fazekas (Miklós.Fazekas) Date: 2013-02-06 09:37
http://gist.github.com/mfazekas/1710455

I have a parametric decorator which is similar to sweepargs in concept. It can be applied at either class or method level. And it mutates test names, so failures should be nice, and filters can be applied too.

# Applied at method level:
@parametric
class MyTest(unittest.TestCase):
    @parametric(foo=[1, 2], bar=[3, 4])
    def testWithParams(self, foo, bar):
        self.assertLess(foo, bar)
    def testNormal(self):
        self.assertEqual('foo', 'foo')

# Applied at class level:
@parametric(foo=[1, 2], bar=[3, 4])
class MyTest(unittest.TestCase):
    def testPositive(self, foo, bar):
        self.assertLess(foo, bar)
    def testNegative(self, foo, bar):
        self.assertLess(-foo, -bar)

Sample failures:
======================================================================
FAIL: testNegative_bar_3_foo_1 (__main__.MyTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Work/temp/parametric.py", line 63, in f
    self.fun(*args,**v)
  File "/Work/temp/parametric.py", line 158, in testNegative
    self.assertLess(-foo,-bar)
AssertionError: -1 not less than -3

======================================================================
FAIL: testNegative_bar_3_foo_2 (__main__.MyTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Work/temp/parametric.py", line 63, in f
    self.fun(*args,**v)
  File "/Work/temp/parametric.py", line 158, in testNegative
    self.assertLess(-foo,-bar)
AssertionError: -2 not less than -3
msg181859 - (view) Author: Michael Foord (michael.foord) * (Python committer) Date: 2013-02-10 21:48
Looks like we're going to get subtests ( issue #16997 ) instead of parameterized tests.
msg196378 - (view) Author: Ezio Melotti (ezio.melotti) * (Python committer) Date: 2013-08-28 12:02
Since we now got subtests, can this be closed?
msg196382 - (view) Author: R. David Murray (r.david.murray) * (Python committer) Date: 2013-08-28 13:15
subtests don't satisfy my use cases.  You can't run an individual subtest by name, and I find that to be a very important thing to be able to do during development and debugging.  At the moment at least I'm fine with just having my parameterize decorator in the email module, so I'm not motivated to move this forward right now.  I would like to come back to it eventually, though.
msg196425 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2013-08-28 21:32
Right, subtests are about improving reporting without adding selectivity.
Explicitly parameterized tests require more structural changes to tests,
but give the selectivity that subtests don't.
msg224457 - (view) Author: Mark Lawrence (BreamoreBoy) * Date: 2014-07-31 23:44
Is there any possibility of getting this into 3.5?  If it helps I've always got time on my hands so if nothing else I could do testing on Windows 8.1.
msg224492 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2014-08-01 12:27
I don't believe the "how to request specific parameters" aspect has been
clearly addressed so far, and the subTest() API in Python 3.4 already
allows subtests that aren't individually addressable.

Any new API (if any) should likely also be based on the subtest feature,
and I don't think the community has enough experience with that yet to be
confident in how best to integrate it with parameterised testing.
msg224501 - (view) Author: Michael Foord (michael.foord) * (Python committer) Date: 2014-08-01 13:56
I agree with Nick. There is a potential use case for parameterized tests as well as sub tests, but it's not something we're going to rush into.
msg352882 - (view) Author: Andre Herbst (moormaster) Date: 2019-09-20 19:18
+1 for the feature

Subtests make the test results of all asserts visible at test execution time but decrease the readability of a test:

@parameterized([2,4,6])
def test_method_whenCalled_returnsNone(self, a):
    # 1) arrange
    something = Something()

    # 2) act
    result = something.method(a)

    # 3) assert
    self.assertIsNone(result)

When using subtests the phases of 1) arrange, 2) act, 3) assert are not clearly separated; the unit test contains logic and two additional indentation levels that could be avoided with parameterized tests:

def test_method_whenCalled_returnsNone(self):
    # 1) arrange
    something = Something()

    for a in [2,4,6]:
        with self.subTest(a=a):
            # 2) act
            result = something.method(a)

            # 3) assert
            self.assertIsNone(result)
History
Date User Action Args
2022-04-11 14:56:57 admin set github: 52145
2019-10-28 20:47:39 pconnell set nosy: + pconnell
2019-09-20 19:18:32 moormaster set nosy: + moormaster
messages: + msg352882
2019-02-24 22:25:17 BreamoreBoy set nosy: - BreamoreBoy
2015-10-02 20:58:05 belopolsky set versions: + Python 3.6, - Python 3.5
2014-12-21 01:35:43 martin.panter set nosy: + martin.panter
2014-08-01 13:56:45 michael.foord set messages: + msg224501
2014-08-01 12:27:04 ncoghlan set messages: + msg224492
2014-08-01 00:24:19 brian.curtin set nosy: - brian.curtin
2014-07-31 23:44:21 BreamoreBoy set nosy: + BreamoreBoy, zach.ware
messages: + msg224457
versions: + Python 3.5, - Python 3.4
2013-08-28 21:32:32 ncoghlan set messages: + msg196425
2013-08-28 13:15:13 r.david.murray set messages: + msg196382
2013-08-28 12:02:24 ezio.melotti set messages: + msg196378
2013-03-04 07:07:00 terry.reedy set nosy: + terry.reedy
2013-02-10 21:48:01 michael.foord set messages: + msg181859
2013-02-06 09:37:33 Miklós.Fazekas set nosy: + Miklós.Fazekas
messages: + msg181507
2012-09-26 00:30:23 santoso.wijaya set nosy: + santoso.wijaya
2012-09-24 21:28:01 chris.jerdonek set nosy: + chris.jerdonek
2012-08-06 00:01:43 bfroehle set nosy: + bfroehle
2012-07-11 15:04:01 kynan set nosy: + kynan
2012-07-08 06:42:26 ncoghlan set messages: + msg164967
2012-07-02 11:42:27 borja.ruiz set messages: + msg164512
2012-07-02 11:23:58 borja.ruiz set nosy: + borja.ruiz
messages: + msg164511
2012-06-24 17:08:15 eric.araujo set messages: + msg163809
versions: + Python 3.4, - Python 3.3
2012-05-31 01:57:41 r.david.murray set messages: + msg161979
2012-02-16 01:40:55 michael.foord set messages: + msg153452
2012-02-15 23:41:48 ncoghlan set messages: + msg153450
2011-12-18 09:28:33 hpk set nosy: + hpk
2011-12-15 14:08:00 Julian set nosy: + Julian
2011-11-07 16:29:33 eric.araujo set messages: + msg147236
2011-07-22 00:13:56 spiv set nosy: + spiv
messages: + msg140844
2011-07-22 00:10:43 eric.araujo set messages: + msg140843
2011-07-21 23:44:33 ncoghlan set messages: + msg140838
2011-07-21 23:41:46 ncoghlan set messages: + msg140837
2011-07-21 23:40:49 ncoghlan set messages: - msg140819
2011-07-21 15:47:21 r.david.murray set messages: + msg140819
2011-07-21 15:26:51 michael.foord set messages: + msg140818
2011-07-21 11:51:13 r.david.murray set messages: + msg140811
2011-07-21 11:44:57 michael.foord set messages: + msg140809
2011-07-21 10:36:59 michael.foord set messages: + msg140804
2011-07-21 05:40:27 abingham set messages: + msg140785
2011-07-21 03:46:13 ezio.melotti set nosy: + ezio.melotti
2011-07-21 02:34:24 r.david.murray set messages: + msg140777
2011-07-21 02:23:29 ncoghlan set messages: + msg140776
2011-07-21 00:01:25 michael.foord set messages: + msg140772
2011-07-20 23:57:50 michael.foord set messages: + msg140771
2011-07-20 23:33:28 eric.snow set nosy: + eric.snow
2011-07-20 23:07:47 ncoghlan set messages: + msg140766
2011-07-20 16:22:10 eric.araujo set nosy: + eric.araujo
2011-07-20 14:28:34 michael.foord set messages: + msg140740
2011-07-20 14:19:56 michael.foord set messages: + msg140739
2011-07-20 14:16:58 michael.foord set messages: + msg140738
2011-07-20 14:12:08 r.david.murray set messages: + msg140737
2011-07-20 14:07:51 r.david.murray set messages: + msg140736
2011-07-20 14:06:01 michael.foord set messages: + msg140735
2011-07-20 14:03:25 michael.foord set messages: + msg140733
2011-07-20 14:03:21 r.david.murray set messages: + msg140732
2011-07-20 13:59:27 r.david.murray set nosy: + r.david.murray
2011-07-20 13:54:40 brian.curtin set stage: needs patch
messages: + msg140730
components: + Tests
versions: + Python 3.3, - Python 2.7, Python 3.2
2011-07-20 13:48:04 abingham set nosy: + abingham
messages: + msg140729
2010-08-06 17:02:21 nchauvat set nosy: + nchauvat
messages: + msg113113
2010-05-11 22:35:08 ncoghlan set messages: + msg105554
2010-05-11 14:10:14 Yaroslav.Halchenko set messages: + msg105520
2010-05-11 13:07:22 ncoghlan set nosy: + ncoghlan
messages: + msg105508
2010-04-09 21:44:50 michael.foord set messages: + msg102754
2010-04-09 21:19:49 fperez set messages: + msg102750
2010-04-09 19:42:24 Yaroslav.Halchenko set messages: + msg102741
2010-04-09 19:25:45 fperez set messages: + msg102740
2010-04-09 01:53:44 Yaroslav.Halchenko set nosy: + Yaroslav.Halchenko
messages: + msg102667
2010-02-11 23:48:43 fperez set messages: + msg99243
2010-02-11 15:53:55 exarkun set messages: + msg99226
2010-02-11 15:38:10 michael.foord set messages: + msg99224
2010-02-11 15:34:12 pitrou set messages: + msg99223
2010-02-11 15:33:11 brian.curtin set nosy: + brian.curtin
messages: + msg99222
2010-02-11 15:22:07 exarkun set nosy: + exarkun
messages: + msg99218
2010-02-11 15:22:01 michael.foord set messages: + msg99217
2010-02-11 15:18:43 michael.foord set messages: + msg99216
2010-02-11 15:17:39 pitrou set nosy: + pitrou
messages: + msg99215
2010-02-10 10:29:22 michael.foord set assignee: michael.foord
nosy: + michael.foord
2010-02-10 02:21:20 fperez create