
Author pitrou
Recipients brian.curtin, exarkun, fperez, michael.foord, pitrou
Date 2010-02-11.15:34:12
Message-id <1265902538.3529.6.camel@localhost>
In-reply-to <1265901728.97.0.29639357185.issue7897@psf.upfronthosting.co.za>
Content
> > With parameterized tests *all* the tests are run and *all* failures reported. With testing in a loop, the tests stop at the first failure.
> 
> +1 to this justification.  Parameterized tests are a big win over a
> simple for loop in a test.

Ah, thank you. Looks better indeed.
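
To make the quoted distinction concrete, here is a minimal sketch of the
loop pattern (the values are invented for illustration, nothing from the
actual code under discussion):

    import unittest

    class TestWithLoop(unittest.TestCase):
        def test_values(self):
            # A plain loop inside one test: the first failing value aborts
            # the whole test method, so the remaining values are never
            # checked and only one failure is reported.
            for value in [1, 2, 3, 4]:
                self.assertGreater(value, 2)

Running this reports a single failure for the first bad value and silently
skips the rest, whereas a parameterized test would exercise every value and
report every failure.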

> (However, I haven't looked at the IPython code at all, and Antoine's
> objection seemed to have something in particular to do with the
> IPython code?)

No, it has to do with the fact that you need to be able to distinguish the
different runs of your parameterized test (the same way you distinguish
between different test methods by their names, or line numbers, in the test
report).

If I have 500 runs in my parameterized (*) test, and the only failure
reported is that "2 is not greater than 3", I don't know where to start
looking, because the traceback won't tell me *which* parameters were in use
for that run.

(*) (this is horrible to type)
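
For the record, one way to get that identification (not the API being
proposed here, just a sketch with made-up values) is to generate one test
method per parameter set, so the parameters end up in the test name:

    import unittest

    def make_test(value):
        # One test method per parameter; the value is encoded in the
        # method name below, so a failure report says which run failed.
        def test(self):
            self.assertGreater(value, 2)
        return test

    class TestParameterized(unittest.TestCase):
        pass

    for value in [1, 2, 3, 4]:
        setattr(TestParameterized, "test_greater_than_2_%d" % value,
                make_test(value))

    if __name__ == "__main__":
        unittest.main()

With that, a failure shows up as e.g. test_greater_than_2_1, which is
enough to know which parameters to look at.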
History
Date                 User    Action  Args
2010-02-11 15:34:16  pitrou  set     recipients: + pitrou, exarkun, michael.foord, brian.curtin, fperez
2010-02-11 15:34:12  pitrou  link    issue7897 messages
2010-02-11 15:34:12  pitrou  create