
Author ncoghlan
Recipients Yaroslav.Halchenko, brian.curtin, exarkun, fperez, michael.foord, ncoghlan, pitrou
Date 2010-05-11.13:07:21
I agree with Michael - one test that covers multiple settings can easily be done by collecting results within the test itself and then checking at the end that no failures were detected. For example, I've done this myself with a test that needed to run against multiple input files: the test knew the expected results and maintained lists of the filenames that produced incorrect results. At the end of the test, if any of those lists contained entries, the test failed, with an error message giving details of which files had failed and why.
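The pattern described above can be sketched roughly as follows. The filenames, the expected values, and the `process` helper are all hypothetical stand-ins for whatever the real test exercises:

```python
import unittest

class BatchedFileTest(unittest.TestCase):
    # Hypothetical inputs: each maps a filename to its expected result.
    CASES = {"a.txt": 1, "b.txt": 2, "c.txt": 3}

    def process(self, name):
        # Stand-in for the real processing step under test.
        return self.CASES[name]

    def test_all_files(self):
        failures = []
        for name, expected in self.CASES.items():
            actual = self.process(name)
            if actual != expected:
                failures.append((name, expected, actual))
        # A single pass/fail result for the whole batch; the details of
        # which files failed (and how) go into the failure message.
        self.assertFalse(failures, "Failed files: %r" % failures)
```

The whole batch still reports as one test, which is exactly the limitation discussed below.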

What parameterised tests could add that is truly unique is for each of those files to be counted and tracked as a separate test. Sometimes the single-test-with-internal-failure-recording approach will still make more sense, but tracking the tests individually typically gives a better indication of software health. In my example above, the test ran against a few dozen files, but the only way to tell whether one specific file was failing, or all of them, was to go and read the error message.
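For comparison, unittest's `subTest` context manager (added later, in Python 3.4, well after this discussion) gives roughly the per-case tracking described here: each failing case is reported separately with its parameters, without aborting the loop. The cases and the `process` helper below are again hypothetical:

```python
import unittest

class PerFileTest(unittest.TestCase):
    # Hypothetical inputs: each maps a filename to its expected result.
    CASES = {"a.txt": 1, "b.txt": 2, "c.txt": 3}

    def process(self, name):
        # Stand-in for the real processing step under test.
        return self.CASES[name]

    def test_each_file(self):
        for name, expected in self.CASES.items():
            # Each subTest failure is reported individually, tagged with
            # filename=..., so a run shows exactly which files failed.
            with self.subTest(filename=name):
                self.assertEqual(self.process(name), expected)
```

Note that subtests are still grouped under one test method in the run count, so this sits somewhere between the single-batch pattern and fully separate parameterised tests.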
Date                 User      Action  Args
2010-05-11 13:07:24  ncoghlan  set     recipients: + ncoghlan, exarkun, pitrou, michael.foord, brian.curtin, fperez, Yaroslav.Halchenko
2010-05-11 13:07:24  ncoghlan  set     messageid: <>
2010-05-11 13:07:22  ncoghlan  link    issue7897 messages
2010-05-11 13:07:21  ncoghlan  create