
Author hpk
Recipients Arfrever, Julian, Yaroslav.Halchenko, abingham, bfroehle, borja.ruiz, brett.cannon, chris.jerdonek, eric.araujo, eric.snow, exarkun, ezio.melotti, flox, fperez, hpk, michael.foord, nchauvat, ncoghlan, pitrou, r.david.murray, santoso.wijaya, serhiy.storchaka, spiv
Date 2013-02-11.08:01:11
Message-id <CAPJJ-pLh_TDBX7LAJS1o4afi6oTy_EJ93oSpz83usOZtCQrovQ@mail.gmail.com>
In-reply-to <1360496309.3470.6.camel@localhost.localdomain>
Content
On Sun, Feb 10, 2013 at 12:41 PM, Antoine Pitrou <report@bugs.python.org> wrote:

>
> Antoine Pitrou added the comment:
>
> > Please don't commit. I think we still need a discussion as to whether
> > subtests or parameterized tests are a better approach. I certainly
> > don't think we need both, and there are a lot of people asking for
> > parameterized tests.
>
> I think they don't cater to the same crowd. I see parameterized tests as
> primarily used by people who like adding formal complexity to their
> tests in the name of architectural elegance (decorators, non-intuitive
> constructs and other additional boilerplate). Subtests are meant to not
> get in the way. IMHO, this makes them more suitable for stdlib
> inclusion, while the testing-in-python people can still rely on their
> additional frameworks.
>

Parametrized testing wasn't introduced because I or others like formal
complexity.  I invented the "yield" syntax that both pytest and nose still
support, and it was refined over several years before being deemed unfit
as a general-purpose testing approach.  More specifically, functional
parametrized tests each run relatively slowly.  People often want to
re-run a single failing test, select tests which use a certain fixture, or
send tests to different CPUs to speed up execution.  None of that is
possible with subtests, or am I missing something?
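
To make the selection point concrete, here is a minimal sketch using
pytest's parametrize marker (test names and values invented for
illustration); each parametrized case becomes a separately addressable
test item:

    # test_even.py -- each value yields its own test item:
    # test_even[2], test_even[4], test_even[6]
    import pytest

    @pytest.mark.parametrize("n", [2, 4, 6])
    def test_even(n):
        assert n % 2 == 0

Re-running one failing case, or spreading cases across CPUs with
pytest-xdist, then works per item:

    pytest "test_even.py::test_even[4]"   # re-run a single case
    pytest -n 4 test_even.py              # distribute items to 4 CPUs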

That being said, I have plans to support some form of "subtests" for pytest
as well, as there are indeed cases where a more lightweight approach fits
well, especially for unit or integration tests where one doesn't mind if a
group of tests needs to be re-run while fixing a failure in a single
subtest.  There it's usually more about reporting: getting nice, debuggable
output on failures.  I suspect the cases Antoine refers to fit this
pattern.
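
For reference, a rough sketch of how I understand the proposed unittest
API from this issue (example values invented): each failing iteration is
reported separately with its parameters, but the test method stays one
indivisible item for selection and distribution purposes:

    import unittest

    class TestEven(unittest.TestCase):
        def test_even(self):
            for n in (2, 3, 4):
                # a failure here (n=3) is reported with its subTest
                # parameters while the remaining iterations still run
                with self.subTest(n=n):
                    self.assertEqual(n % 2, 0)

That gives the nice, debuggable failure output described above, without
the per-case addressability that parametrization provides.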

best,
holger