
Author Yaroslav.Halchenko
Recipients Yaroslav.Halchenko, brian.curtin, exarkun, fperez, michael.foord, pitrou
Date 2010-04-09.19:42:23
Fernando, I agree... somewhat ;-)

At some point (once everything works fine and no unittests fail) I wanted to marry sweepargs to nose and make it spit out a dot (or animate a spinning wheel ;)) for every passed unittest, so that instead of 300 dots I got a picturesque field of thousands of dots and Ss, and also saw how many were skipped for some parametrizations.  But I became "not sure" about such a feature, since the field became quite large and hard to "grasp" visually, although it did give me a better idea of the total number of "testings" that were done and skipped.  So maybe it would be helpful to separate the notions of tests and testings and give the user the ability to control the level of verbosity (1 -- tests, 2 -- testings, 3 -- verbose listing of testings, i.e. test(parametrization)).

But I have blessed sweepargs every time something went nuts and a test started failing for (nearly) all parametrizations at the same point.  And that is where I really enjoy the concise summary.
Also I observe that often a single ERROR bug reveals itself through multiple tests.  So maybe it would be worth developing a generic 'summary' output which would collect all tracebacks and then group them by the location of the actual failure and the tests/testings which hit it?
Date                 User                Action  Args
2010-04-09 19:42:25  Yaroslav.Halchenko  set     recipients: + Yaroslav.Halchenko, exarkun, pitrou, michael.foord, brian.curtin, fperez
2010-04-09 19:42:25  Yaroslav.Halchenko  set     messageid: <>
2010-04-09 19:42:24  Yaroslav.Halchenko  link    issue7897 messages
2010-04-09 19:42:23  Yaroslav.Halchenko  create