
Author ncoghlan
Recipients ncoghlan
Date 2013-09-08.03:05:34
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1378609534.94.0.186067069028.issue18968@psf.upfronthosting.co.za>
In-reply-to
Content
Issue 18952 (fixed in http://hg.python.org/cpython/rev/23770d446c73) was another case where a test suite change resulted in tests not being executed as expected, but this wasn't initially noticed since it didn't *fail* the tests; it just silently skipped them.

We've had similar issues in the past, due to test name conflicts (so the second test shadowed the first), to old regrtest-style test discovery missing a class name from the test list, and to incorrect skip conditions on platform-specific tests.
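The name-conflict case can be sketched with a minimal (hypothetical) test class: because a class body is just a namespace, the second definition silently replaces the first, and the loader only ever sees one test.

```python
import unittest

class ExampleTests(unittest.TestCase):
    def test_feature(self):
        self.assertEqual(1 + 1, 2)

    # Same method name: this definition silently shadows the one
    # above, so the first test is never collected or run.
    def test_feature(self):
        self.assertTrue(True)

suite = unittest.TestLoader().loadTestsFromTestCase(ExampleTests)
print(suite.countTestCases())  # 1, not the 2 the author intended
```

Nothing fails here, which is exactly the problem: the suite passes while quietly running fewer tests than were written.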

Converting "unexpected skips" to a failure isn't enough, since these errors occur at a narrower scope than entire test modules.

I'm not sure what *would* work, though. Perhaps collecting platform-specific coverage stats for the test suite itself and looking for regressions?
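One illustrative sketch of the narrower-scope problem (the names and the deliberately-false condition are hypothetical, not from the issue): a buggy skip guard on a single method leaves the rest of the module green, so the skip is visible only in the `TestResult`, not as a failure.

```python
import io
import unittest

class PlatformTests(unittest.TestCase):
    # A deliberately false condition standing in for a buggy skip
    # guard on a platform-specific test.
    @unittest.skipUnless(False, "platform facility missing")
    def test_platform_feature(self):
        pass

    def test_always_runs(self):
        pass

suite = unittest.TestLoader().loadTestsFromTestCase(PlatformTests)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)

# The skipped test still counts as "run", and the suite reports
# success; only result.skipped records what didn't execute.
print(result.testsRun, len(result.skipped))  # 2 1
```

Any fix would need to compare `result.skipped` (or per-test coverage) against a platform-specific expectation, since a whole-module "unexpected skip" check never sees this.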
History
Date User Action Args
2013-09-08 03:05:35 ncoghlan set recipients: + ncoghlan
2013-09-08 03:05:34 ncoghlan set messageid: <1378609534.94.0.186067069028.issue18968@psf.upfronthosting.co.za>
2013-09-08 03:05:34 ncoghlan link issue18968 messages
2013-09-08 03:05:34 ncoghlan create