This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

Author terry.reedy
Recipients ezio.melotti, michael.foord, rbcollins, terry.reedy
Date 2013-06-21.21:27:00
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1371850020.11.0.528920121115.issue18232@psf.upfronthosting.co.za>
In-reply-to
Content
I do not quite see the need to complicate the interface for most users in a way that does not really solve all of the realistic problems.

import unittest
unittest.main()
----------------------------------------------------------------------
Ran 0 tests in 0.000s

OK
---
It seems to me that a continuous integration system should parse out the counts of tests run, passed, failed or errored, and skipped (or use a lower-level interface to grab the numbers before they are printed), report them, and compare them to previous numbers. Even one extra skip might be something that needs to be explained. An 'arbitrary' threshold could easily fail to detect real problems.
History
Date User Action Args
2013-06-21 21:27:00terry.reedysetrecipients: + terry.reedy, rbcollins, ezio.melotti, michael.foord
2013-06-21 21:27:00terry.reedysetmessageid: <1371850020.11.0.528920121115.issue18232@psf.upfronthosting.co.za>
2013-06-21 21:27:00terry.reedylinkissue18232 messages
2013-06-21 21:27:00terry.reedycreate