This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in the Python Developer's Guide.

classification
Title: running a suite with no tests is not an error
Type: enhancement Stage: patch review
Components: Tests Versions: Python 3.10
process
Status: open Resolution:
Dependencies: Superseder:
Assigned To:
Nosy List: ezio.melotti, kamilturek, mgorny, michael.foord, rbcollins, terry.reedy
Priority: normal Keywords: patch

Created on 2013-06-16 19:32 by rbcollins, last changed 2022-04-11 14:57 by admin.

Pull Requests
URL       Status  Linked
PR 24893  open    mgorny, 2021-03-16 12:24
Messages (8)
msg191282 - Author: Robert Collins (rbcollins) * (Python committer) Date: 2013-06-16 19:32
In bug https://bugs.launchpad.net/subunit/+bug/586176 I recorded a user request: if no tests are found, tools consuming subunit streams should be able to consider that an error.

There is an analogous situation, though: if discovery returns an empty suite without error, running the resulting suite is worthless, since it has no tests. This is a bit of a slippery slope - what if discovery finds one test when there should be thousands?

Anyhow, I'm filing this because there have been a few times when things went completely wrong and this would have helped CI systems detect it. (For instance, the tests package was missing entirely, but tests were being scanned for in the whole source tree, so no discovery-level error occurred.)

I'm thinking I'll add a '--min-tests=X' parameter to unittest.main, with the semantics that if fewer than X tests are executed, the test run will be considered a failure. Folks can set this to 1 for the special case, or to any figure they want for larger suites.
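For illustration, here is a rough sketch of how that check could be emulated around unittest.main() today; the --min-tests flag itself is hypothetical, and the MIN_TESTS constant just stands in for its value.

import sys
import unittest

MIN_TESTS = 1  # the value a hypothetical --min-tests=X flag would carry

if __name__ == "__main__":
    # exit=False makes unittest.main() return instead of calling sys.exit()
    program = unittest.main(exit=False)
    result = program.result
    if result.testsRun < MIN_TESTS:
        print("error: ran %d tests, expected at least %d"
              % (result.testsRun, MIN_TESTS), file=sys.stderr)
        sys.exit(1)
    sys.exit(0 if result.wasSuccessful() else 1)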
msg191611 - Author: Terry J. Reedy (terry.reedy) * (Python committer) Date: 2013-06-21 21:27
I do not quite see the need to complicate the interface for most users in a way that does not really solve all of the realistic problems.

import unittest
unittest.main()

# With no tests defined, this prints:
# Ran 0 tests in 0.000s
#
# OK
---
It seems to me that a continuous integration system should parse out the numbers of tests run, passed, failed or errored, and skipped (or use a lower-level interface to grab the numbers before they are printed), report them, and compare them to previous numbers. Even one extra skip might be something to be explained. An 'arbitrary' figure could easily fail to detect real problems.
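As a sketch of that lower-level route (assuming a 'tests' start directory; this is illustrative, not part of any proposal), the counts can be read off the TestResult object instead of being parsed from the printed summary:

import unittest

suite = unittest.TestLoader().discover("tests")   # assumed start directory
result = unittest.TextTestRunner(verbosity=0).run(suite)

# numbers a CI system could record and compare against the previous run
counts = {
    "run": result.testsRun,
    "failures": len(result.failures),
    "errors": len(result.errors),
    "skipped": len(result.skipped),
}
print(counts)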
msg192331 - Author: Ezio Melotti (ezio.melotti) * (Python committer) Date: 2013-07-05 08:21
> I'm thinking I'll add a '--min-tests=X' parameter to unittest.main,
> with the semantics that if fewer than X tests are executed, the test
> run will be considered a failure,

The minimum number of tests is a fast-moving target, and unless you know exactly how many tests you have and use that value, missing tests will go undetected.  If you only want to distinguish between 0 and more tests, a boolean flag is enough, but checking that at least 1 test in the whole test suite is run is quite pointless IMHO (I assume it's quite easy to notice if/when it happens).

Making this per-module or even per-class would be more interesting (because these problems are harder to spot), but OTOH there's no way to know for sure whether this is what the user wants.  A good compromise might be a boolean flag that generates a warning based on some heuristic (e.g. test discovery found a test_*.py file that defines no tests, or a TestCase class that defines no test_* methods and has no subclasses (or has no subclasses with test_* methods)).
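For concreteness, a very rough sketch of one such heuristic (illustrative only, not a proposed implementation): scan a test module for TestCase subclasses that end up with no test methods at all and emit a warning.

import inspect
import unittest
import warnings

def warn_on_empty_test_classes(module):
    # Warn about TestCase subclasses in `module` that have no test* methods
    # (including inherited ones).
    for name, obj in inspect.getmembers(module, inspect.isclass):
        if issubclass(obj, unittest.TestCase) and obj is not unittest.TestCase:
            if not any(attr.startswith("test") for attr in dir(obj)):
                warnings.warn("%s.%s defines no test methods"
                              % (module.__name__, name))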
msg226617 - Author: Robert Collins (rbcollins) * (Python committer) Date: 2014-09-08 23:27
@Terry in principle you're right that any number of things can go wrong, but in practice what we see is either catastrophic failure, where nothing is loaded at all *and* no error is returned, or localised failure, where the deferred reporting of failed imports serves well enough.

The former is caused by things like the wrong path in a configuration file.

@ezio sure - a boolean option would meet the needs reported to me; I was suggesting a specific implementation in an attempt to be generic enough that we wouldn't need to maintain two things if more were added in the future.
msg226620 - Author: Terry J. Reedy (terry.reedy) * (Python committer) Date: 2014-09-09 03:58
You missed my point, which is that tools consuming subunit streams are already able to consider 'no tests found' to be an error. Conversely, when I run the suite on my Windows box, I usually consider only 1 or 2 errors to be success. After unittest reports actual results, the summary pass/fail judgment is only advisory.

To be really flexible and meet all needs for automated adjustment of pass/fail, the new parameter should be a function that gets the numbers and at least the set of tests that 'failed'.
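A sketch of the shape of that idea (the hook is hypothetical; no such parameter exists in unittest): a user-supplied function receives the result object and returns the final pass/fail verdict.

import sys
import unittest

def verdict(result):
    # example policy: require at least one test, tolerate up to 2 failures/errors
    return result.testsRun > 0 and len(result.failures) + len(result.errors) <= 2

suite = unittest.TestLoader().discover("tests")   # assumed start directory
result = unittest.TextTestRunner().run(suite)
sys.exit(0 if verdict(result) else 1)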
msg226631 - Author: Michael Foord (michael.foord) * (Python committer) Date: 2014-09-09 09:03
I'd agree that a test run that actually runs zero tests almost always indicates an error, and it would be better if this were made clear.

I have this problem a great deal with Go, where the test tools are awful, and it's very easy to think you have a successful test run (PASS) when you actually ran zero tests.

Particularly with discovery, you will want to know when your invocation is wrong.

I'm agnostic about a new "--min-tests" parameter, but a run that finds zero tests should return a non-zero exit code and display a warning.
msg388669 - Author: Michał Górny (mgorny) * Date: 2021-03-14 12:35
I'm not convinced we need something that complex here, but I think it would make sense to make 'unittest discover' fail when it doesn't discover a single test.  As packagers, we've been bitten more than once by packages whose tests suddenly stopped being discovered, and it would be really helpful if we could catch this automatically without having to resort to hacks.
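One such hack, as a sketch (assuming a 'tests' start directory): count the discovered tests with countTestCases() and fail the run when the count is zero.

import sys
import unittest

suite = unittest.TestLoader().discover("tests")   # assumed start directory
if suite.countTestCases() == 0:
    print("error: no tests were discovered", file=sys.stderr)
    sys.exit(1)
result = unittest.TextTestRunner().run(suite)
sys.exit(0 if result.wasSuccessful() else 1)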
msg388682 - Author: Terry J. Reedy (terry.reedy) * (Python committer) Date: 2021-03-14 19:36
With more experience, I agree that 0/0 tests passing should not be a pass.
History
Date                 User            Action  Args
2022-04-11 14:57:46  admin           set     github: 62432
2021-03-16 12:25:51  mgorny          set     versions: + Python 3.10, - Python 3.4
2021-03-16 12:24:03  mgorny          set     keywords: + patch; stage: patch review; pull_requests: + pull_request23655
2021-03-14 19:36:02  terry.reedy     set     messages: + msg388682
2021-03-14 18:03:35  kamilturek      set     nosy: + kamilturek
2021-03-14 12:35:27  mgorny          set     nosy: + mgorny; messages: + msg388669
2014-09-09 09:03:49  michael.foord   set     messages: + msg226631
2014-09-09 03:58:15  terry.reedy     set     messages: + msg226620
2014-09-08 23:27:07  rbcollins       set     messages: + msg226617
2013-07-05 08:21:33  ezio.melotti    set     messages: + msg192331
2013-06-21 21:27:00  terry.reedy     set     nosy: + terry.reedy; messages: + msg191611
2013-06-16 20:08:48  ezio.melotti    set     nosy: + ezio.melotti; type: enhancement; components: + Tests; versions: + Python 3.4
2013-06-16 19:32:39  rbcollins       create