This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

Author rbcollins
Recipients michael.foord, r.david.murray, rbcollins, vstinner
Date 2014-09-10.02:26:23
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <>
Thanks; I'm still learning how to get the system here to jump appropriately :). I thought I'd told hg to reset me to trunk...

"You are right about the docs.  Reading that, I thought it was saying that errors would have a list of the errors that show up in the summary as Errors=N, and not the ones that show up as Failures=N, which of course is completely off base for two reasons (but, then, I can never remember the difference between Failure and Error and always ignore what type the test failures are)."

Ah, so this is specifically about *loader* errors, nothing to do with the ERROR and FAILURE concepts on the TestResult class; that definitely needs to be made clearer.

"Anyway, you probably want to talk about actual error types.  I understand ImportError, but I have no idea what would trigger the 'Failed to call load_tests' error.  Nor is it clear what would be a fatal error (easier just to say which errors are trapped, I think)."

'Failed to call load_tests' is an existing error that is raised when a module's load_tests function itself raises an exception.

e.g. put this in a test module:

def load_tests(loader, tests, pattern):
    raise Exception('fred')

to see it with/without the patch. I'll take a stab at improving the docs in a bit.

"It should also be mentioned that the contents of the list are an error message followed by a full traceback.  And, frankly, I'm not sure why this is useful, and in particular why this implementation satisfies your use case."

Ah! So I have an external runner which can enumerate the tests without running them. This is useful when the test suite is being distributed across many machines: simple hash-based distribution can produce very uneven running times, so enumerating the tests that need to run and then scheduling based on runtime (or other metadata) gets better results. If the test suite can't be imported I need to show the import failure to users so they can fix it, but since the test suite may take hours to run (or may not even be runnable locally) I need to do this without running the tests. Thus a simple list of the tracebacks encountered while loading the test suite is sufficient. Where would be a good place to make this clearer?
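A sketch of that enumeration use case (the helper `iter_tests` and the module name `no_such_module_xyz` are made up for illustration; assumes the patched loader with an `errors` attribute, i.e. Python 3.5+):

```python
import unittest

def iter_tests(suite):
    """Recursively flatten a TestSuite into individual test cases."""
    for test in suite:
        if isinstance(test, unittest.TestSuite):
            yield from iter_tests(test)
        else:
            yield test

loader = unittest.TestLoader()
# A broken import no longer blows up the loader; it yields a synthetic
# failing test and records the traceback on loader.errors.
suite = loader.loadTestsFromName('no_such_module_xyz')

# Enumerate test ids without running anything.
for test in iter_tests(suite):
    print(test.id())

# Show load-time failures to the user, again without running the suite.
for err in loader.errors:
    print(err)
```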
Date User Action Args
2014-09-10 02:26:24  rbcollins  set  recipients: + rbcollins, vstinner, r.david.murray, michael.foord
2014-09-10 02:26:24  rbcollins  set  messageid: <>
2014-09-10 02:26:24  rbcollins  link  issue19746 messages
2014-09-10 02:26:23  rbcollins  create