regrtest/buildbot: test run marked as failure even when re-run succeeds #68939
Comments
The buildbots all run the test suite with the '-w' flag, which re-runs any tests that failed in the main test sequence at a higher verbosity level. More often than not the re-run tests succeed, but the exit code is still 1, so the build is marked as a failure. The simplest change I'd like would be to exit(0) iff all failed tests pass on the re-run. Alternatively, we could get a bit fancier and exit with some other return code, then adjust the build master to interpret that code as "passed, with warnings" and mark the build amber rather than red.
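A minimal sketch of the proposed exit-code logic (this is not the actual regrtest code; the function name and parameters are hypothetical, chosen only to illustrate option 1 versus the "amber" alternative):

```python
# Hypothetical helper illustrating the proposal: exit 0 when every test
# that failed in the main run passes on the -w re-run; otherwise keep
# the failing exit code. EXIT_RERUN_OK is an invented code for the
# "passed, with warnings" / amber variant.
EXIT_OK = 0
EXIT_FAILED = 1
EXIT_RERUN_OK = 3  # hypothetical "amber" code, not used by real regrtest

def choose_exit_code(bad, rerun_passed, amber=False):
    """bad: tests that failed the main run; rerun_passed: the subset
    that then passed when re-run in verbose mode."""
    still_failing = [t for t in bad if t not in set(rerun_passed)]
    if still_failing:
        return EXIT_FAILED
    if bad and amber:
        return EXIT_RERUN_OK  # everything passed, but only on re-run
    return EXIT_OK

print(choose_exit_code(["test_curses"], ["test_curses"]))  # 0
print(choose_exit_code(["test_curses"], []))               # 1
```

With `amber=True`, a build master could map the extra return code to a warning state instead of red.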
I think option 1 is to be preferred. One of the things we've been discussing for the workflow is gating on the buildbots passing, and the way that works with flaky tests is that if the check fails, you just run the test again to get a green so the patch can be gated in. From that perspective, if the tests pass on re-run the result is most useful if it is green. Unless we want to say amber is OK for gating... but in terms of cognitive load I think green is better. After all, our current green state is morally equivalent to running the tests again and having them pass.
Here's a patch.
New changeset 6987a9c7dde9 by Zachary Ware in branch '2.7':
New changeset 9964edf2fd1e by Zachary Ware in branch '3.4':
New changeset 9d1f6022261d by Zachary Ware in branch '3.5':
New changeset 6f67c74608b6 by Zachary Ware in branch 'default':
While running a manual test (make buildbottest) on my 2.7 Ubuntu buildbot, I ran into an exception in this patch. The tail end of the test run:

[401/401/1] test_signal
379 tests OK.
1 test failed:
test_curses
21 tests skipped:
test_aepack test_al test_applesingle test_bsddb185 test_cd test_cl
test_dl test_gl test_imgfile test_kqueue test_linuxaudiodev
test_macos test_macostools test_msilib test_ossaudiodev
test_scriptpackages test_startfile test_sunaudiodev test_winreg
test_winsound test_zipfile64
Those skips are all expected on linux2.
Re-running failed tests in verbose mode
Traceback (most recent call last):
File "./Lib/test/regrtest.py", line 1598, in <module>
main()
File "./Lib/test/regrtest.py", line 655, in main
for test in bad[:]:
TypeError: 'set' object has no attribute '__getitem__'

The code is attempting to iterate over a sliced copy of bad (bad[:]) because of possible later mutation, but by that point, if you had failures, bad is a set, from the block shortly above where the environment-changed list is subtracted out. I was testing 2.7, but I think the issue affects all branches. Perhaps list(bad) instead of bad[:]?
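A minimal reproduction of the failure described above, along with the suggested fix (the test name is just a placeholder):

```python
# Slicing a set raises TypeError: sets support neither indexing nor
# slicing, so bad[:] blows up once bad has become a set. (Python 2's
# message mentions __getitem__; Python 3's says "not subscriptable",
# but the failure is the same.)
bad = {"test_curses"}

try:
    copy = bad[:]          # what regrtest attempted
except TypeError as exc:
    print("slicing failed:", type(exc).__name__)

# The suggested fix: take a list copy instead, which is safe to
# iterate while the underlying collection is mutated.
copy = list(bad)
print(copy)
```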
Ah, the problem is on 2.7 only; 3.x calls sorted() on the result of the set operation. The set operation should just go away, though; we no longer count ENV_CHANGED as 'bad'. Will fix shortly.
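To illustrate why 3.x was unaffected: sorted() always returns a plain list, regardless of the input type, and lists do support slicing, so bad[:] works after that call.

```python
# sorted() on a set returns a list, which supports slicing; this is
# why the bad[:] slice only failed on the 2.7 branch, where sorted()
# was not applied to the set difference.
bad = sorted({"test_b", "test_a"})
print(bad)      # ['test_a', 'test_b']
print(bad[:])   # slicing the list copy is fine
```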
New changeset 7d69b214e668 by Zachary Ware in branch '2.7':