False positives when running leak tests with -R 1:1 #78054
Comments
I am not sure if this is a problem we can do something about, but right now, if you run the refleak tests with a low number of repetitions, they report leaks:

```
./python -m test test_list -R 1:1
test_list leaked [3] memory blocks, sum=3
== Tests result: FAILURE ==
1 test failed:
Total duration: 1 sec 759 ms
```

This also happens with other low numbers:

```
./python -m test test_list -R 1:2
```

Obviously, using these numbers is "wrong" (there are not enough repetitions to get meaningful results). The only problem I see is that if you are not aware of this limitation (in the case this is a real limitation on how the refleak machinery works), the failure report is misleading. Should we leave this as it is or try to improve the output? |
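A minimal sketch (not CPython internals) of why the first runs can report extra memory blocks even without a real leak: an internal cache grows on its first use and then stays allocated, so only later runs are stable.

```python
# Toy cache standing in for CPython's internal free lists (hypothetical,
# for illustration only): the first run populates it, later runs do not.
_cache = {}

def compute(n):
    # The first call for a given n allocates a cache entry; without a
    # warmup run, this growth looks like a memory-block "leak".
    if n not in _cache:
        _cache[n] = n * n
    return _cache[n]

def cache_growth_during_run():
    """Return how many new cache entries one test run creates."""
    before = len(_cache)
    for i in range(10):
        compute(i)
    return len(_cache) - before

growth = [cache_growth_during_run() for _ in range(3)]
print(growth)  # the first run populates the cache, later runs are stable
```

This is why measuring only the first one or two runs, as `-R 1:1` does, reports spurious "leaks" that disappear once warmup runs are discarded.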
Memory block leaks are very different from reference leaks. Memory blocks are low-level allocations. Python has *many* internal caches: tuple uses an internal "free list", for example. The first runs of the tests (2 runs when using -R 2:3) are used to warm up these caches.

Maybe regrtest -R should raise an error, or at least emit a big warning, when used with fewer than 3 warmup runs.

By the way, regrtest has a very old bug: -R 3:3 runs the test 7 times, not 6 times. See runtest_inner() in Lib/test/libregrtest/runtest.py:

```python
test_runner()
if ns.huntrleaks:
    refleak = dash_R(the_module, test, test_runner, ns.huntrleaks)
```

The code should be:

```python
if ns.huntrleaks:
    refleak = dash_R(the_module, test, test_runner, ns.huntrleaks)
else:
    test_runner()
```

Do you want to write a PR for that? It should make our Refleaks buildbots 1/7 faster ;-) |
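The off-by-one can be sketched with a hypothetical run counter (not the real regrtest code): the buggy version calls `test_runner()` unconditionally before `dash_R` repeats the test once per warmup and once per repetition.

```python
# Hypothetical counter illustrating the regrtest off-by-one: with the
# bug, test_runner() runs once before the warmup + repetition loop.
def count_runs(warmups, repetitions, buggy):
    runs = 0

    def test_runner():
        nonlocal runs
        runs += 1

    if buggy:
        test_runner()  # the unconditional extra run (the bug)
    # dash_R executes the test once per warmup and once per repetition
    for _ in range(warmups + repetitions):
        test_runner()
    return runs

print(count_runs(3, 3, buggy=True))   # -R 3:3 with the bug: 7 runs
print(count_runs(3, 3, buggy=False))  # after the fix: 6 runs
```

With -R 3:3 that is 7 runs instead of 6, which is where the "1/7 faster" estimate for the buildbots comes from.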
Let's make the buildbots happier! |
I tested PR 7735:

```
vstinner@apu$ ./python -m test -R 0:0 test_os -m test_access
test_os leaked [] references, sum=0
== Tests result: FAILURE ==
1 test failed:
Total duration: 63 ms

vstinner@apu$ ./python -m test -R 0:1 test_os -m test_access
== Tests result: FAILURE ==
1 test failed:
Total duration: 95 ms

vstinner@apu$ ./python -m test -R 1:0 test_os -m test_access
== Tests result: FAILURE ==
1 test failed:
Total duration: 95 ms
```

Hmm, we should require at least one run and at least one warmup: -R 1:1 should be the bare minimum.

By the way, it seems like negative numbers are currently accepted, whereas they don't make sense:

```
vstinner@apu$ ./python -m test -R 0:-2 test_list
```

It would fix this bug as well. |
Updated PR 7735 with the checks for invalid parameters. |
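A minimal sketch of such a check, assuming a hypothetical `parse_huntrleaks` helper (the real validation lives in regrtest's command-line parsing, and the real option also accepts an optional trailing filename field):

```python
# Hypothetical validator for "-R warmups:repetitions" values; rejects
# the zero and negative counts that regrtest used to silently accept.
def parse_huntrleaks(value):
    warmups, _, repetitions = value.partition(":")
    warmups, repetitions = int(warmups), int(repetitions)
    if warmups < 1 or repetitions < 1:
        raise ValueError(
            "-R needs at least 1 warmup run and 1 repetition")
    return warmups, repetitions

print(parse_huntrleaks("2:3"))  # a valid value is returned as a tuple
```

With this check, values like `-R 0:0`, `-R 1:0`, and `-R 0:-2` fail fast with a clear error instead of producing a meaningless run.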
Pablo Galindo Salgado fixed the bug in master, I backported his fix to 2.7, 3.6 and 3.7 branches. Thanks Pablo! |
Linked PRs (titles truncated in migration):

- …runtest.py and add checks for invalid -R parameters #7735
- …runtest.py. #7736
- …runtest.py and add checks for invalid -R parameters (GH-7735) #7933
- …runtest.py and add checks for invalid -R parameters (GH-7735) #7934

Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.