
Author steven.daprano
Recipients agthorr, belopolsky, christian.heimes, ethan.furman, gregory.p.smith, mark.dickinson, oscarbenjamin, pitrou, ronaldoussoren, sjt, steven.daprano, stutzbach, terry.reedy, tshepang, vajrasky
Date 2013-08-18.12:27:48
Message-id <1376828869.27.0.706031333422.issue18606@psf.upfronthosting.co.za>
Content
On 15/08/13 22:58, ezio.melotti@gmail.com wrote:
> http://bugs.python.org/review/18606/diff/8927/Lib/statistics.py#newcode277
> Lib/statistics.py:277: assert isinstance(x, float) and
> isinstance(partials, list)
> Is this a good idea?

I think so. add_partials is internal/private, and so I don't have to worry about the caller providing wrong arguments, say a non-float. But I do want some testing to detect coding errors, and using assert for this sort of internal pre-condition is exactly what assert is designed for.
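
For context, the pattern looks something like this (a minimal sketch: the body is the standard Shewchuk partial-sums update, not necessarily the exact code in the patch):

    def add_partials(x, partials):
        """Add float x into the list of partials, in place."""
        # Internal helper: callers are trusted, so a bare assert both
        # documents and checks the pre-condition during development,
        # and disappears entirely under python -O.
        assert isinstance(x, float) and isinstance(partials, list)
        i = 0
        for y in partials:
            if abs(x) < abs(y):
                x, y = y, x
            hi = x + y
            lo = y - (hi - x)
            if lo:
                partials[i] = lo
                i += 1
            x = hi
        partials[i:] = [x]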


> http://bugs.python.org/review/18606/diff/8927/Lib/test/test_statistics.py#newcode144
> Lib/test/test_statistics.py:144: assert data != sorted(data)
> Why not assertNotEqual?

I use bare asserts for testing code logic, even if the code is test code. So if I use self.assertSpam(...) then I'm performing a unit test of the module being tested. If I use a bare assert, I'm asserting something about the test logic itself.
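
To make the distinction concrete (an illustrative sketch, assuming the patch's statistics module is importable; the test name and data are made up):

    import statistics
    import unittest

    class MeanTest(unittest.TestCase):
        def test_mean_of_unsorted_data(self):
            data = [3, 1, 4, 1, 5, 9]
            # Bare assert: sanity-check the test's own fixture. If
            # this fires, the test data is broken, not the module.
            assert data != sorted(data)
            # unittest assertion: the actual claim under test.
            self.assertEqual(statistics.mean(data), 23/6)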


> http://bugs.python.org/review/18606/diff/8927/Lib/test/test_statistics_approx.py
> File Lib/test/test_statistics_approx.py (right):
>
> http://bugs.python.org/review/18606/diff/8927/Lib/test/test_statistics_approx.py#newcode1
> Lib/test/test_statistics_approx.py:1: """Numeric approximated equal
> comparisons and unit testing.
> Do I understand correctly that this is just a helper module used in
> test_statistics and that it doesn't actually test anything from the
> statistics module?

Correct.


> http://bugs.python.org/review/18606/diff/8927/Lib/test/test_statistics_approx.py#newcode137
> Lib/test/test_statistics_approx.py:137: # and avoid using
> TestCase.almost_equal, because it sucks
> Could you elaborate on this?

Ah, I misspelled "TestCase.assertAlmostEqual". As for why I avoid it:

- Using round() to test for equal-to-some-tolerance is IMO quite an idiosyncratic way of doing approx-equality tests. I've never seen anyone do it that way before. It surprises me.

- It's easy to think that ``places`` means significant figures, not decimal places.

- There's now a delta argument that plays the same role as my absolute error tolerance ``tol``, but there is no relative error argument (see the sketch after this list).

- You can't set a per-instance error tolerance.
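
For comparison, here is roughly the idea behind approx_equal with both tolerances (a minimal sketch, not the exact code in the patch):

    import math

    def approx_equal(x, y, tol=1e-12, rel=1e-7):
        # Pass if x and y agree to within an absolute tolerance *or*
        # a relative tolerance; pass 0 to disable either test.
        if tol < 0 or rel < 0:
            raise ValueError('tolerances must be non-negative')
        if x == y:  # catches exact matches, including infinities
            return True
        abs_err = abs(x - y)
        if math.isnan(abs_err) or math.isinf(abs_err):
            return False
        return abs_err <= tol or abs_err <= rel*max(abs(x), abs(y))

    # assertAlmostEqual(first, second, places=7), by contrast, is
    # essentially round(first - second, places) == 0: "places" counts
    # decimal places, not significant figures, and there is no
    # relative test at all.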


> http://bugs.python.org/review/18606/diff/8927/Lib/test/test_statistics_approx.py#newcode241
> Lib/test/test_statistics_approx.py:241: assert len(args1) == len(args2)
> Why not assertEqual?

As above, I use bare asserts to test the test logic, and assertSpam methods to perform the test. In this case, I'm confirming that I haven't created dodgy test data.


> http://bugs.python.org/review/18606/diff/8927/Lib/test/test_statistics_approx.py#newcode255
> Lib/test/test_statistics_approx.py:255: self.assertTrue(approx_equal(b,
> a, tol=0, rel=rel))
> Why not assertApproxEqual?

Because here I'm testing the approx_equal function itself. assertApproxEqual is built on top of approx_equal, so I can't use it to test the very function it depends on.
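
Something along these lines (a sketch, reusing the approx_equal signature from the sketch above):

    import unittest

    class ApproxEqualSymmetryTest(unittest.TestCase):
        def test_symmetry_of_relative_test(self):
            # approx_equal is itself the function under test here, so
            # fall back on assertTrue rather than an assertApproxEqual
            # helper that is built on top of it.
            a, b, rel = 1.0, 1.0 + 1e-8, 1e-7
            self.assertTrue(approx_equal(a, b, tol=0, rel=rel))
            self.assertTrue(approx_equal(b, a, tol=0, rel=rel))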