
Improve coverage and rigour of test.test_math #70228

Closed
jeff5 mannequin opened this issue Jan 7, 2016 · 20 comments


jeff5 mannequin commented Jan 7, 2016

BPO 26040
Nosy @malemburg, @mdickinson, @ericvsmith, @tiran, @ned-deily, @serhiy-storchaka, @jeff5, @koobs
Files
  • iss26040.patch: Patch that improves coverage and addresses the uneven accuracy
  • extra_cmath_testcases.py: Generate the extra test cases proposed for cmath_testcases.txt
  • stat_math.py: Demonstrates coverage and accuracy problems in test_math v3
  • iss26040_v2.patch
  • iss26040_v3.patch
  • iss26040_v4.patch
  Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = 'https://github.com/mdickinson'
    closed_at = <Date 2016-09-04.08:36:23.465>
    created_at = <Date 2016-01-07.15:41:22.750>
    labels = ['type-bug', 'tests']
    title = 'Improve coverage and rigour of test.test_math'
    updated_at = <Date 2016-09-04.08:36:23.463>
    user = 'https://github.com/jeff5'

    bugs.python.org fields:

    activity = <Date 2016-09-04.08:36:23.463>
    actor = 'mark.dickinson'
    assignee = 'mark.dickinson'
    closed = True
    closed_date = <Date 2016-09-04.08:36:23.465>
    closer = 'mark.dickinson'
    components = ['Tests']
    creation = <Date 2016-01-07.15:41:22.750>
    creator = 'jeff.allen'
    dependencies = []
    files = ['42166', '42191', '42192', '44251', '44253', '44254']
    hgrepos = []
    issue_num = 26040
    keywords = ['patch']
    message_count = 20.0
    messages = ['257695', '261789', '261855', '261933', '273472', '273484', '273493', '273507', '273509', '273767', '273853', '273856', '273857', '273872', '273874', '274320', '274321', '274324', '274346', '274351']
    nosy_count = 10.0
    nosy_names = ['lemburg', 'mark.dickinson', 'eric.smith', 'christian.heimes', 'ned.deily', 'stutzbach', 'python-dev', 'serhiy.storchaka', 'jeff.allen', 'koobs']
    pr_nums = []
    priority = 'normal'
    resolution = 'fixed'
    stage = 'resolved'
    status = 'closed'
    superseder = None
    type = 'behavior'
    url = 'https://bugs.python.org/issue26040'
    versions = ['Python 2.7', 'Python 3.5', 'Python 3.6']


    jeff5 mannequin commented Jan 7, 2016

    In many cases, test_math does not exercise the functions defined by the math module, other than at a few values needing special handling. Where it does test a function with values representative of the domain, the accuracy demanded is often loose in one place and tight in another. By comparison, test_cmath uses more interesting values and a rational approach to accuracy.

    In most implementations, math simply proxies an expertly built (and tested) platform library or firmware, so it is unlikely we are the victims of mathematical inexactitude.

    We observed this as a problem in the Jython project, where failures in cmath were traced eventually to the naive (but passing) implementation of some functions in math. For Jython, we now supplement test_math with stronger tests more like those in test_cmath. So this issue is really an offer to fold that work back into the CPython test_math, to strengthen testing for all.

    Coverage:
    --------
    In the attached program, in the coverage section, I logged the calls that test_math makes to selected functions to see what range of values they use. It produces results like:

    sqrt
    [9.881e-324, 7.905e-323, 1.862e-309, 1e-305, 1e-150, ..., 2, 23, 1e+16, 1e+150, 1e+299]
    [-1]
    [-0.0, -inf, 0.0, inf, nan]

    sin
    [1.571]
    [-1.571]
    [-0.0, -inf, 0.0, inf, nan]

    exp
    []
    [-745]
    [-0.0, -inf, 0.0, inf, nan]

    sinh
    []
    []
    [-0.0, -inf, 0.0, inf, nan]

    The three rows here are positive, negative and "special" arguments, and "..." means I abridged the list.
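    A minimal sketch of the kind of logging involved (illustrative only -- this is not the attached stat_math.py, and the names are made up):

    import math

    observed = {}  # function name -> list of arguments seen

    def logged(fn):
        """Return a wrapper around fn that records every argument it receives."""
        def wrapper(x):
            observed.setdefault(fn.__name__, []).append(x)
            return fn(x)
        return wrapper

    # Patch a few math functions before running test_math, then inspect
    # `observed` afterwards to see which parts of each domain were exercised.
    for name in ("sqrt", "sin", "exp", "sinh"):
        setattr(math, name, logged(getattr(math, name)))

    def buckets(values):
        """Split recorded arguments into positive, negative and "special" values."""
        finite_nonzero = [v for v in values if math.isfinite(v) and v != 0]
        pos = sorted(v for v in finite_nonzero if v > 0)
        neg = sorted(v for v in finite_nonzero if v < 0)
        special = [v for v in values if not math.isfinite(v) or v == 0]
        return pos, neg, special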

    Accuracy:
    --------
    Where functions are tested, results are expected to match with a maximum absolute error of 1e-5. On sqrt(1e150), this demands bit-perfect accuracy; on sqrt(1e-150), it demands virtually nothing.

    In the attached program, in the accuracy section, I wrap most of the functions so as to fuzz their results by plus or minus 2e-6, and test_math still passes them. test_cmath uses a relative measure of error (so many ulps of the expected value), which is worth emulating.
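    To put numbers on that unevenness, here is a rough illustration (math.ulp only exists from Python 3.9 onwards and is used here just to show the scale; sqrt(1e150) and sqrt(1e-150) are about 1e75 and 1e-75):

    import math

    print(math.ulp(1e75))    # ~2.0e+59: an absolute tolerance of 1e-5 demands a bit-exact result
    print(math.ulp(1e-75))   # ~1.2e-91: the same tolerance permits roughly 1e86 ulps of error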

    If people think it better, coverage and accuracy can be tracked as separate issues.

    jeff5 mannequin added the tests and type-bug labels Jan 7, 2016

    jeff5 mannequin commented Mar 14, 2016

    Here is a patch that improves coverage and addresses the uneven accuracy. Required accuracy is now specified in ulps. Mostly, I have chosen 1 ulp, since this passed for me on an x86 architecture (and also on ARM), but this may be too ambitious.

    I have also responded to the comment relating to erfc:
    # XXX Would be better to weaken this test only
    # for large x, instead of for all x.

    I found I could not contribute the code I used to generate the additional test cases in Tools/scripts without failing test_tools. (It complained of a missing dependency. The generator uses mpmath.)

    mdickinson (member) commented:

    Big +1 for the idea, and thank you for doing this work. I can't promise to find time to do a thorough review in the near future (though I'll try).


    jeff5 mannequin commented Mar 17, 2016

    Thanks for the prompt acknowledgement and for accepting this to review.

    I have updated the coverage & tolerance demo program. Usage in the comments (in v3).

    I have also added the program I used to generate the extra test cases (needs mpmath -- easier to get working than mpf in the original Windows/Jython environment).
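    For reference, the general shape of such a generator (a hedged sketch only, not the attached program; the identifier and the output line format are illustrative approximations of the cmath_testcases.txt layout):

    import mpmath

    mpmath.mp.prec = 200   # many guard bits beyond the 53 of a C double

    def real_axis_case(ident, name, x):
        """Evaluate the function <name> at the double x in high precision and
        format a test line: 'id name arg_re arg_im -> expected_re expected_im'."""
        expected = float(getattr(mpmath, name)(mpmath.mpf(x)))  # rounded back to the nearest double
        return "{} {} {!r} 0.0 -> {!r} 0.0".format(ident, name, x, expected)

    print(real_axis_case("sinh9900", "sinh", 6.5e-164))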

    mdickinson self-assigned this Aug 23, 2016
    mdickinson (member) commented:

    I finally have some time to look at this. I've double-checked all the new cmath testcases against MPFR (via bigfloat), and they all look good. I plan to apply this in two stages: (1) apply the new cmath testcases, (2) apply the test_math changes, and (1.5) watch for buildbot failures in-between.


    python-dev mannequin commented Aug 23, 2016

    New changeset 1017215f5492 by Mark Dickinson in branch 'default':
    Issue bpo-26040 (part 1): add new testcases to cmath_testcases.txt. Thanks Jeff Allen.
    https://hg.python.org/cpython/rev/1017215f5492

    ned-deily (member) commented:

    Mark, many buildbots are unhappy. For example:

    ======================================================================
    FAIL: test_testfile (test.test_math.MathTests)
    ----------------------------------------------------------------------

    Traceback (most recent call last):
      File "/root/buildarea/3.x.angelico-debian-amd64/build/Lib/test/test_math.py", line 1106, in test_testfile
        self.ftest("%s:%s(%r)" % (id, fn, ar), result, er)
      File "/root/buildarea/3.x.angelico-debian-amd64/build/Lib/test/test_math.py", line 194, in ftest
        (name, value, expected))
    AssertionError: cosh0063:cosh(709.7827) returned 8.988349783319008e+307, expected 8.988349783319007e+307

    mdickinson (member) commented:

    Thanks, Ned. (And apologies.) The test_math tests are too strict in this case, which is one of the issues that Jeff's full patch fixes. It looks like my strategy of applying the patch in two pieces isn't going to work.

    I'll revert the change for now, and have another go at this tomorrow.



    jeff5 mannequin commented Aug 27, 2016

    Mark: Thanks for validating the additional cases so carefully.

    If you still want to apply it in stages then I suppose the change to the comparison logic could go first (untested idea), although that's also where I could most easily have made a mistake.

    mdickinson (member) commented:

    iss26040.patch didn't apply cleanly to master (because of the recent math.tau addition). Here's an updated patch.

    mdickinson (member) commented:

    I've reviewed the test_math portion of the patch (which looks good, thank you!), and made a few revisions in iss26040_v3.patch.

    1. I've rewritten the ulp() function to use struct. Same behaviour as before, but this way feels safer, since it doesn't rely on the exact form of the float.hex format. (A sketch of this helper and the to_ulps-based comparison from point 2 follows this list.)

    2. I've reworked the ulp-based comparison to use the old to_ulps function, instead of basing it on ulp(). That's mostly just a personal preference: I prefer the symmetry of the abs(to_ulps(got) - to_ulps(expected)) form, and I think it behaves slightly better at or near powers of two. (If expected == 1.0 and got == 1.0 - 2**-51, I want that to be seen as 4 ulps error rather than 2.)

    3. I increased the default ulp tolerance to 5 ulps, and increased the ulps tolerance for some of the special cases. There are some badly behaved libm implementations out there, and I don't want to break more buildbots than necessary.
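    A sketch of the two helpers described in points 1 and 2 (a reconstruction assuming IEEE 754 binary64 doubles, not the exact contents of iss26040_v3.patch):

    import math
    import struct

    def to_ulps(x):
        """Map a non-NaN float to an integer so that adjacent floats map to
        adjacent integers; abs(to_ulps(a) - to_ulps(b)) is then an error in ulps."""
        n = struct.unpack('<q', struct.pack('<d', x))[0]
        if n < 0:
            n = ~(n + 2 ** 63)   # fold negative floats below the non-negative range
        return n

    def ulp(x):
        """Return the gap from abs(x) up to the next float, read from the bit
        pattern via struct rather than from float.hex() text."""
        x = abs(float(x))
        if math.isinf(x) or math.isnan(x):
            return x
        n = struct.unpack('<q', struct.pack('<d', x))[0]
        x_next = struct.unpack('<d', struct.pack('<q', n + 1))[0]
        if math.isinf(x_next):
            # x was the largest finite float: measure the gap downwards instead.
            x_prev = struct.unpack('<d', struct.pack('<q', n - 1))[0]
            return x - x_prev
        return x_next - x

    # The symmetric comparison of point 2, applied to the example in its parentheses:
    assert abs(to_ulps(1.0 - 2**-51) - to_ulps(1.0)) == 4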

    mdickinson (member) commented:

    One more version; increased the default ulp_tol to 5 everywhere, and made some minor style fixes in nearby code.


    jeff5 mannequin commented Aug 29, 2016

    Mark: Thanks for doing my homework. Points 1 and 3 I can readily agree with. I must take another look at to_ulps() with your patch applied locally: I used the approach I did because I thought to_ulps() was incorrect in exactly those corners where you prefer it. I'll take a closer look.


    jeff5 mannequin commented Aug 29, 2016

    Ah, cunning: I can make sense of it in hex.

    >>> hex(to_ulps(expected))
    '0x3ff0000000000000'
    >>> hex(to_ulps(got))
    '0x3feffffffffffffc'
    >>> hex( to_ulps(got) - to_ulps(expected) )
    '-0x4'

    ... and what you've done with ulp then follows.

    In my version, a format like "{:d} ulps" was a bad idea when the error was a gross one, but your to_ulps is only piece-wise linear, so large differences are compressed.

    I'm pleased my work has mostly survived: here's hoping the house build-bots agree. erfc() is perhaps the last worry, but math & cmath pass on my machine.


    python-dev mannequin commented Sep 3, 2016

    New changeset e3dbe8b7279a by Mark Dickinson in branch 'default':
    Issue bpo-26040: Improve test_math and test_cmath coverage and rigour. Thanks Jeff Allen.
    https://hg.python.org/cpython/rev/e3dbe8b7279a

    mdickinson (member) commented:

    iss26040_v4.patch applied. I'm watching the buildbots; if everything looks good there, I'll close the issue.

    mdickinson (member) commented:

    Buildbots seem happy (or at least, no more unhappy than before). Closing.

    mdickinson (member) commented:

    "or at least, no more unhappy than before"

    Not quite true: test_math and test_cmath are failing for the x86 Tiger 3.x buildbot:

    ======================================================================
    FAIL: test_testfile (test.test_math.MathTests)
    ----------------------------------------------------------------------

    Traceback (most recent call last):
      File "/Users/db3l/buildarea/3.x.bolen-tiger/build/Lib/test/test_math.py", line 1190, in test_testfile
        '\n  '.join(failures))
    AssertionError: Failures in test_testfile:
      tan0064: tan(1.5707963267948961): expected 1978937966095219.0, got 1978945885716843.0 (error = 7.92e+09 (31678486496 ulps); permitted error = 0 or 5 ulps)

    ======================================================================
    FAIL: test_specific_values (test.test_cmath.CMathTests)
    ----------------------------------------------------------------------

    Traceback (most recent call last):
      File "/Users/db3l/buildarea/3.x.bolen-tiger/build/Lib/test/test_cmath.py", line 398, in test_specific_values
        msg=error_message)
      File "/Users/db3l/buildarea/3.x.bolen-tiger/build/Lib/test/test_cmath.py", line 147, in rAssertAlmostEqual
        '{!r} and {!r} are not sufficiently close'.format(a, b))
    AssertionError: tan0064: tan(complex(1.5707963267948961, 0.0))
    Expected: complex(1978937966095219.0, 0.0)
    Received: complex(1978945885716843.0, 0.0)
    Received value insufficiently close to expected value.

    It's probably not worth trying to fix this for such an ancient version of OS X. Re-opening; I'll aim to skip the test on the platform.
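    Purely as an illustration of that kind of platform-specific skip (hypothetical code; the actual follow-up lives in bpo-27953 and may be handled differently):

    import platform
    import sys
    import unittest

    def on_osx_tiger():
        """True only on OS X 10.4 (Tiger), where the system tan is badly off near pi/2."""
        return sys.platform == 'darwin' and platform.mac_ver()[0].startswith('10.4')

    class MathTests(unittest.TestCase):
        @unittest.skipIf(on_osx_tiger(), "system tan is inaccurate near pi/2 on OS X Tiger")
        def test_testfile(self):
            ...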

    mdickinson reopened this Sep 4, 2016
    mdickinson (member) commented:

    Opened bpo-27953 to track the issue with tan on OS X Tiger. Re-closing here.

    ezio-melotti transferred this issue from another repository Apr 10, 2022