
Reduce the number of comparisons for range checking. #67741

Closed

rhettinger opened this issue Mar 1, 2015 · 14 comments
Labels
interpreter-core (Objects, Python, Grammar, and Parser dirs), pending (The issue will be closed if no feedback is provided), performance (Performance or resource usage)

Comments

@rhettinger
Contributor

BPO 23553
Nosy @rhettinger, @pitrou, @serhiy-storchaka
Files
  • size_t.diff: Fix deque casts
  • bounds_check_list.diff: Faster bounds checking for lists.
  • bounds_check_deque.diff: Faster bounds checking for deques
  • valid_index.diff: Isolate the technique in an in-lineable function call
  • Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = None
    closed_at = None
    created_at = <Date 2015-03-01.00:50:38.601>
    labels = ['interpreter-core', 'performance']
    title = 'Reduce the number of comparisons for range checking.'
    updated_at = <Date 2015-03-02.04:13:50.021>
    user = 'https://github.com/rhettinger'

    bugs.python.org fields:

    activity = <Date 2015-03-02.04:13:50.021>
    actor = 'rhettinger'
    assignee = 'none'
    closed = False
    closed_date = None
    closer = None
    components = ['Interpreter Core']
    creation = <Date 2015-03-01.00:50:38.601>
    creator = 'rhettinger'
    dependencies = []
    files = ['38281', '38282', '38283', '38293']
    hgrepos = []
    issue_num = 23553
    keywords = ['patch']
    message_count = 13.0
    messages = ['236928', '236931', '236933', '236934', '236935', '236938', '236939', '236940', '236942', '236945', '236946', '236948', '237011']
    nosy_count = 5.0
    nosy_names = ['rhettinger', 'pitrou', 'Arfrever', 'python-dev', 'serhiy.storchaka']
    pr_nums = []
    priority = 'normal'
    resolution = None
    stage = None
    status = 'open'
    superseder = None
    type = 'performance'
    url = 'https://bugs.python.org/issue23553'
    versions = ['Python 3.5']

    @rhettinger
    Contributor Author

    Python's core is full of bounds checks like this one in Objects/listobject.c:

    static PyObject *
    list_item(PyListObject *a, Py_ssize_t i)
    {
        if (i < 0 || i >= Py_SIZE(a)) {
        ...

    Agner Fog's high-level language optimization guide, http://www.agner.org/optimize/optimizing_cpp.pdf, in section 14.2 "Bounds Checking", shows a way to fold this into a single check:

    -    if (i < 0 || i >= Py_SIZE(a)) {
    +    if ((unsigned)i >= (unsigned)(Py_SIZE(a))) {
             if (indexerr == NULL) {
                 indexerr = PyUnicode_FromString(
                     "list index out of range");

    The old generated assembly code looks like this:

    _list_item:
    subq $8, %rsp
    testq %rsi, %rsi
    js L227
    cmpq 16(%rdi), %rsi
    jl L228
    L227:
    ... <error reporting and exit > ...
    L228:
    movq 24(%rdi), %rax
    movq (%rax,%rsi,8), %rax
    addq $1, (%rax)
    addq $8, %rsp
    ret

    The new disassembly looks like this:

    _list_item:
    cmpl %esi, 16(%rdi)
    ja L227
    ... <error reporting and exit > ...
    L227:
    movq 24(%rdi), %rax
    movq (%rax,%rsi,8), %rax
    addq $1, (%rax)
    ret

    Note that the new code not only saves a comparison/conditional-jump pair; it also avoids the need to adjust %rsp on the way in and on the way out, for a net savings of four instructions along the critical path.

    When we have good branch prediction, the current approach is very low cost; however, Agner Fog's recommendation is never more expensive, is sometimes cheaper, saves a possible misprediction, and reduces the total code generated. All in all, it is a net win.

    I recommend we put in a macro of some sort so that this optimization gets expressed exactly once in the code and so that it has a good clear name with an explanation of what it does.

    @rhettinger rhettinger added the interpreter-core (Objects, Python, Grammar, and Parser dirs) and performance (Performance or resource usage) labels Mar 1, 2015
    @serhiy-storchaka
    Member

    Yes, this is a technique commonly used in STL implementations. This is why sizes and indices in STL are unsigned.

    But in the CPython implementation sizes are signed (Py_ssize_t). The problem with using this optimization (which is low-level rather than high-level) is that we need to know the unsigned version of the type of the compared values.

    -    if (i < 0 || i >= Py_SIZE(a)) {
    +    if ((unsigned)i >= (unsigned)(Py_SIZE(a))) {

    There is a bug here. The type of i and Py_SIZE(a) is Py_ssize_t, so when cast to unsigned int, the highest bits are lost. The correct type to cast to is size_t.

    In changeset 5942fd9ab335 you introduced a bug.
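
    As a stand-alone illustration of the hazard (this little program is only a sketch, not part of any patch, and uses ptrdiff_t as a stand-in for Py_ssize_t), consider an LP64 platform where unsigned int is 32 bits but the index type is 64 bits:

    #include <stdio.h>
    #include <stddef.h>   /* size_t, ptrdiff_t */

    int main(void)
    {
        ptrdiff_t i = (ptrdiff_t)1 << 32;   /* 4294967296: far out of range */
        ptrdiff_t size = 10;

        /* Buggy: casting to a 32-bit unsigned int truncates i to 0,
           so the out-of-range index is wrongly accepted (prints 1). */
        printf("%d\n", (unsigned)i < (unsigned)size);

        /* Correct: size_t is as wide as the index type, nothing is lost,
           and the index is rejected as expected (prints 0). */
        printf("%d\n", (size_t)i < (size_t)size);
        return 0;
    }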

    @rhettinger
    Contributor Author

    The type of i and Py_SIZE(a) is Py_ssize_t, so when cast to
    unsigned int, the highest bits are lost. The correct type to cast to is size_t.

    Yes, I had just seen that earlier today and was deciding whether to substitute size_t for the unsigned cast or whether to just revert. I believe size_t is guaranteed to be able to hold any array index, and that a cast from a non-negative Py_ssize_t would not lose bits.

    But in the CPython implementation sizes are signed (Py_ssize_t).
    The problem with using this optimization (which is low-level
    rather than high-level) is that we need to know the unsigned
    version of the type of the compared values.

    Wouldn't size_t always work for Py_ssize_t?

    @serhiy-storchaka
    Member

    Wouldn't size_t always work for Py_ssize_t?

    Yes. But it wouldn't work for, say, off_t.

    The consistent way would be to always use size_t instead of Py_ssize_t, but that boat has sailed.

    @rhettinger
    Contributor Author

    But it wouldn't work for say off_t.

    I'm only proposing a bounds-checking macro for the Py_ssize_t case, which is what all of our IndexError tests look for.

    Also, please look at the attached deque fix.

    @serhiy-storchaka
    Member

    It looks correct to me, but I would change the type and introduce a few new variables to get rid of the casts.

    @rhettinger
    Contributor Author

    Attaching a diff for the bounds checking in Objects/listobject.c.
    It looks like (size_t) casts are already done elsewhere in that file
    for various reasons.

    @rhettinger
    Contributor Author

    Also attaching a bounds checking patch for deques.

    @serhiy-storchaka
    Member

    Parentheses around Py_SIZE() are redundant.

    Are there any benchmarking results that show a speed-up? Such micro-optimization makes sense in tight loops, but the optimized source code looks more cumbersome and error-prone.

    @rhettinger
    Contributor Author

    I think the source in listobject.c would benefit from a well-named macro for this; that would provide the most clarity. For deques, I'll just put in the simple patch because it only applies to a place that is already doing unsigned arithmetic/comparisons.

    FWIW, I don't usually use benchmarking on these kinds of changes; the generated assembler is sufficiently informative. Benchmarking each tiny change risks getting trapped in a local minimum. Also, little timeit tests tend to branch the same way every time (which won't show the cost of prediction misses), tend to have all code and data in cache (so you don't see the effects of cache misses), and risk tuning to a single processor (in my case, Haswell). Instead, I look at the code generated by GCC and Clang to see that it does less work.
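
    For example (the file and function names here are made up), the two forms can be dropped into a small stand-alone C file, compiled with something like gcc -O2 -S or clang -O2 -S, and the emitted assembly for the two functions compared side by side:

    /* bounds_demo.c -- hypothetical demo file; compile with
       "gcc -O2 -S bounds_demo.c" and read the generated bounds_demo.s
       to compare the code emitted for the two forms. */
    #include <stddef.h>   /* size_t, ptrdiff_t */

    /* Two signed comparisons (and potentially two branches). */
    int bounds_two_tests(ptrdiff_t i, ptrdiff_t size)
    {
        return i >= 0 && i < size;
    }

    /* One unsigned comparison: a negative i wraps to a huge size_t value. */
    int bounds_one_test(ptrdiff_t i, ptrdiff_t size)
    {
        return (size_t)i < (size_t)size;
    }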

    @python-dev
    Mannequin

    python-dev mannequin commented Mar 1, 2015

    New changeset 1e89094998b2 by Raymond Hettinger in branch 'default':
    Issue bpo-23553: Use an unsigned cast to tighten-up the bounds checking logic.
    https://hg.python.org/cpython/rev/1e89094998b2

    @serhiy-storchaka
    Member

    My point is that if the benefit is too small (say, < 5% in microbenchmarks), it
    is not worth the code churn. Actually, my bar for microbenchmarks is higher,
    about 20%.

    @rhettinger
    Contributor Author

    FWIW, here is a small patch to show how this can be done consistently and with code clarity.
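
    To give an idea of the shape of that patch (this is only a sketch in the listobject.c context; the attached valid_index.diff is authoritative), the technique can be isolated in one small inlineable helper and used everywhere an index is range-checked:

    /* Sketch only -- see valid_index.diff for the real change.
       Nonzero iff 0 <= i < limit.  Casting to size_t folds both bounds
       into a single unsigned comparison without losing any bits. */
    static inline int
    valid_index(Py_ssize_t i, Py_ssize_t limit)
    {
        return (size_t)i < (size_t)limit;
    }

    A call site such as list_item() would then read:

        if (!valid_index(i, Py_SIZE(a))) {
            ...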

    @rhettinger rhettinger changed the title from "Reduce the number of comparison for range checking." to "Reduce the number of comparisons for range checking." Mar 2, 2015
    @ezio-melotti ezio-melotti transferred this issue from another repository Apr 10, 2022
    @iritkatriel
    Member

    Is there anything left to do on this issue? It seems that at least some of it was already done in c208308 and f1aa8ae.

    @iritkatriel iritkatriel added the pending (The issue will be closed if no feedback is provided) label Sep 9, 2022