
Optimize PyLong_AsDouble for single-digit longs #70476

Closed
1st1 opened this issue Feb 5, 2016 · 15 comments
Labels: interpreter-core (Objects, Python, Grammar, and Parser dirs), performance (Performance or resource usage)
1st1 (Member) commented Feb 5, 2016

BPO 26288
Nosy @mdickinson, @pitrou, @vstinner, @skrah, @serhiy-storchaka, @1st1
Files
  • as_double.patch
  • Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = 'https://github.com/1st1'
    closed_at = <Date 2016-02-08.21:13:53.105>
    created_at = <Date 2016-02-05.01:55:00.043>
    labels = ['interpreter-core', 'performance']
    title = 'Optimize PyLong_AsDouble for single-digit longs'
    updated_at = <Date 2016-02-08.21:13:53.104>
    user = 'https://github.com/1st1'

    bugs.python.org fields:

    activity = <Date 2016-02-08.21:13:53.104>
    actor = 'yselivanov'
    assignee = 'yselivanov'
    closed = True
    closed_date = <Date 2016-02-08.21:13:53.105>
    closer = 'yselivanov'
    components = ['Interpreter Core']
    creation = <Date 2016-02-05.01:55:00.043>
    creator = 'yselivanov'
    dependencies = []
    files = ['41812']
    hgrepos = []
    issue_num = 26288
    keywords = ['patch']
    message_count = 15.0
    messages = ['259615', '259703', '259704', '259715', '259723', '259724', '259736', '259737', '259738', '259739', '259740', '259741', '259777', '259785', '259886']
    nosy_count = 7.0
    nosy_names = ['mark.dickinson', 'pitrou', 'vstinner', 'skrah', 'python-dev', 'serhiy.storchaka', 'yselivanov']
    pr_nums = []
    priority = 'normal'
    resolution = 'fixed'
    stage = 'resolved'
    status = 'closed'
    superseder = None
    type = 'performance'
    url = 'https://bugs.python.org/issue26288'
    versions = ['Python 3.6']

    1st1 (Member, Author) commented Feb 5, 2016

    The attached patch drastically speeds up PyLong_AsDouble for single-digit longs:

    -m timeit -s "x=2" "x*2.2 + 2 + x*2.5 + 1.0 - x / 2.0 + (x+0.1)/(x-0.1)*2 + (x+10)*(x-30)"

    with patch: 0.414
    without: 0.612

    spectral_norm: 1.05x faster. The results are even better when paired with the patch from bpo-21955.
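    For context beyond the numbers above, the optimization is to short-circuit the conversion when the long has at most one internal digit. Below is a minimal sketch of that idea, assuming CPython's 2016-era long layout (30-bit digits, sign carried in Py_SIZE) and a hypothetical helper name long_as_double_sketch; the actual patch inlines this check at the top of PyLong_AsDouble itself:

        #include <Python.h>
        #include <longintrepr.h>  /* internal PyLongObject layout: digit, ob_digit */

        /* Sketch only, not the verbatim patch.  Assumes v is a PyLong:
           a value with zero or one 30-bit digit always converts to a
           C double exactly, so the multi-digit loop can be skipped. */
        static double
        long_as_double_sketch(PyObject *v)
        {
            Py_ssize_t size = Py_SIZE(v);   /* sign + number of digits */
            if (-1 <= size && size <= 1) {
                digit d = (size == 0) ? 0 : ((PyLongObject *)v)->ob_digit[0];
                return (size < 0) ? -(double)d : (double)d;
            }
            return PyLong_AsDouble(v);      /* general multi-digit path */
        }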

    @1st1 1st1 added the interpreter-core (Objects, Python, Grammar, and Parser dirs) label Feb 5, 2016
    python-dev (Mannequin) commented Feb 6, 2016

    New changeset 986184c355e8 by Yury Selivanov in branch 'default':
    Issue bpo-26288: Optimize PyLong_AsDouble.
    https://hg.python.org/cpython/rev/986184c355e8

    1st1 (Member, Author) commented Feb 6, 2016

    Thanks a lot for the review, Serhiy!

    @1st1 1st1 closed this as completed Feb 6, 2016
    @1st1 1st1 added the performance Performance or resource usage label Feb 6, 2016
    vstinner (Member) commented Feb 6, 2016

    Nice enhancement.

    /* Fast path; single digit will always fit decimal.
    This improves performance of FP/long operations by at
    least 20%. This is even visible on macro-benchmarks.
    */

    I'm not sure that "spectral_norm" can be qualified as a macro-benchmark. It's more of a microbenchmark on numerics, no? I'm just nitpicking, your patch is fine ;-)

    pitrou (Member) commented Feb 6, 2016

    Actually, please fix the comment. We don't want someone wondering what those "macro-benchmarks" are.

    @pitrou pitrou reopened this Feb 6, 2016
    skrah (Mannequin) commented Feb 6, 2016

    Additionally, "single digit will always fit a double"?

    1st1 (Member, Author) commented Feb 6, 2016

    > Actually, please fix the comment. We don't want someone wondering what those "macro-benchmarks" are.

    If spectral-norm and nbody aren't good benchmarks then let's remove them from our benchmarks suite.

    I'll remove that comment anyways, as it doesn't make a lot of sense :)

    > Additionally, "single digit will always fit a double"?

    What's wrong with that phrase? Would this be better: "It's safe to cast a single-digit long (31 bits) to double"?
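    For the record, the arithmetic behind that phrase, as a standalone C illustration (not part of the patch): an IEEE 754 double has a 53-bit significand, so any single 30-bit digit round-trips through double exactly.

        #include <stdio.h>
        #include <stdint.h>

        /* The largest 30-bit CPython digit (2**30 - 1) is well below
           2**53, the point where a double stops representing every
           integer, so int -> double -> int is exact here. */
        int main(void)
        {
            int32_t max_digit = (1 << 30) - 1;   /* largest 30-bit digit */
            double d = (double)max_digit;
            printf("%d -> %.1f (round trip exact: %s)\n",
                   max_digit, d, ((int32_t)d == max_digit) ? "yes" : "no");
            return 0;
        }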

    skrah (Mannequin) commented Feb 6, 2016

    Sorry, I was a bit brief: The current comment says "decimal" instead of "double". It should be changed to "double".

    pitrou (Member) commented Feb 6, 2016

    On 06/02/2016 18:07, Yury Selivanov wrote:

    >> Actually, please fix the comment. We don't want someone wondering what those "macro-benchmarks" are.
    >
    > If spectral-norm and nbody aren't good benchmarks then let's remove them from our benchmarks suite.

    Probably, yes.

    pitrou (Member) commented Feb 6, 2016

    Actually, let me refine that. nbody and spectral-norm don't make sense for people running CPython. Perhaps people running PyPy might care about their performance... (though PyPy is supposed to support a subset of NumPy too)

    python-dev (Mannequin) commented Feb 6, 2016

    New changeset cfb77ccdc23a by Yury Selivanov in branch 'default':
    Issue bpo-26288: Fix comment
    https://hg.python.org/cpython/rev/cfb77ccdc23a

    1st1 (Member, Author) commented Feb 6, 2016

    > Sorry, I was a bit brief: The current comment says "decimal" instead of "double". It should be changed to "double".

    Oh, got it now, sorry. I rephrased the comment a bit, hopefully it's better now. Please check. Thanks!

    skrah (Mannequin) commented Feb 7, 2016

    The comment looks good to me -- I'll stay out of the benchmarking issue, I didn't check any of that. :)

    skrah (Mannequin) commented Feb 7, 2016

    Well I *did* run the decimal/float milli-benchmark now and it shows at least 15% improvement for floats consistently.

    Given that the official benchmark suite does not seem to be very stable either (bpo-21955), I actually prefer small and well-understood benchmarks.

    1st1 (Member, Author) commented Feb 8, 2016

    I'm not sure why this issue is open... Closing it.

    @1st1 1st1 closed this as completed Feb 8, 2016
    @ezio-melotti ezio-melotti transferred this issue from another repository Apr 10, 2022