
Optimise PyLong division by 1 or -1 #66691

Closed
scoder opened this issue Sep 26, 2014 · 21 comments
Labels
interpreter-core (Objects, Python, Grammar, and Parser dirs) performance Performance or resource usage

Comments

@scoder
Contributor

scoder commented Sep 26, 2014

BPO 22501
Nosy @akuchling, @mdickinson, @pitrou, @scoder, @vstinner, @serhiy-storchaka
Files
  • div_by_1_fast_path.patch: fast paths for division by 1 or -1 and 0/x
  • mul_by_1_fast_path.patch: fast paths for multiplication by 0, 1 and -1
  • mul_div_by_1_fast_path.patch
  • mul_div_by_1_fast_path_3.patch: updated patch that moves div optimisation into l_divmod()
  • add_sub_0_fast_path.patch: fast paths for +/- 0
  • add_sub_0_fast_path_2.patch: fast paths for +/- 0 that always return PyLong
  • mul_div_by_1_fast_path_4.patch: updated patch that always returns PyLong from the special cases
  • mul_div_by_1_fast_path_5.patch
  • mul_div_by_1_fast_path_6.patch: updated patch without changes to long_true_div()
  • Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = None
    closed_at = <Date 2015-04-22.23:42:55.105>
    created_at = <Date 2014-09-26.07:52:31.886>
    labels = ['interpreter-core', 'performance']
    title = 'Optimise PyLong division by 1 or -1'
    updated_at = <Date 2015-04-22.23:42:55.103>
    user = 'https://github.com/scoder'

    bugs.python.org fields:

    activity = <Date 2015-04-22.23:42:55.103>
    actor = 'akuchling'
    assignee = 'none'
    closed = True
    closed_date = <Date 2015-04-22.23:42:55.105>
    closer = 'akuchling'
    components = ['Interpreter Core']
    creation = <Date 2014-09-26.07:52:31.886>
    creator = 'scoder'
    dependencies = []
    files = ['36729', '36730', '36731', '36733', '36734', '36735', '36736', '36739', '36746']
    hgrepos = []
    issue_num = 22501
    keywords = ['patch']
    message_count = 21.0
    messages = ['227590', '227594', '227595', '227596', '227597', '227598', '227599', '227600', '227601', '227602', '227604', '227605', '227607', '227609', '227611', '227625', '227627', '227633', '227644', '227709', '241834']
    nosy_count = 6.0
    nosy_names = ['akuchling', 'mark.dickinson', 'pitrou', 'scoder', 'vstinner', 'serhiy.storchaka']
    pr_nums = []
    priority = 'normal'
    resolution = 'wont fix'
    stage = 'resolved'
    status = 'closed'
    superseder = None
    type = 'performance'
    url = 'https://bugs.python.org/issue22501'
    versions = ['Python 3.5']

    @scoder
    Contributor Author

    scoder commented Sep 26, 2014

    The attached patch adds fast paths for PyLong division by 1 and -1, as well as dividing 0 by something. This was found helpful for fractions normalisation, as the GCD that is divided by can often be |1|, but firing up the whole division machinery for this eats a lot of CPU cycles for nothing.

    There are currently two test failures in test_long.py because dividing a huge number by 1 or -1 no longer raises an OverflowError. This is a behavioural change, but I find it acceptable. If others agree, I'll fix the tests and submit a new patch.
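The behavioural change being discussed can be demonstrated from today's (unpatched) CPython, where true division of a huge int by 1 still raises, because the result must be converted to a float:

```python
# Current CPython behaviour: true division produces a float, so dividing a
# huge int by 1 overflows, while floor division by 1 stays exact.
x = 2**2000

try:
    x / 1  # float cannot represent 2**2000
except OverflowError:
    print("true division by 1 raises OverflowError")

print(x // 1 == x)  # floor division by 1 is exact
```

The patch's fast path would have returned the operand directly, which is why these test_long.py cases started failing.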

    @scoder scoder added interpreter-core (Objects, Python, Grammar, and Parser dirs) performance Performance or resource usage labels Sep 26, 2014
    @serhiy-storchaka
    Member

Perhaps it would also be worth special-casing multiplication by 0, 1 and -1, and addition of 0, 1 and -1.

    @vstinner
    Member

    Any optimization requires a benchmark. What is the speedup?

    @vstinner
    Member

    I proposed an optimization for "x << 0" (as part of a larger patch to optimize 2 ** x) but the issue was rejected:
    http://bugs.python.org/issue21420#msg217802

Mark Dickinson wrote (msg217863):
    "There are many, many tiny optimisations we *could* be making in Objects/longobject.c; each of those potential optimisations adds to the cost of maintaining the code, detracts from readability, and potentially even slows down the common cases fractionally. In general, I think we should only be applying this sort of optimization when there's a clear benefit to real-world code. I don't think this one crosses that line."

    @scoder
    Contributor Author

    scoder commented Sep 26, 2014

    Attaching a similar patch for long_mul().

    @scoder
    Contributor Author

    scoder commented Sep 26, 2014

    Any optimization requires a benchmark. What is the speedup?

    I gave numbers in ticket bpo-22464.

    """
    Since many Fraction input values can already be normalised for some reason, the following change shaves off almost 30% of the calls to PyNumber_InPlaceFloorDivide() in the telco benchmark during Fraction instantiation according to callgrind, thus saving 20% of the CPU instructions that go into tp_new().
    """

    I then proposed to move this into the PyLong type in general, rather than letting Fraction itself do it less efficiently.
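The Fraction-level version of this optimisation can be sketched in pure Python (a simplified, hypothetical normalisation helper, not the actual fractions.py code): when the GCD is 1, both divisions can be skipped entirely.

```python
from math import gcd

def normalise(numerator, denominator):
    """Reduce a fraction to lowest terms, skipping division when gcd == 1.

    Simplified sketch: assumes denominator > 0; sign handling omitted.
    """
    g = gcd(numerator, denominator)
    if g == 1:
        # Common case for already-normalised input: no division needed.
        return numerator, denominator
    return numerator // g, denominator // g

print(normalise(6, 4))  # reduced
print(normalise(3, 7))  # already coprime, fast path taken
```

Moving the check into PyLong itself makes this fast path available to all callers, not just Fraction.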

    @scoder
    Contributor Author

    scoder commented Sep 26, 2014

    @serhiy: moving the fast path into l_divmod() has the disadvantage of making it even more complex because we'd then also want to determine the modulus, but only if requested, and it can be 1, 0 or -1, depending on the second value. Sounds like a lot more if's.
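The identities such an l_divmod() fast path relies on are simple: for b = 1 the quotient is a, for b = -1 it is -a, with remainder 0 in both cases, and 0 divided by any nonzero b gives (0, 0). A sketch of the dispatch, mirroring the proposed C logic in Python (hypothetical function name):

```python
def fast_divmod(a, b):
    """Special cases mirroring the proposed l_divmod() fast paths."""
    if b == 1:
        return a, 0        # a // 1 == a, a % 1 == 0
    if b == -1:
        return -a, 0       # a // -1 == -a, a % -1 == 0
    if a == 0:
        return 0, 0        # 0 // b == 0 for any nonzero b
    return divmod(a, b)    # general case: full division machinery

# The fast paths agree with the general implementation:
assert fast_divmod(12345, 1) == divmod(12345, 1)
assert fast_divmod(12345, -1) == divmod(12345, -1)
assert fast_divmod(0, 7) == divmod(0, 7)
```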

    @scoder
    Contributor Author

    scoder commented Sep 26, 2014

    Combined patch for both mul and div that fixes the return value of long_true_div(), as found by Serhiy, and removes the useless change in long_divrem(), as found by Antoine. Thanks!

    All test_long.py tests pass now.

    @scoder
    Contributor Author

    scoder commented Sep 26, 2014

    @serhiy: please ignore my comment in msg227599. I'll submit a patch that moves the specialisation to l_divmod().

    @scoder
    Contributor Author

    scoder commented Sep 26, 2014

    Thanks for the reviews, here's a new patch.

    @scoder
    Contributor Author

    scoder commented Sep 26, 2014

    Sorry, last patch version contained a "use before type check" bug.

    @scoder
    Contributor Author

    scoder commented Sep 26, 2014

    Here is an incremental patch that adds fast paths for adding and subtracting 0.

    Question: the module calls long_long() in some places (e.g. long_abs()) and thus forces the return type to be exactly a PyLong and not a subtype. My changes use a plain "incref+return input value" in some places. Should they call long_long() on it instead?

    @pitrou
    Member

    pitrou commented Sep 26, 2014

    Le 26/09/2014 12:57, Stefan Behnel a écrit :

    Question: the module calls long_long() in some places (e.g.
    long_abs()) and thus forces the return type to be exactly a PyLong and
    not a subtype. My changes use a plain "incref+return input value" in
    some places. Should they call long_long() on it instead?

    Ah, yes, they should. The return type should not depend on the input
    *values* :-)
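The invariant Antoine describes can be observed from Python with an int subclass: arithmetic on a subclass instance returns a plain int, regardless of the values involved (bool is the standard-library example of such a subclass):

```python
class MyInt(int):
    pass

x = MyInt(5)

# Results of int arithmetic are plain ints, never the subclass,
# even when the mathematical result equals the input value.
print(type(x * 1))         # <class 'int'>
print(type(x // 1))        # <class 'int'>
print(type(True + False))  # <class 'int'>, since bool subclasses int
```

A bare "incref+return input value" fast path would break this invariant for subclass inputs, which is why the special cases should go through long_long().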

    @scoder
    Contributor Author

    scoder commented Sep 26, 2014

    Ok, updating both patches.

    @scoder
    Contributor Author

    scoder commented Sep 26, 2014

    I reran the fractions benchmark over the final result and the overall gain turned out to be, well, small. It's a clearly reproducible 2-3% faster. That's not bad for the macro impact of a micro-optimisation, but it's not a clear argument for throwing more code at it either.

    I'll leave it to you to decide.

    @serhiy-storchaka
    Member

Oh, such a small gain, and only on one specific benchmark that is not yet included in the standard benchmark suite; that looks discouraging. Maybe other benchmarks would gain from these changes?

    @vstinner
    Member

    2-3% faster

    3% is not enough to justify the change.

    @scoder
    Contributor Author

    scoder commented Sep 26, 2014

    Since Serhiy gave another round of valid feedback, here's an updated patch.

    @scoder
    Contributor Author

    scoder commented Sep 26, 2014

    I callgrinded it again and it confirmed that the gain when doing this inside of long_div() and friends is way lower than doing it right in Fraction.__new__(). It's not safe to do there, though, as "is" tests on integers are generally not a good idea in Python code. (Although it doesn't break anything if it fails, as it's a pure optimisation to avoid useless overhead.)
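The caveat about "is" tests on integers can be demonstrated directly: identity checks happen to succeed for small values in CPython because of the small-int cache (-5 through 256), but that is an implementation detail, so an equality check is the safe spelling:

```python
one = 1
a = int('1')    # small values come from CPython's small-int cache
b = int('257')  # values outside the cache get fresh objects
c = int('257')

print(a is one)  # True in CPython (cached), but not a language guarantee
print(b is c)    # False in CPython: two distinct objects
print(b == c)    # True: equality is the reliable test
```

This is why a `g == 1` check in Fraction is safe while a `g is 1` check is not, even though a failed identity test only costs the missed optimisation.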

    The micro benchmarks with timeit confirm that it's as fast as expected.

    Large numbers before:

    $ ./python -m timeit -s 'x = 2**2000 + 3**234 + 5**891 + 7**1234' ' -x'
    10000000 loops, best of 3: 0.177 usec per loop
    $ ./python -m timeit -s 'x = 2**2000 + 3**234 + 5**891 + 7**1234' 'x * -2'
    1000000 loops, best of 3: 0.329 usec per loop
    $ ./python -m timeit -s 'x = 2**2000 + 3**234 + 5**891 + 7**1234' 'x // -2'
    100000 loops, best of 3: 2.8 usec per loop
    
    $ ./python -m timeit -s 'x = 2**2000 + 3**234 + 5**891 + 7**1234' 'x * -1'
    1000000 loops, best of 3: 0.329 usec per loop
    $ ./python -m timeit -s 'x = 2**2000 + 3**234 + 5**891 + 7**1234' 'x // -1'
    100000 loops, best of 3: 2.36 usec per loop
    
    $ ./python -m timeit -s 'x = 2**2000 + 3**234 + 5**891 + 7**1234' 'x * 1'
    1000000 loops, best of 3: 0.333 usec per loop
    $ ./python -m timeit -s 'x = 2**2000 + 3**234 + 5**891 + 7**1234' 'x // 1'
    100000 loops, best of 3: 2.37 usec per loop

    Patched:

    $ ./python -m timeit -s 'x = 2**2000 + 3**234 + 5**891 + 7**1234' ' -x'
    10000000 loops, best of 3: 0.176 usec per loop
    $ ./python -m timeit -s 'x = 2**2000 + 3**234 + 5**891 + 7**1234' 'x * -2'
    1000000 loops, best of 3: 0.328 usec per loop
    $ ./python -m timeit -s 'x = 2**2000 + 3**234 + 5**891 + 7**1234' 'x // -2'
    100000 loops, best of 3: 2.8 usec per loop
    
    $ ./python -m timeit -s 'x = 2**2000 + 3**234 + 5**891 + 7**1234' 'x * -1'
    10000000 loops, best of 3: 0.177 usec per loop
    $ ./python -m timeit -s 'x = 2**2000 + 3**234 + 5**891 + 7**1234' 'x // -1'
    10000000 loops, best of 3: 0.178 usec per loop
    
    $ ./python -m timeit -s 'x = 2**2000 + 3**234 + 5**891 + 7**1234' 'x * 1'
    10000000 loops, best of 3: 0.0244 usec per loop
    $ ./python -m timeit -s 'x = 2**2000 + 3**234 + 5**891 + 7**1234' 'x // 1'
    10000000 loops, best of 3: 0.0258 usec per loop

    Small numbers before:

    $ ./python -m timeit -s 'x = 5' 'x * -2'
    10000000 loops, best of 3: 0.0408 usec per loop
    $ ./python -m timeit -s 'x = 5' 'x // -2'
    10000000 loops, best of 3: 0.0714 usec per loop
    
    $ ./python -m timeit -s 'x = 5' 'x * -1'
    10000000 loops, best of 3: 0.0293 usec per loop
    $ ./python -m timeit -s 'x = 5' 'x * 1'
    10000000 loops, best of 3: 0.0282 usec per loop
    
    $ ./python -m timeit -s 'x = 5' 'x // 1'
    10000000 loops, best of 3: 0.0529 usec per loop
    $ ./python -m timeit -s 'x = 5' 'x // -1'
    10000000 loops, best of 3: 0.0536 usec per loop

    Patched:

    $ ./python -m timeit -s 'x = 5' 'x * -2'
    10000000 loops, best of 3: 0.0391 usec per loop
    $ ./python -m timeit -s 'x = 5' 'x // -2'
    10000000 loops, best of 3: 0.0718 usec per loop
    
    $ ./python -m timeit -s 'x = 5' 'x * -1'
    10000000 loops, best of 3: 0.0267 usec per loop
    $ ./python -m timeit -s 'x = 5' 'x * 1'
    10000000 loops, best of 3: 0.0265 usec per loop
    
    $ ./python -m timeit -s 'x = 5' 'x // 1'
    10000000 loops, best of 3: 0.0259 usec per loop
    $ ./python -m timeit -s 'x = 5' 'x // -1'
    10000000 loops, best of 3: 0.0285 usec per loop

    Note: we're talking µsecs here, not usually something to worry about. And it's unlikely that other benchmarks see similarly "high" speedups as the one for fractions (due to the relatively high likelihood of the GCD being 1 there).

    I'm ok with closing this ticket as "won't fix".

    @scoder
    Contributor Author

    scoder commented Sep 27, 2014

    One more comment: I also benchmarked the change in long_true_div() now and found that it's only a minor improvement for large numbers and a *pessimisation* for small numbers:

    Before:

    $ ./python -m timeit -s 'x = 5' 'x / -1'
    10000000 loops, best of 3: 0.0313 usec per loop
    $ ./python -m timeit -s 'x = 5' 'x / 1'
    10000000 loops, best of 3: 0.0307 usec per loop
    
    $ ./python -m timeit -s 'x = 2**200 + 3**234 + 5**89 + 7**123' 'x / 1'
    10000000 loops, best of 3: 0.101 usec per loop
    $ ./python -m timeit -s 'x = 2**200 + 3**234 + 5**89 + 7**123' 'x / -1'
    10000000 loops, best of 3: 0.104 usec per loop

    Patched:

    $ ./python -m timeit -s 'x = 5' 'x / 1'                                  
    10000000 loops, best of 3: 0.0569 usec per loop
    $ ./python -m timeit -s 'x = 5' 'x / -1'
    10000000 loops, best of 3: 0.0576 usec per loop
    
    $ ./python -m timeit -s 'x = 2**200 + 3**234 + 5**89 + 7**123' 'x / -1'
    10000000 loops, best of 3: 0.056 usec per loop
    $ ./python -m timeit -s 'x = 2**200 + 3**234 + 5**89 + 7**123' 'x / 1' 
    10000000 loops, best of 3: 0.056 usec per loop
    
    $ ./python -m timeit -s 'x = 2**200 + 3**234 + 5**89 + 7**123' 'x / -2'
    10000000 loops, best of 3: 0.106 usec per loop

    So, just for completeness, here's the patch without that part, with changes only in l_divmod() and long_mul(), with the timeit results as given in my previous comment.

    @akuchling
    Member

    From reading the discussion thread, it looks like the consensus is to not apply this set of patches because the speed-up is unfortunately small. Closing as won't-fix; please re-open if someone wishes to pursue this again.

    @ezio-melotti ezio-melotti transferred this issue from another repository Apr 10, 2022