
faster long multiplication #48194

Closed
pernici mannequin opened this issue Sep 23, 2008 · 14 comments
@pernici
Mannequin

pernici mannequin commented Sep 23, 2008

BPO 3944
Nosy @mdickinson, @tiran
Files
  • longobject2.diff
  • longobject_diff1: patch from issue 4258, modified
  • faster_long_mul.patch
  • Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = 'https://github.com/mdickinson'
    closed_at = <Date 2009-11-28.16:01:03.920>
    created_at = <Date 2008-09-23.10:25:58.601>
    labels = ['interpreter-core', 'performance']
    title = 'faster long multiplication'
    updated_at = <Date 2009-11-28.16:01:03.919>
    user = 'https://bugs.python.org/pernici'

    bugs.python.org fields:

    activity = <Date 2009-11-28.16:01:03.919>
    actor = 'mark.dickinson'
    assignee = 'mark.dickinson'
    closed = True
    closed_date = <Date 2009-11-28.16:01:03.920>
    closer = 'mark.dickinson'
    components = ['Interpreter Core']
    creation = <Date 2008-09-23.10:25:58.601>
    creator = 'pernici'
    dependencies = []
    files = ['11653', '13394', '13406']
    hgrepos = []
    issue_num = 3944
    keywords = ['patch']
    message_count = 14.0
    messages = ['73624', '73694', '73722', '73773', '74028', '74029', '74035', '74430', '75495', '75750', '83964', '84069', '84072', '95792']
    nosy_count = 4.0
    nosy_names = ['mark.dickinson', 'pernici', 'christian.heimes', 'fredrikj']
    pr_nums = []
    priority = 'normal'
    resolution = 'rejected'
    stage = None
    status = 'closed'
    superseder = None
    type = 'performance'
    url = 'https://bugs.python.org/issue3944'
    versions = ['Python 3.1', 'Python 2.7']

    @pernici
    Mannequin Author

    pernici mannequin commented Sep 23, 2008

    In this patch, x_mul(a, b) uses fewer bit operations when a != b,
    asymptotically half of them. On the three computers I tried, the
    speed-up is around 5% for size=4 and grows to 45-60% just below the
    Karatsuba cutoff, then drops off a bit above it (on one computer the
    speed-up is only 16% beyond KARATSUBA_CUTOFF=70, but raising the
    cutoff to 140, at which the current code's multiplication is also
    faster, gives a 45% speed-up).
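
    For orientation, here is a minimal sketch (not the patch itself) of the
    kind of schoolbook inner loop x_mul uses today: each partial product
    costs one mask and one shift, and the idea described above is to feed
    two partial products into the accumulator before each mask/shift,
    roughly halving those bit operations. The helper name schoolbook_row
    and the simplified type definitions are illustrative only.

    #include <stddef.h>
    #include <stdint.h>

    typedef uint16_t digit;            /* digits hold 15 bits, as in longintrepr.h */
    typedef uint32_t twodigits;
    #define PyLong_SHIFT 15
    #define PyLong_MASK  ((digit)((1 << PyLong_SHIFT) - 1))

    /* Accumulate f * b (b_size digits) into z.  z needs b_size + 1 digits of
       room and z[b_size] must still be zero, as in x_mul's row-by-row loop. */
    static void
    schoolbook_row(digit *z, const digit *b, size_t b_size, digit f)
    {
        twodigits carry = 0;
        for (size_t j = 0; j < b_size; j++) {
            carry += z[j] + (twodigits)f * b[j];
            z[j] = (digit)(carry & PyLong_MASK);  /* one mask ...                  */
            carry >>= PyLong_SHIFT;               /* ... and one shift per product */
        }
        z[b_size] = (digit)carry;                 /* carry <= PyLong_MASK here */
    }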

    @pernici pernici mannequin added the performance Performance or resource usage label Sep 23, 2008
    @mdickinson
    Member

    Just to be clear: this patch halves the number of shifts and masks,
    asymptotically; it doesn't affect the number of adds and multiplies
    (except for saving a few additions to zero by setting the initial carry
    intelligently). Is that correct? It's quite surprising that the bit
    operations have such a large effect on the running time.

    Perhaps this is an argument for considering changing PyLong_SHIFT to 16
    (or better, 32, since surely almost every compiler provides uint64_t or
    equivalent these days)? Though that would be quite an involved task.

    While I believe the idea is sound, the patch isn't quite correct---it's
    got some assertions that won't always hold. For example, with the
    patch, in a debug build of Python, I get:

    >>> a = b = 2**30-1
    [36413 refs]
    >>> a*b
    Assertion failed: (carry <= PyLong_MASK), function x_mul, file 
    Objects/longobject.c, line 2228.
    Abort trap

    I'm fairly sure that the assert(carry <= PyLong_MASK) doesn't matter,
    and that the assertion at the end of the main outer loop,
    assert((carry >> PyLong_SHIFT) == 0), should still hold. But some more
    comments and analysis in the code would be good. For example, if
    carry >= PyLong_MASK then the comment that 2*MASK + 2*MASK*MASK is
    contained in a twodigit isn't so useful. How big can carry get?
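
    (A quick illustrative check of that bound, not taken from the patch:
    with 15-bit digits, 2*MASK + 2*MASK*MASK works out to 2**31 - 2**16,
    which fits comfortably in a 32-bit twodigits.)

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const uint64_t MASK = (1u << 15) - 1;      /* 15-bit digit mask */
        uint64_t bound = 2*MASK + 2*MASK*MASK;     /* quantity from the code comment */
        assert(bound == ((uint64_t)1 << 31) - ((uint64_t)1 << 16));
        assert(bound <= UINT32_MAX);               /* fits in a 32-bit twodigits */
        printf("2*MASK + 2*MASK*MASK = %llu\n", (unsigned long long)bound);
        return 0;
    }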

    @mdickinson mdickinson self-assigned this Sep 24, 2008
    @pernici
    Mannequin Author

    pernici mannequin commented Sep 24, 2008

    Yes, I think that the speed-up is due to reducing the number of
    shifts and masks.

    Changing PyLong_SHIFT to 16 would be complicated; for instance, in
    v_iadd() carry could no longer be a 16-bit digit. Writing code
    specific to 64-bit machines would surely improve performance;
    maybe with PyLong_SHIFT=30 only a few changes to the code would be needed?

    I did not modify the case a = b.

    I changed the documentation, which was wrong, adding detailed bounds
    on carry at the various steps to check that it does not overflow,
    and I corrected the wrong assertion (carry <= PyLong_MASK).

    @mdickinson
    Member

    Thanks for the updated patch! Looks good, on a quick scan.

    (One comment typo I noticed: there's a line
    BASE - 3 = 2*MASK - 1
    presumably this should be 2*BASE - 3 on the LHS.)

    Just out of interest, is it possible to go further, and combine 4
    partial multiplications at once instead of 2? Or does the extra
    bookkeeping involved make it not worth it?

    I think it's important to make sure that any changes to longobject.c
    don't slow down operations on small integers (where "small" means "less
    than 2**32") noticeably.

    Re: possible changes to PyLong_SHIFT

    Yes, changing PyLong_SHIFT to 16 (or 32) would be complicated, and would
    involve almost a complete rewrite of longobject.c, together with much
    else... It wasn't really a serious suggestion, but it probably would
    provide a speedup. The code in GMP gives some idea how things might work.

    Changing PyLong_SHIFT to 30 doesn't seem like a totally ridiculous idea,
    though. One problem is that there's no 64-bit integer type (for
    twodigits) in *standard* C89; so since Python is only allowed to assume
    C89 there would have to be some fallback code for those (very few,
    surely) platforms that didn't have a 64-bit integer type available.
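
    (Purely to illustrate the shape such a change might take, and not the
    actual longintrepr.h patch: a 30-bit digit layout falling back to the
    current 15-bit one when no 64-bit type is available. The HAVE_UINT64
    macro here is hypothetical; a real version would be set by configure.)

    #ifdef HAVE_UINT64                  /* hypothetical configure-set macro */
    #include <stdint.h>
    typedef uint32_t digit;
    typedef uint64_t twodigits;
    #define PyLong_SHIFT    30
    #else                               /* fall back to the current layout */
    typedef unsigned short digit;
    typedef unsigned long twodigits;    /* at least 32 bits: enough for 15-bit digits */
    #define PyLong_SHIFT    15
    #endif
    #define PyLong_BASE     ((digit)1 << PyLong_SHIFT)
    #define PyLong_MASK     ((digit)(PyLong_BASE - 1))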

    On 64-bit machines one could presumably go further, and have
    PyLong_SHIFT be 60 (or 62, or 63 --- but those break the assumption
    in long_pow that the PyLong_SHIFT is a multiple of 5). This would
    depend on the compiler providing a 128-bit type for twodigits (like
    __uint128_t on gcc/x86-64). Probably not worth it, especially if it
    ends up slowing down operations on everyday small integers.

    Any of these changes is also going to affect a good few other parts of
    the codebase (e.g. marshal, pickle?, struct?, floatobject.c, ...). It
    shouldn't be difficult to find most of the files affected (just look to
    see which files include longintrepr.h), but I have a suspicion there are
    a couple of other places that just assume PyLong_SHIFT is 15.

    @pernici
    Mannequin Author

    pernici mannequin commented Sep 29, 2008

    Mark, following your suggestions about using bigger integer types,
    I added code to convert Python numbers to arrays of twodigits when a
    64-bit integer type is supported and the numbers have size larger
    than 20; otherwise the code of the previous patch is used. This
    64-bit integer is used only inside multiplication, so no
    modifications need to be made in other parts of the Python code.
    With numbers of 300 decimal digits or more the speedup is now
    2x on a 32-bit machine and 3x on a 64-bit machine (50% and 2x
    respectively for squaring).
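
    (A hedged sketch of the packing step described above, not the patch's
    actual code: pairs of 15-bit digits are combined into 30-bit wide
    digits so that the multiplication loop can accumulate products in a
    64-bit type. The function name pack_to_wide and the types are
    illustrative.)

    #include <stddef.h>
    #include <stdint.h>

    #define PyLong_SHIFT 15

    /* Pack n 15-bit digits into 30-bit "wide" digits, least significant
       first; returns the number of wide digits produced. */
    static size_t
    pack_to_wide(uint32_t *wide, const uint16_t *d, size_t n)
    {
        size_t i, w = 0;
        for (i = 0; i + 1 < n; i += 2)
            wide[w++] = (uint32_t)d[i] | ((uint32_t)d[i + 1] << PyLong_SHIFT);
        if (i < n)                      /* odd length: top digit stands alone */
            wide[w++] = d[i];
        return w;
    }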

    There is a macro HAVE_INT64 that controls whether a 64-bit type is
    available; the preprocessor instructions should be OK with gcc, but
    other compilers might have a 64-bit type without having long long,
    in which case HAVE_INT64 is wrongly left undefined and the code falls
    back to multiplying arrays of 16-bit digits; these preprocessor
    instructions need to be fixed.

    The speed difference for small integers is small;
    here is a summary of some benchmarks on a
    Pentium M 1.6GHz, an Athlon XP 2600+, and an
    Athlon 64 X2 Dual Core 3800+, all running Debian;
    speedup of this patch with respect to the current revision
    (+ means the patch is faster):
    In pybench, SimpleIntegerArithmetic: from -0.5% to +0.5%
    SimpleLongArithmetic: from -1% to +7%
    pystone: from +0.5% to +1.5%

    @mdickinson
    Member

    Nice work! Seems like we're going to be able to look forward to faster
    integer arithmetic in Python 2.7 / 3.1. I'll try to find time to review
    your patch properly sometime soon.

    Regarding the HAVE_INT64, there's a standard autoconf macro
    AC_TYPE_INT64_T (and its unsigned counterpart AC_TYPE_UINT64_T) that
    automatically sets int64_t (or uint64_t) to a 64-bit (unsigned) integer
    type whenever it exists. So don't worry about the preprocessor stuff for
    the moment, so long as it's working on your machine; we can fix it in
    Python's configure script instead sometime.

    @tiran
    Member

    tiran commented Sep 29, 2008

    Nice work :)

    I'm changing the target versions to 2.7 and 3.1. The proposed changes
    are too large for a maintenance release.

    @tiran tiran added the interpreter-core (Objects, Python, Grammar, and Parser dirs) label Sep 29, 2008
    @mdickinson
    Member

    It looks as though changing PyLong_SHIFT to 30 everywhere is much simpler
    than I feared. Here's a short patch that does exactly that. It:

    • changes the definitions in longintrepr.h
    • changes marshal.c to write digits as longs, not shorts
    • adds some casts to longobject.c (most of which should really
      have been there already---clearly Python's never encountered
      a machine where ints are only 2 bytes long, even though the
      standard seems to permit it).

    With this patch, all tests pass on my machine with the exception of
    the getsizeof tests in test_sys; and sys.getsizeof is working fine---it's
    the tests that need to be changed.

    Still to do:

    • use uint64 and uint32 instead of unsigned long long and unsigned long,
      when available; this avoids wasting lots of space on platforms
      where a long is 64 bits.
    • provide fallback definitions for platforms that don't have any 64-bit
      type available
    • (?)expose the value of PyLong_SHIFT to Python somewhere (in sys?); then
      the getsizeof tests could use this value to determine whether a digit
      is expected to take 2 bytes or 4 (or ...)

    @mdickinson
    Member

    I've opened a separate issue (bpo-4258) for the idea of using 30-bit
    long digits instead of 15-bit ones. I'll remove the patch from this
    issue.

    Pernici Mario's idea applies even better to base 2**30 longs: one can
    clump together 16 (!) of the multiplications at once, essentially
    eliminating the overhead of shifts almost completely.
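
    (A quick arithmetic check of why 16 products can be clumped, not taken
    from the patch: with 30-bit digits, sixteen digit products sum to at
    most 2**64 - 2**35 + 16, which leaves roughly 2**35 of headroom in a
    64-bit accumulator for the incoming carry and the existing digit.)

    #include <assert.h>
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const uint64_t MASK30 = ((uint64_t)1 << 30) - 1;   /* largest 30-bit digit */
        uint64_t one_product = MASK30 * MASK30;            /* 2**60 - 2**31 + 1 */
        uint64_t sixteen = 16 * one_product;               /* 2**64 - 2**35 + 16: no overflow */
        uint64_t headroom = UINT64_MAX - sixteen;          /* room left for carries */
        assert(headroom == ((uint64_t)1 << 35) - 17);
        printf("headroom after 16 products: %" PRIu64 "\n", headroom);
        return 0;
    }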

    @mdickinson
    Member

    See bpo-4258 for a patch that combines 30-bit digits with
    this multiplication optimization. The code is quite different
    from the patch posted here, but the basic idea is the same.

    @pernici
    Mannequin Author

    pernici mannequin commented Mar 22, 2009

    This patch comes from 30bit_longdigit13+optimizations1.patch in bpo-4258
    with modification for the case of multiplication by 0; it passes
    test_long.py and pidigits is a bit faster.

    @mdickinson
    Member

    Thanks! Unfortunately, it looks like I messed this up yesterday by
    removing mul1 (after the division patch went in, mul1 wasn't used any
    more). I'll fix this.

    @mdickinson
    Member

    Updated version of longobject_diff1:

    • add mul1 back in
    • rename MAX_PARTIALS to the more descriptive BLOCK_MUL_SIZE
    • rewrite digits_multiply so that the call to digits_multiply_add
      always has b_size=BLOCK_MUL_SIZE, then hard-code this and get
      rid of the b_size argument. This should give the compiler some
      opportunities for loop-unrolling in digits_multiply_add (a rough
      sketch follows below).
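
    (A rough, hypothetical sketch of a fixed-size multiply-accumulate
    helper in the spirit of digits_multiply_add, not the patch itself; the
    point is that the constant trip count gives the compiler a chance to
    unroll. The value of BLOCK_MUL_SIZE and the simplified types are
    assumptions.)

    #include <stddef.h>
    #include <stdint.h>

    #define BLOCK_MUL_SIZE 8                   /* assumed value, for illustration */
    #define PyLong_SHIFT   30
    #define PyLong_MASK    (((uint32_t)1 << PyLong_SHIFT) - 1)

    typedef uint32_t digit;
    typedef uint64_t twodigits;

    /* Multiply a (a_size digits) by a block b of exactly BLOCK_MUL_SIZE digits
       and add the result into z, which must be long enough and already
       initialized.  The inner loop's trip count is a compile-time constant,
       so the compiler is free to unroll it. */
    static void
    block_mul_add(digit *z, const digit *a, size_t a_size, const digit *b)
    {
        for (size_t i = 0; i < a_size; i++) {
            twodigits f = a[i];
            twodigits carry = 0;
            digit *zz = z + i;
            for (size_t j = 0; j < BLOCK_MUL_SIZE; j++) {      /* fixed trip count */
                carry += zz[j] + f * b[j];
                zz[j] = (digit)(carry & PyLong_MASK);
                carry >>= PyLong_SHIFT;
            }
            for (size_t k = BLOCK_MUL_SIZE; carry != 0; k++) { /* propagate what's left */
                carry += zz[k];
                zz[k] = (digit)(carry & PyLong_MASK);
                carry >>= PyLong_SHIFT;
            }
        }
    }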

    @mdickinson
    Member

    I'm going to close this: it's a really nice idea, but after the 30-bit
    long digits were implemented, the speedup doesn't seem to be worth the
    extra code complication.

    The only situation I've found where this optimization really does make a
    big difference is when using 60-bit digits, but allowing those in Python
    would take a bit more work (essentially because it requires using some
    inline assembler to get at the CPU widening multiply and 128-bit-by-64-bit
    division instructions).

    @ezio-melotti ezio-melotti transferred this issue from another repository Apr 10, 2022