
Faster ascii decoding #58627

Closed
serhiy-storchaka opened this issue Mar 26, 2012 · 16 comments
Labels
interpreter-core (Objects, Python, Grammar, and Parser dirs), performance (Performance or resource usage)

Comments

@serhiy-storchaka
Member

BPO 14419
Nosy @jcea, @pitrou, @vstinner, @serhiy-storchaka
Superseder
  • bpo-14738: Amazingly faster UTF-8 decoding
Files
  • decode_ascii.patch
  • decode_ascii_2.patch: Simplified patch only for sizeof(long) <= sizeof(void *)
  • decode_ascii_bench.c
  • decode_ascii_compare.ods

Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = None
    closed_at = <Date 2012-05-11.21:58:14.450>
    created_at = <Date 2012-03-26.20:56:17.896>
    labels = ['interpreter-core', 'performance']
    title = 'Faster ascii decoding'
    updated_at = <Date 2012-05-11.21:58:22.676>
    user = 'https://github.com/serhiy-storchaka'

    bugs.python.org fields:

    activity = <Date 2012-05-11.21:58:22.676>
    actor = 'pitrou'
    assignee = 'none'
    closed = True
    closed_date = <Date 2012-05-11.21:58:14.450>
    closer = 'pitrou'
    components = ['Interpreter Core']
    creation = <Date 2012-03-26.20:56:17.896>
    creator = 'serhiy.storchaka'
    dependencies = []
    files = ['25033', '25034', '25050', '25051']
    hgrepos = []
    issue_num = 14419
    keywords = ['patch']
    message_count = 16.0
    messages = ['156870', '156872', '156873', '156874', '156876', '156880', '156896', '156899', '156901', '156914', '156922', '156923', '156956', '156957', '160445', '160455']
    nosy_count = 4.0
    nosy_names = ['jcea', 'pitrou', 'vstinner', 'serhiy.storchaka']
    pr_nums = []
    priority = 'normal'
    resolution = 'duplicate'
    stage = 'resolved'
    status = 'closed'
    superseder = '14738'
    type = 'performance'
    url = 'https://bugs.python.org/issue14419'
    versions = ['Python 3.3']

    @serhiy-storchaka
    Member Author

    The proposed patch accelerates ASCII decoding in a particular case: when the alignment of the input data coincides with the alignment of the data in PyASCIIObject. This is the common case on 32-bit platforms. I did not check whether the patch has any effect on 64-bit platforms.

    Without patch:
    $ ./python -m timeit -n 1000 -s 'enc = "ascii"; import codecs; d = codecs.getdecoder(enc); x = ("\u0020" * 1000000).encode(enc)' 'd(x)'
    1000 loops, best of 3: 1.93 msec per loop
    $ ./python -m timeit -n 10000 -s 'enc = "ascii"; import codecs; d = codecs.getdecoder(enc); x = ("\u0020" * 100000).encode(enc)' 'd(x)'
    10000 loops, best of 3: 59.4 usec per loop

    With patch:
    $ ./python -m timeit -n 1000 -s 'enc = "ascii"; import codecs; d = codecs.getdecoder(enc); x = ("\u0020" * 1000000).encode(enc)' 'd(x)'
    1000 loops, best of 3: 1.46 msec per loop
    $ ./python -m timeit -n 10000 -s 'enc = "ascii"; import codecs; d = codecs.getdecoder(enc); x = ("\u0020" * 100000).encode(enc)' 'd(x)'
    10000 loops, best of 3: 35.6 usec per loop
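
    For reference, the core of the technique here is word-at-a-time checking and copying. The sketch below is only an illustration of that idea, not decode_ascii.patch itself: LONG_PTR_MASK follows the snippets quoted later in this thread, and ASCII_CHAR_MASK is assumed to be a long whose every byte has its high bit set. It uses memcpy for the word accesses to stay well-defined C; with the alignment established up front, compilers lower those calls to plain word loads and stores.

    #include <stddef.h>
    #include <string.h>

    #define LONG_PTR_MASK   (sizeof(long) - 1)
    /* a long whose every byte has its high bit set: 0x8080...80 */
    #define ASCII_CHAR_MASK (~0UL / 0xFFUL * 0x80UL)

    /* Copy `size` bytes from p to q, stopping at the first non-ASCII byte.
       Returns the number of bytes copied. */
    static size_t
    copy_ascii(const unsigned char *p, unsigned char *q, size_t size)
    {
        const unsigned char *start = p;
        const unsigned char *end = p + size;

        /* Take the fast path only if p and q can become long-aligned together. */
        if (((size_t) p & LONG_PTR_MASK) == ((size_t) q & LONG_PTR_MASK)) {
            /* advance byte by byte until p (and therefore q) is aligned */
            while (p < end && ((size_t) p & LONG_PTR_MASK)) {
                if (*p & 0x80)
                    return (size_t)(p - start);
                *q++ = *p++;
            }
            /* check and copy one long at a time */
            while (p + sizeof(long) <= end) {
                unsigned long value;
                memcpy(&value, p, sizeof(long));
                if (value & ASCII_CHAR_MASK)
                    break;              /* some byte in this word is non-ASCII */
                memcpy(q, &value, sizeof(long));
                p += sizeof(long);
                q += sizeof(long);
            }
        }
        /* byte-by-byte tail (and the whole string when the alignments differ) */
        while (p < end && !(*p & 0x80))
            *q++ = *p++;
        return (size_t)(p - start);
    }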

    @serhiy-storchaka serhiy-storchaka added the interpreter-core (Objects, Python, Grammar, and Parser dirs) and performance (Performance or resource usage) labels on Mar 26, 2012
    @serhiy-storchaka
    Member Author

    I would like someone to test this on a 64-bit platform. Maybe it is worth copying smaller blocks, of sizeof(void *) rather than sizeof(long)? That would guarantee the alignment of the destination (and very likely of the source).

    @vstinner
    Member

    Results on a 64-bit Linux box, Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz with 12 GB of RAM:

    Unpatched: 10000 loops, best of 3: 150 usec per loop
    Patched: 10000 loops, best of 3: 80.2 usec per loop

    @vstinner
    Member

    + if (((size_t) p & LONG_PTR_MASK) == ((size_t) q & LONG_PTR_MASK)) {

    This test looks wrong. I think that it should be replaced by:

        if (!((size_t) q & LONG_PTR_MASK)) {

    @serhiy-storchaka
    Member Author

    > + if (((size_t) p & LONG_PTR_MASK) == ((size_t) q & LONG_PTR_MASK)) {
    >
    > This test looks wrong. I think that it should be replaced by:
    >
    >     if (!((size_t) q & LONG_PTR_MASK)) {

    "if (!((size_t) p & LONG_PTR_MASK)) {" if sizeof(long) <= sizeof(void *). And rewrite loop.

    It now occurs to me that it is difficult to find a platform for which sizeof(long) > sizeof(void *).

    @vstinner
    Member

    +#if SIZEOF_LONG <= SIZEOF_VOID_P
    + if (!((size_t) p & LONG_PTR_MASK)) {

    I wrote "q", not "p". You have to check p and q alignement to be able
    to dereference p and q pointers.

    sizeof(long) <= sizeof(void*) is always true.

    @serhiy-storchaka
    Member Author

    > +#if SIZEOF_LONG <= SIZEOF_VOID_P
    > +    if (!((size_t) p & LONG_PTR_MASK)) {
    >
    > I wrote "q", not "p". You have to check p and q alignment to be able
    > to dereference p and q pointers.

    Initial q (destination) is always pointer-aligned, because PyASCIIObject is
    pointer-aligned. If PyASCIIObject were not aligned, then alignment would not
    matter on this platform anyway. But the initial p (source) can be unaligned.
    If the initial p and q are not equally aligned, they can never become aligned
    at the same time, and the whole fast loop should be skipped.

    > sizeof(long) <= sizeof(void*) is always true.

    Some memory models on the I8086 processor had a 4-byte long and a 2-byte
    pointer. That is in the past, but some modern platforms could have an 8-byte
    long and a 4-byte pointer; I do not know whether such platforms exist. If you
    think this condition is unnecessary, we can remove it.
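
    (To make the alignment argument concrete, a small worked example, assuming sizeof(long) == 8 and therefore LONG_PTR_MASK == 7: if p starts at 0x1003 and q at 0x1000, then after copying k bytes their low bits are (3 + k) & 7 and (0 + k) & 7. These always differ by 3 and so are never both zero, which means an aligned word loop could never start; the up-front test (p & LONG_PTR_MASK) == (q & LONG_PTR_MASK) detects this case and skips the fast path entirely.)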

    @vstinner
    Member

    > +#if SIZEOF_LONG <= SIZEOF_VOID_P
    > +    if (!((size_t) p & LONG_PTR_MASK)) {
    >
    > I wrote "q", not "p". You have to check p and q alignement to be able
    > to dereference p and q pointers.

    > Initial q (destination) is always pointer-aligned, because PyASCIIObject is
    > pointer-aligned. If PyASCIIObject were not aligned, then alignment would not
    > matter on this platform anyway.

    q is not the address of the Unicode object, but the address of the
    data following the Unicode structure in memory. Strings created by
    PyUnicode_New() are composed of one single memory block: {structure,
    data}. It's safer to check that q is aligned to sizeof(long). In
    Python 3.2 it was different, because the string data was always a
    separate memory block (as for "not ready" strings in Python 3.3).

    We may change PyASCIIObject later to pack fields (to reduce memory consumption).

    @serhiy-storchaka
    Member Author

    > q is not the address of the Unicode object, but the address of the
    > data following the Unicode structure in memory. Strings created by
    > PyUnicode_New() are composed of one single memory block: {structure,
    > data}.

    I know all that.

    #define _PyUnicode_COMPACT_DATA(op)                     \
        (PyUnicode_IS_ASCII(op) ?                   \
         ((void*)((PyASCIIObject*)(op) + 1)) :              \
         ((void*)((PyCompactUnicodeObject*)(op) + 1)))

    q is ((void*)((PyASCIIObject*)(op) + 1)). (PyASCIIObject*)(op) + 1 is a pointer to PyASCIIObject and has the same alignment as PyASCIIObject. PyASCIIObject is aligned to sizeof(void *)
    because it starts with a void * field. Consequently, q is aligned to sizeof(void *). This does not depend on the number or size of the fields in PyASCIIObject, except for the
    first one.

    Of course, if the _PyUnicode_COMPACT_DATA definition changes, this will cease to be true. Then my first patch can be applied, which may be a bit less effective for short strings
    (performance for short strings is hard to measure from Python). However, for short strings we could add a size limit:

    if (size >= 2 * SIZEOF_LONG && ((size_t) p & LONG_PTR_MASK) == ((size_t) q & LONG_PTR_MASK)) {

    @vstinner
    Member

    New tests. I'm not convinced by the patch: it slows down the decoder for "short" strings. I don't understand which kind of ASCII-encoded strings (specific length or content?) are optimized by the patch.

    Unpatched:

    $ ./python -m timeit -n 50000 -r 100 -s 'data=open("README", "r").read().encode("ascii")' 'data.decode("ASCII")'
    50000 loops, best of 100: 1.41 usec per loop
    
    $ ./python -m timeit -n 1000 -s 'import codecs; d = codecs.getdecoder("ascii"); x = bytes(range(128))*10' 'd(x)'
    1000 loops, best of 3: 0.564 usec per loop
    
    $ ./python -m timeit -n 1000 -s 'import codecs; d = codecs.getdecoder("ascii"); x = bytes(range(128))*1000' 'd(x)'
    1000 loops, best of 3: 24.4 usec per loop
    
    $ ./python -m timeit -n 10 -s 'import codecs; d = codecs.getdecoder("ascii"); x = bytes(range(128))*100000' 'd(x)'
    10 loops, best of 3: 10.9 msec per loop
    
    $ ./python -m timeit -n 1000 -s 'enc = "ascii"; import codecs; d = codecs.getdecoder(enc); x = ("\u0020" * 1000000).encode(enc)' 'd(x)'
    1000 loops, best of 3: 722 usec per loop

    Patched:

    $ ./python -m timeit -n 50000 -r 100 -s 'data=open("README", "r").read().encode("ascii")' 'data.decode("ASCII")'
    50000 loops, best of 100: 1.74 usec per loop
    
    $ ./python -m timeit -n 1000 -s 'import codecs; d = codecs.getdecoder("ascii"); x = bytes(range(128))*10' 'd(x)'
    1000 loops, best of 3: 0.597 usec per loop
    
    $ ./python -m timeit -n 1000 -s 'import codecs; d = codecs.getdecoder("ascii"); x = bytes(range(128))*1000' 'd(x)'
    1000 loops, best of 3: 27.3 usec per loop
    
    $ ./python -m timeit -n 10 -s 'import codecs; d = codecs.getdecoder("ascii"); x = bytes(range(128))*100000' 'd(x)'
    10 loops, best of 3: 8.32 msec per loop
    
    $ ./python -m timeit -n 1000 -s 'enc = "ascii"; import codecs; d = codecs.getdecoder(enc); x = ("\u0020" * 1000000).encode(enc)' 'd(x)'
    1000 loops, best of 3: 479 usec per loop

    @serhiy-storchaka
    Member Author

    > New tests. I'm not convinced by the patch: it slows down the decoder for
    > "short" strings. I don't understand which kind of ASCII-encoded strings
    > (specific length or content?) are optimized by the patch.

    Maybe you forgot the -r? Add -r 100 or -r 1000 and run the tests a few times to evaluate the dispersion.

    I get an acceleration in all cases. For bytes(range(128))*10 it barely exceeds the variation (the second patch is slightly faster than the first), and for bytes(range(128))*1000 it is 1.7x
    faster. For even shorter strings (by "short" I meant lengths on the order of 10), measurement from Python is practically impossible.

    This may also depend on the processor and compiler. I have AMD Athlon 64 X2 4600+ (2-core, 2.4GHz, 512 KB cache) and use gcc 4.4.3 on 32-bit Linux.

    @pitrou
    Member

    pitrou commented Mar 27, 2012

    > This may also depend on the processor and compiler. I have AMD Athlon
    > 64 X2 4600+ (2-core, 2.4GHz, 512 KB cache) and use gcc 4.4.3 on 32-bit
    > Linux.

    Then by choosing a string length that exceeds the L2 cache size, you may have found an ideal case for your optimization. Basically you're doing the error checking and the memcpy in one single pass.

    Honestly I'm not sure that's worth the hassle. ASCII-decoding is already very fast for shorter strings.

    (no need to pass "-n" or "-r" to timeit, it will figure out adequate numbers by itself)

    @serhiy-storchaka
    Member Author

    A Python script is too rough a tool to measure decoding performance on short strings, so I used C.

    The benchmark scheme is as follows. A sufficiently large chunk of memory is taken to reduce the effect of the processor cache. This area is split into many pieces that start at the same offset from a long-aligned boundary. The time to decode all pieces of a given size and offset is then measured, and the average time (ms) and decoding speed (MB/s) are calculated.

    Run benchmark:
    gcc -Wall -O3 -I Include/ -I . -Xlinker -export-dynamic decode_ascii_bench.c libpython3.3m.a -lpthread -ldl -lutil -lm -lrt -o decode_ascii_bench && ./decode_ascii_bench
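
    A minimal sketch in the spirit of that scheme is shown below. This is a guess at how such a benchmark can be structured, not the attached decode_ascii_bench.c; the arena size, piece sizes, and reporting are illustrative only, and it must be built against libpython as in the gcc command above.

    #include <Python.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define ARENA_SIZE (64 * 1024 * 1024)   /* big enough to defeat the CPU cache */

    static double now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1e3 + ts.tv_nsec / 1e6;
    }

    int main(void)
    {
        static const size_t sizes[] = {2, 10, 100, 1000, 10000};
        const size_t align = sizeof(long);

        Py_Initialize();

        char *arena = malloc(ARENA_SIZE + 2 * align);
        if (arena == NULL)
            return 1;
        memset(arena, ' ', ARENA_SIZE + 2 * align);     /* all-ASCII input */

        for (size_t offset = 0; offset < align; offset++) {
            /* every piece starts `offset` bytes past a long-aligned address */
            char *base = (char *)(((size_t)arena + align - 1) & ~(align - 1)) + offset;

            for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
                size_t size = sizes[i];
                size_t stride = (size / align + 2) * align; /* multiple of align, keeps the offset */
                size_t npieces = ARENA_SIZE / stride;

                double t0 = now_ms();
                for (size_t k = 0; k < npieces; k++) {
                    PyObject *s = PyUnicode_DecodeASCII(base + k * stride, size, "strict");
                    if (s == NULL) {
                        PyErr_Print();
                        return 1;
                    }
                    Py_DECREF(s);
                }
                double dt = now_ms() - t0;
                printf("offset=%zu size=%5zu  %8.2f ms  %8.1f MB/s\n",
                       offset, size, dt, npieces * size / (dt * 1000.0));
            }
        }

        free(arena);
        Py_Finalize();
        return 0;
    }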

    @serhiy-storchaka
    Member Author

    As you can see, the unpatched code does not depend on the alignment. With the patches, aligned data (which constitutes the vast majority, if not all) is decoded much faster, and unaligned data is sometimes decoded slightly slower. The time to decode 2-10 byte strings practically does not depend on the string length; most of the time (0.1 ms) is the overhead of function calls and object creation and destruction. But even in this case the patches show a steady increase in performance. When the overhead is reduced, the advantage becomes stronger. For short strings the second patch is better than the first, as expected.

    @serhiy-storchaka
    Member Author

    Since the patch was committed as part of the new UTF-8 decoder, this issue can be closed (bpo-14738).

    @pitrou
    Member

    pitrou commented May 11, 2012

    Okay, thank you!

    @pitrou pitrou closed this as completed May 11, 2012
    @ezio-melotti ezio-melotti transferred this issue from another repository Apr 10, 2022