Remove unicode specialization from set objects #67308

Closed

rhettinger opened this issue Dec 27, 2014 · 16 comments
Assignees
Labels
interpreter-core (Objects, Python, Grammar, and Parser dirs), performance (Performance or resource usage)

Comments

@rhettinger
Contributor

BPO 23119
Nosy @malemburg, @rhettinger, @pitrou, @vstinner, @ezio-melotti, @serhiy-storchaka
Files
  • one_lookkey.diff: Only use one lookkey() function
  • no_special_hash.diff: Remove check for precomputed unicode hashes
  • time_suite.sh: Way too simplistic timing suite
  • move_unicode_inside.diff: Replace global lookkey indirection with a pair of inlined type checks
  • measure_build_set.py: Time three patches for building sets of various sizes
  • build_set_timings.txt: Results of set construction timings on GCC-4.9 and CLang
  • inline_unicode_specialization.diff: Inline the unicode specialization
  Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = 'https://github.com/rhettinger'
    closed_at = <Date 2015-01-26.00:13:20.981>
    created_at = <Date 2014-12-27.10:50:11.092>
    labels = ['interpreter-core', 'performance']
    title = 'Remove unicode specialization from set objects'
    updated_at = <Date 2015-01-26.00:13:20.981>
    user = 'https://github.com/rhettinger'

    bugs.python.org fields:

    activity = <Date 2015-01-26.00:13:20.981>
    actor = 'rhettinger'
    assignee = 'rhettinger'
    closed = True
    closed_date = <Date 2015-01-26.00:13:20.981>
    closer = 'rhettinger'
    components = ['Interpreter Core']
    creation = <Date 2014-12-27.10:50:11.092>
    creator = 'rhettinger'
    dependencies = []
    files = ['37547', '37548', '37549', '37552', '37557', '37558', '37848']
    hgrepos = []
    issue_num = 23119
    keywords = ['patch']
    message_count = 16.0
    messages = ['233128', '233132', '233164', '233181', '233658', '233660', '233664', '233667', '233669', '233670', '233730', '233739', '234655', '234658', '234708', '234709']
    nosy_count = 7.0
    nosy_names = ['lemburg', 'rhettinger', 'pitrou', 'vstinner', 'ezio.melotti', 'python-dev', 'serhiy.storchaka']
    pr_nums = []
    priority = 'normal'
    resolution = 'fixed'
    stage = 'patch review'
    status = 'closed'
    superseder = None
    type = 'performance'
    url = 'https://bugs.python.org/issue23119'
    versions = ['Python 3.5']

    @rhettinger
    Contributor Author

    This tracker item is to record experiments with removing the unicode specialization code from set objects and to run timings that determine the performance benefits or losses from those specializations.

    • Removes the set_lookkey_unicode() function and the attendant so->lookup indirections. That saves 60 lines of code. On each lookup, it saves one indirection for the lookup dispatch, but in the case of unicode-only tables, it costs an additional indirection through the abstract API for PyObject_RichCompareBool.

    • Removes the specialization code in the add, discard, and contains functions that checks for a unicode key with an already-computed hash value. This saves a type check (cheap), a hash-field check, and nine lines of code. In the case where the hash value has already been computed, it costs a call to PyObject_Hash (which has an indirection, but otherwise does the same field test that we are doing); see the sketch below. The working hypothesis is that this specialization saves only a little in the cases where it applies and adds a little to all the cases where it does not. (Note: the use cases for sets are less likely than those for dicts to involve looking up strings whose hash value has already been computed.)
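    For reference, a minimal sketch of the precomputed-hash check being removed (a fragment in the style of Objects/setobject.c; the exact field access may differ by version):

        Py_hash_t hash;
        /* Fast path: unicode objects cache their hash in the object, so an
           already-hashed string key can skip the PyObject_Hash() call. */
        if (!PyUnicode_CheckExact(key) ||
            (hash = ((PyASCIIObject *) key)->hash) == -1) {
            hash = PyObject_Hash(key);   /* generic path: one extra call */
            if (hash == -1)
                return -1;
        }
        /* ... proceed with the table lookup using hash ... */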

    ----------------------

    Here are some initial timings for the first patch. It seems to show that intersection benefits slightly and that set creation time is unaffected.

    $ ./time_suite.sh 
    100000 loops, best of 3: 14.9 usec per loop
    100000 loops, best of 3: 15.3 usec per loop
    1000000 loops, best of 3: 1.17 usec per loop
    1000000 loops, best of 3: 1.13 usec per loop
    10000 loops, best of 3: 24.9 usec per loop
    10000 loops, best of 3: 24.2 usec per loop
    
    $ ./time_suite.sh 
    100000 loops, best of 3: 14.7 usec per loop
    100000 loops, best of 3: 14.6 usec per loop
    1000000 loops, best of 3: 1.16 usec per loop
    1000000 loops, best of 3: 1.07 usec per loop
    10000 loops, best of 3: 23.1 usec per loop
    10000 loops, best of 3: 23.4 usec per loop
    
    $ ./time_suite.sh 
    100000 loops, best of 3: 14.5 usec per loop
    100000 loops, best of 3: 14.5 usec per loop
    1000000 loops, best of 3: 1.16 usec per loop
    1000000 loops, best of 3: 1.17 usec per loop
    10000 loops, best of 3: 22.5 usec per loop
    10000 loops, best of 3: 22 usec per loop

    @rhettinger rhettinger self-assigned this Dec 27, 2014
    @rhettinger rhettinger added the interpreter-core (Objects, Python, Grammar, and Parser dirs) and performance (Performance or resource usage) labels Dec 27, 2014
    @pitrou
    Member

    pitrou commented Dec 27, 2014

    +1 for this. In my experience sets of unicode keys are not as common as dicts with unicode keys, and the posted numbers make the simplification a no-brainer.

    @rhettinger
    Contributor Author

    Attaching an alternative patch that handles the unicode-specific case with far less code and less overhead. It seems to speed up all the timings I've tried.

    It keeps the unicode_eq()-specific path, which bypasses several unneeded steps:

    • an incref/decref of the startkey
    • an incref/decref of Py_True or Py_False
    • Py_EnterRecursiveCall/Py_LeaveRecursiveCall overhead
    • A NotImplemented check
    • Various indirections, null checks, xor bool result, and several nested function calls that save and restore registers
    • Test for an error result
    • Test for changed table or entry
    In other words, the generic path pays for the usual incref/decref, the richbool/richcmp dispatch, the table-size-change and error checks, the recursion-depth accounting, the NotImplemented check, and the conversions to and from Py_True/Py_False; the unicode_eq() path skips all of that (sketched below).
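    To make the trade-off concrete, here is a sketch of the two comparison paths inside the probe loop (assuming the unicode_eq() helper and the entry/startkey names used in Objects/setobject.c; not the exact committed code):

        if (PyUnicode_CheckExact(startkey) && PyUnicode_CheckExact(key)) {
            /* Fast path: direct string comparison; no refcounting, no
               recursion-depth bookkeeping, no NotImplemented handling. */
            if (unicode_eq(startkey, key))
                return entry;
        }
        else {
            /* Generic path: the comparison can run arbitrary __eq__ code,
               so protect the entry and re-check the table afterwards. */
            Py_INCREF(startkey);
            cmp = PyObject_RichCompareBool(startkey, key, Py_EQ);
            Py_DECREF(startkey);
            if (cmp < 0)                            /* comparison raised */
                return NULL;
            if (table != so->table || entry->key != startkey)
                return set_lookkey(so, key, hash);  /* table mutated: restart */
            if (cmp > 0)                            /* match found */
                return entry;
        }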

    @rhettinger
    Contributor Author

    Timings for no_special_hash.diff:

    $ ~/cpython/python.exe -m timeit -r7 -s 's={"html"}' '"html" in s'
    10000000 loops, best of 7: 0.0315 usec per loop
    $ ~/nounicode/python.exe -m timeit -r7 -s 's={"html"}' '"html" in s'
    10000000 loops, best of 7: 0.0336 usec per loop
    $ ~/cpython/python.exe -m timeit -r7 -s 's={"html"}' '"html" in s'
    10000000 loops, best of 7: 0.0316 usec per loop
    $ ~/nounicode/python.exe -m timeit -r7 -s 's={"html"}' '"html" in s'
    10000000 loops, best of 7: 0.0336 usec per loop
    
    $ ~/cpython/python.exe -m timeit -r7 -s 's={1}' '1 in s'
    10000000 loops, best of 7: 0.0331 usec per loop
    $ ~/nounicode/python.exe -m timeit -r7 -s 's={1}' '1 in s'
    10000000 loops, best of 7: 0.0328 usec per loop
    $ ~/cpython/python.exe -m timeit -r7 -s 's={1}' '1 in s'
    10000000 loops, best of 7: 0.0332 usec per loop
    $ ~/nounicode/python.exe -m timeit -r7 -s 's={1}' '1 in s'
    10000000 loops, best of 7: 0.0328 usec per loop

    The unicode specialization in set_add, set_discard, and set_contains
    benefits unicode lookups by 7% but impairs other lookups by 1%.

    @serhiy-storchaka
    Member

    +1 for removing the unicode specialization. Dictionaries with string keys are a part of the language, but sets of strings are not special.

    @malemburg
    Member

    I'm not sure I follow:

    Sets of strings are very common when trying to create a unique set of strings or optimizing "name in set_of_names" lookups.

    Regarding your benchmark numbers: I have a hard time following how they work. A simple "word in set_of_one_word" certainly doesn't make a good benchmark for this sort of test :-)

    You should at least test with sets of various sizes and include the non-matching checks as well.
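    For example, something along these lines (hypothetical commands, in the same style as the timings above) would cover both hits and misses at several sizes:

        $ ./python -m timeit -s 's = set(map(str, range(1000)))' '"500" in s'
        $ ./python -m timeit -s 's = set(map(str, range(1000)))' '"missing" in s'
        $ ./python -m timeit -s 's = set(map(str, range(100000)))' '"50000" in s'
        $ ./python -m timeit -s 's = set(map(str, range(100000)))' '"missing" in s'
        $ ./python -m timeit -s 's = set(range(1000))' '500 in s'
        $ ./python -m timeit -s 's = set(range(1000))' '-1 in s'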

    @serhiy-storchaka
    Member

    > Sets of strings are very common when trying to create a unique set of strings or optimizing "name in set_of_names" lookups.

    This is not nearly as common as attribute access, globals access, or passing keyword arguments.

    @malemburg
    Member

    On 08.01.2015 15:46, Serhiy Storchaka wrote:

    > > Sets of strings are very common when trying to create a unique set of strings or optimizing "name in set_of_names" lookups.
    >
    > This is not nearly as common as attribute access, globals access, or passing keyword arguments.

    Not at the interpreter level, but unlike dicts, sets are not used
    much to implement interpreter logic. Their use cases lie mostly in
    application code, where you basically find two major patterns:

    1. check for flag in set of numeric codes

    2. check for string in unique set of strings

    (and their associated set operations, i.e. checking for subsets,
    superset, etc.)

    This is why I don't follow Raymond's reasoning. Of course, if
    he comes up with a patch that makes both cases faster, I'd be
    +1 ;-)

    @vstinner
    Member

    vstinner commented Jan 8, 2015

    @Raymond: Please disable the git format for patches, because Rietveld doesn't support such patches, and so we don't get the "review" button.

    @ezio-melotti
    Member

    Without changeset information (which is not included in the git format), Rietveld will try to apply the patch on default, and if it applies cleanly it will work. So creating the patch against an up-to-date py3 clone should work even with the git format.

    @rhettinger
    Contributor Author

    I'm withdrawing this one. After more work trying many timings on multiple compilers and various sizes and kinds of datasets, it appears that the unicode specialization is still worth it.

    The cost of the lookup indirection appears to be completely insignificant (i.e. it doesn't harm the non-unicode case), while the unicode-specialized lookup has a measurable benefit in the use case of deduplicating an iterable of strings.

    @malemburg
    Member

    On 09.01.2015 09:33, Raymond Hettinger wrote:

    > I'm withdrawing this one. After more work trying many timings on multiple compilers and various sizes and kinds of datasets, it appears that the unicode specialization is still worth it.
    >
    > The cost of the lookup indirection appears to be completely insignificant (i.e. it doesn't harm the non-unicode case), while the unicode-specialized lookup has a measurable benefit in the use case of deduplicating an iterable of strings.

    Thanks, Raymond, for the additional testing :-)

    I did a grep over the Python C source code and it seems that sets are
    only used by Python/symtable.c for anything mildly performance-relevant
    (which IIRC is used by the bytecode compiler) -
    and those sets have Unicode strings as members.

    The stdlib uses sets with both Unicode strings and integers
    as members. From looking at the grep hits, it seems that Unicode
    strings are more commonly used than integers in the stdlib
    as set members, e.g. for method names, module names and character
    sets.

    @rhettinger
    Contributor Author

    Thanks, Marc. I appreciate the effort of looking at use cases.

    I'm reopening this one because I have an alternative patch that simplifies the code but keeps the unicode specialization. It replaces the lookkey indirection with a fast and predictable inline test for PyUnicode_CheckExact. The timings show no measurable difference, but the code is simpler and the set object is smaller by one pointer (see the sketch below). This simplifies future maintenance and development, as well as making the code more pleasant.
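    Roughly, the one-pointer size reduction comes from dropping the per-set dispatch slot (an abridged sketch of PySetObject from Include/setobject.h; remaining fields elided):

        typedef struct {
            PyObject_HEAD
            Py_ssize_t fill;             /* # of active + dummy entries */
            Py_ssize_t used;             /* # of active entries */
            Py_ssize_t mask;             /* table size - 1 */
            setentry *table;
            /* setentry *(*lookup)(PySetObject *, PyObject *, Py_hash_t);
               ^-- removed: set_lookkey() is now called directly and makes
               an inline PyUnicode_CheckExact() test instead of dispatching */
            Py_hash_t hash;              /* only used by frozenset */
            /* ... remaining fields unchanged ... */
        } PySetObject;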

    @rhettinger rhettinger reopened this Jan 25, 2015
    @serhiy-storchaka
    Member

    Maybe add the unicode specialization right in PyObject_RichCompareBool?

    @rhettinger
    Contributor Author

    > Maybe add the unicode specialization right in PyObject_RichCompareBool?

    That's a possibility, but it has much broader ramifications and might or might not be the right thing to do. I'll leave that for someone else to pursue and keep my sights on sets for a while. (If you do modify PyObject_RichCompareBool, take a look at moving the surrounding incref/decref pair as well.)
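    Purely as an illustration of the suggestion (this is not the change committed for this issue, and it assumes unicode_eq() is made visible in Objects/object.c):

        int
        PyObject_RichCompareBool(PyObject *v, PyObject *w, int op)
        {
            PyObject *res;
            int ok;

            if (v == w) {                   /* identity implies equality */
                if (op == Py_EQ)
                    return 1;
                else if (op == Py_NE)
                    return 0;
            }
            /* hypothetical specialization, mirroring the set/dict fast path */
            if ((op == Py_EQ || op == Py_NE) &&
                PyUnicode_CheckExact(v) && PyUnicode_CheckExact(w)) {
                ok = unicode_eq(v, w);
                return (op == Py_EQ) ? ok : !ok;
            }
            res = PyObject_RichCompare(v, w, op);
            if (res == NULL)
                return -1;
            if (PyBool_Check(res))
                ok = (res == Py_True);
            else
                ok = PyObject_IsTrue(res);
            Py_DECREF(res);
            return ok;
        }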

    @python-dev
    Mannequin

    python-dev mannequin commented Jan 26, 2015

    New changeset 20f54cdf351d by Raymond Hettinger in branch 'default':
    Issue bpo-23119: Simplify setobject by inlining the special case for unicode equality testing.
    https://hg.python.org/cpython/rev/20f54cdf351d

    @ezio-melotti ezio-melotti transferred this issue from another repository Apr 10, 2022