
PEP 456 Secure and interchangeable hash algorithm #63382

Closed
tiran opened this issue Oct 6, 2013 · 56 comments
Assignees
Labels
interpreter-core (Objects, Python, Grammar, and Parser dirs) type-feature A feature request or enhancement

Comments

@tiran
Member

tiran commented Oct 6, 2013

BPO 19183
Nosy @ncoghlan, @pitrou, @vstinner, @tiran, @serhiy-storchaka
Files
  • pep-0456-1.patch
  • 31ce9488be1c.diff
  • 257597d20fa8.diff
  • 19427e9cc500.diff
  • b8d39bf9ca4a.diff
  • 19427e9cc500-simplified.diff
  • fnv.c
  • 4756e9ed0328.diff
  • fb2f9c0bbca9.diff
  • ac521cef665a.diff
  • Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = 'https://github.com/tiran'
    closed_at = <Date 2013-11-20.16:42:15.128>
    created_at = <Date 2013-10-06.14:58:27.551>
    labels = ['interpreter-core', 'type-feature']
    title = 'PEP 456 Secure and interchangeable hash algorithm'
    updated_at = <Date 2013-11-21.09:29:51.388>
    user = 'https://github.com/tiran'

    bugs.python.org fields:

    activity = <Date 2013-11-21.09:29:51.388>
    actor = 'python-dev'
    assignee = 'christian.heimes'
    closed = True
    closed_date = <Date 2013-11-20.16:42:15.128>
    closer = 'christian.heimes'
    components = ['Interpreter Core']
    creation = <Date 2013-10-06.14:58:27.551>
    creator = 'christian.heimes'
    dependencies = []
    files = ['31974', '32384', '32388', '32392', '32398', '32402', '32414', '32417', '32440', '32606']
    hgrepos = ['212']
    issue_num = 19183
    keywords = ['patch']
    message_count = 56.0
    messages = ['199078', '199113', '199114', '199137', '199139', '199142', '199144', '199145', '199146', '199147', '199148', '199149', '199884', '201193', '201405', '201441', '201448', '201449', '201459', '201465', '201466', '201531', '201548', '201549', '201550', '201551', '201553', '201554', '201555', '201561', '201563', '201564', '201566', '201582', '201625', '201626', '201629', '201636', '201677', '201849', '202799', '202949', '203003', '203019', '203024', '203048', '203050', '203066', '203457', '203466', '203468', '203469', '203471', '203472', '203503', '203595']
    nosy_count = 8.0
    nosy_names = ['ncoghlan', 'pitrou', 'vstinner', 'christian.heimes', 'Arfrever', 'neologix', 'python-dev', 'serhiy.storchaka']
    pr_nums = []
    priority = 'normal'
    resolution = 'fixed'
    stage = 'resolved'
    status = 'closed'
    superseder = None
    type = 'enhancement'
    url = 'https://bugs.python.org/issue19183'
    versions = ['Python 3.4']

    @tiran
    Member Author

    tiran commented Oct 6, 2013

    The patch implements the current state of PEP-456 plus a configure option to select the hash algorithm. I have tested it only on 64bit Linux so far.

    @tiran tiran added interpreter-core (Objects, Python, Grammar, and Parser dirs) type-feature A feature request or enhancement labels Oct 6, 2013
    @pitrou
    Member

    pitrou commented Oct 6, 2013

    Here is a simple benchmark (Linux, gcc 4.7.3):

    $ ./python -m timeit -s "words=[w for line in open('LICENSE') for w in line.split()]; import collections" "c = collections.Counter(words); c.most_common(10)"
    • 64-bit build, before: 313 usec per loop

    • 64-bit build, after: 298 usec per loop

    • 32-bit build, before: 328 usec per loop

• 32-bit build, after: 329 usec per loop

    • x32 build, before: 291 usec per loop

    • x32 build, after: 284 usec per loop

    @pitrou
    Member

    pitrou commented Oct 6, 2013

    Microbenchmarking hash computation (Linux, gcc 4.7.3):

    • Short strings:
      python -m timeit -s "b=b'x'*20" "hash(memoryview(b))"
    • 64-bit build, before: 0.263 usec per loop

    • 64-bit build, after: 0.263 usec per loop

    • 32-bit build, before: 0.303 usec per loop

    • 32-bit build, after: 0.358 usec per loop

    • Long strings:
      python -m timeit -s "b=b'x'*1000" "hash(memoryview(b))"
    • 64-bit build, before: 1.56 usec per loop

    • 64-bit build, after: 1.03 usec per loop

    • 32-bit build, before: 1.61 usec per loop

    • 32-bit build, after: 2.46 usec per loop

    Overall, performance looks fine to me.

    @tiran
    Member Author

    tiran commented Oct 7, 2013

    Your benchmark is a bit unrealistic because it times the hash cache most of the time. Here is a better benchmark (but bytes-only):

    $ ./python -m timeit -s "words=[w.encode('utf-8') for line in open('../LICENSE') for w in line.split()]; import collections" -- "c = collections.Counter(memoryview(w) for w in words); c.most_common(10)"
    1000 loops, best of 3: 1.63 msec per loop

    This increases the number of hash calculations from about 28k to over 8.4 million.

    I also added a little statistics function to see how long typical strings are. The artificial benchmark:

    hash 1: 115185
    hash 2: 1440956
    hash 3: 1679976
    hash 4: 873769
    hash 5: 948124
    hash 6: 651799
    hash 7: 676707
    hash 8: 545459
    hash 9: 523615
    hash 10: 421232
    hash 11: 161641
    hash 12: 140797
    hash 13: 86826
    hash 14: 41702
    hash 15: 41570
    hash 16: 332
    hash 17: 211
    hash 18: 4275
    hash 19: 205
    hash 20: 131
    hash 21: 4197
    hash 22: 70
    hash 23: 35
    hash 24: 44
    hash 25: 4145
    hash 26: 4137
    hash 27: 4137
    hash 28: 21
    hash 29: 4124
    hash 30: 8
    hash 31: 5
    hash 32: 1
    hash other: 28866
    hash total: 8404302
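The shape of this distribution can be approximated from the outside with a short script over the same word list (a hedged sketch; `length_histogram` is a hypothetical helper, and it counts word lengths rather than actual interpreter hash calls):

```python
from collections import Counter

def length_histogram(lines):
    # Count how many hash inputs of each byte length the Counter
    # benchmark above would produce (one per word, hash cache ignored).
    words = [w.encode('utf-8') for line in lines for w in line.split()]
    return Counter(len(w) for w in words)

# usage: length_histogram(open('LICENSE'))
```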

    And here are the statistics of a full test run.

    hash 1: 18935
    hash 2: 596761
    hash 3: 643973
    hash 4: 645399
    hash 5: 576231
    hash 6: 742531
    hash 7: 497214
    hash 8: 330890
    hash 9: 291301
    hash 10: 93206
    hash 11: 1417900
    hash 12: 160802
    hash 13: 58675
    hash 14: 49324
    hash 15: 48068
    hash 16: 90634
    hash 17: 24163
    hash 18: 66079
    hash 19: 23408
    hash 20: 20695
    hash 21: 16424
    hash 22: 17236
    hash 23: 59135
    hash 24: 10368
    hash 25: 6047
    hash 26: 6784
    hash 27: 5565
    hash 28: 5931
    hash 29: 3469
    hash 30: 4220
    hash 31: 2652
    hash 32: 2911
    hash other: 72042
    hash total: 6608973

    About 50% of the hashed elements are between 1 and 5 characters long. A realistic hash collision attack on 32-bit builds needs at least 7 or 8 characters. I see the chance for a micro-optimization! :)
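The micro-optimization hinted at here can be sketched as a length-based dispatch (illustrative only; `cheap_hash` stands in for a fast unkeyed hash such as DJBX33A, `secure_hash` for keyed SipHash24, and the cutoff of 7 is an assumption taken from the attack threshold above):

```python
CUTOFF = 7  # assumed: below the realistic 32-bit collision-attack length

def cheap_hash(data: bytes) -> int:
    # placeholder for a fast non-cryptographic hash (DJBX33A-style)
    h = 5381
    for b in data:
        h = ((h * 33) ^ b) & 0xFFFFFFFFFFFFFFFF
    return h

def secure_hash(data: bytes) -> int:
    # placeholder for the keyed SipHash24 path; Python's own hash()
    # stands in here
    return hash(data)

def hybrid_hash(data: bytes) -> int:
    # short inputs take the cheap path, long ones the secure path
    return cheap_hash(data) if len(data) <= CUTOFF else secure_hash(data)
```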

    @pitrou
    Member

    pitrou commented Oct 7, 2013

    Your benchmark is a bit unrealistic because it times the hash cache
    most of the time. Here is a better benchmark (but bytes-only):

    $ ./python -m timeit -s "words=[w.encode('utf-8') for line in
    open('../LICENSE') for w in line.split()]; import collections" -- "c
    = collections.Counter(memoryview(w) for w in words);
    c.most_common(10)"
    1000 loops, best of 3: 1.63 msec per loop

    Good point. Can you also post all benchmark results?

    @tiran
    Member Author

    tiran commented Oct 7, 2013

    unmodified Python:

    1000 loops, best of 3: 307 usec per loop (unicode)
    1000 loops, best of 3: 930 usec per loop (memoryview)

    SipHash:

    1000 loops, best of 3: 300 usec per loop (unicode)
    1000 loops, best of 3: 906 usec per loop (memoryview)

    @python-dev
    Mannequin

    python-dev mannequin commented Oct 7, 2013

    New changeset c960bed22bf6 by Christian Heimes in branch 'default':
    Make Nick BDFG delegate
    http://hg.python.org/peps/rev/c960bed22bf6

    @serhiy-storchaka
    Member

I propose extracting all hash-related stuff from Include/object.h into a separate file, Include/pyhash.h, and perhaps moving Objects/hash.c to Python/pyhash.c.

    @serhiy-storchaka
    Member

Since the hash algorithm is determined at compile time, the _Py_HashSecret_t structure and the _Py_HashSecret function are redundant. We only need to define the _Py_HashBytes function.

Currently the SipHash algorithm doesn't work with unaligned data.

    @tiran
    Member Author

    tiran commented Oct 7, 2013

    Sure it does. The test for unaligned hashing passes without an error or a segfault.

    @serhiy-storchaka
    Member

And note that the quality of the FNV hash function is reduced (msg186403). We need to "shuffle" the result's bits.
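One way to "shuffle" the bits is a final avalanche step appended to the FNV loop. A minimal sketch, assuming the 64-bit FNV-1 parameters plus a MurmurHash3-style finalizer (not what CPython actually ships):

```python
def fnv1_mixed(data: bytes) -> int:
    # 64-bit FNV-1 core loop
    h = 0xCBF29CE484222325
    for b in data:
        h = ((h * 0x100000001B3) & 0xFFFFFFFFFFFFFFFF) ^ b
    # final avalanche so that low-entropy inputs still flip high bits
    # (constants borrowed from the MurmurHash3 finalizer, assumption)
    h ^= h >> 33
    h = (h * 0xFF51AFD7ED558CCD) & 0xFFFFFFFFFFFFFFFF
    h ^= h >> 33
    return h
```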

    @serhiy-storchaka
    Member

    The test for unaligned hashing passes without an error or a segfault.

    On some platforms it can work without a segfault.

    @ncoghlan
    Contributor

    Added some structural comments to the patch. I'll defer to Serhiy when it comes to assessing the algorithm details :)

    @tiran
    Member Author

    tiran commented Oct 24, 2013

    I have created a clone for PEP-456 and applied your suggestions. I'm still looking for a nice API to handle the hash definition. Do you have some suggestions?

    @tiran
    Member Author

    tiran commented Oct 27, 2013

    Nick, can you do another review? All tests should pass on common boxes.

    The latest code hides the struct with the hash function. I have added a configure block that detects platforms that don't support unaligned memory access. It works correctly on the SPARC Solaris 10 box.

    I'm still looking for a 64bit big endian box and a 32bit big endian box that support unaligned memory.

    @pitrou
    Member

    pitrou commented Oct 27, 2013

    I'm still looking for a 64bit big endian box

    Have you tried the PPC64 PowerLinux box? It's in the stable buildbots for a reason :-)

    @tiran
    Member Author

    tiran commented Oct 27, 2013

    I can no longer find the configuration for a custom path. It's still documented but there is no field for "repo path".

    http://buildbot.python.org/all/buildslaves/edelsohn-powerlinux-ppc64

    @pitrou
    Member

    pitrou commented Oct 27, 2013

    I can no longer find the configuration for a custom path. It's still documented but there is no field for "repo path".

    http://buildbot.python.org/all/builders/PPC64%20PowerLinux%20custom

    (usually, just replace "3.x" with "custom" in the URL)

    @tiran
    Member Author

    tiran commented Oct 27, 2013

    Nick, please review the latest patch. I have addressed Antoine's review in 257597d20fa8.diff. I'll update the PEP as soon as you are happy with the patch.

    @serhiy-storchaka
    Member

I suggest moving only _Py_HashBytes() and the string-hash-related constants to Include/pyhash.h and Python/pyhash.c, and not touching PyObject_Hash(), _Py_HashDouble(), etc. Then anybody who wants to change the string hashing algorithm only needs to replace these two minimal files.

    @tiran
    Member Author

    tiran commented Oct 27, 2013

    PyObject_Hash() and PyObject_HashNotImplemented() should not have been moved to pyhash.h. But the other internal helper methods should be kept together.

    @ncoghlan
    Contributor

    On 27 Oct 2013 23:46, "Christian Heimes" <report@bugs.python.org> wrote:

    Christian Heimes added the comment:

    Nick, please review the latest patch. I have addressed Antoine's review
    in 257597d20fa8.diff. I'll update the PEP as soon as you are happy with the
    patch.

    Comments from the others sound promising, I should be able to take a look
    myself tomorrow night.

    @serhiy-storchaka
    Member

Christian, why is PY_HASH_EXTERNAL here? Do you plan to use it in any official build? I think that in a custom build of Python the whole pyhash.c and pyhash.h files can be replaced.

Once you get rid of PY_HASH_EXTERNAL, you could also get rid of PyHash_FuncDef, PyHash_Func, etc.

Why are _Py_HashDouble() and _Py_HashPointer() moved to pyhash.c? They are hash-algorithm agnostic, and it is unlikely they will be redefined in a custom build.

You don't need the HAVE_ALIGNED_REQUIRED macro if you use PY_UHASH_CPY (or something similar for exactly 64 bits) in siphash24. On platforms where aligned access is required you would use a per-byte copy; otherwise you would use the fast 64-bit copy.

    @tiran
    Member Author

    tiran commented Oct 28, 2013

    Am 28.10.2013 16:18, schrieb Serhiy Storchaka:

    Christian, why is PY_HASH_EXTERNAL here? Do you plan to use it in any official build? I think that in a custom build of Python the whole pyhash.c and pyhash.h files can be replaced.

    Because you can't simply replace the files: pyhash.c also contains
    _Py_HashBytes() and _PyHash_Fini(). With PY_HASH_EXTERNAL, embedders can
    simply define PY_HASH_ALGORITHM PY_HASH_EXTERNAL and provide the extern
    struct in a separate object file.

    Once you get rid of PY_HASH_EXTERNAL, you could also get rid of PyHash_FuncDef, PyHash_Func, etc.

    I don't understand why you want me to get rid of the struct. What's your
    argument against the struct? I like the PyHash_FuncDef because it groups
    all information (func ptr, name, hash metadata) in a single structure.

    Why are _Py_HashDouble() and _Py_HashPointer() moved to pyhash.c? They are hash-algorithm agnostic, and it is unlikely they will be redefined in a custom build.

    I have moved the functions to pyhash.c in order to keep all related
    internal function in one file. They do not belong in Objects/object.c.

    You don't need the HAVE_ALIGNED_REQUIRED macro if you use PY_UHASH_CPY (or something similar for exactly 64 bits) in siphash24. On platforms where aligned access is required you would use a per-byte copy; otherwise you would use the fast 64-bit copy.

    I'm not going to make siphash24 compatible with platforms that require
    aligned memory for integers. It's an unnecessary complication and
    slow-down for all common platforms. The feature will simply not be
    available on archaic architectures.

    Seriously, nobody gives a ... about SPARC and MIPS. :) It's nice that
    Python still works on these CPU architectures. But I neither want to
    deviate from the SipHash24 implementation nor make the code slower on
    all relevant platforms such as X86 and X86_64.

    @tiran
    Member Author

    tiran commented Oct 28, 2013

    Serhiy, I would like to land my patch before beta 1 hits the fan. We can always improve the code during beta. Right now I don't want to mess around with SipHash24 code. That includes non-64bit platforms as well as architectures that enforce aligned memory for integers.

    @neologix
    Mannequin

    neologix mannequin commented Oct 28, 2013

    Seriously, nobody gives a ... about SPARC and MIPS. :) It's nice that
    Python still works on these CPU architectures. But I neither want to
    deviate from the SipHash24 implementation nor make the code slower on
    all relevant platforms such as X86 and X86_64.

    Well, unaligned memory access is usually slower on all architectures :-)
    Also, I think some ARM architectures don't support unaligned access, so
    it's not really a thing of the past...

    @tiran
    Member Author

    tiran commented Oct 28, 2013

    Am 28.10.2013 16:59, schrieb Charles-François Natali:

    Well, unaligned memory access is usually slower on all architectures :-)
    Also, I think some ARM architectures don't support unaligned access, so
    it's not really a thing of the past...

    On modern computers it's either not slower or just a tiny bit slower.
    http://lemire.me/blog/archives/2012/05/31/data-alignment-for-speed-myth-or-reality/

    Python's str and bytes datatype are always aligned properly. The
    majority of bytearray and memoryview instances are aligned, too.
    Unaligned memory access is a rare case for most applications. About 50% of
    strings have less than 8 bytes (!), 90% have less than 16 bytes. For the
    Python's test suite the numbers are even smaller: ~45% <=5 bytes, ~90%
    <=12 bytes.

    You might see a 10% slowdown for very long and unaligned byte arrays on
    some older CPUs. I think we can safely ignore the special case. Any
    special case for unaligned memory will introduce additional overhead
    that *will* slow down the common path.

    Oh, and ARM has unaligned memory access:
    http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0360f/CDFEJCBH.html

    @vstinner
    Member

    To support Windows 32 bit, the following code in PC/pyconfig.h can be modified to use __int64 or _W64: see ssize_t definition below in the same file.

    #ifndef PY_UINT64_T
    #if SIZEOF_LONG_LONG == 8
    #define HAVE_UINT64_T 1
    #define PY_UINT64_T unsigned PY_LONG_LONG
    #endif
    #endif

    @tiran
    Member Author

    tiran commented Oct 29, 2013

    Victor:
    I have added the license to Doc/license.rst and created a new ticket for PY_UINT64_T on Windows: bpo-19433.

    Nick:
    The memory layout of the hash secret is now documented. I have renamed the members to reflect their purpose, too. http://hg.python.org/features/pep-456/file/f0a7e606c2d0/Include/pyhash.h#l32

    I'll update my PEP shortly and address the memory layout of _Py_HashSecret_t, the small string hashing optimization and performance/memory alignment.

    @serhiy-storchaka
    Member

    About memcpy(). Here is a sample file (see the attached fnv.c). Compile it to assembly:

    gcc -O2 -S -masm=intel fnv.c

    With memcpy() the main loop compiles to:

    .L3:
    mov esi, DWORD PTR [ebx]
    imul eax, eax, 1000003
    add ebx, 4
    xor eax, esi
    sub ecx, 1
    mov DWORD PTR [esp+24], esi
    jne .L3

    With a per-byte copy it compiles to:

    .L3:
    mov dl, BYTE PTR [ecx]
    imul eax, eax, 1000003
    sub ebp, 1
    movzx ebx, BYTE PTR [ecx+1]
    movzx edi, BYTE PTR [ecx+2]
    movzx esi, BYTE PTR [ecx+3]
    add ecx, 4
    mov dh, bl
    sal edi, 16
    movzx edx, dx
    sal esi, 24
    or edx, edi
    or edx, esi
    xor eax, edx
    cmp ebp, -1
    jne .L3

    @tiran
    Member Author

    tiran commented Oct 31, 2013

    I had to add the conversion from LE to host endianness. The missing conversion was degrading hash value dispersion.
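The effect can be illustrated with the struct module (a sketch: the reference SipHash24 reads its input as little-endian 64-bit words, so a host with a different native byte order must swap before the mixing rounds):

```python
import struct

block = bytes(range(1, 9))                 # one 8-byte input block
le_word = struct.unpack('<Q', block)[0]    # what the algorithm specifies
native_be = struct.unpack('>Q', block)[0]  # what a big-endian host reads natively

# Without an explicit LE-to-host conversion, a big-endian host feeds a
# different 64-bit word into the mixing rounds, changing dispersion.
```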

    @tiran
    Member Author

    tiran commented Nov 13, 2013

    Hi Nick,

    I have updated the patch and the PEP text. The new version has small string hash optimization disabled. The other changes are mostly cleanup, reformatting and simplifications.

    Can you please do a review so I can get the patch into 3.4 before beta1 is released?

    @ncoghlan
    Contributor

    I reviewed the latest PEP text at http://www.python.org/dev/peps/pep-0456/

    I'm almost prepared to accept the current version of the implementation, but there's one technical decision to be clarified and a few placeholders in the PEP that need to be cleaned up prior to formal acceptance:

    • The rationale for turning off the small string optimisation by default rather than setting the cutoff to 7 bytes isn't at all clear to me. A consistent 3-5% speed difference on the benchmark suite isn't trivial, and if we have the small string optimization off by default, why aren't we just deleting that code instead?

    • A link to the benchmark suite at http://hg.python.org/benchmarks should be included at the appropriate places in the PEP

    • The "Further things to consider" section needs to be moved to a paragraph under "Discussion" describing the current implementation (i.e. the hash equivalence is tolerated for simplicity and consistency)

    • The "TBD" in the performance section needs to go. Reference should be made to the numbers in the small string optimisation section.

    • The performance numbers need to be clear on what version of the feature branch was used to obtain them (preferably the one you plan to commit!).

    @ncoghlan ncoghlan assigned tiran and unassigned ncoghlan Nov 15, 2013
    @tiran
    Member Author

    tiran commented Nov 16, 2013

    Here are benchmarks on two Linux machines. It looks like SipHash24 takes advantage of newer CPUs. I'm a bit puzzled by the results. Or maybe my super simple and naive analyzer doesn't give sensible results...

    https://bitbucket.org/tiran/pep-456-benchmarks/

    @ncoghlan
    Contributor

    Thanks - are those numbers with the current feature branch, and hence no small string optimization?

    To be completely clear, I'm happy to accept a performance penalty to fix the hash algorithm. I'd just like to know exactly how big a penalty I'm accepting, and whether taking advantage of the small string optimization makes it measurably smaller.

    @pitrou
    Member

    pitrou commented Nov 16, 2013

    For the record, it's better to use a geometric mean when aggregating benchmark results into a single score.
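A minimal sketch of that aggregation (assuming per-benchmark speedup ratios as input; `geometric_mean` is a hypothetical helper, not part of the benchmark suite):

```python
import math

def geometric_mean(ratios):
    # The geometric mean treats a 2x speedup and a 2x slowdown as
    # cancelling out, unlike the arithmetic mean, which would bias
    # the aggregate score toward whichever side of the ratio is used.
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))
```

For example, `geometric_mean([2.0, 0.5])` is 1.0, whereas the arithmetic mean of the same ratios would report 1.25.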

    @pitrou
    Member

    pitrou commented Nov 16, 2013

    So, amusingly, Christian's patch seems to be 4-5% faster than vanilla on many benchmarks here (Sandy Bridge Core i5, 64-bit, gcc 4.8.1). A couple of benchmarks are a couple % slower, but nothing severe.

    This without the small strings optimization.

    On top of that, the small strings optimization seems to have varying effects, some positive, some negative, so it doesn't seem to be desirable (on this platform anyway).

    @pitrou
    Member

    pitrou commented Nov 16, 2013

    Benchmark report (without the small strings optimization):

    http://bpaste.net/show/UohtA8dmSREbrtsJYfTI/

    @tiran
    Member Author

    tiran commented Nov 16, 2013

    The numbers are between cpython default tip and my feature branch. I have pulled and merged all upstream changes into my feature branch yesterday. The results with "sso" in the file name are with small string optimization.

    Performance greatly depends on compiler and CPU. In general SipHash24 is faster on modern compilers and modern CPUs. MSVC generates slower code for SipHash24 but I haven't optimized the code for MSVC yet. Perhaps Serhiy is able to do his magic. :)

    PS: I have uploaded more benchmarks. The directories contain the raw output and CSV files.

    @tiran
    Member Author

    tiran commented Nov 20, 2013

    The PEP should be ready now. I have addressed your input in http://hg.python.org/peps/rev/fbe779221a7a

    @tiran tiran assigned ncoghlan and unassigned tiran Nov 20, 2013
    @python-dev
    Mannequin

    python-dev mannequin commented Nov 20, 2013

    New changeset adb471b9cba1 by Christian Heimes in branch 'default':
    Issue bpo-19183: Implement PEP-456 'secure and interchangeable hash algorithm'.
    http://hg.python.org/cpython/rev/adb471b9cba1

    @python-dev
    Mannequin

    python-dev mannequin commented Nov 20, 2013

    New changeset 422ed27b62ce by Christian Heimes in branch 'default':
    Issue bpo-19183: test_gdb's test_dict was failing on some machines as the order of dict keys has changed again.
    http://hg.python.org/cpython/rev/422ed27b62ce

    @tiran tiran closed this as completed Nov 20, 2013
    @tiran tiran assigned tiran and unassigned ncoghlan Nov 20, 2013
    @python-dev
    Mannequin

    python-dev mannequin commented Nov 20, 2013

    New changeset 11cb1c8faf11 by Victor Stinner in branch 'default':
    Issue bpo-19183: Fix repr() tests of test_gdb, hash() is now platform dependent
    http://hg.python.org/cpython/rev/11cb1c8faf11

    @vstinner
    Member

    test_gdb is not the only test that relies on exact repr() values; there is also test_functools:

    http://buildbot.python.org/all/builders/AMD64%20OpenIndiana%203.x/builds/6875/steps/test/logs/stdio

    ======================================================================
    FAIL: test_repr (test.test_functools.TestPartialC)
    ----------------------------------------------------------------------

    Traceback (most recent call last):
      File "/export/home/buildbot/64bits/3.x.cea-indiana-amd64/build/Lib/test/test_functools.py", line 174, in test_repr
        repr(f))
    AssertionError: 'func[51 chars]88>, b=<object object at 0xffffdd7fff790440>, [36 chars]00>)' != 'func[51 chars]88>, a=<object object at 0xffffdd7fff790400>, [36 chars]40>)'
    - functools.partial(<function capture at 0xffffdd7ffe64d788>, b=<object object at 0xffffdd7fff790440>, a=<object object at 0xffffdd7fff790400>)
    + functools.partial(<function capture at 0xffffdd7ffe64d788>, a=<object object at 0xffffdd7fff790400>, b=<object object at 0xffffdd7fff790440>)

    ======================================================================
    FAIL: test_repr (test.test_functools.TestPartialCSubclass)
    ----------------------------------------------------------------------

    Traceback (most recent call last):
      File "/export/home/buildbot/64bits/3.x.cea-indiana-amd64/build/Lib/test/test_functools.py", line 174, in test_repr
        repr(f))
    AssertionError: 'Part[49 chars]88>, b=<object object at 0xffffdd7fff790540>, [36 chars]00>)' != 'Part[49 chars]88>, a=<object object at 0xffffdd7fff790500>, [36 chars]40>)'
    - PartialSubclass(<function capture at 0xffffdd7ffe64d788>, b=<object object at 0xffffdd7fff790540>, a=<object object at 0xffffdd7fff790500>)
    + PartialSubclass(<function capture at 0xffffdd7ffe64d788>, a=<object object at 0xffffdd7fff790500>, b=<object object at 0xffffdd7fff790540>)

    @vstinner vstinner reopened this Nov 20, 2013
    @python-dev
    Mannequin

    python-dev mannequin commented Nov 20, 2013

    New changeset 961d832d8734 by Christian Heimes in branch 'default':
    Issue bpo-19183: too many tests depend on the sort order of repr().
    http://hg.python.org/cpython/rev/961d832d8734

    @tiran
    Member Author

    tiran commented Nov 20, 2013

    The problems have been resolved.

    @tiran tiran closed this as completed Nov 20, 2013
    @python-dev
    Mannequin

    python-dev mannequin commented Nov 21, 2013

    New changeset eec4758e3a45 by Victor Stinner in branch 'default':
    Issue bpo-19183: Simplify test_gdb
    http://hg.python.org/cpython/rev/eec4758e3a45

    @ezio-melotti ezio-melotti transferred this issue from another repository Apr 10, 2022