This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: Hash function is not randomized properly
Type: security Stage: resolved
Components: Versions: Python 3.3, Python 2.7
process
Status: closed Resolution: fixed
Dependencies: Superseder:
Assigned To: christian.heimes Nosy List: Arfrever, Bob.Ziuchkovski, Giovanni.Bajo, PaulMcMillan, ReneSac, Vlado.Boza, alex, arigo, benjamin.peterson, bkabrda, camara, christian.heimes, cvrebert, dmalcolm, dstufft, editor-buzzfeed, gregory.p.smith, iElectric, isoschiz, koniiiik, lemburg, mark.dickinson, ncoghlan, pconnell, sbermeister, serhiy.storchaka, vstinner, Łukasz.Rekucki
Priority: normal Keywords: patch

Created on 2012-04-19 17:58 by Vlado.Boza, last changed 2022-04-11 14:57 by admin. This issue is now closed.

Files
File name Uploaded Description Edit
find_hash_collision.py vstinner, 2012-04-19 23:26
hash.py vstinner, 2012-04-26 22:58
hash-attack-3.patch lemburg, 2012-11-07 08:34
_Py_Hash_Sip24.S serhiy.storchaka, 2012-11-07 11:51
Repositories containing patches
http://hg.python.org/sandbox/cheimes
Messages (89)
msg158736 - (view) Author: Vlado Boza (Vlado.Boza) Date: 2012-04-19 17:58
Fix of this http://bugs.python.org/issue13703 is broken.

tl;dr: There are only 256 different hash functions (compare that to the size of the _Py_HashSecret prefix and suffix), and whether keys collide or not depends only on the last 8 bits of the prefix.

The problem with the current randomization of the hash function is the following:
the suffix has no influence on whether two keys collide (it is XORed in after everything else),
and everything except the last 8 bits of the prefix has no influence either. Try adding 0x474200 to the prefix and see what happens (it will add 0x474200 to the resulting hash).

To mount a DoS attack, an attacker must do the following:
generate a set of colliding keys for each of the 256 possible values of the last 8 bits, try each of these sets (one will have a significantly longer response time), and then keep repeating the one that was slow.
msg158744 - (view) Author: Vlado Boza (Vlado.Boza) Date: 2012-04-19 20:40
E.g. these strings collide for every prefix ending in 0xcd:
0x27fd5a18, 0x26fe78fa
msg158759 - (view) Author: Dave Malcolm (dmalcolm) (Python committer) Date: 2012-04-19 21:31
Thanks for filing this bug report.

I'm not seeing the equal hashes you describe.

I'm using this recipe to hardcode a specific prefix and print the hashes using it:
$ gdb --eval-command="break _PyRandom_Init" --eval-command="run" --eval-command="print _Py_HashSecret" --eval-command="set _Py_HashSecret.prefix=0xcdcdcdcd" --eval-command="print _Py_HashSecret" --eval-command="continue" -eval-command="continue" --args python -c "a='\x27\xfd\x5a\x18'; b='\x26\xfe\x78\xfa'; print(hash(a)); print(hash(b))"


On a 32-bit build of Python 2.7.3 (i686), if I set _Py_HashSecret.prefix=0xcdcdcdcd, I get non-equal hashes for the data you specify (output trimmed somewhat for conciseness):

  $1 = {prefix = 0, suffix = 0}
  $2 = {prefix = -842150451, suffix = 0}
  Continuing.
  -121255142
  -1199906326

Similarly, on a 64-bit build of Python 2.7.3 (x86_64), I get non-equal hashes:
  $1 = {prefix = 0, suffix = 0}
  $2 = {prefix = 3452816845, suffix = 0}
  -3992804574342296806
  -8147489705433570838

Did I misunderstand the report?  Thanks.
msg158773 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2012-04-19 23:26
I don't understand this issue: can you write a short script to test a collision? 

"E.g this strings collide for every prefix ending on 0xcd"

Do you mean that prefix & 0xff == 0xcd?

"0x27fd5a18, 0x26fe78fa"

Is it a byte string or an Unicode string? b'\x27\xfd\x5a\x18' and b'\x26\xfe\x78\xfa'?

--

Using PYTHONHASHSEED environment variable, it's easy to find two values generating the same _Py_HashSecret. Just one example:

PYTHONHASHSEED=3035016679:
* _Py_HashSecret = {0xcd5192eff3fd4d58, 0x3926b1431b200720}
PYTHONHASHSEED=4108758503:
*  _Py_HashSecret = {0xcd5192eff3fd4d58, 0x3926b1431b200720}

--

I wrote find_hash_collision.py to try to compute a collision, but the program fails with:
---
Fail to generate a new seed!
# seeds = 65298
---
So it fails to generate a new random seed after testing 65298 different seeds. I ran the script with a function generating seeds whose prefix ends with the 0xCD byte.

See the attached program: it generates a random seed. Uncomment "seed = generate_seed_0xCD()" if the prefix must end with the 0xCD byte.
msg158778 - (view) Author: Vlado Boza (Vlado.Boza) Date: 2012-04-19 23:53
My bad (I checked only the function in C++, not the result in Python).
This should work on 32-bit:
Prefix: anything ending in 0x00
Suffix: anything
Strings: "\x00\xcf\x0b\x96\x19", "\x00\x6d\x29\x45\x18"
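
The claim is easy to check against a pure-Python model of the randomized 32-bit string hash (a sketch following the 2.7.3 algorithm discussed in this thread; model_hash is an illustrative name, not CPython code):

    M32 = 0xFFFFFFFF

    def model_hash(data, prefix, suffix):
        # x = prefix; x ^= s[0] << 7; then x = (1000003*x) ^ c per byte;
        # finally x ^= len(s); x ^= suffix  (all mod 2**32)
        x = (prefix ^ (data[0] << 7)) & M32
        for c in data:
            x = ((1000003 * x) ^ c) & M32
        return (x ^ len(data) ^ suffix) & M32

    a = b"\x00\xcf\x0b\x96\x19"
    b = b"\x00\x6d\x29\x45\x18"
    # The suffix is XORed in last, so it cancels out of any collision;
    # only the low byte of the prefix decides whether one occurs:
    for p in (0x00, 0x4700, 0xdead00, 0xcafe00):    # all end in 0x00
        assert model_hash(a, p, 0x1234) == model_hash(b, p, 0x1234)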
msg158780 - (view) Author: Vlado Boza (Vlado.Boza) Date: 2012-04-19 23:58
For example, take this script (on 32-bit):
ha = hash("\x00\xcf\x0b\x96\x19")
hb = hash("\x00\x6d\x29\x45\x18")
if ha == hb:
  print "collision"

And run the following:
for i in `seq 0 25`; do echo $i; for j in `seq 0 100`; do ./python -R x.py; done; done;

It gives a collision too many times (around 9 out of 2500 runs - consistent with the expected rate of 2500/256 ≈ 10 for a 1-in-256 chance).
msg158781 - (view) Author: Dave Malcolm (dmalcolm) (Python committer) Date: 2012-04-19 23:59
$ gdb --eval-command="break _PyRandom_Init" --eval-command="run" --eval-command="print _Py_HashSecret" --eval-command="set _Py_HashSecret.prefix=0xcdcdcd00" --eval-command="print _Py_HashSecret" --eval-command="continue" -eval-command="continue" --args python -c 'a="\x00\xcf\x0b\x96\x19"; b="\x00\x6d\x29\x45\x18"; print(hash(a)); print(hash(b))'

On 32-bit:
$1 = {prefix = 0, suffix = 0}
$2 = {prefix = -842150656, suffix = 0}
1220138288
1220138288

On 64-bit:
$1 = {prefix = 0, suffix = 0}
$2 = {prefix = 3452816640, suffix = 0}
Continuing.
4087671194599937328
-1679444439011306192
msg158783 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2012-04-20 00:05
> For example take this script (on 32-bit): (...)
> It gives a collision too many times (around 9 out of 2500).

I tried this script on 32-bit and 64-bit Linux: I didn't see any collision. What is your operating system and its version, please?
msg158784 - (view) Author: Michal Petrucha (koniiiik) Date: 2012-04-20 00:08
@dmalcolm:
As for the gdb example, you need to add --eval-command="set _Py_HashSecret_Initialized=1", otherwise _Py_HashSecret will get overwritten immediately after it is set by gdb, either to 0 if run without the -R switch, or to a random value.

With the fresh pair of values Vlado provided, I managed to reproduce the collisions on Python 2.7.3, i686 (output trimmed like you did):

for __PREFIX in 0x0 0x4700 0xdead00 0xcafe00; do gdb --eval-command="break _PyRandom_Init" --eval-command="run" --eval-command="print _Py_HashSecret" --eval-command="set _Py_HashSecret.prefix=${__PREFIX}" --eval-command="set _Py_HashSecret_Initialized=1" --eval-command="print _Py_HashSecret" --eval-command="continue" -eval-command="continue" --args ./python -c "a='\x00\xcf\x0b\x96\x19'; b='\x00\x6d\x29\x45\x18'; print(hash(a)); print(hash(b))"; done

$1 = {prefix = 0, suffix = 0}
$2 = {prefix = 0, suffix = 0}
Continuing.
1220138288
1220138288


$1 = {prefix = 0, suffix = 0}
$2 = {prefix = 18176, suffix = 0}
Continuing.
-1483212240
-1483212240


$1 = {prefix = 0, suffix = 0}
$2 = {prefix = 14593280, suffix = 0}
Continuing.
-972665808
-972665808


$1 = {prefix = 0, suffix = 0}
$2 = {prefix = 13303296, suffix = 0}
Continuing.
1003122480
1003122480
msg158785 - (view) Author: Vlado Boza (Vlado.Boza) Date: 2012-04-20 00:21
> I tried this script on 32-bit and 64-bit Linux: I didn't see any collision. What is your operating system and its version, please?

uname -a
Linux 3.0.0-17-generic #30-Ubuntu SMP Thu Mar 8 20:45:39 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux


Anyway, you should be able to do the following (on 32-bit):
- generate two colliding keys (with some random seed)
- try these keys with different random seeds: they will collide ~1 time out of 256
msg158790 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2012-04-20 00:43
hash.py: Python implementation of the 32-bit version of hash(bytes). Ok, I now see that only the lower 8 bits are really mixed with the input string.
msg158860 - (view) Author: Vlado Boza (Vlado.Boza) Date: 2012-04-20 17:44
One possible fix:
Look at StringHasher in the Google V8 code (http://code.google.com/p/v8/source/search?q=stringhasher&origq=stringhasher&btnG=Search+Trunk). The main loop looks like this:
raw_running_hash_ += c;                                                                       
raw_running_hash_ += (raw_running_hash_ << 10);                                               
raw_running_hash_ ^= (raw_running_hash_ >> 6);  

It does not seem to produce the same collisions across different hash seeds.
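
For reference, the quoted loop transliterated into Python with the 32-bit wraparound made explicit (a sketch; v8_hash is an illustrative name, the seed initialization is assumed, and the length-dependent variant benchmarked later in this thread differs slightly):

    def v8_hash(data, seed):
        M32 = 0xFFFFFFFF
        h = seed
        for c in data:
            h = (h + c) & M32
            h = (h + (h << 10)) & M32
            h ^= h >> 6
        return h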
msg159430 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2012-04-26 22:47
> One possible fix: ...
> Main loop looks like this: ..

Well, it was decided not to hurt performance in order to work around one specific class of attack, given that there are many other ways to DoS Python. So we chose to keep the main loop unchanged. Randomizing the hash is not a fix for the hash DoS; it only makes the attack harder.
msg159431 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2012-04-26 22:58
Oops, I attached the wrong "hash.py" file.
msg159433 - (view) Author: Benjamin Peterson (benjamin.peterson) * (Python committer) Date: 2012-04-26 23:08
We should see if more security can be gained without sacrificing speed.
msg159434 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2012-04-26 23:10
> Problem with current randomization of hash function
> is following: Suffix does not influence whether two keys
> have some hash or not (it is xor-ed after everything).

Yes, the suffix is used to "protect" the secret. Without the suffix, it would be too simple to compute the prefix: getting a single hash value of a known string would leak the prefix.

> Suffix does not influence whether two keys have some hash
> or not (...). Everything except last 8 bits in prefix does
> not influence it also.

I don't know if we can do better and/or if it is a critical issue.
msg173185 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2012-10-17 16:53
I've modified unicodeobject's unicode_hash() function. V8's algorithm is about 55% slower for an 800 MB ASCII string on my box.

Python's current hash algorithm for bytes and unicode:

   while (--len >= 0)
        x = (_PyHASH_MULTIPLIER * x) ^ (Py_uhash_t) *P++;

$ ./python -m timeit -s "t = 'abcdefgh' * int(1E8)" "hash(t)"
10 loops, best of 3: 94.1 msec per loop


V8's algorithm:

    while (--len >= 0) {
        x += (Py_uhash_t) *P++;
        x += ((x + (Py_uhash_t)len) << 10);
        x ^= (x >> 6);
    }

$ ./python -m timeit -s "t = 'abcdefgh' * int(1E8)" "hash(t)"
10 loops, best of 3: 164 msec per loop
msg173455 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2012-10-21 15:28
Just to make it extra clear: Vlado showed that the "-R" switch of Python can easily be made fully pointless, with only a bit of extra work.  Here is how:

* Assume you have an algo that gives you as many strings with colliding hashes as you want, provided you know the last 8 bits of the secret prefix.

* Say you want to attack a web server.  You send it 256 requests, each with 100 strings that have identical hash for one of the 256 possible values.  You measure which one is significantly slower than the others.

* From there you are back in the original situation: you know which of the 256 values to pick, so you can make the web server crawl by sending it a large number of strings that have identical hashes for this particular value.

It's interesting to note how this whole -R discussion made very long threads on python-dev, and python-dev has subsequently ignored (for the past 6 months!) the fact that their "fix" can be worked around in a matter of minutes.

(For information, I'm sure that if the algorithm is improved to depend on all 32 or 64 bits of the prefix, it would still be easy to crack it.  You don't actually need to send 2**32 or 2**64 requests to the web server: with careful design you can send only 32 or 64 requests that each leak one bit of information.  Doing that requires a bit more research, but once the recipe is known, it can be freely reused, which seems to defeat the original point.)
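
A sketch of that probing step (everything here is hypothetical: the target URL, the request shape, and the colliding_keys(byte) generator are all assumptions):

    import time
    import urllib.request

    def recover_low_byte(url, colliding_keys):
        # One request per candidate low byte; the set whose keys truly
        # collide degrades the server's dict inserts to quadratic time,
        # so that request is measurably slower than the other 255.
        timings = {}
        for byte in range(256):
            body = "&".join("%s=1" % k for k in colliding_keys(byte))
            start = time.time()
            urllib.request.urlopen(url, body.encode()).read()
            timings[byte] = time.time() - start
        return max(timings, key=timings.get)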
msg173457 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2012-10-21 15:35
For reference, the above means that we can implement -R support for PyPy as a dummy ignored flag, and get "security" that is very close to CPython's.  :-)
msg173458 - (view) Author: Benjamin Peterson (benjamin.peterson) * (Python committer) Date: 2012-10-21 15:39
That doesn't make it an easy CPython issue. :)
msg173461 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2012-10-21 16:33
As far as I understand, the issue can't be solved with our current hash algorithm. We'd have to use a crypto hash function, or at least a hash algorithm with a stronger avalanche effect on the outcome. The current hash algorithm is designed and optimized for speed, not for security. Any other algorithm is going to slow down hashing.

Small strings and strings with lots of NUL bytes may leak too much information, too.
msg173488 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2012-10-21 21:42
> It's interesting to note how this whole -R discussion made very long
threads on python-dev, and python-dev has subsequently ignored (for the
past 6 months!) the fact that their "fix" can be worked around in a matter
of minutes.

No, this issue has not been ignored. Nobody has proposed anything to fix this
issue, but we are still working on it (sometimes in private).

In my opinion, we cannot solve this issue without slowing down Python - or at
least, I don't yet know of a fast and secure hash algorithm. I don't really
want to slow down Python for one specific issue when there are so many other
ways to DoS a (web) server.

How do other languages (say Perl, Ruby, PHP, Javascript) handle this issue?
msg173491 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2012-10-21 22:46
> $ ./python -m timeit -s "t = 'abcdefgh' * int(1E8)" "hash(t)"

I got different numbers (32-bit Linux, AMD Athlon 64 X2 4600+).

Python's current hash algorithm:
10 loops, best of 3: 343 msec per loop

V8's algorithm:
10 loops, best of 3: 244 msec per loop
msg173498 - (view) Author: Marc-Andre Lemburg (lemburg) * (Python committer) Date: 2012-10-22 06:48
On 21.10.2012 23:42, STINNER Victor wrote:
> In my opinion, we cannot solve this issue without slowing down Python (...)
> there are so many other ways to DoS a (web) server.

Well, I did propose a different approach to the whole problem: counting
collisions. That would have avoided the usability issues you have with
the randomization approach, made it possible for the application to
detect the attack, and not have introduced any significant runtime
overhead for applications not under attack.

The proposal was shot down with the argument that it wouldn't
fix the problem.

It should also be noted that the randomization only applies to
strings/bytes; dictionaries with other colliding keys are not protected
at all.

Perhaps it's time to revisit the collision counting idea?

It would work in much the same way as the stack recursion limit
we have in Python.

msg174964 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2012-11-06 14:10
Benjamin: oops, sorry. I don't remember setting the "easy" keyword; my mistake.

Fwiw I'm +1 on Marc-Andre's solution.  Make it a tunable setting, e.g. with sys.setcollisionlimit().  Defaults to sys.maxint on existing Pythons and some smaller value (70?) on new Pythons.  It has the same benefits as the recursion limit: it's theoretically bad, but most of the time very useful.

It would also crash on bad usages of custom __hash__() methods: e.g. if you put a lot of keys in a dict, all with a custom __hash__() that returns 42.  I imagine that it can be considered a good thing to raise in this case rather than silently degrade performance forever.
msg174973 - (view) Author: Benjamin Peterson (benjamin.peterson) * (Python committer) Date: 2012-11-06 15:04
Here's the message that helped convince us to go against collision counting originally: http://mail.python.org/pipermail/python-dev/2012-January/115726.html
msg174986 - (view) Author: John M Camara (camara) Date: 2012-11-06 15:44
How about using a secure hash algorithm that's implemented in hardware when available? It doesn't eliminate the issue on systems that lack this support, but at least it limits the scope of the problem.

Of course some testing would need to be done to make sure the hardware hashing doesn't have a significant impact on performance.
msg174987 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2012-11-06 15:54
Our hash randomization will always leak some information about the randomization keys. The only way to properly secure our secrets is a cryptographically secure algorithm, for example a crypto hashing function in combination with a message authentication code like HMAC. I don't have to explain how that is going to hurt performance ...

We can try to make it harder to guess the secret parts with a slightly modified algorithm, e.g. V8's hash, but that's never going to be 100% secure. It might, however, be secure enough to make an attack too hard.
msg174989 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2012-11-06 16:08
I consider hash randomization and collision counting a poor man's workaround for the actual issue. Perhaps we shouldn't try too hard to fix an unsuitable data type. Hash maps have a known worst-case complexity of O(n). An O(log n) data structure should be used to parse and store malicious key/value pairs.

How about Python grows an additional btree implementation in its collections module? I know that it's not going to fix existing code. However, in the long run it's the best safeguard against hash collision attacks. I'm thinking of a simple, self-balancing btree like a red-black tree. A quick search on Wikipedia also revealed Scapegoat and Splay trees with interesting properties.
msg174990 - (view) Author: Giovanni Bajo (Giovanni.Bajo) Date: 2012-11-06 16:29
Christian, there are good semi-crypto hash functions that don't leak as badly as Python's own modified FNV hash, without going all the way to HMAC.

SipHash has very good collision resistance and doesn't leak anything:
https://www.131002.net/siphash/
(notice: they distribute a python program to recover python's seed)

It's obviously slower than Python's FNV, but it's hard to beat a sum+multiplication per character.
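
For the curious, SipHash-2-4 is short enough to transliterate directly from the paper into pure Python (an illustrative, unoptimized sketch - the reference C version is what any real patch would use):

    MASK = 0xFFFFFFFFFFFFFFFF

    def _rotl(x, b):
        return ((x << b) | (x >> (64 - b))) & MASK

    def _round(v0, v1, v2, v3):
        v0 = (v0 + v1) & MASK; v1 = _rotl(v1, 13); v1 ^= v0; v0 = _rotl(v0, 32)
        v2 = (v2 + v3) & MASK; v3 = _rotl(v3, 16); v3 ^= v2
        v0 = (v0 + v3) & MASK; v3 = _rotl(v3, 21); v3 ^= v0
        v2 = (v2 + v1) & MASK; v1 = _rotl(v1, 17); v1 ^= v2; v2 = _rotl(v2, 32)
        return v0, v1, v2, v3

    def siphash24(k0, k1, data):
        # Initialization: key XORed with "somepseudorandomlygeneratedbytes".
        v0 = k0 ^ 0x736f6d6570736575
        v1 = k1 ^ 0x646f72616e646f6d
        v2 = k0 ^ 0x6c7967656e657261
        v3 = k1 ^ 0x7465646279746573
        b = (len(data) & 0xFF) << 56
        end = len(data) - len(data) % 8
        for i in range(0, end, 8):                  # c = 2 rounds per block
            m = int.from_bytes(data[i:i + 8], 'little')
            v3 ^= m
            v0, v1, v2, v3 = _round(*_round(v0, v1, v2, v3))
            v0 ^= m
        b |= int.from_bytes(data[end:], 'little')   # last 0-7 bytes
        v3 ^= b
        v0, v1, v2, v3 = _round(*_round(v0, v1, v2, v3))
        v0 ^= b
        v2 ^= 0xFF
        for _ in range(4):                          # d = 4 finalization rounds
            v0, v1, v2, v3 = _round(v0, v1, v2, v3)
        return v0 ^ v1 ^ v2 ^ v3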
msg174994 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2012-11-06 16:49
Thanks!

SipHash looks interesting. It's using a XOR + ROT approach with a seed, and it's written by DJB, which is usually a good sign. He writes secure code of good quality; just his coding style tends to be ... unique. :)

I'm going to try the algorithm.
msg174998 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2012-11-06 17:10
I modified crypto_auth() a bit:

Py_uhash_t crypto_auth(const unsigned char *in, unsigned long long inlen)
  ...
  u64 k0 = _Py_HashSecret.prefix;
  u64 k1 = _Py_HashSecret.suffix;
  ...
  return (Py_uhash_t)b;

and replaced the loop in _Py_HashBytes() with a call to crypto_auth(). For large strings SipHash is about as fast as our current algorithm on my 64-bit box. That was to be expected, as SipHash works on blocks of 8 bytes while the default algorithm can't be optimized with SIMD instructions.

Current hashing algorithm:
$ ./python -m timeit -s "x = b'a' * int(1E7)" "hash(x)"
1000000 loops, best of 3: 0.39 usec per loop

SipHash:
$ ./python -m timeit -s "x = b'a' * int(1E7)" "hash(x)"
1000000 loops, best of 3: 0.381 usec per loop
msg174999 - (view) Author: Giovanni Bajo (Giovanni.Bajo) Date: 2012-11-06 18:26
For short strings, you might want to have a look at the way you fetch the final partial word from memory.

If the string is >= 8 bytes, you can fetch the last partial word as an unaligned memory fetch followed by a shift, instead of using a switch like in the reference code.
msg175000 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2012-11-06 19:11
We can explore the various optimization options later. Also, unaligned memory access is not allowed on some architectures like SPARC.

If somebody likes to play with the algorithm:
http://hg.python.org/sandbox/cheimes/shortlog/2cb7e97ca8d0
msg175007 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2012-11-06 20:12
$ ./python -m timeit -s "x = b'a' * int(1E7)" "hash(x)"

Note that the hash is calculated only once; subsequent timeit iterations return the cached value. Add the -n 1 option and use larger data.

> If somebody likes to play with the algorithm:

$ ./python -m timeit -n 1 -s "t = 'abcdefgh' * 10**8"  "hash(t)"
1 loops, best of 3: 4.86 sec per loop

The current hash algorithm runs in 3.43 sec, V8's algorithm in 2.44 sec.

With a simple optimization I got 3.62 sec, only 6% slower than the current one:

  #define U8TO64_LE(p) ((u64)((u32 *)(p))[0] | ((u64)((u32 *)(p))[1] << 32))
msg175038 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2012-11-07 00:52
Thanks to Snakebit I was able to test the code on a 32-bit BSD installation with GCC 4.2. ASCII unicode and bytes performance is about 8% slower, UCS2 unicode about 37% slower. There might be room for improvement, though.

% ./python -m timeit -r20 -n1000000 -s "h = hash; x = 'a' * 10**7" -- "h(x)"
Current:
1000000 loops, best of 20: 0.109 usec per loop
SipHash:
1000000 loops, best of 20: 0.118 usec per loop

% ./python -m timeit -r20 -n1000000 -s "h = hash; x = 'ä' * 10**7" -- "h(x)"
Current:
1000000 loops, best of 20: 0.119 usec per loop
SipHash:
1000000 loops, best of 20: 0.163 usec per loop
msg175040 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2012-11-07 01:27
Serhiy's trick

#define U8TO64_LE(p) ((u64)((u32 *)(p))[0] | ((u64)((u32 *)(p))[1] << 32))

gives a nice speedup. Now UCS2 is down to 0.133 usec (12% slower than the current algorithm) and ASCII down to 0.105 usec (3% faster).
msg175047 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2012-11-07 06:49
% ./python -m timeit -r20 -n1000000 -s "h = hash; x = 'a' * 10**7" -- "h(x)"

Here there is only one hash calculation and 999999 cached calls.
msg175048 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2012-11-07 07:40
I tested different kinds of strings.

$ ./python -m timeit -n 1 -s "t = b'a' * 10**8"  "hash(t)"
$ ./python -m timeit -n 1 -s "t = 'a' * 10**8"  "hash(t)"
$ ./python -m timeit -n 1 -s "t = '\u0100' * 10**8"  "hash(t)"
$ ./python -m timeit -n 1 -s "t = '\U00010000' * 10**8"  "hash(t)"

       current   SipHash   slowdown

bytes  181 msec  453 msec  2.5x
UCS1   429 msec  453 msec  1.06x
UCS2   179 msec  897 msec  5x
UCS4   183 msec  1.79 sec  9.8x
msg175050 - (view) Author: Marc-Andre Lemburg (lemburg) * (Python committer) Date: 2012-11-07 08:34
Here's a demo patch (against Python 2.7) which counts hash value collisions and slot collisions. I had posted it in the original ticket where we discussed the hash problem (http://bugs.python.org/issue14621).

This avoids issues like attack 1 mentioned in http://mail.python.org/pipermail/python-dev/2012-January/115726.html

Attack 2 in that email can easily be worked around by reducing the collision limit to a smaller number.

Even better: an application could dynamically adjust the maximum collision count by catching the exception and setting a new upper limit depending on its knowledge of the field of application - warning the sysadmin of a potential problem and allowing her to take action. That way the application could start with a low, safe maximum collision count of say 100 and then raise the limit in a controlled way.

BTW: when trying out new hash functions, you need to look not only at the performance of the hash function itself, but also (and more importantly) at its effect on dictionaries.

Just as reminder: The integer key problem is still open. Using the demo script http://bugs.python.org/file24300/integercollision.py, it's easy to keep Python going for minutes without any major effort.

I don't understand why we are only trying to fix the string problem and completely ignore other key types. Strings are easy to send to a web server, yes, but there are other applications out there which take input data from other sources/formats as well (e.g. csv files). And it's not unusual to convert input strings to integers to use them as dictionary keys, say item IDs or counts. So while the string keys may not cause a problem, the integer keys still might.
msg175052 - (view) Author: Marc-Andre Lemburg (lemburg) * (Python committer) Date: 2012-11-07 08:41
On 07.11.2012 09:34, Marc-Andre Lemburg wrote:
> 
> Here's a demo patch (against Python 2.7) which counts hash value collisions and slot collisions. I had posted that in the original ticket where we discussed the hash problem (http://bugs.python.org/issue14621).

Sorry, wrong URL. The correct one is http://bugs.python.org/issue13703

msg175053 - (view) Author: Giovanni Bajo (Giovanni.Bajo) Date: 2012-11-07 08:44
Until it's broken with a yet-unknown attack, SipHash is a pseudo-random function and as such it distributes values uniformly across the output space and never leaks any information about the key (the randomized seed). Being designed by cryptographers, it is unlikely to turn out to be a "fail" like the solution that was just released (no offense intended, but it's been a large-scale PR failure).

As long as we don't introduce bias while reducing SipHash's output to fit the hash table size (so for instance, usage of modulus is not appropriate), then the hash function should behave very well.

Any data type can be supplied to SipHash, including numbers; you just need to take their (platform-dependent) memory representation and feed it to SipHash. Obviously it will be much much slower than the current function which used to be hash(x) = x (before randomization), but that's the price to pay to avoid security issues.
msg175080 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2012-11-07 11:05
Marc-André: estimating the risks of giving up on a valid query for a truly random hash, at an overestimated one billion queries per second, in a 2/3 full dictionary:

* for 1000: 4E159 years between mistakes

* for 100: 12.9 years between mistakes

* for 150: 8E9 years between mistakes

* for 200: 5E18 years between mistakes

So while it seems that 100 might be a bit too small, using 150 to 200 is perfectly safe (and that's "perfect" in the sense that a computer will encounter random hardware errors at a higher rate than that).
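
Those figures follow from a two-line model: in a 2/3 full table each probe hits an occupied slot with probability 2/3, so a legitimate lookup survives k consecutive collisions with probability (2/3)**k (a sketch reproducing the numbers above; names are illustrative):

    SECONDS_PER_YEAR = 3600 * 24 * 365

    def years_between_mistakes(limit, queries_per_sec=1e9):
        # Mean time until a *valid* query happens to exceed the limit.
        return 1.5 ** limit / queries_per_sec / SECONDS_PER_YEAR

    for limit in (100, 150, 200, 1000):
        print(limit, "%.2g" % years_between_mistakes(limit))
    # -> roughly 13, 8.2e9, 5.2e18 and 4e159 years, matching the above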
msg175081 - (view) Author: Marc-Andre Lemburg (lemburg) * (Python committer) Date: 2012-11-07 11:19
On 07.11.2012 12:06, Armin Rigo wrote:
> So while it seems that 100 might be a bit too small, using 150 to 200 is perfectly safe (...)

I used the 1000 limit only as an example. In tests Victor and I ran (see the
original ticket from a few months ago), 200 turned out to be a reasonable
default for the maximum hash collision limit.

I'm not sure about the slot collision limit. We'd have to run more tests
on those.
msg175082 - (view) Author: Giovanni Bajo (Giovanni.Bajo) Date: 2012-11-07 11:26
On 7 Nov 2012, at 08:40, Serhiy Storchaka <report@bugs.python.org> wrote:
> I tested different kinds of strings. (...)

Hi Serhiy,

can you please attach the generated assembly code for the siphash function with your compiler and your optimization flags (that is, the one that produces the above results)?

Thanks!
msg175083 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2012-11-07 11:51
> can you please attach the generated assembly code for the siphash function with your compiler and your optimization flags (that is, the one that produces the above results)?

GCC (Ubuntu 4.4.3-4ubuntu5.1) options:

-pthread -c -Wno-unused-result -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes   -I. -IInclude -I./Include    -DPy_BUILD_CORE

32-bit Linux on AMD Athlon(tm) 64 X2 Dual Core Processor 4600+.
msg175085 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2012-11-07 11:53
Serhiy, the performance of hash() for long strings isn't very relevant to the general performance of a Python program; short strings dominate. I've modified the timeit invocation to create a new string object every time.

for I in 5 10 15 20 30 40 50 60; do echo -ne "$I\t"; ./python -m timeit -n100000 -r30 -s "h = hash; x = 'ä' * $I" -- "h(x + 'a')" | awk '{print $6}' ; done

ASCII:
#       SIP        FNV
5       0.112      0.0979
10      0.115      0.103
15      0.12       0.107
20      0.124      0.112
30      0.126      0.127
40      0.136      0.142
50      0.142      0.147
60      0.146      0.159

UCS-2:
#       SIP        FNV
5       0.114      0.0977
10      0.117      0.0988
15      0.12       0.11
20      0.126      0.109
30      0.13       0.122
40      0.14       0.132
50      0.144      0.147
60      0.152      0.157

For short strings the additional round and setup costs make hash() about 10% slower. For long strings SIP is faster as it processes 8 bytes at once instead of 1 to 4 bytes.
msg175086 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2012-11-07 11:55
[MAL]
> I don't understand why we are only trying to fix the string problem
> and completely ignore other key types.

[Armin]
> estimating the risks of giving up on a valid query for a truly random
> hash, at an overestimated one billion queries per second ...

That's fine in principle, but if this gets extended to integers, note that our current integer hash is about as far from 'truly random' as you can get:

    Python 3.4.0a0 (default:f02555353544, Nov  4 2012, 11:50:12) 
    [GCC 4.2.1 (Apple Inc. build 5664)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> [hash(i) for i in range(20)]
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]

Moreover, it's going to be *very* hard to change the int hash while preserving the `x == y implies hash(x) == hash(y)` invariant across all the numeric types (int, float, complex, Decimal, Fraction, 3rd-party types that need to remain compatible).
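
Concretely, the invariant in question (an illustrative check, not part of the original message):

    >>> from fractions import Fraction
    >>> from decimal import Decimal
    >>> hash(7) == hash(7.0) == hash(Fraction(7)) == hash(Decimal(7))
    True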
msg175088 - (view) Author: Marc-Andre Lemburg (lemburg) * (Python committer) Date: 2012-11-07 11:59
On 07.11.2012 12:55, Mark Dickinson wrote:
> That's fine in principle, but if this gets extended to integers, note that our current integer hash is about as far from 'truly random' as you can get (...)
> Moreover, it's going to be *very* hard to change the int hash while preserving the `x == y implies hash(x) == hash(y)` invariant across all the numeric types.

Exactly. And that's why trying to find secure hash functions isn't
going to solve the problem. Together with randomization they may
make things better for strings, but they are no solution for numeric
types, and they also don't allow detecting possible attacks on your
systems.

But yeah, I'm repeating myself :-)
msg175089 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2012-11-07 12:06
And I'm probably repeating myself too, but: the predictability of (and difficulty of changing of) hashing for numeric types is why I'm strongly opposed to hash collision / slot collision limits:  they'd end up disallowing reasonably natural looking Python numeric sets (e.g. {2**k for k in range(n)} for smallish n).  I don't think core Python should be solving this issue at all---I think that's a job for the web frameworks.  Christian's idea of providing more suitable types in the std. lib. sounds like the right direction to me.
msg175091 - (view) Author: Marc-Andre Lemburg (lemburg) * (Python committer) Date: 2012-11-07 12:19
On 07.11.2012 13:06, Mark Dickinson wrote:
> (...) I don't think core Python should be solving this issue at all---I think that's a job for the web frameworks. Christian's idea of providing more suitable types in the std. lib. sounds like the right direction to me.

I definitely agree with that last sentence. Having more suitable data
types in Python (e.g. tries, b-trees or red-black trees) would certainly
be a better solution than trying to build everything into dictionaries.

Nice comparison:
http://en.wikipedia.org/wiki/Trie
msg175094 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2012-11-07 12:40
See issue16427.
msg175096 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2012-11-07 12:54
I won't try to influence the outcome of this discussion, but I'd like to correct myself: in the measures I posted, "true randomness" is not needed at all.  The exact criterion might be hard to pin down, but as a first approximation, we get the same answers as long as most keys have different hashes, as all the bits of the hash are used by the dict lookup in only a few iterations.  No two small ints have the same hash, by construction.  You can build a sequence of (long) integers that have all exactly the same hash, but doing that is not as easy as "2**k".
msg175097 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2012-11-07 12:55
> Serhiy, the performance of hash() for long strings isn't very relevant for the general performance of a Python program.

It exposes the raw speed of the hashing algorithm. It is good as a first estimate; more realistic cases require more sophisticated measurements.

> Short strings dominate. I've modified the timeit to create a new string object every time.

timeit is absolutely not suitable for this. We need to write a C program that uses the Python C API.

> for I in 5 10 15 20 30 40 50 60; do echo -ne "$I\t"; ./python -m timeit -n100000 -r30 -s "h = hash; x = 'ä' * $I" -- "h(x + 'a')" | awk '{print $6}' ; done

Please, do not be fooled by wrong measurements. You are measuring the height of the building together with the hill it stands on. Use "-n1" and you will see completely different numbers.
msg175098 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2012-11-07 13:02
Wrong, sorry.  On a 32-bit Python 2.7, "(2**32-1)*n" has the same hash -2, for any value of n.

Of course if you build a dict containing thousands of such integers as keys, then right now you get unexpectedly bad performance.  I wonder if I should open another bug report about that --- the hash of longs should be slightly more random-looking...
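
On a 64-bit build the analogous family is the multiples of 2**61 - 1, which all hash to the same value (an illustrative check, not part of the original message):

    >>> len({hash((2**61 - 1) * n) for n in range(1, 10000)})
    1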
msg175099 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2012-11-07 13:03
[Armin]
> You can build a sequence of (long) integers that have all exactly the
> same hash, but doing that is not as easy as "2**k".

Sure it is.  The hash for integers is (by design) repeated modulo a number of the form 2**n - 1:  we use 2**61 - 1 on 64-bit systems and 2**31 - 1 on 32-bit.  So in {2**k for k in range(n)} you get at most 61 distinct hash values.
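
Both properties are easy to confirm on a 64-bit build (illustrative checks):

    >>> hash(2**61) == hash(1)    # 2**61 is congruent to 1 mod 2**61 - 1
    True
    >>> len({hash(2**k) for k in range(1000)})
    61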
msg175100 - (view) Author: Giovanni Bajo (Giovanni.Bajo) Date: 2012-11-07 13:16
On 7 Nov 2012, at 12:59, Marc-Andre Lemburg <report@bugs.python.org> wrote:
> Exactly. And that's why trying to find secure hash functions isn't
> going to solve the problem. (...)

I don't see how that follows. Python has several hash functions in its core, one of which is the string hash function. It is currently severely broken from a security standpoint; it also happens to be probably the most common dictionary key type in Python, and the one most easily exploited in web frameworks.

If we can manage to fix the string hash function (e.g. through SipHash), we will be one step further in mitigating the possible attacks.

Solving collisions and mitigating attacks on numeric types is a totally different problem, because it is a totally different function. I suggest we keep separate discussions and separate bugs for it. For instance, I'm personally only interested in mitigating attacks on the string hash function.
msg175192 - (view) Author: Sasha B (sbermeister) Date: 2012-11-08 21:58
Ruby uses the Murmur hash for some types (string & integer at least): http://murmurhash.googlepages.com/
src: http://stackoverflow.com/a/3270836/1332819

The Perl hash implementation: http://cpansearch.perl.org/src/NWCLARK/perl-5.8.8/hv.c

PHP5 hash implementation: http://lxr.php.net/xref/PHP_5_4/ext/hash/hash.c

The Probe() function for V8's Javascript implementation is HW-specific:
Hash functions: http://code.google.com/searchframe#W9JxUuHYyMg/trunk/src/hashmap.h&q=Probe%20package:v8%5C.googlecode%5C.com&l=134
Probe() function: http://code.google.com/searchframe#search%26q%3DProbe%20package%3Av8%5C.googlecode%5C.com
msg175196 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2012-11-08 22:10
I considered MurmurHash a year ago, too. Nowadays I don't think it's an option anymore. JPA and DJB have released a C++ program that is able to generate lots of collisions:

https://www.131002.net/siphash/
C++ program to find universal (key-independent) multicollisions for MurmurHash3
msg175198 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2012-11-08 22:16
From the header of murmurcollisions.cc:

 * multicollisions for MurmurHash3
 *
 * MurmurHash3 C++ implementation is available at 
 * http://code.google.com/p/smhasher/wiki/MurmurHash3
 *
 * the function Murmur3Multicollisions finds many different inputs
 * hashing to the same 32-bit value (multicollision)
 * 
 * example output:
 * 32-bit seed 7a0e823a
 * 4-multicollision
 * 16-byte inputs
 * MurmurHash3_x86_32( bdd0c04b5c3995827482773b12acab35 ) = 94d7cf1b
 * MurmurHash3_x86_32( 652fa0565c3946be7482773b12acab35 ) = 94d7cf1b
 * MurmurHash3_x86_32( bdd0c04b5c399582cc23983012ac5c71 ) = 94d7cf1b
 * MurmurHash3_x86_32( 652fa0565c3946becc23983012ac5c71 ) = 94d7cf1b
 *
 * the multicollisions found are "universal": they work for any seed/key
 *
 * authors:
 * Jean-Philippe Aumasson, Daniel J. Bernstein

I consider MurMur3 busted and unsuitable for our purpose.
msg175299 - (view) Author: Gregory P. Smith (gregory.p.smith) * (Python committer) Date: 2012-11-10 22:23
People have been posting micro-benchmarks (often run incorrectly) rather than actually useful benchmarks. Running our real-world benchmarks would be more interesting.
msg175318 - (view) Author: Chris Rebert (cvrebert) * Date: 2012-11-11 04:56
What about CityHash? (http://code.google.com/p/cityhash/ ; unofficial C port: http://code.google.com/p/cityhash-c/ )
It's good enough for Google...
msg175342 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2012-11-11 10:09
Did someone start to write a PEP?
msg175345 - (view) Author: Giovanni Bajo (Giovanni.Bajo) Date: 2012-11-11 11:14
On 11 Nov 2012, at 05:56, Chris Rebert <report@bugs.python.org> wrote:
> What about CityHash? (...) It's good enough for Google...

It's good enough for Google in a context that does not require protection against collision attacks. If you have a look at SipHash' page, you will find a program to generate collisions to CityHash.
msg176680 - (view) Author: Łukasz Rekucki (Łukasz.Rekucki) Date: 2012-11-30 07:53
Note that a few weeks ago, Ruby has switched from Murmur to SipHash for this exact reason:

http://www.ruby-lang.org/en/news/2012/11/09/ruby19-hashdos-cve-2012-5371/
msg176697 - (view) Author: René (ReneSac) Date: 2012-11-30 18:12
Christian Heimes: It has always been trivial to artificially generate collisions for fast hashes designed for hash tables, like MurmurHash. I wouldn't call MurmurHash3 "busted" because of that, as collision resistance was never a design goal. It is a known property of this type of hash: you find collisions basically by running the hash backwards. You can't even call that cryptanalysis.

Of course, you need the seed to search for those collisions, but from this thread it seems very difficult, if not impossible, not to leak the random seed to the attacker.

I see the various collision-counting alternatives proposed as the least intrusive and most definitive solution for this problem. They also have the benefit of working for any type of key. Some pseudo code:

hash as always, with a fast hash
if reprobes > initial_threshold:
    if the table holds only one key type AND it has a robust comparison method:
        store the colliding keys in an O(log n) worst-case structure (a tree)
    elif the object has a secondary, slow, secure hash:
        try searching/inserting the key again with the new secure hash;
        this works like a double-hashing reprobing hash table
    elif collisions > max_threshold:
        raise Exception("Under attack, or the object uses a crappy hash with no fallback.")

The first option, the O(log n) structure, can be ignored as an unnecessary complication (even though there is already a patch implementing it), but I suspect it may be faster than a secure hash. If not, then there is really no point in that option, except if the secure hash proves to be not so secure.
msg176704 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2012-11-30 19:20
No, Murmur3 *is* busted. Some clever people have found a way to perform a universal multicollision attack, that is, a key-independent attack. An attacker doesn't need to know the seed for an attack.

Collision counting is not a solution for the issue, just a workaround. It has been proven for decades that a tree data structure is not vulnerable to this kind of collision attack. A hash function with crypto properties is the second-best solution.
msg176705 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2012-11-30 19:39
However, a tree data structure is vulnerable to sorted data.

Maybe it would be worth having two hash functions (switchable by interpreter option or environment variable): a strong one for applications which can be attacked, and a fast one for applications which run in a safe environment?
msg176709 - (view) Author: René (ReneSac) Date: 2012-11-30 20:01
Serhiy Storchaka: I said an O(log n) data structure, so I was referring to balanced trees, like AVL, RBT or B+-trees. They are not vulnerable to sorted data. The downside is that they need the keys to provide robust comparison methods (i.e., if z < y < x, then z < x).

Christian Heimes: right, the seed indeed doesn't provide protection...

But collision counting is compatible with both of your suggestions, and it solves the problem. The only difference is that you don't take the performance hit when not under attack.
msg176710 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2012-11-30 20:06
René, a balanced tree requires rebalancing on every (or almost every) item for some special (sorted) data sequences.
msg176713 - (view) Author: Michal Petrucha (koniiiik) Date: 2012-11-30 20:33
On Fri, Nov 30, 2012 at 08:06:32PM +0000, Serhiy Storchaka wrote:
> René, a balanced tree requires rebalancing on every (or almost
> every) item for some special (sorted) data sequences.

That's perfectly true and it holds for most unsorted sequences as
well -- that's why they are balanced. The fact that the tree is always
balanced guarantees that each rebalance takes at most O(log n)
operations.
msg176714 - (view) Author: Marc-Andre Lemburg (lemburg) * (Python committer) Date: 2012-11-30 20:34
On 30.11.2012 21:06, Serhiy Storchaka wrote:
> 
> Serhiy Storchaka added the comment:
> 
> René, a balanced tree requires rebalancing on every (or almost every) item for some special (sorted) data sequences.

Sure, but that's still O(N*log N) for an attack scenario, not O(N^2).

I think the main point here is that using hash tables or dictionaries
for these things without any size limit is simply a wrong development
approach.

Developers need to be made aware of the problem and given data
structures that they can use more safely to store the data. Instead
of trying to hide away the problem using some crypto approach, it's
better to offer methods that allow developers to gain control over
the problem (e.g. via an exception) rather than hoping for few hash
collisions.

If a developer has to build a lookup table from untrusted data,
she should be able to say:

try:
    mapping = insert_untrusted_data(source)
except UnderAttackError:
    return 'no thank you'

instead of:

# Hmm, let's hope this doesn't bomb...
mapping = insert_untrusted_data(source)

At the moment, the best thing you can do is insert the data
in chunks and measure the time it takes for each chunk. If it
takes too long, you raise the UnderAttackError.

If we make the collision counting limit a per-dictionary adjustable
limit with some global default limit, the above could be written
as:

try:
    mapping = {}
    mapping.max_collisions = 100
    mapping.update(source)
except CollisionLimitError:
    return 'no thank you'

Aside: it's better to know your worst case and program for it, rather
than to ignore the problem and hope an attacker won't find your secret
key. In the above case, if you know what the worst-case timing is for
a 100-item dictionary, you can safely deal with it, possibly adjusting
the limit to a more suitable value for your application.
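
A rough pure-Python approximation of that per-mapping limit (hypothetical names throughout; a real implementation would sit in the C probe loop and count slot collisions rather than full-hash matches):

    class CollisionLimitError(Exception):
        pass

    class LimitedDict(dict):
        """A dict that raises once too many distinct keys share one hash."""

        def __init__(self, max_collisions=100):
            dict.__init__(self)
            self.max_collisions = max_collisions
            self._per_hash = {}   # hash value -> count of distinct keys

        def __setitem__(self, key, value):
            if key not in self:
                h = hash(key)
                n = self._per_hash.get(h, 0) + 1
                if n > self.max_collisions:
                    raise CollisionLimitError(
                        "%d keys share hash %r" % (n, h))
                self._per_hash[h] = n
            dict.__setitem__(self, key, value)

Note that insertions must go through __setitem__ for the check to fire; CPython's dict.update() bypasses a subclass's __setitem__, so the update(source) call in the example above would need to be an explicit loop.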

msg176715 - (view) Author: René (ReneSac) Date: 2012-11-30 20:35
Serhiy Storchaka: Yes, but it is still O(log n) in the worst case. Even in the worst-case rebalancing, you only need to walk up/down rotating/splitting every node in your path. As the tree height is guaranteed to be x * log n (x from 1 to 2, depending on the algorithm), the rebalancing operation is always limited by O(log n).
msg176720 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2012-11-30 21:25
> (...) the rebalancing operation is always limited by O(log n).

Agreed. However, I think that for large enough data a balanced tree is slower
than a hashtable with any slow hash.
msg176721 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2012-11-30 21:27
> try:
>     mapping = {}
>     mapping.max_collisions = 100
>     mapping.update(source)
> except CollisionLimitError:
>     return 'no thank you'

Maybe use a more general solution?

try:
    with run_with_timeout(timeout=100, timer=collisions_count):
        mapping = insert_untrusted_data(source)
except TimeoutError:
    return 'no thank you'

(You can use different measurements for the timeout: user time, real time,
tick count, collision count, or even a user-defined timer.)
msg176725 - (view) Author: Marc-Andre Lemburg (lemburg) * (Python committer) Date: 2012-11-30 22:51
On 30.11.2012 22:27, Serhiy Storchaka wrote:
> Maybe use a more general solution?
>
> try:
>     with run_with_timeout(timeout=100, timer=collisions_count):
>         mapping = insert_untrusted_data(source)
> except TimeoutError:
>     return 'no thank you'

Sure, as long as there's a way to break into the execution,
any such method would do.

The problem is that you'd have to check for the timeout at some
point, and the .update() call runs completely in C, so the only way
to break into execution is either by explicitly adding a runtime
check to the code, or by using a signal (which is a can of worms
better avoided :-)).

Collision counting is one such method of detecting that
something is likely not working according to plan, but it's
really only another way of implementing the explicit runtime
check. Using other counters or timers would work just as well.

As long as you know that there are places in your code that can
produce CPU time overloads, you can address those issues.

The dictionary implementation is one such place, where you
can run into the problem, but there are usually many other
such areas in more complex applications as well, e.g. calculations
that enter endless loops for some parameters, deadlocks,
I/O operations that take unusually long (e.g. due to disk
errors), poorly written drivers of all sorts, etc. etc.

If there's a way to solve all these things in general
and without explicit runtime checks, I'd like to know :-)
msg176808 - (view) Author: Bob Ziuchkovski (Bob.Ziuchkovski) Date: 2012-12-02 20:47
Why not redefine -R to mean "use secure hashing algorithms for built-in types"?

When specified, use hashing algorithms that are secure against denial-of-service and other known attacks, at the possible expense of performance.  When not specified, use whatever hashing algorithms provide the most sensible defaults for every-day use (basically hash the way python currently hashes).

Secure hashing would apply not just to strings but to numeric and other types as well.  This would break the invariant of `x == y implies hash(x) == hash(y)` for numeric types that Mark mentioned.  However, that seems like an implementation detail that python users shouldn't rely upon.
msg178784 - (view) Author: Domen Kožar (iElectric) Date: 2013-01-01 23:20
According to the talk at 29c3: http://events.ccc.de/congress/2012/Fahrplan/events/5152.en.html

Quote: We also describe a vulnerability of Python's new randomized hash, allowing an attacker to easily recover the 128-bit secret seed. As a reliable fix to hash-flooding, we introduce SipHash, a family of cryptographically strong keyed hash function competitive in performance with the weak hashes, and already adopted in OpenDNS, Perl 5, Ruby, and in the Rust language.
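
Since SipHash keeps coming up, here is a pure-Python transcription of SipHash-2-4 for experimentation (Python 3 syntax; the constants and round structure follow the published reference implementation -- a real integration would of course be C):

MASK = 0xFFFFFFFFFFFFFFFF

def _rotl(x, b):
    return ((x << b) | (x >> (64 - b))) & MASK

def siphash24(key, data):
    """SipHash-2-4: key is 16 bytes, data is bytes; returns a 64-bit int."""
    k0 = int.from_bytes(key[:8], 'little')
    k1 = int.from_bytes(key[8:], 'little')
    v0 = k0 ^ 0x736f6d6570736575
    v1 = k1 ^ 0x646f72616e646f6d
    v2 = k0 ^ 0x6c7967656e657261
    v3 = k1 ^ 0x7465646279746573

    def sipround():
        nonlocal v0, v1, v2, v3
        v0 = (v0 + v1) & MASK; v1 = _rotl(v1, 13); v1 ^= v0; v0 = _rotl(v0, 32)
        v2 = (v2 + v3) & MASK; v3 = _rotl(v3, 16); v3 ^= v2
        v0 = (v0 + v3) & MASK; v3 = _rotl(v3, 21); v3 ^= v0
        v2 = (v2 + v1) & MASK; v1 = _rotl(v1, 17); v1 ^= v2; v2 = _rotl(v2, 32)

    # compress full 8-byte words
    n = len(data) // 8
    for i in range(n):
        m = int.from_bytes(data[8 * i:8 * i + 8], 'little')
        v3 ^= m
        sipround(); sipround()
        v0 ^= m

    # last word: remaining bytes plus the data length in the top byte
    b = (len(data) & 0xFF) << 56
    for i, c in enumerate(data[8 * n:]):
        b |= c << (8 * i)
    v3 ^= b
    sipround(); sipround()
    v0 ^= b

    # finalization
    v2 ^= 0xFF
    for _ in range(4):
        sipround()
    return (v0 ^ v1 ^ v2 ^ v3) & MASK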
msg178798 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2013-01-02 05:52
Thanks for the information! I'm working on a PEP for the issue at hand.
msg178800 - (view) Author: Alyssa Coghlan (ncoghlan) * (Python committer) Date: 2013-01-02 07:08
Bob, the hash invariant isn't a mere implementation detail, it is critical to making hash-based data structures work properly - if two equal objects (say the integer zero and the float zero) ever end up in different hash bins, then the uniqueness property of dictionary keys and sets breaks down.
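
A quick interactive session shows why (the int/float pair is just the most familiar example):

>>> hash(0) == hash(0.0)
True
>>> d = {0: 'int'}
>>> d[0.0] = 'float'    # equal key, equal hash -> same slot, value replaced
>>> d
{0: 'float'}

If the two zeros hashed differently, both entries could survive in the dict, and key uniqueness would silently be gone.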

The three proposed mitigation strategies (using SipHash for string hashing, a tunable collision-counting hash map, and providing a non-hash-based mapping container in the standard library) are all reasonable and, most importantly, *orthogonal* approaches to the problem. There's nothing stopping us doing all three if someone is willing and able to provide the code.
msg178808 - (view) Author: Giovanni Bajo (Giovanni.Bajo) Date: 2013-01-02 10:02
On 2 Jan 2013, at 00:20, Domen Kožar <report@bugs.python.org> wrote:

> 
> Domen Kožar added the comment:
> 
> According to talk at 29c3: http://events.ccc.de/congress/2012/Fahrplan/events/5152.en.html
> 
> Quote: We also describe a vulnerability of Python's new randomized hash, allowing an attacker to easily recover the 128-bit secret seed. As a reliable fix to hash-flooding, we introduce SipHash, a family of cryptographically strong keyed hash function competitive in performance with the weak hashes, and already adopted in OpenDNS, Perl 5, Ruby, and in the Rust language.

That is exactly the vulnerability that was previously mentioned in the context of this bug. SipHash is currently the only available hash that is both collision-resistant and fast enough. 
-- 
Giovanni Bajo
msg178809 - (view) Author: Giovanni Bajo (Giovanni.Bajo) Date: 2013-01-02 10:15
On 2 Jan 2013, at 06:52, Christian Heimes <report@bugs.python.org> wrote:

> 
> Christian Heimes added the comment:
> 
> Thanks for the information! I'm working on a PEP for the issue at hand.

Since you're collecting ideas on this, I would like to stress that, in the Python 3 transition, it was deemed acceptable to switch all objects to use unicode strings for attribute names, making the hash computation of such attributes (in the context of the instance dictionary) at least twice as slow as it used to be (the 'at least' refers to the fact that longer strings might have even worse effects because of a higher number of cache misses). SipHash isn't twice as slow as the current hash function, not even for short strings.

So there is a precedent in slowing down the hash computation time in a very important use case, and it doesn't look like hell froze over.
-- 
Giovanni Bajo
msg178814 - (view) Author: Benjamin Peterson (benjamin.peterson) * (Python committer) Date: 2013-01-02 14:49
2013/1/2 Giovanni Bajo <report@bugs.python.org>:
>
> Giovanni Bajo added the comment:
>
> On 2 Jan 2013, at 06:52, Christian Heimes <report@bugs.python.org> wrote:
>
>>
>> Christian Heimes added the comment:
>>
>> Thanks for the information! I'm working on a PEP for the issue at hand.
>
> Since you're collecting ideas on this, I would like to stress that, in the Python 3 transition, it was deemed acceptable to switch all objects to use unicode strings for attribute names, making the hash computation of such attributes (in the context of the instance dictionary) at least twice as slow as it used to be (the 'at least' refers to the fact that longer strings might have even worse effects because of a higher number of cache misses). SipHash isn't twice as slow as the current hash function, not even for short strings.
>
> So there is a precedent in slowing down the hash computation time in a very important use case, and it doesn't look like hell froze over.

It's probably not too bad for attribute names because a) they're short,
b) they're interned, and c) the hash is cached.
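
Point c) is easy to see in a rough micro-benchmark (numbers vary by machine): a str object computes its hash once and then reuses the cached value, so even a huge string hashes in constant time after the first call.

import timeit

# the string is built once in setup; the first timed call computes the
# hash, every later call just loads the cached value from the object
print(timeit.timeit("hash(s)", setup="s = 'x' * 10**6", number=10**6))
print(timeit.timeit("hash(s)", setup="s = 'x'", number=10**6))
# both loops take about the same time despite the 10**6x size difference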
msg178836 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2013-01-02 18:51
Giovanni, why do you think that hashing of unicode strings is slower than hashing of byte strings? 

First of all, ASCII-only unicode strings are packed and use just one byte per char. CPython's FNV implementation processes one element per iteration, that is one byte for bytes and ASCII unicode, two bytes for UCS-2 and four bytes for UCS-4. Bytes and UCS-4 strings of the same length therefore require the same number of CPU instructions.
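
A rough pure-Python model of the randomized FNV scheme used in 2.7/3.3 makes the one-element-per-iteration structure visible (64-bit build assumed; `data` is a bytes object or any sequence of code points):

def fnv(data, prefix, suffix):
    """One loop iteration per element, whatever the element width is."""
    mask = 2**64 - 1
    if not data:
        return 0
    x = (prefix ^ (data[0] << 7)) & mask
    for c in data:                       # c: one byte, or one UCS-2/UCS-4 unit
        x = ((1000003 * x) ^ c) & mask
    x ^= len(data)
    x ^= suffix                          # xored in last: never affects collisions
    if x == mask:                        # -1 is reserved for error returns
        x = mask - 1                     # i.e. -2
    return x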
msg178837 - (view) Author: Giovanni Bajo (Giovanni.Bajo) Date: 2013-01-02 18:56
On 2 Jan 2013, at 19:51, Christian Heimes <report@bugs.python.org> wrote:

> 
> Christian Heimes added the comment:
> 
> Giovanni, why do you think that hashing of unicode strings is slower than hashing of byte strings? 
> 
> First of all, ASCII-only unicode strings are packed and use just one byte per char. CPython's FNV implementation processes one element per iteration, that is one byte for bytes and ASCII unicode, two bytes for UCS-2 and four bytes for UCS-4. Bytes and UCS-4 strings of the same length therefore require the same number of CPU instructions.

Ah sorry, I stand corrected (though packing wasn't there in 3.0, was it? I was specifically referring to the 2.x -> 3.0 transition).
-- 
Giovanni Bajo

My Blog: http://giovanni.bajo.it
msg178842 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2013-01-02 19:42
See the microbenchmark results in issue16427. Indeed, hashing of ASCII/UCS1 strings is more than 2x slower than bytes hashing.
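
A crude way to reproduce it (the allocation of a fresh object each round is included on both sides; the runtime variable k keeps the compiler from constant-folding the string, whose hash would then be cached):

import timeit

n = 100000
print(timeit.timeit("hash('a' * k)", setup="k = 1000", number=n))   # ASCII str
print(timeit.timeit("hash(b'a' * k)", setup="k = 1000", number=n))  # bytes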
msg203506 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2013-11-20 16:50
The issue has been solved for Python 3.4 with the integration of PEP 456.
msg205758 - (view) Author: Alyssa Coghlan (ncoghlan) * (Python committer) Date: 2013-12-10 00:29
This issue has belatedly had a CVE assigned: CVE-2013-7040 ("CPython hash secret can be recovered remotely")

3.4 is not affected (due to PEP 456), but 3.3 and 2.7 are still affected.
msg205759 - (view) Author: Benjamin Peterson (benjamin.peterson) * (Python committer) Date: 2013-12-10 00:34
I think that's just WONTFIX at this point.
History
Date User Action Args
2022-04-11 14:57:29 admin set github: 58826
2016-04-22 09:23:46 serhiy.storchaka set messages: - msg263978
2016-04-22 08:47:35 SilentGhost set nosy: + lemburg, arigo, gregory.p.smith, mark.dickinson, ncoghlan, vstinner, christian.heimes, benjamin.peterson, iElectric, Arfrever, alex, cvrebert, dmalcolm, Giovanni.Bajo, PaulMcMillan, serhiy.storchaka, bkabrda, Vlado.Boza, koniiiik, sbermeister, camara, pconnell, Łukasz.Rekucki, ReneSac, Bob.Ziuchkovski, dstufft, isoschiz
2016-04-22 07:48:56 editor-buzzfeed set nosy: + editor-buzzfeed, - lemburg, arigo, gregory.p.smith, mark.dickinson, ncoghlan, vstinner, christian.heimes, benjamin.peterson, iElectric, Arfrever, alex, cvrebert, dmalcolm, Giovanni.Bajo, PaulMcMillan, serhiy.storchaka, bkabrda, Vlado.Boza, koniiiik, sbermeister, camara, pconnell, Łukasz.Rekucki, ReneSac, Bob.Ziuchkovski, dstufft, isoschiz
messages: + msg263978
2013-12-10 00:34:45 benjamin.peterson set status: open -> closed
resolution: fixed
messages: + msg205759
2013-12-10 00:29:54 ncoghlan set status: pending -> open
versions: + Python 2.7, Python 3.3
nosy: + bkabrda

messages: + msg205758
2013-11-20 16:50:12 christian.heimes set status: open -> pending

messages: + msg203506
stage: resolved
2013-06-01 19:43:58 dstufft set nosy: + dstufft
2013-04-20 13:29:54 isoschiz set nosy: + pconnell, isoschiz
2013-01-02 19:42:48 serhiy.storchaka set messages: + msg178842
2013-01-02 18:56:26 Giovanni.Bajo set messages: + msg178837
2013-01-02 18:51:52 christian.heimes set messages: + msg178836
2013-01-02 14:49:30 benjamin.peterson set messages: + msg178814
2013-01-02 10:15:43 Giovanni.Bajo set messages: + msg178809
2013-01-02 10:02:31 Giovanni.Bajo set messages: + msg178808
2013-01-02 07:08:20 ncoghlan set nosy: + ncoghlan
messages: + msg178800
2013-01-02 05:52:45 christian.heimes set assignee: christian.heimes
messages: + msg178798
2013-01-01 23:20:26 iElectric set nosy: + iElectric
messages: + msg178784
2012-12-02 20:47:31 Bob.Ziuchkovski set nosy: + Bob.Ziuchkovski
messages: + msg176808
2012-11-30 22:51:28 lemburg set messages: + msg176725
2012-11-30 21:27:24 serhiy.storchaka set messages: + msg176721
2012-11-30 21:25:54 serhiy.storchaka set messages: + msg176720
2012-11-30 20:35:39 ReneSac set messages: + msg176715
2012-11-30 20:34:37 lemburg set messages: + msg176714
2012-11-30 20:33:04 koniiiik set messages: + msg176713
2012-11-30 20:06:32 serhiy.storchaka set messages: + msg176710
2012-11-30 20:01:02 ReneSac set messages: + msg176709
2012-11-30 19:39:20 serhiy.storchaka set messages: + msg176705
2012-11-30 19:20:28 christian.heimes set messages: + msg176704
2012-11-30 18:12:10 ReneSac set nosy: + ReneSac
messages: + msg176697
2012-11-30 07:53:28 Łukasz.Rekucki set nosy: + Łukasz.Rekucki
messages: + msg176680
2012-11-11 11:14:30 Giovanni.Bajo set messages: + msg175345
2012-11-11 10:09:31 vstinner set messages: + msg175342
2012-11-11 04:56:29 cvrebert set nosy: + cvrebert
messages: + msg175318
2012-11-10 22:23:39 gregory.p.smith set nosy: + gregory.p.smith
messages: + msg175299
2012-11-08 22:16:06 christian.heimes set messages: + msg175198
2012-11-08 22:10:04 christian.heimes set messages: + msg175196
2012-11-08 21:58:55 sbermeister set nosy: + sbermeister
messages: + msg175192
2012-11-07 13:16:40 Giovanni.Bajo set messages: + msg175100
2012-11-07 13:03:05 mark.dickinson set messages: + msg175099
2012-11-07 13:02:05 arigo set messages: + msg175098
2012-11-07 12:55:50 serhiy.storchaka set messages: + msg175097
2012-11-07 12:54:59 arigo set messages: + msg175096
2012-11-07 12:40:26 serhiy.storchaka set messages: + msg175094
2012-11-07 12:19:34 lemburg set messages: + msg175091
2012-11-07 12:06:04 mark.dickinson set messages: + msg175089
2012-11-07 11:59:56 lemburg set messages: + msg175088
2012-11-07 11:55:11 mark.dickinson set nosy: + mark.dickinson
messages: + msg175086
2012-11-07 11:53:11 christian.heimes set messages: + msg175085
2012-11-07 11:51:34 serhiy.storchaka set files: + _Py_Hash_Sip24.S

messages: + msg175083
2012-11-07 11:26:19 Giovanni.Bajo set messages: + msg175082
2012-11-07 11:19:16 lemburg set messages: + msg175081
2012-11-07 11:06:00 arigo set messages: + msg175080
2012-11-07 08:44:41 Giovanni.Bajo set messages: + msg175053
2012-11-07 08:41:38 lemburg set messages: + msg175052
2012-11-07 08:34:26 lemburg set files: + hash-attack-3.patch
keywords: + patch
messages: + msg175050
2012-11-07 07:40:30 serhiy.storchaka set messages: + msg175048
2012-11-07 06:49:21 serhiy.storchaka set messages: + msg175047
2012-11-07 01:27:11 christian.heimes set messages: + msg175040
2012-11-07 00:52:07 christian.heimes set messages: + msg175038
2012-11-06 20:12:40 serhiy.storchaka set messages: + msg175007
2012-11-06 19:11:36 christian.heimes set hgrepos: + hgrepo159
messages: + msg175000
2012-11-06 18:26:07 Giovanni.Bajo set messages: + msg174999
2012-11-06 17:10:45 christian.heimes set messages: + msg174998
2012-11-06 16:49:04 christian.heimes set messages: + msg174994
2012-11-06 16:29:19 Giovanni.Bajo set nosy: + Giovanni.Bajo
messages: + msg174990
2012-11-06 16:08:03 christian.heimes set messages: + msg174989
2012-11-06 15:54:14 christian.heimes set messages: + msg174987
2012-11-06 15:44:59 alex set nosy: + alex
2012-11-06 15:44:42 camara set nosy: + camara
messages: + msg174986
2012-11-06 15:04:42 benjamin.peterson set messages: + msg174973
2012-11-06 14:10:27 arigo set messages: + msg174964
2012-10-22 06:48:48 lemburg set nosy: + lemburg
messages: + msg173498
2012-10-21 22:46:31 serhiy.storchaka set nosy: + serhiy.storchaka
messages: + msg173491
2012-10-21 21:42:53 vstinner set messages: + msg173488
2012-10-21 16:33:51 christian.heimes set messages: + msg173461
2012-10-21 15:39:43 benjamin.peterson set keywords: - easy

messages: + msg173458
2012-10-21 15:35:29 arigo set keywords: + easy

messages: + msg173457
2012-10-21 15:28:08 arigo set nosy: + arigo
messages: + msg173455
2012-10-17 16:53:31 christian.heimes set messages: + msg173185
2012-10-11 23:14:36 christian.heimes set nosy: + christian.heimes
2012-04-26 23:10:43 vstinner set messages: + msg159434
2012-04-26 23:08:55 benjamin.peterson set nosy: + benjamin.peterson
messages: + msg159433
2012-04-26 22:58:42 vstinner set files: + hash.py

messages: + msg159431
2012-04-26 22:58:24 vstinner set files: - hash.py
2012-04-26 22:47:54 vstinner set messages: + msg159430
2012-04-20 17:44:06 Vlado.Boza set messages: + msg158860
2012-04-20 00:52:15 Arfrever set nosy: + Arfrever
2012-04-20 00:43:36 vstinner set files: + hash.py

messages: + msg158790
2012-04-20 00:21:15 Vlado.Boza set messages: + msg158785
2012-04-20 00:08:30 koniiiik set messages: + msg158784
2012-04-20 00:05:56 vstinner set messages: + msg158783
2012-04-19 23:59:47 dmalcolm set messages: + msg158781
2012-04-19 23:58:15 Vlado.Boza set messages: + msg158780
2012-04-19 23:53:19 Vlado.Boza set messages: + msg158778
2012-04-19 23:26:32 vstinner set files: + find_hash_collision.py

messages: + msg158773
2012-04-19 22:40:48 pitrou set nosy: + PaulMcMillan
2012-04-19 21:31:07 dmalcolm set messages: + msg158759
2012-04-19 21:23:32 pitrou set nosy: + vstinner
2012-04-19 21:07:35 koniiiik set nosy: + koniiiik
2012-04-19 20:44:58 dmalcolm set nosy: + dmalcolm
2012-04-19 20:40:35 Vlado.Boza set messages: + msg158744
2012-04-19 17:58:09 Vlado.Boza create