Title: Dict lookups fail if sizeof(Py_ssize_t) < sizeof(long)
Type: behavior Stage: patch review
Components: Build, Interpreter Core Versions: Python 3.2
Status: closed Resolution: out of date
Dependencies: Superseder: Make hash values the same width as a pointer (or Py_ssize_t)
View: 9778
Assigned To: belopolsky Nosy List: belopolsky, georg.brandl, jimjjewett, ked-tao, loewis, mark.dickinson, pitrou, tim.peters
Priority: high Keywords: patch

Created on 2007-01-27 18:23 by ked-tao, last changed 2010-11-20 18:53 by belopolsky. This issue is now closed.

File name Uploaded Description Edit
dict.diff ked-tao, 2007-02-04 14:11 Suggested patch (against python 2.5 release)
issue1646068.diff belopolsky, 2010-07-14 18:20 Patch for py3k branch revision 82889
Messages (16)
msg31105 - (view) Author: ked-tao (ked-tao) Date: 2007-01-27 18:23
Porting problem.

Include/dictobject.h defines PyDictEntry.me_hash as a Py_ssize_t. Everywhere else uses a C 'long' for hashes.

On the system I'm porting to, ints and pointers (and ssize_t) are 32-bit, but longs and long longs are 64-bit. Therefore, the assignments to me_hash truncate the hash and subsequent lookups fail.

I've changed the definition of me_hash to 'long' and (in Objects/dictobject.c) removed the casting from the various assignments and changed the definition of 'i' in dict_popitem(). This has fixed my immediate problems, but I guess I've just reintroduced whatever problem it got changed for. The comment in the header says:

/* Cached hash code of me_key.  Note that hash codes are C longs.
 * We have to use Py_ssize_t instead because dict_popitem() abuses
 * me_hash to hold a search finger.
 */
... but that doesn't really explain what it is about dict_popitem() that requires the different type.

Thanks. Kev.
msg31106 - (view) Author: Georg Brandl (georg.brandl) * (Python committer) Date: 2007-01-27 19:40
This is your code, Tim.
msg31107 - (view) Author: Jim Jewett (jimjjewett) Date: 2007-02-02 20:20
The whole point of a hash is that if it doesn't match, you can skip an expensive comparison.  How big to make the hash is a tradeoff between how much you'll waste calculating and storing it vs how often it will save a "real" comparison.

The comment means that, as an implementation detail, popitem assumes it can store a pointer there instead, so hashes need to be at least as big as a pointer.  

Going to the larger of the two sizes will certainly solve your problem; it just wastes some space, and maybe some time calculating the hash.  

If you want to get that space back, just make sure the truncation is correct and consistent.  I *suspect* your problem is that when there is a collision, either 

(1)  It is comparing a truncated value to an untruncated value, or
(2)  The perturbation to find the next slot is going wrong, because of when the truncation happens.
msg31108 - (view) Author: ked-tao (ked-tao) Date: 2007-02-04 14:11
Hi Jim. I understand what the problem is (perhaps I didn't state it clearly enough) - me_hash is a cache of the dict item's hash which is compared against the hash of the object being looked up before going any further with expensive richer comparisons. On my system, me_hash is a 32-bit quantity but hashes in general are declared 'long' which is a 64-bit quantity. Therefore for any object whose hash has any of the top 32 bits set, a dict lookup will fail as it will never get past that first check (regardless of why that slot is being checked - it has nothing to do with the perturbation to find the next slot).

The deal is that my system is basically a 32-bit system (sizeof(int) == sizeof(void *) == 4, and therefore ssize_t is not unreasonably also 32-bit), but C longs are 64-bit.

You say "popitem assumes it can store a pointer there", but AFAICS it's just storing an _index_, not a pointer. I was concerned that making that index a 64-bit quantity might tickle some subtlety in the code, thinking that perhaps it was changed from 'long' to 'Py_ssize_t' because it had to be 32-bit for some reason. However, it seems much more likely that it was defined like that to be more correct on a system with 64-bit addressing and 32-bit longs (which would be more common). With that in mind, I've attached a suggested patch which selects a reasonable type according to the SIZEOF_ configuration defines.

WRT forcing the hashes 32-bit to "save space and time" - that means inventing a 'Py_hash_t' type and going through the entire python source looking for 'long's that might be used to store/calculate hashes. I think I'll pass on that ;)

Regards, Kev.
File Added: dict.diff
msg31109 - (view) Author: Jim Jewett (jimjjewett) Date: 2007-02-04 16:35
Yes, I'm curious about what system this is ... is it a characteristic of the whole system, or a compiler choice to get longer ints?

As to using a Py_hash_t -- it probably wouldn't be as bad as you think.  You might get away with just masking it to throw away the high order bits in dict and set.  (That might not work with perturbation.)  

Even if you have to change it everywhere at the source, then there is some prior art (from when hash was allowed to be a python long), and it is almost certainly limited to methods with "hash" in the name which generate a hash.  (eq/ne on the same objects may use the hash.)  Consumers of hash really are limited to dict and derivatives.  I think dict, set, and defaultdict may be the full list for the default distribution.
msg31110 - (view) Author: Martin v. Löwis (loewis) * (Python committer) Date: 2007-02-06 20:03
ked-tao: as for "doesn't really explain", please take a look at this comment:

        /* Set ep to "the first" dict entry with a value.  We abuse the hash
         * field of slot 0 to hold a search finger:
         * If slot 0 has a value, use slot 0.
         * Else slot 0 is being used to hold a search finger,
         * and we use its hash value as the first index to look.

So .popitem first returns (and removes) the item in slot 0. Afterwards, it does a 
linear scan through the dictionary, returning one item at a time. To avoid
re-scanning the emptying dictionary over and over again, the me_hash
value of slot 0 indicates the place to start searching when the next .popitem
call is made. Of course, this value may start out bogus and out-of-range,
or may become out-of-range if the dictionary shrinks; in that case, it
starts over at index 1. If it is bogus (i.e. never set as a search finger)
and in-range, that's fine: it will just start searching for a non-empty
slot at me_hash.

Because it is a slot number, me_hash must be large enough to hold a
Py_ssize_t. On some systems (Win64 in particular), long is not large
enough to hold Py_ssize_t.

I believe the proposed patch is fine.
msg61525 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2008-01-22 19:40
The patch looks fine, but why isn't a union used instead of trying to figure out the wider of the two types?
msg110293 - (view) Author: Mark Lawrence (BreamoreBoy) * Date: 2010-07-14 16:09
This is a small patch: Martin has stated "I believe the proposed patch is fine", Antoine has raised only one query, and the issue concerns 32- vs 64-bit sizes. Could someone with the relevant knowledge please take a look with a view to moving this forward.
msg110300 - (view) Author: Alexander Belopolsky (belopolsky) * (Python committer) Date: 2010-07-14 17:27
Responding to Antoine's question: I don't understand how you would use a union here.  Certainly you cannot define Py_dicthashcache_t as a union of long and Py_ssize_t, because you would not be able to easily assign long or Py_ssize_t values to it. I don't think ANSI C allows a cast from an integer type to a union.

I am OK with the patch, but if this goes into 2.7/3.x, I think the same change should be applied to the set type.
msg110303 - (view) Author: Alexander Belopolsky (belopolsky) * (Python committer) Date: 2010-07-14 17:41
On second thought, this comment:

-	/* Cached hash code of me_key.  Note that hash codes are C longs.
-	 * We have to use Py_ssize_t instead because dict_popitem() abuses
-	 * me_hash to hold a search finger.
-	 */

suggests that a union may be appropriate here.  I am not sure of the standing of anonymous unions in the standard, but if we could do

union {
  Py_ssize_t me_finger;
  long me_hash;
};
it would cleanly solve the problem.  If anonymous unions are not available, a regular union could also do the trick:

union {
  Py_ssize_t finger;
  long hash;
} me;

and use me.finger where the field serves as the search finger and me.hash where it stores the hash.  A less clever naming scheme would be welcome, though.
msg110306 - (view) Author: Alexander Belopolsky (belopolsky) * (Python committer) Date: 2010-07-14 17:52
Please ignore my comment about set type.  Sets don't have this issue.
msg110307 - (view) Author: Alexander Belopolsky (belopolsky) * (Python committer) Date: 2010-07-14 17:58
Yet it looks like set has a bigger problem whenever sizeof(Py_ssize_t) > sizeof(long).  setentry.hash is defined as long, but set_pop() stores a Py_ssize_t index in it.
msg110310 - (view) Author: Alexander Belopolsky (belopolsky) * (Python committer) Date: 2010-07-14 18:20
I am attaching a patch that uses a regular union of long and Py_ssize_t to store cached hash/index value in both set and dict entry.  Using an anonymous union would simplify the patch and would reduce the likelihood of breaking extensions that access entry structs.
msg110315 - (view) Author: Martin v. Löwis (loewis) * (Python committer) Date: 2010-07-14 19:11
If this goes in, it can't go into bug fix releases, as it may break the ABI.
msg110329 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2010-07-14 20:56
The approach in Alexander's patch looks fine to me.  +1 on applying this patch if someone can test it on a platform where sizeof(long) > sizeof(Py_ssize_t), and also verify that there are current test failures that are fixed by this patch.  (If there are no such tests, we should add some.)

I've only tested this on machines with sizeof(long) == sizeof(Py_ssize_t) == 4 and sizeof(long) == sizeof(Py_ssize_t) == 8, which isn't really much of a test at all.

ked-tao, if you're still listening:  are you in a position to test Alexander's patch on a relevant machine?
msg119021 - (view) Author: Alexander Belopolsky (belopolsky) * (Python committer) Date: 2010-10-18 14:11
Issue #9778 makes this out of date.
Date User Action Args
2010-11-20 18:53:51belopolskysetstatus: pending -> closed
2010-10-18 14:11:54belopolskysetstatus: open -> pending

superseder: Make hash values the same width as a pointer (or Py_ssize_t)
assignee: tim.peters -> belopolsky

nosy: - BreamoreBoy
messages: + msg119021
resolution: out of date
2010-07-14 20:56:05mark.dickinsonsetnosy: + mark.dickinson
messages: + msg110329
2010-07-14 19:11:42loewissetmessages: + msg110315
versions: - Python 2.6, Python 3.1, Python 2.7
2010-07-14 18:20:57belopolskysetfiles: + issue1646068.diff

messages: + msg110310
2010-07-14 17:58:30belopolskysetmessages: + msg110307
2010-07-14 17:52:18belopolskysetmessages: + msg110306
2010-07-14 17:41:38belopolskysetmessages: + msg110303
2010-07-14 17:27:07belopolskysetnosy: + belopolsky
messages: + msg110300
2010-07-14 16:09:36BreamoreBoysetnosy: + BreamoreBoy
messages: + msg110293
2010-05-11 20:51:39terry.reedysetversions: + Python 3.1, Python 2.7, Python 3.2, - Python 3.0
2009-03-30 20:03:41ajaksu2setkeywords: + patch
type: enhancement -> behavior
versions: + Python 2.6, Python 3.0, - Python 3.1, Python 2.7
2009-03-30 19:48:07ajaksu2setstage: patch review
type: enhancement
components: + Build
versions: + Python 3.1, Python 2.7, - Python 2.5
2008-01-22 19:40:58pitrousetnosy: + pitrou
messages: + msg61525
2007-01-27 18:23:44ked-taocreate