Author nnorwitz
Date 2006-12-23.03:33:25
Marked as misclassified
Content
I only reviewed the code in the patch.  I have not thought about this conceptually: whether it's a good idea, how to break it, etc.  I also did not verify that all the necessary dict methods were updated to add TOUCHED.

Have you profiled the slow down to dicts in normal cases?  Things like:

   d = {}
   d[0] = 0

etc?  All dict operations are going to be a tiny bit slower due to an additional assignment.  It's probably not measurable, but you should still verify this.

What is LOAD_FAST_HACK in ceval.c?  It doesn't seem necessary for this patch.

Please revert any whitespace/formatting changes that aren't part of the patch (there are a couple in dictobject.c and codeobject.c).

It seems like a little structure would be better than parallel arrays.  It would reduce some calculations and the number of mallocs, there would be only a single assignment in the eval loop, and the names would probably wind up being clearer.

Would it be better to use PyObject_Malloc rather than PyMem_Malloc?  It should be faster for sizes <= 256.  (Despite the name, it doesn't have to handle PyObjects.)

Why not use a size_t instead of Py_ssize_t for the timestamp?  This is conceptually unsigned, no?

Use the macro version, PyTuple_GET_SIZE(names), in codeobject.c.  It will be a little faster.

If the cache allocations fail when allocating a code object, ceval will crash because it does not check for NULL.

As for the timestamp, there should probably be a macro to access the field for a dict.  This macro should be used in ceval.c/codeobject.c for getting/setting it.

When returning the timestamp in dictobject.c, you shouldn't cast it to a long.  Use the appropriate function, such as PyInt_FromSsize_t().
History
Date User Action Args
2007-08-23 15:55:45 admin link issue1616125 messages
2007-08-23 15:55:45 admin create