Author novalis_dt
Recipients novalis_dt
Date 2008-11-05.19:30:26
SpamBayes Score 7.86302e-10
Marked as misclassified No
Message-id <>
This is a patch to Tom Lee's AST optimization branch.

Python bytecode includes at least one operation which is not directly
accessible from Python code: LIST_APPEND, which pops a value and a list
off the stack and appends the value to the list. This is used in
generating code for list comprehensions:

[x for x in somelist]

generates the following code for the body of the loop:

LOAD_FAST 1 #a local is generated to hold the new list
LOAD_FAST 2 #the iteration variable, x
LIST_APPEND #pop both and append x to the list
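
This can be checked with the dis module; a minimal sketch (on a modern CPython the exact operands differ from the 2.x listing above, but the dedicated opcode is still emitted):

```python
import dis
import io

# Disassemble a list comprehension; dis.dis recursively disassembles
# any nested code objects, so the comprehension body is included.
code = compile("[x for x in somelist]", "<demo>", "eval")
buf = io.StringIO()
dis.dis(code, file=buf)
listing = buf.getvalue()

# The compiler uses the dedicated opcode rather than a method call.
print("LIST_APPEND" in listing)  # prints True
```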

Whereas if you were to write the equivalent loop by hand, calling
local.append directly, you would get:

LOAD_FAST 1     #the list
LOAD_ATTR 0     #an index into the names table: "append"
LOAD_FAST 2     #x
CALL_FUNCTION 1 #pop x and the bound append method off the top of the
                #stack and push the result of the call
POP_TOP         #discard the None that append returns
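
For comparison, disassembling a hand-written append loop shows that LIST_APPEND never appears there; each iteration goes through an attribute lookup plus a call instead. Opcode names for the call vary across CPython versions, so this sketch only checks for the opcode families:

```python
import dis
import io

# An explicit loop that appends by hand; compiled but never executed,
# so the undefined name somelist is harmless.
src = (
    "a = []\n"
    "for x in somelist:\n"
    "    a.append(x)\n"
)
code = compile(src, "<demo>", "exec")
buf = io.StringIO()
dis.dis(code, file=buf)
listing = buf.getvalue()

# No LIST_APPEND here, but some CALL-family opcode is present
# (CALL_FUNCTION, CALL_METHOD, or CALL, depending on the version).
print("LIST_APPEND" in listing, "CALL" in listing)
```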

This is much slower. In part it's the cost of doing the attribute
lookup each time, which is why it's common to see code like:

a = []
ap = a.append
for x in ...:
    ap(x + ...)
return a

But even the pre-bound method call is still slower than the simpler
LIST_APPEND operation. The attached patch tries to determine statically
whether a local in all circumstances holds a list, and if so generates
LIST_APPEND whenever user code would call local.append. It's not
perfect; in particular, I could track local types at a more
fine-grained level than per-function. But it's a start.
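
The slowdown described above can be measured with timeit. An illustrative sketch, not part of the patch; the absolute numbers are machine-dependent:

```python
import timeit

setup = "data = list(range(1000))"

# The comprehension compiles to LIST_APPEND in its loop body.
comp = timeit.timeit("[x for x in data]", setup=setup, number=5000)

# The explicit loop goes through a (pre-bound) method call instead.
loop = timeit.timeit(
    "a = []\n"
    "ap = a.append\n"
    "for x in data:\n"
    "    ap(x)\n",
    setup=setup,
    number=5000,
)

print("comprehension: %.3fs  append loop: %.3fs" % (comp, loop))
```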
Date User Action Args
2008-11-05 19:31:33 novalis_dt set recipients: + novalis_dt
2008-11-05 19:31:33 novalis_dt set messageid: <>
2008-11-05 19:30:33 novalis_dt link issue4264 messages
2008-11-05 19:30:32 novalis_dt create