This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: More opcode predictions
Type: enhancement Stage: resolved
Components: Interpreter Core Versions: Python 3.6
process
Status: closed Resolution: fixed
Dependencies: Superseder:
Assigned To: serhiy.storchaka Nosy List: python-dev, rhettinger, serhiy.storchaka
Priority: low Keywords: patch

Created on 2016-06-07 18:41 by serhiy.storchaka, last changed 2022-04-11 14:58 by admin. This issue is now closed.

Files
File name Uploaded Description Edit
more_opcode_predicts.patch serhiy.storchaka, 2016-06-07 18:41 review
more_opcode_predicts_2.patch serhiy.storchaka, 2016-06-26 21:38 review
Messages (8)
msg267719 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2016-06-07 18:41
Currently the PREDICT() macro is used for predicting the following pairs of opcodes in the ceval loop:

LIST_APPEND JUMP_ABSOLUTE
SET_ADD JUMP_ABSOLUTE
MAP_ADD JUMP_ABSOLUTE
COMPARE_OP POP_JUMP_IF_FALSE
COMPARE_OP POP_JUMP_IF_TRUE
GET_ITER FOR_ITER
FOR_ITER STORE_FAST (if iteration continues)
FOR_ITER UNPACK_SEQUENCE (if iteration continues)
WITH_CLEANUP_START WITH_CLEANUP_FINISH
WITH_CLEANUP_FINISH END_FINALLY
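The mechanism behind PREDICT() can be illustrated with a toy dispatch loop in Python (hypothetical mini-opcodes, not CPython's actual ceval.c): after executing an opcode that is almost always followed by a known successor, the loop peeks at the next instruction and, on a hit, handles it inline instead of going back through the dispatch switch.

```python
# Toy interpreter sketch of the PREDICT() idea.  The opcode names and
# the run() function are illustrative only; they are not CPython APIs.
def run(code):
    stack = []
    pc = 0
    dispatches = 0  # trips through the main dispatch switch
    while pc < len(code):
        op, arg = code[pc]
        pc += 1
        dispatches += 1
        if op == "LOAD_CONST":
            stack.append(arg)
        elif op == "COMPARE_LT":
            b, a = stack.pop(), stack.pop()
            stack.append(a < b)
            # PREDICT(POP_JUMP_IF_FALSE): a compare is usually followed
            # by a conditional jump, so handle that successor inline.
            if pc < len(code) and code[pc][0] == "POP_JUMP_IF_FALSE":
                target = code[pc][1]
                pc += 1
                if not stack.pop():
                    pc = target
        elif op == "POP_JUMP_IF_FALSE":
            if not stack.pop():
                pc = arg
        elif op == "RETURN_VALUE":
            return stack.pop(), dispatches
    raise RuntimeError("fell off the end of the code")

program = [
    ("LOAD_CONST", 1),
    ("LOAD_CONST", 2),
    ("COMPARE_LT", None),
    ("POP_JUMP_IF_FALSE", 6),  # jump target: index of ("LOAD_CONST", "no")
    ("LOAD_CONST", "yes"),
    ("RETURN_VALUE", None),
    ("LOAD_CONST", "no"),
    ("RETURN_VALUE", None),
]
result, dispatches = run(program)
# Six instructions execute, but only five dispatches occur: the
# predicted POP_JUMP_IF_FALSE was consumed inline.
```

When a prediction misses, the peeked instruction is simply dispatched normally on the next loop iteration, which is why a mispredicted pair costs only the (cheap) peek.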

I collected static and dynamic opcode statistics. The static statistics count which opcodes occur in code objects after running the compileall script on the Lib directory. The dynamic statistics count which opcodes are executed when running the full Python test suite. The resulting numbers are close, but there are some reasonable differences (opcodes used in loops have larger numbers in the dynamic statistics, while opcodes used for creating classes and functions have larger numbers in the static statistics). In the following lines, the first number is from the dynamic statistics and the second number is from the static statistics.
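A minimal sketch of how such static opcode-pair statistics can be gathered with the dis module (this is not Serhiy's actual script, and opcode names on current CPython differ from 3.6): count, for every compiled code object, which opcode follows which.

```python
import dis
from collections import Counter

def pair_stats(code, counter=None):
    """Count (opcode, following opcode) pairs in a code object."""
    counter = Counter() if counter is None else counter
    instrs = list(dis.get_instructions(code))
    for a, b in zip(instrs, instrs[1:]):
        counter[a.opname, b.opname] += 1
    # Recurse into nested code objects (functions, comprehensions, ...).
    for const in code.co_consts:
        if hasattr(const, "co_code"):
            pair_stats(const, counter)
    return counter

src = "def f(items):\n    for k, v in items:\n        print(k, v)\n"
stats = pair_stats(compile(src, "<example>", "exec"))
```

Running this over a large code base (e.g. with compileall's file list) and sorting `stats.most_common()` gives the kind of table quoted later in this issue.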

LIST_APPEND, SET_ADD and MAP_ADD are always followed by JUMP_ABSOLUTE.
COMPARE_OP is followed by POP_JUMP_IF_FALSE in 83% (78%) and by POP_JUMP_IF_TRUE in 13% (5%).
GET_ITER is followed by FOR_ITER in 72% (80%) and CALL_FUNCTION in 28% (28%).
FOR_ITER is followed by STORE_FAST in 50% (80%), UNPACK_SEQUENCE in 21% (18%) and POP_BLOCK in 20% (0%).
WITH_CLEANUP_START is always followed by WITH_CLEANUP_FINISH.
WITH_CLEANUP_FINISH is always followed by END_FINALLY.

According to these statistics, the existing predictions are correct.

But the statistics also suggest other predictions.

JUMP_ABSOLUTE is followed by FOR_ITER in 77% (0%).
UNPACK_SEQUENCE is followed by STORE_FAST in 97% (94%).
SETUP_LOOP is followed by LOAD_FAST in 81% (52%) and LOAD_GLOBAL in 18% (31%).
BUILD_SLICE is followed by BINARY_SUBSCR in 99% (87%).
ROT_TWO is followed by STORE_FAST in 85% (40%).
LOAD_CLOSURE is followed by BUILD_TUPLE in 94% (68%).
SETUP_WITH is followed by POP_TOP in 94% (54%) and STORE_FAST in 6% (44%).
GET_YIELD_FROM_ITER, GET_AITER, GET_ANEXT and GET_AWAITABLE are always followed by LOAD_CONST.
DUP_TOP_TWO is always followed by BINARY_SUBSCR.
BEFORE_ASYNC_WITH is always followed by GET_AWAITABLE.
All INPLACE_* instructions are almost always followed by STORE_FAST.

The proposed patch adds predictions for the following pairs:

DUP_TOP_TWO BINARY_SUBSCR
INPLACE_POWER STORE_FAST
INPLACE_MULTIPLY STORE_FAST
INPLACE_MATRIX_MULTIPLY STORE_FAST
INPLACE_TRUE_DIVIDE STORE_FAST
INPLACE_FLOOR_DIVIDE STORE_FAST
INPLACE_MODULO STORE_FAST
INPLACE_ADD STORE_FAST
INPLACE_SUBTRACT STORE_FAST
INPLACE_LSHIFT STORE_FAST
INPLACE_RSHIFT STORE_FAST
INPLACE_AND STORE_FAST
INPLACE_XOR STORE_FAST
INPLACE_OR STORE_FAST
GET_AITER LOAD_CONST
GET_ANEXT LOAD_CONST
GET_AWAITABLE LOAD_CONST
UNPACK_SEQUENCE STORE_FAST
LOAD_CLOSURE BUILD_TUPLE
GET_ITER CALL_FUNCTION
GET_YIELD_FROM_ITER LOAD_CONST
FOR_ITER POP_BLOCK
BEFORE_ASYNC_WITH GET_AWAITABLE
SETUP_WITH POP_TOP
SETUP_WITH STORE_FAST
BUILD_SLICE BINARY_SUBSCR

Note that the effect of this change is not very significant, since the PREDICT() macro is a no-op when computed GOTOs are used.
msg267754 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2016-06-08 00:01
Serhiy, please slow down and stop rewriting every single thing you see.  Your rate of patches is prolific and hard to digest.  Please consider that the people who came before you (lots of them) put a lot of thought into what was done and were deliberately conservative.

Dynamic statistics can vary widely depending on the code being profiled (much like PGO optimizations for C compilers).

For a prediction to have value, the opcode needs to be commonly used, and it must occur in a highly predictive pair with an intrinsic rather than a coincidental relationship.

FWIW, these tests aren't free.  Global prediction tables have a limited size and are subject to aliasing.  Mispredictions are expensive.  Also, the ceval loop doesn't need more clutter.

The new opcodes GET_YIELD_FROM_ITER, GET_AITER, GET_ANEXT, and GET_AWAITABLE haven't been considered before.  The code in Python/compile.c shows that LOAD_CONST is the only possible next opcode, so these are reasonable candidates (i.e. the pair effectively acts as a single opcode).  Of these, I think only GET_ANEXT and GET_AWAITABLE are likely to occur in a loop.  So, these two may be worthwhile.  All the rest should probably be skipped.
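The observation that the compiler emits LOAD_CONST as the only possible successor is easy to check with the dis module. This sketch does it for GET_ANEXT on whatever interpreter runs it (surrounding opcodes differ between 3.6 and current releases, but the GET_ANEXT/LOAD_CONST pair has remained):

```python
import dis

async def consume(it):
    # "async for" is the construct that emits GET_ANEXT.
    async for x in it:
        pass

instrs = list(dis.get_instructions(consume))
i = next(j for j, ins in enumerate(instrs) if ins.opname == "GET_ANEXT")
follower = instrs[i + 1].opname  # the opcode immediately after GET_ANEXT
```

Because there is only one code path in compile.c that emits these opcodes, the pair effectively acts as a single opcode, which is what makes it a safe prediction candidate.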
msg269314 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2016-06-26 21:38
Since the existing predictions were added, many new opcodes and opcode combinations have been introduced. The predictions need to be updated to reflect the current status.

The patch contained only predictions for highly predictive (>=94%) pairs of commonly used opcodes, or 100% predictive pairs of uncommonly used opcodes (which can be more common in specific applications).

The second version of the patch contains only 100% predictions. It doesn't depend on any statistics: the compiler always generates these opcodes in pairs.
msg269350 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2016-06-27 05:20
The second version of the patch mostly looks fine.  

The prediction from DUP_TOP_TWO to BINARY_SUBSCR is questionable.  Because compile.c uses it in only one context, the prediction rate is 100%; however, there is no intrinsic relationship between the two opcodes, so the prediction is happenstance and fragile.  Reconsider whether you really want this prediction pairing.
msg269384 - (view) Author: Roundup Robot (python-dev) (Python triager) Date: 2016-06-27 15:59
New changeset f19c2b28710e by Serhiy Storchaka in branch 'default':
Issue #27255: Added more predictions in ceval.c.
https://hg.python.org/cpython/rev/f19c2b28710e
msg269391 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2016-06-27 18:14
Thank you Raymond. Committed without the prediction from DUP_TOP_TWO to BINARY_SUBSCR.

What do you think about the following highly predictive pairs?

1. From UNPACK_SEQUENCE to STORE_FAST, with 96.5% probability. This is the 15th most common pair. It is more common than any other predicted pair except COMPARE_OP/POP_JUMP_IF_FALSE. I suppose it is mostly used in for loops over dict.items(), enumerate(), etc., and that the remaining 3.5% are unpacking to object attributes (like "self.x, self.y = ...").
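Both cases can be seen with dis. This sketch (written against current CPython, where a superinstruction such as STORE_FAST_STORE_FAST may replace two STORE_FASTs, hence the prefix check) contrasts unpacking into locals with unpacking into attributes:

```python
import dis

def to_locals(pair):
    k, v = pair              # unpacking into locals -> STORE_FAST
    return k, v

class P:
    def to_attrs(self, pair):
        self.x, self.y = pair  # unpacking into attributes -> STORE_ATTR

def successor_of_unpack(func):
    """Return the opcode name immediately after UNPACK_SEQUENCE."""
    instrs = list(dis.get_instructions(func))
    i = next(j for j, ins in enumerate(instrs)
             if ins.opname == "UNPACK_SEQUENCE")
    return instrs[i + 1].opname
```

In the attribute case the successor is a load of the target object, not a store, which is where the remaining few percent come from.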

2. From BUILD_SLICE to BINARY_SUBSCR, with 99.3% probability. This is the 37th most common pair. It is more common than any other predicted pair except the three most common pairs. The remaining 0.7% are slice assignment (0.42%), slice deletion (0.29%), slice in-place operations, and extended slices.

FYI, here is a list of the most common pairs (predicted pairs are starred).

  1. 5.84%  LOAD_FAST LOAD_FAST                        22.6%
  2. 5.16%  LOAD_FAST LOAD_ATTR                        20.0%
  3. 4.18%  COMPARE_OP POP_JUMP_IF_FALSE               82.9% *
  4. 3.97%  POP_JUMP_IF_FALSE LOAD_FAST                66.3%
  5. 3.90%  STORE_FAST LOAD_FAST                       47.2%
  6. 3.70%  LOAD_FAST CALL_FUNCTION                    14.3%
  7. 3.36%  LOAD_FAST LOAD_CONST                       13.0%
  8. 2.64%  LOAD_ATTR LOAD_FAST                        35.2%
  9. 2.28%  LOAD_CONST COMPARE_OP                      26.7%
 10. 2.12%  STORE_FAST STORE_FAST                      25.6%
 11. 2.09%  LOAD_GLOBAL LOAD_FAST                      37.5%
 12. 1.49%  CALL_FUNCTION STORE_FAST                   20.5%
 13. 1.44%  <0> LOAD_FAST                              39.1%
 14. 1.37%  JUMP_ABSOLUTE FOR_ITER                     77.6%
 15. 1.29%  UNPACK_SEQUENCE STORE_FAST                 96.5%
 16. 1.28%  CALL_FUNCTION POP_TOP                      17.7%
 17. 1.28%  LOAD_FAST LOAD_GLOBAL                       4.9%
 18. 1.26%  FOR_ITER STORE_FAST                        50.3% *
 19. 1.25%  LOAD_CONST RETURN_VALUE                    14.6%
 20. 1.19%  LOAD_ATTR LOAD_CONST                       15.9%
...
 36. 0.65%  COMPARE_OP POP_JUMP_IF_TRUE                13.0% *
 37. 0.65%  BUILD_SLICE BINARY_SUBSCR                  99.3%
...
 45. 0.55%  SETUP_LOOP LOAD_FAST                       80.7%
 46. 0.55%  GET_ITER FOR_ITER                          71.9% *
 47. 0.53%  FOR_ITER UNPACK_SEQUENCE                   21.2% *
...
 50. 0.50%  FOR_ITER POP_BLOCK                         20.0% *
...
 66. 0.33%  ROT_TWO STORE_FAST                         85.8%
...
 71. 0.31%  INPLACE_ADD STORE_FAST                     92.1%
...
 73. 0.30%  LIST_APPEND JUMP_ABSOLUTE                 100.0% *
...
 90. 0.22%  BUILD_MAP STORE_FAST                       85.3%
...
 93. 0.21%  GET_ITER CALL_FUNCTION                     28.1% *
msg269409 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2016-06-27 22:17
The BUILD_SLICE pairing with BINARY_SUBSCR has a tight conceptual linkage with the lookup case dominating use patterns over slice deletion and slice assignment.  IIRC, I passed on that pairing long ago because even though the predictive power was good, it didn't come up much (a very, very low percentage of executed op-codes in any real programs).  That would mean the user wouldn't be likely to see any benefit and I worried about the cost (global prediction tables have a finite size and are subject to aliased false positives, so when in doubt, it is better to not do it at all.)  

The UNPACK_SEQUENCE pairing with STORE_FAST had the opposite issue.  The two arise in real code often enough to be interesting, but I wasn't as confident that it wasn't competing with alternative successors like STORE_NAME or attribute storage.  Here, the statistics are heavily influenced by whatever is being profiled.  Given that I wasn't confident in the pairing, I passed on it.

That said, there's nothing terribly wrong with either, so it is hard to veto them.  Now as well as back when the opcode predictions were first studied, I remain -0 on both pairings.  

In general, we should have a bias towards there being relatively few predictions (each one adds size to the generated code, adds load to the branch prediction tables, and adds to the cognitive load of anyone modifying the compiler, the peephole optimizer, or adding new opcodes) and there should be a preference to leave out cases where there is doubt.

I'll leave it to you to decide whether either one of these should go in (I just wanted you to know why they weren't included in the first place).
msg269419 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2016-06-28 07:09
OK, I'm only +0 on adding these predictions and fine with the status quo. The ratio 28:1 looks compelling to me, but all this is not important enough.
History
Date User Action Args
2022-04-11 14:58:32  admin             set     github: 71442
2016-06-28 07:09:50  serhiy.storchaka  set     status: open -> closed; resolution: fixed; messages: + msg269419; stage: patch review -> resolved
2016-06-27 22:17:29  rhettinger        set     messages: + msg269409
2016-06-27 18:14:51  serhiy.storchaka  set     messages: + msg269391
2016-06-27 15:59:22  python-dev        set     nosy: + python-dev; messages: + msg269384
2016-06-27 05:20:07  rhettinger        set     assignee: rhettinger -> serhiy.storchaka; messages: + msg269350
2016-06-26 21:38:15  serhiy.storchaka  set     files: + more_opcode_predicts_2.patch; messages: + msg269314
2016-06-08 00:01:30  rhettinger        set     assignee: rhettinger
2016-06-08 00:01:18  rhettinger        set     nosy: + rhettinger; messages: + msg267754
2016-06-07 18:41:51  serhiy.storchaka  set     priority: normal -> low
2016-06-07 18:41:44  serhiy.storchaka  create