classification
Title: ceval: use Wordcode, 16-bit bytecode
Type: performance Stage: needs patch
Components: Documentation Versions: Python 3.6
process
Status: closed Resolution: fixed
Dependencies: 26881 Superseder:
Assigned To: serhiy.storchaka Nosy List: Demur Rumed, Ryan May, abarnert, benjamin.peterson, brett.cannon, georg.brandl, haypo, josh.r, ncoghlan, python-dev, rhettinger, serhiy.storchaka, yselivanov
Priority: normal Keywords: patch

Created on 2016-03-26 23:27 by Demur Rumed, last changed 2017-03-31 16:36 by dstufft. This issue is now closed.

Files
File name Uploaded Description Edit
wpy.patch Demur Rumed, 2016-03-26 23:27 review
wcpybm.txt Demur Rumed, 2016-03-30 02:12 pybench, ccbench, EXTENDED_ARG counts, __pycache__ size
exarg_in_funcs.txt Demur Rumed, 2016-03-30 02:29 Better EXTENDED_ARG counts
2to3re.txt Demur Rumed, 2016-03-30 12:28 Benchmarks: 2to3, regex
wpy2.patch Demur Rumed, 2016-03-31 02:52 Changes from initial code review review
wpy3.patch Demur Rumed, 2016-04-01 02:12 review
wpy4.patch Demur Rumed, 2016-04-02 04:06 f_lasti = -1 review
module_finder.patch haypo, 2016-04-12 16:29 review
wpy5.patch Demur Rumed, 2016-04-13 13:03
wpy6.patch haypo, 2016-04-13 14:29 review
wpy7.patch Demur Rumed, 2016-04-29 23:10
wpy7.patch serhiy.storchaka, 2016-04-30 05:22 Regenerated for review review
wpy8.patch Demur Rumed, 2016-05-18 02:18
wpy8.patch haypo, 2016-05-20 09:15 Regenerated for review review
wpy9.patch Demur Rumed, 2016-05-22 03:39 review
wpyA.patch Demur Rumed, 2016-05-22 16:43
wpyA.patch serhiy.storchaka, 2016-05-22 16:56 Regenerated for review review
wpyB.patch Demur Rumed, 2016-05-22 17:58 review
wpyC.patch Demur Rumed, 2016-05-22 19:07 review
wpyD.patch Demur Rumed, 2016-05-22 21:38 review
default-May26-03-05-10.log haypo, 2016-05-26 06:45
wordcode.patch serhiy.storchaka, 2016-05-26 11:25 review
Pull Requests
URL Status Linked Edit
PR 552 closed dstufft, 2017-03-31 16:36
Messages (68)
msg262501 - (view) Author: Demur Rumed (Demur Rumed) * Date: 2016-03-26 23:27
Originally started @ https://github.com/abarnert/cpython/tree/wpy

This patch is based off of https://github.com/serprex/cpython/tree/wpy

It omits importlib.h & importlib_external.h as those are generated

It omits https://github.com/serprex/cpython/blob/wpy/Python/wordcode.md

I got around to benchmarking against a build of master rather than my repo's packaged version; it's currently a 1% speed improvement (every bit counts). I'm testing on an Intel Atom 330 with Linux. Besides the minor perf increase, it generates smaller bytecode & is simpler (peephole now handles EXTENDED_ARG since it isn't too hard to track, & while loops become for loops in dis)

Previous discussion: https://mail.python.org/pipermail/python-dev/2016-February/143357.html

pdb works without changes. coverage.py doesn't seem to rely on anything this changes

I modified byteplay to target this change mostly over the course of half an hour before work: https://github.com/serprex/byteplay/blob/master/wbyteplay.py

I'd be interested to hear if this encoding simplifies things for FAT python & the recent work to cache attribute/global lookup

Remaining code issues: peepholer could allocate half the space as it does now for basic block tracking, compile.c & peephole.c repeat themselves on computing instruction size given an argument & how to spit out an instruction given an argument

Breaking change in dis: I've removed HAVE_ARGUMENT. This is to help code fail fast. It could be replaced with IGNORES_ARGUMENT or, as abarnert suggested, a range(90, 256) named 'hasarg' after the other hasXXX collections
msg262542 - (view) Author: Demur Rumed (Demur Rumed) * Date: 2016-03-28 00:15
Also missing from this patch is modification of the bytecode magic number
msg262597 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2016-03-29 09:53
Sorry, I don't have the context. Can you please explain your change? What did you do? What is the rationale? Do you expect better performance? If yes, please run the Python benchmark suite and post results here. What is the new format of bytecode? etc.
msg262616 - (view) Author: Demur Rumed (Demur Rumed) * Date: 2016-03-29 15:40
I'll dig up benchmark results when I get home, but I'd be interested to get results on a less wannabe RISC CPU

The change is to have all instructions take an argument. This removes the branch on each instruction on whether to load oparg. It then also aligns instructions to always be 2 bytes rather than 1 or 3 by having arguments only take up 1 byte. In the case that an argument to an instruction is greater than 255, it can chain EXTENDED_ARG up to 3 times. In practice this rarely occurs, mostly only for jumps, & abarnert measured stdlib to be ~5% smaller

The rationale is that this offers 3 benefits: smaller code size, simpler instruction iteration/indexing (one may now scan backwards, as peephole.c does in this patch), and, between the two, a small perf gain (the only way for perf to be negatively impacted is an increase in EXTENDED_ARGs; when I post benchmarks I'll also post a count of how many more EXTENDED_ARGs are emitted)

This also means that if I want to create something like a tracer that tracks some information for each instruction, I can allocate an array of codesize/2 bytes, then index off of half the instruction index. This isn't currently done in peephole.c, nor does this include halving jump opargs
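The 2-byte format and EXTENDED_ARG chaining described above can be sketched as a tiny decoder. This is illustrative only, not CPython's ceval loop: every instruction is exactly two bytes (opcode, then a 1-byte argument), and EXTENDED_ARG prefixes supply the higher bytes of large arguments. The EXTENDED_ARG opcode value below is a placeholder, not the real opcode.h number.

```python
EXTENDED_ARG = 144  # placeholder opcode value for this sketch

def decode(code: bytes):
    """Yield (offset, opcode, full_argument) for a wordcode stream."""
    extended = 0
    for offset in range(0, len(code), 2):
        opcode, arg = code[offset], code[offset + 1]
        if opcode == EXTENDED_ARG:
            # Accumulate high bytes; a chain of up to three prefixes
            # widens the argument to 32 bits.
            extended = (extended | arg) << 8
            continue
        yield offset, opcode, extended | arg
        extended = 0
```

Because every instruction is the same width, the loop can just as easily run backwards or index directly into a side array of len(code)//2 entries, which is the property the tracer idea above relies on.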

I've looked up the 'recent work to cache attribute/global lookup' issue I mentioned: http://bugs.python.org/issue26219
I believe that patch would benefit from this one, but it'd be better to get Yury's opinion than rely on that belief
msg262622 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2016-03-29 20:20
> This also means that if I want to create something like a tracer that tracks some information for each instruction, I can allocate an array of codesize/2 bytes, then index off of half the instruction index. This isn't currently done in peephole.c, nor does this include halving jump opargs

There is something called "inline caching": put the cache between instructions, in the same memory block. Example of paper on CPython:

"Efficient Inline Caching without Dynamic Translation" by Stefan Brunthaler (2009)
https://www.sba-research.org/wp-content/uploads/publications/sac10.pdf

Yury's approach is a standard lookup table: offset => cache. In the issue #26219, he even used two tables: co->co_opt_opcodemap is an array mapping an instruction offset to the offset in the cache, then the second offset is used to retrieve cache data from a second array. You have 3 structures (co_code, co_opt_opcodemap, co_opt), whereas inline caching propose to only use one flat structure (a single array).

The paper promises "improved data locality and instruction decoding efficiency".

but "The new combined data-structure requires significantly more space—two native machine words for each instruction byte. To compensate for the additional space requirements, we use a profiling infrastructure to decide when to switch to this new instruction encoding at run time."

Memory footprint and detection of hot code is handled in the issue #26219.
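The two-table lookup described above (issue #26219's approach, as opposed to the paper's inline caching) can be sketched minimally. All names here are invented for illustration; the real co_opt_opcodemap/co_opt structures live in C inside the code object:

```python
class OpcodeCache:
    """Sketch of an offset -> cache-slot lookup table (not CPython code)."""

    def __init__(self, code_size):
        # First table: instruction offset -> slot index (0 = no cache).
        self.opcodemap = [0] * code_size
        # Second table: the cached values themselves; slot 0 is unused.
        self.cache = [None]

    def attach(self, offset):
        """Give the instruction at `offset` its own cache slot."""
        self.opcodemap[offset] = len(self.cache)
        self.cache.append(None)

    def lookup(self, offset):
        slot = self.opcodemap[offset]
        return self.cache[slot] if slot else None

    def store(self, offset, value):
        self.cache[self.opcodemap[offset]] = value
```

Inline caching instead interleaves the cache words with the instructions in one flat array, trading the extra memory per instruction for better locality, which is the cost/benefit the quoted paper discusses.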
msg262624 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2016-03-29 20:32
> The change is to have all instructions take an argument. This removes the branch on each instruction on whether to load oparg. (...)

Oh ok, I like that :-) I had the same idea.

Your patch contains unrelated changes, you should revert them to have a change simpler to review.

Removing HAVE_ARGUMENT from opcode.h/dis.py doesn't seem like a good idea. IMHO it's still useful for dis to show a more compact bytecode. For example, I expect "DUP_TOP", not "DUP_TOP 0", or worse "DUP_TOP 5".

For backward compatibility, I also suggest to keep HAS_ARG() even if it must not be used to decode instructions anymore.

The next obvious change is to use a pointer aligned to 16 bits for co_code, so that ceval.c retrieves the opcode and the argument with a single 16-bit read rather than two 8-bit reads (see the issue #25823). I suggest implementing that later to keep this change as simple as possible.
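Keeping HAVE_ARGUMENT purely for display, as suggested above, could look like the following sketch: opcodes below the threshold print without their (always-present but ignored) argument byte. The opcode numbers and the OPNAMES table are placeholders for this illustration, not the real opcode.h values.

```python
HAVE_ARGUMENT = 90                       # display threshold, as in opcode.py
OPNAMES = {4: "DUP_TOP", 100: "LOAD_CONST"}  # placeholder subset

def render(opcode, oparg):
    """Format one instruction for disassembly output."""
    name = OPNAMES.get(opcode, "<%d>" % opcode)
    if opcode < HAVE_ARGUMENT:
        return name                      # argument byte exists but is ignored
    return "%s %d" % (name, oparg)
```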
msg262646 - (view) Author: Demur Rumed (Demur Rumed) * Date: 2016-03-30 02:12
I've attached some benchmarking results as requested

There is 1 failing test which doesn't fail in master for test_trace; the unit test for #9936
msg262647 - (view) Author: Demur Rumed (Demur Rumed) * Date: 2016-03-30 02:29
To clarify format of extended arg listings: 1st column is the number of instances of EXTENDED_ARG being emitted, 2nd column is length of bytecode, followed by filename

The previous numbers were of module code, which generally runs once and lists many constants. I've attached a modification where instead I iterated over the code objects inside co_consts from compiling the *.py file. Trunk currently only emits EXTENDED_ARGs for classes (Pdb, & then the rest in Lib/typing.py) so I've omitted it
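One way to reproduce the counting described above is to compile a source file and walk the code objects nested in co_consts, tallying EXTENDED_ARG in each. This sketch uses only the stdlib; the opcode comes from the running interpreter, so the counts reflect whichever interpreter executes it, not the wordcode patch:

```python
import dis
import types

def count_extended_args(code):
    """Count EXTENDED_ARG opcodes in `code` and all nested code objects."""
    total = sum(1 for ins in dis.get_instructions(code)
                if ins.opname == "EXTENDED_ARG")
    for const in code.co_consts:
        if isinstance(const, types.CodeType):
            total += count_extended_args(const)
    return total
```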
msg262648 - (view) Author: Brett Cannon (brett.cannon) * (Python committer) Date: 2016-03-30 03:28
Thanks for the benchmark results, Demur, but I think the benchmarks Victor was talking about are the ones at hg.python.org/benchmarks
msg262658 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2016-03-30 10:13
Demur, I think you're on the right track here.  It will be nice to be rid of the HAVE_ARGUMENT tests and to not have to decode the arguments one byte at a time.  Overall, the patch looks good (although it includes several small unrelated changes).  Besides the speed benefit, the code looks cleaner than before.

I was surprised to see that the peephole optimizer grew larger, but the handling of extended arguments is likely worth it even though it adds several new wordy chunks of code.

When it comes to benchmarks, expect a certain amount of noise (especially from those that use I/O or that exercise the C-API more than the pure python bytecode).
msg262660 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2016-03-30 10:46
FWIW, I'm seeing about a 7% improvement to pystone.
msg262662 - (view) Author: Demur Rumed (Demur Rumed) * Date: 2016-03-30 12:28
While it's good to know benchmarking in core Python goes beyond the microbenchmarks included in the distribution, I'm having some trouble with hg.python.org/benchmarks due to my system only having 256MB of ram

I've attached results for 2 benchmarks: 2to3 & regex
msg262671 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2016-03-30 16:03
Report on Darwin Raymonds-2013-MacBook-Pro.local 15.4.0 Darwin Kernel Version 15.4.0: Fri Feb 26 22:08:05 PST 2016; root:xnu-3248.40.184~3/RELEASE_X86_64 x86_64 i386
Total CPU cores: 8

### 2to3 ###
Min: 4.680941 -> 4.437426: 1.05x faster
Avg: 4.703692 -> 4.498773: 1.05x faster
Significant (t=9.57)
Stddev: 0.02670 -> 0.03972: 1.4874x larger

### chameleon_v2 ###
Min: 3.391806 -> 3.300793: 1.03x faster
Avg: 3.447192 -> 3.340437: 1.03x faster
Significant (t=28.26)
Stddev: 0.03141 -> 0.02098: 1.4972x smaller

### django_v3 ###
Min: 0.339693 -> 0.328680: 1.03x faster
Avg: 0.347655 -> 0.335704: 1.04x faster
Significant (t=16.97)
Stddev: 0.00477 -> 0.00518: 1.0871x larger

### nbody ###
Min: 0.159703 -> 0.148231: 1.08x faster
Avg: 0.164307 -> 0.152380: 1.08x faster
Significant (t=34.06)
Stddev: 0.00260 -> 0.00234: 1.1123x smaller

The following not significant results are hidden, use -v to show them:
fastpickle, fastunpickle, json_dump_v2, json_load, regex_v8, tornado_http.
msg262677 - (view) Author: Brett Cannon (brett.cannon) * (Python committer) Date: 2016-03-30 18:25
Thanks to Demur and Raymond for running the benchmarks. All of these numbers look good and with Raymond saying the code looks cleaner and everyone so far -- including me -- liking the overall idea this definitely seems worth continuing to work on. Thanks for starting this, Demur, and I hope you feel up for continuing to work on making this work!
msg262684 - (view) Author: Demur Rumed (Demur Rumed) * Date: 2016-03-31 02:52
Added back HAVE_ARGUMENT & HAS_ARG. As a result, disassembly printing now omits the unused arguments for opcodes below HAVE_ARGUMENT

Removed some code which was labelled unrelated

This does _not_ include having f_lasti be -1 instead of -2
msg262716 - (view) Author: Demur Rumed (Demur Rumed) * Date: 2016-04-01 02:09
Addressed feedback from josh.rosenberg besides reintroducing FOURTH/SET_FOURTH
msg262758 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2016-04-01 19:25
I reviewed wpy3.patch.

I concur with Raymond, it's really nice to have a regular structure for the bytecode.

--

Serhiy proposed to *reduce* the size of bytecode by adding new specialized bytecode which include the argument. For example (LOAD_CONST, 0) => LOAD_CONST_0. I would like to hear his opinion on this change.
https://mail.python.org/pipermail/python-ideas/2016-February/038276.html

Data+code loaded by import is the top #1 memory consumer on basic scripts according to tracemalloc:
https://docs.python.org/dev/library/tracemalloc.html#examples

I don't know the ratio between data and code. But here we are only talking about the co_code fields of code objects. I guess that the file size of .pyc is a good estimation.

I don't think that the memory footprint of bytecode (co_code fields of code objects) really matters on computers (and smartphones?) of 2016.

*If* I have to choose between CPU performance and memory footprint, I choose the CPU!

--

> This does _not_ include having f_lasti be -1 instead of -2

IMHO it's ok to break the C API, but I would prefer to keep the backward compatibility for the Python API (replace any negative number with -1 for the Python API).
msg262787 - (view) Author: Demur Rumed (Demur Rumed) * Date: 2016-04-02 04:06
Got f_lasti working as -1. Applied PEP7. Unrelated: fixed a misnamed variable in test_grammar because it ran into a peephole bug (const -> jump_if_false erase didn't work when EXTENDED_ARGs were involved). dis has argval/arg set to None instead of the unused argument value

Things are seeming more brittle with f_lasti as -1. But maybe it's all in my head
msg263094 - (view) Author: Demur Rumed (Demur Rumed) * Date: 2016-04-09 13:18
[12:36] <serprex> Could I get a code review for wordcode's 4th patchset? http://bugs.python.org/review/26647/#ps16875
...
[13:13] <SilentGhost> serprex: you'd be better off bumping the issue
msg263263 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2016-04-12 16:29
module_finder.patch: cleanup (optimize?) modulefinder.ModuleFinder.scan_opcodes_25(): Use an index rather than creating a lot of substrings.

It's unrelated to Wordcode, it's just that I noticed the inefficient code while reviewing the whole patch.
msg263265 - (view) Author: Roundup Robot (python-dev) Date: 2016-04-12 16:44
New changeset 7bf08a11d4c9 by Victor Stinner in branch 'default':
Issue #26647: Cleanup opcode
https://hg.python.org/cpython/rev/7bf08a11d4c9

New changeset 423e2a96189e by Victor Stinner in branch 'default':
Issue #26647: Cleanup modulefinder
https://hg.python.org/cpython/rev/423e2a96189e

New changeset f8398dba48fb by Victor Stinner in branch '3.5':
Issue #26647: Fix typo in test_grammar
https://hg.python.org/cpython/rev/f8398dba48fb
msg263278 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2016-04-12 22:01
Demur Rumed: can you please rebase your patch? And can you please generate a patch without the git format?
msg263334 - (view) Author: Demur Rumed (Demur Rumed) * Date: 2016-04-13 13:03
Made changes from code review and did a little extra on fixing up type consistency. Not sure if this is exactly the patch format you wanted; I tried `git difftool --extcmd='diff -u' python/master` but it's listing the original files as being from /tmp

I've updated modulefinder with haypo's index patch except in the context of wordcode
msg263335 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2016-04-13 13:41
Updated wpy5.patch to use a more standard diff format (patch generated with Mercurial, hg diff > patch).
msg263336 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2016-04-13 14:29
> Updated wpy5.patch to use a more standard diff format (patch generated with Mercurial, hg diff > patch).

Crap, I forgot Python/wordcode_helpers.h.

I updated a fixed wpy6.patch.
msg263337 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2016-04-13 14:50
Demur Rumed: Is the peephole optimizer able to emit *two* EXTENDED_ARGs for jumps larger than 16 bits? Currently, it only has to retry once, adding an EXTENDED_ARG if a jump is larger than 16 bits (to use a 32-bit jump offset).
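The question above amounts to asking whether the emitter can produce a chain of EXTENDED_ARG prefixes. A hedged sketch of such an encoder (not the patch's compile.c/peephole.c code; the EXTENDED_ARG value is a placeholder): an argument wider than 8 bits becomes up to three EXTENDED_ARG words, most significant byte first.

```python
EXTENDED_ARG = 144  # placeholder opcode value for this sketch

def emit(opcode, arg):
    """Encode one wordcode instruction, prefixing EXTENDED_ARGs as needed."""
    assert 0 <= arg <= 0xFFFFFFFF
    out = bytearray()
    for shift in (24, 16, 8):
        byte = (arg >> shift) & 0xFF
        # Once any higher byte is nonzero, all lower prefixes must be
        # emitted too, even if zero, so the decoder's shifts line up.
        if byte or out:
            out += bytes([EXTENDED_ARG, byte])
    out += bytes([opcode, arg & 0xFF])
    return bytes(out)
```

A jump in the 16-to-24-bit range, for example, yields exactly two EXTENDED_ARG prefixes before the jump opcode.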
msg263339 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2016-04-13 16:23
I ran the Python benchmark suite on wpy6.patch.

* My platform: Linux-4.4.4-301.fc23.x86_64-x86_64-with-fedora-23-Twenty_Three
* My PC: CPU Intel i7-2600 (~2.9 GHz) with 12 GB of RAM
* Benchmark ran on isolated CPU: http://haypo-notes.readthedocs.org/microbenchmark.html
* Command line: ~/bin/taskset_isolated.py time python3 -u perf.py --rigorous "$ORIG_PYTHON" "$PATCHED_PYTHON" -b all 2>&1

It looks like more benchmarks get faster than slower. The best speedup is 11%, whereas the worst slowdown is only 4%. The overall results look good to me.

Slower:

* fannkuch: 1.04x slower
* pickle_dict: 1.04x slower
* telco: 1.03x slower
* django_v3: 1.02x slower
* simple_logging: 1.02x slower
* meteor_contest: 1.02x slower

Faster:

* unpack_sequence: 1.11x faster
* etree_parse: 1.06x faster
* call_method_slots: 1.06x faster
* etree_iterparse: 1.05x faster
* call_simple: 1.04x faster
* nbody: 1.04x faster
* float: 1.04x faster
* call_method_unknown: 1.03x faster
* call_method: 1.03x faster
* chaos: 1.03x faster
* mako_v2: 1.03x faster
* richards: 1.02x faster
* silent_logging1: 1.02x faster


Full Output:

Original python: ../wordcode/python
3.6.0a0 (default:ad5b079565ad, Apr 13 2016, 16:30:36) 
[GCC 5.3.1 20151207 (Red Hat 5.3.1-2)]

Patched python: ../wordcode/python
3.6.0a0 (default:c050d203e82b, Apr 13 2016, 16:30:24) 
[GCC 5.3.1 20151207 (Red Hat 5.3.1-2)]

INFO:root:Automatically selected timer: perf_counter
INFO:root:Skipping benchmark slowpickle; not compatible with Python 3.6
INFO:root:Skipping benchmark pybench; not compatible with Python 3.6
INFO:root:Skipping benchmark hg_startup; not compatible with Python 3.6
INFO:root:Skipping benchmark rietveld; not compatible with Python 3.6
INFO:root:Skipping benchmark slowspitfire; not compatible with Python 3.6
INFO:root:Skipping benchmark bzr_startup; not compatible with Python 3.6
INFO:root:Skipping benchmark html5lib_warmup; not compatible with Python 3.6
INFO:root:Skipping benchmark slowunpickle; not compatible with Python 3.6
INFO:root:Skipping benchmark html5lib; not compatible with Python 3.6
INFO:root:Skipping benchmark spambayes; not compatible with Python 3.6
[ 1/43] 2to3...
INFO:root:Running `../wordcode/python lib3/2to3/2to3 -f all lib/2to3`
INFO:root:Running `../wordcode/python lib3/2to3/2to3 -f all lib/2to3` 5 times
INFO:root:Running `../default/python lib3/2to3/2to3 -f all lib/2to3`
INFO:root:Running `../default/python lib3/2to3/2to3 -f all lib/2to3` 5 times
[ 2/43] call_method...
INFO:root:Running `../wordcode/python performance/bm_call_method.py -n 300 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_call_method.py -n 300 --timer perf_counter`
[ 3/43] call_method_slots...
INFO:root:Running `../wordcode/python performance/bm_call_method_slots.py -n 300 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_call_method_slots.py -n 300 --timer perf_counter`
[ 4/43] call_method_unknown...
INFO:root:Running `../wordcode/python performance/bm_call_method_unknown.py -n 300 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_call_method_unknown.py -n 300 --timer perf_counter`
[ 5/43] call_simple...
INFO:root:Running `../wordcode/python performance/bm_call_simple.py -n 300 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_call_simple.py -n 300 --timer perf_counter`
[ 6/43] chameleon_v2...
INFO:root:Running `../wordcode/python performance/bm_chameleon_v2.py -n 100 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_chameleon_v2.py -n 100 --timer perf_counter`
[ 7/43] chaos...
INFO:root:Running `../wordcode/python performance/bm_chaos.py -n 100 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_chaos.py -n 100 --timer perf_counter`
[ 8/43] django_v3...
INFO:root:Running `../wordcode/python performance/bm_django_v3.py -n 100 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_django_v3.py -n 100 --timer perf_counter`
[ 9/43] etree_generate...
INFO:root:Running `../wordcode/python performance/bm_elementtree.py -n 100 --timer perf_counter generate`
INFO:root:Running `../default/python performance/bm_elementtree.py -n 100 --timer perf_counter generate`
[10/43] etree_iterparse...
INFO:root:Running `../wordcode/python performance/bm_elementtree.py -n 100 --timer perf_counter iterparse`
INFO:root:Running `../default/python performance/bm_elementtree.py -n 100 --timer perf_counter iterparse`
[11/43] etree_parse...
INFO:root:Running `../wordcode/python performance/bm_elementtree.py -n 100 --timer perf_counter parse`
INFO:root:Running `../default/python performance/bm_elementtree.py -n 100 --timer perf_counter parse`
[12/43] etree_process...
INFO:root:Running `../wordcode/python performance/bm_elementtree.py -n 100 --timer perf_counter process`
INFO:root:Running `../default/python performance/bm_elementtree.py -n 100 --timer perf_counter process`
[13/43] fannkuch...
INFO:root:Running `../wordcode/python performance/bm_fannkuch.py -n 100 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_fannkuch.py -n 100 --timer perf_counter`
[14/43] fastpickle...
INFO:root:Running `../wordcode/python performance/bm_pickle.py -n 100 --timer perf_counter --use_cpickle pickle`
INFO:root:Running `../default/python performance/bm_pickle.py -n 100 --timer perf_counter --use_cpickle pickle`
[15/43] fastunpickle...
INFO:root:Running `../wordcode/python performance/bm_pickle.py -n 100 --timer perf_counter --use_cpickle unpickle`
INFO:root:Running `../default/python performance/bm_pickle.py -n 100 --timer perf_counter --use_cpickle unpickle`
[16/43] float...
INFO:root:Running `../wordcode/python performance/bm_float.py -n 100 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_float.py -n 100 --timer perf_counter`
[17/43] formatted_logging...
INFO:root:Running `../wordcode/python performance/bm_logging.py -n 100 --timer perf_counter formatted_output`
INFO:root:Running `../default/python performance/bm_logging.py -n 100 --timer perf_counter formatted_output`
[18/43] go...
INFO:root:Running `../wordcode/python performance/bm_go.py -n 100 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_go.py -n 100 --timer perf_counter`
[19/43] hexiom2...
INFO:root:Running `../wordcode/python performance/bm_hexiom2.py -n 4 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_hexiom2.py -n 4 --timer perf_counter`
[20/43] json_dump_v2...
INFO:root:Running `../wordcode/python performance/bm_json_v2.py -n 100 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_json_v2.py -n 100 --timer perf_counter`
[21/43] json_load...
INFO:root:Running `../wordcode/python performance/bm_json.py -n 100 --timer perf_counter json_load`
INFO:root:Running `../default/python performance/bm_json.py -n 100 --timer perf_counter json_load`
[22/43] mako_v2...
INFO:root:Running `../wordcode/python performance/bm_mako_v2.py -n 1000 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_mako_v2.py -n 1000 --timer perf_counter`
[23/43] meteor_contest...
INFO:root:Running `../wordcode/python performance/bm_meteor_contest.py -n 100 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_meteor_contest.py -n 100 --timer perf_counter`
[24/43] nbody...
INFO:root:Running `../wordcode/python performance/bm_nbody.py -n 100 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_nbody.py -n 100 --timer perf_counter`
[25/43] normal_startup...
INFO:root:Running `../wordcode/python -c ` 2000 times
INFO:root:Running `../default/python -c ` 2000 times
[26/43] nqueens...
INFO:root:Running `../wordcode/python performance/bm_nqueens.py -n 100 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_nqueens.py -n 100 --timer perf_counter`
[27/43] pathlib...
INFO:root:Running `../wordcode/python performance/bm_pathlib.py -n 1000 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_pathlib.py -n 1000 --timer perf_counter`
[28/43] pickle_dict...
INFO:root:Running `../wordcode/python performance/bm_pickle.py -n 100 --timer perf_counter --use_cpickle pickle_dict`
INFO:root:Running `../default/python performance/bm_pickle.py -n 100 --timer perf_counter --use_cpickle pickle_dict`
[29/43] pickle_list...
INFO:root:Running `../wordcode/python performance/bm_pickle.py -n 100 --timer perf_counter --use_cpickle pickle_list`
INFO:root:Running `../default/python performance/bm_pickle.py -n 100 --timer perf_counter --use_cpickle pickle_list`
[30/43] pidigits...
INFO:root:Running `../wordcode/python performance/bm_pidigits.py -n 100 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_pidigits.py -n 100 --timer perf_counter`
[31/43] raytrace...
INFO:root:Running `../wordcode/python performance/bm_raytrace.py -n 100 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_raytrace.py -n 100 --timer perf_counter`
[32/43] regex_compile...
INFO:root:Running `../wordcode/python performance/bm_regex_compile.py -n 100 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_regex_compile.py -n 100 --timer perf_counter`
[33/43] regex_effbot...
INFO:root:Running `../wordcode/python performance/bm_regex_effbot.py -n 100 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_regex_effbot.py -n 100 --timer perf_counter`
[34/43] regex_v8...
INFO:root:Running `../wordcode/python performance/bm_regex_v8.py -n 100 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_regex_v8.py -n 100 --timer perf_counter`
[35/43] richards...
INFO:root:Running `../wordcode/python performance/bm_richards.py -n 100 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_richards.py -n 100 --timer perf_counter`
[36/43] silent_logging...
INFO:root:Running `../wordcode/python performance/bm_logging.py -n 100 --timer perf_counter no_output`
INFO:root:Running `../default/python performance/bm_logging.py -n 100 --timer perf_counter no_output`
[37/43] simple_logging...
INFO:root:Running `../wordcode/python performance/bm_logging.py -n 100 --timer perf_counter simple_output`
INFO:root:Running `../default/python performance/bm_logging.py -n 100 --timer perf_counter simple_output`
[38/43] spectral_norm...
INFO:root:Running `../wordcode/python performance/bm_spectral_norm.py -n 100 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_spectral_norm.py -n 100 --timer perf_counter`
[39/43] startup_nosite...
INFO:root:Running `../wordcode/python -S -c ` 4000 times
INFO:root:Running `../default/python -S -c ` 4000 times
[40/43] telco...
INFO:root:Running `../wordcode/python performance/bm_telco.py -n 100 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_telco.py -n 100 --timer perf_counter`
[41/43] tornado_http...
INFO:root:Running `../wordcode/python performance/bm_tornado_http.py -n 200 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_tornado_http.py -n 200 --timer perf_counter`
[42/43] unpack_sequence...
INFO:root:Running `../wordcode/python performance/bm_unpack_sequence.py -n 100000 --timer perf_counter`
INFO:root:Running `../default/python performance/bm_unpack_sequence.py -n 100000 --timer perf_counter`
[43/43] unpickle_list...
INFO:root:Running `../wordcode/python performance/bm_pickle.py -n 100 --timer perf_counter --use_cpickle unpickle_list`
INFO:root:Running `../default/python performance/bm_pickle.py -n 100 --timer perf_counter --use_cpickle unpickle_list`

Report on Linux smithers 4.4.4-301.fc23.x86_64 #1 SMP Fri Mar 4 17:42:42 UTC 2016 x86_64 x86_64
Total CPU cores: 8

### call_method ###
Min: 0.313558 -> 0.304460: 1.03x faster
Avg: 0.313797 -> 0.304661: 1.03x faster
Significant (t=773.69)
Stddev: 0.00015 -> 0.00014: 1.1084x smaller

### call_method_slots ###
Min: 0.317374 -> 0.300388: 1.06x faster
Avg: 0.317527 -> 0.300701: 1.06x faster
Significant (t=1971.52)
Stddev: 0.00011 -> 0.00010: 1.0595x smaller

### call_method_unknown ###
Min: 0.309548 -> 0.301112: 1.03x faster
Avg: 0.309619 -> 0.301828: 1.03x faster
Significant (t=636.50)
Stddev: 0.00008 -> 0.00020: 2.3452x larger

### call_simple ###
Min: 0.245480 -> 0.235982: 1.04x faster
Avg: 0.246004 -> 0.236310: 1.04x faster
Significant (t=492.66)
Stddev: 0.00023 -> 0.00025: 1.1069x larger

### chaos ###
Min: 0.271012 -> 0.264204: 1.03x faster
Avg: 0.271723 -> 0.264787: 1.03x faster
Significant (t=132.15)
Stddev: 0.00044 -> 0.00028: 1.5564x smaller

### django_v3 ###
Min: 0.544071 -> 0.555346: 1.02x slower
Avg: 0.544697 -> 0.556142: 1.02x slower
Significant (t=-210.46)
Stddev: 0.00036 -> 0.00041: 1.1510x larger

### etree_iterparse ###
Min: 0.215644 -> 0.205198: 1.05x faster
Avg: 0.219440 -> 0.208423: 1.05x faster
Significant (t=53.95)
Stddev: 0.00145 -> 0.00144: 1.0016x smaller

### etree_parse ###
Min: 0.287245 -> 0.271355: 1.06x faster
Avg: 0.288902 -> 0.273051: 1.06x faster
Significant (t=107.60)
Stddev: 0.00106 -> 0.00102: 1.0348x smaller

### fannkuch ###
Min: 0.957137 -> 0.993462: 1.04x slower
Avg: 0.965306 -> 0.995223: 1.03x slower
Significant (t=-42.85)
Stddev: 0.00665 -> 0.00214: 3.1094x smaller

### float ###
Min: 0.258390 -> 0.248217: 1.04x faster
Avg: 0.265902 -> 0.255380: 1.04x faster
Significant (t=17.29)
Stddev: 0.00441 -> 0.00419: 1.0510x smaller

### mako_v2 ###
Min: 0.040757 -> 0.039408: 1.03x faster
Avg: 0.041534 -> 0.040058: 1.04x faster
Significant (t=106.39)
Stddev: 0.00033 -> 0.00029: 1.1548x smaller

### meteor_contest ###
Min: 0.187423 -> 0.192079: 1.02x slower
Avg: 0.188739 -> 0.193440: 1.02x slower
Significant (t=-61.30)
Stddev: 0.00053 -> 0.00056: 1.0503x larger

### nbody ###
Min: 0.227627 -> 0.219617: 1.04x faster
Avg: 0.229736 -> 0.221310: 1.04x faster
Significant (t=23.23)
Stddev: 0.00276 -> 0.00235: 1.1745x smaller

### pickle_dict ###
Min: 0.491946 -> 0.513859: 1.04x slower
Avg: 0.492796 -> 0.515723: 1.05x slower
Significant (t=-158.63)
Stddev: 0.00063 -> 0.00130: 2.0672x larger

### richards ###
Min: 0.159527 -> 0.155970: 1.02x faster
Avg: 0.160603 -> 0.157190: 1.02x faster
Significant (t=36.37)
Stddev: 0.00067 -> 0.00066: 1.0168x smaller

### silent_logging ###
Min: 0.068349 -> 0.067301: 1.02x faster
Avg: 0.069759 -> 0.067481: 1.03x faster
Significant (t=56.73)
Stddev: 0.00038 -> 0.00013: 2.8514x smaller

### simple_logging ###
Min: 0.276149 -> 0.282515: 1.02x slower
Avg: 0.277709 -> 0.283773: 1.02x slower
Significant (t=-53.60)
Stddev: 0.00080 -> 0.00080: 1.0045x smaller

### telco ###
Min: 0.011922 -> 0.012221: 1.03x slower
Avg: 0.011985 -> 0.012283: 1.02x slower
Significant (t=-59.48)
Stddev: 0.00003 -> 0.00004: 1.0912x larger

### unpack_sequence ###
Min: 0.000047 -> 0.000042: 1.11x faster
Avg: 0.000047 -> 0.000042: 1.10x faster
Significant (t=2242.55)
Stddev: 0.00000 -> 0.00000: 1.2134x larger

The following not significant results are hidden, use -v to show them:
2to3, chameleon_v2, etree_generate, etree_process, fastpickle, fastunpickle, formatted_logging, go, hexiom2, json_dump_v2, json_load, normal_startup, nqueens, pathlib, pickle_list, pidigits, raytrace, regex_compile, regex_effbot, regex_v8, spectral_norm, startup_nosite, tornado_http, unpickle_list.
msg263340 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2016-04-13 16:36
I opened a thread on the python-dev mailing list to discuss wordcode:
https://mail.python.org/pipermail/python-dev/2016-April/144044.html
msg264256 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2016-04-26 11:45
@Serhiy and Yury: I'm waiting for your review on this change. I would like to push the wordcode change; I propose to push it on Sunday. Tell me if you need more time to review it.
msg264470 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2016-04-29 06:54
Demur: Serhiy and I reviewed your change wpy6.patch. Can you modify your change? If not, I can do it ;-) It's up to you.
msg264495 - (view) Author: Demur Rumed (Demur Rumed) * Date: 2016-04-29 12:48
I should be able to submit a wpy7.patch this evening, though I was never able to generate a patch in the format you prefer. Should I fall back to piping git diff? At this point it may be better if you take in the last suggestions yourself, as I'd probably end up removing TARGET_WITH_IMPL & then there'd be debate about that being unnecessary.

To be clear: I'll upload a wpy7.patch this evening, & you can decide whether to create your own patch from wpy6 or wpy7, or to accept wpy7. In the meantime, if you just update wpy6, all is well. Sorry for not being very straightforward.
msg264496 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2016-04-29 13:06
To create a patch accepted by Rietveld, you should clone the Mercurial
repository, apply your patch, and then regenerate the diff using
Mercurial:

Something like:

$ cd python_hg_clone
$ patch -p1 < ../git_patch.patch
$ hg diff > hg_diff.patch

To clone: hg clone https://hg.python.org/cpython/
msg264536 - (view) Author: Demur Rumed (Demur Rumed) * Date: 2016-04-29 23:10
Plain git diff formatted patch. I installed Mercurial, but `hg clone https://hg.python.org/cpython` failed twice with strange protocol errors (the second involved a negative length), so I gave up.
msg264558 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2016-04-30 14:36
I found bugs in peephole.c. I have published my comments on Rietveld, but reviewing peephole.c is still not finished.
msg264604 - (view) Author: Demur Rumed (Demur Rumed) * Date: 2016-05-01 17:54
I should mention a change in dis which hasn't come up for explicit discussion: I modified dis to not print EXTENDED_ARG. Instead, one can tell an EXTENDED_ARG was present because the byte indices have a hole (they otherwise count up uniformly by 2) & the argument value is greater than 255.
msg264610 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2016-05-01 21:39
Oh, I dislike the dis change. Please keep the current behaviour and open an
issue to enhance the dis module.
msg264643 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2016-05-02 12:42
> I should mention a change in dis which hasn't come up for the sake of explicit discussion: I modified dis to not print EXTENDED_ARG.

This may be a problem, since jumps can now point to an address that is not output in the disassembly.

Maybe this is a good idea (I'm +0), but it deserves a separate issue.
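For context, the EXTENDED_ARG behaviour under discussion can be observed directly. The sketch below (my own illustration, not part of any patch here) builds a function whose opargs exceed 255, then checks that every instruction is two bytes wide and that the compiler emitted EXTENDED_ARG:

```python
import dis

# Build a function with 300 distinct constants and locals, so at least one
# LOAD_CONST/STORE_FAST needs an argument > 255 and the compiler must emit
# an EXTENDED_ARG instruction in front of it.
src = "def f():\n" + "".join("    v{0} = {0}\n".format(i) for i in range(300))
ns = {}
exec(src, ns)
code = ns["f"].__code__.co_code

# Since the wordcode change, every instruction occupies exactly two bytes,
# so offsets count up uniformly by 2 and co_code has even length.
assert len(code) % 2 == 0

# EXTENDED_ARG still appears as its own two-byte instruction in co_code.
print(dis.opmap["EXTENDED_ARG"] in code)  # True on 3.6+
```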
msg265285 - (view) Author: Demur Rumed (Demur Rumed) * Date: 2016-05-10 23:07
I've been waiting on storchaka to finish reviewing peephole.c, based on his statement that 'Reviewing peephole.c still is not completed', before uploading a patch that addresses the issues raised since.
msg265346 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2016-05-11 19:58
Oh, sorry for misleading. I was going to continue reviewing peephole.c after the found issues were addressed. In any case, the patch needs to be synchronized with the current sources (I expect the changes for modulefinder are no longer needed and the changes for dis will be simpler). Please also wrap long lines in peephole.c.
msg265799 - (view) Author: Demur Rumed (Demur Rumed) * Date: 2016-05-18 02:18
I've replaced wpy8, which I uploaded a few days ago, as the previous version had a blank line accidentally removed from dis.rst.
msg265917 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2016-05-20 09:10
Added new comments on Rietveld.
msg265918 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2016-05-20 09:16
I regenerated wpy8.patch with Mercurial. I had to fix a minor conflict in Lib/importlib/_bootstrap_external.py. (I didn't read the patch.)
msg266045 - (view) Author: Demur Rumed (Demur Rumed) * Date: 2016-05-22 03:39
Based on serhiy's LGTM I'm uploading hopefully final touches based on his feedback

New in this are changes to Makefile.pre & the PCbuild files. I can't really test PCbuild, but I did verify that running make, modifying wordcode_helpers.h, and running make again rebuilds peephole & compile. There's now also a modification of PC/launcher.c to include 3370 as 3.6.

Doc/library/dis.rst: I've reverted the changes to the HAVE_ARGUMENT description, since most of my 'pedantic' changes differentiating between using arguments & ignoring arguments have been reverted.
msg266053 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2016-05-22 05:52
wpy9.patch looks mostly good to me, but there are compiler warnings.
msg266060 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2016-05-22 08:23
test_importlib and test_trace fail with wpy9.patch.
msg266078 - (view) Author: Demur Rumed (Demur Rumed) * Date: 2016-05-22 13:42
The compiler warnings can be fixed by using h instead of j in the RETURN_VALUE case; casting CONST_LEN() to unsigned is safe.

As for the failing tests: I've finally figured out how to use Tools/freeze to fix __phello__ et al. I've brought up the trace test failure a few times; essentially

return (1,
2,
3)

compiles with the LOAD_CONST of (1,2,3) on a separate line from the RETURN_VALUE, whereas previously it did not. This issue does not occur for

return """1,
2,
3"""

leading me to believe peephole has missed some detail in lnotab handling.
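The line mapping in question can be inspected from Python with dis.findlinestarts(), which decodes the code object's line-number table. A small sketch (my own, for illustration; the exact lines reported depend on the interpreter version and on whether the lnotab bug above is present):

```python
import dis

# A multi-line return of a constant-foldable tuple, as in the failing test.
src = """\
def f():
    return (1,
            2,
            3)
"""
ns = {}
exec(src, ns)

# findlinestarts() yields (bytecode offset, line number) pairs; with correct
# lnotab handling the folded LOAD_CONST and the RETURN map to the same line.
for offset, lineno in dis.findlinestarts(ns["f"].__code__):
    print(offset, lineno)
```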
msg266090 - (view) Author: Demur Rumed (Demur Rumed) * Date: 2016-05-22 16:43
I've tracked down the lnotab issue after modifying master/wpy to not vacuum NOPs. The old code puts the LOAD_CONST after the NOPs, on the same line as the return op, whereas the new code was putting the LOAD_CONST before the NOPs, on its own line.

I've attached a fix, along with addressing the compiler warnings & __phello__, & adding some asserts that make sure we aren't passing indices that point inside instructions to peephole's helper functions.
msg266092 - (view) Author: Demur Rumed (Demur Rumed) * Date: 2016-05-22 17:58
Sorry for the nuisance of uploading another patch so soon. wpyB modifies test_ctypes now that __phello__ is smaller, fixes a typo in a comment I made, & removes a blank line I had added when introducing the if (0) logic.
msg266096 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2016-05-22 18:42
Warnings are still emitted in a debug build.
msg266101 - (view) Author: Demur Rumed (Demur Rumed) * Date: 2016-05-22 19:07
I have verified that wpyC does not produce signed/unsigned warnings with make DEBUG=1
msg266110 - (view) Author: Demur Rumed (Demur Rumed) * Date: 2016-05-22 21:38
Removes the 0 <= unsigned assertion & fixes the j < 0 check to avoid an overflow bug.
msg266112 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2016-05-22 22:04
LGTM. If no one has more comments, I'm going to commit the patch.
msg266114 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2016-05-22 22:28
wpyD.patch LGTM, go ahead! We can still polish it later and discuss how to implement the 16-bit fetch ;-)

It would be nice to add a short comment in Python/wordcode_helpers.h explaining that it contains code shared by the compiler and the peephole optimizer. It can be done later.
msg266233 - (view) Author: Roundup Robot (python-dev) Date: 2016-05-24 06:15
New changeset 3a57eafd8401 by Serhiy Storchaka in branch 'default':
Issue #26647: Python interpreter now uses 16-bit wordcode instead of bytecode.
https://hg.python.org/cpython/rev/3a57eafd8401
msg266234 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2016-05-24 06:25
Oh, I forgot to add a note in What's New. And the documentation of the dis module should be updated (EXTENDED_ARG etc). Could anyone do this?
msg266240 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2016-05-24 09:39
> New changeset 3a57eafd8401 by Serhiy Storchaka in branch 'default':
> Issue #26647: Python interpreter now uses 16-bit wordcode instead of bytecode.
> https://hg.python.org/cpython/rev/3a57eafd8401

Yeah, congrats Demur!
msg266377 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2016-05-25 17:12
I join in the congratulations, Demur! Thank you for your contributions.

I left this issue open for updating the documentation and other polishing.
msg266388 - (view) Author: Demur Rumed (Demur Rumed) * Date: 2016-05-25 18:39
A documentation touch up for EXTENDED_ARG is included in #27095
msg266407 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2016-05-26 04:08
Chatting to Russell Keith-Magee, I realised the bytecode section in the devguide's description of the code generation pipeline may need some tweaks to account for the differences between 3.6 and earlier versions: https://docs.python.org/devguide/compiler.html#ast-to-cfg-to-bytecode
msg266408 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2016-05-26 04:12
I switched the target component to Documentation to reflect that, as far as we know, this is feature complete from a functional perspective, but there hasn't yet been a review of the docs for bytecode references, nor a decision on whether or not we want to systematically switch to using "wordcode" instead.
msg266417 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2016-05-26 06:45
Hi, I ran the CPython benchmark suite (my fork modified to be more stable) on ed4eec682199 (patched) vs 7a7f54fe0698 (base). The patched version contains wordcode (issue #26647) + 16-bit fetch for opcode and oparg (issue #27097).

The speedup is quite nice. Attached default-May26-03-05-10.log contains the full output.

Faster (27):
- unpack_sequence: 1.11x faster
- simple_logging: 1.11x faster
- silent_logging: 1.10x faster
- formatted_logging: 1.09x faster
- raytrace: 1.08x faster
- chaos: 1.08x faster
- etree_process: 1.08x faster
- call_simple: 1.07x faster
- mako_v2: 1.07x faster
- tornado_http: 1.07x faster
- nqueens: 1.07x faster
- regex_compile: 1.06x faster
- pathlib: 1.06x faster
- 2to3: 1.06x faster
- richards: 1.05x faster
- spectral_norm: 1.05x faster
- etree_generate: 1.05x faster
- chameleon_v2: 1.04x faster
- pickle_list: 1.03x faster
- pickle_dict: 1.03x faster
- regex_v8: 1.03x faster
- go: 1.03x faster
- call_method: 1.03x faster
- django_v3: 1.03x faster
- telco: 1.02x faster
- json_load: 1.02x faster
- call_method_unknown: 1.02x faster

Slower (1):
- fannkuch: 1.07x slower

Not significant (14):
- unpickle_list
- startup_nosite
- regex_effbot
- pidigits
- normal_startup
- nbody
- meteor_contest
- json_dump_v2
- float
- fastunpickle
- fastpickle
- etree_parse
- etree_iterparse
- call_method_slots
msg266420 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2016-05-26 09:11
I think we should make yet few related changes:

* Change meaning of jump offsets. They should count not bytes, but code units (16-bit words). This will extend the range addressed by short commands (from 256 bytes to 256 words) and simplify ceval.c.
* Change f_lasti, tb_lasti etc to count code units instead of bytes.
* Change disassembler to show addresses in code units, not bytes.

These changes break compatibility (already broken by switching to 16-bit bytecode). The first one breaks compatibility with compiled bytecode and needs incrementing the magic number. That is why I think we should do this in this issue.

Which is better: one large patch, or separate simpler patches for each stage?
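For reference, here is how the 16-bit wordcode format is decoded (a sketch of my own, mirroring what dis does internally with its private _unpack_opargs helper; the function name unpack_wordcode is mine). Each instruction is one opcode byte plus one 8-bit argument byte, and EXTENDED_ARG prepends 8 more bits to the following instruction's argument. The proposal above would change only the *interpretation* of jump arguments (code units rather than bytes), not this fetch loop:

```python
import dis

def unpack_wordcode(code):
    """Yield (offset, opname, arg) from 16-bit wordcode bytes."""
    extended = 0
    for offset in range(0, len(code), 2):
        op, raw = code[offset], code[offset + 1]
        arg = raw | extended
        if op == dis.opmap["EXTENDED_ARG"]:
            # Accumulate into the next instruction's argument.
            extended = arg << 8
            continue
        yield offset, dis.opname[op], arg
        extended = 0

def f(x):
    return x + 1

for offset, name, arg in unpack_wordcode(f.__code__.co_code):
    print(offset, name, arg)
```

Note that on newer interpreters the raw co_code also contains inline CACHE entries, so the listing is longer than what dis.dis() shows by default.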
msg266425 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2016-05-26 11:10
Here is a large patch (not including the generated Python/importlib.h and Python/importlib_external.h).
msg266426 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2016-05-26 11:39
Serhiy: please open a new issue for your change. While it's related, it's
different enough to deserve its own issue.

By the way, please don't include the generated importlib .h files in your patches.
msg266432 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2016-05-26 13:14
Issue27129.
msg281656 - (view) Author: Roundup Robot (python-dev) Date: 2016-11-24 21:35
New changeset 303cedfb9e7a by Victor Stinner in branch '3.6':
Fix _PyGen_yf()
https://hg.python.org/cpython/rev/303cedfb9e7a
msg281663 - (view) Author: STINNER Victor (haypo) * (Python committer) Date: 2016-11-24 22:20
This issue is done: see issue #27129 for the next step.
msg281789 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2016-11-26 18:26
I had left this issue open for documenting the wordcode; I have now opened the separate issue28810 for that.
History
Date                 User              Action  Args
2017-03-31 16:36:31  dstufft           set     pull_requests: + pull_request1041
2016-11-26 18:26:24  serhiy.storchaka  set     messages: + msg281789
2016-11-24 22:20:31  haypo             set     status: open -> closed; resolution: fixed; dependencies: - Wordcode, part 2; messages: + msg281663
2016-11-24 21:35:18  python-dev        set     messages: + msg281656
2016-09-13 22:57:22  Ryan May          set     nosy: + Ryan May
2016-06-12 15:13:20  serhiy.storchaka  unlink  issue27095 dependencies
2016-05-26 13:14:27  serhiy.storchaka  set     dependencies: + Wordcode, part 2; messages: + msg266432; components: - Interpreter Core; stage: patch review -> needs patch
2016-05-26 11:39:57  haypo             set     messages: + msg266426
2016-05-26 11:25:13  serhiy.storchaka  set     files: + wordcode.patch
2016-05-26 11:24:44  serhiy.storchaka  set     files: - wordcode.patch
2016-05-26 11:10:59  serhiy.storchaka  set     files: + wordcode.patch; messages: + msg266425; stage: needs patch -> patch review
2016-05-26 09:11:55  serhiy.storchaka  set     messages: + msg266420; components: + Interpreter Core
2016-05-26 06:45:03  haypo             set     files: + default-May26-03-05-10.log; messages: + msg266417
2016-05-26 04:12:10  ncoghlan          set     messages: + msg266408; components: + Documentation, - Interpreter Core
2016-05-26 04:08:41  ncoghlan          set     messages: + msg266407
2016-05-25 18:39:57  Demur Rumed       set     messages: + msg266388
2016-05-25 17:12:10  serhiy.storchaka  set     messages: + msg266377
2016-05-25 17:07:00  serhiy.storchaka  unlink  issue25823 dependencies
2016-05-24 09:39:50  haypo             set     messages: + msg266240
2016-05-24 06:25:06  serhiy.storchaka  set     messages: + msg266234; stage: commit review -> needs patch
2016-05-24 06:15:46  python-dev        set     messages: + msg266233
2016-05-23 21:42:47  serhiy.storchaka  link    issue27095 dependencies
2016-05-22 22:28:27  haypo             set     messages: + msg266114
2016-05-22 22:04:08  serhiy.storchaka  set     assignee: serhiy.storchaka; messages: + msg266112; stage: patch review -> commit review
2016-05-22 21:38:23  Demur Rumed       set     files: + wpyD.patch; messages: + msg266110
2016-05-22 19:07:37  Demur Rumed       set     files: + wpyC.patch; messages: + msg266101
2016-05-22 18:42:35  serhiy.storchaka  set     messages: + msg266096
2016-05-22 17:59:02  Demur Rumed       set     files: + wpyB.patch; messages: + msg266092
2016-05-22 16:56:50  serhiy.storchaka  set     files: + wpyA.patch
2016-05-22 16:43:29  Demur Rumed       set     files: + wpyA.patch; messages: + msg266090
2016-05-22 13:42:24  Demur Rumed       set     messages: + msg266078
2016-05-22 08:23:12  serhiy.storchaka  set     messages: + msg266060
2016-05-22 05:52:48  serhiy.storchaka  set     messages: + msg266053
2016-05-22 03:39:32  Demur Rumed       set     files: + wpy9.patch; messages: + msg266045
2016-05-20 09:16:04  haypo             set     messages: + msg265918
2016-05-20 09:15:21  haypo             set     files: + wpy8.patch
2016-05-20 09:10:51  serhiy.storchaka  set     messages: + msg265917; stage: patch review
2016-05-18 02:18:35  Demur Rumed       set     files: + wpy8.patch; messages: + msg265799
2016-05-18 02:17:17  Demur Rumed       set     files: - wpy8.patch
2016-05-13 04:04:54  Demur Rumed       set     files: + wpy8.patch
2016-05-11 19:58:27  serhiy.storchaka  set     messages: + msg265346
2016-05-10 23:07:44  Demur Rumed       set     messages: + msg265285
2016-05-02 12:42:08  serhiy.storchaka  set     messages: + msg264643
2016-05-01 21:39:28  haypo             set     messages: + msg264610
2016-05-01 17:54:13  Demur Rumed       set     messages: + msg264604
2016-04-30 16:07:32  serhiy.storchaka  set     dependencies: + modulefinder should reuse the dis module
2016-04-30 14:36:00  serhiy.storchaka  set     messages: + msg264558
2016-04-30 05:22:59  serhiy.storchaka  set     files: + wpy7.patch
2016-04-29 23:10:33  Demur Rumed       set     files: + wpy7.patch; messages: + msg264536
2016-04-29 13:06:42  haypo             set     messages: + msg264496
2016-04-29 12:48:47  Demur Rumed       set     messages: + msg264495
2016-04-29 06:54:57  haypo             set     messages: + msg264470
2016-04-26 12:10:10  serhiy.storchaka  link    issue25823 dependencies
2016-04-26 11:45:19  haypo             set     messages: + msg264256
2016-04-13 16:36:01  haypo             set     messages: + msg263340
2016-04-13 16:23:45  haypo             set     messages: + msg263339
2016-04-13 14:50:31  haypo             set     messages: + msg263337
2016-04-13 14:29:09  haypo             set     files: + wpy6.patch; messages: + msg263336
2016-04-13 14:28:22  haypo             set     files: - wpy5.patch
2016-04-13 13:41:53  haypo             set     files: + wpy5.patch; messages: + msg263335
2016-04-13 13:03:05  Demur Rumed       set     files: + wpy5.patch; messages: + msg263334
2016-04-12 22:01:37  haypo             set     messages: + msg263278
2016-04-12 16:44:17  python-dev        set     nosy: + python-dev; messages: + msg263265
2016-04-12 16:29:01  haypo             set     files: + module_finder.patch; messages: + msg263263
2016-04-09 13:18:30  Demur Rumed       set     messages: + msg263094
2016-04-02 04:07:02  Demur Rumed       set     files: + wpy4.patch; messages: + msg262787
2016-04-01 19:25:10  haypo             set     messages: + msg262758
2016-04-01 02:12:49  Demur Rumed       set     files: + wpy3.patch
2016-04-01 02:10:17  Demur Rumed       set     files: - wpy3.patch
2016-04-01 02:09:27  Demur Rumed       set     files: + wpy3.patch; messages: + msg262716
2016-03-31 02:52:26  Demur Rumed       set     files: + wpy2.patch; messages: + msg262684
2016-03-30 18:25:05  brett.cannon      set     messages: + msg262677
2016-03-30 17:49:06  josh.r            set     nosy: + josh.r
2016-03-30 16:03:05  rhettinger        set     messages: + msg262671
2016-03-30 12:28:38  Demur Rumed       set     files: + 2to3re.txt; messages: + msg262662
2016-03-30 10:46:18  rhettinger        set     messages: + msg262660
2016-03-30 10:13:26  rhettinger        set     nosy: + rhettinger; messages: + msg262658
2016-03-30 03:28:57  brett.cannon      set     messages: + msg262648
2016-03-30 02:29:30  Demur Rumed       set     files: + exarg_in_funcs.txt; messages: + msg262647
2016-03-30 02:12:59  Demur Rumed       set     files: + wcpybm.txt; messages: + msg262646
2016-03-29 21:10:34  haypo             set     title: Wordcode -> ceval: use Wordcode, 16-bit bytecode
2016-03-29 20:54:00  Demur Rumed       set     nosy: + abarnert
2016-03-29 20:32:48  haypo             set     messages: + msg262624
2016-03-29 20:20:13  haypo             set     messages: + msg262622
2016-03-29 15:40:41  Demur Rumed       set     messages: + msg262616
2016-03-29 09:53:01  haypo             set     nosy: + serhiy.storchaka; messages: + msg262597
2016-03-28 00:15:38  Demur Rumed       set     messages: + msg262542
2016-03-27 17:23:58  brett.cannon      set     nosy: + haypo
2016-03-27 07:08:25  SilentGhost       set     nosy: + brett.cannon, georg.brandl, ncoghlan, benjamin.peterson, yselivanov
2016-03-26 23:27:48  Demur Rumed       create