Author belopolsky
Recipients ajaksu2, amaury.forgeotdarc, belopolsky, nedbat, rhettinger
Date 2008-03-30.17:39:09
On Sat, Mar 29, 2008 at 4:58 PM, Raymond Hettinger
<> wrote:

>  This has basically almost never been a problem in the real world.

I believe Ned gave an important use case: in coverage testing,
optimized runs can show false gaps in coverage.  In addition, a
no-optimize option would provide a valuable learning tool.  Python has
an excellent, simple VM, very suitable for a case study in
introductory CS courses.  Unfortunately, the inability to disable the
peephole optimizer makes understanding the resulting bytecode more
difficult, particularly given some arbitrary choices made by the
optimizer (such as 2*3+1 => 7, but 1+2*3 => 1+6).  Furthermore, as
Raymond suggested in another thread, the peephole optimizer was
deliberately kept to a bare minimum out of concern about compilation
time.  Given that most Python code is pre-compiled, I think it is rare
for the code size/speed improvements not to be worth the increased
compilation time.  In the rare case where compilation time is an
issue, users could disable optimization.  Finally, an easy way to
disable the optimizer would help in developing the optimizer itself,
by making it easier to measure improvements and to debug.
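
The folding asymmetry mentioned above can be inspected with the `dis`
module.  A small sketch; note that on modern CPython an AST-level
constant folder (added long after this report) folds both expressions
completely, so the 2008-era `1+6` oddity may no longer reproduce:

```python
import dis

# Constant folding: the compiler evaluates constant subexpressions at
# compile time, so the folded result lands in co_consts.
code = compile("2*3+1", "<expr>", "eval")
print(code.co_consts)  # contains the folded constant 7
dis.dis(code)

# In 2008 the peepholer folded 2*3 inside "1+2*3" but stopped there
# (leaving 1+6); current CPython folds the whole expression.
dis.dis(compile("1+2*3", "<expr>", "eval"))
```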

> No need to complicate the world further by adding yet another option and
>  the accompanying implementation-specific knowledge of why you would
>  ever want to use it.

This would not really be a new option.  Most users expect varying
levels of optimization from the -O option, and Python already has
three levels: plain, -O, and -OO (Py_OptimizeFlag = 0, 1, and 2).  In
fact, Py_OptimizeFlag can be set to an arbitrary positive integer
using the undocumented -OOO.. option.  I don't see how anyone would
consider adding, say, -G with Py_OptimizeFlag = -1 to disable all
optimization as "complicating the world."
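
For reference, later CPython versions (3.2+) expose these same levels
programmatically through the `optimize` parameter of `compile()`; a
minimal sketch of the observable differences between levels:

```python
src = '''
def f():
    "a docstring"
    assert True
    return 1
'''

def func_code(module_code):
    # Find the nested code object for f among the module's constants.
    return next(c for c in module_code.co_consts if hasattr(c, "co_consts"))

# optimize=0: no optimization (plain python)
# optimize=1: asserts removed, __debug__ is False (like -O)
# optimize=2: additionally strips docstrings (like -OO)
plain = func_code(compile(src, "<s>", "exec", optimize=0))
stripped = func_code(compile(src, "<s>", "exec", optimize=2))

print("a docstring" in plain.co_consts)     # True
print("a docstring" in stripped.co_consts)  # False
```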

>  Also, when the peepholer is moved (after the AST is created, but before
>  the opcodes), then little oddities like this will go away.

I don't see how moving optimization up the chain will help with this
particular issue.  Note that the problem is not that the peepholer
emits erroneous line-number information, but that the continue
statement is optimized away: the if statement's jump to continue is
replaced with a direct jump to the start of the loop.  As I stated in
my first comment, the trace output is correct, and as long as the
compiler avoids redundant double jumps, the continue statement will
not show up in the trace regardless of where in the compilation chain
it is optimized.  The only way to get correct coverage information is
to disable the double-jump optimization.