This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

Author vstinner
Recipients amaury.forgeotdarc, asvetlov, eric.smith, mark.dickinson, meador.inge, pitrou, rhettinger, sdaoden, serhiy.storchaka, stutzbach, vstinner, xuanji
Date 2012-09-22.00:41:57
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <>
If I understood correctly, the optimization proposed by Antoine was more or less rejected because a compiler may optimize _PyLong_IS_SMALL_INT() "incorrectly" (the pointer range check invokes undefined behavior).

If a compiler "miscompiles" this macro, can't we disable this optimization on that specific compiler, instead of giving up an interesting optimization on all compilers? I bet that no compiler will apply an insane optimization to such a test.

Where in the CPython source code do we use direct references to small_ints? The common case is to write PyLong_FromLong(0). Can a compiler look through a call to PyLong_FromLong() and deduce that the result is part of the small_ints array? Or that it is not part of small_ints?

This corner case seems very unlikely to me.


I tested Antoine's patch with GCC 4.6 and CFLAGS="-O3 -flto" (-flto enables the standard link-time optimizer): the test suite passes. I don't see any magic optimization here, sorry.


"fwiw I've always found this helpful for undefined behavior: and, just as it says "x+1 > x" will be optimized to a nop, by the same logic "v >= &array[0] && v < &array[array_len]" will also be optimized to a nop."

"x+1 > x" and "v >= &array[0]" are not the same checks. In the first test, x is used in both parts. I don't understand.
Date User Action Args
2012-09-22 00:42:00 vstinner set recipients: + vstinner, rhettinger, amaury.forgeotdarc, mark.dickinson, pitrou, eric.smith, stutzbach, asvetlov, meador.inge, xuanji, sdaoden, serhiy.storchaka
2012-09-22 00:41:59 vstinner set messageid: <>
2012-09-22 00:41:59 vstinner link issue10044 messages
2012-09-22 00:41:57 vstinner create