
Author vstinner
Recipients petr.viktorin, vstinner
Date 2021-04-01.17:04:01
I wrote a microbenchmark on Py_INCREF()+Py_DECREF():

$ python3 -m pyperf compare_to ref.json limited.json 
Mean +- std dev: [ref] 3.45 ns +- 0.17 ns -> [limited] 6.03 ns +- 0.21 ns: 1.75x slower

If a function consists of nothing but Py_INCREF() and Py_DECREF() calls, it can be up to 1.8x slower in the worst case. In practice, though, functions are not made only of Py_INCREF() and Py_DECREF(): they do other work as well, so the overall slowdown should be much smaller.
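A rough back-of-the-envelope model (the 50 ns of "other work" is an assumed figure, not a measurement) shows how quickly the overall slowdown shrinks once a function does anything besides the refcount pair:

```python
# Measured per-pair costs from the microbenchmark above, in nanoseconds.
REF_PAIR_NS = 3.45      # Py_INCREF()+Py_DECREF() as macros
LIMITED_PAIR_NS = 6.03  # the same pair as opaque function calls

def overall_slowdown(other_work_ns: float) -> float:
    """Hypothetical model: slowdown of a function doing `other_work_ns`
    of unrelated work plus one INCREF/DECREF pair."""
    return (other_work_ns + LIMITED_PAIR_NS) / (other_work_ns + REF_PAIR_NS)

# Worst case: the function does nothing but the refcount pair.
print(round(overall_slowdown(0.0), 2))   # → 1.75
# With an assumed 50 ns of real work, the slowdown nearly vanishes.
print(round(overall_slowdown(50.0), 2))  # → 1.05
```

This is why a macro benchmark on a real extension would be more informative than the microbenchmark alone.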

I'm not sure how to run a "macro benchmark" on my PR 25131, since I don't know of a C extension built on the limited API that does anything substantial. There is xxlimited, but it does almost nothing.

What would be a fair benchmark for this change?


To run my microbenchmark:

git apply bench.patch
./configure --with-lto --enable-optimizations
make
./python -m venv env
./env/bin/python -m pip install pyperf
./env/bin/python ../ -o ../limited.json -v
./env/bin/python ../ -o ../ref.json -v
Linked issue: bpo-43688