
Author vstinner
Recipients catalin.manciu, florin.papa, vstinner
Date 2016-02-19.11:12:44
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1455880365.01.0.0599772710283.issue26382@psf.upfronthosting.co.za>
In-reply-to
Content
"Theoretically, an object type that consistently allocates more than the small object threshold would perform a bit slower because it would first jump to the small object allocator, do the size comparison and then jump to malloc."

I expect the cost of the extra check to be *very* cheap (completely negligible) compared to the cost of a call to malloc().

To get an idea of the cost of the Python code around the system allocators, you can take a look at the Performance section of my PEP 445, which added a level of indirection to all Python allocators:
https://www.python.org/dev/peps/pep-0445/#performances

I was unable to measure any overhead on macro benchmarks (perf.py). The overhead on microbenchmarks was really hard to measure because it was so low that the results were very unstable.
History
Date User Action Args
2016-02-19 11:12:45  vstinner  set  recipients: + vstinner, florin.papa, catalin.manciu
2016-02-19 11:12:45  vstinner  set  messageid: <1455880365.01.0.0599772710283.issue26382@psf.upfronthosting.co.za>
2016-02-19 11:12:44  vstinner  link  issue26382 messages
2016-02-19 11:12:44  vstinner  create