
Author pitrou
Recipients lemburg, orivej, pitrou
Date 2008-01-27.00:24:00
Message-id <1201393442.0.0.233654345185.issue1943@psf.upfronthosting.co.za>
Content
I just tried bumping KEEPALIVE_SIZE_LIMIT to 200. It recovers a bit of
the speedup, but the PyVarObject version is still faster. Moreover,
and perhaps more importantly, it eats less memory (800KB less in
pybench, 3MB less when running the whole test suite).

(Then there are, of course, microbenchmarks. For example:
# With KEEPALIVE_SIZE_LIMIT = 200
./python -m timeit -s "s=open('LICENSE', 'r').read()" "s.split()"
1000 loops, best of 3: 556 usec per loop
# With this patch
./python -m timeit -s "s=open('LICENSE', 'r').read()" "s.split()"
1000 loops, best of 3: 391 usec per loop
)

I don't understand the argument for codecs having to resize the unicode
object. Since codecs also have to resize the bytes object when encoding,
the argument should apply to bytes objects as well, yet bytes (and str
in 2.6) is a PyVarObject.

I admit I don't know the exact reasons for PyUnicode's design. I just
know that the patch doesn't break the API, and runs the test suite fine.
Are there any specific things to look for?