
Author josh.r
Recipients josh.r, methane
Date 2018-04-05.14:15:19
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1522937719.12.0.682650639539.issue33231@psf.upfronthosting.co.za>
In-reply-to
Content
Patch is good, but while we're at it, is there any reason this multi-allocation design was used in the first place? It PyMem_Mallocs a buffer, builds a C-style string in it, then uses PyUnicode_FromString to convert that C-style string to a Python str.

Seems like the correct approach would be to just use PyUnicode_New to preallocate the final string buffer up front, then pull out the internal buffer with PyUnicode_1BYTE_DATA and populate that directly. That saves a pointless allocation/deallocation, means the failure case needs no cleanup at all, and barely changes the code (aside from removing the need to explicitly NUL-terminate).

The only reason I can see to avoid this would be if codec names could contain arbitrary Unicode encoded as UTF-8 (in which case strlen wouldn't tell you the final length in Unicode ordinals), but I'm pretty sure that's not the case; if it were, we're not normalizing properly anyway, since we only lower-case ASCII. If Unicode codec names do need to be handled, there are other options, though the easy savings go away.
History
Date User Action Args
2018-04-05 14:15:19josh.rsetrecipients: + josh.r, methane
2018-04-05 14:15:19josh.rsetmessageid: <1522937719.12.0.682650639539.issue33231@psf.upfronthosting.co.za>
2018-04-05 14:15:19josh.rlinkissue33231 messages
2018-04-05 14:15:19josh.rcreate