
Author loewis
Recipients aliles, benjamin.peterson, hynek, jcea, loewis, pitrou, serhiy.storchaka, stutzbach
Date 2012-09-17.12:51:15
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <50571CBF.6070503@v.loewis.de>
In-reply-to <1347884778.13.0.815854108199.issue15490@psf.upfronthosting.co.za>
Content
On 2012-09-17 14:26, Serhiy Storchaka wrote:
>> I would personally prefer if the computations were done in
>> Py_ssize_t, not PyObject*
> 
> I would too. But on platforms with 64-bit pointers and 32-bit sizes we
> could allocate more than PY_SIZE_MAX bytes in total (hey, I remember
> the DOS memory models with 16-bit size_t and 32-bit pointers). We hit
> an overflow even faster if we allow repeated counting of shared
> objects. What should we do on overflow? Return PY_SIZE_MAX, or ignore
> the possibility of errors?

It can never overflow. We cannot allocate more memory than SIZE_MAX;
this is (mostly) guaranteed by the C standard. I don't know whether
you deliberately brought up the obscure case of 64-bit pointers and
32-bit sizes. If there are such systems, we don't support them.
History
Date User Action Args
2012-09-17 12:51:15  loewis  set  recipients: + loewis, jcea, pitrou, benjamin.peterson, stutzbach, aliles, hynek, serhiy.storchaka
2012-09-17 12:51:15  loewis  link  issue15490 messages
2012-09-17 12:51:15  loewis  create