Message170604
On 17.09.2012 14:26, Serhiy Storchaka wrote:
>> I would personally prefer if the computations where done in
>> Py_ssize_t, not PyObject*
>
> So would I. But on platforms with 64-bit pointers and 32-bit sizes we
> could allocate more than PY_SIZE_MAX bytes in total (I still remember
> the DOS memory models with 16-bit size_t and 32-bit pointers). We
> would hit an overflow even faster if shared objects were counted
> repeatedly. What should we do on overflow? Return PY_SIZE_MAX, or
> ignore the possibility of errors?
It can never overflow: we cannot allocate more than SIZE_MAX bytes in
total; this is (mostly) guaranteed by the C standard. I don't know
whether you deliberately brought up the obscure case of 64-bit pointers
and 32-bit sizes, but if there are such systems, we don't support them.
History:
Date                | User   | Action | Args
2012-09-17 12:51:15 | loewis | set    | recipients: + loewis, jcea, pitrou, benjamin.peterson, stutzbach, aliles, hynek, serhiy.storchaka
2012-09-17 12:51:15 | loewis | link   | issue15490 messages
2012-09-17 12:51:15 | loewis | create |