
Author T.Rex
Recipients T.Rex
Date 2020-08-13.13:12:55
Content
Some more explanations.

On AIX, memory usage is controlled by the ulimit command.
"Global memory" comprises the physical memory and the paging space, and is associated with the Data Segment.

By default, both Memory and Data Segment are limited:
# ulimit -a
data seg size           (kbytes, -d) 131072
max memory size         (kbytes, -m) 32768
...

However, it is possible to remove the limit, for example:
# ulimit -d unlimited

Now, when the "data seg size" is limited, the malloc() routine checks whether enough memory/paging space is available and, if not, it immediately returns a NULL pointer.
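
For illustration, here is a minimal C sketch of mine (not from the report; it assumes a small data segment limit, such as the 128 MB "data seg size" default shown above) showing the fail-fast behaviour:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* With a limited data segment (e.g. 128 MB), a 1 GB request
           should fail immediately with NULL instead of paging. */
        void *p = malloc((size_t)1 << 30);
        if (p == NULL) {
            puts("malloc() returned NULL right away: limit enforced");
            return 1;
        }
        free(p);
        return 0;
    }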

But, when the "data seg size" is unlimited, the malloc() routine does NOT check whether enough memory or paging space is available. It simply tries to allocate, quickly consuming the paging space, which is much slower than acquiring memory since it consumes disk space, and it nearly hangs the OS. Bad.
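
As a workaround sketch of mine (not part of CPython; the 128 MB cap is an arbitrary value I picked), a process can re-impose a data segment cap on itself via setrlimit(RLIMIT_DATA, ...), so that malloc() fails fast again even when the shell runs with "ulimit -d unlimited":

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Cap the data segment at 128 MB (arbitrary value for the sketch). */
        struct rlimit rl;
        rl.rlim_cur = 128UL * 1024 * 1024;
        rl.rlim_max = 128UL * 1024 * 1024;
        if (setrlimit(RLIMIT_DATA, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }
        /* Now an oversized request fails fast instead of paging. */
        void *p = malloc((size_t)1 << 30);            /* 1 GB, over the cap */
        printf("malloc() after setrlimit -> %p\n", p); /* expect NULL */
        free(p);
        return 0;
    }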

So, this issue appears on AIX only if we have:
# ulimit -d unlimited

Anyway, the test:
    if (size > (size_t)PY_SSIZE_T_MAX)
in:
    Objects/obmalloc.c: PyMem_RawMalloc()
seems weird to me, since the maximum value of size is always lower than PY_SSIZE_T_MAX.
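
To make the point concrete, a small sketch of mine (the size is a hypothetical example): on a 64-bit system PY_SSIZE_T_MAX is 2**63 - 1, so a request of 2**62 bytes passes that test, and with "ulimit -d unlimited" on AIX the underlying malloc() may then grind through paging space instead of returning NULL promptly:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* 2**62 bytes: below PY_SSIZE_T_MAX on 64-bit, so the
           PyMem_RawMalloc() test would not reject it. */
        size_t huge = (size_t)1 << 62;
        void *p = malloc(huge);  /* with an unlimited data segment on AIX,
                                    this may consume paging space instead
                                    of failing fast */
        printf("malloc(%zu bytes) -> %p\n", huge, p);
        free(p);
        return 0;
    }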