
Author caarlos0
Recipients caarlos0
Date 2020-11-19.17:18:09
Message-id <1605806290.38.0.199386148713.issue42411@roundup.psfhosted.org>
In-reply-to
Content
A common use case is running Python inside containers, for example to train machine-learning models.

The Python process sees the host's memory and CPUs and ignores the container's limits, which often leads to OOM kills. For instance:

docker run -m 1G --cpus 1 python:rc-alpine python -c 'x = bytearray(80 * 1024 * 1024 * 1000)'


Linux will kill the process once its memory usage exceeds the 1 GB limit.
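
For what it's worth, the limit itself is visible from inside the container through the cgroup filesystem; Python just doesn't consult it. A minimal sketch that reads it, assuming the usual cgroup v2 mount point with a v1 fallback (paths can differ between container runtimes):

def cgroup_memory_limit():
    """Return the container memory limit in bytes, or None if unlimited/unknown."""
    candidates = (
        "/sys/fs/cgroup/memory.max",                    # cgroup v2
        "/sys/fs/cgroup/memory/memory.limit_in_bytes",  # cgroup v1
    )
    for path in candidates:
        try:
            with open(path) as f:
                value = f.read().strip()
        except OSError:
            continue
        if value == "max":  # cgroup v2 reports "max" when no limit is set
            return None
        return int(value)
    return None

print(cgroup_memory_limit())  # e.g. 1073741824 when started with `docker run -m 1G ...`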

Ideally, Python should have an option that makes it respect the memory it is actually limited to, something similar to Java's -XX:+UseContainerSupport.
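
Until something like that exists, a possible user-level workaround is to read the cgroup limit and cap the interpreter with resource.setrlimit. This is only a sketch: RLIMIT_AS limits virtual address space, which merely approximates the cgroup memory controller's accounting, and the cgroup v2 path below is an assumption about the runtime's layout.

import resource

with open("/sys/fs/cgroup/memory.max") as f:  # cgroup v2; see the sketch above for v1
    value = f.read().strip()

if value != "max":
    limit = int(value)
    # Cap the address space so oversized allocations raise MemoryError inside
    # Python instead of the whole container being OOM-killed.
    resource.setrlimit(resource.RLIMIT_AS, (limit, limit))

With that in place, the bytearray allocation above fails with a MemoryError rather than triggering the kernel's OOM killer.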