
Author caarlos0
Recipients asvetlov, caarlos0
Date 2020-11-20.18:43:42
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1605897822.91.0.676820014346.issue42411@roundup.psfhosted.org>
In-reply-to
Content
The problem is that, instead of raising a MemoryError, Python keeps trying to allocate beyond what the cgroup allows, so Linux's OOM killer terminates the process with SIGKILL.

A workaround is to set RLIMIT_AS to the value in /sys/fs/cgroup/memory/memory.limit_in_bytes, which is more or less what Java does when that flag is enabled (there are caveats: cgroups v2 exposes the limit at a different path, I think).
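A minimal sketch of that workaround. The cgroup v1 path is the one from this report, and the helper names are mine, not any existing API:

```python
import resource


def cgroup_memory_limit(path="/sys/fs/cgroup/memory/memory.limit_in_bytes"):
    """Return the cgroup memory limit in bytes, or None if unreadable."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return None


def limit_address_space(limit_bytes):
    """Cap this process's virtual address space at limit_bytes."""
    _, hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, hard))


limit = cgroup_memory_limit()
if limit is not None:
    limit_address_space(limit)
```

With this in place, an allocation past the cgroup limit fails inside Python with a MemoryError rather than getting the whole process SIGKILLed by the kernel.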

With RLIMIT_AS set, we get the expected MemoryError instead of a SIGKILL.
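A quick way to see the difference, sketched under the assumption of a Linux box whose default RLIMIT_AS hard limit is unlimited:

```python
import resource

# Lower the soft RLIMIT_AS cap to 2 GiB; the hard limit stays unchanged.
_, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (2 << 30, hard))

try:
    buf = bytearray(4 << 30)  # 4 GiB: exceeds the cap
    hit_limit = False
except MemoryError:
    hit_limit = True

print(hit_limit)  # the over-allocation fails with MemoryError, not SIGKILL
```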

My proposal is to either make this the default or gate it behind some sort of flag/environment variable, so users don't need to apply the workaround everywhere...

PS: On Java, that flag also makes its OS API report the cgroup limit when asked how much memory is available, instead of the host's total memory (the default behavior).

PS: I'm not an avid Python user, just an ops guy, so I mostly write YAML these days... please let me know if anything I said doesn't make sense.

Thanks!
History
Date User Action Args
2020-11-20 18:43:42 caarlos0 set recipients: + caarlos0, asvetlov
2020-11-20 18:43:42 caarlos0 set messageid: <1605897822.91.0.676820014346.issue42411@roundup.psfhosted.org>
2020-11-20 18:43:42 caarlos0 link issue42411 messages
2020-11-20 18:43:42 caarlos0 create