This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: respect cgroups limits when trying to allocate memory
Type:
Stage:
Components: Interpreter Core
Versions: Python 3.10, Python 3.9, Python 3.8

process
Status: open
Resolution:
Dependencies:
Superseder:
Assigned To:
Nosy List: asvetlov, caarlos0, caleb2, christian.heimes
Priority: normal
Keywords:

Created on 2020-11-19 17:18 by caarlos0, last changed 2022-04-11 14:59 by admin.

Messages (11)
msg381442 - (view) Author: Carlos Alexandro Becker (caarlos0) Date: 2020-11-19 17:18
A common use case is running Python inside containers, for instance for training models and the like.

The Python process sees the host's memory/CPU and ignores the container's limits, which often leads to OOM kills. For instance:

docker run -m 1G --cpus 1 python:rc-alpine python -c 'x = bytearray(80 * 1024 * 1024 * 1000)'


Linux will kill the process once it reaches 1GB of RAM used.

Ideally, we should have an option to make Python try to allocate only the RAM it is limited to, perhaps something similar to Java's -XX:+UseContainerSupport.
msg381494 - (view) Author: Andrew Svetlov (asvetlov) * (Python committer) Date: 2020-11-20 18:35
Could you explain the proposal?

How "+X:UseContainerSupport" behaves for Java? Sorry, I did not use Java for ages and don't follow the modern Java best practices.

From my understanding, without Docker the allocation of `bytearray(80 * 1024 * 1024 * 1000)` raises MemoryError if there is not enough memory available and malloc()/calloc() returns NULL.

The exception is typically not handled at all but unwinds into "kill the process" behavior.

The reason for this situation is: in Python, when you try to handle out-of-memory conditions, the handler has a very high chance of allocating a Python object under the hood and raising MemoryError at any line of the Python exception handler.
msg381495 - (view) Author: Carlos Alexandro Becker (caarlos0) Date: 2020-11-20 18:43
The problem is that, instead of getting a MemoryError, Python tries to "go out of bounds" and allocate more memory than the cgroup allows, causing Linux to kill the process.

A workaround is to set RLIMIT_AS to the contents of /sys/fs/cgroup/memory/memory.limit_in_bytes, which is more or less what Java does when that flag is enabled (there are more details: cgroups v2 uses a different path, I think).

With RLIMIT_AS set, we get the MemoryError as expected instead of a SIGKILL.
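
For reference, a minimal sketch of that workaround (my own, untested in every setup): it reads the cgroups v1 limit file and falls back to the cgroups v2 path /sys/fs/cgroup/memory.max, which may contain "max" when no limit is set:

```python
# Sketch only: assumes the cgroup filesystem is mounted at the usual paths.
import resource

try:
    with open("/sys/fs/cgroup/memory/memory.limit_in_bytes") as f:  # cgroups v1
        raw = f.read().strip()
except FileNotFoundError:
    with open("/sys/fs/cgroup/memory.max") as f:  # cgroups v2
        raw = f.read().strip()

if raw != "max":
    limit = int(raw)
    # Cap the address space so a huge allocation fails with MemoryError
    # instead of triggering the cgroup OOM killer.
    resource.setrlimit(resource.RLIMIT_AS, (limit, limit))
```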

My proposal is to either make it the default or hide it behind some sort of flag/environment variable, so users don't need to do that everywhere...

PS: In Java, that flag also causes its OS API to return the cgroup limits when asked how much memory is available, instead of the host's memory (the default behavior).

PS: I'm not an avid Python user, just an ops guy, so I mostly write YAML these days... please let me know if anything I said doesn't make sense.

Thanks!
msg381497 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2020-11-20 18:57
I can neither reproduce the issue with podman and cgroups v2 nor with docker and cgroups v1. In both cases I'm getting a MemoryError as expected:

# podman run -m 1G --cpus 1 python:rc-alpine python -c 'x = bytearray(80 * 1024 * 1024 * 1000)'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
MemoryError

# docker run -m 1GB fedora:33 python3 -c 'x = bytearray(80 * 1024 * 1024 * 1000)'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
MemoryError
msg381498 - (view) Author: Carlos Alexandro Becker (caarlos0) Date: 2020-11-20 19:02
Maybe you're trying to allocate more memory than the host has available? I found that it gives a MemoryError in those cases too (kind of easy to reproduce on Docker for Mac)...
msg381499 - (view) Author: Carlos Alexandro Becker (caarlos0) Date: 2020-11-20 19:05
FWIW, here, both cases:

```
❯ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS                            PORTS               NAMES
30fc350a8dbd        python:rc-alpine    "python -c 'x = byte…"   24 seconds ago       Exited (137) 11 seconds ago                           great_murdock
5ba46a022910        fedora:33           "python3 -c 'x = byt…"   57 seconds ago       Exited (137) 43 seconds ago                           boring_edison
```
msg381500 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2020-11-20 19:09
I doubt it. My test hosts have between 16G and 64G of RAM + plenty of swap.

What's your platform, distribution, Kernel version, Docker version, and libseccomp version?
msg381502 - (view) Author: Carlos Alexandro Becker (caarlos0) Date: 2020-11-20 19:48
Just did more tests here:

**on my machine**:

$ docker run --name test -m 1GB fedora:33 python3 -c 'import resource; m = int(open("/sys/fs/cgroup/memory/memory.limit_in_bytes").read()); resource.setrlimit(resource.RLIMIT_AS, (m, m)); print(resource.getrlimit(resource.RLIMIT_AS)); x = bytearray(4 * 1024 * 1024 * 1000)'; docker inspect test | grep OOMKilled; docker rm test
Traceback (most recent call last):
  File "<string>", line 1, in <module>
MemoryError
(1073741824, 1073741824)
            "OOMKilled": false,
test

$ docker run --name test -m 1GB fedora:33 python3 -c 'x = bytearray(4 * 1024 * 1024 * 1000)'; docker inspect test | grep OOMKilled; docker rm test
            "OOMKilled": true,
test

**on a k8s cluster**:

$ kubectl run -i -t debug --rm --image=fedora:33 --restart=Never --limits='memory=1Gi'
If you don't see a command prompt, try pressing enter.
[root@debug /]# python3
Python 3.9.0 (default, Oct  6 2020, 00:00:00)
[GCC 10.2.1 20200826 (Red Hat 10.2.1-3)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> x = bytearray(4 * 1024 * 1024 * 1000)
Killed
[root@debug /]# python3
Python 3.9.0 (default, Oct  6 2020, 00:00:00)
[GCC 10.2.1 20200826 (Red Hat 10.2.1-3)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import resource
>>> m = int(open("/sys/fs/cgroup/memory/memory.limit_in_bytes").read())
>>> resource.setrlimit(resource.RLIMIT_AS, (m, m))
>>> print(resource.getrlimit(resource.RLIMIT_AS))
(1073741824, 1073741824)
>>> x = bytearray(4 * 1024 * 1024 * 1000)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
MemoryError
>>>
msg381504 - (view) Author: Christian Heimes (christian.heimes) * (Python committer) Date: 2020-11-20 20:27
Even if we decided to add a memory limit based on cgroups, there is no way to implement such a limit correctly in Python. We rely on the platform's malloc() implementation to handle memory allocation for us.

Python has an abstraction layer for memory allocators, but the allocator only tracks Python objects and does not keep information about the size of slabs. Memory tracking would increase memory usage and decrease performance. It would also not track other memory, like third-party libraries, extension modules, thread stacks, and other processes in the same cgroup hierarchy.

I'm pretty sure that the RLIMIT_AS approach will not work if you run multiple processes in the same container (e.g. spawn subprocesses).
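
A quick illustration of that concern (a sketch, not something from this issue; the 1 GiB and 700 MiB figures are just illustrative): RLIMIT_AS is checked per process, so two children can each stay under the limit while their combined usage exceeds the shared cgroup limit and the OOM killer still fires:

```python
import resource
import time
from multiprocessing import Process

LIMIT = 1024 * 1024 * 1024  # pretend this was read from the cgroup limit file

def allocate():
    # Each child individually respects RLIMIT_AS ...
    resource.setrlimit(resource.RLIMIT_AS, (LIMIT, LIMIT))
    data = bytearray(700 * 1024 * 1024)  # well under 1 GiB per process
    time.sleep(10)                       # keep the allocation alive

if __name__ == "__main__":
    # ... but together the two children use ~1.4 GiB, so a 1 GiB cgroup can
    # still OOM-kill them even though no process ever sees a MemoryError.
    children = [Process(target=allocate) for _ in range(2)]
    for p in children:
        p.start()
    for p in children:
        p.join()
```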

I'll talk to our glibc and container experts at work next week. Perhaps they are aware of a better way to handle cgroups memory limits more gracefully.
msg382405 - (view) Author: Carlos Alexandro Becker (caarlos0) Date: 2020-12-03 12:52
Any updates?
msg406037 - (view) Author: Caleb Collins-Parks (caleb2) Date: 2021-11-09 18:08
@christian.heimes following up on this - we have been having frequent memory issues with Python 3.7 in Kubernetes. It could just be the code, but if it does turn out to be a bug, fixing it could be very beneficial.
History
Date                 User              Action  Args
2022-04-11 14:59:38  admin             set     github: 86577
2021-11-09 18:08:57  caleb2            set     nosy: + caleb2; messages: + msg406037
2020-12-03 12:52:36  caarlos0          set     messages: + msg382405
2020-11-20 20:27:07  christian.heimes  set     messages: + msg381504; components: + Interpreter Core, - IO; versions: - Python 3.6, Python 3.7
2020-11-20 19:48:04  caarlos0          set     messages: + msg381502
2020-11-20 19:09:44  christian.heimes  set     messages: + msg381500
2020-11-20 19:05:29  caarlos0          set     messages: + msg381499
2020-11-20 19:02:03  caarlos0          set     messages: + msg381498
2020-11-20 18:57:35  christian.heimes  set     nosy: + christian.heimes; messages: + msg381497
2020-11-20 18:43:42  caarlos0          set     messages: + msg381495
2020-11-20 18:35:55  asvetlov          set     nosy: + asvetlov; messages: + msg381494
2020-11-19 17:18:10  caarlos0          create