At first glance this sounds like a non-question: you set a maximum memory limit, so of course the process gets OOM-killed when it exceeds that limit.
In practice, though, the maximum memory used by many common applications cannot be constrained simply by setting such limits.
Inside a Docker container, the application cannot perceive that it is running in a restricted, resource-isolated environment.
It assumes it can use all of the host's resources, just like an ordinary, non-containerized program.
In a Linux environment:
Even with -c set to 1000 CPU shares and -m set to 200M, the container still believes it has the host's 6 cores and 8 GB of memory:
docker run -i -t --name ubuntu -c 1000 -m 200M ubuntu /bin/bash
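You can verify this from inside the container; a quick sketch, reusing the limits above and assuming a 6-core, 8 GB host:
docker run -it --rm -c 1000 -m 200M ubuntu /bin/bash -c "grep MemTotal /proc/meminfo; nproc"
# MemTotal still reports the host's full ~8 GB and nproc prints 6,
# even though -m caps the container at 200 MB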
This causes some trouble for traditional programs such as Java.
On the memory side
If -Xms is not set, the JVM defaults to 1/64 of what it perceives as the physical memory for the initial heap size.
If -Xmx is not set, the JVM defaults to 1/4 of what it perceives as the physical memory for the maximum heap size.
initial heap size
Larger of 1/64th of the machine's physical memory on the machine or some reasonable minimum. Before Java SE 5.0, the default initial heap size was a reasonable minimum, which varies by platform. You can override this default using the -Xms command-line option.
maximum heap size
Smaller of 1/4th of the physical memory or 1GB. Before Java SE 5.0, the default maximum heap size was 64MB. You can override this default using the -Xmx command-line option.
Note: The boundaries and fractions given for the heap size are correct for Java SE 5.0. They are likely to be different in subsequent releases as computers get more powerful.
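To see which defaults a particular JVM actually picked, you can ask it to print its final flag values; a minimal sketch, assuming a HotSpot JVM that exposes these flag names:
java -XX:+PrintFlagsFinal -version | grep -Ei 'initialheapsize|maxheapsize'
# InitialHeapSize lands near 1/64 of the (perceived) physical memory and
# MaxHeapSize near 1/4 of it, unless -Xms/-Xmx override them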
Because it reads the wrong resource figures, an application in the container may try to allocate more memory than it is actually allowed, triggering an OOM kill.
So, can the problem be avoided by simply setting -Xmx to 200m? Unfortunately not.
The container's -m flag limits the memory usage of the entire container, while -Xms and -Xmx only bound the heap. A complete Java program actually uses:
- Heap memory, up to the value set by -Xmx
- Memory used by the garbage collector itself
- Memory used by JIT compilation
- Off-heap (direct/native) memory allocated by the program
- Metaspace
- Memory occupied by JNI code
- Memory the JVM itself occupies at startup
That figure also excludes whatever the Ubuntu container's own processes consume (usually negligible), so setting -Xmx to roughly 80% to 90% of -m is a reasonably safe choice.
The exact value needs to be tuned against the actual project.
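As a rough sketch of that rule of thumb (the openjdk:8 image and app.jar are placeholders for your own image and artifact):
docker run -it --rm -m 200M openjdk:8 java -Xms64m -Xmx170m -jar app.jar
# -Xmx is ~85% of the 200 MB container limit, leaving headroom for
# Metaspace, JIT code, thread stacks and other non-heap memory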
On the CPU side, CPU time can be oversubscribed, so there is no OOM to worry about.
But the program in the container still believes it can use all six of the host's cores. Is that really harmless?
docker run --name cputest -it --rm -c 512 progrium/stress --cpu 1
Since -c only sets the container's relative CPU weight for contention (512 here, against the default of 1024), the container still runs at 100% CPU; stress's --cpu 1 spawns a single busy worker, so it fully occupies one of the host's cores.
docker run --name cputest -it --rm -c 512 progrium/stress --cpu 6
All host CPUs are used this time
In fact, the -c parameter does not stop the program from using as much CPU as it needs. Even -c 512 does not affect how threads are scheduled across cores (the container is not capped at 0.5 of a core);
Docker's -c only weights the competition for CPU time.
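You can confirm that -c becomes nothing more than a cgroup weight; a sketch assuming cgroup v1 and Docker's default cgroupfs layout:
docker run -d --name sharestest -c 512 ubuntu sleep 600
CID=$(docker inspect -f '{{.Id}}' sharestest)
cat /sys/fs/cgroup/cpu/docker/$CID/cpu.shares        # prints 512: a relative weight only
cat /sys/fs/cgroup/cpu/docker/$CID/cpu.cfs_quota_us  # prints -1: no hard CPU quota is set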
On Kubernetes, requests likewise only weight contention, while limits cap the CPU time a container can consume per scheduling period; neither affects how threads are scheduled across cores.
So there is no situation where 6 cores running 120 threads end up being rescheduled too frequently.