When using Docker, you can bind a container to specific CPU cores with the cpuset parameter (--cpuset-cpus). This greatly reduces how often the operating system migrates the container's processes between CPUs, so the application inside the container performs noticeably better. In fact, cpuset is a common way to deploy online-application pods in production environments.
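As a quick illustration, here is a minimal sketch of pinning a container to specific cores (the image name and core numbers are arbitrary):

# Pin the container to cores 0 and 1 only; the scheduler will not
# migrate its processes to other cores, reducing cross-CPU switches
docker run -d --name pinned --cpuset-cpus="0,1" nginx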
To get behavior similar to Docker's cpuset on Kubernetes, two conditions need to be met: the QoS class of the pod must be Guaranteed, and requests and limits must be set to the same integer value, such as 2 (as shown in the sketch below).
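A minimal sketch of a pod that satisfies both conditions (pod and image names are illustrative); because every container sets requests equal to limits for both CPU and memory, its QoS class is Guaranteed:

# Apply a Guaranteed-QoS pod: requests == limits for cpu and memory
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pinned-app
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "2"
        memory: 1Gi
      limits:
        cpu: "2"
        memory: 1Gi
EOF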
To control CPU resources, you need to use the requests and limits attributes
In short, requests maps to the process's cpu.shares in its cgroup under the CFS scheduler, while limits is a CFS bandwidth limit. Requests and limits therefore use different mechanisms to control CPU resources.
Let's start with requests. In docker run, --cpu-shares allocates CPU time proportionally between containers, for example between two competing containers. --cpu-shares only controls the relative weight under contention, so when only one container is running it can still use all of the CPU.
On Kubernetes, the agileek/cpuset test image is used for the stress test. Node slave2 has six cores and 16 GB of memory. When there is only one deployment, s1, with a 100m request and a single replica, it can still eat 100% of a CPU because there is no competition:
name: s1
resources:
  requests:
    cpu: 100m
Unfortunately, this image does not seem to support specifying how many cores to stress. There are six cores on this node, so I scale the deployment to six replicas to eat up all six cores (see the one-liner below).
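For reference, the scaling step is a single command (the deployment name s1 follows the example above):

# Scale the s1 deployment to six replicas, one busy pod per core on this node
kubectl scale deployment s1 --replicas=6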
Now we add a competitor: clone s1 into s2 and change the request on s2 to 50m. Finally, on the slave2 node we have s1 (100m) × 6 and s2 (50m) × 6, and the CPU usage of s1 and s2 gradually approaches 4:2, i.e. the 2:1 ratio of their requests.
Limits map to the cgroup attributes cpu.cfs_period_us and cpu.cfs_quota_us. cpu.cfs_period_us sets the scheduling period in microseconds (μs) and is always used together with cpu.cfs_quota_us. cpu.cfs_quota_us sets the maximum CPU time (in μs) that the task may use within each period; it limits the task's usage of a single CPU. For example, to cap a task at 50%, set cpu.cfs_quota_us to 50000 against a cpu.cfs_period_us of 100000. Now set s1 to requests 100m / limits 100m and s2 to requests 50m / limits 50m (one way to apply this is sketched below).
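A sketch of applying these values, assuming s1 and s2 are the deployments created earlier; a 100m limit corresponds to cpu.cfs_quota_us = 10000 per 100000 μs period, and 50m to 5000:

# Cap s1 at 100m and s2 at 50m (requests == limits)
kubectl set resources deployment s1 --requests=cpu=100m --limits=cpu=100m
kubectl set resources deployment s2 --requests=cpu=50m --limits=cpu=50m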
With these limits in place, total CPU utilization drops straight to about 6%, and the usage curve of each core falls to almost zero.
QoS is assigned at the pod level, but the resources that determine the QoS class are declared on each container. When a pod has only one container, the question becomes very simple.
The word Kubernetes itself means helmsman or pilot in Greek. It was open-sourced by Google in 2014.
Now that you have read this far, I believe you understand how valuable it is to learn and master it. There is a saying that software is eating the world and containers are eating software, which is also why I have written so much about Docker.
Kubernetes is becoming the operating system for container and cloud environments. Before 2017, Kubernetes was competing with Mesosphere and Docker Swarm; since 2017 it has gradually become the de facto standard for container orchestration, and today all the mainstream cloud vendors offer Kubernetes services.
By default, the build result and intermediate cache will only remain internally in BuildKit. An output needs to be specified to retrieve the result.
In other words, if you need to use the result, you must specify an output. Building with the local output type writes the raw, unprocessed files to the specified folder rather than producing a traditional local Docker image.
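For example, a build whose result is exported to a local directory instead of the image store might look like this (the directory name is arbitrary):

# Export the final stage's filesystem as plain files into ./out
docker buildx build --output type=local,dest=./out .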
By default, a Docker container believes it can use all of the host's CPU resources. After starting a container, let's try to fully occupy one CPU core:
docker run -it --name ubuntu ubuntu /bin/bash
while : ; do : ; done &
You can see that one of the four cores is at 100%.
We know that Docker's --cpu-shares parameter controls the proportion of CPU a container can occupy. However, when a container is started on its own there is no competition, so even with --cpu-shares=500 it can still occupy all the cores.
To create competition, we pin two containers to the first core and set their share weights:
docker run -it --name test1 --cpu-shares=500 --cpuset-cpus 0 ubuntu /bin/bash
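The second command is not shown in the original; assuming the competing container is given twice the weight of test1 and runs the same busy loop, it would look like this, which matches the roughly 2:1 split observed below:

# Assumed counterpart: same core, double the share weight of test1
docker run -it --name test2 --cpu-shares=1000 --cpuset-cpus 0 ubuntu /bin/bash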
In top, the two bash processes can be seen at the bottom right, one at about 66.4% CPU and the other at about 33.2%.
However, --cpu-shares can only control the relative proportion of usage, and it was the only parameter early Docker provided for allocating CPU resources.
This is the same principle behind requests in Kubernetes resources.
If you want to implement limits in Kubernetes resources, you need to use:
cpu.cfs_quota_us
cpu.cfs_period_us
The default period is 100 ms, i.e. 100000 μs. The following statement means that out of every 100000 μs of period, 20000 μs may be used, i.e. 20% of one core.
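The original statement is not reproduced here; an equivalent docker run invocation would be a sketch like this:

# Allow 20000us of CPU time per 100000us period, i.e. 20% of one core
docker run -it --cpu-period=100000 --cpu-quota=20000 ubuntu /bin/bash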
If you want to use TLS to securely access dockerd on port 2376 of a remote server, you need to generate certificates on the remote server, copy them to the client PC, and then use those certificates to access port 2376. You can use the following script to generate them:
gen_docker_ca.sh
#!/bin/bash
#related configs
SERVER="192.168.194.142"
PASSWORD="password"
COUNTRY="CN"
STATE="LN"
CITY="DL"
ORGANIZATION="lz"
ORGANIZATIONAL_UNIT="Dev"
EMAIL="test@lizhe.name"
###start create file###
echo "start create file"
#go to the path
cd /opt/docker_ca
#use aes256 gen the key
openssl genrsa -aes256 -passout pass:$PASSWORD -out ca-key.pem 4096
#gen ca cert
openssl req -new -x509 -passin "pass:$PASSWORD" -days 365 -key ca-key.pem -sha256 -out ca.pem -subj "/C=$COUNTRY/ST=$STATE/L=$CITY/O=$ORGANIZATION/OU=$ORGANIZATIONAL_UNIT/CN=$SERVER/emailAddress=$EMAIL"
#gen server used private key
openssl genrsa -out server-key.pem 4096
#gen csr
openssl req -subj "/CN=$SERVER" -sha256 -new -key server-key.pem -out server.csr
#whitelist the allowed server IPs via subjectAltName
sh -c 'echo subjectAltName = IP:'$SERVER',IP:0.0.0.0 >> extfile.cnf'
#extendedKeyUsage = serverAuth put into extfile.cnf
sh -c 'echo extendedKeyUsage = serverAuth >> extfile.cnf'
#Use the CA certificate, CA key and the server certificate request file above to sign and generate the server certificate
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -passin "pass:$PASSWORD" -CAcreateserial -out server-cert.pem -extfile extfile.cnf
#Generate client certificate RSA private key file
openssl genrsa -out key.pem 4096
#Generate client certificate request file
openssl req -subj '/CN=client' -new -key key.pem -out client.csr
#Continue setting certificate extension properties
sh -c 'echo extendedKeyUsage = clientAuth >> extfile.cnf'
#Use the CA to sign the client certificate request (generated from the client private key above) and produce the client certificate
openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem -passin "pass:$PASSWORD" -CAcreateserial -out cert.pem -extfile extfile.cnf
#Change key permissions
chmod 0400 ca-key.pem key.pem server-key.pem
#Change certificate permissions
chmod 0444 ca.pem server-cert.pem cert.pem
#Delete useless files
rm client.csr server.csr
echo "Generate file complete"
###End###
vim /lib/systemd/system/docker.service
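In the unit file, dockerd needs to listen on TCP port 2376 with the TLS material generated above. A typical ExecStart line might look like the following (the /opt/docker_ca paths are an assumption matching the script's working directory):

# Example ExecStart inside /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2376 --tlsverify --tlscacert=/opt/docker_ca/ca.pem --tlscert=/opt/docker_ca/server-cert.pem --tlskey=/opt/docker_ca/server-key.pem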
systemctl daemon-reload
systemctl restart docker
Copy the three files ca.pem, cert.pem, and key.pem from the Docker server to macOS.
The generated certificate files are as follows; change their permissions to 400.
Use the correct certificates to access the remote dockerd:
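For example, assuming the three files are in the current directory and the server IP from the script above:

# Verify the remote daemon over TLS using the client certificate
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=tcp://192.168.194.142:2376 version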
The --privileged flag maps the user ID 0 we saw earlier directly to the host's user ID 0 and gives it unrestricted access to any system call it chooses.
docker run -i -t --rm --privileged ubuntu /bin/bash