Docker privileged
The --privileged flag means that the container's user ID 0 (which, as we saw earlier, already maps directly to user ID 0 on the host) runs with essentially unrestricted access to the host: all capabilities are granted, the host's devices are exposed, and the default seccomp system-call filter is disabled
docker run -i -t --rm --privileged ubuntu /bin/bash
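A quick way to feel the difference (a sketch; the device names depend on your host):
ls /dev                 # inside the privileged container this exposes the host's devices
mount /dev/sda1 /mnt    # mounting a host disk works only because of --privileged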
Docker CLI
This is the command-line tool users interact with; it turns commands such as docker run into API requests sent to dockerd
Dockerd
dockerd listens for Docker API requests; it can receive them over a UNIX socket, TCP, or a file descriptor (fd).
The default UNIX socket is /var/run/docker.sock; accessing it requires root or membership in the docker group, and starting dockerd itself requires root
On startup, dockerd launches containerd and keeps communicating with it
Containerd
containerd runs as a daemon that exposes relatively low-level gRPC APIs. dockerd manages the life cycle of containers through containerd, and containerd in turn runs containers through runc
Runc
/usr/bin/docker-runc can be regarded as part of containerd; it is a binary tool for running OCI-compliant containers.
The container is packaged in the OCI standard format, which generally includes a config.json file and a root filesystem directory
docker save -o nginx.tar nginx
This saves the image as a tar file; unpack it to see the internal structure of the image
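For example (a sketch; the exact layout varies with the Docker version):
mkdir nginx-image && tar -xf nginx.tar -C nginx-image
ls nginx-image    # typically a manifest.json, an image config *.json, and one directory per layer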
containerd-shim
containerd-shim is what allows a container to keep running independently of containerd. (By default, stopping dockerd also stops the containers, but with the live-restore option in daemon.json, containers keep running after dockerd stops.)
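A minimal sketch of that configuration, assuming the default config path /etc/docker/daemon.json:
echo '{ "live-restore": true }' | sudo tee /etc/docker/daemon.json
sudo systemctl reload docker   # live-restore can be applied via a configuration reload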
As the parent process of the container, containerd-shim is mainly responsible for keeping the container's STDIO open, reporting the container's exit status, and reaping the container's orphaned processes.
The calling order is dockerd → containerd → containerd-shim → runc → the container's CMD
Swarm mode is integrated into recent versions of Docker and needs no additional installation
docker swarm init --advertise-addr 192.168.194.137
On node 2, join the cluster using the command printed by the init output
docker swarm join --token SWMTKN-1-42oor9ruw9jv0bqw7vqmm6c29jwo5mnf34xvfn9iwojpor8c4q-bbxltikj5m5r1fnkhezlhjkts 192.168.194.137:2377
Now you can see that there are two nodes in the cluster
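You can confirm this from the manager node:
docker node ls   # lists both nodes; the manager shows Leader under MANAGER STATUS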
Create an overlay network
docker network create --driver overlay --opt encrypted --subnet 10.10.1.0/24 ol_net --attachable
Start an nginx service on the new network
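A sketch of the service command (the service name and replica count are illustrative):
docker service create --name nginx --replicas 2 --network ol_net nginx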
Finally, I start an Ubuntu container. You can see that this container on node 1 can ping the nginx tasks on node 1 and node 2 at the same time
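A sketch of that test; --attachable on the network above is what lets a plain container join the overlay, and ping may need to be installed first:
docker run -it --rm --network ol_net ubuntu /bin/bash
apt-get update && apt-get install -y iputils-ping   # run inside the container
ping nginx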
For ease of understanding, I will use manual steps instead of a Dockerfile
Start an Ubuntu container first
docker run -i -t --name base ubuntu /bin/bash
If you install Docker in this container and try to execute docker ps, you will get the following error
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
The reason is actually very simple: the container has the Docker client installed, but no daemon is running inside it. In fact, the simplest way to build an image inside a container is to mount the host's docker.sock, so the container shares the host's Docker daemon
docker run -i -t --name base2 -v /var/run/docker.sock:/var/run/docker.sock ubuntu /bin/bash
However, the side effect is that, because the host's socket is used, every container running on the host is visible from inside, and you are really operating the host's Docker environment
Create a Dockerfile
FROM nginx
RUN echo 'hello dockerfile' > /usr/share/nginx/html/index.html
Then run the build
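The build command is the standard one (the image tag is illustrative):
docker build -t hello-dockerfile .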
error checking context: 'no permission to read from '/proc/1/mem''.
This is because the container was not given root-level access to the host when it was created
Add --privileged
docker run -i -t --name base3 --privileged -v /var/run/docker.sock:/var/run/docker.sock ubuntu /bin/bash
A side note: if you run the build with the Dockerfile directly under the / path, you will get an error
error checking context: 'file ('/proc/4427/fd/5') not found or excluded by .dockerignore'.
The solution is simply not to use / as the build context; build from a subdirectory instead
Docker 1.11 was a big change.
The previously monolithic engine was split into 4 independent modules: engine (dockerd), containerd, runc and containerd-shim.
containerd-shim replaces the daemon as the container's parent process and is used to reap orphaned processes,
Containerd is used to manage the life cycle of containers,
Runc is used to run the container
This change made Docker more modular and standards-compliant, in line with the OCI specifications finalized in 2017
Installing a version older than 1.11 took me almost 4 hours,
mainly because almost all of the apt packages for it have been taken offline
If you want to try it yourself, follow these steps
Download the offline .deb package and install it on Ubuntu 16.04; Ubuntu 18.04 and 20.04 are not compatible
http://archive.ubuntu.com/ubuntu/pool/universe/d/docker.io/docker.io_1.10.3-0ubuntu6_amd64.deb
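A sketch of the install on Ubuntu 16.04 (apt-get install -f pulls in any missing dependencies):
wget http://archive.ubuntu.com/ubuntu/pool/universe/d/docker.io/docker.io_1.10.3-0ubuntu6_amd64.deb
sudo dpkg -i docker.io_1.10.3-0ubuntu6_amd64.deb
sudo apt-get install -f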
Start an Ubuntu container whose process 1 simply blocks, and deliberately create a zombie process; see the sketch after these steps
Kill process 42, so that process 45 is re-parented to the container's process 1
Then kill process 45. The container's process 1 never calls wait(), so it cannot reap it
We are left with a zombie process that will never be reclaimed
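A sketch of the whole experiment (the container name ztest is an assumption, the PIDs will differ on your machine, and ps may need procps installed):
docker run -d --name ztest ubuntu sleep infinity   # PID 1 blocks forever and never calls wait()
docker exec -it ztest /bin/bash                    # suppose this shell gets PID 42
sleep 1000 &                                       # run inside that shell; suppose it gets PID 45
docker exec ztest kill -9 42                       # 45 is re-parented to the container's PID 1
docker exec ztest kill -9 45                       # PID 1 never reaps it, so 45 becomes a zombie
docker exec ztest ps -ef                           # 45 should show up as <defunct>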
In a standard CentOS container, if you try to use systemctl, you will run into the following situation:
systemctl refuses to run because the current system was not booted with systemd as the init system (PID 1)
With that in mind, it is easy to see why the following command makes systemctl usable: it boots the container with systemd (/usr/sbin/init) as process 1
docker run -itd --name=centos --privileged=true centos /usr/sbin/init
Ubuntu, by comparison, is more troublesome
docker run -i -t -d --name ubuntu --privileged -v /run/systemd/system:/run/systemd/system -v /bin/systemctl:/bin/systemctl -v /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket ubuntu systemctl
Use the --init parameter to change the container's process 1 to /sbin/docker-init
When the --init parameter is not used, as shown in the figure below, note the differences
Kill process 25, so that process 28 is re-parented to process 1
Then kill process 28 to try to produce a zombie. Process 28 should become a zombie, but it is reaped normally, because the container's process 1 is docker-init
Conclusion: with the --init parameter, the container's process 1 is docker-init, which reaps orphaned processes normally, thus avoiding zombie processes in the situations above
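A quick check (a sketch; ps availability depends on the image):
docker run -it --rm --init ubuntu /bin/bash
ps -ef   # PID 1 is now /sbin/docker-init -- /bin/bash rather than bash itself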
Create a Python program
#!/usr/bin/env python3
import subprocess
import time
import os

# print our own PID so we can find this process in ps output
print(os.getpid())
# spawn a child that exits after 1 second; we never call p.wait(),
# so nobody collects its exit status once it dies
p = subprocess.Popen(['/bin/sleep', '1'])
# block forever without reaping the child
time.sleep(1000000)
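Run it and watch the zombie appear (the file name zombie.py is an assumption):
python3 zombie.py &
sleep 2; ps -ef | grep defunct   # the sleep child now shows as <defunct>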
In this program, the main process starts a sleep child, and that sleep exits after 1 second.
The child then enters the zombie state, waiting to be reaped.
However, the main process is blocked in time.sleep(1000000) and never calls wait(), so the child is never reaped.
With main process 6701 and child 6702, the child (6702) becomes a zombie
In other words, you can easily get a zombie process whenever two conditions are met: the child process has exited, and its parent is still running but never calls wait() to reap it
With those two conditions in mind, let's try the same thing in Docker
Next, kill process 27, so that process 30 is re-parented to process 1 of the container (the sleep process)
At this point, the parent of process 30 is a sleep process that will never reap it
Because process 27 itself was reaped by its parent, process 24, there are still no zombie processes on the host
Now kill process 30, which no one will reap
In this way, we successfully created a zombie process
It is PID 30 inside the container and PID 7132 on the host
If the container is stopped, the zombie process is reaped by the host
To sum up: when a container's process 1 never reaps its children, any re-parented process that exits becomes a zombie that nothing inside the container will clean up
To understand how to avoid zombie processes, we first need to know where the problem comes from
By default, the maximum process ID on a Linux system is 32768 (kernel.pid_max)
When the PIDs are exhausted, the system can no longer create new processes, which is a very serious problem
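You can check the limit directly:
cat /proc/sys/kernel/pid_max   # 32768 by default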
I once ran into a bug where calls to an external system never returned; the calling processes were never released and kept waiting, eventually exhausting the PID space
I won't repeat the details of the Linux process tree here,
Under normal circumstances, when a process exits, its PID is reclaimed by the parent process. Specifically, the parent calls wait() (or waitpid()) to collect the child's exit status, after which the kernel releases the PID
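A one-line shell illustration of the same mechanism, where the shell builtin wait plays the role of the wait() system call:
bash -c 'sleep 1 & wait'   # the parent reaps its child, leaving no zombie behind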
There is a special case in which the parent exits before the child does, for example when the parent is killed
In that case the child is re-parented to init (process 1), and process 1 becomes responsible for reaping it
The init process on early Linux systems was very lightweight: sysvinit
On a modern Linux system, however, init is generally a heavyweight process such as systemd or upstart, which does a lot of extra work.
(This has actually caused plenty of debate; some developers believe a heavier init process does more harm than good)
By repeatedly spawning bash from bash, you can build a standard process tree
We can see that the parent process of process 3099 is 3093, and its child processes include 3105, 3111, 3117 and 3123
Here, if you kill process 3099, you will find that the parent of its direct child 3105 has indeed become
/lib/systemd/systemd --user (PID 1634)
It is indeed systemd, but what happened to the expected process 1?
Process 1 is /sbin/init auto noprompt
Why? Since Ubuntu 18 (I am currently on Ubuntu 20),
/sbin/init is a symbolic link to /lib/systemd/systemd
On the normal Linux system above, even if we kill the parent of 3105, it does not become a zombie; ps -ef | grep defunct finds no zombie process
After killing process 21, process 24 is re-parented to the container's process 1
In the previous section we talked about connecting containers by having them share a third container's network,
so that one container can reach another's endpoints via localhost.
The underlying mechanism is the Linux network namespace
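A minimal sketch of that mechanism using docker run's container network mode (the name web and the test are illustrative):
docker run -d --name web nginx
docker run -it --rm --network container:web ubuntu /bin/bash
# inside the second container, localhost:80 reaches nginx,
# because both containers share a single network namespace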
In docker-compose, however, this works differently
Start two containers with the following script and configure them to share a network called unetwork
version: '3'
services:
  utest1:
    image: nginx
    networks:
      - unetwork
  utest2:
    image: ubuntu
    command: sleep infinity
    networks:
      - unetwork
networks:
  unetwork:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.100.0/24
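Bring the stack up and inspect the generated network (a sketch; depending on your version the command is docker-compose or docker compose, and the network name gets the compose project name as a prefix):
docker-compose up -d
docker network ls | grep unetwork
docker network inspect $(docker network ls -q --filter name=unetwork)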
You can see the two containers attached to the shared network
docker-compose supports a variety of network drivers
For details, please refer to
https://docs.docker.com/engine/reference/commandline/network_create/
Without any special configuration, a bridge network is generated here by default