CoreDNS

Once upon a time, when I first installed Kubernetes 1.10 with kubeadm, the cluster still shipped with kube-dns,

and plenty of tutorials still present kube-dns as the default DNS implementation on Kubernetes.

So let me take this opportunity to sort things out properly: starting with Kubernetes 1.11, CoreDNS became the default DNS server deployed by kubeadm.

The traditional kube-dns is made up of three components:

  1. kubedns: uses the informer mechanism from client-go to watch Services and Endpoints in the cluster, and keeps these objects in memory to serve internal DNS queries.
  2. dnsmasq: decides whether a domain is inside or outside the cluster, forwards external names to upstream resolvers and internal names to port 10053, and caches the answers to speed up resolution.
  3. sidecar: health-checks kubedns and dnsmasq and collects monitoring metrics.

The kubedns component itself contains two parts, kubedns and skydns.

kubedns watches Services and Endpoints in the cluster and caches the changes in a treecache data structure, acting as the Backend that supplies Records to skydns; the actual DNS resolution is done by skydns (which exists in two versions, skydns1 and skydns2).
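On a cluster that still runs kube-dns, you can see these components as the three containers of the single kube-dns pod, e.g. (the label selector below is the one the standard kube-dns manifests use):

kubectl -n kube-system get pods -l k8s-app=kube-dns \
  -o jsonpath='{.items[0].spec.containers[*].name}'
# typically prints: kubedns dnsmasq sidecar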

Newer versions of Kubernetes already use CoreDNS by default.

CoreDNS is a fast and very flexible DNS server that lets you process DNS data by writing your own plugins.

CoreDNS defines a plugin interface: anything that implements the Handler interface can be registered into the plugin chain.

The kubernetes plugin lets CoreDNS read Endpoints, Services, and other objects from the cluster, so it can replace kube-dns as the in-cluster DNS service.
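As a sketch of what that looks like in configuration, here is a minimal Corefile that answers the cluster domain from the kubernetes plugin and forwards everything else upstream (8.8.8.8 is just a placeholder upstream):

.:53 {
    errors
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . 8.8.8.8
    cache 30
}

Each directive in the server block enables one plugin; note that the execution order of the chain is fixed at build time (in plugin.cfg), not by the order the directives are written here.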

To point pods at the new DNS server address, I need to restart the kubelet with a new configuration file:

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 127.0.0.1
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- "10.0.0.10"
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
Then restart the kubelet with this file:

sudo ./kubelet --pod-manifest-path=pods --fail-swap-on=false --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.2 --kubeconfig=kubeconfig.yaml --config=kubeletconfig.yaml
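Once the kubelet is up with this file, any newly created pod with the default ClusterFirst dnsPolicy should get 10.0.0.10 as its nameserver. A quick sanity check, assuming some running pod named bb (the pod name here is just an example):

kubectl exec bb -- cat /etc/resolv.conf
# expect a line: nameserver 10.0.0.10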

Installing CoreDNS

CoreDNS is installed into Kubernetes with a YAML manifest:

git clone https://github.com/coredns/deployment

What we need is deploy.sh under the kubernetes folder:

/home/lizhe/dns/deployment/kubernetes/deploy.sh

It needs a small change before use: since we never installed kube-dns, the script fails when it tries to read the kube-dns Service IP. Comment out this line:

  #CLUSTER_DNS_IP=$(kubectl get service --namespace kube-system kube-dns -o jsonpath="{.spec.clusterIP}")

and replace it with:

  CLUSTER_DNS_IP=10.0.0.10
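If you prefer not to edit by hand, a one-liner does the same (a sketch; it assumes the assignment still looks the way it does in the current deploy.sh):

sed -i -E 's@^(\s*)CLUSTER_DNS_IP=\$\(kubectl.*@\1CLUSTER_DNS_IP=10.0.0.10@' deploy.sh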

./deploy.sh 10.0.0.10/24 cluster.local > coredns.yaml

This produces a coredns.yaml with the following content:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
           - weight: 100
             podAffinityTerm:
               labelSelector:
                 matchExpressions:
                   - key: k8s-app
                     operator: In
                     values: ["kube-dns"]
               topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: coredns/coredns:1.7.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP

Don't forget to double-check that the CIDR used here matches what kube-proxy and the kubelet were given in their config files.
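Concretely, three values have to agree in this walkthrough:

  1. the kube-apiserver --service-cluster-ip-range (10.0.0.0/24) must contain the DNS IP,
  2. clusterDNS in kubeletconfig.yaml must be that DNS IP (10.0.0.10),
  3. clusterIP of the kube-dns Service in coredns.yaml must be the same (10.0.0.10).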

Load this YAML with kubectl apply.
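For completeness, the apply and a quick status check (the label selector comes from the manifest above):

kubectl apply -f coredns.yaml
kubectl -n kube-system get pods -l k8s-app=kube-dns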

At this point I got the following error:

failed with No API token found for service account "coredns"
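One way to confirm the diagnosis is to inspect the ServiceAccount itself; its token secret list should come back empty here, because nothing in this cluster is issuing tokens:

kubectl -n kube-system get serviceaccount coredns -o jsonpath='{.secrets}'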

The image itself was already pulled manually, so this isn't an image problem:

docker pull coredns/coredns:1.7.0

There are two ways out:

  1. disable the ServiceAccount admission plugin
  2. configure the ServiceAccount signing keys (sketched below)
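The second option would look roughly like this (only a sketch: the key path is arbitrary, and it also needs a running kube-controller-manager, which this stripped-down setup doesn't have):

openssl genrsa -out /tmp/sa.key 2048
# kube-apiserver verifies tokens with:      --service-account-key-file=/tmp/sa.key
# kube-controller-manager signs them with:  --service-account-private-key-file=/tmp/sa.key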

Here I just take the blunt first one.

Change:

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

to:

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

That is, stop the currently running apiserver and start it again with the command below instead:

sudo ./kube-apiserver --etcd-servers=http://127.0.0.1:2379 --service-cluster-ip-range=10.0.0.0/24 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota

Now coredns gets created, but the pod fails to start.

It fails with the error below, and here my attempt to install coredns on this stripped-down cluster ends for now; the message suggests the kubelet cannot get the pod's network status from a CNI plugin, and this bare-bones setup has no network plugin configured at all.

Couldn't find network status for kube-system/coredns-6bb956f586-xr4sn through plugin: invalid network status for
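If I pick this up again, the starting point will be the pod events:

kubectl -n kube-system describe pod -l k8s-app=kube-dns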
