Containers cannot isolate the system time for you
The title of this article could not be more direct; it says it all in one sentence:
a container cannot isolate the Linux system time for you.
On Linux, many resources and objects cannot be namespaced, and the most typical example is the system time.
Setting the system time from inside a container is not allowed.
Conversely, if the time is changed on the host, it directly affects the system time seen inside every container.
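A quick way to see both directions of this is to try setting the clock from inside an unprivileged container; the image name and the output below are only an illustrative sketch.

docker run --rm alpine date -s "2030-01-01 00:00:00"
# date: can't set date: Operation not permitted   (the container lacks CAP_SYS_TIME)

# Granting CAP_SYS_TIME would let the command succeed, but the clock is shared with
# the host, so the change would be visible to the host and to every other container.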
The complete YAML for everything covered in this article is available at the following link.
apiVersion: v1
kind: Pod
metadata:
annotations:
cni.projectcalico.org/podIP: 10.42.0.180/32
cni.projectcalico.org/podIPs: 10.42.0.180/32
creationTimestamp: "2021-04-11T16:06:17Z"
generateName: elasticsearch-master-
labels:
app: elasticsearch-master
chart: elasticsearch-7.3.0
controller-revision-hash: elasticsearch-master-6b6cbdf4dd
heritage: Tiller
release: efk
statefulset.kubernetes.io/pod-name: elasticsearch-master-0
name: elasticsearch-master-0
namespace: efk
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: StatefulSet
name: elasticsearch-master
uid: a1b5af70-0a20-44b3-adb5-298d51c296d0
resourceVersion: "313207"
selfLink: /api/v1/namespaces/efk/pods/elasticsearch-master-0
uid: f6e6615c-7532-43ef-a0e9-3a1a87e1352b
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- elasticsearch-master
topologyKey: kubernetes.io/hostname
weight: 1
containers:
- env:
- name: node.name
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: cluster.initial_master_nodes
value: elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2,
- name: discovery.seed_hosts
value: elasticsearch-master-headless
- name: cluster.name
value: elasticsearch
- name: network.host
value: 0.0.0.0
- name: ES_JAVA_OPTS
value: -Xmx1g -Xms1g
- name: node.data
value: "true"
- name: node.ingest
value: "true"
- name: node.master
value: "true"
image: ranchercharts/elasticsearch-elasticsearch:7.3.0
imagePullPolicy: IfNotPresent
name: elasticsearch
ports:
- containerPort: 9200
name: http
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
readinessProbe:
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
# If the node is starting up wait for the cluster to be ready (request params: 'wait_for_status=green&timeout=1s' )
# Once it has started only check that the node itself is responding
START_FILE=/tmp/.es_start_file
http () {
local path="${1}"
if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
BASIC_AUTH="-u ${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
else
BASIC_AUTH=''
fi
curl -XGET -s -k --fail ${BASIC_AUTH} http://127.0.0.1:9200${path}
}
if [ -f "${START_FILE}" ]; then
echo 'Elasticsearch is already running, lets check the node is healthy'
http "/"
else
echo 'Waiting for elasticsearch cluster to become cluster to be ready (request params: "wait_for_status=green&timeout=1s" )'
if http "/_cluster/health?wait_for_status=green&timeout=1s" ; then
touch ${START_FILE}
exit 0
else
echo 'Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )'
exit 1
fi
fi
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 3
timeoutSeconds: 5
resources:
limits:
cpu: "1"
memory: 2Gi
requests:
cpu: 100m
memory: 2Gi
securityContext:
capabilities:
drop:
- ALL
runAsNonRoot: true
runAsUser: 1000
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: elasticsearch-master
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-xqsn2
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
hostname: elasticsearch-master-0
initContainers:
- command:
- sysctl
- -w
- vm.max_map_count=262144
image: ranchercharts/elasticsearch-elasticsearch:7.3.0
imagePullPolicy: IfNotPresent
name: configure-sysctl
resources: {}
securityContext:
privileged: true
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-xqsn2
readOnly: true
nodeName: 192.168.204.139
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 1000
serviceAccount: default
serviceAccountName: default
subdomain: elasticsearch-master-headless
terminationGracePeriodSeconds: 120
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: elasticsearch-master
persistentVolumeClaim:
claimName: elasticsearch-master-elasticsearch-master-0
- name: default-token-xqsn2
secret:
defaultMode: 420
secretName: default-token-xqsn2
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2021-04-11T16:06:19Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2021-04-11T16:06:57Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2021-04-11T16:06:57Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2021-04-11T16:06:17Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://7ee20dc3683259847554412c52241a6a64638f228b12d7fd916feb08ca73a6b9
image: ranchercharts/elasticsearch-elasticsearch:7.3.0
imageID: docker-pullable://ranchercharts/elasticsearch-elasticsearch@sha256:4c36f5486f292aff534c28506e8cd0f86e4ae177ffce06005bbfa5b312738838
lastState: {}
name: elasticsearch
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2021-04-11T16:06:19Z"
hostIP: 192.168.204.139
initContainerStatuses:
- containerID: docker://e943b905ff851effa892a7a864b679fd0f1eefaa4fc7953df2aa5fd0c062af84
image: ranchercharts/elasticsearch-elasticsearch:7.3.0
imageID: docker-pullable://ranchercharts/elasticsearch-elasticsearch@sha256:4c36f5486f292aff534c28506e8cd0f86e4ae177ffce06005bbfa5b312738838
lastState: {}
name: configure-sysctl
ready: true
restartCount: 0
state:
terminated:
containerID: docker://e943b905ff851effa892a7a864b679fd0f1eefaa4fc7953df2aa5fd0c062af84
exitCode: 0
finishedAt: "2021-04-11T16:06:19Z"
reason: Completed
startedAt: "2021-04-11T16:06:19Z"
phase: Running
podIP: 10.42.0.180
podIPs:
- ip: 10.42.0.180
qosClass: Burstable
startTime: "2021-04-11T16:06:17Z"
apiVersion: v1
kind: Pod
metadata:
annotations:
cni.projectcalico.org/podIP: 10.42.2.145/32
cni.projectcalico.org/podIPs: 10.42.2.145/32
creationTimestamp: "2021-04-11T16:06:17Z"
generateName: elasticsearch-master-
labels:
app: elasticsearch-master
chart: elasticsearch-7.3.0
controller-revision-hash: elasticsearch-master-6b6cbdf4dd
heritage: Tiller
release: efk
statefulset.kubernetes.io/pod-name: elasticsearch-master-1
name: elasticsearch-master-1
namespace: efk
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: StatefulSet
name: elasticsearch-master
uid: a1b5af70-0a20-44b3-adb5-298d51c296d0
resourceVersion: "313226"
selfLink: /api/v1/namespaces/efk/pods/elasticsearch-master-1
uid: 17cc103a-87a5-4866-8e2e-ffdeb99312a8
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- elasticsearch-master
topologyKey: kubernetes.io/hostname
weight: 1
containers:
- env:
- name: node.name
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: cluster.initial_master_nodes
value: elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2,
- name: discovery.seed_hosts
value: elasticsearch-master-headless
- name: cluster.name
value: elasticsearch
- name: network.host
value: 0.0.0.0
- name: ES_JAVA_OPTS
value: -Xmx1g -Xms1g
- name: node.data
value: "true"
- name: node.ingest
value: "true"
- name: node.master
value: "true"
image: ranchercharts/elasticsearch-elasticsearch:7.3.0
imagePullPolicy: IfNotPresent
name: elasticsearch
ports:
- containerPort: 9200
name: http
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
readinessProbe:
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
# If the node is starting up wait for the cluster to be ready (request params: 'wait_for_status=green&timeout=1s' )
# Once it has started only check that the node itself is responding
START_FILE=/tmp/.es_start_file
http () {
local path="${1}"
if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
BASIC_AUTH="-u ${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
else
BASIC_AUTH=''
fi
curl -XGET -s -k --fail ${BASIC_AUTH} http://127.0.0.1:9200${path}
}
if [ -f "${START_FILE}" ]; then
echo 'Elasticsearch is already running, lets check the node is healthy'
http "/"
else
echo 'Waiting for elasticsearch cluster to become cluster to be ready (request params: "wait_for_status=green&timeout=1s" )'
if http "/_cluster/health?wait_for_status=green&timeout=1s" ; then
touch ${START_FILE}
exit 0
else
echo 'Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )'
exit 1
fi
fi
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 3
timeoutSeconds: 5
resources:
limits:
cpu: "1"
memory: 2Gi
requests:
cpu: 100m
memory: 2Gi
securityContext:
capabilities:
drop:
- ALL
runAsNonRoot: true
runAsUser: 1000
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: elasticsearch-master
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-xqsn2
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
hostname: elasticsearch-master-1
initContainers:
- command:
- sysctl
- -w
- vm.max_map_count=262144
image: ranchercharts/elasticsearch-elasticsearch:7.3.0
imagePullPolicy: IfNotPresent
name: configure-sysctl
resources: {}
securityContext:
privileged: true
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-xqsn2
readOnly: true
nodeName: 192.168.204.138
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 1000
serviceAccount: default
serviceAccountName: default
subdomain: elasticsearch-master-headless
terminationGracePeriodSeconds: 120
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: elasticsearch-master
persistentVolumeClaim:
claimName: elasticsearch-master-elasticsearch-master-1
- name: default-token-xqsn2
secret:
defaultMode: 420
secretName: default-token-xqsn2
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2021-04-11T16:06:20Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2021-04-11T16:06:59Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2021-04-11T16:06:59Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2021-04-11T16:06:17Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://d3b96c62a007957da29b760e1596b7c45c47218760f1a0b7aece479e2bd56566
image: ranchercharts/elasticsearch-elasticsearch:7.3.0
imageID: docker-pullable://ranchercharts/elasticsearch-elasticsearch@sha256:4c36f5486f292aff534c28506e8cd0f86e4ae177ffce06005bbfa5b312738838
lastState: {}
name: elasticsearch
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2021-04-11T16:06:20Z"
hostIP: 192.168.204.138
initContainerStatuses:
- containerID: docker://6ec812dcf93911cce587b34926a331ecd155221fb0793e5234bf3ecebcfc02f8
image: ranchercharts/elasticsearch-elasticsearch:7.3.0
imageID: docker-pullable://ranchercharts/elasticsearch-elasticsearch@sha256:4c36f5486f292aff534c28506e8cd0f86e4ae177ffce06005bbfa5b312738838
lastState: {}
name: configure-sysctl
ready: true
restartCount: 0
state:
terminated:
containerID: docker://6ec812dcf93911cce587b34926a331ecd155221fb0793e5234bf3ecebcfc02f8
exitCode: 0
finishedAt: "2021-04-11T16:06:20Z"
reason: Completed
startedAt: "2021-04-11T16:06:20Z"
phase: Running
podIP: 10.42.2.145
podIPs:
- ip: 10.42.2.145
qosClass: Burstable
startTime: "2021-04-11T16:06:17Z"
apiVersion: apps/v1
kind: StatefulSet
metadata:
annotations:
esMajorVersion: "7"
creationTimestamp: "2021-04-11T16:06:17Z"
generation: 1
labels:
app: elasticsearch-master
chart: elasticsearch-7.3.0
heritage: Tiller
io.cattle.field/appId: efk
release: efk
name: elasticsearch-master
namespace: efk
resourceVersion: "313252"
selfLink: /apis/apps/v1/namespaces/efk/statefulsets/elasticsearch-master
uid: a1b5af70-0a20-44b3-adb5-298d51c296d0
spec:
podManagementPolicy: Parallel
replicas: 3
revisionHistoryLimit: 10
selector:
matchLabels:
app: elasticsearch-master
serviceName: elasticsearch-master-headless
template:
metadata:
creationTimestamp: null
labels:
app: elasticsearch-master
chart: elasticsearch-7.3.0
heritage: Tiller
release: efk
name: elasticsearch-master
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- elasticsearch-master
topologyKey: kubernetes.io/hostname
weight: 1
containers:
- env:
- name: node.name
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: cluster.initial_master_nodes
value: elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2,
- name: discovery.seed_hosts
value: elasticsearch-master-headless
- name: cluster.name
value: elasticsearch
- name: network.host
value: 0.0.0.0
- name: ES_JAVA_OPTS
value: -Xmx1g -Xms1g
- name: node.data
value: "true"
- name: node.ingest
value: "true"
- name: node.master
value: "true"
image: ranchercharts/elasticsearch-elasticsearch:7.3.0
imagePullPolicy: IfNotPresent
name: elasticsearch
ports:
- containerPort: 9200
name: http
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
readinessProbe:
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
# If the node is starting up wait for the cluster to be ready (request params: 'wait_for_status=green&timeout=1s' )
# Once it has started only check that the node itself is responding
START_FILE=/tmp/.es_start_file
http () {
local path="${1}"
if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
BASIC_AUTH="-u ${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
else
BASIC_AUTH=''
fi
curl -XGET -s -k --fail ${BASIC_AUTH} http://127.0.0.1:9200${path}
}
if [ -f "${START_FILE}" ]; then
echo 'Elasticsearch is already running, lets check the node is healthy'
http "/"
else
echo 'Waiting for elasticsearch cluster to become cluster to be ready (request params: "wait_for_status=green&timeout=1s" )'
if http "/_cluster/health?wait_for_status=green&timeout=1s" ; then
touch ${START_FILE}
exit 0
else
echo 'Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )'
exit 1
fi
fi
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 3
timeoutSeconds: 5
resources:
limits:
cpu: "1"
memory: 2Gi
requests:
cpu: 100m
memory: 2Gi
securityContext:
capabilities:
drop:
- ALL
runAsNonRoot: true
runAsUser: 1000
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /usr/share/elasticsearch/data
name: elasticsearch-master
dnsPolicy: ClusterFirst
initContainers:
- command:
- sysctl
- -w
- vm.max_map_count=262144
image: ranchercharts/elasticsearch-elasticsearch:7.3.0
imagePullPolicy: IfNotPresent
name: configure-sysctl
resources: {}
securityContext:
privileged: true
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 1000
terminationGracePeriodSeconds: 120
updateStrategy:
type: RollingUpdate
volumeClaimTemplates:
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
name: elasticsearch-master
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 30Gi
storageClassName: nfs-provisioner
volumeMode: Filesystem
status:
phase: Pending
status:
collisionCount: 0
currentReplicas: 3
currentRevision: elasticsearch-master-6b6cbdf4dd
observedGeneration: 1
readyReplicas: 3
replicas: 3
updateRevision: elasticsearch-master-6b6cbdf4dd
updatedReplicas: 3
[elasticsearch@elasticsearch-master-0 ~]$ export
declare -x EFK_KIBANA_PORT="tcp://10.43.234.253:5601"
declare -x EFK_KIBANA_PORT_5601_TCP="tcp://10.43.234.253:5601"
declare -x EFK_KIBANA_PORT_5601_TCP_ADDR="10.43.234.253"
declare -x EFK_KIBANA_PORT_5601_TCP_PORT="5601"
declare -x EFK_KIBANA_PORT_5601_TCP_PROTO="tcp"
declare -x EFK_KIBANA_SERVICE_HOST="10.43.234.253"
declare -x EFK_KIBANA_SERVICE_PORT="5601"
declare -x EFK_KIBANA_SERVICE_PORT_HTTP="5601"
declare -x EFK_KUBE_STATE_METRICS_PORT="tcp://10.43.197.75:8080"
declare -x EFK_KUBE_STATE_METRICS_PORT_8080_TCP="tcp://10.43.197.75:8080"
declare -x EFK_KUBE_STATE_METRICS_PORT_8080_TCP_ADDR="10.43.197.75"
declare -x EFK_KUBE_STATE_METRICS_PORT_8080_TCP_PORT="8080"
declare -x EFK_KUBE_STATE_METRICS_PORT_8080_TCP_PROTO="tcp"
declare -x EFK_KUBE_STATE_METRICS_SERVICE_HOST="10.43.197.75"
declare -x EFK_KUBE_STATE_METRICS_SERVICE_PORT="8080"
declare -x EFK_KUBE_STATE_METRICS_SERVICE_PORT_HTTP="8080"
declare -x ELASTICSEARCH_MASTER_PORT="tcp://10.43.34.31:9200"
declare -x ELASTICSEARCH_MASTER_PORT_9200_TCP="tcp://10.43.34.31:9200"
declare -x ELASTICSEARCH_MASTER_PORT_9200_TCP_ADDR="10.43.34.31"
declare -x ELASTICSEARCH_MASTER_PORT_9200_TCP_PORT="9200"
declare -x ELASTICSEARCH_MASTER_PORT_9200_TCP_PROTO="tcp"
declare -x ELASTICSEARCH_MASTER_PORT_9300_TCP="tcp://10.43.34.31:9300"
declare -x ELASTICSEARCH_MASTER_PORT_9300_TCP_ADDR="10.43.34.31"
declare -x ELASTICSEARCH_MASTER_PORT_9300_TCP_PORT="9300"
declare -x ELASTICSEARCH_MASTER_PORT_9300_TCP_PROTO="tcp"
declare -x ELASTICSEARCH_MASTER_SERVICE_HOST="10.43.34.31"
declare -x ELASTICSEARCH_MASTER_SERVICE_PORT="9200"
declare -x ELASTICSEARCH_MASTER_SERVICE_PORT_HTTP="9200"
declare -x ELASTICSEARCH_MASTER_SERVICE_PORT_TRANSPORT="9300"
declare -x ELASTIC_CONTAINER="true"
declare -x ES_JAVA_OPTS="-Xmx1g -Xms1g"
declare -x HOME="/usr/share/elasticsearch"
declare -x HOSTNAME="elasticsearch-master-0"
declare -x KIBANA_HTTP_PORT="tcp://10.43.156.150:80"
declare -x KIBANA_HTTP_PORT_80_TCP="tcp://10.43.156.150:80"
declare -x KIBANA_HTTP_PORT_80_TCP_ADDR="10.43.156.150"
declare -x KIBANA_HTTP_PORT_80_TCP_PORT="80"
declare -x KIBANA_HTTP_PORT_80_TCP_PROTO="tcp"
declare -x KIBANA_HTTP_SERVICE_HOST="10.43.156.150"
declare -x KIBANA_HTTP_SERVICE_PORT="80"
declare -x KIBANA_HTTP_SERVICE_PORT_HTTP_ACCESS_KIBANA="80"
declare -x KUBERNETES_PORT="tcp://10.43.0.1:443"
declare -x KUBERNETES_PORT_443_TCP="tcp://10.43.0.1:443"
declare -x KUBERNETES_PORT_443_TCP_ADDR="10.43.0.1"
declare -x KUBERNETES_PORT_443_TCP_PORT="443"
declare -x KUBERNETES_PORT_443_TCP_PROTO="tcp"
declare -x KUBERNETES_SERVICE_HOST="10.43.0.1"
declare -x KUBERNETES_SERVICE_PORT="443"
declare -x KUBERNETES_SERVICE_PORT_HTTPS="443"
declare -x LS_COLORS="rs=0:di=38;5;27:ln=38;5;51:mh=44;38;5;15:pi=40;38;5;11:so=38;5;13:do=38;5;5:bd=48;5;232;38;5;11:cd=48;5;232;38;5;3:or=48;5;232;38;5;9:mi=05;48;5;232;38;5;15:su=48;5;196;38;5;15:sg=48;5;11;38;5;16:ca=48;5;196;38;5;226:tw=48;5;10;38;5;16:ow=48;5;10;38;5;21:st=48;5;21;38;5;15:ex=38;5;34:*.tar=38;5;9:*.tgz=38;5;9:*.arc=38;5;9:*.arj=38;5;9:*.taz=38;5;9:*.lha=38;5;9:*.lz4=38;5;9:*.lzh=38;5;9:*.lzma=38;5;9:*.tlz=38;5;9:*.txz=38;5;9:*.tzo=38;5;9:*.t7z=38;5;9:*.zip=38;5;9:*.z=38;5;9:*.Z=38;5;9:*.dz=38;5;9:*.gz=38;5;9:*.lrz=38;5;9:*.lz=38;5;9:*.lzo=38;5;9:*.xz=38;5;9:*.bz2=38;5;9:*.bz=38;5;9:*.tbz=38;5;9:*.tbz2=38;5;9:*.tz=38;5;9:*.deb=38;5;9:*.rpm=38;5;9:*.jar=38;5;9:*.war=38;5;9:*.ear=38;5;9:*.sar=38;5;9:*.rar=38;5;9:*.alz=38;5;9:*.ace=38;5;9:*.zoo=38;5;9:*.cpio=38;5;9:*.7z=38;5;9:*.rz=38;5;9:*.cab=38;5;9:*.jpg=38;5;13:*.jpeg=38;5;13:*.gif=38;5;13:*.bmp=38;5;13:*.pbm=38;5;13:*.pgm=38;5;13:*.ppm=38;5;13:*.tga=38;5;13:*.xbm=38;5;13:*.xpm=38;5;13:*.tif=38;5;13:*.tiff=38;5;13:*.png=38;5;13:*.svg=38;5;13:*.svgz=38;5;13:*.mng=38;5;13:*.pcx=38;5;13:*.mov=38;5;13:*.mpg=38;5;13:*.mpeg=38;5;13:*.m2v=38;5;13:*.mkv=38;5;13:*.webm=38;5;13:*.ogm=38;5;13:*.mp4=38;5;13:*.m4v=38;5;13:*.mp4v=38;5;13:*.vob=38;5;13:*.qt=38;5;13:*.nuv=38;5;13:*.wmv=38;5;13:*.asf=38;5;13:*.rm=38;5;13:*.rmvb=38;5;13:*.flc=38;5;13:*.avi=38;5;13:*.fli=38;5;13:*.flv=38;5;13:*.gl=38;5;13:*.dl=38;5;13:*.xcf=38;5;13:*.xwd=38;5;13:*.yuv=38;5;13:*.cgm=38;5;13:*.emf=38;5;13:*.axv=38;5;13:*.anx=38;5;13:*.ogv=38;5;13:*.ogx=38;5;13:*.aac=38;5;45:*.au=38;5;45:*.flac=38;5;45:*.mid=38;5;45:*.midi=38;5;45:*.mka=38;5;45:*.mp3=38;5;45:*.mpc=38;5;45:*.ogg=38;5;45:*.ra=38;5;45:*.wav=38;5;45:*.axa=38;5;45:*.oga=38;5;45:*.spx=38;5;45:*.xspf=38;5;45:"
declare -x OLDPWD
declare -x PATH="/usr/share/elasticsearch/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
declare -x PWD="/usr/share/elasticsearch"
declare -x SHLVL="3"
declare -x TERM="xterm-256color"
declare -x cluster.initial_master_nodes
declare -x cluster.name
declare -x discovery.seed_hosts
declare -x network.host
declare -x node.data
declare -x node.ingest
declare -x node.master
declare -x node.name
[elasticsearch@elasticsearch-master-0 ~]$
A RoleBinding or ClusterRoleBinding ties a subject (a User, ServiceAccount, or Group) to a specific Role or ClusterRole.
The result expresses:
who (the subject) may do what (the verbs in the Role/ClusterRole, e.g. ["get", "watch", "list"]) to which resources (the resources in the Role/ClusterRole, e.g. ["pods"]).
Note that bindings only ever add allowed actions; there are no deny semantics.
Here is a simple RoleBinding:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: myname
namespace: default
subjects:
- kind: User
name: lizhe
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role # Role or ClusterRole
name: pod-viewer # name of role or clusterrole
apiGroup: rbac.authorization.k8s.io
A Kubernetes Role is a collection of permissions; it states what a subject (a Kubernetes user account, service account, or group) holding that collection is allowed to do.
It is purely permissive: a Role only grants, it never denies.
For example, the following Role, named pod-role in the default namespace, grants the permissions needed to list Pods in that one namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: default
name: pod-role
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list"]
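Assuming this Role has been bound to a user (for example lizhe, via a RoleBinding like the one shown earlier), you can sanity-check the grant with kubectl auth can-i:

kubectl auth can-i list pods --namespace default --as lizhe
# yes
kubectl auth can-i delete pods --namespace default --as lizhe
# no  (the Role only grants get, watch and list)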
Besides ordinary Roles there is also a cluster-scoped collection of permissions that can be used in any namespace: the ClusterRole.
The following ClusterRole grants the permissions needed to list Pods in any namespace:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: pod-clusterrole
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list"]
Kubernetes ships with a large number of built-in cluster roles.
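The listing below was presumably captured with something along the lines of:

kubectl get clusterroles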
calico-node 2021-10-09T13:06:48Z
cattle-fleet-system-fleet-agent-role 2021-10-09T13:21:08Z
cattle-globalrole-admin 2021-10-09T13:19:47Z
cattle-globalrole-authn-manage 2021-10-09T13:19:47Z
cattle-globalrole-catalogs-manage 2021-10-09T13:19:47Z
cattle-globalrole-catalogs-use 2021-10-09T13:19:47Z
cattle-globalrole-clusters-create 2021-10-09T13:19:47Z
cattle-globalrole-clustertemplaterevisions-create 2021-10-09T13:19:47Z
cattle-globalrole-clustertemplates-create 2021-10-09T13:19:47Z
cattle-globalrole-features-manage 2021-10-09T13:19:47Z
cattle-globalrole-kontainerdrivers-manage 2021-10-09T13:19:47Z
cattle-globalrole-nodedrivers-manage 2021-10-09T13:19:47Z
cattle-globalrole-podsecuritypolicytemplates-manage 2021-10-09T13:19:47Z
cattle-globalrole-restricted-admin 2021-10-09T13:19:47Z
cattle-globalrole-roles-manage 2021-10-09T13:19:47Z
cattle-globalrole-settings-manage 2021-10-09T13:19:47Z
cattle-globalrole-user 2021-10-09T13:19:47Z
cattle-globalrole-user-base 2021-10-09T13:19:47Z
cattle-globalrole-users-manage 2021-10-09T13:19:47Z
cattle-globalrole-view-rancher-metrics 2021-10-09T13:19:47Z
cattle-impersonation-u-at6ks67sqf 2021-10-09T13:20:18Z
cattle-impersonation-u-b4qkhsnliz 2021-10-09T13:20:16Z
cattle-impersonation-u-mo773yttt4 2021-10-09T13:22:27Z
cattle-impersonation-u-oz75ayhmkg 2021-10-09T13:20:19Z
cattle-unauthenticated 2021-10-09T13:19:35Z
cert-manager-cainjector 2021-10-09T13:14:13Z
cert-manager-controller-approve:cert-manager-io 2021-10-09T13:14:13Z
cert-manager-controller-certificates 2021-10-09T13:14:13Z
cert-manager-controller-certificatesigningrequests 2021-10-09T13:14:13Z
cert-manager-controller-challenges 2021-10-09T13:14:13Z
cert-manager-controller-clusterissuers 2021-10-09T13:14:13Z
cert-manager-controller-ingress-shim 2021-10-09T13:14:13Z
cert-manager-controller-issuers 2021-10-09T13:14:13Z
cert-manager-controller-orders 2021-10-09T13:14:13Z
cert-manager-edit 2021-10-09T13:14:13Z
cert-manager-view 2021-10-09T13:14:13Z
cert-manager-webhook:subjectaccessreviews 2021-10-09T13:14:13Z
cluster-admin 2021-10-09T13:05:55Z
cluster-crd-clusterRole 2021-10-09T13:19:37Z
cluster-owner 2021-10-09T13:20:16Z
create-ns 2021-10-09T13:20:20Z
edit 2021-10-09T13:05:55Z
flannel 2021-10-09T13:06:48Z
fleet-bundle-deployment 2021-10-09T13:20:45Z
fleet-content 2021-10-09T13:20:45Z
fleet-controller 2021-10-09T13:20:31Z
fleet-controller-bootstrap 2021-10-09T13:20:31Z
fleetworkspace-admin 2021-10-09T13:19:35Z
fleetworkspace-member 2021-10-09T13:19:35Z
fleetworkspace-readonly 2021-10-09T13:19:35Z
gitjob 2021-10-09T13:20:31Z
global-unrestricted-psp-clusterrole 2021-10-09T13:05:56Z
local-clustermember 2021-10-09T13:20:16Z
local-clusterowner 2021-10-09T13:20:13Z
p-b5lm7-namespaces-edit 2021-10-09T13:20:09Z
p-b5lm7-namespaces-readonly 2021-10-09T13:20:09Z
p-qpk2x-namespaces-edit 2021-10-09T13:20:09Z
p-qpk2x-namespaces-readonly 2021-10-09T13:20:09Z
project-crd-clusterRole 2021-10-09T13:19:37Z
project-member 2021-10-09T13:20:15Z
project-member-promoted 2021-10-09T13:20:19Z
rke2-cloud-controller-manager 2021-10-09T13:05:57Z
rke2-coredns-rke2-coredns 2021-10-09T13:06:48Z
rke2-coredns-rke2-coredns-autoscaler 2021-10-09T13:06:48Z
rke2-ingress-nginx 2021-10-09T13:07:47Z
system-unrestricted-psp-role 2021-10-09T13:05:56Z
system:aggregate-to-admin 2021-10-09T13:05:55Z
system:aggregate-to-edit 2021-10-09T13:05:55Z
system:aggregate-to-view 2021-10-09T13:05:55Z
system:auth-delegator 2021-10-09T13:05:55Z
system:basic-user 2021-10-09T13:05:55Z
system:certificates.k8s.io:certificatesigningrequests:nodeclient 2021-10-09T13:05:55Z
system:certificates.k8s.io:certificatesigningrequests:selfnodeclient 2021-10-09T13:05:55Z
system:certificates.k8s.io:kube-apiserver-client-approver 2021-10-09T13:05:55Z
system:certificates.k8s.io:kube-apiserver-client-kubelet-approver 2021-10-09T13:05:55Z
system:certificates.k8s.io:kubelet-serving-approver 2021-10-09T13:05:55Z
system:certificates.k8s.io:legacy-unknown-approver 2021-10-09T13:05:55Z
system:controller:attachdetach-controller 2021-10-09T13:05:55Z
system:controller:certificate-controller 2021-10-09T13:05:56Z
system:controller:clusterrole-aggregation-controller 2021-10-09T13:05:55Z
system:controller:cronjob-controller 2021-10-09T13:05:55Z
system:controller:daemon-set-controller 2021-10-09T13:05:55Z
system:controller:deployment-controller 2021-10-09T13:05:55Z
system:controller:disruption-controller 2021-10-09T13:05:55Z
system:controller:endpoint-controller 2021-10-09T13:05:55Z
system:controller:endpointslice-controller 2021-10-09T13:05:55Z
system:controller:endpointslicemirroring-controller 2021-10-09T13:05:55Z
system:controller:ephemeral-volume-controller 2021-10-09T13:05:56Z
system:controller:expand-controller 2021-10-09T13:05:55Z
system:controller:generic-garbage-collector 2021-10-09T13:05:56Z
system:controller:horizontal-pod-autoscaler 2021-10-09T13:05:56Z
system:controller:job-controller 2021-10-09T13:05:56Z
system:controller:namespace-controller 2021-10-09T13:05:56Z
system:controller:node-controller 2021-10-09T13:05:56Z
system:controller:persistent-volume-binder 2021-10-09T13:05:56Z
system:controller:pod-garbage-collector 2021-10-09T13:05:56Z
system:controller:pv-protection-controller 2021-10-09T13:05:56Z
system:controller:pvc-protection-controller 2021-10-09T13:05:56Z
system:controller:replicaset-controller 2021-10-09T13:05:56Z
system:controller:replication-controller 2021-10-09T13:05:56Z
system:controller:resourcequota-controller 2021-10-09T13:05:56Z
system:controller:root-ca-cert-publisher 2021-10-09T13:05:56Z
system:controller:route-controller 2021-10-09T13:05:56Z
system:controller:service-account-controller 2021-10-09T13:05:56Z
system:controller:service-controller 2021-10-09T13:05:56Z
system:controller:statefulset-controller 2021-10-09T13:05:56Z
system:controller:ttl-after-finished-controller 2021-10-09T13:05:56Z
system:controller:ttl-controller 2021-10-09T13:05:56Z
system:discovery 2021-10-09T13:05:55Z
system:heapster 2021-10-09T13:05:55Z
system:kube-aggregator 2021-10-09T13:05:55Z
system:kube-controller-manager 2021-10-09T13:05:55Z
system:kube-dns 2021-10-09T13:05:55Z
system:kube-proxy 2021-10-09T13:05:56Z
system:kube-scheduler 2021-10-09T13:05:55Z
system:kubelet-api-admin 2021-10-09T13:05:55Z
system:monitoring 2021-10-09T13:05:55Z
system:node 2021-10-09T13:05:55Z
system:node-bootstrapper 2021-10-09T13:05:55Z
system:node-problem-detector 2021-10-09T13:05:55Z
system:node-proxier 2021-10-09T13:05:55Z
system:persistent-volume-provisioner 2021-10-09T13:05:55Z
system:public-info-viewer 2021-10-09T13:05:55Z
system:rke2-controller 2021-10-09T13:05:56Z
system:rke2-metrics-server 2021-10-09T13:07:39Z
system:rke2-metrics-server-aggregated-reader 2021-10-09T13:07:39Z
system:service-account-issuer-discovery 2021-10-09T13:05:55Z
system:volume-scheduler 2021-10-09T13:05:55Z
u-at6ks67sqf-view 2021-10-09T13:20:14Z
u-b4qkhsnliz-view 2021-10-09T13:20:11Z
u-mo773yttt4-view 2021-10-09T13:22:27Z
u-oz75ayhmkg-view 2021-10-09T13:20:14Z
user-whnpx-view 2021-10-09T13:19:48Z
view 2021-10-09T13:05:55Z
Most of these roles are intended for system components, but four of them are aimed at ordinary users:
cluster-admin: full access to the entire cluster
admin: full access within a namespace
edit: allows modifying almost everything in a namespace
view: read-only access within a namespace
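For example, to give a user read-only access to a single namespace you would typically bind the built-in view ClusterRole with a namespaced RoleBinding; the user and namespace here are just placeholders:

kubectl create rolebinding lizhe-view --clusterrole=view --user=lizhe --namespace=helloworld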
In the section on Kubernetes user accounts we covered how to create a user; a user account is meant for people outside the cluster.
A service account, by contrast, is meant for Pods.
Through its service account, a process running in a Pod can obtain a username and token and use them to access the API server.
The simplest use of service accounts is namespace-based.
Every namespace carries a default service account named default.
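You can confirm this for the namespace used below (the output is illustrative):

kubectl get serviceaccounts -n helloworld
# NAME      SECRETS   AGE
# default   1         5d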
We deploy an nginx Pod in the helloworld namespace and then try to reach the API server from inside it.
This takes two steps.
1. Deal with the server-side CA
1.1 You can simply skip certificate verification:
curl -k https://192.168.204.149:6443/api
1.2 Or locate and use the server-side CA certificate.
If you see a 401 at this point, it is not RBAC that is blocking you; the API server is rejecting the request because it is not authenticated.
2. Supply the service account token for the API server, which is mounted at:
/var/run/secrets/kubernetes.io/serviceaccount/token
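Inside the Pod you can read the token into a shell variable instead of pasting it; a small convenience sketch (the API server address matches the one used below):

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
curl --cacert ${CACERT} -H "Authorization: Bearer ${TOKEN}" https://192.168.204.149:6443/api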
Once you have the token, add it to the request; the examples below paste it literally on the command line:
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IjN6UW1TTTR2VlI2M3MxVklnRjFPZmtIa1JiVU92bVk1UENtMlBRQVFWQTAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJoZWxsb3dvcmxkIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tbHZjY2QiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6Ijg2YTQ3MjE1LTRiODctNDRhYy1hMTc2LWZkN2E0OTI4MjU1YiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpoZWxsb3dvcmxkOmRlZmF1bHQifQ.RnvDLG86wveeOmbY1jwJyu2YYX84KFnZkWP0hMo5gbSnMNfy8MDW6l00NiTqueTS5Efvzl9IQNo7cYJWLmwZ43hRSj9UaLnLHoWl3wgb0eT16AlSVL7WX5f05jdxH55gKqv-gOsF1u1btzltiI5a5iz5bvr02lOa23JD6xWpPKt5beBfcW6BSvbBboMYtwh_4fue6nOCBd96-nhOcd3ytdhQUu6Ta2o3BW7Ro1bMyJuwTWD_YxCKQj3Ab47n13W2MiztvXOF9dqGCRACBy_ZqvXNB3QsCyjDgnSWcc4jlkxKYsgmiQgOgFkdDzIrC-qGAdLpTF941QtYYZXqH91keQ" https://192.168.204.149:6443/api
At this point you can reach the API endpoint, but you still cannot list Pods:
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IjN6UW1TTTR2VlI2M3MxVklnRjFPZmtIa1JiVU92bVk1UENtMlBRQVFWQTAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJoZWxsb3dvcmxkIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tbHZjY2QiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6Ijg2YTQ3MjE1LTRiODctNDRhYy1hMTc2LWZkN2E0OTI4MjU1YiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpoZWxsb3dvcmxkOmRlZmF1bHQifQ.RnvDLG86wveeOmbY1jwJyu2YYX84KFnZkWP0hMo5gbSnMNfy8MDW6l00NiTqueTS5Efvzl9IQNo7cYJWLmwZ43hRSj9UaLnLHoWl3wgb0eT16AlSVL7WX5f05jdxH55gKqv-gOsF1u1btzltiI5a5iz5bvr02lOa23JD6xWpPKt5beBfcW6BSvbBboMYtwh_4fue6nOCBd96-nhOcd3ytdhQUu6Ta2o3BW7Ro1bMyJuwTWD_YxCKQj3Ab47n13W2MiztvXOF9dqGCRACBy_ZqvXNB3QsCyjDgnSWcc4jlkxKYsgmiQgOgFkdDzIrC-qGAdLpTF941QtYYZXqH91keQ" https://192.168.204.149:6443/api/v1/namespaces/default/pods
The 403 here means you are authenticated and it is now RBAC that is denying the request.
Let's grant this service account full administrator rights:
kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --group=system:serviceaccounts:helloworld
Because it now has cluster-admin rights, it can see Pod information not only in the default namespace but also in the helloworld namespace.
Service accounts rely on a special kind of Secret, the service account token, which is automatically mounted into every Pod.
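From inside any Pod you can see what gets mounted there:

ls /var/run/secrets/kubernetes.io/serviceaccount/
# ca.crt  namespace  token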
In the Kubernetes user account section we used OpenSSL to create a CSR and a private key and added a new user to Kubernetes. Here is a somewhat simpler approach using Go and a shell script.
Create main.go:
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/asn1"
	"encoding/pem"
	"os"
)

func main() {
	// First argument: output file prefix; second argument: the user name (CSR CommonName).
	name := os.Args[1]
	user := os.Args[2]

	// Generate a 2048-bit RSA private key.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// Write the key to <name>-key.pem in PEM format.
	keyDer := x509.MarshalPKCS1PrivateKey(key)
	keyBlock := pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: keyDer,
	}
	keyFile, err := os.Create(name + "-key.pem")
	if err != nil {
		panic(err)
	}
	pem.Encode(keyFile, &keyBlock)
	keyFile.Close()

	// Build the certificate subject; the CommonName is what Kubernetes treats as the user name.
	commonName := user
	emailAddress := "hello@lizhe.name"
	org := "lizhe"
	orgUnit := "lz"
	city := "DL"
	state := "LN"
	country := "CN"
	subject := pkix.Name{
		CommonName:         commonName,
		Country:            []string{country},
		Locality:           []string{city},
		Organization:       []string{org},
		OrganizationalUnit: []string{orgUnit},
		Province:           []string{state},
	}
	asn1, err := asn1.Marshal(subject.ToRDNSequence())
	if err != nil {
		panic(err)
	}

	// Create the certificate signing request and write it to <name>.csr in PEM format.
	csr := x509.CertificateRequest{
		RawSubject:         asn1,
		EmailAddresses:     []string{emailAddress},
		SignatureAlgorithm: x509.SHA256WithRSA,
	}
	bytes, err := x509.CreateCertificateRequest(rand.Reader, &csr, key)
	if err != nil {
		panic(err)
	}
	csrFile, err := os.Create(name + ".csr")
	if err != nil {
		panic(err)
	}
	pem.Encode(csrFile, &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: bytes})
	csrFile.Close()
}
Run the program; it takes two arguments, here client and lizhe.
client is the output file prefix (producing client.csr and client-key.pem), and lizhe becomes the subject (CommonName) of the CSR.
go run main.go client lizhe
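If it succeeds, the key and CSR appear in the current directory; the file names follow directly from the first argument:

ls
# client-key.pem  client.csr  main.go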
Create a file named main.sh with the following content:
#!/bin/sh
csr_name="my-client-csr"
name="${1}"
csr="${2}"
cat <<EOF | kubectl create -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
name: ${csr_name}
spec:
groups:
- system:authenticated
request: $(cat ${csr} | base64 | tr -d '\n')
usages:
- digital signature
- key encipherment
- client auth
EOF
echo
echo "Approving signing request."
kubectl certificate approve ${csr_name}
echo
echo "Downloading certificate."
kubectl get csr ${csr_name} -o jsonpath='{.status.certificate}' \
| base64 --decode > $(basename ${csr} .csr).crt
echo
echo "Cleaning up"
kubectl delete csr ${csr_name}
echo
echo "Add the following to the 'users' list in your kubeconfig file:"
echo "- name: ${name}"
echo " user:"
echo " client-certificate: ${PWD}/$(basename ${csr} .csr).crt"
echo " client-key: ${PWD}/$(basename ${csr} .csr)-key.pem"
echo
echo "Next you may want to add a role-binding for this user."
Run the script. It has very little error handling, so make sure each step completes successfully.
./main.sh lizhe client.csr
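Assuming openssl is available, you can inspect the signed certificate and confirm that the subject carries the user name you asked for:

openssl x509 -in client.crt -noout -subject
# the subject should include CN = lizhe (exact formatting varies by openssl version)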
The default kubeconfig generated by k3s is the starting point. We need a user entry like the one below, so copy that config file and make a few changes:
- name: lizhe
user:
client-certificate: /home/lizhe/works/createuser/client.crt
client-key: /home/lizhe/works/createuser/client-key.pem
Copy the kubeconfig into the working directory:
sudo cp /etc/rancher/k3s/k3s.yaml /home/lizhe/works/createuser/k3s_lizhe.yaml
Modify k3s_lizhe.yaml:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkekNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUyTVRnd01UazNNREF3SGhjTk1qRXdOREV3TURFMU5UQXdXaGNOTXpFd05EQTRNREUxTlRBdwpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUyTVRnd01UazNNREF3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFSL1NnS2NVeGM4T01ZeHhMcDZWeDVyak9JSUJnNUpGS3lBV3d5WXRpWG8KM3NxdHVMQXVnUldXVVpwQklKbXRsQ1htcjg0ZFR4ejJmbXpuV3ZSa3ZKOC9vMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVTZsemFvSjdVKy9qUnNHYVZqOWxqCjRiam1aMGt3Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUlnTUxQcEF2ejFuTzFKTWR6QnhKMmY3UnUzWTlVd0VkYkkKbXphOXBLZTlOdlFDSVFEQWh1WFp4SlJuMGxwVDQvdVFUUEtIck9DYjZNMExncXNDYXgyMjVIVFNOdz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
server: https://127.0.0.1:6443
name: default
contexts:
- context:
cluster: default
user: default
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: lizhe
user:
client-certificate: /home/lizhe/works/createuser/client.crt
client-key: /home/lizhe/works/createuser/client-key.pem
Use curl to check that the generated x509 certificate actually works:
curl --key ./client-key.pem --cert ./client.crt --insecure https://localhost:6443/api
Next, create a ClusterRole and bind it to the user with a RoleBinding:
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
namespace: default
name: podrole
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list"]
EOF
kubectl create rolebinding podrolebinding -n default --clusterrole podrole --user lizhe
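Before going back to curl, kubectl can tell you directly whether the binding took effect:

kubectl auth can-i list pods --namespace default --as lizhe
# yes
kubectl auth can-i list pods --namespace kube-system --as lizhe
# no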
Try querying the Pods in the default namespace as the user lizhe:
curl --cert ./client.crt --key ./client-key.pem --insecure -s https://localhost:6443/api/v1/namespaces/default/pods
Nothing comes back, so create a Pod and try again, as sketched below.
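A minimal way to create something to look at; the Pod name and image are arbitrary:

kubectl run nginx --image=nginx -n default
curl --cert ./client.crt --key ./client-key.pem --insecure -s https://localhost:6443/api/v1/namespaces/default/pods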
Configure the corresponding kubeconfig file:
sudo kubectl config --kubeconfig=./k3s.yaml set-credentials lizhe --client-certificate=/home/lizhe/works/createuser/client.crt --client-key=/home/lizhe/works/createuser/client-key.pem
sudo kubectl config --kubeconfig=./k3s.yaml set-context lizhe_context --cluster=default --namespace=default --user=lizhe
sudo kubectl config --kubeconfig=./k3s.yaml use-context lizhe_context
You can also create a brand-new kubeconfig file, so that it contains only the single user lizhe:
kubectl config --kubeconfig=./k3s_lizhe.yaml set-credentials lizhe --client-certificate=/home/lizhe/works/createuser/client.crt --client-key=/home/lizhe/works/createuser/client-key.pem
kubectl config --kubeconfig=./k3s_lizhe.yaml set-context lizhe_context --cluster=default --namespace=default --user=lizhe
A file created this way has no cluster information, so that part has to be copied in manually:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkekNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUyTVRnd01UazNNREF3SGhjTk1qRXdOREV3TURFMU5UQXdXaGNOTXpFd05EQTRNREUxTlRBdwpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUyTVRnd01UazNNREF3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFSL1NnS2NVeGM4T01ZeHhMcDZWeDVyak9JSUJnNUpGS3lBV3d5WXRpWG8KM3NxdHVMQXVnUldXVVpwQklKbXRsQ1htcjg0ZFR4ejJmbXpuV3ZSa3ZKOC9vMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVTZsemFvSjdVKy9qUnNHYVZqOWxqCjRiam1aMGt3Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUlnTUxQcEF2ejFuTzFKTWR6QnhKMmY3UnUzWTlVd0VkYkkKbXphOXBLZTlOdlFDSVFEQWh1WFp4SlJuMGxwVDQvdVFUUEtIck9DYjZNMExncXNDYXgyMjVIVFNOdz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
server: https://127.0.0.1:6443
name: default
contexts:
- context:
cluster: default
namespace: default
user: lizhe
name: lizhe_context
current-context: ""
kind: Config
preferences: {}
users:
- name: lizhe
user:
client-certificate: client.crt
client-key: client-key.pem
From then on, whenever you open a shell you can switch to the user lizhe simply by pointing KUBECONFIG at this file.
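For example, using the file we just wrote (path as used above):

export KUBECONFIG=/home/lizhe/works/createuser/k3s_lizhe.yaml
kubectl config use-context lizhe_context
kubectl get pods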
The API server supports several authorization modes, selected with its --authorization-mode startup flag.
RBAC (Role-Based Access Control) was introduced in Kubernetes 1.5 and is now the default standard.
Compared with the other authorization modes it has several advantages.
The Kubernetes RBAC model is built from three main components: subjects, rules (grouped into Roles and ClusterRoles), and the bindings that connect them.
A subject is the entity actually being checked: a user (Kubernetes user account), a service account (Kubernetes service account), or a group.
A rule is a list of CRUD-like verbs plus some Kubernetes-specific ones such as watch, list, and exec; Kubernetes uses Roles and ClusterRoles to define the scope in which a set of rules applies, and RoleBindings/ClusterRoleBindings (covered earlier) to attach those roles to subjects.
In the Kubernetes user account section we added a user; the following commands configure that user directly into a kubeconfig file:
sudo kubectl config --kubeconfig=./kubeconfig.yaml set-credentials lizhe --client-certificate=/home/lizhe/works/client.crt --client-key=/home/lizhe/works/client.key
Create a new context:
kubectl config --kubeconfig=./kubeconfig.yaml set-context lizhe_context --cluster=local --namespace=default --user=lizhe
Switch to that context:
kubectl config --kubeconfig=./kubeconfig.yaml use-context lizhe_context
This user can only list Pods in the default namespace.
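Requests outside that scope should be rejected, which is a quick way to confirm the binding is as narrow as intended; the error text below is approximate:

kubectl --kubeconfig=./kubeconfig.yaml get pods
# lists the Pods in the default namespace, if any
kubectl --kubeconfig=./kubeconfig.yaml get pods -n kube-system
# Error from server (Forbidden): pods is forbidden: User "lizhe" cannot list resource "pods" ...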