Before getting into kiali affinity scheduling, here is a simple warm-up example: scheduling a pod onto a designated node.
First, list the current k8s nodes:
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   5h11m   v1.18.1
k8s-node01   Ready    <none>   5h8m    v1.18.1
Now give the k8s-node01 node a label with the content name: xiao, using the following command:
kubectl label node k8s-node01 name=xiao
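A quick check that the label was applied (not shown in the original; these are standard kubectl flags):

kubectl get nodes -l name=xiao
# or inspect all labels on that node:
kubectl get node k8s-node01 --show-labels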
Write a pod resource file, flaskapp-deployment.yaml, which uses nodeSelector to pin the pod to the k8s-node01 node:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flaskapp-1
spec:
  selector:
    matchLabels:
      run: flaskapp-1
  replicas: 1
  template:
    metadata:
      labels:
        run: flaskapp-1
    spec:
      containers:
      - name: flaskapp-1
        image: jcdemo/flaskapp
        ports:
        - containerPort: 5000
      nodeSelector:
        name: xiao
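Deploying is a single apply, assuming the manifest above is saved as flaskapp-deployment.yaml in the current directory:

kubectl apply -f flaskapp-deployment.yaml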
After deploying flaskapp-deployment.yaml, the pod is indeed scheduled onto the k8s-node01 node, as shown below:
[root@k8s-master ~]# kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
flaskapp-1-58b69c66f9-hv498   1/1     Running   0          7m30s   10.244.1.30   k8s-node01   <none>           <none>
The example above uses nodeSelector to pick a node for the pod; this is the simplest form of k8s scheduling.
An example of a node affinity scheduling policy:
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
            - e2e-az2
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: k8s.gcr.io/pause:2.0
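For the required term above to be satisfiable, at least one node must carry the kubernetes.io/e2e-az-name label with value e2e-az1 or e2e-az2. A hypothetical way to set that up on the test cluster from the earlier example (node name assumed):

kubectl label node k8s-node01 kubernetes.io/e2e-az-name=e2e-az1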
A real-life analogy: in the old days at the hospital, a patient (pod) could not choose a doctor (node); you got whoever the queue assigned you, coldly and with no affinity whatsoever. Nowadays you can register online and pick the doctor you like, so there is affinity; society has progressed.
Naturally, a patient choosing a doctor falls into two cases as well: a hard one (required), e.g. insisting on one particular doctor and waiting indefinitely even if he is busy; and a soft one (preferred), e.g. preferring a certain doctor but being willing to see another if it really won't work out.
The theory below maps directly onto this example.
Node affinity, i.e. NodeAffinity, controls which machines a pod can, or cannot, be deployed on.
Node affinity scheduling policies come in two kinds: hard and soft. Under the hard policy, if no node satisfies the conditions, scheduling keeps retrying until one does; under the soft policy, if no node satisfies the conditions, the pod simply ignores the rule and the scheduling process completes anyway.
The syntax of the hard and soft node affinity policies is introduced below.
requiredDuringSchedulingIgnoredDuringExecution: the pod must be placed on a node that satisfies the conditions; if none does, the scheduler keeps retrying.
preferredDuringSchedulingIgnoredDuringExecution: the pod is preferentially placed on a node that satisfies the conditions; if none does, the conditions are ignored and the pod is scheduled onto another node (see the sketch below).
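To see the soft policy in isolation, here is a minimal sketch, not from the original article, that states only a preference; the disktype=ssd node label and the pod name are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: prefer-ssd
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50          # 1-100; higher weights count more when several preferences are summed
        preference:
          matchExpressions:
          - key: disktype   # hypothetical node label
            operator: In
            values:
            - ssd
  containers:
  - name: app
    image: k8s.gcr.io/pause:2.0

If no node carries disktype=ssd, this pod still gets scheduled onto whatever node is available; the preference only tilts the scoring.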
The following examples move from node affinity to inter-pod affinity and anti-affinity (podAffinity/podAntiAffinity), which place a pod relative to the pods already running on a node rather than relative to node labels.
Example 1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cache
spec:
  selector:
    matchLabels:
      app: store
  replicas: 3
  template:
    metadata:
      labels:
        app: store
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: redis-server
        image: redis:3.2-alpine
This creates a Deployment with 3 replicas and the anti-affinity rule shown above. Its pods carry the label app: store, so a new pod will not be scheduled onto any node that is already running a pod labeled app: store. The three replicas therefore end up on three different nodes (hosts).
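A quick way to verify the spread (standard kubectl; output will vary by cluster):

kubectl get pods -l app=store -o wide

Each replica should show a different value in the NODE column. Because the anti-affinity is required rather than preferred, a cluster with fewer schedulable nodes than replicas will leave the surplus pods Pending.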
Example 2
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: web-store
  replicas: 3
  template:
    metadata:
      labels:
        app: web-store
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - web-store
            topologyKey: "kubernetes.io/hostname"
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: web-app
        image: nginx:1.12-alpine
Building on the first example, this Deployment adds a required pod affinity with topologyKey "kubernetes.io/hostname": each pod must land on a node that is already running a pod labeled app=store (the redis-cache pods from Example 1), while its required anti-affinity on app=web-store keeps its own three replicas on different nodes.
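If the redis-cache pods from Example 1 are not running yet, the required affinity cannot be met and these pods stay Pending. The scheduler's reason shows up in the pod events (the pod name below is a placeholder):

kubectl describe pod <web-server-pod-name>
# look for a FailedScheduling event referencing the pod affinity/anti-affinity rules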
Example 3
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  selector:
    matchLabels:
      app: web-server
  replicas: 3
  template:
    metadata:
      labels:
        app: web-server
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - web-store
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: web-app
        image: hub.easystack.io/library/nginx:1.9.0
Some applications need their pod replicas to share a cache, so the pods must run on the same node. Here the web-server pods are required to be placed on nodes that already host a pod labeled app=web-store.
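Co-location can be confirmed the same way as before (standard kubectl):

kubectl get pods -l app=web-server -o wide

The NODE column should only list nodes that also host an app=web-store pod.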