
Setting up a Kubernetes (k8s) cluster on three Alibaba Cloud CentOS 7.9 x86_64 servers

Contents

1. Environment
2. Setup
  2.1 Install the Docker repository
  2.2 Install Docker
  2.3 Install kubeadm, kubelet and kubectl
  2.4 Deploy the Kubernetes master (node1)
  2.5 Install the Pod network add-on (CNI)
  2.6 Join the worker nodes
  2.7 Test the Kubernetes cluster
3. Deploy the Dashboard
  3.1 Create the file kubernetes-dashboard.yaml on node1
  3.2 Pull the Dashboard image


1. Environment

Each Alibaba Cloud server has two IP addresses: a public IP reachable from the Internet and a private IP for the internal network. My servers are:

172.16.247.250  node1
172.16.247.248  node2
172.16.247.249  node3

I connect to the servers over Xshell, and node1 is configured as the master.
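
If the node1/node2/node3 names shown above are not already resolvable, an optional step is to map them to the internal IPs in /etc/hosts on every server. This is only a sketch assuming those hostnames; skip it if your DNS or hostnames are already set up:

cat >> /etc/hosts << EOF
172.16.247.250 node1
172.16.247.248 node2
172.16.247.249 node3
EOF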

Disable the firewall (run on all servers):

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux:

sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

Disable swap:

swapoff -a                           # temporary (until reboot)
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent
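
You can confirm the change took effect:

free -m   # the Swap row should show 0 once swap is off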

Pass bridged IPv4 traffic to the iptables chains:

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
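
To make these settings take effect immediately, without waiting for a reboot, reload them:

sysctl --system   # reloads /etc/sysctl.d/*.conf, including k8s.conf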

2. Setup

Install Docker, kubeadm and kubelet on all nodes.

2.1 Install the Docker repository

yum install -y wget && wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

2.2 Install Docker

yum -y install docker-ce-18.06.1.ce-3.el7

Enable Docker on boot and start it:

systemctl enable docker && systemctl start docker

Check the Docker version:

docker --version

2.3 Install kubeadm, kubelet and kubectl

Add the Alibaba Cloud Kubernetes YUM repository:

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
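
Optionally, you can list the kubelet versions available from this repository before installing, as a quick sanity check:

yum list kubelet --showduplicates | sort -r | head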

Install kubeadm, kubelet and kubectl. Because these packages are updated frequently, pin a specific version:

yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0

Enable kubelet on boot:

systemctl enable kubelet

2.4 Deploy the Kubernetes master (node1)

kubeadm init \
  --apiserver-advertise-address=172.16.247.250 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.15.0 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16

Run this command on node1.

Set up the kubectl tool:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
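
As a quick check that kubectl can reach the API server, list the nodes; the master typically reports NotReady until the network add-on from the next step is installed:

kubectl get nodes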

2.5 Install the Pod network add-on (CNI)

Create the file kube-flannel.yml on node1:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.15.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

Then, on node1, run:

kubectl apply -f kube-flannel.yml
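
You can watch the flannel DaemonSet start; once its pods are Running, the master node should turn Ready:

kubectl get pods -n kube-system -l app=flannel
kubectl get nodes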

2.6 Join the worker nodes

On the remaining nodes, node2 and node3, pull the flannel image:

docker pull lizhenliang/flannel:v0.11.0-amd64
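
Note that the kube-flannel.yml above actually references quay.io/coreos/flannel:v0.15.1 and rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.0, so if pulls are slow you may optionally pre-pull those images on node2 and node3 as well:

docker pull quay.io/coreos/flannel:v0.15.1
docker pull rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.0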

To add the new nodes to the cluster, run the kubeadm join command that was printed in the output of kubeadm init:

kubeadm join 172.16.247.250:6443 --token y7z5d4.3q8basg1rhwfqeo2 \
    --discovery-token-ca-cert-hash sha256:775f6b8fe3c32065d76cc670e48757135e2d1be30b4362e0c80f9f190f2356c2
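
The token from kubeadm init is only valid for 24 hours by default; if it has expired, generate a fresh join command on node1:

kubeadm token create --print-join-command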

Check the nodes (run on node1):

 kubectl get node

2.7 Test the Kubernetes cluster

Create a pod in the cluster to verify that everything is working.
Create an nginx deployment:

 kubectl create deployment nginx --image=nginx

Expose port 80 via a NodePort:

 kubectl expose deployment nginx --port=80 --type=NodePort

Check that nginx is running and note the assigned NodePort:

 kubectl get pod,svc

At this point you also need to open that NodePort in the Alibaba Cloud security group; otherwise the service cannot be reached from outside.

Then open the following URLs in a browser:

http://47.100.41.232:30415/
http://101.132.190.9:30415/
http://101.132.186.133:30415/

All three nodes respond, which means the cluster is working.

Scale the nginx deployment to 3 replicas and verify:

kubectl scale deployment nginx --replicas=3
kubectl get pods
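
To see which node each replica was scheduled on:

kubectl get pods -o wide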

3. Deploy the Dashboard

3.1 Create the file kubernetes-dashboard.yaml on node1

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: lizhenliang/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

3.2 Pull the Dashboard image

docker pull  lizhenliang/kubernetes-dashboard-amd64:v1.10.1

Apply the kubernetes-dashboard.yaml file:

 kubectl apply -f kubernetes-dashboard.yaml 

Check the exposed port:

kubectl get pods,svc -n kube-system

Note that the Dashboard web UI is served over HTTPS only; it cannot be accessed over plain HTTP.

The port also has to be opened in the Alibaba Cloud security group before it can be reached:

https://47.100.41.232:30001/#!/login

Create a service account and bind it to the built-in cluster-admin role:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
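
The token shown in the describe output is what you paste into the Dashboard login page. If you only want the token itself, something like the following works (a sketch that reuses the same secret lookup as above):

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}') | awk '/^token/{print $2}'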

The Dashboard is reachable on all three servers:

https://47.100.41.232:30001/#!/storageclass?namespace=default
https://101.132.190.9:30001/#!/overview?namespace=default
https://101.132.186.133:30001/#!/overview?namespace=default

The k8s cluster setup is complete.

