
Building a highly available, high-performance web cluster on Kubernetes and Docker

A cluster design and implementation based on Docker and k8s (graduation project)

Table of Contents

Project architecture diagram

Project description

Project environment

Environment preparation

IP address plan

Disable SELinux and firewalld

Configure static IP addresses

Set the hostnames

Upgrade the system (optional)

Add hosts entries

Project steps

Step 1. Design the cluster architecture with ProcessOn, plan the server IP addresses, and use kubeadm to install a single-master Kubernetes cluster (1 master + 2 worker nodes).

Step 2. Deploy ansible to automate the installation and maintenance of the related software, deploy the firewall server, and deploy the bastion host.

Deploy the bastion host

Deploy the firewall server

Step 3. Deploy an NFS server that provides data to the whole web cluster; every web pod accesses it through PVs, PVCs and volume mounts.

Step 4. Build the CI/CD environment: deploy GitLab, Jenkins and Harbor to handle code releases, image builds, data backups and the other pipeline work.

1. Deploy GitLab

2. Deploy Jenkins

3. Deploy Harbor

Step 5. Build the web API service written in Go into an image and deploy it to k8s as the web application; use HPA to scale horizontally when CPU usage reaches 50%, with a minimum of 20 and a maximum of 40 business pods.

Step 6. Start a MySQL pod to provide database service to the web application.

Attempt: deploying stateful MySQL on k8s

Step 7. Monitor the web pods with probes (liveness, readiness, startup) using the httpGet and exec methods, restarting them as soon as a problem appears to make the business pods more reliable.

Step 8. Load-balance the web application with Ingress and keep an overview of cluster resources with the Dashboard.

Use the Dashboard to oversee cluster resources

Step 9. Install Zabbix and Prometheus to monitor the cluster's resources (CPU, memory, network bandwidth, web service, database service, disk I/O, etc.).

Step 10. Stress-test the whole k8s cluster and the related servers with ab.


Project architecture diagram

Project description

Simulate a company web workload: deploy k8s, the web application, MySQL, NFS, Harbor, Zabbix, Prometheus, GitLab, Jenkins and ansible, keep the web service highly available, and approximate a heavily loaded production environment.

Project environment

CentOS 7.9, ansible 2.9.27, Docker 20.10.6, Docker Compose 2.18.1, Kubernetes 1.20.6, Calico 3.23, Harbor 2.4.1, NFS v4, metrics-server 0.6.0, ingress-nginx-controller v1.1.0, kube-webhook-certgen v1.1.0, MySQL 5.7.42, Dashboard v2.5.0, Prometheus 2.34.0, Zabbix 5.0, Grafana 10.0.0, jenkinsci/blueocean, GitLab 16.0.4-jh.

Environment preparation

10 fresh Linux servers: disable firewalld and SELinux, configure static IP addresses, set the hostnames, and add hosts entries.

IP address plan

server        ip
k8smaster     192.168.2.104
k8snode1      192.168.2.111
k8snode2      192.168.2.112
ansible       192.168.2.119
nfs           192.168.2.121
gitlab        192.168.2.124
harbor        192.168.2.106
zabbix        192.168.2.117
firewalld     192.168.2.141
Bastionhost   192.168.2.140

Disable SELinux and firewalld

  1. # stop the firewall and keep it from starting at boot
  2. service firewalld stop && systemctl disable firewalld
  3. # turn off SELinux for the current session
  4. setenforce 0
  5. # turn off SELinux permanently
  6. sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
  7. [root@k8smaster ~]# service firewalld stop
  8. Redirecting to /bin/systemctl stop firewalld.service
  9. [root@k8smaster ~]# systemctl disable firewalld
  10. Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
  11. Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
  12. [root@k8smaster ~]# reboot
  13. [root@k8smaster ~]# getenforce
  14. Disabled

Configure static IP addresses

  1. cd /etc/sysconfig/network-scripts/
  2. vim ifcfg-ens33   # do this on every node; the three variants below are for k8smaster (.104), k8snode1 (.111) and k8snode2 (.112)
  3. TYPE="Ethernet"
  4. BOOTPROTO="static"
  5. DEVICE="ens33"
  6. NAME="ens33"
  7. ONBOOT="yes"
  8. IPADDR="192.168.2.104"
  9. PREFIX=24
  10. GATEWAY="192.168.2.1"
  11. DNS1=114.114.114.114
  12. TYPE="Ethernet"
  13. BOOTPROTO="static"
  14. DEVICE="ens33"
  15. NAME="ens33"
  16. ONBOOT="yes"
  17. IPADDR="192.168.2.111"
  18. PREFIX=24
  19. GATEWAY="192.168.2.1"
  20. DNS1=114.114.114.114
  21. TYPE="Ethernet"
  22. BOOTPROTO="static"
  23. DEVICE="ens33"
  24. NAME="ens33"
  25. ONBOOT="yes"
  26. IPADDR="192.168.2.112"
  27. PREFIX=24
  28. GATEWAY="192.168.2.1"
  29. DNS1=114.114.114.114

Set the hostnames

  1. hostnamectl set-hostname k8smaster
  2. hostnamectl set-hostname k8snode1
  3. hostnamectl set-hostname k8snode2
  4. # switch user to reload the environment
  5. su - root
  6. [root@k8smaster ~]#
  7. [root@k8snode1 ~]#
  8. [root@k8snode2 ~]#

Upgrade the system (optional)

yum update -y

Add hosts entries

  1. vim /etc/hosts
  2. 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
  3. ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
  4. 192.168.2.104 k8smaster
  5. 192.168.2.111 k8snode1
  6. 192.168.2.112 k8snode2

Project steps

Step 1. Design the cluster architecture with ProcessOn, plan the server IP addresses, and use kubeadm to install a single-master Kubernetes cluster (1 master + 2 worker nodes).

  1. # 1. Set up passwordless SSH between the nodes
  2. ssh-keygen # press Enter through every prompt
  3. ssh-copy-id k8smaster
  4. ssh-copy-id k8snode1
  5. ssh-copy-id k8snode2
  6. # 2. Turn off the swap partition (kubeadm checks for this during init)
  7. # temporarily: swapoff -a
  8. # permanently: comment out the swap line in /etc/fstab
  9. [root@k8smaster ~]# cat /etc/fstab
  10. #
  11. # /etc/fstab
  12. # Created by anaconda on Thu Mar 23 15:22:20 2023
  13. #
  14. # Accessible filesystems, by reference, are maintained under '/dev/disk'
  15. # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
  16. #
  17. /dev/mapper/centos-root / xfs defaults 0 0
  18. UUID=00236222-82bd-4c15-9c97-e55643144ff3 /boot xfs defaults 0 0
  19. /dev/mapper/centos-home /home xfs defaults 0 0
  20. #/dev/mapper/centos-swap swap swap defaults 0 0
  21. # 3. Load the required kernel module
  22. modprobe br_netfilter
  23. echo "modprobe br_netfilter" >> /etc/profile
  24. cat > /etc/sysctl.d/k8s.conf <<EOF
  25. net.bridge.bridge-nf-call-ip6tables = 1
  26. net.bridge.bridge-nf-call-iptables = 1
  27. net.ipv4.ip_forward = 1
  28. EOF
  29. # reload so the settings take effect
  30. sysctl -p /etc/sysctl.d/k8s.conf
  31. # Why run modprobe br_netfilter?
  32. #    br_netfilter is the Linux kernel's bridge-netfilter module; once it is loaded, traffic crossing a bridge on the same host can be filtered and managed with iptables.
  33. #    It is needed because the node acts as a router/firewall and has to filter, forward or NAT packets coming from different interfaces.
  34. # Why set net.ipv4.ip_forward = 1?
  35. #    This kernel parameter controls IP forwarding: 0 means forwarding between interfaces is disabled, 1 means it is enabled, and Kubernetes networking requires it.
  36. # 4. Configure the Aliyun docker-ce repo
  37. yum install -y yum-utils
  38. yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
  39. yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet ipvsadm
  40. # 5. Configure the Aliyun repo needed to install the Kubernetes components
  41. [root@k8smaster ~]# vim /etc/yum.repos.d/kubernetes.repo
  42. [kubernetes]
  43. name=Kubernetes
  44. baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
  45. enabled=1
  46. gpgcheck=0
  47. # 6. Configure time synchronization
  48. [root@k8smaster ~]# crontab -e
  49. * */1 * * * /usr/sbin/ntpdate   cn.pool.ntp.org
  50. # restart the crond service
  51. [root@k8smaster ~]# service crond restart
  52. # 7. Install Docker
  53. yum install docker-ce-20.10.6 -y
  54. # start Docker and enable it at boot
  55. systemctl start docker && systemctl enable docker.service
  56. # 8. Configure the Docker registry mirrors and the systemd cgroup driver
  57. vim /etc/docker/daemon.json
  58. {
  59. "registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
  60. "exec-opts": ["native.cgroupdriver=systemd"]
  61. }
  62. # reload the configuration and restart Docker
  63. systemctl daemon-reload && systemctl restart docker
  64. # 9. Install the packages needed to initialize k8s
  65. yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
  66. # enable kubelet at boot
  67. systemctl enable kubelet
  68. # Note: what each package does
  69. # kubeadm:  the tool that initializes the k8s cluster
  70. # kubelet:  installed on every node in the cluster; it starts the pods
  71. # kubectl:  used to deploy and manage applications, inspect resources, and create, delete and update components
  72. # 10. Prepare the images kubeadm needs to initialize the cluster
  73. # Upload the offline image archive to k8smaster, k8snode1 and k8snode2, then load it
  74. docker load -i k8simage-1-20-6.tar.gz
  75. # copy the archive to the worker nodes
  76. [root@k8smaster ~]# scp k8simage-1-20-6.tar.gz root@k8snode1:/root
  77. [root@k8smaster ~]# scp k8simage-1-20-6.tar.gz root@k8snode2:/root
  78. # list the images
  79. [root@k8snode1 ~]# docker images
  80. REPOSITORY TAG IMAGE ID CREATED SIZE
  81. registry.aliyuncs.com/google_containers/kube-proxy v1.20.6 9a1ebfd8124d 2 years ago 118MB
  82. registry.aliyuncs.com/google_containers/kube-scheduler v1.20.6 b93ab2ec4475 2 years ago 47.3MB
  83. registry.aliyuncs.com/google_containers/kube-controller-manager v1.20.6 560dd11d4550 2 years ago 116MB
  84. registry.aliyuncs.com/google_containers/kube-apiserver v1.20.6 b05d611c1af9 2 years ago 122MB
  85. calico/pod2daemon-flexvol v3.18.0 2a22066e9588 2 years ago 21.7MB
  86. calico/node v3.18.0 5a7c4970fbc2 2 years ago 172MB
  87. calico/cni v3.18.0 727de170e4ce 2 years ago 131MB
  88. calico/kube-controllers v3.18.0 9a154323fbf7 2 years ago 53.4MB
  89. registry.aliyuncs.com/google_containers/etcd 3.4.13-0 0369cf4303ff 2 years ago 253MB
  90. registry.aliyuncs.com/google_containers/coredns 1.7.0 bfe3a36ebd25 3 years ago 45.2MB
  91. registry.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 3 years ago 683kB
  92. # 11. Use kubeadm to initialize the k8s cluster
  93. kubeadm config print init-defaults > kubeadm.yaml
  94. [root@k8smaster ~]# vim kubeadm.yaml
  95. apiVersion: kubeadm.k8s.io/v1beta2
  96. bootstrapTokens:
  97. - groups:
  98. - system:bootstrappers:kubeadm:default-node-token
  99. token: abcdef.0123456789abcdef
  100. ttl: 24h0m0s
  101. usages:
  102. - signing
  103. - authentication
  104. kind: InitConfiguration
  105. localAPIEndpoint:
  106. advertiseAddress: 192.168.2.104 # IP of the control-plane node
  107. bindPort: 6443
  108. nodeRegistration:
  109. criSocket: /var/run/dockershim.sock
  110. name: k8smaster # hostname of the control-plane node
  111. taints:
  112. - effect: NoSchedule
  113. key: node-role.kubernetes.io/master
  114. ---
  115. apiServer:
  116. timeoutForControlPlane: 4m0s
  117. apiVersion: kubeadm.k8s.io/v1beta2
  118. certificatesDir: /etc/kubernetes/pki
  119. clusterName: kubernetes
  120. controllerManager: {}
  121. dns:
  122. type: CoreDNS
  123. etcd:
  124. local:
  125. dataDir: /var/lib/etcd
  126. imageRepository: registry.aliyuncs.com/google_containers # change this to the Aliyun mirror registry
  127. kind: ClusterConfiguration
  128. kubernetesVersion: v1.20.6
  129. networking:
  130. dnsDomain: cluster.local
  131. serviceSubnet: 10.96.0.0/12
  132. podSubnet: 10.244.0.0/16 # pod network CIDR; this line has to be added
  133. scheduler: {}
  134. # append the following lines
  135. ---
  136. apiVersion: kubeproxy.config.k8s.io/v1alpha1
  137. kind: KubeProxyConfiguration
  138. mode: ipvs
  139. ---
  140. apiVersion: kubelet.config.k8s.io/v1beta1
  141. kind: KubeletConfiguration
  142. cgroupDriver: systemd
  143. # 12. Initialize k8s from kubeadm.yaml
  144. [root@k8smaster ~]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification
  145. mkdir -p $HOME/.kube
  146. sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  147. sudo chown $(id -u):$(id -g) $HOME/.kube/config
  148. kubeadm join 192.168.2.104:6443 --token abcdef.0123456789abcdef \
  149. --discovery-token-ca-cert-hash sha256:83421a7d1baa62269508259b33e6563e45fbeb9139a9c214cbe9fc107f07cb4c
  150. # 13. Expand the cluster: join the worker nodes
  151. [root@k8snode1 ~]# kubeadm join 192.168.2.104:6443 --token abcdef.0123456789abcdef \
  152. --discovery-token-ca-cert-hash sha256:83421a7d1baa62269508259b33e6563e45fbeb9139a9c214cbe9fc107f07cb4c
  153. [root@k8snode2 ~]# kubeadm join 192.168.2.104:6443 --token abcdef.0123456789abcdef \
  154. --discovery-token-ca-cert-hash sha256:83421a7d1baa62269508259b33e6563e45fbeb9139a9c214cbe9fc107f07cb4c
  155. # 14. Check the node status on k8smaster
  156. [root@k8smaster ~]# kubectl get nodes
  157. NAME STATUS ROLES AGE VERSION
  158. k8smaster NotReady control-plane,master 2m49s v1.20.6
  159. k8snode1 NotReady <none> 19s v1.20.6
  160. k8snode2 NotReady <none> 14s v1.20.6
  161. # 15. The ROLES column for k8snode1 and k8snode2 is empty; <none> means the node is a worker node.
  162. # Relabel k8snode1 and k8snode2 so their ROLES show as worker
  163. [root@k8smaster ~]# kubectl label node k8snode1 node-role.kubernetes.io/worker=worker
  164. node/k8snode1 labeled
  165. [root@k8smaster ~]# kubectl label node k8snode2 node-role.kubernetes.io/worker=worker
  166. node/k8snode2 labeled
  167. [root@k8smaster ~]# kubectl get nodes
  168. NAME STATUS ROLES AGE VERSION
  169. k8smaster NotReady control-plane,master 2m43s v1.20.6
  170. k8snode1 NotReady worker 2m15s v1.20.6
  171. k8snode2 NotReady worker 2m11s v1.20.6
  172. # Note: every node is still NotReady because no network plugin has been installed
  173. # 16. Install the Kubernetes network add-on: Calico
  174. # Upload calico.yaml to k8smaster and install the Calico network plugin from the yaml file.
  175. wget https://docs.projectcalico.org/v3.23/manifests/calico.yaml --no-check-certificate
  176. [root@k8smaster ~]# kubectl apply -f calico.yaml
  177. configmap/calico-config created
  178. customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
  179. customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
  180. customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
  181. customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
  182. customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
  183. customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
  184. customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
  185. customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
  186. customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
  187. customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
  188. customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
  189. customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
  190. customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
  191. customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
  192. customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
  193. clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
  194. clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
  195. clusterrole.rbac.authorization.k8s.io/calico-node created
  196. clusterrolebinding.rbac.authorization.k8s.io/calico-node created
  197. daemonset.apps/calico-node created
  198. serviceaccount/calico-node created
  199. deployment.apps/calico-kube-controllers created
  200. serviceaccount/calico-kube-controllers created
  201. poddisruptionbudget.policy/calico-kube-controllers created
  202. # check the cluster status again
  203. [root@k8smaster ~]# kubectl get nodes
  204. NAME STATUS ROLES AGE VERSION
  205. k8smaster Ready control-plane,master 5m57s v1.20.6
  206. k8snode1 Ready worker 3m27s v1.20.6
  207. k8snode2 Ready worker 3m22s v1.20.6
  208. # STATUS is Ready, so the k8s cluster is up and running
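
One practical note: the bootstrap token in kubeadm.yaml has a ttl of 24h0m0s, so the kubeadm join command printed above expires after a day. If another worker needs to be added later, a fresh join command can be generated on the control-plane node; a minimal sketch using standard kubeadm commands (not part of the original run):

# on k8smaster: create a new token and print the matching join command
kubeadm token create --print-join-command
# list existing tokens and their expiry times
kubeadm token list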

Step 2. Deploy ansible to automate the installation and maintenance of the related software, deploy the firewall server, and deploy the bastion host.

  1. # 1. Set up a passwordless channel: generate a key pair on the ansible host
  2. [root@ansible ~]# ssh-keygen -t ecdsa
  3. Generating public/private ecdsa key pair.
  4. Enter file in which to save the key (/root/.ssh/id_ecdsa):
  5. Created directory '/root/.ssh'.
  6. Enter passphrase (empty for no passphrase):
  7. Enter same passphrase again:
  8. Your identification has been saved in /root/.ssh/id_ecdsa.
  9. Your public key has been saved in /root/.ssh/id_ecdsa.pub.
  10. The key fingerprint is:
  11. SHA256:FNgCSDVk6i3foP88MfekA2UzwNn6x3kyi7V+mLdoxYE root@ansible
  12. The key's randomart image is:
  13. +---[ECDSA 256]---+
  14. | ..+*o =. |
  15. | .o .* o. |
  16. | . +. . |
  17. | . . ..= E . |
  18. | o o +S+ o . |
  19. | + o+ o O + |
  20. | . . .= B X |
  21. | . .. + B.o |
  22. | ..o. +oo.. |
  23. +----[SHA256]-----+
  24. [root@ansible ~]# cd /root/.ssh
  25. [root@ansible .ssh]# ls
  26. id_ecdsa id_ecdsa.pub
  27. # 2. Push the public key into the root home directory of every server
  28. # Every server must run sshd, open port 22 and allow root login
  29. # push the public key to k8smaster
  30. [root@ansible .ssh]# ssh-copy-id -i id_ecdsa.pub root@192.168.2.104
  31. /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "id_ecdsa.pub"
  32. The authenticity of host '192.168.2.104 (192.168.2.104)' can't be established.
  33. ECDSA key fingerprint is SHA256:l7LRfACELrI6mU2XvYaCz+sDBWiGkYnAecPgnxJxdvE.
  34. ECDSA key fingerprint is MD5:b6:f7:e1:c5:23:24:5c:16:1f:66:42:ba:80:a6:3c:fd.
  35. Are you sure you want to continue connecting (yes/no)? yes
  36. /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
  37. /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
  38. root@192.168.2.104's password:
  39. Number of key(s) added: 1
  40. Now try logging into the machine, with: "ssh 'root@192.168.2.104'"
  41. and check to make sure that only the key(s) you wanted were added.
  42. # push the public key to the k8s nodes
  43. [root@ansible .ssh]# ssh-copy-id -i id_ecdsa.pub root@192.168.2.111
  44. /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "id_ecdsa.pub"
  45. The authenticity of host '192.168.2.111 (192.168.2.111)' can't be established.
  46. ECDSA key fingerprint is SHA256:l7LRfACELrI6mU2XvYaCz+sDBWiGkYnAecPgnxJxdvE.
  47. ECDSA key fingerprint is MD5:b6:f7:e1:c5:23:24:5c:16:1f:66:42:ba:80:a6:3c:fd.
  48. Are you sure you want to continue connecting (yes/no)? yes
  49. /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
  50. /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
  51. root@192.168.2.111's password:
  52. Number of key(s) added: 1
  53. Now try logging into the machine, with: "ssh 'root@192.168.2.111'"
  54. and check to make sure that only the key(s) you wanted were added.
  55. [root@ansible .ssh]# ssh-copy-id -i id_ecdsa.pub root@192.168.2.112
  56. /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "id_ecdsa.pub"
  57. The authenticity of host '192.168.2.112 (192.168.2.112)' can't be established.
  58. ECDSA key fingerprint is SHA256:l7LRfACELrI6mU2XvYaCz+sDBWiGkYnAecPgnxJxdvE.
  59. ECDSA key fingerprint is MD5:b6:f7:e1:c5:23:24:5c:16:1f:66:42:ba:80:a6:3c:fd.
  60. Are you sure you want to continue connecting (yes/no)? yes
  61. /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
  62. /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
  63. root@192.168.2.112's password:
  64. Number of key(s) added: 1
  65. Now try logging into the machine, with: "ssh 'root@192.168.2.112'"
  66. and check to make sure that only the key(s) you wanted were added.
  67. # verify that passwordless key authentication works
  68. [root@ansible .ssh]# ssh root@192.168.2.121
  69. Last login: Tue Jun 20 10:33:33 2023 from 192.168.2.240
  70. [root@nfs ~]# exit
  71. logout
  72. Connection to 192.168.2.121 closed.
  73. [root@ansible .ssh]# ssh root@192.168.2.112
  74. Last login: Tue Jun 20 10:34:18 2023 from 192.168.2.240
  75. [root@k8snode2 ~]# exit
  76. logout
  77. Connection to 192.168.2.112 closed.
  78. [root@ansible .ssh]#
  79. # 3. Install ansible on the management node
  80. # Any machine with Python 2.6 or Python 2.7 installed can run Ansible (a Windows system cannot be the control host).
  81. [root@ansible .ssh]# yum install epel-release -y
  82. [root@ansible .ssh]# yum install ansible -y
  83. [root@ansible ~]# ansible --version
  84. ansible 2.9.27
  85. config file = /etc/ansible/ansible.cfg
  86. configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  87. ansible python module location = /usr/lib/python2.7/site-packages/ansible
  88. executable location = /usr/bin/ansible
  89. python version = 2.7.5 (default, Oct 14 2020, 14:45:30) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
  90. # 4. Write the host inventory
  91. [root@ansible .ssh]# cd /etc/ansible
  92. [root@ansible ansible]# ls
  93. ansible.cfg hosts roles
  94. [root@ansible ansible]# vim hosts
  95. ## 192.168.1.110
  96. [k8smaster]
  97. 192.168.2.104
  98. [k8snode]
  99. 192.168.2.111
  100. 192.168.2.112
  101. [nfs]
  102. 192.168.2.121
  103. [gitlab]
  104. 192.168.2.124
  105. [harbor]
  106. 192.168.2.106
  107. [zabbix]
  108. 192.168.2.117
  109. # test
  110. [root@ansible ansible]# ansible all -m shell -a "ip add"
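
The ad-hoc "ip add" test above only proves connectivity. The routine software installation this step is responsible for can be captured in a playbook and replayed against any group from the inventory. A minimal sketch, assuming a hypothetical file /etc/ansible/install_base.yaml and the host groups defined above:

cat > /etc/ansible/install_base.yaml <<'EOF'
- hosts: nfs:gitlab:harbor:zabbix   # group names from /etc/ansible/hosts
  remote_user: root
  tasks:
    - name: install commonly needed packages
      yum:
        name:
          - nfs-utils
          - wget
          - vim
        state: present
EOF
# check the syntax, then run it against the inventory
ansible-playbook /etc/ansible/install_base.yaml --syntax-check
ansible-playbook /etc/ansible/install_base.yaml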

Deploy the bastion host

JumpServer can be installed in just two steps:
prepare a 64-bit Linux host with at least 2 cores and 4 GB of RAM and Internet access,
then run the following one-line installer as root.

curl -sSL https://resource.fit2cloud.com/jumpserver/jumpserver/releases/latest/download/quick_start.sh | bash

Deploy the firewall server

  1. # Shut the VM down and add a second network interface (ens37)
  2. # Script that implements the SNAT/DNAT functions
  3. [root@firewalld ~]# cat snat_dnat.sh
  4. #!/bin/bash
  5. # open route
  6. echo 1 >/proc/sys/net/ipv4/ip_forward
  7. # stop firewall
  8. systemctl stop firewalld
  9. systemctl disable firewalld
  10. # clear iptables rule
  11. iptables -F
  12. iptables -t nat -F
  13. # enable snat
  14. iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o ens33 -j MASQUERADE
  15. # every packet arriving from the internal 192.168.2.0/24 network is masqueraded to the IP of the ens33 interface; the benefit is that the rule keeps working no matter which public IP ens33 currently has
  16. # enable dnat
  17. iptables -t nat -A PREROUTING -d 192.168.0.169 -i ens33 -p tcp --dport 2233 -j DNAT --to-destination 192.168.2.104:22
  18. # open web 80
  19. iptables -t nat -A PREROUTING -d 192.168.0.169 -i ens33 -p tcp --dport 80 -j DNAT --to-destination 192.168.2.104:80
  20. # run on the web server
  21. [root@k8smaster ~]# cat open_app.sh
  22. #!/bin/bash
  23. # open ssh
  24. iptables -t filter -A INPUT -p tcp --dport 22 -j ACCEPT
  25. # open dns
  26. iptables -t filter -A INPUT -p udp --dport 53 -s 192.168.2.0/24 -j ACCEPT
  27. # open dhcp
  28. iptables -t filter -A INPUT -p udp --dport 67 -j ACCEPT
  29. # open http/https
  30. iptables -t filter -A INPUT -p tcp --dport 80 -j ACCEPT
  31. iptables -t filter -A INPUT -p tcp --dport 443 -j ACCEPT
  32. # open mysql
  33. iptables -t filter -A INPUT -p tcp --dport 3306 -j ACCEPT
  34. # default policy DROP
  35. iptables -t filter -P INPUT DROP
  36. # drop icmp request
  37. iptables -t filter -A INPUT -p icmp --icmp-type 8 -j DROP
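
The rules written by both scripts live only in the running kernel and disappear after a reboot. A simple way to reapply them at boot, mirroring the /etc/rc.local trick used later for Harbor, is sketched below (it assumes the scripts stay under /root):

# on the firewalld server
chmod +x /root/snat_dnat.sh /etc/rc.d/rc.local
echo "bash /root/snat_dnat.sh" >> /etc/rc.local
# on the web server (k8smaster)
chmod +x /root/open_app.sh /etc/rc.d/rc.local
echo "bash /root/open_app.sh" >> /etc/rc.local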

Step 3. Deploy an NFS server that provides data to the whole web cluster; every web pod accesses it through PVs, PVCs and volume mounts.

  1. # 1. Set up the NFS server
  2. [root@nfs ~]# yum install nfs-utils -y
  3. # It is best to install nfs-utils on every node in the k8s cluster as well, because creating NFS-backed volumes on a node needs NFS client support
  4. [root@k8smaster ~]# yum install nfs-utils -y
  5. [root@k8smaster ~]# service nfs restart
  6. Redirecting to /bin/systemctl restart nfs.service
  7. [root@k8smaster ~]# ps aux |grep nfs
  8. root 87368 0.0 0.0 0 0 ? S< 16:49 0:00 [nfsd4_callbacks]
  9. root 87374 0.0 0.0 0 0 ? S 16:49 0:00 [nfsd]
  10. root 87375 0.0 0.0 0 0 ? S 16:49 0:00 [nfsd]
  11. root 87376 0.0 0.0 0 0 ? S 16:49 0:00 [nfsd]
  12. root 87377 0.0 0.0 0 0 ? S 16:49 0:00 [nfsd]
  13. root 87378 0.0 0.0 0 0 ? S 16:49 0:00 [nfsd]
  14. root 87379 0.0 0.0 0 0 ? S 16:49 0:00 [nfsd]
  15. root 87380 0.0 0.0 0 0 ? S 16:49 0:00 [nfsd]
  16. root 87381 0.0 0.0 0 0 ? S 16:49 0:00 [nfsd]
  17. root 96648 0.0 0.0 112824 988 pts/0 S+ 17:02 0:00 grep --color=auto nfs
  18. # 2. Define the shared directory
  19. [root@nfs ~]# vim /etc/exports
  20. [root@nfs ~]# cat /etc/exports
  21. /web 192.168.2.0/24(rw,no_root_squash,sync)
  22. # 3. Create the shared directory and an index.html
  23. [root@nfs ~]# mkdir /web
  24. [root@nfs ~]# cd /web
  25. [root@nfs web]# echo "welcome to changsha" >index.html
  26. [root@nfs web]# ls
  27. index.html
  28. [root@nfs web]# ll -d /web
  29. drwxr-xr-x. 2 root root 24 Jun 18 16:46 /web
  30. # 4. Refresh NFS / re-export the shared directories
  31. [root@nfs ~]# exportfs -r # re-export all shared directories
  32. [root@nfs ~]# exportfs -v # show the exported directories
  33. /web 192.168.2.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
  34. # 5. Restart NFS and enable it at boot
  35. [root@nfs web]# systemctl restart nfs && systemctl enable nfs
  36. Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
  37. # 6. On any node in the k8s cluster, test whether the NFS share can be mounted
  38. [root@k8snode1 ~]# mkdir /node1_nfs
  39. [root@k8snode1 ~]# mount 192.168.2.121:/web /node1_nfs
  40. You have new mail in /var/spool/mail/root
  41. [root@k8snode1 ~]# df -Th|grep nfs
  42. 192.168.2.121:/web nfs4 17G 1.5G 16G 9% /node1_nfs
  43. # 7. Unmount it
  44. [root@k8snode1 ~]# umount /node1_nfs
  45. # 8. Create a PV backed by the NFS share
  46. [root@k8smaster pv]# vim nfs-pv.yml
  47. [root@k8smaster pv]# cat nfs-pv.yml
  48. apiVersion: v1
  49. kind: PersistentVolume
  50. metadata:
  51. name: pv-web
  52. labels:
  53. type: pv-web
  54. spec:
  55. capacity:
  56. storage: 10Gi
  57. accessModes:
  58. - ReadWriteMany
  59. storageClassName: nfs # storage class name; the PVC below refers to it
  60. nfs:
  61. path: "/web" # directory exported by the NFS server
  62. server: 192.168.2.121 # IP address of the NFS server
  63. readOnly: false # mount read-write
  64. [root@k8smaster pv]# kubectl apply -f nfs-pv.yml
  65. persistentvolume/pv-web created
  66. [root@k8smaster pv]# kubectl get pv
  67. NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
  68. pv-web 10Gi RWX Retain Available nfs 5s
  69. # 9. Create a PVC that uses the PV
  70. [root@k8smaster pv]# vim nfs-pvc.yml
  71. [root@k8smaster pv]# cat nfs-pvc.yml
  72. apiVersion: v1
  73. kind: PersistentVolumeClaim
  74. metadata:
  75. name: pvc-web
  76. spec:
  77. accessModes:
  78. - ReadWriteMany
  79. resources:
  80. requests:
  81. storage: 1Gi
  82. storageClassName: nfs # bind to the nfs storage class
  83. [root@k8smaster pv]# kubectl apply -f pvc-nfs.yaml
  84. persistentvolumeclaim/sc-nginx-pvc created
  85. [root@k8smaster pv]# kubectl apply -f nfs-pvc.yml
  86. persistentvolumeclaim/pvc-web created
  87. [root@k8smaster pv]# kubectl get pvc
  88. NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
  89. pvc-web Bound pv-web 10Gi RWX nfs 6s
  90. # 10. Create pods that use the PVC
  91. [root@k8smaster pv]# vim nginx-deployment.yaml
  92. [root@k8smaster pv]# cat nginx-deployment.yaml
  93. apiVersion: apps/v1
  94. kind: Deployment
  95. metadata:
  96. name: nginx-deployment
  97. labels:
  98. app: nginx
  99. spec:
  100. replicas: 3
  101. selector:
  102. matchLabels:
  103. app: nginx
  104. template:
  105. metadata:
  106. labels:
  107. app: nginx
  108. spec:
  109. volumes:
  110. - name: sc-pv-storage-nfs
  111. persistentVolumeClaim:
  112. claimName: pvc-web
  113. containers:
  114. - name: sc-pv-container-nfs
  115. image: nginx
  116. imagePullPolicy: IfNotPresent
  117. ports:
  118. - containerPort: 80
  119. name: "http-server"
  120. volumeMounts:
  121. - mountPath: "/usr/share/nginx/html"
  122. name: sc-pv-storage-nfs
  123. [root@k8smaster pv]# kubectl apply -f nginx-deployment.yaml
  124. deployment.apps/nginx-deployment created
  125. [root@k8smaster pv]# kubectl get pod -o wide
  126. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  127. nginx-deployment-76855d4d79-2q4vh 1/1 Running 0 42s 10.244.185.194 k8snode2 <none> <none>
  128. nginx-deployment-76855d4d79-mvgq7 1/1 Running 0 42s 10.244.185.195 k8snode2 <none> <none>
  129. nginx-deployment-76855d4d79-zm8v4 1/1 Running 0 42s 10.244.249.3 k8snode1 <none> <none>
  130. # 11. Test access
  131. [root@k8smaster pv]# curl 10.244.185.194
  132. welcome to changsha
  133. [root@k8smaster pv]# curl 10.244.185.195
  134. welcome to changsha
  135. [root@k8smaster pv]# curl 10.244.249.3
  136. welcome to changsha
  137. [root@k8snode1 ~]# curl 10.244.185.194
  138. welcome to changsha
  139. [root@k8snode1 ~]# curl 10.244.185.195
  140. welcome to changsha
  141. [root@k8snode1 ~]# curl 10.244.249.3
  142. welcome to changsha
  143. [root@k8snode2 ~]# curl 10.244.185.194
  144. welcome to changsha
  145. [root@k8snode2 ~]# curl 10.244.185.195
  146. welcome to changsha
  147. [root@k8snode2 ~]# curl 10.244.249.3
  148. welcome to changsha
  149. # 12. Change the content on the NFS server
  150. [root@nfs web]# echo "hello,world" >> index.html
  151. [root@nfs web]# cat index.html
  152. welcome to changsha
  153. hello,world
  154. # 13. Access it again
  155. [root@k8snode1 ~]# curl 10.244.249.3
  156. welcome to changsha
  157. hello,world
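
The curl tests above go straight to pod IPs, which only works from inside the cluster. To reach the NFS-backed pages from outside, a Service can be put in front of the Deployment; a minimal sketch (the file name nginx-svc.yaml and NodePort 30080 are illustrative, not part of the original setup):

cat > nginx-svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx          # matches the pod labels of nginx-deployment
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080
EOF
kubectl apply -f nginx-svc.yaml
curl http://192.168.2.111:30080    # any node IP works for a NodePort Service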

Step 4. Build the CI/CD environment: deploy GitLab, Jenkins and Harbor to handle code releases, image builds, data backups and the other pipeline work.

1. Deploy GitLab

  1. # Deploy GitLab
  2. https://gitlab.cn/install/
  3. [root@localhost ~]# hostnamectl set-hostname gitlab
  4. [root@localhost ~]# su - root
  5. su - root
  6. Last login: Sun Jun 18 18:28:08 CST 2023 from 192.168.2.240 pts/0
  7. [root@gitlab ~]# cd /etc/sysconfig/network-scripts/
  8. [root@gitlab network-scripts]# vim ifcfg-ens33
  9. [root@gitlab network-scripts]# service network restart
  10. Restarting network (via systemctl): [ OK ]
  11. [root@gitlab network-scripts]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
  12. [root@gitlab network-scripts]# service firewalld stop && systemctl disable firewalld
  13. Redirecting to /bin/systemctl stop firewalld.service
  14. Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
  15. Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
  16. [root@gitlab network-scripts]# reboot
  17. [root@gitlab ~]# getenforce
  18. Disabled
  19. # 1. Install and configure the required dependencies
  20. yum install -y curl policycoreutils-python openssh-server perl
  21. # 2. Configure the JiHu GitLab package repository mirror
  22. [root@gitlab ~]# curl -fsSL https://packages.gitlab.cn/repository/raw/scripts/setup.sh | /bin/bash
  23. ==> Detected OS centos
  24. ==> Add yum repo file to /etc/yum.repos.d/gitlab-jh.repo
  25. [gitlab-jh]
  26. name=JiHu GitLab
  27. baseurl=https://packages.gitlab.cn/repository/el/$releasever/
  28. gpgcheck=0
  29. gpgkey=https://packages.gitlab.cn/repository/raw/gpg/public.gpg.key
  30. priority=1
  31. enabled=1
  32. ==> Generate yum cache for gitlab-jh
  33. ==> Successfully added gitlab-jh repo. To install JiHu GitLab, run "sudo yum/dnf install gitlab-jh".
  34. [root@gitlab ~]# yum install gitlab-jh -y
  35. Thank you for installing JiHu GitLab!
  36. GitLab was unable to detect a valid hostname for your instance.
  37. Please configure a URL for your JiHu GitLab instance by setting `external_url`
  38. configuration in /etc/gitlab/gitlab.rb file.
  39. Then, you can start your JiHu GitLab instance by running the following command:
  40. sudo gitlab-ctl reconfigure
  41. For a comprehensive list of configuration options please see the Omnibus GitLab readme
  42. https://jihulab.com/gitlab-cn/omnibus-gitlab/-/blob/main-jh/README.md
  43. Help us improve the installation experience, let us know how we did with a 1 minute survey:
  44. https://wj.qq.com/s2/10068464/dc66
  45. [root@gitlab ~]# vim /etc/gitlab/gitlab.rb
  46. external_url 'http://myweb.first.com'
  47. [root@gitlab ~]# gitlab-ctl reconfigure
  48. Notes:
  49. Default admin account has been configured with following details:
  50. Username: root
  51. Password: You didn't opt-in to print initial root password to STDOUT.
  52. Password stored to /etc/gitlab/initial_root_password. This file will be cleaned up in first reconfigure run after 24 hours.
  53. NOTE: Because these credentials might be present in your log files in plain text, it is highly recommended to reset the password following https://docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password.
  54. gitlab Reconfigured!
  55. # look up the initial root password
  56. [root@gitlab ~]# cat /etc/gitlab/initial_root_password
  57. # WARNING: This value is valid only in the following conditions
  58. # 1. If provided manually (either via `GITLAB_ROOT_PASSWORD` environment variable or via `gitlab_rails['initial_root_password']` setting in `gitlab.rb`, it was provided before database was seeded for the first time (usually, the first reconfigure run).
  59. # 2. Password hasn't been changed manually, either via UI or via command line.
  60. #
  61. # If the password shown here doesn't work, you must reset the admin password following https://docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password.
  62. Password: Al5rgYomhXDz5kNfDl3y8qunrSX334aZZxX5vONJ05s=
  63. # NOTE: This file will be automatically deleted in the first reconfigure run after 24 hours.
  64. # After logging in, the UI language can be switched
  65. # under the user's profile/preferences page,
  66. # and the password can be changed there as well
  67. [root@gitlab ~]# gitlab-rake gitlab:env:info
  68. System information
  69. System:
  70. Proxy: no
  71. Current User: git
  72. Using RVM: no
  73. Ruby Version: 3.0.6p216
  74. Gem Version: 3.4.13
  75. Bundler Version:2.4.13
  76. Rake Version: 13.0.6
  77. Redis Version: 6.2.11
  78. Sidekiq Version:6.5.7
  79. Go Version: unknown
  80. GitLab information
  81. Version: 16.0.4-jh
  82. Revision: c2ed99db36f
  83. Directory: /opt/gitlab/embedded/service/gitlab-rails
  84. DB Adapter: PostgreSQL
  85. DB Version: 13.11
  86. URL: http://myweb.first.com
  87. HTTP Clone URL: http://myweb.first.com/some-group/some-project.git
  88. SSH Clone URL: git@myweb.first.com:some-group/some-project.git
  89. Elasticsearch: no
  90. Geo: no
  91. Using LDAP: no
  92. Using Omniauth: yes
  93. Omniauth Providers:
  94. GitLab Shell
  95. Version: 14.20.0
  96. Repository storages:
  97. - default: unix:/var/opt/gitlab/gitaly/gitaly.socket
  98. GitLab Shell path: /opt/gitlab/embedded/service/gitlab-shell
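
With GitLab reconfigured, code can be pushed to it so the later Jenkins steps have something to build. A minimal sketch from any machine that can reach the GitLab server (the group/project path root/scweb is illustrative; create the project in the web UI first):

# make the self-hosted domain resolvable, then push the Go sources
echo "192.168.2.124 myweb.first.com" >> /etc/hosts
yum install git -y
git clone http://myweb.first.com/root/scweb.git
cd scweb
git add .
git commit -m "first commit"
git push origin main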

2. Deploy Jenkins

  1. # Deploy Jenkins inside k8s
  2. # 1. Install git
  3. [root@k8smaster jenkins]# yum install git -y
  4. # 2. Clone the yaml manifests
  5. [root@k8smaster jenkins]# git clone https://github.com/scriptcamp/kubernetes-jenkins
  6. Cloning into 'kubernetes-jenkins'...
  7. remote: Enumerating objects: 16, done.
  8. remote: Counting objects: 100% (7/7), done.
  9. remote: Compressing objects: 100% (7/7), done.
  10. remote: Total 16 (delta 1), reused 0 (delta 0), pack-reused 9
  11. Unpacking objects: 100% (16/16), done.
  12. [root@k8smaster jenkins]# ls
  13. kubernetes-jenkins
  14. [root@k8smaster jenkins]# cd kubernetes-jenkins/
  15. [root@k8smaster kubernetes-jenkins]# ls
  16. deployment.yaml namespace.yaml README.md serviceAccount.yaml service.yaml volume.yaml
  17. # 3. Create the namespace
  18. [root@k8smaster kubernetes-jenkins]# cat namespace.yaml
  19. apiVersion: v1
  20. kind: Namespace
  21. metadata:
  22. name: devops-tools
  23. [root@k8smaster kubernetes-jenkins]# kubectl apply -f namespace.yaml
  24. namespace/devops-tools created
  25. [root@k8smaster kubernetes-jenkins]# kubectl get ns
  26. NAME STATUS AGE
  27. default Active 22h
  28. devops-tools Active 19s
  29. ingress-nginx Active 139m
  30. kube-node-lease Active 22h
  31. kube-public Active 22h
  32. kube-system Active 22h
  33. # 4. Create the service account, cluster role and role binding
  34. [root@k8smaster kubernetes-jenkins]# cat serviceAccount.yaml
  35. ---
  36. apiVersion: rbac.authorization.k8s.io/v1
  37. kind: ClusterRole
  38. metadata:
  39. name: jenkins-admin
  40. rules:
  41. - apiGroups: [""]
  42. resources: ["*"]
  43. verbs: ["*"]
  44. ---
  45. apiVersion: v1
  46. kind: ServiceAccount
  47. metadata:
  48. name: jenkins-admin
  49. namespace: devops-tools
  50. ---
  51. apiVersion: rbac.authorization.k8s.io/v1
  52. kind: ClusterRoleBinding
  53. metadata:
  54. name: jenkins-admin
  55. roleRef:
  56. apiGroup: rbac.authorization.k8s.io
  57. kind: ClusterRole
  58. name: jenkins-admin
  59. subjects:
  60. - kind: ServiceAccount
  61. name: jenkins-admin
  62. [root@k8smaster kubernetes-jenkins]# kubectl apply -f serviceAccount.yaml
  63. clusterrole.rbac.authorization.k8s.io/jenkins-admin created
  64. serviceaccount/jenkins-admin created
  65. clusterrolebinding.rbac.authorization.k8s.io/jenkins-admin created
  66. # 5. Create the volume that stores the Jenkins data
  67. [root@k8smaster kubernetes-jenkins]# cat volume.yaml
  68. kind: StorageClass
  69. apiVersion: storage.k8s.io/v1
  70. metadata:
  71. name: local-storage
  72. provisioner: kubernetes.io/no-provisioner
  73. volumeBindingMode: WaitForFirstConsumer
  74. ---
  75. apiVersion: v1
  76. kind: PersistentVolume
  77. metadata:
  78. name: jenkins-pv-volume
  79. labels:
  80. type: local
  81. spec:
  82. storageClassName: local-storage
  83. claimRef:
  84. name: jenkins-pv-claim
  85. namespace: devops-tools
  86. capacity:
  87. storage: 10Gi
  88. accessModes:
  89. - ReadWriteOnce
  90. local:
  91. path: /mnt
  92. nodeAffinity:
  93. required:
  94. nodeSelectorTerms:
  95. - matchExpressions:
  96. - key: kubernetes.io/hostname
  97. operator: In
  98. values:
  99. - k8snode1 # change this to the name of a worker node in your cluster
  100. ---
  101. apiVersion: v1
  102. kind: PersistentVolumeClaim
  103. metadata:
  104. name: jenkins-pv-claim
  105. namespace: devops-tools
  106. spec:
  107. storageClassName: local-storage
  108. accessModes:
  109. - ReadWriteOnce
  110. resources:
  111. requests:
  112. storage: 3Gi
  113. [root@k8smaster kubernetes-jenkins]# kubectl apply -f volume.yaml
  114. storageclass.storage.k8s.io/local-storage created
  115. persistentvolume/jenkins-pv-volume created
  116. persistentvolumeclaim/jenkins-pv-claim created
  117. [root@k8smaster kubernetes-jenkins]# kubectl get pv
  118. NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
  119. jenkins-pv-volume 10Gi RWO Retain Bound devops-tools/jenkins-pv-claim local-storage 33s
  120. pv-web 10Gi RWX Retain Bound default/pvc-web nfs 21h
  121. [root@k8smaster kubernetes-jenkins]# kubectl describe pv jenkins-pv-volume
  122. Name: jenkins-pv-volume
  123. Labels: type=local
  124. Annotations: <none>
  125. Finalizers: [kubernetes.io/pv-protection]
  126. StorageClass: local-storage
  127. Status: Bound
  128. Claim: devops-tools/jenkins-pv-claim
  129. Reclaim Policy: Retain
  130. Access Modes: RWO
  131. VolumeMode: Filesystem
  132. Capacity: 10Gi
  133. Node Affinity:
  134. Required Terms:
  135. Term 0: kubernetes.io/hostname in [k8snode1]
  136. Message:
  137. Source:
  138. Type: LocalVolume (a persistent volume backed by local storage on a node)
  139. Path: /mnt
  140. Events: <none>
  141. # 6. Deploy Jenkins
  142. [root@k8smaster kubernetes-jenkins]# cat deployment.yaml
  143. apiVersion: apps/v1
  144. kind: Deployment
  145. metadata:
  146. name: jenkins
  147. namespace: devops-tools
  148. spec:
  149. replicas: 1
  150. selector:
  151. matchLabels:
  152. app: jenkins-server
  153. template:
  154. metadata:
  155. labels:
  156. app: jenkins-server
  157. spec:
  158. securityContext:
  159. fsGroup: 1000
  160. runAsUser: 1000
  161. serviceAccountName: jenkins-admin
  162. containers:
  163. - name: jenkins
  164. image: jenkins/jenkins:lts
  165. imagePullPolicy: IfNotPresent
  166. resources:
  167. limits:
  168. memory: "2Gi"
  169. cpu: "1000m"
  170. requests:
  171. memory: "500Mi"
  172. cpu: "500m"
  173. ports:
  174. - name: httpport
  175. containerPort: 8080
  176. - name: jnlpport
  177. containerPort: 50000
  178. livenessProbe:
  179. httpGet:
  180. path: "/login"
  181. port: 8080
  182. initialDelaySeconds: 90
  183. periodSeconds: 10
  184. timeoutSeconds: 5
  185. failureThreshold: 5
  186. readinessProbe:
  187. httpGet:
  188. path: "/login"
  189. port: 8080
  190. initialDelaySeconds: 60
  191. periodSeconds: 10
  192. timeoutSeconds: 5
  193. failureThreshold: 3
  194. volumeMounts:
  195. - name: jenkins-data
  196. mountPath: /var/jenkins_home
  197. volumes:
  198. - name: jenkins-data
  199. persistentVolumeClaim:
  200. claimName: jenkins-pv-claim
  201. [root@k8smaster kubernetes-jenkins]# kubectl apply -f deployment.yaml
  202. deployment.apps/jenkins created
  203. [root@k8smaster kubernetes-jenkins]# kubectl get deploy -n devops-tools
  204. NAME READY UP-TO-DATE AVAILABLE AGE
  205. jenkins 1/1 1 1 5m36s
  206. [root@k8smaster kubernetes-jenkins]# kubectl get pod -n devops-tools
  207. NAME READY STATUS RESTARTS AGE
  208. jenkins-7fdc8dd5fd-bg66q 1/1 Running 0 19s
  209. # 7. Create the Service that exposes the Jenkins pod
  210. [root@k8smaster kubernetes-jenkins]# cat service.yaml
  211. apiVersion: v1
  212. kind: Service
  213. metadata:
  214. name: jenkins-service
  215. namespace: devops-tools
  216. annotations:
  217. prometheus.io/scrape: 'true'
  218. prometheus.io/path: /
  219. prometheus.io/port: '8080'
  220. spec:
  221. selector:
  222. app: jenkins-server
  223. type: NodePort
  224. ports:
  225. - port: 8080
  226. targetPort: 8080
  227. nodePort: 32000
  228. [root@k8smaster kubernetes-jenkins]# kubectl apply -f service.yaml
  229. service/jenkins-service created
  230. [root@k8smaster kubernetes-jenkins]# kubectl get svc -n devops-tools
  231. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  232. jenkins-service NodePort 10.104.76.252 <none> 8080:32000/TCP 24s
  233. # 8. Access Jenkins from a browser on the Windows machine: node IP + NodePort
  234. http://192.168.2.104:32000/login?from=%2F
  235. # 9. Get the initial login password from inside the pod
  236. [root@k8smaster kubernetes-jenkins]# kubectl exec -it jenkins-7fdc8dd5fd-bg66q -n devops-tools -- bash
  237. bash-5.1$ cat /var/jenkins_home/secrets/initialAdminPassword
  238. b0232e2dad164f89ad2221e4c46b0d46
  239. # change the password
  240. [root@k8smaster kubernetes-jenkins]# kubectl get pod -n devops-tools
  241. NAME READY STATUS RESTARTS AGE
  242. jenkins-7fdc8dd5fd-5nn7m 1/1 Running 0 91s
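
The CI/CD chain this step is building is: GitLab holds the code, Jenkins builds an image and pushes it to Harbor, and Kubernetes rolls the Deployment over to the new image. A sketch of the shell steps such a Jenkins job would run (the repository URL, the v3 tag and the myweb Deployment created in step five are illustrative assumptions):

# build stage: fetch the code and build the image
git clone http://myweb.first.com/root/scweb.git && cd scweb
docker build -t 192.168.2.106:5000/test/web:v3 .
# push stage: push the image to harbor
docker login 192.168.2.106:5000 -u admin -p Harbor12345
docker push 192.168.2.106:5000/test/web:v3
# deploy stage: roll the k8s Deployment over to the new image
kubectl set image deployment/myweb myweb=192.168.2.106:5000/test/web:v3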

3. Deploy Harbor

  1. # Prerequisite: docker and docker compose are installed
  2. # 1. Configure the Aliyun repo
  3. yum install -y yum-utils
  4. yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
  5. # 2. Install Docker
  6. yum install docker-ce-20.10.6 -y
  7. # start Docker and enable it at boot
  8. systemctl start docker && systemctl enable docker.service
  9. # 3. Check the docker and docker compose versions
  10. [root@harbor ~]# docker version
  11. Client: Docker Engine - Community
  12. Version: 24.0.2
  13. API version: 1.41 (downgraded from 1.43)
  14. Go version: go1.20.4
  15. Git commit: cb74dfc
  16. Built: Thu May 25 21:55:21 2023
  17. OS/Arch: linux/amd64
  18. Context: default
  19. Server: Docker Engine - Community
  20. Engine:
  21. Version: 20.10.6
  22. API version: 1.41 (minimum version 1.12)
  23. Go version: go1.13.15
  24. Git commit: 8728dd2
  25. Built: Fri Apr 9 22:43:57 2021
  26. OS/Arch: linux/amd64
  27. Experimental: false
  28. containerd:
  29. Version: 1.6.21
  30. GitCommit: 3dce8eb055cbb6872793272b4f20ed16117344f8
  31. runc:
  32. Version: 1.1.7
  33. GitCommit: v1.1.7-0-g860f061
  34. docker-init:
  35. Version: 0.19.0
  36. GitCommit: de40ad0
  37. [root@harbor ~]# docker compose version
  38. Docker Compose version v2.18.1
  39. # 4. Install docker-compose
  40. [root@harbor ~]# ls
  41. anaconda-ks.cfg docker-compose-linux-x86_64 harbor
  42. [root@harbor ~]# chmod +x docker-compose-linux-x86_64
  43. [root@harbor ~]# mv docker-compose-linux-x86_64 /usr/local/sbin/docker-compose
  44. # 5. Install Harbor: download the offline installer from the Harbor website or GitHub
  45. [root@harbor harbor]# ls
  46. harbor-offline-installer-v2.4.1.tgz
  47. # 6. Unpack it
  48. [root@harbor harbor]# tar xf harbor-offline-installer-v2.4.1.tgz
  49. [root@harbor harbor]# ls
  50. harbor harbor-offline-installer-v2.4.1.tgz
  51. [root@harbor harbor]# cd harbor
  52. [root@harbor harbor]# ls
  53. common.sh harbor.v2.4.1.tar.gz harbor.yml.tmpl install.sh LICENSE prepare
  54. [root@harbor harbor]# pwd
  55. /root/harbor/harbor
  56. # 7. Edit the configuration file
  57. [root@harbor harbor]# cat harbor.yml
  58. # Configuration file of Harbor
  59. # The IP address or hostname to access admin UI and registry service.
  60. # DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
  61. hostname: 192.168.2.106 # change to the host's IP address
  62. # http related config
  63. http:
  64. # port for http, default is 80. If https enabled, this port will redirect to https port
  65. port: 5000 # change to another port
  66. # the https section can be disabled entirely
  67. # https related config
  68. #https:
  69. # https port for harbor, default is 443
  70. #port: 443
  71. # The path of cert and key files for nginx
  72. #certificate: /your/certificate/path
  73. #private_key: /your/private/key/path
  74. # # Uncomment following will enable tls communication between all harbor components
  75. # internal_tls:
  76. # # set enabled to true means internal tls is enabled
  77. # enabled: true
  78. # # put your cert and key files on dir
  79. # dir: /etc/harbor/tls/internal
  80. # Uncomment external_url if you want to enable external proxy
  81. # And when it enabled the hostname will no longer used
  82. # external_url: https://reg.mydomain.com:8433
  83. # The initial password of Harbor admin
  84. # It only works in first time to install harbor
  85. # Remember Change the admin password from UI after launching Harbor.
  86. harbor_admin_password: Harbor12345 # web login password
  87. # Harbor DB configuration
  88. database:
  89. # The password for the root user of Harbor DB. Change this before any production use.
  90. password: root123
  91. # The maximum number of connections in the idle connection pool. If it <=0, no idle connections are retained.
  92. max_idle_conns: 100
  93. # The maximum number of open connections to the database. If it <= 0, then there is no limit on the number of open connections.
  94. # Note: the default number of connections is 1024 for postgres of harbor.
  95. max_open_conns: 900
  96. # The default data volume
  97. data_volume: /data
  98. # 8. Run the install script
  99. [root@harbor harbor]# ./install.sh
  100. [Step 0]: checking if docker is installed ...
  101. Note: docker version: 24.0.2
  102. [Step 1]: checking docker-compose is installed ...
  103. ✖ Need to install docker-compose(1.18.0+) by yourself first and run this script again.
  104. [root@harbor harbor]# ./install.sh
  105. [+] Running 10/10
  106. ⠿ Network harbor_harbor Created 0.7s
  107. ⠿ Container harbor-log Started 1.6s
  108. ⠿ Container registry Started 5.2s
  109. ⠿ Container harbor-db Started 4.9s
  110. ⠿ Container harbor-portal Started 5.1s
  111. ⠿ Container registryctl Started 4.8s
  112. ⠿ Container redis Started 3.9s
  113. ⠿ Container harbor-core Started 6.5s
  114. ⠿ Container harbor-jobservice Started 9.0s
  115. ⠿ Container nginx Started 9.1s
  116. ✔ ----Harbor has been installed and started successfully.----
  117. # 9. Start Harbor automatically at boot
  118. [root@harbor harbor]# vim /etc/rc.local
  119. [root@harbor harbor]# cat /etc/rc.local
  120. #!/bin/bash
  121. # THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
  122. #
  123. # It is highly advisable to create own systemd services or udev rules
  124. # to run scripts during boot instead of using this file.
  125. #
  126. # In contrast to previous versions due to parallel execution during boot
  127. # this script will NOT be run after all other services.
  128. #
  129. # Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
  130. # that this script will be executed during boot.
  131. touch /var/lock/subsys/local
  132. /usr/local/sbin/docker-compose -f /root/harbor/harbor/docker-compose.yml up -d
  133. # 10. Make rc.local executable
  134. [root@harbor harbor]# chmod +x /etc/rc.local /etc/rc.d/rc.local
  135. # 11. Log in at
  136. http://192.168.2.106:5000/
  137. # username: admin
  138. # password: Harbor12345
  139. # create a new project
  140. # test: push an image (nginx as the example) to Harbor
  141. [root@harbor harbor]# docker image ls | grep nginx
  142. nginx latest 605c77e624dd 17 months ago 141MB
  143. goharbor/nginx-photon v2.4.1 78aad8c8ef41 18 months ago 45.7MB
  144. [root@harbor harbor]# docker tag nginx:latest 192.168.2.106:5000/test/nginx1:v1
  145. [root@harbor harbor]# docker image ls | grep nginx
  146. 192.168.2.106:5000/test/nginx1 v1 605c77e624dd 17 months ago 141MB
  147. nginx latest 605c77e624dd 17 months ago 141MB
  148. goharbor/nginx-photon v2.4.1 78aad8c8ef41 18 months ago 45.7MB
  149. [root@harbor harbor]# docker push 192.168.2.106:5000/test/nginx1:v1
  150. The push refers to repository [192.168.2.106:5000/test/nginx1]
  151. Get https://192.168.2.106:5000/v2/: http: server gave HTTP response to HTTPS client
  152. [root@harbor harbor]# vim /etc/docker/daemon.json
  153. {
  154. "insecure-registries":["192.168.2.106:5000"]
  155. }
  156. [root@harbor harbor]# docker login 192.168.2.106:5000
  157. Username: admin
  158. Password:
  159. WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
  160. Configure a credential helper to remove this warning. See
  161. https://docs.docker.com/engine/reference/commandline/login/#credentials-store
  162. Login Succeeded
  163. [root@harbor harbor]# docker push 192.168.2.106:5000/test/nginx1:v1
  164. The push refers to repository [192.168.2.106:5000/test/nginx1]
  165. d874fd2bc83b: Pushed
  166. 32ce5f6a5106: Pushed
  167. f1db227348d0: Pushed
  168. b8d6e692a25e: Pushed
  169. e379e8aedd4d: Pushed
  170. 2edcec3590a4: Pushed
  171. v1: digest: sha256:ee89b00528ff4f02f2405e4ee221743ebc3f8e8dd0bfd5c4c20a2fa2aaa7ede3 size: 1570
  172. [root@harbor harbor]# cat /etc/docker/daemon.json
  173. {
  174. "insecure-registries":["192.168.2.106:5000"]
  175. }
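
In this walkthrough every k8s node simply runs docker login against Harbor (done at the start of step five). An alternative is to store the registry credentials in the cluster and reference them from pod specs, so nodes never need an interactive login; a sketch (the secret name harbor-login is illustrative):

kubectl create secret docker-registry harbor-login \
  --docker-server=192.168.2.106:5000 \
  --docker-username=admin \
  --docker-password=Harbor12345
# then reference it in the pod template of a Deployment:
#   spec:
#     imagePullSecrets:
#     - name: harbor-login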

Step 5. Build the web API service written in Go into an image and deploy it to k8s as the web application; use HPA to scale horizontally when CPU usage reaches 50%, with a minimum of 20 and a maximum of 40 business pods.
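
The web application itself is a small HTTP API written in Go that listens on port 8000 (the containerPort used further down). The original sources are not reproduced here; a minimal stand-in with the same shape, compiled into the scweb binary the Dockerfile expects, could look like this (main.go and its handler are illustrative):

cat > main.go <<'EOF'
package main

import (
        "fmt"
        "log"
        "net/http"
)

// a trivial handler so the HPA demo has something to serve
func index(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "hello from scweb")
}

func main() {
        http.HandleFunc("/", index)
        log.Println("listening on :8000")
        log.Fatal(http.ListenAndServe(":8000", nil))
}
EOF
# build a static Linux binary named scweb, matching the Dockerfile's ENTRYPOINT
CGO_ENABLED=0 GOOS=linux go build -o scweb main.go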

  1. # Log every node of the k8s cluster into harbor so images can be pulled from it.
  2. [root@k8snode2 ~]# cat /etc/docker/daemon.json
  3. {
  4. "registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
  5. "insecure-registries":["192.168.2.106:5000"],
  6. "exec-opts": ["native.cgroupdriver=systemd"]
  7. }
  8. # reload the configuration and restart Docker
  9. systemctl daemon-reload && systemctl restart docker
  10. # log in to harbor
  11. [root@k8smaster mysql]# docker login 192.168.2.106:5000
  12. Username: admin
  13. Password:
  14. WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
  15. Configure a credential helper to remove this warning. See
  16. https://docs.docker.com/engine/reference/commandline/login/#credentials-store
  17. Login Succeeded
  18. [root@k8snode1 ~]# docker login 192.168.2.106:5000
  19. Username: admin
  20. Password:
  21. WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
  22. Configure a credential helper to remove this warning. See
  23. https://docs.docker.com/engine/reference/commandline/login/#credentials-store
  24. Login Succeeded
  25. [root@k8snode2 ~]# docker login 192.168.2.106:5000
  26. Username: admin
  27. Password:
  28. WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
  29. Configure a credential helper to remove this warning. See
  30. https://docs.docker.com/engine/reference/commandline/login/#credentials-store
  31. Login Succeeded
  32. # test: pull the nginx image from harbor
  33. [root@k8snode1 ~]# docker pull 192.168.2.106:5000/test/nginx1:v1
  34. [root@k8snode1 ~]# docker images
  35. REPOSITORY TAG IMAGE ID CREATED SIZE
  36. mysql 5.7.42 2be84dd575ee 5 days ago 569MB
  37. nginx latest 605c77e624dd 17 months ago 141MB
  38. 192.168.2.106:5000/test/nginx1 v1 605c77e624dd 17 months ago 141MB
  39. # build the image
  40. [root@harbor ~]# cd go
  41. [root@harbor go]# ls
  42. scweb Dockerfile
  43. [root@harbor go]# cat Dockerfile
  44. FROM centos:7
  45. WORKDIR /go
  46. COPY . /go
  47. RUN ls /go && pwd
  48. ENTRYPOINT ["/go/scweb"]
  49. [root@harbor go]# docker build -t scmyweb:1.1 .
  50. [root@harbor go]# docker image ls | grep scweb
  51. scweb 1.1 f845e97e9dfd 4 hours ago 214MB
  52. [root@harbor go]# docker tag scweb:1.1 192.168.2.106:5000/test/web:v2
  53. [root@harbor go]# docker image ls | grep web
  54. 192.168.2.106:5000/test/web v2 00900ace4935 4 minutes ago 214MB
  55. scweb 1.1 00900ace4935 4 minutes ago 214MB
  56. [root@harbor go]# docker push 192.168.2.106:5000/test/web:v2
  57. The push refers to repository [192.168.2.106:5000/test/web]
  58. 3e252407b5c2: Pushed
  59. 193a27e04097: Pushed
  60. b13a87e7576f: Pushed
  61. 174f56854903: Pushed
  62. v1: digest: sha256:a723c83407c49e6fcf9aa67a041a4b6241cf9856170c1703014a61dec3726b29 size: 1153
  63. [root@k8snode1 ~]# docker login 192.168.2.106:5000
  64. Authenticating with existing credentials...
  65. WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
  66. Configure a credential helper to remove this warning. See
  67. https://docs.docker.com/engine/reference/commandline/login/#credentials-store
  68. Login Succeeded
  69. [root@k8snode1 ~]# docker pull 192.168.2.106:5000/test/web:v2
  70. v1: Pulling from test/web
  71. 2d473b07cdd5: Pull complete
  72. bc5e56dd1476: Pull complete
  73. 694440c745ce: Pull complete
  74. 78694d1cffbb: Pull complete
  75. Digest: sha256:a723c83407c49e6fcf9aa67a041a4b6241cf9856170c1703014a61dec3726b29
  76. Status: Downloaded newer image for 192.168.2.106:5000/test/web:v2
  77. 192.168.2.106:5000/test/web:v1
  78. [root@k8snode1 ~]# docker images
  79. REPOSITORY TAG IMAGE ID CREATED SIZE
  80. 192.168.2.106:5000/test/web v2 f845e97e9dfd 4 hours ago 214MB
  81. [root@k8snode2 ~]# docker login 192.168.2.106:5000
  82. Authenticating with existing credentials...
  83. WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
  84. Configure a credential helper to remove this warning. See
  85. https://docs.docker.com/engine/reference/commandline/login/#credentials-store
  86. Login Succeeded
  87. [root@k8snode2 ~]# docker pull 192.168.2.106:5000/test/web:v2
  88. v1: Pulling from test/web
  89. 2d473b07cdd5: Pull complete
  90. bc5e56dd1476: Pull complete
  91. 694440c745ce: Pull complete
  92. 78694d1cffbb: Pull complete
  93. Digest: sha256:a723c83407c49e6fcf9aa67a041a4b6241cf9856170c1703014a61dec3726b29
  94. Status: Downloaded newer image for 192.168.2.106:5000/test/web:v2
  95. 192.168.2.106:5000/test/web:v1
  96. [root@k8snode2 ~]# docker images
  97. REPOSITORY TAG IMAGE ID CREATED SIZE
  98. 192.168.2.106:5000/test/web v2 f845e97e9dfd 4 hours ago 214MB
  99. # Use HPA: when CPU usage reaches 50%, scale horizontally between a minimum of 1 and a maximum of 10 pods
  100. # A HorizontalPodAutoscaler (HPA) automatically updates a workload resource (such as a Deployment) so that the workload scales to match demand.
  101. https://kubernetes.io/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
  102. # 1. Install metrics-server
  103. # download the components.yaml manifest
  104. wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
  105. # replace the image
  106. image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.0
  107. imagePullPolicy: IfNotPresent
  108. args:
  109. # add the following two arguments
  110. - --kubelet-insecure-tls
  111. - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalDNS,ExternalIP,Hostname
  112. # the modified part of components.yaml
  113. [root@k8smaster ~]# cat components.yaml
  114. spec:
  115. containers:
  116. - args:
  117. - --kubelet-insecure-tls
  118. - --kubelet-preferred-address-types=InternalIP
  119. - --cert-dir=/tmp
  120. - --secure-port=4443
  121. - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalIP,Hostname
  122. - --kubelet-use-node-status-port
  123. - --metric-resolution=15s
  124. image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.0
  125. imagePullPolicy: IfNotPresent
  126. # apply the manifest
  127. [root@k8smaster metrics]# kubectl apply -f components.yaml
  128. serviceaccount/metrics-server created
  129. clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
  130. clusterrole.rbac.authorization.k8s.io/system:metrics-server created
  131. rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
  132. clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
  133. clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
  134. service/metrics-server created
  135. deployment.apps/metrics-server created
  136. apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
  137. # check the result
  138. [root@k8smaster metrics]# kubectl get pod -n kube-system
  139. NAME READY STATUS RESTARTS AGE
  140. calico-kube-controllers-6949477b58-xdk88 1/1 Running 1 22h
  141. calico-node-4knc8 1/1 Running 4 22h
  142. calico-node-8jzrn 1/1 Running 1 22h
  143. calico-node-9d7pt 1/1 Running 2 22h
  144. coredns-7f89b7bc75-52c4x 1/1 Running 2 22h
  145. coredns-7f89b7bc75-82jrx 1/1 Running 1 22h
  146. etcd-k8smaster 1/1 Running 1 22h
  147. kube-apiserver-k8smaster 1/1 Running 1 22h
  148. kube-controller-manager-k8smaster 1/1 Running 1 22h
  149. kube-proxy-8wp9c 1/1 Running 2 22h
  150. kube-proxy-d46jp 1/1 Running 1 22h
  151. kube-proxy-whg4f 1/1 Running 1 22h
  152. kube-scheduler-k8smaster 1/1 Running 1 22h
  153. metrics-server-6c75959ddf-hw7cs 1/1 Running 0 61s
  154. # if the following command returns node metrics, metrics-server is installed and working
  155. [root@k8smaster metrics]# kubectl top node
  156. NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
  157. k8smaster 322m 16% 1226Mi 71%
  158. k8snode1 215m 10% 874Mi 50%
  159. k8snode2 190m 9% 711Mi 41%
  160. # make sure metrics-server is healthy
  161. # check the pod and the apiservice to confirm metrics-server is up
  162. [root@k8smaster HPA]# kubectl get pod -n kube-system|grep metrics
  163. metrics-server-6c75959ddf-hw7cs 1/1 Running 4 6h35m
  164. [root@k8smaster HPA]# kubectl get apiservice |grep metrics
  165. v1beta1.metrics.k8s.io kube-system/metrics-server True 6h35m
  166. [root@k8smaster HPA]# kubectl top node
  167. NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
  168. k8smaster 349m 17% 1160Mi 67%
  169. k8snode1 271m 13% 1074Mi 62%
  170. k8snode2 226m 11% 1224Mi 71%
  171. [root@k8snode1 ~]# docker images|grep metrics
  172. registry.aliyuncs.com/google_containers/metrics-server v0.6.0 5787924fe1d8 14 months ago 68.8MB
  173. You have new mail in /var/spool/mail/root
  174. # check the image on the worker nodes
  175. [root@k8snode1 ~]# docker images|grep metrics
  176. registry.aliyuncs.com/google_containers/metrics-server v0.6.0 5787924fe1d8 17 months ago 68.8MB
  177. kubernetesui/metrics-scraper v1.0.7 7801cfc6d5c0 2 years ago 34.4MB
  178. # 2. Start the web app from a yaml file and expose it as a Service
  179. [root@k8smaster hpa]# cat my-web.yaml
  180. apiVersion: apps/v1
  181. kind: Deployment
  182. metadata:
  183. labels:
  184. app: myweb
  185. name: myweb
  186. spec:
  187. replicas: 3
  188. selector:
  189. matchLabels:
  190. app: myweb
  191. template:
  192. metadata:
  193. labels:
  194. app: myweb
  195. spec:
  196. containers:
  197. - name: myweb
  198. image: 192.168.2.106:5000/test/web:v2
  199. imagePullPolicy: IfNotPresent
  200. ports:
  201. - containerPort: 8000
  202. resources:
  203. limits:
  204. cpu: 300m
  205. requests:
  206. cpu: 100m
  207. ---
  208. apiVersion: v1
  209. kind: Service
  210. metadata:
  211. labels:
  212. app: myweb-svc
  213. name: myweb-svc
  214. spec:
  215. selector:
  216. app: myweb
  217. type: NodePort
  218. ports:
  219. - port: 8000
  220. protocol: TCP
  221. targetPort: 8000
  222. nodePort: 30001
  223. [root@k8smaster HPA]# kubectl apply -f my-web.yaml
  224. deployment.apps/myweb created
  225. service/myweb-svc created
  226. # 3.创建HPA功能
  227. [root@k8smaster HPA]# kubectl autoscale deployment myweb --cpu-percent=50 --min=1 --max=10
  228. horizontalpodautoscaler.autoscaling/myweb autoscaled
  229. [root@k8smaster HPA]# kubectl get pod
  230. NAME READY STATUS RESTARTS AGE
  231. myweb-6dc7b4dfcb-9q85g 1/1 Running 0 9s
  232. myweb-6dc7b4dfcb-ddq82 1/1 Running 0 9s
  233. myweb-6dc7b4dfcb-l7sw7 1/1 Running 0 9s
  234. [root@k8smaster HPA]# kubectl get svc
  235. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  236. kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d2h
  237. myweb-svc NodePort 10.102.83.168 <none> 8000:30001/TCP 15s
  238. [root@k8smaster HPA]# kubectl get hpa
  239. NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
  240. myweb Deployment/myweb <unknown>/50% 1 10 3 16s
  241. # 4.访问
  242. http://192.168.2.112:30001/
  243. [root@k8smaster HPA]# kubectl get hpa
  244. NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
  245. myweb Deployment/myweb 1%/50% 1 10 1 11m
  246. [root@k8smaster HPA]# kubectl get pod
  247. NAME READY STATUS RESTARTS AGE
  248. myweb-6dc7b4dfcb-ddq82 1/1 Running 0 10m
  249. # 5.删除hpa
  250. [root@k8smaster HPA]# kubectl delete hpa myweb    # 注意hpa的名字是myweb(autoscale时基于deployment myweb创建),不是myweb-svc
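补充一个给HPA加压的小示例(非原文步骤,仅供参考;假设集群能拉取busybox镜像,且myweb-svc在default命名空间可达):

# 起一个临时pod不停请求web服务,把业务pod的cpu压上去
kubectl run load-generator --rm -it --image=busybox --restart=Never -- /bin/sh -c "while true; do wget -q -O- http://myweb-svc:8000/ > /dev/null; done"
# 另开一个终端观察,cpu超过50%后REPLICAS会逐渐增加,停止压测(Ctrl+C)几分钟后又会缩回
kubectl get hpa myweb -w
kubectl get pod -l app=myweb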

六.启动mysql的pod,为web业务提供数据库服务。

  1. [root@k8smaster mysql]# cat mysql-deployment.yaml
  2. # 定义mysql的Deployment
  3. apiVersion: apps/v1
  4. kind: Deployment
  5. metadata:
  6. labels:
  7. app: mysql
  8. name: mysql
  9. spec:
  10. replicas: 1
  11. selector:
  12. matchLabels:
  13. app: mysql
  14. template:
  15. metadata:
  16. labels:
  17. app: mysql
  18. spec:
  19. containers:
  20. - image: mysql:5.7.42
  21. name: mysql
  22. imagePullPolicy: IfNotPresent
  23. env:
  24. - name: MYSQL_ROOT_PASSWORD
  25. value: "123456"
  26. ports:
  27. - containerPort: 3306
  28. ---
  29. #定义mysql的Service
  30. apiVersion: v1
  31. kind: Service
  32. metadata:
  33. labels:
  34. app: svc-mysql
  35. name: svc-mysql
  36. spec:
  37. selector:
  38. app: mysql
  39. type: NodePort
  40. ports:
  41. - port: 3306
  42. protocol: TCP
  43. targetPort: 3306
  44. nodePort: 30007
  45. [root@k8smaster mysql]# kubectl apply -f mysql-deployment.yaml
  46. deployment.apps/mysql created
  47. service/svc-mysql created
  48. [root@k8smaster mysql]# kubectl get svc
  49. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  50. kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 28h
  51. svc-mysql NodePort 10.105.96.217 <none> 3306:30007/TCP 10m
  52. [root@k8smaster mysql]# kubectl get pod
  53. NAME READY STATUS RESTARTS AGE
  54. mysql-5f9bccd855-6kglf 1/1 Running 0 8m59s
  55. [root@k8smaster mysql]# kubectl exec -it mysql-5f9bccd855-6kglf -- bash
  56. bash-4.2# mysql -uroot -p123456
  57. mysql: [Warning] Using a password on the command line interface can be insecure.
  58. Welcome to the MySQL monitor. Commands end with ; or \g.
  59. Your MySQL connection id is 2
  60. Server version: 5.7.42 MySQL Community Server (GPL)
  61. Copyright (c) 2000, 2023, Oracle and/or its affiliates.
  62. Oracle is a registered trademark of Oracle Corporation and/or its
  63. affiliates. Other names may be trademarks of their respective
  64. owners.
  65. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
  66. mysql> show databases;
  67. +--------------------+
  68. | Database |
  69. +--------------------+
  70. | information_schema |
  71. | mysql |
  72. | performance_schema |
  73. | sys |
  74. +--------------------+
  75. 4 rows in set (0.01 sec)
  76. mysql> exit
  77. Bye
  78. bash-4.2# exit
  79. exit
  80. [root@k8smaster mysql]#
  81. # Web服务和MySQL数据库结合起来
  82. # 第一种:在mysql的service中增加以下内容
  83. ports:
  84. - name: mysql
  85. protocol: TCP
  86. port: 3306
  87. targetPort: 3306
  88. # 在web的pod中增加以下内容
  89. env:
  90. - name: MYSQL_HOST
  91. value: mysql
  92. - name: MYSQL_PORT
  93. value: "3306"
  94. # 第二种:安装MySQL驱动程序,在 Go 代码中引入并初始化该驱动程序。
  95. // 1.导入必要的包和驱动程序(Go的注释是//,不是#)
  96. import (
  97.     "database/sql"
  98.     "fmt"
  99.     _ "github.com/go-sql-driver/mysql" // 匿名导入 MySQL 驱动程序
  100. )
  101. // 2.建立数据库连接
  102. db, err := sql.Open("mysql", "username:password@tcp(hostname:port)/dbname")
  103. if err != nil {
  104.     fmt.Println("Failed to connect to database:", err)
  105.     return
  106. }
  107. defer db.Close() // 记得关闭数据库连接
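可以先用一个临时的mysql客户端pod验证通过service名字能否连上数据库(补充示例,仅供参考;假设web和mysql都在default命名空间,密码为上面设置的123456):

# 通过service的DNS名字svc-mysql连接,能列出库说明网络和账号都正常
kubectl run mysql-client --rm -it --image=mysql:5.7.42 --restart=Never -- mysql -h svc-mysql.default.svc.cluster.local -uroot -p123456 -e "show databases;"
# 如果按第一种方式给web的pod注入了MYSQL_HOST等环境变量,也可以进pod里确认
kubectl exec -it $(kubectl get pod -l app=myweb -o jsonpath='{.items[0].metadata.name}') -- env | grep MYSQL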

尝试:k8s部署有状态的MySQL

  1. # 1.创建 ConfigMap
  2. [root@k8smaster mysql]# cat mysql-configmap.yaml
  3. apiVersion: v1
  4. kind: ConfigMap
  5. metadata:
  6. name: mysql
  7. labels:
  8. app: mysql
  9. data:
  10. primary.cnf: |
  11. # 仅在主服务器上应用此配置
  12. [mysqld]
  13. log-bin
  14. replica.cnf: |
  15. # 仅在副本服务器上应用此配置
  16. [mysqld]
  17. super-read-only
  18. [root@k8smaster mysql]# kubectl apply -f mysql-configmap.yaml
  19. configmap/mysql created
  20. [root@k8smaster mysql]# kubectl get cm
  21. NAME DATA AGE
  22. kube-root-ca.crt 1 6d22h
  23. mysql 2 5s
  24. # 2.创建服务
  25. [root@k8smaster mysql]# cat mysql-services.yaml
  26. # 为 StatefulSet 成员提供稳定的 DNS 表项的无头服务(Headless Service)
  27. apiVersion: v1
  28. kind: Service
  29. metadata:
  30. name: mysql
  31. labels:
  32. app: mysql
  33. app.kubernetes.io/name: mysql
  34. spec:
  35. ports:
  36. - name: mysql
  37. port: 3306
  38. clusterIP: None
  39. selector:
  40. app: mysql
  41. ---
  42. # 用于连接到任一 MySQL 实例执行读操作的客户端服务
  43. # 对于写操作,你必须连接到主服务器:mysql-0.mysql
  44. apiVersion: v1
  45. kind: Service
  46. metadata:
  47. name: mysql-read
  48. labels:
  49. app: mysql
  50. app.kubernetes.io/name: mysql
  51. readonly: "true"
  52. spec:
  53. ports:
  54. - name: mysql
  55. port: 3306
  56. selector:
  57. app: mysql
  58. [root@k8smaster mysql]# kubectl apply -f mysql-services.yaml
  59. service/mysql created
  60. service/mysql-read created
  61. [root@k8smaster mysql]# kubectl get svc
  62. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  63. kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d22h
  64. mysql ClusterIP None <none> 3306/TCP 7s
  65. mysql-read ClusterIP 10.102.31.144 <none> 3306/TCP 7s
  66. # 3.创建 StatefulSet
  67. [root@k8smaster mysql]# cat mysql-statefulset.yaml
  68. apiVersion: apps/v1
  69. kind: StatefulSet
  70. metadata:
  71. name: mysql
  72. spec:
  73. selector:
  74. matchLabels:
  75. app: mysql
  76. app.kubernetes.io/name: mysql
  77. serviceName: mysql
  78. replicas: 3
  79. template:
  80. metadata:
  81. labels:
  82. app: mysql
  83. app.kubernetes.io/name: mysql
  84. spec:
  85. initContainers:
  86. - name: init-mysql
  87. image: mysql:5.7.42
  88. imagePullPolicy: IfNotPresent
  89. command:
  90. - bash
  91. - "-c"
  92. - |
  93. set -ex
  94. # 基于 Pod 序号生成 MySQL 服务器的 ID。
  95. [[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1
  96. ordinal=${BASH_REMATCH[1]}
  97. echo [mysqld] > /mnt/conf.d/server-id.cnf
  98. # 添加偏移量以避免使用 server-id=0 这一保留值。
  99. echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
  100. # 将合适的 conf.d 文件从 config-map 复制到 emptyDir。
  101. if [[ $ordinal -eq 0 ]]; then
  102. cp /mnt/config-map/primary.cnf /mnt/conf.d/
  103. else
  104. cp /mnt/config-map/replica.cnf /mnt/conf.d/
  105. fi
  106. volumeMounts:
  107. - name: conf
  108. mountPath: /mnt/conf.d
  109. - name: config-map
  110. mountPath: /mnt/config-map
  111. - name: clone-mysql
  112. image: registry.cn-hangzhou.aliyuncs.com/google_samples_thepoy/xtrabackup:1.0
  113. command:
  114. - bash
  115. - "-c"
  116. - |
  117. set -ex
  118. # 如果已有数据,则跳过克隆。
  119. [[ -d /var/lib/mysql/mysql ]] && exit 0
  120. # 跳过主实例(序号索引 0)的克隆。
  121. [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
  122. ordinal=${BASH_REMATCH[1]}
  123. [[ $ordinal -eq 0 ]] && exit 0
  124. # 从原来的对等节点克隆数据。
  125. ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
  126. # 准备备份。
  127. xtrabackup --prepare --target-dir=/var/lib/mysql
  128. volumeMounts:
  129. - name: data
  130. mountPath: /var/lib/mysql
  131. subPath: mysql
  132. - name: conf
  133. mountPath: /etc/mysql/conf.d
  134. containers:
  135. - name: mysql
  136. image: mysql:5.7.42
  137. imagePullPolicy: IfNotPresent
  138. env:
  139. - name: MYSQL_ALLOW_EMPTY_PASSWORD
  140. value: "1"
  141. ports:
  142. - name: mysql
  143. containerPort: 3306
  144. volumeMounts:
  145. - name: data
  146. mountPath: /var/lib/mysql
  147. subPath: mysql
  148. - name: conf
  149. mountPath: /etc/mysql/conf.d
  150. resources:
  151. requests:
  152. cpu: 500m
  153. memory: 1Gi
  154. livenessProbe:
  155. exec:
  156. command: ["mysqladmin", "ping"]
  157. initialDelaySeconds: 30
  158. periodSeconds: 10
  159. timeoutSeconds: 5
  160. readinessProbe:
  161. exec:
  162. # 检查我们是否可以通过 TCP 执行查询(skip-networking 是关闭的)。
  163. command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
  164. initialDelaySeconds: 5
  165. periodSeconds: 2
  166. timeoutSeconds: 1
  167. - name: xtrabackup
  168. image: registry.cn-hangzhou.aliyuncs.com/google_samples_thepoy/xtrabackup:1.0
  169. ports:
  170. - name: xtrabackup
  171. containerPort: 3307
  172. command:
  173. - bash
  174. - "-c"
  175. - |
  176. set -ex
  177. cd /var/lib/mysql
  178. # 确定克隆数据的 binlog 位置(如果有的话)。
  179. if [[ -f xtrabackup_slave_info && "x$(<xtrabackup_slave_info)" != "x" ]]; then
  180. # XtraBackup 已经生成了部分的 “CHANGE MASTER TO” 查询
  181. # 因为我们从一个现有副本进行克隆。(需要删除末尾的分号!)
  182. cat xtrabackup_slave_info | sed -E 's/;$//g' > change_master_to.sql.in
  183. # 在这里要忽略 xtrabackup_binlog_info (它是没用的)。
  184. rm -f xtrabackup_slave_info xtrabackup_binlog_info
  185. elif [[ -f xtrabackup_binlog_info ]]; then
  186. # 我们直接从主实例进行克隆。解析 binlog 位置。
  187. [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
  188. rm -f xtrabackup_binlog_info xtrabackup_slave_info
  189. echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
  190. MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
  191. fi
  192. # 检查我们是否需要通过启动复制来完成克隆。
  193. if [[ -f change_master_to.sql.in ]]; then
  194. echo "Waiting for mysqld to be ready (accepting connections)"
  195. until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
  196. echo "Initializing replication from clone position"
  197. mysql -h 127.0.0.1 \
  198. -e "$(<change_master_to.sql.in), \
  199. MASTER_HOST='mysql-0.mysql', \
  200. MASTER_USER='root', \
  201. MASTER_PASSWORD='', \
  202. MASTER_CONNECT_RETRY=10; \
  203. START SLAVE;" || exit 1
  204. # 如果容器重新启动,最多尝试一次。
  205. mv change_master_to.sql.in change_master_to.sql.orig
  206. fi
  207. # 当对等点请求时,启动服务器发送备份。
  208. exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
  209. "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
  210. volumeMounts:
  211. - name: data
  212. mountPath: /var/lib/mysql
  213. subPath: mysql
  214. - name: conf
  215. mountPath: /etc/mysql/conf.d
  216. resources:
  217. requests:
  218. cpu: 100m
  219. memory: 100Mi
  220. volumes:
  221. - name: conf
  222. emptyDir: {}
  223. - name: config-map
  224. configMap:
  225. name: mysql
  226. volumeClaimTemplates:
  227. - metadata:
  228. name: data
  229. spec:
  230. accessModes: ["ReadWriteOnce"]
  231. resources:
  232. requests:
  233. storage: 1Gi
  234. [root@k8smaster mysql]# kubectl apply -f mysql-statefulset.yaml
  235. statefulset.apps/mysql created
  236. [root@k8smaster mysql]# kubectl get pod
  237. NAME READY STATUS RESTARTS AGE
  238. mysql-0 0/2 Pending 0 3s
  239. [root@k8smaster mysql]# kubectl describe pod mysql-0
  240. Events:
  241. Type Reason Age From Message
  242. ---- ------ ---- ---- -------
  243. Warning FailedScheduling 16s (x2 over 16s) default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
  244. [root@k8smaster mysql]# kubectl get pvc
  245. NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
  246. data-mysql-0 Pending 3m27s
  247. [root@k8smaster mysql]# kubectl get pvc data-mysql-0 -o yaml
  248. apiVersion: v1
  249. kind: PersistentVolumeClaim
  250. metadata:
  251. creationTimestamp: "2023-06-25T06:17:36Z"
  252. finalizers:
  253. - kubernetes.io/pvc-protection
  254. labels:
  255. app: mysql
  256. app.kubernetes.io/name: mysql
  257. [root@k8smaster mysql]# cat mysql-pv.yaml
  258. apiVersion: v1
  259. kind: PersistentVolume
  260. metadata:
  261. name: mysql-pv
  262. spec:
  263. capacity:
  264. storage: 1Gi
  265. accessModes:
  266. - ReadWriteOnce
  267. nfs:
  268. path: "/data/db" # nfs共享的目录
  269. server: 192.168.2.121 # nfs服务器的ip地址
  270. [root@k8smaster mysql]# kubectl apply -f mysql-pv.yaml
  271. persistentvolume/mysql-pv created
  272. [root@k8smaster mysql]# kubectl get pv
  273. NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
  274. jenkins-pv-volume 10Gi RWO Retain Terminating devops-tools/jenkins-pv-claim local-storage 5d23h
  275. mysql-pv 1Gi RWO Retain Terminating default/data-mysql-0 15m
  276. [root@k8smaster mysql]# kubectl patch pv jenkins-pv-volume -p '{"metadata":{"finalizers":null}}'
  277. persistentvolume/jenkins-pv-volume patched
  278. [root@k8smaster mysql]# kubectl patch pv mysql-pv -p '{"metadata":{"finalizers":null}}'
  279. persistentvolume/mysql-pv patched
  280. [root@k8smaster mysql]# kubectl get pv
  281. No resources found
  282. [root@k8smaster mysql]# kubectl get pod
  283. NAME READY STATUS RESTARTS AGE
  284. mysql-0 0/2 Init:0/2 0 7m20s
  285. [root@k8smaster mysql]# kubectl describe pod mysql-0
  286. Events:
  287. Type Reason Age From Message
  288. ---- ------ ---- ---- -------
  289. Warning FailedScheduling 10m (x3 over 10m) default-scheduler 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 pvc(s) bound to non-existent pv(s).
  290. Normal Scheduled 10m default-scheduler Successfully assigned default/mysql-0 to k8snode2
  291. Warning FailedMount 10m kubelet Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data conf config-map default-token-24tkk]: error processing PVC default/data-mysql-0: PVC is not bound
  292. Warning FailedMount 9m46s kubelet Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[default-token-24tkk data conf config-map]: error processing PVC default/data-mysql-0: PVC is not bound
  293. Warning FailedMount 5m15s kubelet Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data conf config-map default-token-24tkk]: timed out waiting for the condition
  294. Warning FailedMount 3m kubelet Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[config-map default-token-24tkk data conf]: timed out waiting for the condition
  295. Warning FailedMount 74s (x12 over 9m31s) kubelet MountVolume.SetUp failed for volume "mysql-pv" : mount failed: exit status 32
  296. Mounting command: mount
  297. Mounting arguments: -t nfs 192.168.2.121:/data/db /var/lib/kubelet/pods/424bb72d-8bf5-400f-b954-7fa3666ca0b3/volumes/kubernetes.io~nfs/mysql-pv
  298. Output: mount.nfs: mounting 192.168.2.121:/data/db failed, reason given by server: No such file or directory
  299. Warning FailedMount 42s (x2 over 7m29s) kubelet Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[conf config-map default-token-24tkk data]: timed out waiting for the condition
  301. [root@nfs data]# pwd
  302. /data
  303. [root@nfs data]# mkdir db replica replica-3
  304. [root@nfs data]# ls
  305. db replica replica-3
  306. [root@k8smaster mysql]# kubectl get pod
  307. NAME READY STATUS RESTARTS AGE
  308. mysql-0 2/2 Running 0 21m
  309. mysql-1 0/2 Pending 0 2m34s
  310. [root@k8smaster mysql]# kubectl describe pod mysql-1
  311. Events:
  312. Type Reason Age From Message
  313. ---- ------ ---- ---- -------
  314. Warning FailedScheduling 58s (x4 over 3m22s) default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
  315. [root@k8smaster mysql]# cat mysql-pv-2.yaml
  316. apiVersion: v1
  317. kind: PersistentVolume
  318. metadata:
  319. name: mysql-pv-2
  320. spec:
  321. capacity:
  322. storage: 1Gi
  323. accessModes:
  324. - ReadWriteOnce
  325. nfs:
  326. path: "/data/replica" # nfs共享的目录
  327. server: 192.168.2.121 # nfs服务器的ip地址
  328. [root@k8smaster mysql]# kubectl apply -f mysql-pv-2.yaml
  329. persistentvolume/mysql-pv-2 created
  330. [root@k8smaster mysql]# kubectl get pv
  331. NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
  332. mysql-pv 1Gi RWO Retain Bound default/data-mysql-0 24m
  333. mysql-pv-2 1Gi RWO Retain Bound default/data-mysql-1 7s
  334. [root@k8smaster mysql]# kubectl get pod
  335. NAME READY STATUS RESTARTS AGE
  336. mysql-0 2/2 Running 0 25m
  337. mysql-1 1/2 Running 0 7m20s
  338. [root@k8smaster mysql]# cat mysql-pv-3.yaml
  339. apiVersion: v1
  340. kind: PersistentVolume
  341. metadata:
  342. name: mysql-pv-3
  343. spec:
  344. capacity:
  345. storage: 1Gi
  346. accessModes:
  347. - ReadWriteOnce
  348. nfs:
  349. path: "/data/replica-3" # nfs共享的目录(对应nfs服务器上提前创建的replica-3)
  350. server: 192.168.2.121 # nfs服务器的ip地址
  351. [root@k8smaster mysql]# kubectl apply -f mysql-pv-3.yaml
  352. persistentvolume/mysql-pv-3 created
  353. [root@k8smaster mysql]# kubectl get pod
  354. NAME READY STATUS RESTARTS AGE
  355. mysql-0 2/2 Running 0 29m
  356. mysql-1 2/2 Running 0 11m
  357. mysql-2 0/2 Pending 0 3m46s
  358. [root@k8smaster mysql]# kubectl describe pod mysql-2
  359. Events:
  360. Type Reason Age From Message
  361. ---- ------ ---- ---- -------
  362. Warning FailedScheduling 2m13s (x4 over 4m16s) default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
  363. Warning FailedScheduling 47s (x2 over 2m5s) default-scheduler 0/3 nodes are available: 1 Insufficient cpu, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient memory.

七.使用探针(liveness、readiness、startup)的(httpget、exec)方法对web业务pod进行监控,一旦出现问题马上重启,增强业务pod的可靠性。

  1. livenessProbe:
  2. exec:
  3. command:
  4. - ls
  5. - /tmp
  6. initialDelaySeconds: 5
  7. periodSeconds: 5
  8. readinessProbe:
  9. exec:
  10. command:
  11. - ls
  12. - /tmp
  13. initialDelaySeconds: 5
  14. periodSeconds: 5
  15. startupProbe:
  16. httpGet:
  17. path: /
  18. port: 8000
  19. failureThreshold: 30
  20. periodSeconds: 10
  21. [root@k8smaster probe]# vim my-web.yaml
  22. apiVersion: apps/v1
  23. kind: Deployment
  24. metadata:
  25. labels:
  26. app: myweb
  27. name: myweb
  28. spec:
  29. replicas: 3
  30. selector:
  31. matchLabels:
  32. app: myweb
  33. template:
  34. metadata:
  35. labels:
  36. app: myweb
  37. spec:
  38. containers:
  39. - name: myweb
  40. image: 192.168.2.106:5000/test/web:v2
  41. imagePullPolicy: IfNotPresent
  42. ports:
  43. - containerPort: 8000
  44. resources:
  45. limits:
  46. cpu: 300m
  47. requests:
  48. cpu: 100m
  49. livenessProbe:
  50. exec:
  51. command:
  52. - ls
  53. - /tmp
  54. initialDelaySeconds: 5
  55. periodSeconds: 5
  56. readinessProbe:
  57. exec:
  58. command:
  59. - ls
  60. - /tmp
  61. initialDelaySeconds: 5
  62. periodSeconds: 5
  63. startupProbe:
  64. httpGet:
  65. path: /
  66. port: 8000
  67. failureThreshold: 30
  68. periodSeconds: 10
  69. ---
  70. apiVersion: v1
  71. kind: Service
  72. metadata:
  73. labels:
  74. app: myweb-svc
  75. name: myweb-svc
  76. spec:
  77. selector:
  78. app: myweb
  79. type: NodePort
  80. ports:
  81. - port: 8000
  82. protocol: TCP
  83. targetPort: 8000
  84. nodePort: 30001
  85. [root@k8smaster probe]# kubectl apply -f my-web.yaml
  86. deployment.apps/myweb created
  87. service/myweb-svc created
  88. [root@k8smaster probe]# kubectl get pod
  89. NAME READY STATUS RESTARTS AGE
  90. myweb-6b89fb9c7b-4cdh9 1/1 Running 0 53s
  91. myweb-6b89fb9c7b-dh87w 1/1 Running 0 53s
  92. myweb-6b89fb9c7b-zvc52 1/1 Running 0 53s
  93. [root@k8smaster probe]# kubectl describe pod myweb-6b89fb9c7b-4cdh9
  94. Name: myweb-6b89fb9c7b-4cdh9
  95. Namespace: default
  96. Priority: 0
  97. Node: k8snode2/192.168.2.112
  98. Start Time: Thu, 22 Jun 2023 16:47:20 +0800
  99. Labels: app=myweb
  100. pod-template-hash=6b89fb9c7b
  101. Annotations: cni.projectcalico.org/podIP: 10.244.185.219/32
  102. cni.projectcalico.org/podIPs: 10.244.185.219/32
  103. Status: Running
  104. IP: 10.244.185.219
  105. IPs:
  106. IP: 10.244.185.219
  107. Controlled By: ReplicaSet/myweb-6b89fb9c7b
  108. Containers:
  109. myweb:
  110. Container ID: docker://8c55c0c825483f86e4b3c87413984415b2ccf5cad78ed005eed8bedb4252c130
  111. Image: 192.168.2.106:5000/test/web:v2
  112. Image ID: docker-pullable://192.168.2.106:5000/test/web@sha256:3bef039aa5c13103365a6868c9f052a000de376a45eaffcbad27d6ddb1f6e354
  113. Port: 8000/TCP
  114. Host Port: 0/TCP
  115. State: Running
  116. Started: Thu, 22 Jun 2023 16:47:23 +0800
  117. Ready: True
  118. Restart Count: 0
  119. Limits:
  120. cpu: 300m
  121. Requests:
  122. cpu: 100m
  123. Liveness: exec [ls /tmp] delay=5s timeout=1s period=5s #success=1 #failure=3
  124. Readiness: exec [ls /tmp] delay=5s timeout=1s period=5s #success=1 #failure=3
  125. Startup: http-get http://:8000/ delay=0s timeout=1s period=10s #success=1 #failure=30
  126. Environment: <none>
  127. Mounts:
  128. /var/run/secrets/kubernetes.io/serviceaccount from default-token-24tkk (ro)
  129. Conditions:
  130. Type Status
  131. Initialized True
  132. Ready True
  133. ContainersReady True
  134. PodScheduled True
  135. Volumes:
  136. default-token-24tkk:
  137. Type: Secret (a volume populated by a Secret)
  138. SecretName: default-token-24tkk
  139. Optional: false
  140. QoS Class: Burstable
  141. Node-Selectors: <none>
  142. Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
  143. node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
  144. Events:
  145. Type Reason Age From Message
  146. ---- ------ ---- ---- -------
  147. Normal Scheduled 55s default-scheduler Successfully assigned default/myweb-6b89fb9c7b-4cdh9 to k8snode2
  148. Normal Pulled 52s kubelet Container image "192.168.2.106:5000/test/web:v2" already present on machine
  149. Normal Created 52s kubelet Created container myweb
  150. Normal Started 52s kubelet Started container myweb
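
可以人为制造一次探针失败,验证kubelet会自动重启容器(补充示例,仅供参考;假设容器内有rm命令,liveness探测的是ls /tmp,删掉/tmp后连续失败3次就会触发重启):

# 删除探针检查的目录,触发liveness/readiness失败
kubectl exec myweb-6b89fb9c7b-4cdh9 -- rm -rf /tmp
# 观察RESTARTS计数增加,Events里会出现Liveness probe failed和容器重启的记录
kubectl get pod myweb-6b89fb9c7b-4cdh9 -w
kubectl describe pod myweb-6b89fb9c7b-4cdh9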

八.使用ingress给web业务做负载均衡,使用dashboard对整个集群资源进行掌控。

  1. # ingress controller 本质上是一个nginx软件,用来做负载均衡。
  2. # ingress 是k8s内部管理nginx配置(nginx.conf)的组件,用来给ingress controller传参。
  3. [root@k8smaster ingress]# ls
  4. ingress-controller-deploy.yaml kube-webhook-certgen-v1.1.0.tar.gz sc-nginx-svc-1.yaml
  5. ingress-nginx-controllerv1.1.0.tar.gz sc-ingress.yaml
  6. ingress-controller-deploy.yaml 是部署ingress controller使用的yaml文件
  7. ingress-nginx-controllerv1.1.0.tar.gz ingress-nginx-controller镜像
  8. kube-webhook-certgen-v1.1.0.tar.gz kube-webhook-certgen镜像
  9. sc-ingress.yaml 创建ingress的配置文件
  10. sc-nginx-svc-1.yaml 启动sc-nginx-svc-1服务和相关pod的yaml
  11. nginx-deployment-nginx-svc-2.yaml 启动nginx-deployment-nginx-svc-2服务和相关pod的yaml
  12. # 第1大步骤:安装ingress controller
  13. # 1.将镜像scp到所有的node节点服务器上
  14. [root@k8smaster ingress]# scp ingress-nginx-controllerv1.1.0.tar.gz k8snode1:/root
  15. ingress-nginx-controllerv1.1.0.tar.gz 100% 276MB 101.1MB/s 00:02
  16. [root@k8smaster ingress]# scp ingress-nginx-controllerv1.1.0.tar.gz k8snode2:/root
  17. ingress-nginx-controllerv1.1.0.tar.gz 100% 276MB 98.1MB/s 00:02
  18. [root@k8smaster ingress]# scp kube-webhook-certgen-v1.1.0.tar.gz k8snode1:/root
  19. kube-webhook-certgen-v1.1.0.tar.gz 100% 47MB 93.3MB/s 00:00
  20. [root@k8smaster ingress]# scp kube-webhook-certgen-v1.1.0.tar.gz k8snode2:/root
  21. kube-webhook-certgen-v1.1.0.tar.gz 100% 47MB 39.3MB/s 00:01
  22. # 2.导入镜像,在所有的节点服务器上进行
  23. [root@k8snode1 ~]# docker load -i ingress-nginx-controllerv1.1.0.tar.gz
  24. [root@k8snode1 ~]# docker load -i kube-webhook-certgen-v1.1.0.tar.gz
  25. [root@k8snode2 ~]# docker load -i ingress-nginx-controllerv1.1.0.tar.gz
  26. [root@k8snode2 ~]# docker load -i kube-webhook-certgen-v1.1.0.tar.gz
  27. [root@k8snode1 ~]# docker images
  28. REPOSITORY TAG IMAGE ID CREATED SIZE
  29. nginx latest 605c77e624dd 17 months ago 141MB
  30. registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller v1.1.0 ae1a7201ec95 19 months ago 285MB
  31. registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen v1.1.1 c41e9fcadf5a 20 months ago 47.7MB
  32. [root@k8snode2 ~]# docker images
  33. REPOSITORY TAG IMAGE ID CREATED SIZE
  34. nginx latest 605c77e624dd 17 months ago 141MB
  35. registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller v1.1.0 ae1a7201ec95 19 months ago 285MB
  36. registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen v1.1.1 c41e9fcadf5a 20 months ago 47.7MB
  37. # 3.执行yaml文件去创建ingres controller
  38. [root@k8smaster ingress]# kubectl apply -f ingress-controller-deploy.yaml
  39. namespace/ingress-nginx created
  40. serviceaccount/ingress-nginx created
  41. configmap/ingress-nginx-controller created
  42. clusterrole.rbac.authorization.k8s.io/ingress-nginx created
  43. clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
  44. role.rbac.authorization.k8s.io/ingress-nginx created
  45. rolebinding.rbac.authorization.k8s.io/ingress-nginx created
  46. service/ingress-nginx-controller-admission created
  47. service/ingress-nginx-controller created
  48. deployment.apps/ingress-nginx-controller created
  49. ingressclass.networking.k8s.io/nginx created
  50. validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
  51. serviceaccount/ingress-nginx-admission created
  52. clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
  53. clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
  54. role.rbac.authorization.k8s.io/ingress-nginx-admission created
  55. rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
  56. job.batch/ingress-nginx-admission-create created
  57. job.batch/ingress-nginx-admission-patch created
  58. # 4.查看ingress controller的相关命名空间
  59. [root@k8smaster ingress]# kubectl get ns
  60. NAME STATUS AGE
  61. default Active 20h
  62. ingress-nginx Active 30s
  63. kube-node-lease Active 20h
  64. kube-public Active 20h
  65. kube-system Active 20h
  66. # 5.查看ingress controller的相关service
  67. [root@k8smaster ingress]# kubectl get svc -n ingress-nginx
  68. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  69. ingress-nginx-controller NodePort 10.105.213.95 <none> 80:31457/TCP,443:32569/TCP 64s
  70. ingress-nginx-controller-admission ClusterIP 10.98.225.196 <none> 443/TCP 64s
  71. # 6.查看ingress controller的相关pod
  72. [root@k8smaster ingress]# kubectl get pod -n ingress-nginx
  73. NAME READY STATUS RESTARTS AGE
  74. ingress-nginx-admission-create-9sg56 0/1 Completed 0 80s
  75. ingress-nginx-admission-patch-8sctb 0/1 Completed 1 80s
  76. ingress-nginx-controller-6c8ffbbfcf-bmdj9 1/1 Running 0 80s
  77. ingress-nginx-controller-6c8ffbbfcf-j576v 1/1 Running 0 80s
  78. # 第2大步骤:创建pod和暴露pod的服务
  79. [root@k8smaster new]# cat sc-nginx-svc-1.yaml
  80. apiVersion: apps/v1
  81. kind: Deployment
  82. metadata:
  83. name: sc-nginx-deploy
  84. labels:
  85. app: sc-nginx-feng
  86. spec:
  87. replicas: 3
  88. selector:
  89. matchLabels:
  90. app: sc-nginx-feng
  91. template:
  92. metadata:
  93. labels:
  94. app: sc-nginx-feng
  95. spec:
  96. containers:
  97. - name: sc-nginx-feng
  98. image: nginx
  99. imagePullPolicy: IfNotPresent
  100. ports:
  101. - containerPort: 80
  102. ---
  103. apiVersion: v1
  104. kind: Service
  105. metadata:
  106. name: sc-nginx-svc
  107. labels:
  108. app: sc-nginx-svc
  109. spec:
  110. selector:
  111. app: sc-nginx-feng
  112. ports:
  113. - name: name-of-service-port
  114. protocol: TCP
  115. port: 80
  116. targetPort: 80
  117. [root@k8smaster new]# kubectl apply -f sc-nginx-svc-1.yaml
  118. deployment.apps/sc-nginx-deploy created
  119. service/sc-nginx-svc created
  120. [root@k8smaster ingress]# kubectl get pod
  121. NAME READY STATUS RESTARTS AGE
  122. sc-nginx-deploy-7bb895f9f5-hmf2n 1/1 Running 0 7s
  123. sc-nginx-deploy-7bb895f9f5-mczzg 1/1 Running 0 7s
  124. sc-nginx-deploy-7bb895f9f5-zzndv 1/1 Running 0 7s
  125. [root@k8smaster ingress]# kubectl get svc
  126. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  127. kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20h
  128. sc-nginx-svc ClusterIP 10.96.76.55 <none> 80/TCP 26s
  129. # 查看服务器的详细信息,查看Endpoints对应的pod的ip和端口是否正常
  130. [root@k8smaster ingress]# kubectl describe svc sc-nginx-svc
  131. Name: sc-nginx-svc
  132. Namespace: default
  133. Labels: app=sc-nginx-svc
  134. Annotations: <none>
  135. Selector: app=sc-nginx-feng
  136. Type: ClusterIP
  137. IP Families: <none>
  138. IP: 10.96.76.55
  139. IPs: 10.96.76.55
  140. Port: name-of-service-port 80/TCP
  141. TargetPort: 80/TCP
  142. Endpoints: 10.244.185.209:80,10.244.185.210:80,10.244.249.16:80
  143. Session Affinity: None
  144. Events: <none>
  145. # 访问服务暴露的ip
  146. [root@k8smaster ingress]# curl 10.96.76.55
  147. <!DOCTYPE html>
  148. <html>
  149. <head>
  150. <title>Welcome to nginx!</title>
  151. <style>
  152. html { color-scheme: light dark; }
  153. body { width: 35em; margin: 0 auto;
  154. font-family: Tahoma, Verdana, Arial, sans-serif; }
  155. </style>
  156. </head>
  157. <body>
  158. <h1>Welcome to nginx!</h1>
  159. <p>If you see this page, the nginx web server is successfully installed and
  160. working. Further configuration is required.</p>
  161. <p>For online documentation and support please refer to
  162. <a href="http://nginx.org/">nginx.org</a>.<br/>
  163. Commercial support is available at
  164. <a href="http://nginx.com/">nginx.com</a>.</p>
  165. <p><em>Thank you for using nginx.</em></p>
  166. </body>
  167. </html>
  168. # 第3大步骤:启用ingress关联ingress controller 和service
  169. # 创建一个yaml文件,去启动ingress
  170. [root@k8smaster ingress]# cat sc-ingress.yaml
  171. apiVersion: networking.k8s.io/v1
  172. kind: Ingress
  173. metadata:
  174. name: sc-ingress
  175. annotations:
  176. kubernetes.io/ingress.class: nginx # 注释:这个ingress是关联ingress controller的
  177. spec:
  178. ingressClassName: nginx #关联ingress controller
  179. rules:
  180. - host: www.feng.com
  181. http:
  182. paths:
  183. - pathType: Prefix
  184. path: /
  185. backend:
  186. service:
  187. name: sc-nginx-svc
  188. port:
  189. number: 80
  190. - host: www.zhang.com
  191. http:
  192. paths:
  193. - pathType: Prefix
  194. path: /
  195. backend:
  196. service:
  197. name: sc-nginx-svc-2
  198. port:
  199. number: 80
  200. [root@k8smaster ingress]# kubectl apply -f sc-ingress.yaml
  201. ingress.networking.k8s.io/sc-ingress created
  202. # 查看ingress
  203. [root@k8smaster ingress]# kubectl get ingress
  204. NAME CLASS HOSTS ADDRESS PORTS AGE
  205. sc-ingress nginx www.feng.com,www.zhang.com 192.168.2.111,192.168.2.112 80 52s
  206. # 第4大步骤:查看ingress controller 里的nginx.conf 文件里是否有ingress对应的规则
  207. [root@k8smaster ingress]# kubectl get pod -n ingress-nginx
  208. NAME READY STATUS RESTARTS AGE
  209. ingress-nginx-admission-create-9sg56 0/1 Completed 0 6m53s
  210. ingress-nginx-admission-patch-8sctb 0/1 Completed 1 6m53s
  211. ingress-nginx-controller-6c8ffbbfcf-bmdj9 1/1 Running 0 6m53s
  212. ingress-nginx-controller-6c8ffbbfcf-j576v 1/1 Running 0 6m53s
  213. [root@k8smaster ingress]# kubectl exec -n ingress-nginx -it ingress-nginx-controller-6c8ffbbfcf-bmdj9 -- bash
  214. bash-5.1$ cat nginx.conf |grep feng.com
  215. ## start server www.feng.com
  216. server_name www.feng.com ;
  217. ## end server www.feng.com
  218. bash-5.1$ cat nginx.conf |grep zhang.com
  219. ## start server www.zhang.com
  220. server_name www.zhang.com ;
  221. ## end server www.zhang.com
  222. bash-5.1$ cat nginx.conf|grep -C3 upstream_balancer
  223. error_log /var/log/nginx/error.log notice;
  224. upstream upstream_balancer {
  225. server 0.0.0.1:1234; # placeholder
  226. # 获取ingress controller对应的service暴露宿主机的端口,访问宿主机和相关端口,就可以验证ingress controller是否能进行负载均衡
  227. [root@k8smaster ingress]# kubectl get svc -n ingress-nginx
  228. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  229. ingress-nginx-controller NodePort 10.105.213.95 <none> 80:31457/TCP,443:32569/TCP 8m12s
  230. ingress-nginx-controller-admission ClusterIP 10.98.225.196 <none> 443/TCP 8m12s
  231. # 在其他的宿主机或者windows机器上使用域名进行访问
  232. [root@zabbix ~]# vim /etc/hosts
  233. [root@zabbix ~]# cat /etc/hosts
  234. 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
  235. ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
  236. 192.168.2.111 www.feng.com
  237. 192.168.2.112 www.zhang.com
  238. # 因为我们是基于域名做的负载均衡的配置,所以必须要在浏览器里使用域名去访问,不能使用ip地址
  239. # 同时ingress controller做负载均衡的时候是基于http协议的,7层负载均衡。
  240. [root@zabbix ~]# curl www.feng.com
  241. <!DOCTYPE html>
  242. <html>
  243. <head>
  244. <title>Welcome to nginx!</title>
  245. <style>
  246. html { color-scheme: light dark; }
  247. body { width: 35em; margin: 0 auto;
  248. font-family: Tahoma, Verdana, Arial, sans-serif; }
  249. </style>
  250. </head>
  251. <body>
  252. <h1>Welcome to nginx!</h1>
  253. <p>If you see this page, the nginx web server is successfully installed and
  254. working. Further configuration is required.</p>
  255. <p>For online documentation and support please refer to
  256. <a href="http://nginx.org/">nginx.org</a>.<br/>
  257. Commercial support is available at
  258. <a href="http://nginx.com/">nginx.com</a>.</p>
  259. <p><em>Thank you for using nginx.</em></p>
  260. </body>
  261. </html>
  262. # 访问www.zhang.com返回503:不是nginx本身的故障,而是该域名在ingress里配置的后端服务sc-nginx-svc-2此时还没有创建,等第5大步骤创建后即可正常访问
  263. [root@zabbix ~]# curl www.zhang.com
  264. <html>
  265. <head><title>503 Service Temporarily Unavailable</title></head>
  266. <body>
  267. <center><h1>503 Service Temporarily Unavailable</h1></center>
  268. <hr><center>nginx</center>
  269. </body>
  270. </html>
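  # 补充排查示例:503说明ingress controller暂时找不到可用的后端,可以在master上确认一下此时sc-nginx-svc-2确实还不存在(应提示NotFound)
  kubectl get svc sc-nginx-svc-2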
  271. # 第5大步骤:启动第2个服务和pod,使用了pv+pvc+nfs
  272. # 需要提前准备好nfs服务器+创建pv和pvc
  273. [root@k8smaster pv]# pwd
  274. /root/pv
  275. [root@k8smaster pv]# ls
  276. nfs-pvc.yml nfs-pv.yml nginx-deployment.yml
  277. [root@k8smaster pv]# cat nfs-pv.yml
  278. apiVersion: v1
  279. kind: PersistentVolume
  280. metadata:
  281. name: pv-web
  282. labels:
  283. type: pv-web
  284. spec:
  285. capacity:
  286. storage: 10Gi
  287. accessModes:
  288. - ReadWriteMany
  289. storageClassName: nfs # 存储类名字,pvc里写同样的名字才能和这个pv绑定
  290. nfs:
  291. path: "/web" # nfs共享的目录
  292. server: 192.168.2.121 # nfs服务器的ip地址
  293. readOnly: false # 访问模式
  294. [root@k8smaster pv]# kubectl apply -f nfs-pv.yml
  295. [root@k8smaster pv]# kubectl apply -f nfs-pvc.yml
  296. [root@k8smaster pv]# kubectl get pv
  297. NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
  298. pv-web 10Gi RWX Retain Bound default/pvc-web nfs 19h
  299. [root@k8smaster pv]# kubectl get pvc
  300. NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
  301. pvc-web Bound pv-web 10Gi RWX nfs 19h
  302. [root@k8smaster ingress]# cat nginx-deployment-nginx-svc-2.yaml
  303. apiVersion: apps/v1
  304. kind: Deployment
  305. metadata:
  306. name: nginx-deployment
  307. labels:
  308. app: nginx
  309. spec:
  310. replicas: 3
  311. selector:
  312. matchLabels:
  313. app: sc-nginx-feng-2
  314. template:
  315. metadata:
  316. labels:
  317. app: sc-nginx-feng-2
  318. spec:
  319. volumes:
  320. - name: sc-pv-storage-nfs
  321. persistentVolumeClaim:
  322. claimName: pvc-web
  323. containers:
  324. - name: sc-pv-container-nfs
  325. image: nginx
  326. imagePullPolicy: IfNotPresent
  327. ports:
  328. - containerPort: 80
  329. name: "http-server"
  330. volumeMounts:
  331. - mountPath: "/usr/share/nginx/html"
  332. name: sc-pv-storage-nfs
  333. ---
  334. apiVersion: v1
  335. kind: Service
  336. metadata:
  337. name: sc-nginx-svc-2
  338. labels:
  339. app: sc-nginx-svc-2
  340. spec:
  341. selector:
  342. app: sc-nginx-feng-2
  343. ports:
  344. - name: name-of-service-port
  345. protocol: TCP
  346. port: 80
  347. targetPort: 80
  348. [root@k8smaster ingress]# kubectl apply -f nginx-deployment-nginx-svc-2.yaml
  349. deployment.apps/nginx-deployment created
  350. service/sc-nginx-svc-2 created
  351. [root@k8smaster ingress]# kubectl get svc -n ingress-nginx
  352. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  353. ingress-nginx-controller NodePort 10.105.213.95 <none> 80:31457/TCP,443:32569/TCP 24m
  354. ingress-nginx-controller-admission ClusterIP 10.98.225.196 <none> 443/TCP 24m
  355. [root@k8smaster ingress]# kubectl get ingress
  356. NAME CLASS HOSTS ADDRESS PORTS AGE
  357. sc-ingress nginx www.feng.com,www.zhang.com 192.168.2.111,192.168.2.112 80 18m
  358. # 通过域名访问80端口,或者访问ingress controller暴露在宿主机上的NodePort端口31457,都可以到达后端服务
  359. # 使用ingress controller暴露服务时,不需要再记忆30000以上的随机端口,直接用80或者443即可,
  360. # 这一点比单纯用NodePort类型的service暴露服务更有优势
  361. [root@zabbix ~]# curl www.zhang.com
  362. welcome to changsha
  363. hello,world
  364. [root@zabbix ~]# curl www.feng.com
  365. <!DOCTYPE html>
  366. <html>
  367. <head>
  368. <title>Welcome to nginx!</title>
  369. <style>
  370. html { color-scheme: light dark; }
  371. body { width: 35em; margin: 0 auto;
  372. font-family: Tahoma, Verdana, Arial, sans-serif; }
  373. </style>
  374. </head>
  375. <body>
  376. <h1>Welcome to nginx!</h1>
  377. <p>If you see this page, the nginx web server is successfully installed and
  378. working. Further configuration is required.</p>
  379. <p>For online documentation and support please refer to
  380. <a href="http://nginx.org/">nginx.org</a>.<br/>
  381. Commercial support is available at
  382. <a href="http://nginx.com/">nginx.com</a>.</p>
  383. <p><em>Thank you for using nginx.</em></p>
  384. </body>
  385. </html>
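
如果不方便修改/etc/hosts,也可以直接带Host头访问ingress controller的NodePort端口来验证七层转发(补充示例,31457是上面查到的80端口对应的NodePort):

curl -H "Host: www.feng.com" http://192.168.2.111:31457/
curl -H "Host: www.zhang.com" http://192.168.2.112:31457/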

使用dashboard对整个集群资源进行掌控

  1. # 1.先下载recommended.yaml文件
  2. [root@k8smaster dashboard]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
  3. --2023-06-19 10:18:50-- https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
  4. 正在解析主机 raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.108.133, 185.199.111.133, ...
  5. 正在连接 raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... 已连接。
  6. 已发出 HTTP 请求,正在等待回应... 200 OK
  7. 长度:7621 (7.4K) [text/plain]
  8. 正在保存至: “recommended.yaml”
  9. 100%[=============================================================================>] 7,621 --.-K/s 用时 0s
  10. 2023-06-19 10:18:52 (23.6 MB/s) - 已保存 “recommended.yaml” [7621/7621])
  11. [root@k8smaster dashboard]# ls
  12. recommended.yaml
  13. # 2.启动
  14. [root@k8smaster dashboard]# kubectl apply -f recommended.yaml
  15. namespace/kubernetes-dashboard created
  16. serviceaccount/kubernetes-dashboard created
  17. service/kubernetes-dashboard created
  18. secret/kubernetes-dashboard-certs created
  19. secret/kubernetes-dashboard-csrf created
  20. secret/kubernetes-dashboard-key-holder created
  21. configmap/kubernetes-dashboard-settings created
  22. role.rbac.authorization.k8s.io/kubernetes-dashboard created
  23. clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
  24. rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
  25. clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
  26. deployment.apps/kubernetes-dashboard created
  27. service/dashboard-metrics-scraper created
  28. deployment.apps/dashboard-metrics-scraper created
  29. # 3.查看是否启动dashboard的pod
  30. [root@k8smaster dashboard]# kubectl get ns
  31. NAME STATUS AGE
  32. default Active 18h
  33. ingress-nginx Active 13h
  34. kube-node-lease Active 18h
  35. kube-public Active 18h
  36. kube-system Active 18h
  37. kubernetes-dashboard Active 9s
  38. # kubernetes-dashboard 是dashboard自己的命名空间
  39. [root@k8smaster dashboard]# kubectl get pod -n kubernetes-dashboard
  40. NAME READY STATUS RESTARTS AGE
  41. dashboard-metrics-scraper-5b8896d7fc-6kjlr 1/1 Running 0 4m56s
  42. kubernetes-dashboard-cb988587b-s2f6z 1/1 Running 0 4m57s
  43. # 4.查看dashboard对应的服务,因为发布服务的类型是ClusterIP ,外面的机器不能访问,不便于我们通过浏览器访问,因此需要改成NodePort
  44. [root@k8smaster dashboard]# kubectl get svc -n kubernetes-dashboard
  45. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  46. dashboard-metrics-scraper ClusterIP 10.110.32.41 <none> 8000/TCP 4m24s
  47. kubernetes-dashboard ClusterIP 10.106.104.124 <none> 443/TCP 4m24s
  48. # 5.删除已经创建的dashboard 的服务
  49. [root@k8smaster dashboard]# kubectl delete svc kubernetes-dashboard -n kubernetes-dashboard
  50. service "kubernetes-dashboard" deleted
  51. [root@k8smaster dashboard]# kubectl get svc -n kubernetes-dashboard
  52. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  53. dashboard-metrics-scraper ClusterIP 10.110.32.41 <none> 8000/TCP 5m39s
  54. # 6.创建一个nodeport的service
  55. [root@k8smaster dashboard]# vim dashboard-svc.yml
  56. [root@k8smaster dashboard]# cat dashboard-svc.yml
  57. kind: Service
  58. apiVersion: v1
  59. metadata:
  60. labels:
  61. k8s-app: kubernetes-dashboard
  62. name: kubernetes-dashboard
  63. namespace: kubernetes-dashboard
  64. spec:
  65. type: NodePort
  66. ports:
  67. - port: 443
  68. targetPort: 8443
  69. selector:
  70. k8s-app: kubernetes-dashboard
  71. [root@k8smaster dashboard]# kubectl apply -f dashboard-svc.yml
  72. service/kubernetes-dashboard created
  73. [root@k8smaster dashboard]# kubectl get svc -n kubernetes-dashboard
  74. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  75. dashboard-metrics-scraper ClusterIP 10.110.32.41 <none> 8000/TCP 8m11s
  76. kubernetes-dashboard NodePort 10.103.185.254 <none> 443:32571/TCP 37s
  77. # 7.想要访问dashboard服务,就要有访问权限,创建kubernetes-dashboard管理员角色
  78. [root@k8smaster dashboard]# vim dashboard-svc-account.yaml
  79. [root@k8smaster dashboard]# cat dashboard-svc-account.yaml
  80. apiVersion: v1
  81. kind: ServiceAccount
  82. metadata:
  83. name: dashboard-admin
  84. namespace: kube-system
  85. ---
  86. kind: ClusterRoleBinding
  87. apiVersion: rbac.authorization.k8s.io/v1
  88. metadata:
  89. name: dashboard-admin
  90. subjects:
  91. - kind: ServiceAccount
  92. name: dashboard-admin
  93. namespace: kube-system
  94. roleRef:
  95. kind: ClusterRole
  96. name: cluster-admin
  97. apiGroup: rbac.authorization.k8s.io
  98. [root@k8smaster dashboard]# kubectl apply -f dashboard-svc-account.yaml
  99. serviceaccount/dashboard-admin created
  100. clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
  101. # 8.获取dashboard的secret对象的名字
  102. [root@k8smaster dashboard]# kubectl get secret -n kube-system|grep admin|awk '{print $1}'
  103. dashboard-admin-token-hd2nl
  104. [root@k8smaster dashboard]# kubectl describe secret dashboard-admin-token-hd2nl -n kube-system
  105. Name: dashboard-admin-token-hd2nl
  106. Namespace: kube-system
  107. Labels: <none>
  108. Annotations: kubernetes.io/service-account.name: dashboard-admin
  109. kubernetes.io/service-account.uid: 4e42ca6a-e5eb-4672-bf3e-ae22935417ef
  110. Type: kubernetes.io/service-account-token
  111. Data
  112. ====
  113. ca.crt: 1066 bytes
  114. namespace: 11 bytes
  115. token: eyJhbGciOiJSUzI1NiIsImtpZCI6InBBckJ2U051Y3J4NjVPY2VxOVZzRjBIdzdjNzgycFppcVZ5WWFnQlNsS00ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4taGQybmwiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNGU0MmNhNmEtZTVlYi00NjcyLWJmM2UtYWUyMjkzNTQxN2VmIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.EAVV-s6OnS4htu4kvv3UvlZpqzg5Ei1_tNiBLr08GquUxKX09JGvQhsZQYgluNmS2yqad_lxK_Ie_RgwayqfBdXYtugQPM8m9gZHScsUdo_3b8b4ZEUz7KlDzJVBdBvDFSJjz-7cJhtj-HtazRuLluJbeoQV4zXMXvfhDhYt0k126eiqKzvbHhJmNM8U5XViAUmpUPCUjqFHm8tS1Su7aW75R-qXH6aGjGOv7kTpQdOjFeVO-AbFRIcbDOcqYRrKMyZu0yuH9QZGL35L1Lj3HgePsDbwd3jm2ZS05BjuacSFGle6CdZTOB0b5haeUlFrZ6FWsU-2qoQ67ysOwB0xKQ
  116. [root@k8smaster dashboard]#
  117. # 9.获取secret里的token的内容--》token理解为认证的密码
  118. [root@k8smaster dashboard]# kubectl describe secret dashboard-admin-token-hd2nl -n kube-system|awk '/^token/ {print $2}'
  119. eyJhbGciOiJSUzI1NiIsImtpZCI6InBBckJ2U051Y3J4NjVPY2VxOVZzRjBIdzdjNzgycFppcVZ5WWFnQlNsS00ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4taGQybmwiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNGU0MmNhNmEtZTVlYi00NjcyLWJmM2UtYWUyMjkzNTQxN2VmIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.EAVV-s6OnS4htu4kvv3UvlZpqzg5Ei1_tNiBLr08GquUxKX09JGvQhsZQYgluNmS2yqad_lxK_Ie_RgwayqfBdXYtugQPM8m9gZHScsUdo_3b8b4ZEUz7KlDzJVBdBvDFSJjz-7cJhtj-HtazRuLluJbeoQV4zXMXvfhDhYt0k126eiqKzvbHhJmNM8U5XViAUmpUPCUjqFHm8tS1Su7aW75R-qXH6aGjGOv7kTpQdOjFeVO-AbFRIcbDOcqYRrKMyZu0yuH9QZGL35L1Lj3HgePsDbwd3jm2ZS05BjuacSFGle6CdZTOB0b5haeUlFrZ6FWsU-2qoQ67ysOwB0xKQ
  120. # 10.浏览器里访问
  121. [root@k8smaster dashboard]# kubectl get svc -n kubernetes-dashboard
  122. NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  123. dashboard-metrics-scraper ClusterIP 10.110.32.41 <none> 8000/TCP 11m
  124. kubernetes-dashboard NodePort 10.103.185.254 <none> 443:32571/TCP 4m4s
  125. # 访问宿主机的ip+端口号
  126. https://192.168.2.104:32571/#/login
  127. # 11.输入上面获得的token,登录。
  128. # 如果Chrome提示自签名证书不安全,可以在告警页面直接用键盘输入 thisisunsafe 跳过
  129. https://192.168.2.104:32571/#/workloads?namespace=default
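
拿到token后,除了在浏览器里登录dashboard,也可以直接用它访问apiserver验证cluster-admin权限是否生效(补充示例,仅供参考;6443是kubeadm默认的apiserver端口,-k用于忽略自签名证书):

TOKEN=$(kubectl -n kube-system describe secret dashboard-admin-token-hd2nl | awk '/^token/ {print $2}')
curl -k -H "Authorization: Bearer $TOKEN" https://192.168.2.104:6443/api/v1/namespaces/default/pods | head -20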

九.安装zabbix和promethues对整个集群资源(cpu,内存,网络带宽,web服务,数据库服务,磁盘IO等)进行监控。

  1. # 部署zabbix
  2. # 1.安装zabbix服务器的源
  3. 源(repository):软件仓库,告诉yum去哪里下载zabbix官方提供的软件包
  4. [root@zabbix ~]# rpm -Uvh https://repo.zabbix.com/zabbix/5.0/rhel/7/x86_64/zabbix-release-5.0-1.el7.noarch.rpm
  5. 获取https://repo.zabbix.com/zabbix/5.0/rhel/7/x86_64/zabbix-release-5.0-1.el7.noarch.rpm
  6. 警告:/var/tmp/rpm-tmp.lL96Rw: 头V4 RSA/SHA512 Signature, 密钥 ID a14fe591: NOKEY
  7. 准备中... ################################# [100%]
  8. 正在升级/安装...
  9. 1:zabbix-release-5.0-1.el7 ################################# [100%]
  10. [root@zabbix ~]# cd /etc/yum.repos.d/
  11. [root@zabbix yum.repos.d]# ls
  12. CentOS-Base.repo CentOS-Debuginfo.repo CentOS-Media.repo CentOS-Vault.repo zabbix.repo
  13. CentOS-CR.repo CentOS-fasttrack.repo CentOS-Sources.repo CentOS-x86_64-kernel.repo
  14. CentOS-Base.repo 仓库文件: 用来找到centos官方提供的下载软件的地方的文件
  15. Base 存放centos官方基本软件的仓库
  16. zabbix.repo 帮助我们找到zabbix官方提供的软件下载地方的文件
  17. [root@zabbix yum.repos.d]# cat zabbix.repo
  18. [zabbix] 源的名字
  19. name=Zabbix Official Repository - $basearch 对这个源的介绍
  20. baseurl=http://repo.zabbix.com/zabbix/5.0/rhel/7/$basearch/ 具体源的位置
  21. enabled=1 表示这个源可以使用
  22. gpgcheck=1 操作系统会对下载的软件进行gpg检验码的检查,防止软件不是正版的
  23. gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-ZABBIX-A14FE591 --》防伪码
  24. # 2.安装zabbix相关的软件
  25. [root@zabbix yum.repos.d]# yum install zabbix-server-mysql zabbix-agent -y
  26. zabbix-server-mysql 安装zabbix server和连接mysql功能的软件
  27. zabbix-agent zabbix的代理软件
  28. # 3.安装Zabbix前端
  29. [root@zabbix yum.repos.d]# yum install centos-release-scl -y
  30. # 修改仓库文件,启用前端的源
  31. [root@zabbix yum.repos.d]# vim zabbix.repo
  32. [zabbix-frontend]
  33. name=Zabbix Official Repository frontend - $basearch
  34. baseurl=http://repo.zabbix.com/zabbix/5.0/rhel/7/$basearch/frontend
  35. enabled=1 # 修改为1
  36. gpgcheck=1
  37. gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-ZABBIX-A14FE591
  38. # 安装web相关的软件
  39. [root@zabbix yum.repos.d]# yum install zabbix-web-mysql-scl zabbix-nginx-conf-scl -y
  40. # 4.安装mariadb数据库
  41. [root@zabbix yum.repos.d]# yum install mariadb mariadb-server -y
  42. mariadb-server 服务器端的软件包
  43. mariadb 提供客户端命令的软件包
  44. # 注意:如果已经安装过mysql的centos系统,就不需要安装mariadb
  45. [root@zabbix yum.repos.d]# service mariadb start # 启动mariadb
  46. Redirecting to /bin/systemctl start mariadb.service
  47. [root@zabbix yum.repos.d]# systemctl enable mariadb # 设置开机启动mariadb数据库
  48. Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.
  49. # 查看mysqld进程运行
  50. [root@zabbix yum.repos.d]# ps aux|grep mysqld
  51. mysql 11940 0.1 0.0 113412 1596 ? Ss 15:09 0:00 /bin/sh /usr/bin/mysqld_safe --basedir=/usr
  52. mysql 12105 1.1 4.3 968920 80820 ? Sl 15:09 0:00 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --log-error=/var/log/mariadb/mariadb.log --pid-file=/var/run/mariadb/mariadb.pid --socket=/var/lib/mysql/mysql.sock
  53. root 12159 0.0 0.0 112824 980 pts/0 S+ 15:09 0:00 grep --color=auto mysqld
  54. [root@zabbix yum.repos.d]# netstat -anplut|grep 3306
  55. tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 12105/mysqld
  56. # 5.在数据库主机上运行以下命令
  57. [root@zabbix yum.repos.d]# mysql -uroot -p
  58. Enter password:
  59. Welcome to the MariaDB monitor. Commands end with ; or \g.
  60. Your MariaDB connection id is 2
  61. Server version: 5.5.68-MariaDB MariaDB Server
  62. Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
  63. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
  64. MariaDB [(none)]> show databases;
  65. +--------------------+
  66. | Database |
  67. +--------------------+
  68. | information_schema |
  69. | mysql |
  70. | performance_schema |
  71. | test |
  72. +--------------------+
  73. 4 rows in set (0.01 sec)
  74. MariaDB [(none)]> create database zabbix character set utf8 collate utf8_bin;
  75. Query OK, 1 row affected (0.00 sec)
  76. MariaDB [(none)]> create user zabbix@localhost identified by 'sc123456'; # 创建用户zabbix@localhost 密码是sc123456
  77. Query OK, 0 rows affected (0.00 sec)
  78. MariaDB [(none)]> grant all privileges on zabbix.* to zabbix@localhost; #授权zabbix@localhost用户对zabbix.*库里的表有所有的权限(insert,delete,update,select等)
  79. Query OK, 0 rows affected (0.00 sec)
  80. MariaDB [(none)]> set global log_bin_trust_function_creators = 1;
  81. Query OK, 0 rows affected (0.00 sec)
  82. MariaDB [(none)]> exit
  83. Bye
  84. # 导入初始化数据,会在zabbix库里新建很多的表
  85. [root@zabbix yum.repos.d]# cd /usr/share/doc/zabbix-server-mysql-5.0.35/
  86. [root@zabbix zabbix-server-mysql-5.0.35]# ls
  87. AUTHORS ChangeLog COPYING create.sql.gz double.sql NEWS README
  88. [root@zabbix zabbix-server-mysql-5.0.33]# zcat create.sql.gz |mysql -uzabbix -p'sc123456' zabbix
  89. [root@zabbix zabbix-server-mysql-5.0.33]# mysql -uzabbix -psc123456
  90. Welcome to the MariaDB monitor. Commands end with ; or \g.
  91. Your MariaDB connection id is 4
  92. Server version: 5.5.68-MariaDB MariaDB Server
  93. Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
  94. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
  95. MariaDB [(none)]> show databases;
  96. +--------------------+
  97. | Database |
  98. +--------------------+
  99. | information_schema |
  100. | test |
  101. | zabbix |
  102. +--------------------+
  103. 3 rows in set (0.00 sec)
  104. MariaDB [(none)]> use zabbix;
  105. Reading table information for completion of table and column names
  106. You can turn off this feature to get a quicker startup with -A
  107. Database changed
  108. MariaDB [zabbix]> show tables;
  109. +----------------------------+
  110. | Tables_in_zabbix |
  111. +----------------------------+
  112. | acknowledges |
  113. | actions |
  114. | alerts |
  115. | application_discovery |
  116. | application_prototype |
  117. # 导入数据库架构后禁用log_bin_trust_function_creators选项
  118. [root@zabbix zabbix-server-mysql-5.0.33]# mysql -uroot -p
  119. Enter password:
  120. Welcome to the MariaDB monitor. Commands end with ; or \g.
  121. Your MariaDB connection id is 5
  122. Server version: 5.5.68-MariaDB MariaDB Server
  123. Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
  124. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
  125. MariaDB [(none)]> set global log_bin_trust_function_creators = 0;
  126. Query OK, 0 rows affected (0.00 sec)
  127. MariaDB [(none)]> exit
  128. Bye
  129. # 6.为 Zabbix 服务器配置数据库
  130. # 编辑文件 /etc/zabbix/zabbix_server.conf
  131. [root@zabbix zabbix-server-mysql-5.0.33]# cd /etc/zabbix/
  132. [root@zabbix zabbix]# vim zabbix_server.conf
  133. # DBPassword=
  134. DBPassword=sc123456
  135. # 7.为 Zabbix 前端配置 PHP
  136. # 编辑文件 /etc/opt/rh/rh-nginx116/nginx/conf.d/zabbix.conf 取消注释
  137. [root@zabbix conf.d]# cd /etc/opt/rh/rh-nginx116/nginx/conf.d/
  138. [root@zabbix conf.d]# ls
  139. zabbix.conf
  140. [root@zabbix conf.d]# vim zabbix.conf
  141. server {
  142. listen 8080;
  143. server_name zabbix.com;
  144. # 编辑/etc/opt/rh/rh-nginx116/nginx/nginx.conf
  145. [root@zabbix conf.d]# cd /etc/opt/rh/rh-nginx116/nginx/
  146. [root@zabbix nginx]# vim nginx.conf
  147. server {
  148. listen 80 default_server; #修改80为8080
  149. listen [::]:80 default_server;
  150. # 避免zabbix和nginx监听同一个端口,导致zabbix启动不起来。
  151. # 编辑文件 /etc/opt/rh/rh-php72/php-fpm.d/zabbix.conf
  152. [root@zabbix nginx]# cd /etc/opt/rh/rh-php72/php-fpm.d
  153. [root@zabbix php-fpm.d]# ls
  154. www.conf zabbix.conf
  155. [root@zabbix php-fpm.d]# vim zabbix.conf
  156. listen.acl_users = apache,nginx
  157. php_value[date.timezone] = Asia/Shanghai
  158. # 建议一定要关闭selinux,不然会导致zabbix_server启动不了
  159. # 8.启动Zabbix服务器和代理进程并且设置开机启动
  160. [root@zabbix php-fpm.d]# systemctl restart zabbix-server zabbix-agent rh-nginx116-nginx rh-php72-php-fpm
  161. [root@zabbix php-fpm.d]# systemctl enable zabbix-server zabbix-agent rh-nginx116-nginx rh-php72-php-fpm
  162. Created symlink from /etc/systemd/system/multi-user.target.wants/zabbix-server.service to /usr/lib/systemd/system/zabbix-server.service.
  163. Created symlink from /etc/systemd/system/multi-user.target.wants/zabbix-agent.service to /usr/lib/systemd/system/zabbix-agent.service.
  164. Created symlink from /etc/systemd/system/multi-user.target.wants/rh-nginx116-nginx.service to /usr/lib/systemd/system/rh-nginx116-nginx.service.
  165. Created symlink from /etc/systemd/system/multi-user.target.wants/rh-php72-php-fpm.service to /usr/lib/systemd/system/rh-php72-php-fpm.service.
  166. # 9.浏览器里访问
  167. http://192.168.2.117:8080
  168. # 默认登录的账号和密码
  169. username: Admin
  170. password: zabbix
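  # 补充示例:要让zabbix监控到k8s各节点,还需要在每台被监控机上安装agent并把Server指向zabbix服务器(以下命令在被监控节点上执行,仅供参考)
  rpm -Uvh https://repo.zabbix.com/zabbix/5.0/rhel/7/x86_64/zabbix-release-5.0-1.el7.noarch.rpm
  yum install zabbix-agent -y
  sed -i 's/^Server=127.0.0.1/Server=192.168.2.117/' /etc/zabbix/zabbix_agentd.conf
  systemctl restart zabbix-agent && systemctl enable zabbix-agent
  # 之后在zabbix前端的 配置->主机->创建主机 里按节点ip添加主机并关联Linux模板,即可出监控数据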
  171. # 使用Prometheus监控Kubernetes
  172. # 1.在所有节点提前下载镜像
  173. docker pull prom/node-exporter
  174. docker pull prom/prometheus:v2.0.0
  175. docker pull grafana/grafana:6.1.4
  176. [root@k8smaster ~]# docker images
  177. REPOSITORY TAG IMAGE ID CREATED SIZE
  178. prom/node-exporter latest 1dbe0e931976 18 months ago 20.9MB
  179. grafana/grafana 6.1.4 d9bdb6044027 4 years ago 245MB
  180. prom/prometheus v2.0.0 67141fa03496 5 years ago 80.2MB
  181. [root@k8snode1 ~]# docker images
  182. REPOSITORY TAG IMAGE ID CREATED SIZE
  183. prom/node-exporter latest 1dbe0e931976 18 months ago 20.9MB
  184. grafana/grafana 6.1.4 d9bdb6044027 4 years ago 245MB
  185. prom/prometheus v2.0.0 67141fa03496 5 years ago 80.2MB
  186. [root@k8snode2 ~]# docker images
  187. REPOSITORY TAG IMAGE ID CREATED SIZE
  188. prom/node-exporter latest 1dbe0e931976 18 months ago 20.9MB
  189. grafana/grafana 6.1.4 d9bdb6044027 4 years ago 245MB
  190. prom/prometheus v2.0.0 67141fa03496 5 years ago 80.2MB
  191. # 2.采用daemonset方式部署node-exporter
  192. [root@k8smaster prometheus]# ll
  193. 总用量 36
  194. -rw-r--r-- 1 root root 5632 6月 25 16:23 configmap.yaml
  195. -rw-r--r-- 1 root root 1515 6月 25 16:26 grafana-deploy.yaml
  196. -rw-r--r-- 1 root root 256 6月 25 16:27 grafana-ing.yaml
  197. -rw-r--r-- 1 root root 225 6月 25 16:27 grafana-svc.yaml
  198. -rw-r--r-- 1 root root 716 6月 25 16:22 node-exporter.yaml
  199. -rw-r--r-- 1 root root 1104 6月 25 16:25 prometheus.deploy.yml
  200. -rw-r--r-- 1 root root 233 6月 25 16:25 prometheus.svc.yml
  201. -rw-r--r-- 1 root root 716 6月 25 16:23 rbac-setup.yaml
  202. [root@k8smaster prometheus]# cat node-exporter.yaml
  203. ---
  204. apiVersion: apps/v1
  205. kind: DaemonSet
  206. metadata:
  207. name: node-exporter
  208. namespace: kube-system
  209. labels:
  210. k8s-app: node-exporter
  211. spec:
  212. selector:
  213. matchLabels:
  214. k8s-app: node-exporter
  215. template:
  216. metadata:
  217. labels:
  218. k8s-app: node-exporter
  219. spec:
  220. containers:
  221. - image: prom/node-exporter
  222. name: node-exporter
  223. ports:
  224. - containerPort: 9100
  225. protocol: TCP
  226. name: http
  227. ---
  228. apiVersion: v1
  229. kind: Service
  230. metadata:
  231. labels:
  232. k8s-app: node-exporter
  233. name: node-exporter
  234. namespace: kube-system
  235. spec:
  236. ports:
  237. - name: http
  238. port: 9100
  239. nodePort: 31672
  240. protocol: TCP
  241. type: NodePort
  242. selector:
  243. k8s-app: node-exporter
  244. [root@k8smaster prometheus]# kubectl apply -f node-exporter.yaml
  245. daemonset.apps/node-exporter created
  246. service/node-exporter created
  247. [root@k8smaster prometheus]# kubectl get pods -A
  248. NAMESPACE NAME READY STATUS RESTARTS AGE
  249. kube-system node-exporter-fcmx5 1/1 Running 0 47s
  250. kube-system node-exporter-qccwb 1/1 Running 0 47s
  251. [root@k8smaster prometheus]# kubectl get daemonset -A
  252. NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
  253. kube-system calico-node 3 3 3 3 3 kubernetes.io/os=linux 7d
  254. kube-system kube-proxy 3 3 3 3 3 kubernetes.io/os=linux 7d
  255. kube-system node-exporter 2 2 2 2 2 <none> 2m29s
  256. [root@k8smaster prometheus]# kubectl get service -A
  257. NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  258. kube-system node-exporter NodePort 10.111.247.142 <none> 9100:31672/TCP 3m24s
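With the NodePort exposed, node-exporter can be spot-checked from any machine that can reach the nodes (a minimal sketch; 192.168.2.111 is one of the node IPs from the plan above):

# fetch a few raw metrics through the NodePort
curl -s http://192.168.2.111:31672/metrics | head -n 20
# or just look for a well-known metric
curl -s http://192.168.2.111:31672/metrics | grep ^node_load1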
# 3. Deploy Prometheus
[root@k8smaster prometheus]# cat rbac-setup.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: kube-system
[root@k8smaster prometheus]# kubectl apply -f rbac-setup.yaml
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
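Optionally confirm that the new ServiceAccount really has the permissions Prometheus needs before starting the server (both commands should print "yes" if the ClusterRoleBinding took effect):

[root@k8smaster prometheus]# kubectl auth can-i list pods --as=system:serviceaccount:kube-system:prometheus
[root@k8smaster prometheus]# kubectl auth can-i watch nodes --as=system:serviceaccount:kube-system:prometheus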
[root@k8smaster prometheus]# cat configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: kube-system
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      evaluation_interval: 15s
    scrape_configs:
    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
    - job_name: 'kubernetes-nodes'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics
    - job_name: 'kubernetes-cadvisor'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
    - job_name: 'kubernetes-services'
      kubernetes_sd_configs:
      - role: service
      metrics_path: /probe
      params:
        module: [http_2xx]
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: kubernetes_name
    - job_name: 'kubernetes-ingresses'
      kubernetes_sd_configs:
      - role: ingress
      relabel_configs:
      - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
        regex: (.+);(.+);(.+)
        replacement: ${1}://${2}${3}
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_ingress_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_ingress_name]
        target_label: kubernetes_name
    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name
[root@k8smaster prometheus]# kubectl apply -f configmap.yaml
configmap/prometheus-config created
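A quick check that the ConfigMap landed and that the embedded config kept its indentation (the jsonpath expression simply prints the first lines of prometheus.yml stored in the ConfigMap):

[root@k8smaster prometheus]# kubectl get configmap prometheus-config -n kube-system
[root@k8smaster prometheus]# kubectl get configmap prometheus-config -n kube-system -o jsonpath='{.data.prometheus\.yml}' | head -n 5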
[root@k8smaster prometheus]# cat prometheus.deploy.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: prometheus-deployment
  name: prometheus
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - image: prom/prometheus:v2.0.0
        name: prometheus
        command:
        - "/bin/prometheus"
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus"
        - "--storage.tsdb.retention=24h"
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: "/prometheus"
          name: data
        - mountPath: "/etc/prometheus"
          name: config-volume
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 2500Mi
      serviceAccountName: prometheus
      volumes:
      - name: data
        emptyDir: {}
      - name: config-volume
        configMap:
          name: prometheus-config
[root@k8smaster prometheus]# kubectl apply -f prometheus.deploy.yml
deployment.apps/prometheus created
[root@k8smaster prometheus]# cat prometheus.svc.yml
kind: Service
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30003
  selector:
    app: prometheus
[root@k8smaster prometheus]# kubectl apply -f prometheus.svc.yml
service/prometheus created
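The deployment takes a moment to pull and start; once the pod is Running, the Prometheus HTTP API answers on the NodePort (a minimal check; 192.168.2.104 is the master node from the IP plan):

[root@k8smaster prometheus]# kubectl get pods -n kube-system -l app=prometheus
# "up" should list the scrape targets discovered through the ConfigMap above
[root@k8smaster prometheus]# curl -s 'http://192.168.2.104:30003/api/v1/query?query=up'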
# 4. Deploy Grafana
[root@k8smaster prometheus]# cat grafana-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-core
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
        component: core
    spec:
      containers:
      - image: grafana/grafana:6.1.4
        name: grafana-core
        imagePullPolicy: IfNotPresent
        # env:
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        env:
          # The following env variables set up basic auth with the default admin user and admin password.
          - name: GF_AUTH_BASIC_ENABLED
            value: "true"
          - name: GF_AUTH_ANONYMOUS_ENABLED
            value: "false"
          # - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          #   value: Admin
          # does not really work, because of template variables in exported dashboards:
          # - name: GF_DASHBOARDS_JSON_ENABLED
          #   value: "true"
        readinessProbe:
          httpGet:
            path: /login
            port: 3000
          # initialDelaySeconds: 30
          # timeoutSeconds: 1
        #volumeMounts:                      # no persistent storage mounted for now
        #- name: grafana-persistent-storage
        #  mountPath: /var
      #volumes:
      #- name: grafana-persistent-storage
      #  emptyDir: {}
[root@k8smaster prometheus]# kubectl apply -f grafana-deploy.yaml
deployment.apps/grafana-core created
[root@k8smaster prometheus]# cat grafana-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  type: NodePort
  ports:
    - port: 3000
  selector:
    app: grafana
    component: core
[root@k8smaster prometheus]# kubectl apply -f grafana-svc.yaml
service/grafana created
[root@k8smaster prometheus]# cat grafana-ing.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana
  namespace: kube-system
spec:
  rules:
  - host: k8s.grafana
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 3000
[root@k8smaster prometheus]# kubectl apply -f grafana-ing.yaml
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
ingress.extensions/grafana created
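Before moving on, confirm the Ingress object was accepted (the host k8s.grafana only resolves once it is added to a client's hosts file or internal DNS):

[root@k8smaster prometheus]# kubectl get ingress -n kube-system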
# 5. Check and test
[root@k8smaster prometheus]# kubectl get pods -A
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   grafana-core-78958d6d67-49c56   1/1     Running   0          31m
kube-system   node-exporter-fcmx5             1/1     Running   0          9m33s
kube-system   node-exporter-qccwb             1/1     Running   0          9m33s
kube-system   prometheus-68546b8d9-qxsm7      1/1     Running   0          2m47s
[root@k8smaster mysql]# kubectl get svc -A
NAMESPACE     NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kube-system   grafana         NodePort   10.110.87.158    <none>        3000:31267/TCP   31m
kube-system   node-exporter   NodePort   10.111.247.142   <none>        9100:31672/TCP   39m
kube-system   prometheus      NodePort   10.102.0.186     <none>        9090:30003/TCP   32m
# Access in a browser
# Metrics collected by node-exporter
http://192.168.2.104:31672/metrics
# Prometheus web UI
http://192.168.2.104:30003
# Grafana web UI
http://192.168.2.104:31267
# Account: admin; password: *******
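The Prometheus data source can also be added to Grafana over its HTTP API instead of clicking through the UI. A minimal sketch, assuming the default admin/admin credentials (substitute whatever password was actually set) and the in-cluster Service name created above:

curl -s -X POST http://admin:admin@192.168.2.104:31267/api/datasources \
  -H 'Content-Type: application/json' \
  -d '{"name":"prometheus","type":"prometheus","url":"http://prometheus.kube-system.svc:9090","access":"proxy","isDefault":true}'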

十. Stress-test the whole k8s cluster and the related servers with the ab benchmarking tool.

# 1. Run the php-apache server and expose the service
[root@k8smaster hpa]# ls
php-apache.yaml
[root@k8smaster hpa]# cat php-apache.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  selector:
    matchLabels:
      run: php-apache
  template:
    metadata:
      labels:
        run: php-apache
    spec:
      containers:
      - name: php-apache
        image: k8s.gcr.io/hpa-example
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
  labels:
    run: php-apache
spec:
  ports:
  - port: 80
  selector:
    run: php-apache
[root@k8smaster hpa]# kubectl apply -f php-apache.yaml
deployment.apps/php-apache created
service/php-apache created
[root@k8smaster hpa]# kubectl get deploy
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
php-apache   1/1     1            1           93s
[root@k8smaster hpa]# kubectl get pod
NAME                         READY   STATUS    RESTARTS   AGE
php-apache-567d9f79d-mhfsp   1/1     Running   0          44s
# Create the HPA
[root@k8smaster hpa]# kubectl autoscale deployment php-apache --cpu-percent=10 --min=1 --max=10
horizontalpodautoscaler.autoscaling/php-apache autoscaled
[root@k8smaster hpa]# kubectl get hpa
NAME         REFERENCE               TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   <unknown>/10%   1         10        0          7s
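For reference, the same autoscaler can be declared as a manifest instead of the imperative kubectl autoscale command above; a sketch using the autoscaling/v1 API (the file name hpa-php-apache.yaml is just an example):

cat <<'EOF' > hpa-php-apache.yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 10
EOF
kubectl apply -f hpa-php-apache.yaml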
# Test: generate load
[root@k8smaster hpa]# kubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"
If you don't see a command prompt, try pressing enter.
OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK!OK
[root@k8smaster hpa]# kubectl get hpa
NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   0%/10%     1         10        1          3m24s
[root@k8smaster hpa]# kubectl get hpa
NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   238%/10%   1         10        1          3m41s
[root@k8smaster hpa]# kubectl get hpa
NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   250%/10%   1         10        4          3m57s
# Once CPU utilization drops back to 0, the HPA automatically scales the replica count down to 1. The scaling change can take a few minutes to complete.
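To follow the scale-up and the later scale-down as they happen, watch the HPA and the deployment from a second terminal:

[root@k8smaster hpa]# kubectl get hpa php-apache --watch
[root@k8smaster hpa]# kubectl get deployment php-apache --watch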
# 2. Stress-test the web service and observe Prometheus and the dashboard
# Run ab against the web service at 192.168.2.112:30001 while watching the pods in Prometheus and the dashboard
# Four ways to observe:
kubectl top pod
http://192.168.2.117:3000/
http://192.168.2.117:9090/targets
https://192.168.2.104:32571/
[root@nfs ~]# yum install httpd-tools -y
[root@nfs data]# ab -n 1000000 -c 10000 -g output.dat http://192.168.2.112:30001/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.2.112 (be patient)
apr_socket_recv: Connection reset by peer (104)
Total of 3694 requests completed
# 1,000 requests with a concurrency of 10: ab -n 1000 -c 10 -g output.dat http://192.168.2.112:30001/
# -t 60 sends as many requests as possible within 60 seconds
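The file written by -g is tab-separated with one row per request (columns: starttime, seconds, ctime, dtime, ttime, wait), so the latency curve can be plotted directly. A minimal sketch, assuming gnuplot is installed on the nfs host:

[root@nfs data]# yum install gnuplot -y
[root@nfs data]# gnuplot <<'EOF'
set terminal png size 800,600
set output "ab-latency.png"
# column 5 (ttime) is the total time per request in ms; every ::1 skips the header row
plot "output.dat" every ::1 using 5 with lines title "total request time (ms)"
EOF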
