
Building a High-Performance Integrated Web Platform on Kubernetes

Table of Contents

Project description
Project architecture diagram
Project environment: k8s, docker, centos7.9, nginx, prometheus, grafana, flask, ansible, Jenkins, etc.
1. Plan the overall cluster architecture: a single-master k8s environment (one master, two workers), with dashboard deployed to watch cluster resources
2. Deploy ansible to automate routine operations; deploy a firewall server and a bastion host to harden the cluster
3. Deploy the bastion host (JumpServer) and the firewall
4. Deploy an NFS server as shared storage for the whole web cluster, accessed by all web pods through PV, PVC, and volume mounts
5. Build a simple image with Go, start nginx, and use HPA to scale horizontally on CPU usage
6. Build the CI/CD environment: install gitlab, Jenkins, and harbor to pipeline code releases, image builds, and data backups
7. Deploy prometheus + grafana for routine performance monitoring (CPU, memory, network bandwidth, disk IO, etc.) of every server, including the k8s nodes
8. Use ingress for domain-name-based load balancing of the web services
9. Use probes (liveness, readiness, startup) with httpGet and exec to monitor the web pods and restart them on failure, improving reliability
10. Stress-test the web services in the k8s cluster with ab

Building a High-Performance Integrated Web Platform on Kubernetes

Project description:

Simulate an enterprise k8s test environment: deploy web, mysql, nfs, harbor, Prometheus, gitlab, Jenkins, and related applications to build a highly available, high-performance web system, with monitoring across the whole k8s cluster and a complete CI/CD pipeline.

Project architecture diagram:

Project environment: k8s, docker, centos7.9, nginx, prometheus, grafana, flask, ansible, Jenkins, etc.

Steps:

1. Plan the overall cluster architecture: a single-master k8s environment (one master, two workers), with dashboard deployed to watch cluster resources.

Plan the IP addresses:

master (also runs Jenkins)       192.168.0.20
worker1                          192.168.0.21
worker2                          192.168.0.22
ansible                          192.168.0.30
firewall                         192.168.0.31
bastion host (jumpserver proxy)  192.168.0.32
prometheus                       192.168.0.33
harbor                           192.168.0.34
gitlab                           192.168.0.35
nfs server                       192.168.0.36

# change the hostname to master
hostnamectl set-hostname master
su    # open a new shell so the prompt reflects the new hostname

Disable SELinux and firewalld

# stop the firewall and disable SELinux
[root@ansible ~]# systemctl stop firewalld
[root@ansible ~]# systemctl disable firewalld
[root@ansible ~]# getenforce
Disabled
[root@ansible ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# do the same on every other machine

# static IP configuration
[root@ansible ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO=none
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.0.30
GATEWAY=192.168.0.2
DNS1=8.8.8.8
DNS2=114.114.114.114
# assign the planned IP on every other machine the same way

2. Deploy ansible to automate routine operations, and deploy the firewall server and bastion host to harden the whole cluster.

# set up passwordless SSH between ansible and the kubernetes cluster
# keep pressing Enter to accept the defaults
[root@ansible ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:BT7myvQ1r1QoEJgurdR4MZxdCulsFbyC3S4j/08xT5E root@ansible
The key's randomart image is:
+---[RSA 2048]----+
| ..Booo |
| O.++ . . |
| X =o.+ E |
| = @ o+ o o |
| . = o. S = . |
| o oo.o B + |
| o oo o o . |
| . . . . |
| .... . |
+----[SHA256]-----+
# copy ansible's id_rsa.pub to the machines in the cluster
[root@ansible ~]# ssh-copy-id master
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'master (192.168.0.20)' can't be established.
ECDSA key fingerprint is SHA256:xactOuiFsm9merQVjdeiV4iZwI4rXUnviFYTXL2h8fc.
ECDSA key fingerprint is MD5:69:58:6b:ab:c4:8c:27:e2:b2:7c:31:bb:63:20:81:61.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@master's password:
Permission denied, please try again.
root@master's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'master'"
and check to make sure that only the key(s) you wanted were added.
[root@ansible .ssh]# ls
id_rsa id_rsa.pub known_hosts
# the hostnames resolve because /etc/hosts was filled in beforehand
[root@ansible .ssh]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.20 master
192.168.0.21 worker1
192.168.0.22 worker2
192.168.0.30 ansible
# ansible's own /etc/hosts holds more entries, since it manages more nodes
# test the logins
[root@ansible ~]# ssh worker1
Last login: Wed Apr 3 11:11:49 2024 from ansible
[root@worker1 ~]#
# test the others the same way
[root@ansible ~]# ssh worker2
[root@ansible ~]# ssh master
# install ansible
[root@ansible .ssh]# yum install epel-release -y
[root@ansible .ssh]# yum install ansible -y
# write the host inventory
[master]
192.168.0.20
[workers]
192.168.0.21
192.168.0.22
[nfs]
192.168.0.36
[gitlab]
192.168.0.35
[harbor]
192.168.0.34
[prometheus]
192.168.0.33
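With the inventory in place, a quick ad-hoc check confirms that ansible can reach every group. A minimal sketch, assuming the inventory above was saved to the default /etc/ansible/hosts:

[root@ansible ~]# ansible all -m ping      # every host should answer "pong"
[root@ansible ~]# ansible workers -m shell -a "uptime"
# the same copy module is used later to distribute node_exporter and the ingress images
[root@ansible ~]# ansible nfs -m copy -a "src=./node_exporter-1.4.0-rc.0.linux-amd64.tar.gz dest=/root/"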

3. Deploy the bastion host and the firewall

Deploy the bastion host
JumpServer installs in just two steps:
prepare a 64-bit Linux host with at least 2 cores and 4 GB RAM and internet access,
then run the following one-line installer as root.

curl -sSL https://resource.fit2cloud.com/jumpserver/jumpserver/releases/latest/download/quick_start.sh | bash

# the installer prints the following once it finishes

>>> Installation complete
1. Start the services, then access the web UI:
cd /opt/jumpserver-installer-v3.10.7
./jmsctl.sh start

2. Other management commands:
./jmsctl.sh stop
./jmsctl.sh restart
./jmsctl.sh backup
./jmsctl.sh upgrade
For more commands, see ./jmsctl.sh --help

3. Web access:
http://192.168.0.32:80
default user: admin  default password: admin

4. SSH/SFTP access:
ssh -p2222 admin@192.168.0.32
sftp -P2222 admin@192.168.0.32

# seeing this output means the initial JumpServer deployment succeeded

# deploy the firewall

# firewall configuration: ens36 is the WAN interface, ens33 is the LAN interface
[root@firewalld ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:e7:7d:f3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.31/24 brd 192.168.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fee7:7df3/64 scope link
       valid_lft forever preferred_lft forever
3: ens36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:e7:7d:fd brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.5/24 brd 192.168.1.255 scope global noprefixroute dynamic ens36
       valid_lft 5059sec preferred_lft 5059sec
    inet6 fe80::347c:1701:c765:777b/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
# point each internal server's default gateway at the firewall's LAN-side IP
[root@nfs ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO=none
NAME=ens33
DEVICE=ens33
ONBOOT=yes
GATEWAY=192.168.0.31
IPADDR=192.168.0.36
DNS1=8.8.8.8

# check the routes
[root@nfs ~]# ip route
default via 192.168.1.5 dev ens33 proto static metric 100
192.168.0.0/24 dev ens33 proto kernel scope link src 192.168.0.36 metric 100
192.168.1.5 dev ens33 proto static scope link metric 100

# write a script implementing the iptables rules (SNAT so the LAN can reach the internet, DNAT to publish services)

# script (see the sketch below)
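The original script was not captured here, so below is a minimal sketch of what such an iptables script typically looks like for this topology. It assumes ens36 is the WAN interface, ens33 the LAN interface, and 192.168.0.0/24 the internal network (taken from the transcript above); the DNAT target 192.168.0.20:31000 is an illustrative example pointing at the NodePort service created in step 5.

#!/bin/bash
# snat_dnat.sh -- a sketch, not the author's original script
# allow the kernel to forward packets between LAN and WAN
echo 1 > /proc/sys/net/ipv4/ip_forward
# start from a clean rule set
iptables -F
iptables -t nat -F
# SNAT: let the 192.168.0.0/24 LAN reach the internet through the WAN interface
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o ens36 -j MASQUERADE
# DNAT: publish the internal web service to the outside world
iptables -t nat -A PREROUTING -i ens36 -p tcp --dport 80 -j DNAT --to-destination 192.168.0.20:31000
# default-deny forwarding, then allow established traffic and LAN-originated traffic
iptables -P FORWARD DROP
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -s 192.168.0.0/24 -j ACCEPT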

4. Deploy the NFS server to provide data storage for the whole web cluster; every web pod can access it through PV, PVC, and volume mounts.

# install nfs on the nfs server and on every k8s node, then set up PV and PVC for a permanent volume mount
[root@nfs ~]# yum install nfs-utils -y
[root@worker1 ~]# yum install nfs-utils -y
[root@worker2 ~]# yum install nfs-utils -y
[root@master ~]# yum install nfs-utils -y
# define the shared directory
[root@nfs ~]# vim /etc/exports
[root@nfs ~]# cat /etc/exports
/web/data 192.168.0.0/24(rw,no_root_squash,sync)
# no_root_squash: remote root stays root, so clients can read and write
# export the shared directory
[root@nfs data]# exportfs -rv
exporting 192.168.0.0/24:/web/data
# create the shared directory and a test page
[root@nfs /]# cd web/
[root@nfs web]# ls
data
[root@nfs web]# cd data
[root@nfs data]# ls
index.html
[root@nfs data]# cat index.html    # the test web page
welcome to sanchuang !!! \n
welcome to sanchuang !!!
0000000000000000000000
welcome to sanchuang !!!
welcome to sanchuang !!!
welcome to sanchuang !!!
666666666666666666 !!!
777777777777777777 !!!
# restart the service
[root@nfs data]# service nfs restart
# enable nfs at boot
[root@nfs web]# systemctl restart nfs && systemctl enable nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
# mount from the k8s cluster:
# test on any node that the nfs share can be mounted
[root@worker1 ~]# mkdir /worker1_nfs
[root@worker1 ~]# mount 192.168.0.36:/web /worker1_nfs
[root@worker1 ~]# df -Th|grep nfs
192.168.0.36:/web nfs4 50G 3.8G 47G 8% /worker1_nfs
# master
192.168.0.36:/web nfs4 54G 4.1G 50G 8% /master_nfs
# worker2
[root@worker2 ~]# df -Th|grep nfs
192.168.0.36:/web nfs4 50G 3.8G 47G 8% /worker2_nfs
# keep the PV/PVC manifests for the whole system in one directory
[root@master ~]# cd /pv-pvc/
[root@master pv-pvc]# ls
nfs-pvc-yaml nfs-pv.yaml
[root@master pv-pvc]# kubectl apply -f nfs-pv.yaml
persistentvolume/pv-web created
[root@master pv-pvc]# kubectl apply -f nfs-pvc-yaml
persistentvolumeclaim/pvc-web created
[root@master pv-pvc]# cat nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-web
  labels:
    type: pv-web
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs      # storage class name the PVC must request
  nfs:
    path: "/web"             # directory shared by the nfs server
    server: 192.168.0.36     # nfs server ip
    readOnly: false          # allow writes
[root@master pv-pvc]# cat nfs-pvc.yaml
cat: nfs-pvc.yaml: No such file or directory
[root@master pv-pvc]# cat nfs-pvc-yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-web
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs      # bind to a pv of the nfs storage class
# the claim is bound
[root@master pv-pvc]# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-web   Bound    pv-web   10Gi       RWX            nfs            2m44s
# create pods that use the pvc
[root@master pv-pvc]# cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: sc-pv-storage-nfs
          persistentVolumeClaim:
            claimName: pvc-web
      containers:
        - name: sc-pv-container-nfs
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: sc-pv-storage-nfs
# start the pods
[root@master pv-pvc]# kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
[root@master pv-pvc]# kubectl get pod -o wide
NAME                               READY   STATUS    RESTARTS   AGE    IP               NODE      NOMINATED NODE   READINESS GATES
nginx-deployment-d4c8d4d89-9spwk   1/1     Running   0          111s   10.224.235.133   worker1   <none>           <none>
nginx-deployment-d4c8d4d89-lk4mb   1/1     Running   0          111s   10.224.189.70    worker2   <none>           <none>
nginx-deployment-d4c8d4d89-ml8l7   1/1     Running   0          111s   10.224.189.69    worker2   <none>           <none>
[root@master pv-pvc]# kubectl apply -f nfs-pv.yaml
persistentvolume/pv-web created
[root@master pv-pvc]# kubectl apply -f nfs-pvc.yaml
persistentvolumeclaim/pvc-web created
[root@master pv-pvc]# kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
[root@master pv-pvc]# kubectl get pod -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
nginx-deployment-d4c8d4d89-2xh6w   1/1     Running   0          12s   10.224.235.134   worker1   <none>           <none>
nginx-deployment-d4c8d4d89-c64c4   1/1     Running   0          12s   10.224.189.71    worker2   <none>           <none>
nginx-deployment-d4c8d4d89-fhvfd   1/1     Running   0          12s   10.224.189.72    worker2   <none>           <none>
# curl a pod IP: the page served is the one on the nfs share, so the mount works
[root@master pv-pvc]# curl 10.224.235.134
welcome to sanchuang !!! \n
welcome to sanchuang !!!
0000000000000000000000
welcome to sanchuang !!!
welcome to sanchuang !!!
welcome to sanchuang !!!
666666666666666666 !!!
777777777777777777 !!!

5. Use Go to build a simple image, start nginx, and use HPA to scale horizontally when CPU usage reaches 60%, with a minimum of 10 and a maximum of 40 pods.

# build a simple image with go, push it to the local harbor registry, and let the other nodes pull it to run the web service
[root@harbor harbor]# mkdir go
[root@harbor harbor]# cd go
[root@harbor go]# pwd
/harbor/go
[root@harbor go]# ls
apiserver.tar.gz
[root@harbor go]#
# install the go toolchain
[root@harbor yum.repos.d]# yum install epel-release -y
[root@harbor yum.repos.d]# yum install golang -y
[root@harbor go]# vim server.go
package main

// server.go is the main program
import (
	"net/http"

	"github.com/gin-gonic/gin"
)

// gin is the web framework used here
// entry point
func main() {
	// create a web engine with the default middleware
	r := gin.Default()
	// GET / returns {"message":"hello,sanchuanger 2024 nice"}
	r.GET("/", func(c *gin.Context) {
		// HTTP 200 plus the JSON payload
		c.JSON(http.StatusOK, gin.H{
			"message": "hello,sanchuanger 2024 nice",
		})
	})
	// run the web server
	r.Run()
}
[root@harbor go]# cat Dockerfile
FROM centos:7
WORKDIR /go
COPY . /go
RUN ls /go && pwd
ENTRYPOINT ["/go/scweb"]
# (first draft of the Dockerfile; it is revised below once the build target is named k8s-web)
# apiserver.tar.gz uploaded earlier is the image of the k8s apiserver, an important k8s component
[root@harbor go]# ls
apiserver.tar.gz server.go
[root@harbor go]# vim server.go
[root@harbor go]# go env -w GOPROXY=https://goproxy.cn,direct
[root@harbor go]# go mod init web
go: creating new go.mod: module web
go: to add module requirements and sums:
go mod tidy
[root@harbor go]# go mod tidy
go: finding module for package github.com/gin-gonic/gin
go: downloading github.com/gin-gonic/gin v1.9.1
go: found github.com/gin-gonic/gin in github.com/gin-gonic/gin v1.9.1
go: downloading github.com/gin-contrib/sse v0.1.0
go: downloading github.com/mattn/go-isatty v0.0.19
go: downloading golang.org/x/net v0.10.0
go: downloading github.com/stretchr/testify v1.8.3
go: downloading google.golang.org/protobuf v1.30.0
go: downloading github.com/go-playground/validator/v10 v10.14.0
go: downloading github.com/pelletier/go-toml/v2 v2.0.8
go: downloading github.com/ugorji/go/codec v1.2.11
go: downloading gopkg.in/yaml.v3 v3.0.1
go: downloading github.com/bytedance/sonic v1.9.1
go: downloading github.com/goccy/go-json v0.10.2
go: downloading github.com/json-iterator/go v1.1.12
go: downloading golang.org/x/sys v0.8.0
go: downloading github.com/davecgh/go-spew v1.1.1
go: downloading github.com/pmezard/go-difflib v1.0.0
go: downloading github.com/gabriel-vasile/mimetype v1.4.2
go: downloading github.com/go-playground/universal-translator v0.18.1
go: downloading github.com/leodido/go-urn v1.2.4
go: downloading golang.org/x/crypto v0.9.0
go: downloading golang.org/x/text v0.9.0
go: downloading github.com/go-playground/locales v0.14.1
go: downloading github.com/modern-go/reflect2 v1.0.2
go: downloading github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd
go: downloading github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311
go: downloading golang.org/x/arch v0.3.0
go: downloading github.com/twitchyliquid64/golang-asm v0.15.1
go: downloading github.com/klauspost/cpuid/v2 v2.2.4
go: downloading github.com/go-playground/assert/v2 v2.2.0
go: downloading github.com/google/go-cmp v0.5.5
go: downloading gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405
go: downloading golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543
[root@harbor go]# go run server.go
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] GET / --> main.main.func1 (3 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details.
[GIN-debug] Environment variable PORT is undefined. Using port :8080 by default
[GIN-debug] Listening and serving HTTP on :8080
# go run listens on 8080 by default; this step only verifies that server.go works
# compile server.go into a standalone binary
[root@harbor go]# go build -o k8s-web .
[root@harbor go]# ls
apiserver.tar.gz go.mod go.sum k8s-web server.go

# access test: the service is up

[root@harbor go]# ./k8s-web
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:    export GIN_MODE=release
 - using code:    gin.SetMode(gin.ReleaseMode)

[GIN-debug] GET    /                         --> main.main.func1 (3 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details.
[GIN-debug] Environment variable PORT is undefined. Using port :8080 by default
[GIN-debug] Listening and serving HTTP on :8080
[GIN] 2024/04/04 - 12:38:39 | 200 |     120.148µs |     192.168.0.1 | GET      "/"
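From another machine, a curl against port 8080 should return the JSON payload (a quick check; 192.168.0.34 is the harbor host where the binary is running):

[root@master ~]# curl http://192.168.0.34:8080/
{"message":"hello,sanchuanger 2024 nice"}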
 

# now build the image, tag it, log in to the harbor registry, push it, and pull it from the other nodes

[root@harbor go]# cat Dockerfile
FROM centos:7
WORKDIR /harbor/go
COPY . /harbor/go
RUN ls /harbor/go && pwd
ENTRYPOINT ["/harbor/k8s-web"]
# note: COPY places the binary at /harbor/go/k8s-web, so the ENTRYPOINT path likely needs to be ["/harbor/go/k8s-web"]
[root@harbor go]# docker pull centos:7
7: Pulling from library/centos
2d473b07cdd5: Pull complete
Digest: sha256:9d4bcbbb213dfd745b58be38b13b996ebb5ac315fe75711bd618426a630e0987
Status: Downloaded newer image for centos:7
docker.io/library/centos:7
[root@harbor go]# vim Dockerfile
[root@harbor go]# docker build -t scmyweb:1.1 .
[+] Building 2.5s (9/9) FINISHED                                     docker:default
 => [internal] load build definition from Dockerfile                          0.0s
 => => transferring dockerfile: 147B                                          0.0s
 => [internal] load metadata for docker.io/library/centos:7                   0.0s
 => [internal] load .dockerignore                                             0.0s
 => => transferring context: 2B                                               0.0s
 => [1/4] FROM docker.io/library/centos:7                                     0.0s
 => [internal] load build context                                             0.1s
 => => transferring context: 295B                                             0.0s
 => [2/4] WORKDIR /harbor/go                                                  0.4s
 => [3/4] COPY . /harbor/go                                                   0.4s
 => [4/4] RUN ls /harbor/go && pwd                                            1.4s
 => exporting to image                                                        0.1s
 => => exporting layers                                                       0.1s
 => => writing image sha256:fed4a30515b10e9f15c6dd7ba092b553658d3c7a33466bf38a20762bde68   0.0s
 => => naming to docker.io/library/scmyweb:1.1                                0.0s
[root@harbor go]# docker tag scmyweb:1.1 192.168.0.34:5001/k8s-web/web:v1
[root@harbor go]# docker image ls | grep web
192.168.0.34:5001/k8s-web/web   v1   fed4a30515b1   3 minutes ago   221MB
# push the image to harbor, then have worker1 and worker2 pull it
[root@worker1 ~]# docker pull 192.168.0.34:5001/k8s-web/web:v1
[root@worker2 ~]# docker pull 192.168.0.34:5001/k8s-web/web:v1
# check
[root@worker2 ~]# docker images|grep web
192.168.0.34:5001/k8s-web/web v1 fed4a
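The transcript shows the tag and the pulls but not the push itself; the usual sequence in between, assuming an account on the harbor instance and a k8s-web project already created in its web UI, is:

[root@harbor go]# docker login 192.168.0.34:5001
[root@harbor go]# docker push 192.168.0.34:5001/k8s-web/web:v1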

# use horizontal pod autoscaling

# HPA: scale horizontally on CPU usage, e.g. between 1 and 10 pods at 50% CPU (the numbers used in the upstream walkthrough below)
# A HorizontalPodAutoscaler (HPA) automatically updates a workload resource (such as a Deployment)
# so that the workload scales to match demand.
https://kubernetes.io/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
# 1. install metrics-server
# download components.yaml
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# replace the image and add two kubelet flags
    image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.0
    imagePullPolicy: IfNotPresent
    args:
      # add the following two lines
      - --kubelet-insecure-tls
      - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalDNS,ExternalIP,Hostname
[root@master metrics]# docker load -i metrics-server-v0.6.3.tar
d0157aa0c95a: Loading layer 327.7kB/327.7kB
6fbdf253bbc2: Loading layer 51.2kB/51.2kB
1b19a5d8d2dc: Loading layer 3.185MB/3.185MB
ff5700ec5418: Loading layer 10.24kB/10.24kB
d52f02c6501c: Loading layer 10.24kB/10.24kB
e624a5370eca: Loading layer 10.24kB/10.24kB
1a73b54f556b: Loading layer 10.24kB/10.24kB
d2d7ec0f6756: Loading layer 10.24kB/10.24kB
4cb10dd2545b: Loading layer 225.3kB/225.3kB
ebc813d4c836: Loading layer 66.45MB/66.45MB
Loaded image: registry.k8s.io/metrics-server/metrics-server:v0.6.3
[root@master metrics]# vim components.yaml
[root@master mysql]# kubectl top nodes
NAME      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master    343m         17%    1677Mi          45%
worker1   176m         8%     1456Mi          39%
worker2   184m         9%     1335Mi          36%
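Between editing components.yaml and running kubectl top, the manifest still has to be applied; that step was not captured, but it would look like:

[root@master metrics]# kubectl apply -f components.yaml
# wait for the metrics-server pod in kube-system to become Ready
[root@master metrics]# kubectl get pod -n kube-system | grep metrics-server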

# deploy the service and enable HPA (the demo below scales 3-20 pods at 70% CPU; set minReplicas/maxReplicas/targetCPUUtilizationPercentage to 10/40/60 to match the project goal)

# create the nginx service with autoscaling enabled: minimum 3 pods, maximum 20, scale when CPU exceeds 70%
[root@master nginx]# kubectl apply -f web-hpa.yaml
deployment.apps/ab-nginx created
service/ab-nginx-svc created
horizontalpodautoscaler.autoscaling/ab-nginx created
[root@master nginx]# cat web-hpa.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ab-nginx
spec:
  selector:
    matchLabels:
      run: ab-nginx
  template:
    metadata:
      labels:
        run: ab-nginx
    spec:
      #nodeName: node-2    # no longer pinned to a node
      containers:
      - name: ab-nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 100m
          requests:
            cpu: 50m
---
apiVersion: v1
kind: Service
metadata:
  name: ab-nginx-svc
  labels:
    run: ab-nginx-svc
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31000
  selector:
    run: ab-nginx
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: ab-nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ab-nginx
  minReplicas: 3
  maxReplicas: 20
  targetCPUUtilizationPercentage: 70
[root@master nginx]# kubectl get deploy
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
ab-nginx   3/3     3            3           2m10s
[root@master nginx]# kubectl get hpa
NAME       REFERENCE             TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
ab-nginx   Deployment/ab-nginx   0%/70%    3         20        3          2m28s
# access works
[root@master nginx]# curl 192.168.0.20:31000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
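To actually watch the autoscaler react (a preview of the ab stress test in step 10), generate sustained load against the NodePort and watch the HPA from another terminal. A sketch, assuming httpd-tools is installed on the client machine:

[root@master nginx]# yum install httpd-tools -y
# 100000 requests, 100 concurrent clients, against the NodePort service
[root@master nginx]# ab -n 100000 -c 100 http://192.168.0.20:31000/
# in another terminal, watch the replica count climb once CPU crosses 70%
[root@master nginx]# kubectl get hpa ab-nginx --watch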

# start a MySQL pod to back the web services with a database.

1. write the yaml, containing both the deployment and the service
[root@master ~]# mkdir /mysql
[root@master ~]# cd /mysql/
[root@master mysql]# vim mysql.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mysql
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:latest
        name: mysql
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"    # mysql root password
        ports:
        - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: svc-mysql
  name: svc-mysql
spec:
  selector:
    app: mysql
  type: NodePort
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
    nodePort: 30007
2. deploy it
[root@master mysql]# kubectl apply -f mysql.yaml
deployment.apps/mysql created
service/svc-mysql created
[root@master mysql]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          23h
php-apache   ClusterIP   10.96.134.145   <none>        80/TCP           21h
svc-mysql    NodePort    10.109.190.20   <none>        3306:30007/TCP   9s
[root@master mysql]# kubectl get pod
NAME                                READY   STATUS              RESTARTS      AGE
mysql-597ff9595d-tzqzl              0/1     ContainerCreating   0             27s
nginx-deployment-794d8c5666-dsxkq   1/1     Running             1 (15m ago)   22h
nginx-deployment-794d8c5666-fsctm   1/1     Running             1 (15m ago)   22h
nginx-deployment-794d8c5666-spkzs   1/1     Running             1 (15m ago)   22h
php-apache-7b9f758896-2q44p         1/1     Running             1 (15m ago)   21h
[root@master mysql]# kubectl exec -it mysql-597ff9595d-tzqzl -- bash
root@mysql-597ff9595d-tzqzl:/# mysql -uroot -p123456    # enter mysql inside the container
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.27 MySQL Community Server - GPL
Copyright (c) 2000, 2021, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
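Inside the cluster the web pods do not need the NodePort: the database is reachable through the service's DNS name. A quick check, run from the mysql pod itself (any pod with a mysql client would do):

[root@master mysql]# kubectl exec -it mysql-597ff9595d-tzqzl -- mysql -h svc-mysql -uroot -p123456
# svc-mysql resolves via cluster DNS; the full name is svc-mysql.default.svc.cluster.local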

6. Build the CI/CD environment: install gitlab, Jenkins, and harbor to handle code releases, image builds, data backups, and the rest of the pipeline work.

# configure the gitlab server
[root@localhost ~]# hostnamectl set-hostname gitlab
[root@localhost ~]# su
[root@gitlab ~]#
# deployment steps
# 1. install the required dependencies
yum install -y curl policycoreutils-python openssh-server perl
# 2. configure the JiHu GitLab package mirror
[root@gitlab ~]# curl -fsSL https://packages.gitlab.cn/repository/raw/scripts/setup.sh | /bin/bash
==> Detected OS centos
==> Add yum repo file to /etc/yum.repos.d/gitlab-jh.repo
[gitlab-jh]
name=JiHu GitLab
baseurl=https://packages.gitlab.cn/repository/el/$releasever/
gpgcheck=0
gpgkey=https://packages.gitlab.cn/repository/raw/gpg/public.gpg.key
priority=1
enabled=1
==> Generate yum cache for gitlab-jh
==> Successfully added gitlab-jh repo. To install JiHu GitLab, run "sudo yum/dnf install gitlab-jh".
[root@gitlab ~]# yum install gitlab-jh -y
Thank you for installing JiHu GitLab!
GitLab was unable to detect a valid hostname for your instance.
Please configure a URL for your JiHu GitLab instance by setting `external_url`
configuration in /etc/gitlab/gitlab.rb file.
Then, you can start your JiHu GitLab instance by running the following command:
sudo gitlab-ctl reconfigure
For a comprehensive list of configuration options please see the Omnibus GitLab readme
https://jihulab.com/gitlab-cn/omnibus-gitlab/-/blob/main-jh/README.md
Help us improve the installation experience, let us know how we did with a 1 minute survey:
https://wj.qq.com/s2/10068464/dc66
[root@gitlab ~]# vim /etc/gitlab/gitlab.rb
external_url 'http://myweb.first.com'
[root@gitlab ~]# gitlab-ctl reconfigure
Notes:
Default admin account has been configured with following details:
Username: root
Password: You didn't opt-in to print initial root password to STDOUT.
Password stored to /etc/gitlab/initial_root_password. This file will be cleaned up in first reconfigure run after 24 hours.
NOTE: Because these credentials might be present in your log files in plain text, it is highly recommended to reset the password following https://docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password.
gitlab Reconfigured!
# read the initial password
[root@gitlab ~]# cat /etc/gitlab/initial_root_password
# WARNING: This value is valid only in the following conditions
# 1. If provided manually (either via `GITLAB_ROOT_PASSWORD` environment variable or via `gitlab_rails['initial_root_password']` setting in `gitlab.rb`, it was provided before database was seeded for the first time (usually, the first reconfigure run).
# 2. Password hasn't been changed manually, either via UI or via command line.
#
# If the password shown here doesn't work, you must reset the admin password following https://docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password.
Password: mzYlWEzJG6nzbExL6L25J7jhbup0Ye8QFldcD/rXNqg=
# NOTE: This file will be automatically deleted in the first reconfigure run after 24 hours.
# after logging in you can switch the UI language (user profile -> preferences)
# and change the password
[root@gitlab ~]# gitlab-rake gitlab:env:info
System information
System:
Proxy:           no
Current User:    git
Using RVM:       no
Ruby Version:    3.0.6p216
Gem Version:     3.4.13
Bundler Version: 2.4.13
Rake Version:    13.0.6
Redis Version:   6.2.11
Sidekiq Version: 6.5.7
Go Version:      unknown
GitLab information
Version:         16.0.4-jh
Revision:        c2ed99db36f
Directory:       /opt/gitlab/embedded/service/gitlab-rails
DB Adapter:      PostgreSQL
DB Version:      13.11
URL:             http://myweb.first.com
HTTP Clone URL:  http://myweb.first.com/some-group/some-project.git
SSH Clone URL:   git@myweb.first.com:some-group/some-project.git
Elasticsearch:   no
Geo:             no
Using LDAP:      no
Using Omniauth:  yes
Omniauth Providers:
GitLab Shell
Version:         14.20.0
Repository storages:
- default: unix:/var/opt/gitlab/gitaly/gitaly.socket
GitLab Shell path: /opt/gitlab/embedded/service/gitlab-shell

Deploy Jenkins

# deploy Jenkins inside k8s
# 1. install git
[root@master jenkins]# yum install git -y
# 2. clone the deployment manifests
[root@master jenkins]# git clone https://github.com/scriptcamp/kubernetes-jenkins
Cloning into 'kubernetes-jenkins'...
remote: Enumerating objects: 16, done.
remote: Counting objects: 100% (7/7), done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 16 (delta 1), reused 0 (delta 0), pack-reused 9
Unpacking objects: 100% (16/16), done.
[root@master jenkins]# ls
kubernetes-jenkins
[root@master jenkins]# cd kubernetes-jenkins/
[root@master kubernetes-jenkins]# ls
deployment.yaml namespace.yaml README.md serviceAccount.yaml service.yaml volume.yaml
# 3. create the namespace
[root@master kubernetes-jenkins]# cat namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: devops-tools
[root@master kubernetes-jenkins]# kubectl apply -f namespace.yaml
namespace/devops-tools created
[root@master kubernetes-jenkins]# kubectl get ns
NAME              STATUS   AGE
default           Active   22h
devops-tools      Active   19s
ingress-nginx     Active   139m
kube-node-lease   Active   22h
kube-public       Active   22h
kube-system       Active   22h
# 4. create the service account, cluster role, and binding
[root@master kubernetes-jenkins]# cat serviceAccount.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins-admin
rules:
  - apiGroups: [""]
    resources: ["*"]
    verbs: ["*"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-admin
  namespace: devops-tools
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins-admin
subjects:
  - kind: ServiceAccount
    name: jenkins-admin
    namespace: devops-tools
[root@master kubernetes-jenkins]# kubectl apply -f serviceAccount.yaml
clusterrole.rbac.authorization.k8s.io/jenkins-admin created
serviceaccount/jenkins-admin created
clusterrolebinding.rbac.authorization.k8s.io/jenkins-admin created
# 5. create the volume that holds the Jenkins data
[root@master kubernetes-jenkins]# cat volume.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv-volume
  labels:
    type: local
spec:
  storageClassName: local-storage
  claimRef:
    name: jenkins-pv-claim
    namespace: devops-tools
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  local:
    path: /mnt
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k8snode1    # change this to the name of a node in your cluster
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
  namespace: devops-tools
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
[root@master kubernetes-jenkins]# kubectl apply -f volume.yaml
storageclass.storage.k8s.io/local-storage created
persistentvolume/jenkins-pv-volume created
persistentvolumeclaim/jenkins-pv-claim created
[root@master kubernetes-jenkins]# kubectl get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS    REASON   AGE
jenkins-pv-volume   10Gi       RWO            Retain           Bound    devops-tools/jenkins-pv-claim   local-storage            33s
pv-web              10Gi       RWX            Retain           Bound    default/pvc-web                 nfs                      21h
[root@master kubernetes-jenkins]# kubectl describe pv jenkins-pv-volume
Name:              jenkins-pv-volume
Labels:            type=local
Annotations:       <none>
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      local-storage
Status:            Bound
Claim:             devops-tools/jenkins-pv-claim
Reclaim Policy:    Retain
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          10Gi
Node Affinity:
  Required Terms:
    Term 0:        kubernetes.io/hostname in [k8snode1]
Message:
Source:
    Type:  LocalVolume (a persistent volume backed by local storage on a node)
    Path:  /mnt
Events:    <none>
# 6. deploy Jenkins
[root@master kubernetes-jenkins]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: devops-tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-server
  template:
    metadata:
      labels:
        app: jenkins-server
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
      serviceAccountName: jenkins-admin
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: "2Gi"
              cpu: "1000m"
            requests:
              memory: "500Mi"
              cpu: "500m"
          ports:
            - name: httpport
              containerPort: 8080
            - name: jnlpport
              containerPort: 50000
          livenessProbe:
            httpGet:
              path: "/login"
              port: 8080
            initialDelaySeconds: 90
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: "/login"
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          volumeMounts:
            - name: jenkins-data
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-data
          persistentVolumeClaim:
            claimName: jenkins-pv-claim
[root@master kubernetes-jenkins]# kubectl apply -f deployment.yaml
deployment.apps/jenkins created
[root@master kubernetes-jenkins]# kubectl get deploy -n devops-tools
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
jenkins   1/1     1            1           5m36s
[root@master kubernetes-jenkins]# kubectl get pod -n devops-tools
NAME                       READY   STATUS    RESTARTS   AGE
jenkins-7fdc8dd5fd-bg66q   1/1     Running   0          19s
# 7. expose the Jenkins pod through a service
[root@master kubernetes-jenkins]# cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: jenkins-service
  namespace: devops-tools
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: /
    prometheus.io/port: '8080'
spec:
  selector:
    app: jenkins-server
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 32000
[root@master kubernetes-jenkins]# kubectl apply -f service.yaml
service/jenkins-service created
[root@master kubernetes-jenkins]# kubectl get svc -n devops-tools
NAME              TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
jenkins-service   NodePort   10.104.76.252   <none>        8080:32000/TCP   24s
# 8. from a Windows machine, browse to any node's ip plus the NodePort
http://192.168.0.20:32000
# 9. read the initial admin password from inside the pod
[root@master kubernetes-jenkins]# kubectl exec -it jenkins-b96f7764f-znvfj -n devops-tools -- bash
jenkins@jenkins-b96f7764f-znvfj:/$ cat /var/jenkins_home/secrets/initialAdminPassword
bbb283b8dc35449bbdb3d6824f12446c
# change the password after logging in
[root@master kubernetes-jenkins]# kubectl get pod -n devops-tools
NAME                       READY   STATUS    RESTARTS   AGE
jenkins-7fdc8dd5fd-5nn7m   1/1     Running   0          91s

If you see this page (screenshot), the installation succeeded.

# next, deploy harbor

[root@harbor ~]# yum install -y yum-utils
[root@harbor ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@harbor ~]# yum install docker-ce-20.10.6 -y
[root@harbor ~]# systemctl start docker && systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
# check the docker and docker compose versions
[root@harbor ~]# docker version
Client: Docker Engine - Community
Version: 24.0.2
API version: 1.41 (downgraded from 1.43)
Go version: go1.20.4
Git commit: cb74dfc
Built: Thu May 25 21:55:21 2023
OS/Arch: linux/amd64
Context: default
[root@harbor ~]# docker compose version
Docker Compose version v2.25.0
# install harbor
[root@harbor harbor]# vim harbor.yml.tmpl
# Configuration file of Harbor
# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: 192.168.0.34
# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 123
# https related config
#https:
  # https port for harbor, default is 443
  # port: 1234
  # The path of cert and key files for nginx
  #certificate: /your/certificate/path
  #private_key: /your/private/key/path
# note: comment out the whole https block, otherwise the installer fails for lack of certificates
# start harbor automatically at boot
[root@harbor harbor]# vim /etc/rc.local
[root@harbor harbor]# cat /etc/rc.local
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.
touch /var/lock/subsys/local
/usr/local/sbin/docker-compose -f /root/harbor/harbor/docker-compose.yml up -d
# make it executable
[root@harbor harbor]# chmod +x /etc/rc.local /etc/rc.d/rc.local
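The template edit above is followed by the actual installation, which the transcript skips; the usual steps, assuming the offline installer was unpacked into /root/harbor/harbor, are:

# harbor reads harbor.yml, not the .tmpl file
[root@harbor harbor]# cp harbor.yml.tmpl harbor.yml
# set hostname: 192.168.0.34 and the http port to 5001, the registry address used everywhere else in this article
[root@harbor harbor]# vim harbor.yml
[root@harbor harbor]# ./install.sh
# verify the containers came up
[root@harbor harbor]# docker compose ps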

Add the harbor registry to the k8s cluster.
On master:
[root@master ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://ruk1gp3w.mirror.aliyuncs.com"],
  "insecure-registries" : ["192.168.0.34:5001"]
}
Then restart docker:
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker
On worker1:
[root@worker1 ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://ruk1gp3w.mirror.aliyuncs.com"],
  "insecure-registries" : ["192.168.0.34:5001"]
}
Then restart docker:
[root@worker1 ~]# systemctl daemon-reload
[root@worker1 ~]# systemctl restart docker
On worker2:
[root@worker2 ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://ruk1gp3w.mirror.aliyuncs.com"],
  "insecure-registries" : ["192.168.0.34:5001"]
}
Then restart docker:
[root@worker2 ~]# systemctl daemon-reload
[root@worker2 ~]# systemctl restart docker
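A quick way to confirm the setting took effect on each node:

[root@master ~]# docker info | grep -A 2 "Insecure Registries"
# 192.168.0.34:5001 should appear in the list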

Quick test that the harbor registry is usable

[root@master ~]# docker login 192.168.0.34:5001
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
 

1. write the yaml, containing both the deployment and the service
[root@master ~]# cd service/
[root@master service]# ls
mysql nginx
[root@master service]# cd mysql/
[root@master mysql]# ls
[root@master mysql]# vim mysql.yaml
[root@master mysql]# ls
mysql.yaml
[root@master mysql]# docker pull mysql:latest
latest: Pulling from library/mysql
72a69066d2fe: Pull complete
93619dbc5b36: Pull complete
99da31dd6142: Pull complete
626033c43d70: Pull complete
37d5d7efb64e: Pull complete
ac563158d721: Pull complete
d2ba16033dad: Pull complete
688ba7d5c01a: Pull complete
00e060b6d11d: Pull complete
1c04857f594f: Pull complete
4d7cfa90e6ea: Pull complete
e0431212d27d: Pull complete
Digest: sha256:e9027fe4d91c0153429607251656806cc784e914937271037f7738bd5b8e7709
Status: Downloaded newer image for mysql:latest
docker.io/library/mysql:latest
[root@master mysql]# cat mysql.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mysql
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:latest
        name: mysql
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"    # mysql root password
        ports:
        - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: svc-mysql
  name: svc-mysql
spec:
  selector:
    app: mysql
  type: NodePort
  ports:
  - port: 3306          # service port inside the cluster
    protocol: TCP
    targetPort: 3306    # pod port
    nodePort: 30007     # port exposed on the host
2. deploy it
[root@master mysql]# kubectl apply -f mysql.yaml
deployment.apps/mysql created
service/svc-mysql created
[root@master mysql]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP          37h
svc-mysql    NodePort    10.110.192.240   <none>        3306:30007/TCP   9s
[root@master mysql]# kubectl get pod
NAME                               READY   STATUS              RESTARTS      AGE
mysql-597ff9595d-lhsgp             0/1     ContainerCreating   0             56s
nginx-deployment-d4c8d4d89-2xh6w   1/1     Running             2 (15h ago)   20h
nginx-deployment-d4c8d4d89-c64c4   1/1     Running             2 (15h ago)   20h
nginx-deployment-d4c8d4d89-fhvfd   1/1     Running             2 (15h ago)   20h
[root@master mysql]# kubectl exec -it mysql-597ff9595d-lhsgp -- bash
root@mysql-597ff9595d-tzqzl:/# mysql -uroot -p123456    # enter mysql inside the container
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.27 MySQL Community Server - GPL
Copyright (c) 2000, 2021, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>

7. Deploy prometheus + grafana for routine performance monitoring (CPU, memory, network bandwidth, disk IO, etc.) of every server in the cluster, including the k8s nodes.

prometheus collects the metrics; grafana draws the graphs.

Monitored targets: master, worker1, worker2, the nfs server, the gitlab server, the harbor server, and the ansible control machine.

Download the software prometheus needs ahead of time:
# preparation
[root@prometheus ~]# mkdir /prom
[root@prometheus ~]# cd /prom
[root@prometheus prom]# ls
grafana-enterprise-9.1.2-1.x86_64.rpm prometheus-2.43.0.linux-amd64.tar.gz
node_exporter-1.4.0-rc.0.linux-amd64.tar.gz
[root@prometheus prom]# tar xf prometheus-2.43.0.linux-amd64.tar.gz
[root@prometheus prom]# ls
grafana-enterprise-9.1.2-1.x86_64.rpm prometheus-2.43.0.linux-amd64
node_exporter-1.4.0-rc.0.linux-amd64.tar.gz prometheus-2.43.0.linux-amd64.tar.gz
[root@prometheus prom]# mv prometheus-2.43.0.linux-amd64 prometheus
[root@prometheus prom]# ls
grafana-enterprise-9.1.2-1.x86_64.rpm prometheus
node_exporter-1.4.0-rc.0.linux-amd64.tar.gz prometheus-2.43.0.linux-amd64.tar.gz
# add prometheus to PATH, both for this session and permanently
[root@prometheus prom]# PATH=/prom/prometheus:$PATH
[root@prometheus prom]# echo 'PATH=/prom/prometheus:$PATH' >>/etc/profile
[root@prometheus prom]# which prometheus
/prom/prometheus/prometheus
# run prometheus as a systemd service, which makes later maintenance much easier
[root@prometheus prom]# vim /usr/lib/systemd/system/prometheus.service
[Unit]
Description=prometheus
[Service]
ExecStart=/prom/prometheus/prometheus --config.file=/prom/prometheus/prometheus.yml
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
# reload systemd so it picks up the new unit file
[root@prometheus prom]# systemctl daemon-reload
[root@prometheus prom]#
# start the prometheus service
[root@prometheus prom]# systemctl start prometheus
[root@prometheus prom]# systemctl restart prometheus
[root@prometheus prom]# ps aux|grep prome
root 2166 1.1 3.7 798956 37588 ? Ssl 13:53 0:00 /prom/prometheus/prometheus --config.file=/prom/prometheus/prometheus.yml
root 2175 0.0 0.0 112824 976 pts/0 S+ 13:53 0:00 grep --color=auto prome
# enable it at boot
[root@prometheus prom]# systemctl enable prometheus
Created symlink from /etc/systemd/system/multi-user.target.wants/prometheus.service to /usr/lib/systemd/system/prometheus.service.
[root@prometheus prom]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:37:86:3b brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.33/24 brd 192.168.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
# add scrape jobs to prometheus.yml (each target runs node_exporter on port 9090; see the install script after this block)
  - job_name: "prometheus"
    static_configs:
      - targets: ["192.168.0.33:9090"]
  - job_name: "master"
    static_configs:
      - targets: ["192.168.0.20:9090"]
  - job_name: "worker1"
    static_configs:
      - targets: ["192.168.0.21:9090"]
  - job_name: "worker2"
    static_configs:
      - targets: ["192.168.0.22:9090"]
  - job_name: "ansible"
    static_configs:
      - targets: ["192.168.0.30:9090"]
  - job_name: "gitlab"
    static_configs:
      - targets: ["192.168.0.35:9090"]
  - job_name: "harbor"
    static_configs:
      - targets: ["192.168.0.34:9090"]
  - job_name: "nfs"
    static_configs:
      - targets: ["192.168.0.36:9090"]
Install the exporter on the monitored machines.
Upload the node_exporter tarball with xftp, or push it out with scp/ansible:
[root@prometheus prom]# scp ./node_exporter-1.4.0-rc.0.linux-amd64.tar.gz 192.168.0.30:/root
The authenticity of host '192.168.0.30 (192.168.0.30)' can't be established.
ECDSA key fingerprint is SHA256:xactOuiFsm9merQVjdeiV4iZwI4rXUnviFYTXL2h8fc.
ECDSA key fingerprint is MD5:69:58:6b:ab:c4:8c:27:e2:b2:7c:31:bb:63:20:81:61.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.0.30' (ECDSA) to the list of known hosts.
root@192.168.0.30's password:
node_exporter-1.4.0-rc.0.linux-amd64.tar.gz 100% 9507KB 40.0MB/s 00:00
[root@ansible ~]# ls
anaconda-ks.cfg node_exporter-1.4.0-rc.0.linux-amd64.tar.gz
# check that the exporter process is running (the long kube-controller-manager line in the
# middle of this capture was mangled and is trimmed here; the relevant last line is node_exporter on 0.0.0.0:9090)
[root@master ~]# ps -aux|grep node
root 121582 0.1 0.4 717696 16676 ? Ssl 14:20 0:00 /node_exporter ... 0.0.0.0:9090
# browse to port 9090 on each machine to verify
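install_node_exporter.sh shows up in directory listings throughout this article, but its contents were never captured. A minimal sketch of such a script, matching the conventions above (tarball in /root, listener on 9090 to match the prometheus.yml targets; note node_exporter's stock default port is 9100):

#!/bin/bash
# install_node_exporter.sh -- a sketch, not the author's original
tar xf /root/node_exporter-1.4.0-rc.0.linux-amd64.tar.gz -C /
mv /node_exporter-1.4.0-rc.0.linux-amd64 /node_exporter
# run node_exporter as a systemd service on port 9090
cat > /etc/systemd/system/node_exporter.service <<EOF
[Unit]
Description=node_exporter
[Service]
ExecStart=/node_exporter/node_exporter --web.listen-address=0.0.0.0:9090
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl start node_exporter
systemctl enable node_exporter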

Monitoring now covers the whole cluster.

# install grafana to draw clean dashboards for easy observation

# grafana only needs to be installed on the machine that runs prometheus
[root@prometheus prom]# ls
grafana-enterprise-9.1.2-1.x86_64.rpm
install_node_exporter.sh
node_exporter-1.4.0-rc.0.linux-amd64.tar.gz
prometheus
prometheus-2.43.0.linux-amd64.tar.gz
[root@prometheus prom]# yum install grafana-enterprise-9.1.2-1.x86_64.rpm -y
[root@prometheus prom]# systemctl start grafana-server
[root@prometheus prom]# systemctl enable grafana-server
Created symlink from /etc/systemd/system/multi-user.target.wants/grafana-server.service to /usr/lib/systemd/system/grafana-server.service.
[root@prometheus prom]# ps aux|grep grafana
grafana 1410 8.9 7.1 1137704 71040 ? Ssl 15:12 0:01 /usr/sbin/grafana-server --config=/etc/grafana/grafana.ini --pidfile=/var/run/grafana/grafana-server.pid --packaging=rpm cfg:default.paths.logs=/var/log/grafana cfg:default.paths.data=/var/lib/grafana cfg:default.paths.plugins=/var/lib/grafana/plugins cfg:default.paths.provisioning=/etc/grafana/provisioning
root 1437 0.0 0.0 112824 976 pts/0 S+ 15:13 0:00 grep --color=auto grafana
# installed successfully; grafana listens on port 3000
# log in from a browser:
http://192.168.0.33:3000
# default credentials:
username: admin
password: admin

# I changed the password to 123456

# add the data source

Add the data and adjust the dashboards.

# performance monitoring of the whole cluster is now in place

8. Use ingress for domain-name-based load balancing of the web services

A side note

When monitoring containers and cluster resources in a Kubernetes cluster, two tools usually come up: cAdvisor (Container Advisor) and Metrics Server. They have different characteristics and use cases:

  1. cAdvisor (Container Advisor):

    • Characteristics:
      • cAdvisor is the container resource-usage and performance-analysis tool provided with Kubernetes.
      • It monitors container resource consumption, covering CPU, memory, network, disk, and similar metrics.
      • cAdvisor runs on every node and collects container statistics from Docker's cgroups and namespaces.
      • Its data can be read through the cAdvisor API or directly in its web UI.
    • Use cases:
      • Scenarios that need basic per-container resource monitoring and performance analysis.
      • Well suited to monitoring containers on a single node; cluster-wide monitoring across nodes needs other tools alongside it.
  2. Metrics Server:

    • Characteristics:
      • Metrics Server is the official Kubernetes API server for aggregating and serving resource metrics.
      • It provides node-level and cluster-level metrics such as CPU utilization and memory usage.
      • It collects node and container metrics and exposes them as part of the Kubernetes API; kubectl top reads from it.
      • It usually underpins features such as the Kubernetes Dashboard and the Horizontal Pod Autoscaler.
    • Use cases:
      • Scenarios that need cluster-level resource usage, such as CPU and memory across the whole cluster.
      • Features that consume resource metrics: Kubernetes Dashboard, Horizontal Pod Autoscaler, and so on.

In short, cAdvisor fits per-node container monitoring and performance analysis, while Metrics Server fits cluster-level metric aggregation and API access. In practice, pick whichever matches your needs, or combine the two.

Deployment process

  1. 第1大步骤: 安装ingress controller
  2. 1.将镜像scp到所有的node节点服务器上
  3. #准备好所有需要的文件
  4. [root@ansible ~]# ls
  5. [root@ansible ~]# ls
  6. hpa-example.tar ##hpa水平扩缩
  7. ingress-controller-deploy.yaml #ingress controller
  8. ingress-nginx-controllerv1.1.0.tar.gz #ingress-nginx-controller镜像
  9. install_node_exporter.sh
  10. kube-webhook-certgen-v1.1.0.tar.gz # kube-webhook-certgen镜像
  11. nfs-pvc.yaml
  12. nfs-pv.yaml
  13. nginx-deployment-nginx-svc-2.yaml
  14. node_exporter-1.4.0-rc.0.linux-amd64.tar.gz
  15. sc-ingress-url.yaml #基于URL的负载均衡
  16. sc-ingress.yaml
  17. sc-nginx-svc-1.yaml #创建service1 和相关pod
  18. sc-nginx-svc-3.yaml #创建service3 和相关pod
  19. sc-nginx-svc-4.yaml #创建service4 和相关pod
  20. #kube-webhook-certgen镜像主要用于生成Kubernetes集群中用于Webhook的证书。
  21. #kube-webhook-certgen镜像生成的证书,可以确保Webhook服务在Kubernetes集群中的安全通信和身份验证
  22. [root@ansible ~]# ansible nodes -m copy -a "src=./ingress-nginx-controllerv1.1.0.tar.gz dest=/root/"
  23. 192.168.0.22 | CHANGED => {
  24. "ansible_facts": {
  25. "discovered_interpreter_python": "/usr/bin/python"
  26. },
  27. "changed": true,
  28. "checksum": "090f67aad7867a282c2901cc7859bc16856034ee",
  29. "dest": "/root/ingress-nginx-controllerv1.1.0.tar.gz",
  30. "gid": 0,
  31. "group": "root",
  32. "md5sum": "5777d038007f563180e59a02f537b155",
  33. "mode": "0644",
  34. "owner": "root",
  35. "size": 288980480,
  36. "src": "/root/.ansible/tmp/ansible-tmp-1712220848.65-1426-256601085523400/source",
  37. "state": "file",
  38. "uid": 0
  39. }
  40. ##类似这样就是成功了
  41. [root@worker2 ~]# ls
  42. anaconda-ks.cfg
  43. ingress-nginx-controllerv1.1.0.tar.gz
  44. install_node_exporter.sh
  45. node_exporter-1.4.0-rc.0.linux-amd64.tar.gz
Alternatively, load the images by hand on each worker server:

[root@worker1 ~]# docker load -i ingress-nginx-controllerv1.1.0.tar.gz
[root@worker1 ~]# docker load -i kube-webhook-certgen-v1.1.0.tar.gz
[root@worker2 ~]# docker load -i ingress-nginx-controllerv1.1.0.tar.gz
[root@worker2 ~]# docker load -i kube-webhook-certgen-v1.1.0.tar.gz

Sample output on worker1 (per-layer progress lines trimmed to their final state):

[root@worker1 ~]# docker load -i ingress-nginx-controllerv1.1.0.tar.gz
e2eb06d8af82: Loading layer  5.865MB/5.865MB
ab1476f3fdd9: Loading layer  120.9MB/120.9MB
ad20729656ef: Loading layer  4.096kB/4.096kB
0d5022138006: Loading layer  38.09MB/38.09MB
8f757e3fe5e4: Loading layer  21.42MB/21.42MB
a933df9f49bb: Loading layer  3.411MB/3.411MB
...
Loaded image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.1.0
[root@worker1 ~]# docker load -i kube-webhook-certgen-v1.1.0.tar.gz
c0d270ab7e0d: Loading layer  3.697MB/3.697MB
ce7a3c1169b6: Loading layer  45.38MB/45.38MB
Loaded image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
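Loading the images only stages them locally; the controller itself still has to be created. A sketch of the apply step, assuming the ingress-controller-deploy.yaml from the file list above (it creates the ingress-nginx namespace and the resources queried below):

[root@master ingress]# kubectl apply -f ingress-controller-deploy.yaml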
[root@master ingress]# kubectl get ns
NAME                   STATUS   AGE
default                Active   42h
devops-tools           Active   21h
ingress-nginx          Active   18m
kube-node-lease        Active   42h
kube-public            Active   42h
kube-system            Active   42h
kubernetes-dashboard   Active   41h
[root@master ingress]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.101.22.116   <none>        80:32140/TCP,443:30268/TCP   18m
ingress-nginx-controller-admission   ClusterIP   10.106.82.248   <none>        443/TCP                      18m
[root@master ingress]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-lvbmf        0/1     Completed   0          18m
ingress-nginx-admission-patch-h24bx         0/1     Completed   1          18m
ingress-nginx-controller-7cd558c647-ft9gx   1/1     Running     0          18m
ingress-nginx-controller-7cd558c647-t2pmg   1/1     Running     0          18m
Step 2: create the pods and expose their services.

# Start the nginx service pods -- two services, each backed by several pods, so requests can round-robin between them
[root@master ingress]# kubectl apply -f sc-nginx-svc-3.yaml
deployment.apps/sc-nginx-deploy-3 unchanged
service/sc-nginx-svc-3 unchanged
[root@master ingress]# kubectl apply -f sc-nginx-svc-4.yaml
deployment.apps/sc-nginx-deploy-4 unchanged
service/sc-nginx-svc-4 unchanged
[root@master ingress]# kubectl get svc
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes       ClusterIP   10.96.0.1        <none>        443/TCP          43h
sc-nginx-svc-3   ClusterIP   10.102.96.68     <none>        80/TCP           19m
sc-nginx-svc-4   ClusterIP   10.100.36.98     <none>        80/TCP           19m
svc-mysql        NodePort    10.110.192.240   <none>        3306:30007/TCP   5h51m

Check the service details: confirm that the Endpoints list the pod IPs and ports correctly (kubectl describe matches both services here by name prefix):

[root@master ingress]# kubectl describe svc sc-nginx-svc
Name:              sc-nginx-svc-3
Namespace:         default
Labels:            app=sc-nginx-svc-3
Annotations:       <none>
Selector:          app=sc-nginx-feng-3
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.102.96.68
IPs:               10.102.96.68
Port:              name-of-service-port  80/TCP
TargetPort:        80/TCP
Endpoints:         10.224.189.95:80,10.224.189.96:80,10.224.235.150:80
Session Affinity:  None
Events:            <none>

Name:              sc-nginx-svc-4
Namespace:         default
Labels:            app=sc-nginx-svc-4
Annotations:       <none>
Selector:          app=sc-nginx-feng-4
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.100.36.98
IPs:               10.100.36.98
Port:              name-of-service-port  80/TCP
TargetPort:        80/TCP
Endpoints:         10.224.189.97:80,10.224.189.98:80,10.224.235.151:80
Session Affinity:  None
Events:            <none>

[root@master ingress]# curl 10.224.189.95:80    # a pod IP inside the cluster
wang6666666
# the other endpoints (10.224.189.96:80, 10.224.235.150:80) answer the same way
Step 3: enable the Ingress, which ties the ingress controller to the services.

[root@master ingress]# kubectl apply -f sc-ingress.yaml
ingress.networking.k8s.io/sc-ingress created
After a few minutes the ADDRESS column fills in with the node IPs:
[root@master ingress]# kubectl get ingress
NAME         CLASS   HOSTS                       ADDRESS   PORTS   AGE
sc-ingress   nginx   www.feng.com,www.wang.com             80      8s
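The manifest sc-ingress.yaml is applied above but never listed; a plausible reconstruction matching the HOSTS column, assuming www.feng.com routes to sc-nginx-svc-1 and www.wang.com to sc-nginx-svc-2 (both backend service names are assumptions, not confirmed by the transcript):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sc-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  ingressClassName: nginx
  rules:
  - host: www.feng.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sc-nginx-svc-1    # assumed backend for www.feng.com
            port:
              number: 80
  - host: www.wang.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sc-nginx-svc-2    # assumed backend for www.wang.com
            port:
              number: 80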
[root@master ingress]# cat sc-ingress-url.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout-example
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  ingressClassName: nginx
  rules:
  - host: www.wang.com          # the domain name
    http:
      paths:
      - path: /wang1            # path served from inside the backend pods
        pathType: Prefix
        backend:
          service:
            name: sc-nginx-svc-3
            port:
              number: 80
      - path: /wang2
        pathType: Prefix
        backend:
          service:
            name: sc-nginx-svc-4
            port:
              number: 80
[root@master ingress]# kubectl apply -f sc-ingress-url.yaml
[root@master ingress]# kubectl exec -it sc-nginx-deploy-4-7d4b5c487f-8l7wr -- bash
root@sc-nginx-deploy-4-7d4b5c487f-8l7wr:/# cd /usr/share/nginx/html/
root@sc-nginx-deploy-4-7d4b5c487f-8l7wr:/usr/share/nginx/html# ls
50x.html  index.html  wang2
root@sc-nginx-deploy-4-7d4b5c487f-8l7wr:/usr/share/nginx/html# cat index.html
wang11111111
root@sc-nginx-deploy-4-7d4b5c487f-8l7wr:/usr/share/nginx/html# cp index.html ./wang2/
root@sc-nginx-deploy-4-7d4b5c487f-8l7wr:/usr/share/nginx/html# cd wang2/
root@sc-nginx-deploy-4-7d4b5c487f-8l7wr:/usr/share/nginx/html/wang2# ls
index.html
root@sc-nginx-deploy-4-7d4b5c487f-8l7wr:/usr/share/nginx/html/wang2# exit
exit
[root@master ingress]# kubectl exec -it sc-nginx-deploy-3-5c4b975ffc-d8hwk -- bash
root@sc-nginx-deploy-3-5c4b975ffc-d8hwk:/# cd /usr/share/nginx/html/
root@sc-nginx-deploy-3-5c4b975ffc-d8hwk:/usr/share/nginx/html# ls
50x.html  index.html  wang1
root@sc-nginx-deploy-3-5c4b975ffc-d8hwk:/usr/share/nginx/html# cp index.html ./wang1/
root@sc-nginx-deploy-3-5c4b975ffc-d8hwk:/usr/share/nginx/html# cat ./wang1/index.html
wang6666666
root@sc-nginx-deploy-3-5c4b975ffc-d8hwk:/usr/share/nginx/html# exit
exit
# Create the index.html files and directories inside the pods first;
# this has to be done on both the service-3 and the service-4 backends
# (a restart-proof alternative is sketched just below).
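Note that files written via kubectl exec live only in that container's writable layer and disappear when a pod is rescheduled. One way to make the test pages survive restarts is to serve them from a ConfigMap; a minimal sketch (the ConfigMap name and mount path are illustrative, not part of the original setup):

apiVersion: v1
kind: ConfigMap
metadata:
  name: wang2-page
data:
  index.html: |
    wang11111111
---
# Then, in the deployment's pod template, mount it over the /wang2 path:
#   volumes:
#   - name: wang2-page
#     configMap:
#       name: wang2-page
#   ...
#   volumeMounts:
#   - name: wang2-page
#     mountPath: /usr/share/nginx/html/wang2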
Step 4: check that the nginx.conf inside the ingress controller contains the rules for the Ingress.

[root@master ingress]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-lvbmf        0/1     Completed   0          29m
ingress-nginx-admission-patch-h24bx         0/1     Completed   1          29m
ingress-nginx-controller-7cd558c647-ft9gx   1/1     Running     0          29m
ingress-nginx-controller-7cd558c647-t2pmg   1/1     Running     0          29m

Get the NodePort that the ingress controller's service exposes on the hosts; accessing a node on that port verifies that the ingress controller can load-balance:

[root@k8smaster 4-4]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.99.160.10   <none>        80:30092/TCP,443:30263/TCP   37m
ingress-nginx-controller-admission   ClusterIP   10.99.138.23   <none>        443/TCP                      37m

Test from another host or a Windows machine using the domain name. Because the load balancing is configured per domain, the request must use the domain name rather than an IP address; the ingress controller balances HTTP traffic, i.e. layer-7 load balancing.

[root@nfs ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.21 www.wang.com
192.168.0.22 www.wang.com
192.168.0.20 master
[root@nfs ~]# curl www.wang.com/wang1/index.html
wang6666666
[root@nfs ~]# curl www.wang.com/wang2/index.html
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.21.5</center>
</body>
</html>
[root@nfs ~]# curl www.wang.com/wang2/index.html
wang11111111
# Requests are round-robined -- across the two node IPs in /etc/hosts and across the backend pods --
# and only the pods where index.html was copied can serve it, so an occasional 404 just means the
# request landed on a pod without the file. Retry a few times.
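To take the hosts-file round-robin out of the picture and test one node at a time, curl's --resolve option pins the domain to a single node IP (the pod-level round-robin inside the cluster still applies):

# Pin www.wang.com to worker1, then to worker2
curl --resolve www.wang.com:80:192.168.0.21 http://www.wang.com/wang1/index.html
curl --resolve www.wang.com:80:192.168.0.22 http://www.wang.com/wang1/index.html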

# Deploy PV and PVC to manage the storage used by the web pods

Step 5: start the second service and its pods, backed by PV + PVC + NFS.
The NFS server must be prepared in advance, then the PV and PVC created.

[root@k8smaster 4-4]# ls
ingress-controller-deploy.yaml         nfs-pvc.yaml   sc-ingress.yaml
ingress-nginx-controllerv1.1.0.tar.gz  nfs-pv.yaml    sc-nginx-svc-1.yaml
kube-webhook-certgen-v1.1.0.tar.gz     nginx-deployment-nginx-svc-2.yaml
[root@master ingress]# cat nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sc-nginx-pv
  labels:
    type: sc-nginx-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: "/web"           # directory exported by the NFS server
    server: 192.168.0.36   # IP address of the NFS server
    readOnly: false
[root@k8smaster 4-4]# kubectl apply -f nfs-pv.yaml
persistentvolume/sc-nginx-pv configured
[root@master ingress]# cat nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sc-nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs    # bind to an nfs-class PV
[root@master ingress]# kubectl apply -f nfs-pvc.yaml
persistentvolumeclaim/sc-nginx-pvc created
[root@master ingress]# kubectl get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS    REASON   AGE
jenkins-pv-volume   10Gi       RWO            Retain           Bound    devops-tools/jenkins-pv-claim   local-storage            22h
pv-web              10Gi       RWX            Retain           Bound    default/pvc-web                 nfs                      24h
sc-nginx-pv         10Gi       RWX            Retain           Bound    default/sc-nginx-pvc            nfs                      76s
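The claim side can be checked the same way; a Bound status confirms the PVC matched the NFS-backed PV (output sketched to match the PV listing above):

[root@master ingress]# kubectl get pvc
NAME           STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
sc-nginx-pvc   Bound    sc-nginx-pv   10Gi       RWX            nfs            2m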
[root@master ingress]# cat nginx-deployment-nginx-svc-2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sc-nginx-feng-2
  template:
    metadata:
      labels:
        app: sc-nginx-feng-2
    spec:
      volumes:
      - name: sc-pv-storage-nfs
        persistentVolumeClaim:
          claimName: sc-nginx-pvc
      containers:
      - name: sc-pv-container-nfs
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: "http-server"
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: sc-pv-storage-nfs
---
apiVersion: v1
kind: Service
metadata:
  name: sc-nginx-svc-2
  labels:
    app: sc-nginx-svc-2
spec:
  selector:
    app: sc-nginx-feng-2
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: 80
[root@k8smaster 4-4]# kubectl apply -f nginx-deployment-nginx-svc-2.yaml
deployment.apps/nginx-deployment created
service/sc-nginx-svc-2 created
[root@master ingress]# kubectl get svc
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes       ClusterIP   10.96.0.1        <none>        443/TCP          42h
sc-nginx-svc     ClusterIP   10.108.143.45    <none>        80/TCP           20m
sc-nginx-svc-2   ClusterIP   10.109.241.58    <none>        80/TCP           16s
svc-mysql        NodePort    10.110.192.240   <none>        3306:30007/TCP   4h45m
[root@master ingress]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.101.22.116   <none>        80:32140/TCP,443:30268/TCP   44m
ingress-nginx-controller-admission   ClusterIP   10.106.82.248   <none>        443/TCP                      44m
[root@master ingress]# kubectl get ingress
NAME         CLASS   HOSTS                       ADDRESS                     PORTS   AGE
sc-ingress   nginx   www.feng.com,www.wang.com   192.168.0.21,192.168.0.22   80      16m
The service is reachable either through the NodePort exposed on the hosts (32140 here) or through port 80 via the domain name:
# Access succeeded
[root@ansible ~]# curl www.wang.com
welcome to sanchuang !!! \n
welcome to sanchuang !!!
0000000000000000000000
welcome to sanchuang !!!
welcome to sanchuang !!!
welcome to sanchuang !!!
666666666666666666 !!!
777777777777777777 !!!

        9. Use probes (liveness, readiness, startup) with the httpGet and exec methods to monitor the web pods, restarting them as soon as a problem appears, which makes the business pods more reliable.

[root@master ingress]# vim my-web.yaml
[root@master ingress]# cat my-web.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myweb
  name: myweb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000
        resources:
          limits:
            cpu: 300m
          requests:
            cpu: 100m
        livenessProbe:
          exec:
            command:
            - ls
            - /
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          exec:
            command:
            - ls
            - /
          initialDelaySeconds: 5
          periodSeconds: 5
        startupProbe:
          httpGet:
            path: /
            port: 8000
          failureThreshold: 30
          periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myweb-svc
  name: myweb-svc
spec:
  selector:
    app: myweb
  type: NodePort
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 8000
    nodePort: 30001
[root@master ingress]# kubectl describe pod myweb-b69f9bc6-ht2vw
Name:         myweb-b69f9bc6-ht2vw
Namespace:    default
Priority:     0
Node:         worker2/192.168.0.22
Start Time:   Thu, 04 Apr 2024 20:06:43 +0800
Labels:       app=myweb
              pod-template-hash=b69f9bc6
Annotations:  cni.projectcalico.org/containerID: 8c2aed8a822bab4162d7d8cce6933cf058ecddb3d33ae8afa3eee7daa8a563be
              cni.projectcalico.org/podIP: 10.224.189.110/32
              cni.projectcalico.org/podIPs: 10.224.189.110/32
Status:       Running
IP:           10.224.189.110
IPs:
  IP:           10.224.189.110
Controlled By:  ReplicaSet/myweb-b69f9bc6
Containers:
  myweb:
    Container ID:   docker://64d91f5ae0c61770e2dc91ee6cfc46f029a7af25f2119ea9ea047407ae072969
    Image:          nginx:latest
    Image ID:       docker-pullable://nginx@sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31
    Port:           8000/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 04 Apr 2024 20:06:44 +0800
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:  300m
    Requests:
      cpu:  100m
    Liveness:   exec [ls /] delay=5s timeout=1s period=5s #success=1 #failure=3
    Readiness:  exec [ls /] delay=5s timeout=1s period=5s #success=1 #failure=3
    Startup:    http-get http://:8000/ delay=0s timeout=1s period=10s #success=1 #failure=30
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bhvf6 (ro)
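Ready: False in this output is expected rather than a probe malfunction: the stock nginx image listens on port 80, while containerPort, the Service targetPort and the startupProbe all point at 8000, so the HTTP startup probe never succeeds (and after failureThreshold × periodSeconds = 30 × 10s the kubelet restarts the container; until the startup probe passes, liveness and readiness are not even run). A sketch of the corrected section, assuming the intent was to probe nginx on the port it actually serves:

        ports:
        - containerPort: 80
        startupProbe:
          httpGet:
            path: /
            port: 80            # nginx's real listen port
          failureThreshold: 30
          periodSeconds: 10
# ...and in the Service, keep port: 8000 / nodePort: 30001 but set targetPort: 80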

   10. Use the ab tool to stress-test the web services across the k8s cluster.

Install the httpd-tools package to get the ab tool:
[root@nfs-server ~]# yum install httpd-tools -y
Simulate some traffic:
[root@nfs-server ~]# ab -n 1000 -c50 http://192.168.220.100:31000/index.html
[root@master hpa]# kubectl get hpa --watch
Increase the concurrency and the total number of requests:
[root@gitlab ~]# ab -n 5000 -c100 http://192.168.0.21:80/index.html
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.0.21 (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Finished 5000 requests

Server Software:
Server Hostname:        192.168.0.21
Server Port:            80

Document Path:          /index.html
Document Length:        146 bytes

Concurrency Level:      100
Time taken for tests:   2.204 seconds
Complete requests:      5000
Failed requests:        0
Write errors:           0
Non-2xx responses:      5000
Total transferred:      1370000 bytes
HTML transferred:       730000 bytes
Requests per second:    2268.42 [#/sec] (mean)
Time per request:       44.084 [ms] (mean)
Time per request:       0.441 [ms] (mean, across all concurrent requests)
Transfer rate:          606.98 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    3   4.1      1      22
Processing:     1   40  30.8     38     160
Waiting:        0   39  30.7     36     160
Total:          1   43  30.9     41     162

Percentage of the requests served within a certain time (ms)
  50%     41
  66%     54
  75%     63
  80%     69
  90%     83
  95%    100
  98%    115
  99%    129
 100%    162 (longest request)
# Note: Non-2xx responses: 5000 means every reply carried an error status (the 146-byte body
# is consistent with nginx's 404 page), so this run effectively benchmarks the error path;
# point ab at a URL that returns 200 for a realistic measurement.
# Ways to watch the cluster during the test:
1. kubectl top pod                   # local top view via Metrics Server
2. http://192.168.0.33:3000/         # Grafana
3. http://192.168.0.33:9090/targets  # Prometheus
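While ab runs, per-node CPU pressure can also be pulled straight from Prometheus over its HTTP query API; a sketch (assumes node_exporter, installed earlier by install_node_exporter.sh, is being scraped):

# Per-node CPU utilisation (%) over the last minute
curl -s 'http://192.168.0.33:9090/api/v1/query' \
  --data-urlencode 'query=100 * (1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[1m])))'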

 Project takeaways:

  1. Gained a deeper understanding of the individual features of k8s.
  2. Learned the supporting services (Prometheus, NFS, etc.) in more depth.
  3. Improved my own troubleshooting ability.
  4. Built a working understanding of load balancing, high availability and autoscaling.
  5. Better understood the relationship between development and operations.