
Cloud-Native Kubernetes: Deploying SonarQube on K8S 1.29


I. Experiment

1. Environment

(1) Hosts

Table 1: Hosts

Host     Role              Version   IP               Notes
master   K8S master node   1.29.0    192.168.204.8
node1    K8S worker node   1.29.0    192.168.204.9
node2    K8S worker node   1.29.0    192.168.204.10   Kuboard already deployed

(2) Check the cluster from the master node

# 1) List the nodes
kubectl get node
# 2) List the nodes with details
kubectl get node -o wide

(3) Check the pods

[root@master ~]# kubectl get pod -A

(4) Access Kuboard

http://192.168.204.10:30080/kuboard/cluster

View the nodes

2. Deploying Helm on K8S 1.29

(1) Reference

https://github.com/helm/helm/releases/tag/v3.14.4

The latest release at the time of writing is v3.14.4.

(2) Deploy Helm

# 1) Install helm
# Download the binary Helm client package: helm-v3.14.4-linux-amd64.tar.gz
tar -zxvf helm-v3.14.4-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
helm version
# Enable command completion
source <(helm completion bash)
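
The completion above only lasts for the current shell session; a minimal sketch of making it persistent, assuming bash and a standard ~/.bashrc:

echo 'source <(helm completion bash)' >> ~/.bashrc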

Install

(3) Install the chart with helm

# 1) Reference
# https://github.com/SonarSource/helm-chart-sonarqube
# 2) Install the chart with helm
# Add the chart repository
helm repo add sonarqube https://SonarSource.github.io/helm-chart-sonarqube
# 3) Pull the specified version
helm pull sonarqube/sonarqube --version 10.5.0+2748

Check the latest version

Install

Download

(4) Move the chart

cd ~ && mkdir sonarqube
mv sonarqube-10.5.0+2748.tgz sonarqube/
cd sonarqube/;ls

3. Setting up NFS

(1) Check that the rpcbind and nfs-utils packages are installed

[root@master ~]# rpm -q rpcbind nfs-utils

(2) Create the directory and set permissions

[root@master ~]# mkdir -p /opt/sonarqube

[root@master opt]# chmod 777 sonarqube/

(3) Open the NFS configuration file

[root@master opt]# vim /etc/exports

(4) Configuration

Grant read/write access to clients from all networks, write changes synchronously, and do not squash the root user on the shared directory.

……
/opt/sonarqube *(rw,sync,no_root_squash)

(5) Apply the NFS configuration

[root@master opt]# exportfs -r

(6) Check the listening ports

[root@master opt]# ss -antp | grep rpcbind

(7) Check the exports

[root@master opt]# showmount -e

Check from another node

[root@node1 ~]# showmount -e master

4. Installing nfs-provisioner on K8S 1.29

(1) Reference

https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/releases

(2) Create the directory

[root@master ~]# cd ~ && mkdir nfs-subdir-external-provisioner
[root@master ~]# cd nfs-subdir-external-provisioner/

(3) Download method 1

Add the helm repo

[root@master nfs-subdir-external-provisioner]# helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner

Download

[root@master ~]# helm pull nfs-subdir-external-provisioner/nfs-subdir-external-provisioner

(5) Download method 2

Reference

https://artifacthub.io/packages/helm/nfs-subdir-external-provisioner/nfs-subdir-external-provisioner

Click Install on the right side of the page

A page pops up

Click "this link" in the lower-right corner
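
As an alternative to clicking through, the same tarball can be fetched from the command line; the URL below is an assumption based on the project's release naming and should be checked against the actual link on the page:

wget https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/releases/download/nfs-subdir-external-provisioner-4.0.18/nfs-subdir-external-provisioner-4.0.18.tgz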

(6) Move and extract (using method 2 above)

[root@master ~]# mv nfs-subdir-external-provisioner-4.0.18.tgz nfs-subdir-external-provisioner
[root@master nfs-subdir-external-provisioner]# tar -xvf nfs-subdir-external-provisioner-4.0.18.tgz

(7) Import the image on the node

Load the local image

[root@node1 ~]# docker load --input nfs-subdir-external-provisioner.tar

Retag the image

[root@node1 ~]# docker tag k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2

(8) Install from the master node

helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/ --set nfs.server=192.168.204.8 --set nfs.path=/opt/sonarqube --set storageClass.name=nfs-client --set storageClass.defaultClass=true -n nfs-provisioner --create-namespace 
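
Because storageClass.defaultClass=true is set, the new class should show up as the default; a quick check with a standard kubectl command (not shown in the original):

kubectl get storageclass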

(9) Check the pod

Detailed view

[root@master ~]# kubectl describe pod nfs-subdir-external-provisioner-567b586d45-xz8r6 -n nfs-provisioner
Name:             nfs-subdir-external-provisioner-567b586d45-xz8r6
Namespace:        nfs-provisioner
Priority:         0
Service Account:  nfs-subdir-external-provisioner
Node:             node1/192.168.204.9
Start Time:       Sat, 27 Apr 2024 19:38:39 +0800
Labels:           app=nfs-subdir-external-provisioner
                  pod-template-hash=567b586d45
                  release=nfs-subdir-external-provisioner
Annotations:      cni.projectcalico.org/containerID: 8f4479951e36de27cc21dcce8b7bf11a34eb838107d4457c6ca352acbf69399e
                  cni.projectcalico.org/podIP: 10.244.166.167/32
                  cni.projectcalico.org/podIPs: 10.244.166.167/32
Status:           Running
IP:               10.244.166.167
IPs:
  IP:  10.244.166.167
Controlled By:  ReplicaSet/nfs-subdir-external-provisioner-567b586d45
Containers:
  nfs-subdir-external-provisioner:
    Container ID:   docker://9c18e809cc7179a55d66a1886b6addbd034841a6010fd07c4b4049449ab79814
    Image:          registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
    Image ID:       docker://sha256:932b0bface75b80e713245d7c2ce8c44b7e127c075bd2d27281a16677c8efef3
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 27 Apr 2024 19:38:41 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      PROVISIONER_NAME:  cluster.local/nfs-subdir-external-provisioner
      NFS_SERVER:        192.168.204.8
      NFS_PATH:          /opt/sonarqube
    Mounts:
      /persistentvolumes from nfs-subdir-external-provisioner-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t248d (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  nfs-subdir-external-provisioner-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.204.8
    Path:      /opt/sonarqube
    ReadOnly:  false
  kube-api-access-t248d:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Pulled     88s   kubelet            Container image "registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2" already present on machine
  Normal  Created    88s   kubelet            Created container nfs-subdir-external-provisioner
  Normal  Started    87s   kubelet            Started container nfs-subdir-external-provisioner
  Normal  Scheduled  81s   default-scheduler  Successfully assigned nfs-provisioner/nfs-subdir-external-provisioner-567b586d45-xz8r6 to node1

(10) Check in Kuboard

Workloads

Pods

Details

5. Deploying SonarQube on K8S 1.29 (method 1)

(1) Extract

[root@master ~]# cd sonarqube/
[root@master sonarqube]# ls
[root@master sonarqube]# tar -xvf sonarqube-10.5.0+2748.tgz

(2) Edit the values.yaml file

[root@master sonarqube]# cd sonarqube/
[root@master sonarqube]# vim values.yaml
……
# Search globally for the "service" keyword
service:
  type: NodePort        # change the type to NodePort
  externalPort: 9000
  internalPort: 9000
  nodePort: 30090       # port exposed externally via NodePort
……
persistence:
  enabled: true         # set to true
  ……
  storageClass: nfs-client   # set to the cluster's current default StorageClass

Before:

After:

(3) Create a namespace for the SonarQube installation

[root@master sonarqube]# kubectl create ns sonarqube

(4) Install SonarQube from the chart

[root@master sonarqube]# helm install sonarqube ./sonarqube -n sonarqube
NAME: sonarqube
LAST DEPLOYED: Sat Apr 27 20:12:09 2024
NAMESPACE: sonarqube
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace sonarqube -o jsonpath="{.spec.ports[0].nodePort}" services sonarqube-sonarqube)
  export NODE_IP=$(kubectl get nodes --namespace sonarqube -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
WARNING:
Please note that the SonarQube image runs with a non-root user (uid=1000) belonging to the root group (guid=0). In this way, the chart can support arbitrary user ids as recommended in OpenShift.
Please visit https://docs.openshift.com/container-platform/4.14/openshift_images/create-images.html#use-uid_create-images for more information.
WARNING: The embedded PostgreSQL is intended for evaluation only, it is DEPRECATED, and it will be REMOVED in a future release.
Please visit https://artifacthub.io/packages/helm/sonarqube/sonarqube#production-use-case for more information.

(5) Run the commands

[root@master sonarqube]# export NODE_PORT=$(kubectl get --namespace sonarqube -o jsonpath="{.spec.ports[0].nodePort}" services sonarqube-sonarqube)
[root@master sonarqube]# export NODE_IP=$(kubectl get nodes --namespace sonarqube -o jsonpath="{.items[0].status.addresses[0].address}")
[root@master sonarqube]# echo http://$NODE_IP:$NODE_PORT
http://192.168.204.8:30090

(6) Pull the postgresql image on the node

[root@node2 ~]# docker pull docker.io/bitnami/postgresql:11.14.0-debian-10-r22
# An alternative mirror can also be used: m.daocloud.io/docker.io/bitnami/postgresql:11.14.0-debian-10-r22

Check the pods in Kuboard

(7) Pull the sonarqube image on the nodes

Pull the image on node2

[root@node2 ~]# docker pull sonarqube:10.5.0-community

Export the image on node2

[root@node2 ~]# docker save -o sonarqube.tar sonarqube:10.5.0-community

Copy the Docker image to node1

[root@node2 ~]# scp sonarqube.tar root@node1:~

Import the Docker image on node1

[root@node1 ~]# docker load -i sonarqube.tar 

(8) Check the services
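
The original shows this step as a screenshot; the services can be listed with a standard kubectl command (not shown in the original):

kubectl get svc -n sonarqube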

(9) Check the volumes

[root@master sonarqube]# cd /opt/sonarqube/
[root@master sonarqube]# ls

Check in Kuboard

(10) Update the configuration with Helm

[root@master sonarqube]# helm upgrade -f sonarqube/values.yaml sonarqube ./sonarqube -n sonarqube

(11) Uninstall the release

[root@master sonarqube]# helm uninstall sonarqube -n sonarqube

6. Deploying SonarQube on K8S 1.29 (method 2)

(1) Create the NFS shares

postgresql

[root@master opt]# cd ~
[root@master ~]# mkdir -p /opt/postgre
[root@master ~]# cd /opt
[root@master opt]# chmod 777 postgre/
[root@master opt]# vim /etc/exports
[root@master opt]# exportfs -r
[root@master opt]# showmount -e
Export list for master:
/opt/postgre *
/opt/sonar *
/opt/sonarqube *
/opt/nexus *
/opt/k8s *

sonarqube

[root@master sonarqube]# cd ~
[root@master ~]# mkdir -p /opt/sonar
[root@master ~]#
[root@master ~]# cd /opt
[root@master opt]# chmod 777 sonar/
[root@master opt]# vim /etc/exports
[root@master opt]# exportfs -r
[root@master opt]# showmount -e
Export list for master:
/opt/sonar *
/opt/sonarqube *
/opt/nexus *
/opt/k8s *
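
The /etc/exports entries added for the two new shares are not shown in the original; a minimal sketch, assuming the same options as the /opt/sonarqube export configured earlier:

/opt/postgre *(rw,sync,no_root_squash)
/opt/sonar *(rw,sync,no_root_squash)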

(2) Create the PV for postgresql

[root@master ~]# vim pv-postgre.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-postgre
spec:
  capacity:
    storage: 5Gi                          # capacity
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce                       # access mode: read-write by a single node
  persistentVolumeReclaimPolicy: Retain   # reclaim policy; Retain means manual reclaim
  storageClassName: "pv-postgre"          # StorageClass name used to bind the PVC
  nfs:
    path: /opt/postgre                    # shared path on the NFS server
    server: 192.168.204.8                 # NFS server address

(3) Create the resource

[root@master ~]# kubectl apply -f pv-postgre.yaml 

(4) Check the PV

[root@master ~]# kubectl get pv

(5) Pull the image

 node2

[root@node2 ~]# docker pull postgres:11.4

(6) Export the image

[root@node2 ~]# docker save -o postgres.tar postgres:11.4

(7) Copy the Docker image to node1

[root@node2 ~]# scp postgres.tar root@node1:~ 

(8) Import the Docker image on node1

[root@node1 ~]# docker load -i postgres.tar 

(9) Deploy postgresql

[root@master ~]# vim postgre.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgre-pvc
  namespace: sonarqube
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: "pv-postgre"
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-sonar
  labels:
    app: postgres-sonar
  namespace: sonarqube
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-sonar
  template:
    metadata:
      labels:
        app: postgres-sonar
    spec:
      containers:
        - name: postgres-sonar
          image: postgres:11.4
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: "sonarDB"
            - name: POSTGRES_USER
              value: "sonarUser"
            - name: POSTGRES_PASSWORD
              value: "123456"
          resources:
            limits:
              cpu: 1000m
              memory: 2048Mi
            requests:
              cpu: 500m
              memory: 1024Mi
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: postgre-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-sonar
  namespace: sonarqube
  labels:
    app: postgres-sonar
spec:
  ports:
    - port: 5432
      protocol: TCP
      targetPort: 5432
  selector:
    app: postgres-sonar

(10) Create the resources

[root@master ~]# kubectl apply -f postgre.yaml 

(11) Check the PV and PVC

[root@master ~]# kubectl get pv
[root@master ~]# kubectl get pvc -n sonarqube

(12) Check the pods and services

[root@master ~]# kubectl get pod,svc -n sonarqube
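
Once the postgres-sonar pod is Running, the database can be checked from inside the container; the exec command below is an assumption, not part of the original:

kubectl -n sonarqube exec -it deploy/postgres-sonar -- psql -U sonarUser -d sonarDB -c '\l'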

(13) Check in Kuboard

Workloads

Pods

Services

(14) Create the PV for sonarqube

[root@master ~]# vim pv-sonar.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-sonar
spec:
  capacity:
    storage: 10Gi                         # capacity
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce                       # access mode: read-write by a single node
  persistentVolumeReclaimPolicy: Retain   # reclaim policy; Retain means manual reclaim
  storageClassName: "pv-sonar"            # StorageClass name used to bind the PVC
  nfs:
    path: /opt/sonar                      # shared path on the NFS server
    server: 192.168.204.8                 # NFS server address

(15) Create the resource

[root@master ~]# kubectl apply -f pv-sonar.yaml 

(16) Check the PV

[root@master ~]# kubectl get pv

(17) Pull the image

 node1

[root@node1 ~]# docker pull sonarqube:lts

(18) Export the image

[root@node1 ~]# docker save -o sonar.tar sonarqube:lts

(19) Copy the Docker image to node2

[root@node1 ~]# scp sonar.tar root@node2:~ 

(20) Import the Docker image on node2

[root@node2 ~]# docker load -i sonar.tar 

(21) Deploy sonarqube

[root@master ~]# vim sonar.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sonarqube-pvc
  namespace: sonarqube
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: "pv-sonar"
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarqube
  labels:
    app: sonarqube
  namespace: sonarqube
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonarqube
  template:
    metadata:
      labels:
        app: sonarqube
    spec:
      initContainers:
        - name: init-sysctl
          image: busybox
          imagePullPolicy: IfNotPresent
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
      containers:
        - name: sonarqube
          image: sonarqube:lts
          ports:
            - containerPort: 9000
          env:
            - name: SONARQUBE_JDBC_USERNAME
              value: "sonarUser"
            - name: SONARQUBE_JDBC_PASSWORD
              value: "123456"
            - name: SONARQUBE_JDBC_URL
              value: "jdbc:postgresql://postgres-sonar:5432/sonarDB"   # replace postgres-sonar with the Service's cluster IP
          livenessProbe:
            httpGet:
              path: /sessions/new
              port: 9000
            initialDelaySeconds: 60
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /sessions/new
              port: 9000
            initialDelaySeconds: 60
            periodSeconds: 30
            failureThreshold: 6
          resources:
            limits:
              cpu: 2000m
              memory: 2048Mi
            requests:
              cpu: 1000m
              memory: 1024Mi
          volumeMounts:
            - mountPath: /opt/sonarqube/conf
              name: data
              subPath: conf
            - mountPath: /opt/sonarqube/data
              name: data
              subPath: data
            - mountPath: /opt/sonarqube/extensions
              name: data
              subPath: extensions
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: sonarqube-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: sonarqube
  namespace: sonarqube
  labels:
    app: sonarqube
spec:
  type: NodePort
  ports:
    - name: sonarqube
      port: 9000
      targetPort: 9000
      nodePort: 30090
      protocol: TCP
  selector:
    app: sonarqube
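
The comment on SONARQUBE_JDBC_URL says to replace postgres-sonar with the Service's cluster IP (see the issue analysis in Part II); one way to look it up, with a standard kubectl command not shown in the original:

kubectl get svc postgres-sonar -n sonarqube -o jsonpath='{.spec.clusterIP}'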

(22) Create the resources

[root@master ~]# kubectl apply -f sonar.yaml 

(23) Check the PV and PVC

[root@master ~]# kubectl get pv
[root@master ~]# kubectl get pvc -n sonarqube

(24) Check the pods and services

[root@master ~]# kubectl get pod,svc -n sonarqube

(25) Deploy the Ingress

[root@master ~]# vim ingress-sonar.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-sonar
  namespace: sonarqube
spec:
  ingressClassName: "nginx"
  rules:
    - host: sonarqube.site
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sonarqube
                port:
                  number: 9000
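
This assumes an NGINX ingress controller is already installed and its IngressClass is named nginx; a quick check with a standard kubectl command (not shown in the original):

kubectl get ingressclass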

(26) Create the resource

[root@master ~]# kubectl apply -f ingress-sonar.yaml 

(27) Check the Ingress

[root@master ~]# kubectl get ingress -n sonarqube

(28) Detailed view

[root@master ~]# kubectl describe ingress ingress-sonar -n sonarqube
Name:             ingress-sonar
Labels:           <none>
Namespace:        sonarqube
Address:          10.101.23.182
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host            Path  Backends
  ----            ----  --------
  sonarqube.site
                  /   sonarqube:9000 (10.244.166.129:9000)
Annotations:      <none>
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    72s (x2 over 86s)  nginx-ingress-controller  Scheduled for sync
  Normal  Sync    72s (x2 over 86s)  nginx-ingress-controller  Scheduled for sync

(29) Check in Kuboard

App routes (Ingress)

Details

(30) Edit the hosts file on the master node

[root@master ~]# vim /etc/hosts
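
A sketch of the entry being added, assuming sonarqube.site is pointed at one of the node IPs (the master node is used here as an example):

192.168.204.8 sonarqube.site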

(31) Check

The ingress-nginx-controller is exposed externally on NodePort 31820.
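
One way to confirm the NodePort; the namespace and Service name are assumptions based on a standard ingress-nginx installation:

kubectl get svc -n ingress-nginx ingress-nginx-controller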

(32) Test with curl

[root@master ~]# curl sonarqube.site:31820

(33) Edit the hosts file on the local (physical) machine

(34) Access the system

http://sonarqube.site:31820

(35) Enter the username and password

Username: admin
Password: admin

(36) Set a new password

A prompt appears

Change the password

(37) Enter the system

(38) Other ways to deploy SonarQube

See my other post:

CI/CD: Installing SonarQube 9.6 on CentOS 7 (CSDN blog)

II. Issues

1. Error when installing SonarQube from the chart

(1) Error

Error: INSTALLATION FAILED: cannot load values.yaml: error converting YAML to JSON: yaml: line 67: mapping values are not allowed in this context

(2) Cause

Invalid YAML formatting (indentation) in values.yaml.

(3) Solution

Fix the configuration file.
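
Before reinstalling, the chart can be validated locally to catch this kind of YAML error earlier; a standard helm command, not shown in the original:

helm lint ./sonarqube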

Before:

After:

2. Error when deploying SonarQube on K8S

(1) Error

Check the pods and services

Check the deployment

(2) Analysis

The JDBC connection to postgresql failed; in value: "jdbc:postgresql://postgres-sonar:5432/sonarDB", postgres-sonar needs to be replaced with the Service's cluster IP.

(3) Solution

Edit the configuration:

[root@master ~]# kubectl edit deploy sonarqube  -n sonarqube
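
A sketch of an equivalent non-interactive fix, assuming the postgres-sonar Service's cluster IP is looked up first; kubectl set env is an alternative to editing the Deployment, not what the original used:

CLUSTER_IP=$(kubectl get svc postgres-sonar -n sonarqube -o jsonpath='{.spec.clusterIP}')
kubectl -n sonarqube set env deploy/sonarqube SONARQUBE_JDBC_URL="jdbc:postgresql://${CLUSTER_IP}:5432/sonarDB"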

After:

Success:

[root@master ~]# kubectl get pod,svc -n sonarqube
