
Setting up a k8s cluster with KubeSphere

Contents

1 Prepare three VMs (CentOS 7.9)
2 Update yum packages and set the time on every VM
3 Disable the firewall
4 Add host entries for the three servers
5 Set up passwordless SSH between the three servers (exit the shell and reconnect when done)
6 Install the dependencies KubeSphere needs on every node, or you will hit the error: socat not found in system path
7 Install nfs-server
  Configure the NFS client (optional)
  Configure the default storage class (sc.yaml)
8 Download the k8s install script on the master node (node1) only; it can be very slow without a registry mirror, so retry a few times
9 Create the cluster configuration file config-sample.yaml
10 Edit config-sample.yaml
11 Run the installer with the configuration file
12 Wait for the installation to finish; all worker nodes are joined under node1 (roughly 5-10 minutes)
13 Check the logs
14 Check node status
15 If you forgot to configure the Aliyun registry mirror
16 Delete the cluster and reinstall
17 Multi-tenancy in practice


1 Prepare three VMs (CentOS 7.9)

Give each VM a static IP (192.168.1.211/212/213 here). Edit the network config on each machine:

  cd /etc/sysconfig/network-scripts
  vim ifcfg-ens33

Set the following, adjusting IPADDR per node:

  BOOTPROTO=static
  ONBOOT=yes
  IPADDR=192.168.1.211
  GATEWAY=192.168.1.1
  NETMASK=255.255.255.0
  DNS1=114.114.114.114
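The new address only takes effect once the network service is restarted; a quick check, assuming the interface is ens33 as above:

  systemctl restart network
  ip addr show ens33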

2 Update yum packages and set the time on every VM

  yum -y update
  yum makecache fast
  yum install -y ntpdate
  ntpdate time.windows.com
  ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
  date
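A one-off ntpdate run will drift again over time, and etcd is sensitive to clock skew between nodes. If you want to keep the clocks in sync, a minimal sketch is a cron entry on each node (the 30-minute interval is an arbitrary choice):

  (crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate time.windows.com") | crontab -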

3 Disable the firewall

  systemctl stop firewalld
  systemctl disable firewalld
  vim /etc/selinux/config
  # set SELINUX=disabled
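Editing /etc/selinux/config only applies after a reboot. To stop SELinux enforcement for the current session as well, and confirm:

  setenforce 0
  getenforce   # should print Permissive (or Disabled after a reboot)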

4 Add host entries for the three servers

On each of the three machines, append to /etc/hosts:

  vim /etc/hosts

  192.168.1.211 node1
  192.168.1.212 node2
  192.168.1.213 node3
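A quick sanity check that name resolution works from every node:

  ping -c 1 node1
  ping -c 1 node2
  ping -c 1 node3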

5 Set up passwordless SSH between the three servers (exit the shell and reconnect when done)

  # 1. On every server, generate a key pair:
  ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
  # 2. On every server, append its own public key so each node can SSH to itself:
  cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
  # 3. Copy each server's id_dsa.pub to the /tmp directory of the other machines; e.g. on node1:
  scp ~/.ssh/id_dsa.pub node2:/tmp/
  scp ~/.ssh/id_dsa.pub node3:/tmp/
  # 4. On the receiving machines, append the copied key to their authorized_keys:
  cat /tmp/id_dsa.pub >> ~/.ssh/authorized_keys
  # 5. Repeat steps 3-4 from node2 and node3 so every node trusts every other node.
  # 6. Finally, test the passwordless connection:
  ssh node1

6 Install the dependencies KubeSphere needs on every node, or you will hit the error: socat not found in system path

  yum install -y socat conntrack ebtables ipset
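Since passwordless SSH is already in place from step 5, one convenient way to run this on all three nodes from node1 in one go (a convenience sketch, not required):

  for h in node1 node2 node3; do ssh $h "yum install -y socat conntrack ebtables ipset"; done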

7 Install nfs-server

  # On every machine:
  yum install -y nfs-utils
  # On the master, declare the exported directory:
  echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
  # Create the shared directory:
  mkdir -p /nfs/data
  # On the master, enable and start the NFS services:
  systemctl enable rpcbind
  systemctl enable nfs-server
  systemctl start rpcbind
  systemctl start nfs-server
  # Reload the export table:
  exportfs -r
  # Check that the export is active:
  exportfs

Configure the NFS client (optional)

  # Run only on the worker nodes; replace the IP with your own master's:
  showmount -e 192.168.1.211
  mkdir -p /nfs/data
  mount -t nfs 192.168.1.211:/nfs/data /nfs/data
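Note that a plain mount does not survive a reboot. If you want it to persist, one option (adjust the server IP to your setup) is an /etc/fstab entry:

  echo "192.168.1.211:/nfs/data /nfs/data nfs defaults 0 0" >> /etc/fstab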

Configure the default storage class (sc.yaml)

  ## Creates a StorageClass backed by the NFS provisioner
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: nfs-storage
    annotations:
      storageclass.kubernetes.io/is-default-class: "true"
  provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
  parameters:
    archiveOnDelete: "true"  ## whether to archive a PV's contents when the PV is deleted
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nfs-client-provisioner
    labels:
      app: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
  spec:
    replicas: 1
    strategy:
      type: Recreate
    selector:
      matchLabels:
        app: nfs-client-provisioner
    template:
      metadata:
        labels:
          app: nfs-client-provisioner
      spec:
        serviceAccountName: nfs-client-provisioner
        containers:
          - name: nfs-client-provisioner
            image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
            # resources:
            #   limits:
            #     cpu: 10m
            #   requests:
            #     cpu: 10m
            volumeMounts:
              - name: nfs-client-root
                mountPath: /persistentvolumes
            env:
              - name: PROVISIONER_NAME
                value: k8s-sigs.io/nfs-subdir-external-provisioner
              - name: NFS_SERVER
                value: 192.168.1.211  ## your own NFS server address
              - name: NFS_PATH
                value: /nfs/data  ## the directory shared by the NFS server
        volumes:
          - name: nfs-client-root
            nfs:
              server: 192.168.1.211
              path: /nfs/data
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
  ---
  kind: ClusterRole
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: nfs-client-provisioner-runner
  rules:
    - apiGroups: [""]
      resources: ["nodes"]
      verbs: ["get", "list", "watch"]
    - apiGroups: [""]
      resources: ["persistentvolumes"]
      verbs: ["get", "list", "watch", "create", "delete"]
    - apiGroups: [""]
      resources: ["persistentvolumeclaims"]
      verbs: ["get", "list", "watch", "update"]
    - apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses"]
      verbs: ["get", "list", "watch"]
    - apiGroups: [""]
      resources: ["events"]
      verbs: ["create", "update", "patch"]
  ---
  kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: run-nfs-client-provisioner
  subjects:
    - kind: ServiceAccount
      name: nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
  roleRef:
    kind: ClusterRole
    name: nfs-client-provisioner-runner
    apiGroup: rbac.authorization.k8s.io
  ---
  kind: Role
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: leader-locking-nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
  rules:
    - apiGroups: [""]
      resources: ["endpoints"]
      verbs: ["get", "list", "watch", "create", "update", "patch"]
  ---
  kind: RoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: leader-locking-nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
  subjects:
    - kind: ServiceAccount
      name: nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
  roleRef:
    kind: Role
    name: leader-locking-nfs-client-provisioner
    apiGroup: rbac.authorization.k8s.io

Apply it, then confirm the StorageClass exists:

  kubectl apply -f sc.yaml
  kubectl get sc
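To verify that dynamic provisioning actually works end to end, a minimal sketch: create a throwaway PVC against the default StorageClass (the file and claim name nfs-test-pvc are made up for illustration), check that it binds, then delete it.

  # test-pvc.yaml
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: nfs-test-pvc
  spec:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 100Mi

  kubectl apply -f test-pvc.yaml
  kubectl get pvc nfs-test-pvc   # STATUS should become Bound
  kubectl delete -f test-pvc.yaml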

8 Download the k8s install script on the master node (node1) only; it can be very slow without a registry mirror, so retry a few times

  export KKZONE=cn
  curl -sfL https://get-kk.kubesphere.io | VERSION=v1.1.1 sh -
  chmod +x kk
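If the download succeeded, there should now be a kk binary in the current directory; assuming the v1.1.1 release behaves like other KubeKey builds, this prints its version:

  ./kk version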

9 Cluster configuration: create the config file config-sample.yaml

./kk create config --with-kubernetes v1.20.4 --with-kubesphere v3.1.1

10 Edit config-sample.yaml

Fill in your node IPs and root passwords under hosts; node1 serves as etcd and master, node2/node3 as workers:

  apiVersion: kubekey.kubesphere.io/v1alpha1
  kind: Cluster
  metadata:
    name: sample
  spec:
    hosts:
    - {name: node1, address: 192.168.1.211, internalAddress: 192.168.1.211, user: root, password: "root"}
    - {name: node2, address: 192.168.1.212, internalAddress: 192.168.1.212, user: root, password: "root"}
    - {name: node3, address: 192.168.1.213, internalAddress: 192.168.1.213, user: root, password: "root"}
    roleGroups:
      etcd:
      - node1
      master:
      - node1
      worker:
      - node2
      - node3
    controlPlaneEndpoint:
      domain: lb.kubesphere.local
      address: ""
      port: 6443
    kubernetes:
      version: v1.20.4
      imageRepo: kubesphere
      clusterName: cluster.local
    network:
      plugin: calico
      kubePodsCIDR: 10.233.64.0/18
      kubeServiceCIDR: 10.233.0.0/18
    registry:
      registryMirrors: ["https://o83laiga.mirror.aliyuncs.com"]
      insecureRegistries: []
    addons: []

11 Run the installer with the configuration file

./kk create cluster -f config-sample.yaml

12 Wait patiently for the installation to finish; all worker nodes are joined under node1 (roughly 5-10 minutes)

13 Check the logs

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-installer -o jsonpath='{.items[0].metadata.name}') -f

14 Check node status

kubectl get nodes
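All three nodes should report STATUS Ready. It is also worth checking that every pod has come up before logging in; once the installer log from step 13 reports success, it prints the console address and default account (for KubeSphere v3.1 that is port 30880 with admin / P@88w0rd):

  kubectl get pods -A
  # then open http://192.168.1.211:30880 in a browser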

15 If you forgot to configure the Aliyun registry mirror

Edit the Docker daemon file (/etc/docker/daemon.json):

  {
    "registry-mirrors": ["https://o83laiga.mirror.aliyuncs.com"],
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
      "max-size": "100m"
    }
  }
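The mirror only takes effect after Docker reloads its configuration:

  systemctl daemon-reload
  systemctl restart docker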

16 Delete the cluster and reinstall

  ./kk delete cluster

Advanced mode, deleting against a specific config file:

  ./kk delete cluster [-f config-sample.yaml]
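After the deletion finishes, reinstalling is just a matter of rerunning step 11:

  ./kk create cluster -f config-sample.yaml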

17 Multi-tenancy in practice
