[3] Multi-Node KubeSphere Deployment on Linux | The Simplest Installation Method


Contents

Step 1: Prepare three servers

Step 2: Download KubeKey

Step 3: Create the cluster

1. Generate a sample configuration file

2. Edit the configuration file

3. Create the cluster from the configuration file

4. Verify the installation


Step 1: Prepare three servers

  • 4c8g (master)
  • 8c16g * 2 (workers)
  • CentOS 7.9
  • All machines reachable over the internal network
  • Each machine has its own hostname
  • Ports 30000–32767 open in the firewall

Host IP        Hostname    Role
192.168.0.2    master      control plane, etcd
192.168.0.3    node1       worker
192.168.0.4    node2       worker
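For the firewall requirement above, a minimal sketch using firewalld (the CentOS 7 default firewall; run as root on every machine) — in a trusted internal network you may instead simply stop firewalld:

```shell
# Permanently open the Kubernetes NodePort range (30000-32767), then reload.
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --reload
```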

# Set the hostname (run on each machine, using the hostname from the table above, not the IP)

hostnamectl set-hostname <hostname>

Step 2: Download KubeKey (on the master node)

First run the following command to make sure you download KubeKey from the correct region:

export KKZONE=cn

Run the following command to download KubeKey:

curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.2 sh -

Make kk executable:

chmod +x kk

Step 3: Create the cluster (on the master node)

1. Generate a sample configuration file

The command syntax is:

./kk create config [--with-kubernetes version] [--with-kubesphere version] [(-f | --file) path]

For example:

./kk create config --with-kubernetes v1.20.4 --with-kubesphere v3.1.1

2. Edit the configuration file

This creates a default file named config-sample.yaml. Edit it; below is an example configuration for a multi-node cluster with one control-plane node.

spec:
  hosts:
  - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, user: ubuntu, password: Testing123}
  - {name: node1, address: 192.168.0.3, internalAddress: 192.168.0.3, user: ubuntu, password: Testing123}
  - {name: node2, address: 192.168.0.4, internalAddress: 192.168.0.4, user: ubuntu, password: Testing123}
  roleGroups:
    etcd:
    - master
    control-plane:
    - master
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: 6443

My own modified version:

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 172.31.0.2, internalAddress: 172.31.0.2, user: root, password: "W1234567@123"}
  - {name: node1, address: 172.31.0.3, internalAddress: 172.31.0.3, user: root, password: "W1234567@123"}
  - {name: node2, address: 172.31.0.4, internalAddress: 172.31.0.4, user: root, password: "W1234567q@123"}
  roleGroups:
    etcd:
    - master
    control-plane:
    - master
    worker:
    - master
    - node1
    - node2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.20.4
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    redis:
      enabled: false
    redisVolumSize: 2Gi
    openldap:
      enabled: false
    openldapVolumeSize: 2Gi
    minioVolumeSize: 20Gi
    monitoring:
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
    es:
      elasticsearchMasterVolumeSize: 4Gi
      elasticsearchDataVolumeSize: 20Gi
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  console:
    enableMultiLogin: true
    port: 30880
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
  devops:
    enabled: false
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: false
    ruler:
      enabled: true
      replicas: 2
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    prometheusMemoryRequest: 400Mi
    prometheusVolumeSize: 20Gi
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
  kubeedge:
    enabled: false
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress:
        - ""
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []

3. Create the cluster from the configuration file

./kk create cluster -f config-sample.yaml

At this point KubeKey reported that some additional dependencies were missing, as shown below:

22:33:18 CST [ConfirmModule] Display confirmation form
+--------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name   | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time         |
+--------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| master | y    | y    | y       |          |       |       |         |           | y      |        |            |            |             |                  | CST 22:33:18 |
| node1  | y    | y    | y       |          |       |       |         |           | y      |        |            |            |             |                  | CST 22:33:18 |
| node2  | y    | y    | y       |          |       |       |         |           | y      |        |            |            |             |                  | CST 22:33:18 |
+--------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
22:33:18 CST [ERRO] master: conntrack is required.
22:33:18 CST [ERRO] master: socat is required.
22:33:18 CST [ERRO] node1: conntrack is required.
22:33:18 CST [ERRO] node1: socat is required.
22:33:18 CST [ERRO] node2: conntrack is required.
22:33:18 CST [ERRO] node2: socat is required.

Install the missing packages on each node:

yum install -y conntrack socat
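Note that conntrack and socat must be present on every node, not only the master. A small sketch that prints the per-node install command (root SSH access and the sample IPs are assumptions — adjust to your environment):

```shell
# Print the yum install command to run against each node.
pkgs="conntrack socat"
for ip in 192.168.0.2 192.168.0.3 192.168.0.4; do
  echo "ssh root@$ip yum install -y $pkgs"
done
```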

Then run the install command again:

./kk create cluster -f config-sample.yaml

The whole installation may take 10 to 20 minutes, depending on your machine and network environment.
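While waiting, you can follow the installer's progress from the master node with the standard ks-installer log command:

```shell
# Tail the KubeSphere installer logs until it prints the welcome banner.
kubectl logs -n kubesphere-system \
  "$(kubectl get pod -n kubesphere-system -l app=ks-installer -o jsonpath='{.items[0].metadata.name}')" -f
```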

4. Verify the installation

When the installation finishes, you will see output like the following:

#####################################################
###              Welcome to KubeSphere!           ###
#####################################################
Console: http://192.168.0.2:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.
#####################################################
https://kubesphere.io             20xx-xx-xx xx:xx:xx
#####################################################

You can now access the KubeSphere web console at http://<NodeIP>:30880 with the default account and password (admin/P@88w0rd).
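A quick sanity check can also be run from the master node (assuming kubectl is configured there, which KubeKey sets up by default):

```shell
kubectl get nodes -o wide   # all three nodes should report Ready
kubectl get pods -A         # all pods should eventually be Running
```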
