
Installing Kubernetes 1.29.3 on Ubuntu 22.04


1. Prepare the Machines

1.1 Install the Virtual Machine

The VM image is ubuntu-22.04.3-live-server-amd64.iso; just press Enter through the installer until the system is installed. Log in as root and confirm that you can ping both the host machine and the Internet.

1.2 Configure the Virtual Machine
1.2.1 Disable the Firewall
# systemctl disable ufw
// Ubuntu does not ship /etc/selinux/config by default; if the file is absent, skip the next two lines
# sed -ri 's/SELINUX=permissive/SELINUX=disabled/' /etc/selinux/config
# sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
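Note that `systemctl disable` only removes the unit from autostart; the running firewall stays active until the next reboot. A minimal sketch for stopping it immediately as well (standard systemd/ufw usage, not part of the original steps):

# systemctl disable --now ufw
# ufw status        // should report "Status: inactive"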
1.2.2 Network Time Synchronization
// Sync time from aliyun once per hour
# apt install ntpdate
# crontab -e
0 */1 * * * /usr/sbin/ntpdate time1.aliyun.com
// Set the time zone
# timedatectl set-timezone Asia/Shanghai
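To confirm the setup before the cron job ever fires, you can run a one-off sync and inspect the clock (ordinary ntpdate/timedatectl usage, an addition to the original listing):

# /usr/sbin/ntpdate time1.aliyun.com
# timedatectl        // should now show the Asia/Shanghai time zone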
1.2.3 Configure Kernel Forwarding and Bridge Filtering
// Write the sysctl configuration
# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
// Load the module
# modprobe br_netfilter
// Check
# lsmod | grep br_netfilter
br_netfilter 32768 0
bridge 307200 1 br_netfilter
// Load the modules at boot
# cat > /etc/modules-load.d/k8s.conf << EOF
overlay
br_netfilter
EOF
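The settings in /etc/sysctl.d/k8s.conf are only picked up at the next boot unless applied now; a minimal sketch (plain modprobe/sysctl usage, not in the original listing):

# modprobe overlay        // overlay is listed in modules-load.d above but was not loaded yet
# sysctl --system         // re-read all sysctl.d files, including k8s.conf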
1.2.4 Install ipset and ipvsadm
# apt install ipset ipvsadm
# cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
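The ipvs modules in modules-load.d are likewise only loaded at boot; to load and verify them right away (ordinary modprobe/lsmod usage, an addition to the original steps):

# for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do modprobe "$m"; done
# lsmod | grep -e ip_vs -e nf_conntrack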
1.2.5 Disable Swap
# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation
/dev/disk/by-id/dm-uuid-LVM-aMgPZgZ6o3cHyNRGU08LFhzfZuvDoqjTrxFfUt6c3Zu3FwpXO7xWyoRZSNRaLZq1 / ext4 defaults 0 1
# /boot was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/9314b4f8-368c-4f1b-ba74-9fb759ad9270 /boot ext4 defaults 0 1
#/swap.img none swap sw 0 0

Comment out the last line, the swap entry (shown already commented above).
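Editing fstab only prevents swap from being mounted at the next boot; to turn it off in the running system and confirm (standard swapoff usage, not in the original):

# swapoff -a
# free -m        // the Swap line should show 0 total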

1.2.6 Configure /etc/hosts
// Append the following to /etc/hosts
10.0.1.11 master1
10.0.1.21 worker1
10.0.1.22 worker2
10.0.1.23 worker3
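A quick sanity check that name resolution works (only meaningful once the corresponding VMs exist; plain getent/ping usage, an addition to the original):

# getent hosts master1
# ping -c 1 master1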

2. Install the Container Runtime

2.1 Install containerd
// Installing containerd from apt also pulls in a current runc as a dependency;
// remove the apt containerd again afterwards (runc stays installed)
# apt install containerd
# apt remove containerd
// Then download cri-containerd from GitHub, which bundles crictl support.
// Downloading on the Windows host with a download manager is faster; copy the archive to the VM afterwards.
# wget https://github.com/containerd/containerd/releases/download/v1.7.14/cri-containerd-1.7.14-linux-amd64.tar.gz
// Unpack it over the root filesystem
# tar xvf cri-containerd-1.7.14-linux-amd64.tar.gz -C /
// Generate the default configuration
# mkdir /etc/containerd
# containerd config default > /etc/containerd/config.toml
// Around line 65 of that file, change the sandbox image to the aliyun pause:3.9 image
# sandbox_image = "registry.k8s.io/pause:3.8"
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
// Around line 137, change SystemdCgroup to true
# SystemdCgroup = false
SystemdCgroup = true
// Finally, enable containerd at boot (the two edits above can also be scripted, see the sketch after this block)
# systemctl enable containerd
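If you prefer to script the two config.toml edits and restart containerd in one go, a minimal sketch (assuming the unmodified file produced by `containerd config default`; the patterns below are the stock values quoted above):

# sed -i 's#sandbox_image = "registry.k8s.io/pause:3.8"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# systemctl enable containerd
# systemctl restart containerd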

3. Build the Kubernetes Cluster

3.1 Download the Kubernetes Packages
3.1.1 Installing via snap
// Install via snap
# snap install kubeadm --classic
# snap install kubectl --classic
# snap install kubelet --classic
// Check the kubelet service status
# systemctl status snap.kubelet.daemon.service
// Rename the snap's unit so it is available as kubelet.service
# cd /etc/systemd/system
# mv snap.kubelet.daemon.service kubelet.service
# systemctl disable snap.kubelet.daemon.service
# systemctl enable kubelet.service
# reboot
// kubeadm's preflight checks also need conntrack and socat
# apt install conntrack
# apt install socat
// Shut down
# shutdown -h 0
3.1.2 Installing via apt
// Install via apt
// The community repository (pkgs.k8s.io) carries Kubernetes 1.29; the aliyun mirror also works but currently tops out at 1.28
# curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list
// Refresh the apt sources
# apt update
// Check which kubeadm versions the new source offers
# apt-cache policy kubeadm
kubeadm:
Installed: (none)
Candidate: 1.28.2-00
Version table:
1.28.2-00 500
500 https://pkgs.k8s.io/core:/stable:/v1.28/deb Packages
1.28.2-00 500
500 https://pkgs.k8s.io/core:/stable:/v1.28/deb Packages
1.28.2-00 500
500 https://pkgs.k8s.io/core:/stable:/v1.28/deb Packages
1.28.2-00 500
500 https://pkgs.k8s.io/core:/stable:/v1.28/deb Packages
// The newest version listed here is 1.28.2-00 (this output still points at the v1.28 repository; with the v1.29 repository configured above, 1.29.3 is what gets installed, as the cluster output below confirms)
// Install
# apt install kubeadm kubectl kubelet
// Hold the packages so they are not upgraded automatically
# apt-mark hold kubeadm kubectl kubelet
// Shut down
# shutdown -h 0
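Before cloning, it is worth confirming which versions actually landed (plain version queries, not part of the original listing):

# kubeadm version
# kubelet --version
# kubectl version --client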
3.2 Clone the Virtual Machines

Clone the VM in VirtualBox, name the clone k8s_master1, and change its IP address.

Clone another VM in VirtualBox, name it k8s_worker1, and change its IP address.
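On an Ubuntu 22.04 live-server install the IP address is normally set through netplan; a minimal sketch of what the static configuration could look like (the file name 00-installer-config.yaml, the interface name enp0s3, and the gateway/DNS addresses are assumptions — adjust them to your VirtualBox setup):

# cat /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    enp0s3:                          # interface name is an assumption; check with `ip a`
      addresses: [10.0.1.11/24]      # 10.0.1.21/22/23 on the workers, matching /etc/hosts
      routes:
        - to: default
          via: 10.0.1.1              # assumed gateway
      nameservers:
        addresses: [223.5.5.5]       # assumed DNS server
# netplan apply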

# hostnamectl hostname master1
// Each worker VM also needs its own hostname and IP address, and every IP/hostname pair must be in /etc/hosts
// Initialize the control plane on master1. Two candidate commands:
# kubeadm init --kubernetes-version=v1.29.3 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/16
# kubeadm init --kubernetes-version=v1.29.3 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/16 --apiserver-advertise-address=10.0.1.11
// The following one succeeded on the first try:
# kubeadm init --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/16 --apiserver-advertise-address=10.0.1.11
[init] Using Kubernetes version: v1.29.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1] and IPs [10.96.0.1 10.0.1.11]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master1] and IPs [10.0.1.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master1] and IPs [10.0.1.11 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 4.503238 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: yyjh09.6he5wfuvsgpclctr
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.1.11:6443 --token yyjh09.6he5wfuvsgpclctr \
--discovery-token-ca-cert-hash sha256:ea410f8b9757ca344212ff3e906ec9eb44f1902b5ee7a24bdb9c3fe9d8621d5a
// The installation succeeded! Check it
# kubectl get node
E0319 11:28:28.217021 8109 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0319 11:28:28.217430 8109 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0319 11:28:28.219640 8109 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0319 11:28:28.219773 8109 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0319 11:28:28.222284 8109 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
// kubectl has no kubeconfig yet, so run the commands from the success message above
# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
// Check again
# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 NotReady control-plane 11m v1.29.3
# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-857d9ff4c9-sl62g 0/1 Pending 0 12m
kube-system coredns-857d9ff4c9-z6jjq 0/1 Pending 0 12m
kube-system etcd-master1 1/1 Running 0 12m
kube-system kube-apiserver-master1 1/1 Running 0 12m
kube-system kube-controller-manager-master1 1/1 Running 0 12m
kube-system kube-proxy-5l598 1/1 Running 0 12m
kube-system kube-scheduler-master1 1/1 Running 0 12m
// On each worker node, run the join command printed at the end of the init on master1
# kubeadm join 10.0.1.11:6443 --token yyjh09.6he5wfuvsgpclctr \
--discovery-token-ca-cert-hash sha256:ea410f8b9757ca344212ff3e906ec9eb44f1902b5ee7a24bdb9c3fe9d8621d5a
// To be able to run kubectl on the worker as well, point it at the kubelet kubeconfig
# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/kubelet.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
// Check that the node has joined
# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 NotReady control-plane 91m v1.29.3
worker1 NotReady <none> 7m3s v1.29.3
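If a worker is joined more than 24 hours after the init, the bootstrap token above will have expired; kubeadm can print a fresh join command on master1 (standard kubeadm functionality, not part of the original run):

# kubeadm token create --print-join-command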

4. Set Up the Pod Network

// Install calico with helm; first check whether helm is present
# helm
Command 'helm' not found, but can be installed with:
snap install helm
// Not installed, so install it as suggested
# snap install helm
error: This revision of snap "helm" was published using classic confinement and thus may perform
arbitrary system changes outside of the security sandbox that snaps are usually confined to,
which may put your system at risk.
If you understand and want to proceed repeat the command including --classic.
root@master1:~# snap install helm --classic
helm 3.14.3 from Snapcrafters✪ installed
// Install calico following the projectcalico helm instructions:
// 1. Add the projectcalico helm repository
# helm repo add projectcalico https://projectcalico.docs.tigera.io/charts
// 2. Create the tigera-operator namespace
# kubectl create namespace tigera-operator
// 3. Install the helm chart into the tigera-operator namespace
# helm install calico projectcalico/tigera-operator --namespace tigera-operator
// Check
# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-system calico-kube-controllers-fbb8d4c9c-nqd9k 0/1 Pending 0 28s
calico-system calico-node-7v465 0/1 Init:0/2 0 28s
calico-system calico-node-dbmx9 0/1 Init:1/2 0 28s
calico-system calico-typha-8b695c9cc-v2vsf 1/1 Running 0 28s
calico-system csi-node-driver-64mpv 0/2 ContainerCreating 0 28s
calico-system csi-node-driver-q5jm5 0/2 ContainerCreating 0 28s
kube-system coredns-857d9ff4c9-sl62g 0/1 Pending 0 100m
kube-system coredns-857d9ff4c9-z6jjq 0/1 Pending 0 100m
kube-system etcd-master1 1/1 Running 0 100m
kube-system kube-apiserver-master1 1/1 Running 0 100m
kube-system kube-controller-manager-master1 1/1 Running 0 100m
kube-system kube-proxy-5l598 1/1 Running 0 100m
kube-system kube-proxy-798fq 1/1 Running 0 17m
kube-system kube-scheduler-master1 1/1 Running 0 100m
tigera-operator tigera-operator-748c69cf45-gdhdg 1/1 Running 0 39s
// Keep re-checking until all pods are in the Running state
# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-apiserver calico-apiserver-67dd77d667-4c4vf 0/1 Running 0 29s
calico-apiserver calico-apiserver-67dd77d667-8glv5 0/1 Running 0 29s
calico-system calico-kube-controllers-fbb8d4c9c-nqd9k 1/1 Running 0 2m11s
calico-system calico-node-7v465 1/1 Running 0 2m11s
calico-system calico-node-dbmx9 1/1 Running 0 2m11s
calico-system calico-typha-8b695c9cc-v2vsf 1/1 Running 0 2m11s
calico-system csi-node-driver-64mpv 2/2 Running 0 2m11s
calico-system csi-node-driver-q5jm5 2/2 Running 0 2m11s
kube-system coredns-857d9ff4c9-sl62g 1/1 Running 0 102m
kube-system coredns-857d9ff4c9-z6jjq 1/1 Running 0 102m
kube-system etcd-master1 1/1 Running 0 102m
kube-system kube-apiserver-master1 1/1 Running 0 102m
kube-system kube-controller-manager-master1 1/1 Running 0 102m
kube-system kube-proxy-5l598 1/1 Running 0 102m
kube-system kube-proxy-798fq 1/1 Running 0 18m
kube-system kube-scheduler-master1 1/1 Running 0 102m
tigera-operator tigera-operator-748c69cf45-gdhdg 1/1 Running 0 2m22s
// Check node status
# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane 102m v1.29.3
worker1 Ready <none> 18m v1.29.3
// worker1's role label shows as <none>; set it to worker
# kubectl label node worker1 node-role.kubernetes.io/worker=worker
node/worker1 labeled
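If the remaining workers from /etc/hosts (worker2, worker3) are joined the same way, the role label can be applied to all of them in one pass (plain kubectl usage, an addition to the original; adjust the list to the nodes that have actually joined):

# for n in worker1 worker2 worker3; do kubectl label node "$n" node-role.kubernetes.io/worker=worker --overwrite; done
# kubectl get node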

5. Testing and Monitoring

5.1 Deploy nginx for Testing

Write nginx.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxweb
spec:
  selector:
    matchLabels:
      app: nginxweb1
  replicas: 2
  template:
    metadata:
      labels:
        app: nginxweb1
    spec:
      containers:
      - name: nginxwebc
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginxweb-service
spec:
  externalTrafficPolicy: Cluster
  selector:
    app: nginxweb1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080
  type: NodePort
// Clean up any previous attempt, then apply the manifest
# kubectl delete -f nginx.yaml
deployment.apps "nginxweb" deleted
service "nginxweb-service" deleted
# kubectl apply -f nginx.yaml
# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/nginxweb-64c569cccc-rj47x 1/1 Running 0 2m59s
pod/nginxweb-64c569cccc-wppsh 1/1 Running 0 2m59s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h13m
service/nginxweb-service NodePort 10.96.240.49 <none> 80:30080/TCP 2m59s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginxweb 2/2 2 2 2m59s
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginxweb-64c569cccc 2 2 2 2m59s
# curl 10.96.240.49
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
// Or open http://10.0.1.11:30080 in a browser on the Windows host; it renders the same page:
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
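Two further quick checks that can be useful here (ordinary kubectl/curl usage, not in the original; 10.0.1.21 is worker1's address from /etc/hosts):

# kubectl get endpoints nginxweb-service      // should list two pod IPs behind the service
# curl http://10.0.1.21:30080                 // the NodePort answers on the worker nodes as well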
5.2 Install the Dashboard
# helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
"kubernetes-dashboard" has been added to your repositories
root@master1:~/test# helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
Release "kubernetes-dashboard" does not exist. Installing it now.
NAME: kubernetes-dashboard
LAST DEPLOYED: Wed Mar 20 08:08:32 2024
NAMESPACE: kubernetes-dashboard
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
*************************************************************************************************
*** PLEASE BE PATIENT: Kubernetes Dashboard may need a few minutes to get up and become ready ***
*************************************************************************************************
Congratulations! You have just installed Kubernetes Dashboard in your cluster.
To access Dashboard run:
kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
NOTE: In case port-forward command does not work, make sure that kong service name is correct.
Check the services in Kubernetes Dashboard namespace using:
kubectl -n kubernetes-dashboard get svc
Dashboard will be available at:
https://localhost:8443
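The dashboard login screen asks for a bearer token. A minimal sketch of creating one (the admin-user name and the cluster-admin binding follow the upstream dashboard documentation and are broad; scope them down for anything beyond a lab cluster):

# kubectl -n kubernetes-dashboard create serviceaccount admin-user
# kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:admin-user
# kubectl -n kubernetes-dashboard create token admin-user     // prints the token to paste into the login page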

The installation above did not work well for me; the one below succeeded on the first try!

In the end, the dashboard's management UI comes up.

That completes the setup. Thanks for reading!
