
A Detailed Guide to Installing and Configuring Kubernetes (K8S) on CentOS 8


I. Environment Preparation

1. Remove podman

CentOS 8 comes with the podman container runtime preinstalled. It can conflict with Docker, so remove it first:

sudo yum remove podman

2. Disable swap

  • Temporarily:
sudo swapoff -a
  • Permanently (comment out the swap entry in /etc/fstab; you can verify the result with the check shown below):
sudo sed -i 's/.*swap.*/#&/' /etc/fstab

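To confirm swap really is off, a quick sanity check (not part of the original write-up):

swapon --show           # no output means no active swap devices
free -h | grep -i swap  # the Swap line should read 0B across the board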

3. Disable SELinux

  • Temporarily:
setenforce 0
  • Permanently (takes effect after a reboot; see the check below):
sudo sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

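To check the current SELinux state (again just a quick sanity check, not from the original article):

getenforce             # Permissive or Disabled is what we want here
sestatus | head -n 3   # also shows the mode configured in /etc/selinux/config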

4. Disable the firewall

sudo systemctl stop firewalld.service
sudo systemctl disable firewalld.service
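You can confirm firewalld is stopped and will stay off after a reboot (a quick check, not spelled out in the original):

systemctl is-active firewalld    # should print "inactive"
systemctl is-enabled firewalld   # should print "disabled"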

II. Install K8S

1. Configure the base package repository

sudo curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo

2. Add the K8S repository

Save the following content to /etc/yum.repos.d/kubernetes.repo:

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

At the time of writing, the Aliyun mirror did not yet provide a CentOS 8 Kubernetes repository, but the CentOS 7 packages work fine, which is why kubernetes-el7-x86_64 is used above. If a CentOS 8 repository becomes available, it would be kubernetes-el8-x86_64.
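After adding the repo, it can be useful to refresh the yum metadata and see which versions the mirror actually offers (a small extra step, not in the original; the version list will vary over time):

sudo yum clean all
sudo yum makecache
yum list --showduplicates kubeadm   # lists every kubeadm version available from the repo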

3. Install Docker

sudo yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo yum -y install docker-ce

If the install fails with an error like this:

Error:
 Problem: package docker-ce-3:19.03.8-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package containerd.io-1.2.10-3.2.el7.x86_64 is excluded
  - package containerd.io-1.2.13-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.2-3.3.el7.x86_64 is excluded
  - package containerd.io-1.2.2-3.el7.x86_64 is excluded
  - package containerd.io-1.2.4-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.5-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.6-3.3.el7.x86_64 is excluded
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

it can be worked around by installing containerd.io directly from Docker's own repository:

wget https://download.docker.com/linux/centos/7/x86_64/edge/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
sudo yum install containerd.io-1.2.6-3.3.el7.x86_64.rpm
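Once containerd.io is in place, you can confirm it and retry the docker-ce install (a quick check, implied but not spelled out in the original):

rpm -q containerd.io           # should print the installed containerd.io version
sudo yum -y install docker-ce  # retry now that the dependency is satisfied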

To speed up image pulls, you can configure an Aliyun registry mirror for Docker:

sudo mkdir -p /etc/docker
sudo vim /etc/docker/daemon.json

Set its content to:

{
   "registry-mirrors" : ["https://mj9kvemk.mirror.aliyuncs.com"]
}
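Restart Docker so the mirror setting takes effect and check that it shows up (daemon.json is only read on startup; this step is not explicit in the original):

sudo systemctl restart docker
docker info | grep -A1 'Registry Mirrors'   # the Aliyun mirror URL should be listed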

4. Install kubectl, kubelet, and kubeadm

Install kubectl, kubelet, and kubeadm, enable kubelet at boot, and start it:

sudo yum install -y kubectl kubelet kubeadm
sudo systemctl enable kubelet
sudo systemctl start kubelet
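This installs the newest version in the repo. If you need a specific version instead (for example, to match the image tags available in your registry mirror, an issue we run into below), the packages can be pinned (a sketch; adjust the version string as needed):

sudo yum install -y kubelet-1.18.2 kubeadm-1.18.2 kubectl-1.18.2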

Check the installed versions:

kubeadm version
kubectl version --client
kubelet --version

Here the installed kubelet version is 1.18.5.

5. Initialize the Kubernetes cluster

kubeadm init --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=127.0.0.1 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.5 --pod-network-cidr=10.18.0.0/16

Running it surfaced the following warning:

[root@localhost admin]# kubeadm init --apiserver-advertise-address=0.0.0.0 \
--apiserver-cert-extra-sans=127.0.0.1 \
--image-repository=registry.aliyuncs.com/google_containers \
--ignore-preflight-errors=all \
--kubernetes-version=v1.18.5 \
--service-cidr=10.10.0.0/16 \
--pod-network-cidr=10.18.0.0/16
W0702 16:23:11.951553   16395 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.5
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
  • [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
    This warning appears because Docker's cgroup driver differs from kubelet's; here we change Docker's driver to match kubelet's.

Check Docker's current cgroup driver:

docker info | grep Cgroup

The driver is currently cgroupfs and needs to be changed to systemd.
Edit /usr/lib/systemd/system/docker.service and append the following option to the ExecStart line:

--exec-opt native.cgroupdriver=systemd

Then reload systemd, restart Docker, and check again; the driver should now be systemd:

systemctl daemon-reload
systemctl restart docker
docker info | grep Cgroup
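As an alternative to editing the unit file (not the route taken in this article), the same setting can live in /etc/docker/daemon.json alongside the registry mirror, which survives Docker package upgrades; a sketch:

sudo tee /etc/docker/daemon.json <<'EOF'
{
   "registry-mirrors" : ["https://mj9kvemk.mirror.aliyuncs.com"],
   "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker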

Now run the init command again:

kubeadm init --apiserver-advertise-address=0.0.0.0 \
--apiserver-cert-extra-sans=127.0.0.1 \
--image-repository=registry.aliyuncs.com/google_containers \
--ignore-preflight-errors=all \
--kubernetes-version=v1.18.5 \
--service-cidr=10.10.0.0/16 \
--pod-network-cidr=10.18.0.0/16

kubeadm pulls its images from k8s.gcr.io by default, which is not reachable from mainland China, so --image-repository is used to point it at the Aliyun registry.
Even so, the init still ran into problems:

kubeadm init --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=127.0.0.1 --image-repository=registry.aliyuncs.com/google_containers --ignore-preflight-errors=all --kubernetes-version=v1.18.5 --service-cidr=10.10.0.0/16 --pod-network-cidr=10.18.0.0/16
W0702 17:47:00.104450   61229 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.5
[preflight] Running pre-flight checks
	[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
	[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
	[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
	[WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING Port-2379]: Port 2379 is in use
	[WARNING Port-2380]: Port 2380 is in use
	[WARNING DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[WARNING ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.5: output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.5 not found: manifest unknown: manifest unknown
, error: exit status 1
	[WARNING ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.5: output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.5 not found: manifest unknown: manifest unknown
, error: exit status 1
	[WARNING ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.5: output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.5 not found: manifest unknown: manifest unknown
, error: exit status 1
	[WARNING ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-proxy:v1.18.5: output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/kube-proxy:v1.18.5 not found: manifest unknown: manifest unknown
, error: exit status 1
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0702 17:47:17.258210   61229 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0702 17:47:17.261156   61229 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

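The ImagePull warnings above suggest the v1.18.5 tags simply are not present in the mirror. One way to confirm this before re-running init is to pre-pull the images for a candidate version; and if you prefer a clean slate over --ignore-preflight-errors, the leftover state from the failed attempt can be wiped with kubeadm reset. Neither step is in the original article, just a sketch:

# check/pre-pull the images for a candidate version
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.2
# optional: wipe the half-initialized state left by the previous attempt
# sudo kubeadm reset -f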

The Aliyun registry apparently did not yet have the 1.18.5 image tags, so switch to 1.18.2 and try again:

kubeadm init --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=127.0.0.1 --image-repository=registry.aliyuncs.com/google_containers --ignore-preflight-errors=all --kubernetes-version=v1.18.2 --service-cidr=10.10.0.0/16 --pod-network-cidr=10.18.0.0/16

This time it succeeded:

[root@localhost admin]# kubeadm init --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=127.0.0.1 --image-repository=registry.aliyuncs.com/google_containers --ignore-preflight-errors=all --kubernetes-version=v1.18.2 --service-cidr=10.10.0.0/16 --pod-network-cidr=10.18.0.0/16
W0702 17:47:41.592876   62703 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
	[WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
	[WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
	[WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
	[WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING Port-2379]: Port 2379 is in use
	[WARNING Port-2380]: Port 2380 is in use
	[WARNING DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0702 17:49:34.509168   62703 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0702 17:49:34.510843   62703 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.003628 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: l21jwf.pjzezg1xmopqoj0p
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.134:6443 --token l21jwf.pjzezg1xmopqoj0p \
    --discovery-token-ca-cert-hash sha256:0b1062f2ec73f8c35c1bfecd857a287b128aba7ae5c0673ea604c9ac7c296a95 


Run the commands from the success message:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Then check the node and pods:

kubectl get node
kubectl get pod --all-namespaces

The node shows NotReady because the coredns pods have not started yet; the cluster is still missing a pod network add-on.

6. Install the Calico network add-on

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Check again after a short while and the node is in the Ready state.
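Instead of polling by hand, you can watch the kube-system pods until calico and coredns are Running and the node flips to Ready (a small convenience, not in the original):

kubectl get pods -n kube-system -w   # Ctrl-C once calico-* and coredns-* are Running
kubectl get nodes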

III. Install kubernetes-dashboard

Download the official recommended.yaml with the command below. If https://raw.githubusercontent.com is not reachable from your network, open the link in a browser, copy the content, and save it manually.

wget https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml

The official manifest does not expose the dashboard via NodePort, so edit the file and add the two lines marked below to the Service:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # add this line
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000    # add this line
  selector:
    k8s-app: kubernetes-dashboard
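If you would rather not edit the manifest, an alternative (not what this article does) is to apply the stock recommended.yaml and switch the Service to NodePort afterwards with a patch; 30000 is just the example port used here:

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
  -p '{"spec":{"type":"NodePort","ports":[{"port":443,"targetPort":8443,"nodePort":30000}]}}'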

Then create the resources and check the kubernetes-dashboard Service:

kubectl create -f recommended.yaml
kubectl get svc -n kubernetes-dashboard

You can now browse to https://192.168.1.134:30000/#/login, where the IP is the host's own IP.

There are two ways to log in:

  • Log in with a token
  • Log in with a kubeconfig file

Logging in with a token

1. Create a service account

kubectl create sa dashboard-admin -n kube-system


2. Grant the account cluster-admin access

kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin


3. Retrieve the token

ADMIN_SECRET=$(kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}')
DASHBOARD_LOGIN_TOKEN=$(kubectl describe secret -n kube-system ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}')
echo ${DASHBOARD_LOGIN_TOKEN}


4. Log in

Paste the token into the login page to reach the dashboard.
Note that the token is valid for 24 hours by default; once it expires, a new one has to be generated.

With kubernetes-dashboard in place, applications can be deployed and monitored visually.

Commonly used kubeadm token commands (these bootstrap tokens are used for joining nodes and are separate from the dashboard login token):

  • List tokens:
kubeadm token list
  • Create a token:
kubeadm token create
  • Delete a token:
kubeadm token delete TokenXXX
  • Print the command a worker node runs to join the cluster:
kubeadm token create --print-join-command

Or, to create a token that never expires (--ttl=0) and print its join command:

token=$(kubeadm token generate)
kubeadm token create $token --print-join-command --ttl=0

IV. Deploying Containers from the Dashboard

Using nginx and MySQL as examples, this section shows how to deploy containers through the K8S dashboard.

1. Example: deploying nginx

Work through the deployment form in the dashboard's web UI. After clicking "Deploy", wait a moment; once all the status indicators turn green, the deployment has succeeded.
At this step you may run into the error:

0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

This is because, for safety, K8s by default does not schedule pods on the master node. Run the following in the console to remove the master taint:

kubectl taint nodes --all node-role.kubernetes.io/master-
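You can confirm the taint is gone before retrying the deployment (a quick check, not in the original):

kubectl describe nodes | grep -i taints   # should show "Taints: <none>"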

Another error that may appear is:

Back-off restarting failed container

If so, add the following command after the image declaration in the Deployment:
command: [ "/bin/bash", "-ce", "tail -f /dev/null" ]
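For reference, this is what that override looks like in a plain Deployment manifest applied from the shell (a sketch, not taken from the article; the name and image are just examples). Keep in mind that overriding the command replaces the image's default entrypoint, so it is mainly a debugging aid:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-debug          # example name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-debug
  template:
    metadata:
      labels:
        app: nginx-debug
    spec:
      containers:
        - name: nginx
          image: nginx
          # the workaround from above: keep the container alive even if the
          # default entrypoint keeps crashing
          command: ["/bin/bash", "-ce", "tail -f /dev/null"]
EOF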
Once the deployment shows as successful, find the "Services" section further down the page to see the port exposed externally; nginx can then be accessed on that port. Seeing the nginx welcome page confirms that everything works.

2. Example: deploying MySQL

This works much like the nginx deployment, except that the MySQL container needs an initial root password, which is set under "Advanced options". There, add an environment variable:
Name: MYSQL_ROOT_PASSWORD
Value: 123456
This sets the password of MySQL's root account.
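For reference, the same deployment can also be done from the command line (a sketch, not part of the original article; the image tag, password, and Service settings are just examples):

kubectl create deployment mysql --image=mysql:5.7
kubectl set env deployment/mysql MYSQL_ROOT_PASSWORD=123456   # same env var as in the dashboard form
kubectl expose deployment mysql --type=NodePort --port=3306   # expose it for external access
kubectl get svc mysql                                         # note the mapped NodePort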
Once the deployment succeeds, try connecting to MySQL from inside the cluster; the connection works. To test external access, first look up the externally exposed port (the NodePort) under Services, then connect to it from outside using that port.

V. Deleting Containers

To delete a deployed container, click the menu button for the workload in the dashboard and choose Delete from the pop-up menu.
