kubeadm is a tool released by the official community for quickly deploying a Kubernetes cluster. With it, a cluster can be brought up with two commands:
Create a Master node: $ kubeadm init
Join a Node to the current cluster: $ kubeadm join
The KubeSphere release chosen here is the latest, 3.2.0, and the matching Kubernetes version is 1.21.3 (support for 1.22.x is still experimental, so it is not used).
Prepare and configure three virtual machines according to the prerequisites. Here a minimal CentOS system is created with VMware and then cloned.
Node name | IP address | Role |
---|---|---|
k8s-node1 | 192.168.63.10 | Master node |
k8s-node2 | 192.168.63.11 | Worker node |
k8s-node3 | 192.168.63.12 | Worker node |
1. Enable root password login over SSH (usually no change is needed)
vi /etc/ssh/sshd_config, set PasswordAuthentication to yes, then restart the service: service sshd restart
2. Set the host IP address
vi /etc/sysconfig/network-scripts/ifcfg-ens33
3. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
4. Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
5. Disable swap
swapoff -a # temporary (until reboot)
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent
free -g # verify: swap must be 0
6. Add hostname-to-IP mappings
vi /etc/hosts
192.168.63.10 k8s-node1
192.168.63.11 k8s-node2
192.168.63.12 k8s-node3
7. Set the hostname
vi /etc/hostname
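As an alternative to editing /etc/hostname (not part of the original steps), hostnamectl sets the hostname immediately; run the matching command on each node:
hostnamectl set-hostname k8s-node1   # use k8s-node2 / k8s-node3 on the other machines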
8. Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
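If sysctl --system reports that the net.bridge keys do not exist, the br_netfilter kernel module is probably not loaded yet; loading it first is a common extra step (not in the original):
modprobe br_netfilter
lsmod | grep br_netfilter   # verify the module is loaded
sysctl --system             # re-apply the settings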
The following Aliyun mirror configuration for the Kubernetes yum repository can be used as a reference:
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum list|grep kube
yum install -y kubelet-1.21.3 kubeadm-1.21.3 kubectl-1.21.3
systemctl enable kubelet
systemctl start kubelet
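An optional sanity check that the expected versions were installed on each node:
kubeadm version -o short    # should print v1.21.3
kubelet --version           # should print Kubernetes v1.21.3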
kubeadm init \
  --apiserver-advertise-address=<master-IP> \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version v1.21.3 \
  --service-cidr=10.125.0.0/16 \
  --pod-network-cidr=10.150.0.0/16
- <master-IP>: the IP address of the node to be initialized as the master
- image-repository: the registry mirror used to pull the control-plane images
- service-cidr: the address range used for Services inside the cluster
- pod-network-cidr: the address range used for pod-to-pod communication inside the cluster
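For reference, using the node table above, the concrete command run on k8s-node1 would look like this sketch (only the advertise address is filled in; the CIDR ranges are kept as above):
kubeadm init \
  --apiserver-advertise-address=192.168.63.10 \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version v1.21.3 \
  --service-cidr=10.125.0.0/16 \
  --pod-network-cidr=10.150.0.0/16
After the command completes, output similar to the following is printed: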
... (earlier output omitted)
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.63.10:6443 --token ii44ya.n4ryb3yka0q09fq3 \
    --discovery-token-ca-cert-hash sha256:67318db78eef549400d515ed239ca3dbf85d5195e4ba6c13b61854f497278b39 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.63.10:6443 --token ii44ya.n4ryb3yka0q09fq3 \
    --discovery-token-ca-cert-hash sha256:67318db78eef549400d515ed239ca3dbf85d5195e4ba6c13b61854f497278b39
Copy the generated join command and token in advance; by default the token is valid for 24 hours. If it has expired, a new one can be generated with:
kubeadm token create --ttl 0 --print-join-command
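Existing tokens and their expiry can be listed at any time (an optional check, not part of the original steps):
kubeadm token list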
Following the prompt, run the commands below:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
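At this point kubectl can reach the cluster, but the master node will report NotReady until a pod network add-on is installed:
kubectl get nodes    # STATUS shows NotReady before the network add-on is applied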
Flannel is a very simple overlay network that satisfies the Kubernetes networking requirements; it is used here as the cluster's network add-on.
kubectl apply -f \
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The command above will most likely not succeed: the pod status will show Init:ImagePullBackOff, because the required image quay.io/coreos/flannel:v0.12.0-amd64 is hosted on quay.io, which is not reachable from mainland China, so the pull fails. To install Flannel successfully, load the image manually:
Download the Flannel image archive from https://github.com/coreos/flannel/releases
Import the image: docker load < flanneld-v0.12.0-amd64.docker
After a short wait Kubernetes retries the installation automatically; once the Flannel pod status turns Running, the installation has succeeded (see the check below).
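To confirm the image was imported on a node before the pod retries (an optional check):
docker images | grep flannel    # should list quay.io/coreos/flannel with tag v0.12.0-amd64
Once the image is present, the kube-system pod list should eventually look like the output below.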
[root@k8s-node1 ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6f6b8cc4f6-qdkzr 1/1 Running 1 22h
coredns-6f6b8cc4f6-shkd9 1/1 Running 1 22h
etcd-k8s-node1 1/1 Running 1 22h
kube-apiserver-k8s-node1 1/1 Running 1 22h
kube-controller-manager-k8s-node1 1/1 Running 1 22h
kube-flannel-ds-amd64-8vhz7 1/1 Running 1 20h
Run the following command on each of the other two machines and wait for the nodes to join the cluster.
kubeadm join 192.168.63.10:6443 --token ii44ya.n4ryb3yka0q09fq3 \
--discovery-token-ca-cert-hash sha256:67318db78eef549400d515ed239ca3dbf85d5195e4ba6c13b61854f497278b39
After a few minutes, once all node statuses are Ready, the cluster has been set up successfully. The join output looks similar to the following:
[preflight] Running pre-flight checks
... (log output of join workflow) ...
Node join complete:
* Certificate signing request sent to control-plane and response received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on control-plane to see this machine join.
PS: If joining the cluster fails, run
kubeadm reset
to reset the installation state, then re-run the join command above.
[root@k8s-node1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready control-plane,master 21h v1.21.3
k8s-node2 Ready <none> 20h v1.21.3
k8s-node3 Ready <none> 20h v1.21.3
At this point the Kubernetes cluster is installed; a simple application such as nginx or tomcat can be deployed to test it, as sketched below.
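For example, a minimal nginx smoke test could look like the following sketch; the NodePort value is assigned by the cluster and has to be read from the service listing:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc                      # find the NodePort mapped to port 80
curl http://192.168.63.10:<NodePort>     # replace <NodePort> with the port shown above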
Next, install Helm (v2). The official one-line installer is:
curl -L https://git.io/get_helm.sh | bash
However, this command is also likely to fail for network reasons, so Helm is installed manually instead.
Download the Helm tarball, unpack it, and move the binary into place:
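If the tarball is not already at hand, it can be fetched from the Helm release host (assuming it is reachable from your network):
wget https://get.helm.sh/helm-v2.16.3-linux-amd64.tar.gz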
tar -zxvf helm-v2.16.3-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
helm version
If output like the following appears, the installation succeeded:
Client: &version.Version{SemVer:"v2.16.3", GitCommit:"1ee0254c86d4ed6887327dabed7aa7da29d7eb0d", GitTreeState:"clean"}
Create the RBAC file helm-rbac.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
Apply it: kubectl apply -f helm-rbac.yaml
Install Tiller:
helm init --service-account tiller --upgrade --tiller-image=jessestuart/tiller:v2.16.3 --history-max 300 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
Verify the version: helm version
Client: &version.Version{SemVer:"v2.16.3", GitCommit:"1ee0254c86d4ed6887327dabed7aa7da29d7eb0d", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.3", GitCommit:"1ee0254c86d4ed6887327dabed7aa7da29d7eb0d", GitTreeState:"clean"}
Before installing OpenEBS, check whether the master node carries a Taint and remove it, so that OpenEBS pods can be scheduled onto the master:
kubectl describe node k8s-node1 | grep Taint
kubectl taint nodes k8s-node1 node-role.kubernetes.io/master:NoSchedule-
kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.5.0.yaml
As before, the command itself executes successfully, but checking the pod statuses shows that none of the images can be downloaded because of the network restrictions. They have to be installed manually, as follows:
Download the required Docker images (provided in the attachment)
Load the images on all three nodes
docker load < xx.tar
Modify the yaml file (the one in the appendix needs no changes)
Delete the failed deployments and the daemonset
kubectl delete deployment maya-apiserver openebs-admission-server openebs-localpv-provisioner openebs-provisioner openebs-snapshot-operator openebs-ndm-operator -n openebs
kubectl delete daemonset openebs-ndm -n openebs
Re-run kubectl apply -f openebs-operator-1.5.0.yaml and wait for the installation to succeed.
If a pod fails to install, it can be inspected with kubectl describe pod [podname] -n [namespace].
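Two more commands that are often useful while debugging (optional, not from the original steps):
kubectl get pods -n openebs            # overall pod status in the openebs namespace
kubectl logs <podname> -n openebs      # container logs of a failing pod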
Check the result: kubectl get sc -n openebs
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
openebs-device openebs.io/local Delete WaitForFirstConsumer false 12h
openebs-hostpath (default) openebs.io/local Delete WaitForFirstConsumer false 12h
openebs-jiva-default openebs.io/provisioner-iscsi Delete Immediate false 12h
openebs-snapshot-promoter volumesnapshot.external-storage.k8s.io/snapshot-promoter Delete Immediate false 12h
Set openebs-hostpath as the default StorageClass:
kubectl patch storageclass openebs-hostpath -p \
'{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
At this point, OpenEBS LocalPV has been created as the default storage class. Since the master node's Taint was removed manually earlier, it can be added back now that OpenEBS is installed, so that business workloads are not scheduled onto the master and do not compete for its resources:
kubectl taint nodes k8s-node1 node-role.kubernetes.io/master=:NoSchedule
Install KubeSphere v3.2.0 with the minimal installation manifests:
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.0/cluster-configuration.yaml
Watch the installer logs until the installation finishes:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################
Console: http://192.168.63.10:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After logging into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are ready.
  2. Please modify the default password after login.
#####################################################
https://kubesphere.io             20xx-xx-xx xx:xx:xx
#####################################################
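Before logging in, it can be worth confirming (an optional check) that all KubeSphere components are Running:
kubectl get pod --all-namespaces
Appendix: openebs-operator-1.5.0.yaml (the manifest referenced above, reproduced in full below).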
# This manifest deploys the OpenEBS control plane components, with associated CRs & RBAC rules # NOTE: On GKE, deploy the openebs-operator.yaml in admin context # Create the OpenEBS namespace apiVersion: v1 kind: Namespace metadata: name: openebs --- # Create Maya Service Account apiVersion: v1 kind: ServiceAccount metadata: name: openebs-maya-operator namespace: openebs --- # Define Role that allows operations on K8s pods/deployments kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: openebs-maya-operator rules: - apiGroups: ["*"] resources: ["nodes", "nodes/proxy"] verbs: ["*"] - apiGroups: ["*"] resources: ["namespaces", "services", "pods", "pods/exec", "deployments", "deployments/finalizers", "replicationcontrollers", "replicasets", "events", "endpoints", "configmaps", "secrets", "jobs", "cronjobs"] verbs: ["*"] - apiGroups: ["*"] resources: ["statefulsets", "daemonsets"] verbs: ["*"] - apiGroups: ["*"] resources: ["resourcequotas", "limitranges"] verbs: ["list", "watch"] - apiGroups: ["*"] resources: ["ingresses", "horizontalpodautoscalers", "verticalpodautoscalers", "poddisruptionbudgets", "certificatesigningrequests"] verbs: ["list", "watch"] - apiGroups: ["*"] resources: ["storageclasses", "persistentvolumeclaims", "persistentvolumes"] verbs: ["*"] - apiGroups: ["volumesnapshot.external-storage.k8s.io"] resources: ["volumesnapshots", "volumesnapshotdatas"] verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] - apiGroups: ["apiextensions.k8s.io"] resources: ["customresourcedefinitions"] verbs: [ "get", "list", "create", "update", "delete", "patch"] - apiGroups: ["*"] resources: [ "disks", "blockdevices", "blockdeviceclaims"] verbs: ["*" ] - apiGroups: ["*"] resources: [ "cstorpoolclusters", "storagepoolclaims", "storagepoolclaims/finalizers", "cstorpoolclusters/finalizers", "storagepools"] verbs: ["*" ] - apiGroups: ["*"] resources: [ "castemplates", "runtasks"] verbs: ["*" ] - apiGroups: ["*"] resources: [ "cstorpools", "cstorpools/finalizers", "cstorvolumereplicas", "cstorvolumes", "cstorvolumeclaims"] verbs: ["*" ] - apiGroups: ["*"] resources: [ "cstorpoolinstances", "cstorpoolinstances/finalizers"] verbs: ["*" ] - apiGroups: ["*"] resources: [ "cstorbackups", "cstorrestores", "cstorcompletedbackups"] verbs: ["*" ] - apiGroups: ["coordination.k8s.io"] resources: ["leases"] verbs: ["get", "watch", "list", "delete", "update", "create"] - apiGroups: ["admissionregistration.k8s.io"] resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"] verbs: ["get", "create", "list", "delete", "update", "patch"] - nonResourceURLs: ["/metrics"] verbs: ["get"] - apiGroups: ["*"] resources: [ "upgradetasks"] verbs: ["*" ] --- # Bind the Service Account with the Role Privileges. 
# TODO: Check if default account also needs to be there kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: openebs-maya-operator subjects: - kind: ServiceAccount name: openebs-maya-operator namespace: openebs roleRef: kind: ClusterRole name: openebs-maya-operator apiGroup: rbac.authorization.k8s.io --- apiVersion: apps/v1 kind: Deployment metadata: name: maya-apiserver namespace: openebs labels: name: maya-apiserver openebs.io/component-name: maya-apiserver openebs.io/version: 1.5.0 spec: selector: matchLabels: name: maya-apiserver openebs.io/component-name: maya-apiserver replicas: 1 strategy: type: Recreate rollingUpdate: null template: metadata: labels: name: maya-apiserver openebs.io/component-name: maya-apiserver openebs.io/version: 1.5.0 spec: serviceAccountName: openebs-maya-operator containers: - name: maya-apiserver imagePullPolicy: IfNotPresent image: openebs/m-apiserver:1.5.0 ports: - containerPort: 5656 env: # OPENEBS_IO_KUBE_CONFIG enables maya api service to connect to K8s # based on this config. This is ignored if empty. # This is supported for maya api server version 0.5.2 onwards #- name: OPENEBS_IO_KUBE_CONFIG # value: "/home/ubuntu/.kube/config" # OPENEBS_IO_K8S_MASTER enables maya api service to connect to K8s # based on this address. This is ignored if empty. # This is supported for maya api server version 0.5.2 onwards #- name: OPENEBS_IO_K8S_MASTER # value: "http://172.28.128.3:8080" # OPENEBS_NAMESPACE provides the namespace of this deployment as an # environment variable - name: OPENEBS_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace # OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as # environment variable - name: OPENEBS_SERVICE_ACCOUNT valueFrom: fieldRef: fieldPath: spec.serviceAccountName # OPENEBS_MAYA_POD_NAME provides the name of this pod as # environment variable - name: OPENEBS_MAYA_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name # If OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG is false then OpenEBS default # storageclass and storagepool will not be created. - name: OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG value: "true" # OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL decides whether default cstor sparse pool should be # configured as a part of openebs installation. # If "true" a default cstor sparse pool will be configured, if "false" it will not be configured. # This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG # is set to true - name: OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL value: "false" # OPENEBS_IO_CSTOR_TARGET_DIR can be used to specify the hostpath # to be used for saving the shared content between the side cars # of cstor volume pod. # The default path used is /var/openebs/sparse #- name: OPENEBS_IO_CSTOR_TARGET_DIR # value: "/var/openebs/sparse" # OPENEBS_IO_CSTOR_POOL_SPARSE_DIR can be used to specify the hostpath # to be used for saving the shared content between the side cars # of cstor pool pod. This ENV is also used to indicate the location # of the sparse devices. 
# The default path used is /var/openebs/sparse #- name: OPENEBS_IO_CSTOR_POOL_SPARSE_DIR # value: "/var/openebs/sparse" # OPENEBS_IO_JIVA_POOL_DIR can be used to specify the hostpath # to be used for default Jiva StoragePool loaded by OpenEBS # The default path used is /var/openebs # This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG # is set to true #- name: OPENEBS_IO_JIVA_POOL_DIR # value: "/var/openebs" # OPENEBS_IO_LOCALPV_HOSTPATH_DIR can be used to specify the hostpath # to be used for default openebs-hostpath storageclass loaded by OpenEBS # The default path used is /var/openebs/local # This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG # is set to true #- name: OPENEBS_IO_LOCALPV_HOSTPATH_DIR # value: "/var/openebs/local" - name: OPENEBS_IO_JIVA_CONTROLLER_IMAGE value: "openebs/jiva:1.5.0" - name: OPENEBS_IO_JIVA_REPLICA_IMAGE value: "openebs/jiva:1.5.0" - name: OPENEBS_IO_JIVA_REPLICA_COUNT value: "3" - name: OPENEBS_IO_CSTOR_TARGET_IMAGE value: "openebs/cstor-istgt:1.5.0" - name: OPENEBS_IO_CSTOR_POOL_IMAGE value: "openebs/cstor-pool:1.5.0" - name: OPENEBS_IO_CSTOR_POOL_MGMT_IMAGE value: "openebs/cstor-pool-mgmt:1.5.0" - name: OPENEBS_IO_CSTOR_VOLUME_MGMT_IMAGE value: "openebs/cstor-volume-mgmt:1.5.0" - name: OPENEBS_IO_VOLUME_MONITOR_IMAGE value: "openebs/m-exporter:1.5.0" - name: OPENEBS_IO_CSTOR_POOL_EXPORTER_IMAGE ################################################################################################################### value: "openebs/m-exporter:1.5.0" - name: OPENEBS_IO_HELPER_IMAGE value: "openebs/linux-utils:1.5.0" # OPENEBS_IO_ENABLE_ANALYTICS if set to true sends anonymous usage # events to Google Analytics - name: OPENEBS_IO_ENABLE_ANALYTICS value: "true" - name: OPENEBS_IO_INSTALLER_TYPE value: "openebs-operator" # OPENEBS_IO_ANALYTICS_PING_INTERVAL can be used to specify the duration (in hours) # for periodic ping events sent to Google Analytics. # Default is 24h. # Minimum is 1h. You can convert this to weekly by setting 168h #- name: OPENEBS_IO_ANALYTICS_PING_INTERVAL # value: "24h" livenessProbe: exec: command: - /usr/local/bin/mayactl - version initialDelaySeconds: 30 periodSeconds: 60 readinessProbe: exec: command: - /usr/local/bin/mayactl - version initialDelaySeconds: 30 periodSeconds: 60 --- apiVersion: v1 kind: Service metadata: name: maya-apiserver-service namespace: openebs labels: openebs.io/component-name: maya-apiserver-svc spec: ports: - name: api port: 5656 protocol: TCP targetPort: 5656 selector: name: maya-apiserver sessionAffinity: None --- apiVersion: apps/v1 kind: Deployment metadata: name: openebs-provisioner namespace: openebs labels: name: openebs-provisioner openebs.io/component-name: openebs-provisioner openebs.io/version: 1.5.0 spec: selector: matchLabels: name: openebs-provisioner openebs.io/component-name: openebs-provisioner replicas: 1 strategy: type: Recreate rollingUpdate: null template: metadata: labels: name: openebs-provisioner openebs.io/component-name: openebs-provisioner openebs.io/version: 1.5.0 spec: serviceAccountName: openebs-maya-operator containers: - name: openebs-provisioner imagePullPolicy: IfNotPresent image: openebs/openebs-k8s-provisioner:1.5.0 env: # OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s # based on this address. This is ignored if empty. 
# This is supported for openebs provisioner version 0.5.2 onwards #- name: OPENEBS_IO_K8S_MASTER # value: "http://10.128.0.12:8080" # OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s # based on this config. This is ignored if empty. # This is supported for openebs provisioner version 0.5.2 onwards #- name: OPENEBS_IO_KUBE_CONFIG # value: "/home/ubuntu/.kube/config" - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: OPENEBS_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace # OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name, # that provisioner should forward the volume create/delete requests. # If not present, "maya-apiserver-service" will be used for lookup. # This is supported for openebs provisioner version 0.5.3-RC1 onwards #- name: OPENEBS_MAYA_SERVICE_NAME # value: "maya-apiserver-apiservice" livenessProbe: exec: command: - pgrep - ".*openebs" initialDelaySeconds: 30 periodSeconds: 60 --- apiVersion: apps/v1 kind: Deployment metadata: name: openebs-snapshot-operator namespace: openebs labels: name: openebs-snapshot-operator openebs.io/component-name: openebs-snapshot-operator openebs.io/version: 1.5.0 spec: selector: matchLabels: name: openebs-snapshot-operator openebs.io/component-name: openebs-snapshot-operator replicas: 1 strategy: type: Recreate template: metadata: labels: name: openebs-snapshot-operator openebs.io/component-name: openebs-snapshot-operator openebs.io/version: 1.5.0 spec: serviceAccountName: openebs-maya-operator containers: - name: snapshot-controller image: openebs/snapshot-controller:1.5.0 imagePullPolicy: IfNotPresent env: - name: OPENEBS_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace livenessProbe: exec: command: - pgrep - ".*controller" initialDelaySeconds: 30 periodSeconds: 60 # OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name, # that snapshot controller should forward the snapshot create/delete requests. # If not present, "maya-apiserver-service" will be used for lookup. # This is supported for openebs provisioner version 0.5.3-RC1 onwards #- name: OPENEBS_MAYA_SERVICE_NAME # value: "maya-apiserver-apiservice" - name: snapshot-provisioner image: openebs/snapshot-provisioner:1.5.0 imagePullPolicy: IfNotPresent env: - name: OPENEBS_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace # OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name, # that snapshot provisioner should forward the clone create/delete requests. # If not present, "maya-apiserver-service" will be used for lookup. # This is supported for openebs provisioner version 0.5.3-RC1 onwards #- name: OPENEBS_MAYA_SERVICE_NAME # value: "maya-apiserver-apiservice" livenessProbe: exec: command: - pgrep - ".*provisioner" initialDelaySeconds: 30 periodSeconds: 60 --- # This is the node-disk-manager related config. 
# It can be used to customize the disks probes and filters apiVersion: v1 kind: ConfigMap metadata: name: openebs-ndm-config namespace: openebs labels: openebs.io/component-name: ndm-config data: # udev-probe is default or primary probe which should be enabled to run ndm # filterconfigs contails configs of filters - in their form fo include # and exclude comma separated strings node-disk-manager.config: | probeconfigs: - key: udev-probe name: udev probe state: true - key: seachest-probe name: seachest probe state: false - key: smart-probe name: smart probe state: true filterconfigs: - key: os-disk-exclude-filter name: os disk exclude filter state: true exclude: "/,/etc/hosts,/boot" - key: vendor-filter name: vendor filter state: true include: "" exclude: "CLOUDBYT,OpenEBS" - key: path-filter name: path filter state: true include: "" exclude: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md" --- apiVersion: apps/v1 kind: DaemonSet metadata: name: openebs-ndm namespace: openebs labels: name: openebs-ndm openebs.io/component-name: ndm openebs.io/version: 1.5.0 spec: selector: matchLabels: name: openebs-ndm openebs.io/component-name: ndm updateStrategy: type: RollingUpdate template: metadata: labels: name: openebs-ndm openebs.io/component-name: ndm openebs.io/version: 1.5.0 spec: # By default the node-disk-manager will be run on all kubernetes nodes # If you would like to limit this to only some nodes, say the nodes # that have storage attached, you could label those node and use # nodeSelector. # # e.g. label the storage nodes with - "openebs.io/nodegroup"="storage-node" # kubectl label node <node-name> "openebs.io/nodegroup"="storage-node" #nodeSelector: # "openebs.io/nodegroup": "storage-node" serviceAccountName: openebs-maya-operator hostNetwork: true containers: - name: node-disk-manager image: openebs/node-disk-manager-amd64:v0.4.5 imagePullPolicy: Always securityContext: privileged: true volumeMounts: - name: config mountPath: /host/node-disk-manager.config subPath: node-disk-manager.config readOnly: true - name: udev mountPath: /run/udev - name: procmount mountPath: /host/proc readOnly: true - name: sparsepath mountPath: /var/openebs/sparse env: # namespace in which NDM is installed will be passed to NDM Daemonset # as environment variable - name: NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace # pass hostname as env variable using downward API to the NDM container - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName # specify the directory where the sparse files need to be created. # if not specified, then sparse files will not be created. - name: SPARSE_FILE_DIR value: "/var/openebs/sparse" # Size(bytes) of the sparse file to be created. 
- name: SPARSE_FILE_SIZE value: "10737418240" # Specify the number of sparse files to be created - name: SPARSE_FILE_COUNT value: "0" livenessProbe: exec: command: - pgrep - ".*ndm" initialDelaySeconds: 30 periodSeconds: 60 volumes: - name: config configMap: name: openebs-ndm-config - name: udev hostPath: path: /run/udev type: Directory # mount /proc (to access mount file of process 1 of host) inside container # to read mount-point of disks and partitions - name: procmount hostPath: path: /proc type: Directory - name: sparsepath hostPath: path: /var/openebs/sparse --- apiVersion: apps/v1 kind: Deployment metadata: name: openebs-ndm-operator namespace: openebs labels: name: openebs-ndm-operator openebs.io/component-name: ndm-operator openebs.io/version: 1.5.0 spec: selector: matchLabels: name: openebs-ndm-operator openebs.io/component-name: ndm-operator replicas: 1 strategy: type: Recreate template: metadata: labels: name: openebs-ndm-operator openebs.io/component-name: ndm-operator openebs.io/version: 1.5.0 spec: serviceAccountName: openebs-maya-operator containers: - name: node-disk-operator image: openebs/node-disk-operator-amd64:v0.4.5 imagePullPolicy: Always readinessProbe: exec: command: - stat - /tmp/operator-sdk-ready initialDelaySeconds: 4 periodSeconds: 10 failureThreshold: 1 env: - name: WATCH_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name # the service account of the ndm-operator pod - name: SERVICE_ACCOUNT valueFrom: fieldRef: fieldPath: spec.serviceAccountName - name: OPERATOR_NAME value: "node-disk-operator" - name: CLEANUP_JOB_IMAGE value: "openebs/linux-utils:1.5.0" --- apiVersion: apps/v1 kind: Deployment metadata: name: openebs-admission-server namespace: openebs labels: app: admission-webhook openebs.io/component-name: admission-webhook openebs.io/version: 1.5.0 spec: replicas: 1 strategy: type: Recreate rollingUpdate: null selector: matchLabels: app: admission-webhook template: metadata: labels: app: admission-webhook openebs.io/component-name: admission-webhook openebs.io/version: 1.5.0 spec: serviceAccountName: openebs-maya-operator containers: - name: admission-webhook image: openebs/admission-server:1.5.0 imagePullPolicy: IfNotPresent args: - -alsologtostderr - -v=2 - 2>&1 env: - name: OPENEBS_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: ADMISSION_WEBHOOK_NAME value: "openebs-admission-server" --- apiVersion: apps/v1 kind: Deployment metadata: name: openebs-localpv-provisioner namespace: openebs labels: name: openebs-localpv-provisioner openebs.io/component-name: openebs-localpv-provisioner openebs.io/version: 1.5.0 spec: selector: matchLabels: name: openebs-localpv-provisioner openebs.io/component-name: openebs-localpv-provisioner replicas: 1 strategy: type: Recreate template: metadata: labels: name: openebs-localpv-provisioner openebs.io/component-name: openebs-localpv-provisioner openebs.io/version: 1.5.0 spec: serviceAccountName: openebs-maya-operator containers: - name: openebs-provisioner-hostpath imagePullPolicy: Always image: openebs/provisioner-localpv:1.5.0 env: # OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s # based on this address. This is ignored if empty. # This is supported for openebs provisioner version 0.5.2 onwards #- name: OPENEBS_IO_K8S_MASTER # value: "http://10.128.0.12:8080" # OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s # based on this config. This is ignored if empty. 
# This is supported for openebs provisioner version 0.5.2 onwards #- name: OPENEBS_IO_KUBE_CONFIG # value: "/home/ubuntu/.kube/config" - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: OPENEBS_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace # OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as # environment variable - name: OPENEBS_SERVICE_ACCOUNT valueFrom: fieldRef: fieldPath: spec.serviceAccountName - name: OPENEBS_IO_ENABLE_ANALYTICS value: "true" - name: OPENEBS_IO_INSTALLER_TYPE value: "openebs-operator" - name: OPENEBS_IO_HELPER_IMAGE value: "openebs/linux-utils:1.5.0" livenessProbe: exec: command: - pgrep - ".*localpv" initialDelaySeconds: 30 periodSeconds: 60 ---