
Installing Harbor with Helm 3 (set up NFS, create an NFS-backed StorageClass to dynamically provision Harbor's PVCs/PVs, and expose Harbor over HTTPS via Ingress)

1. Install the NFS server

k8s-master01 (the machine providing the NFS storage):
Public IP: 47.96.252.251
Private IP: 172.30.125.104

The NFS endpoint that the later configuration will reference:

nfs:
server: 172.30.125.104
path: /data/harbor

1.1 Run on the host providing NFS storage (the master node by default)

yum install -y nfs-utils # run this on every node, master and workers

echo "/data/harbor *(insecure,rw,sync,no_root_squash)" > /etc/exports

# create the shared directories
mkdir -p /data/harbor/{chartmuseum,jobservice,registry,database,redis,trivy}

# run on the master
chmod -R 777 /data/harbor

# apply the export configuration
exportfs -r

# verify the exports are active
exportfs

systemctl enable rpcbind && systemctl start rpcbind

systemctl enable nfs && systemctl start nfs
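
As a quick sanity check (not in the original steps, but standard NFS tooling), confirm the export is registered before moving on to the clients:

rpcinfo -p | grep -E 'nfs|mountd'   # nfs and mountd should be registered with rpcbind
showmount -e localhost              # should list /data/harbor *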

1.2 Configure the NFS clients

  • Configure the NFS client on every node; 172.30.125.104 is the master's private IP address
yum install -y nfs-utils # run this on every node, master and workers

showmount -e 172.30.125.104 # verify the worker nodes can see the master's NFS exports

# The following steps mount the master's NFS directory onto a local worker directory; they are optional
# mkdir -p /data/harbor

# mount -t nfs 172.30.125.104:/data/harbor /data/harbor
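
If you do want to verify end-to-end write access from a worker, a throwaway mount test like the following works (the /mnt/nfs-test path is an arbitrary choice for illustration):

mkdir -p /mnt/nfs-test
mount -t nfs 172.30.125.104:/data/harbor /mnt/nfs-test
touch /mnt/nfs-test/write-test && rm /mnt/nfs-test/write-test   # confirms rw and no_root_squash
umount /mnt/nfs-test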

2. Add the Helm repo

Install the helm CLI.
Releases: https://github.com/helm/helm/releases

wget https://get.helm.sh/helm-v3.7.2-linux-amd64.tar.gz
tar -zxvf helm-v3.7.2-linux-amd64.tar.gz
# extracts to the linux-amd64 directory
cd linux-amd64
cp helm /usr/local/bin/
helm version
With helm installed, add the Harbor Helm repo and pull the chart package:

Releases: https://github.com/goharbor/harbor-helm/releases

helm repo add harbor https://helm.goharbor.io
helm pull harbor/harbor --version 1.6.0
# the downloaded chart package is harbor-1.6.0.tgz

tar zxvf harbor-1.6.0.tgz # extracts to a directory named harbor
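
Before editing anything, it can help to dump the chart's full default values for reference (helm show values is a standard subcommand; the output filename here is arbitrary):

helm show values harbor/harbor --version 1.6.0 > values-default.yaml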

Edit harbor/values.yaml, adjusting the fields shown below.

Note: externalURL is set to https://myharbor2.com, which will resolve to an IP inside the cluster's network.

# only the modified parameters are shown; everything else keeps the chart defaults
expose:
  type: ingress
  tls:
    ### enable HTTPS
    enabled: true
    secret:
      secretName: "myharbor2.com"
  ingress:
    hosts:
      ### Harbor access domains; the notary host must match the core host except for the first DNS label
      core: myharbor2.com
      notary: notary.myharbor2.com
    controller: default
    annotations:
      ingress.kubernetes.io/ssl-redirect: "true"
      ingress.kubernetes.io/proxy-body-size: "1024m"
      #### 如果是 traefik ingress,则按下面配置:
#      kubernetes.io/ingress.class: "traefik"
#      traefik.ingress.kubernetes.io/router.tls: 'true'
#      traefik.ingress.kubernetes.io/router.entrypoints: websecure
      #### 如果是 nginx ingress,则按下面配置:
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/proxy-body-size: "1024m"
      nginx.org/client-max-body-size: "1024m"

## if Harbor sits behind a proxy, set this to the proxy's URL; it should normally match the Ingress host configured above
externalURL: https://myharbor2.com

### persistence settings for each Harbor component; storageClass points at the class created in section 3
### the nfs-storage StorageClass (and the NFS server behind it) must exist before installing
persistence:
  enabled: true
  ### retention policy: keep the underlying data when the PVC/PV is deleted
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      storageClass: "nfs-storage"
      size: 20Gi
    chartmuseum:
      storageClass: "nfs-storage"
      size: 5Gi
    jobservice:
      storageClass: "nfs-storage"
      size: 1Gi
    database:
      storageClass: "nfs-storage"
      size: 1Gi
    redis:
      storageClass: "nfs-storage"
      size: 1Gi
    trivy:
      storageClass: "nfs-storage"
      size: 5Gi
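
After editing, rendering the chart locally catches YAML and templating mistakes before anything touches the cluster (helm lint and helm template are standard subcommands and make no cluster changes):

helm lint ./harbor
helm template myharbor2 ./harbor --namespace harbor2 | less   # inspect the rendered manifests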
3. Configure NFS-backed storage

With the NFS server installed (private IP 172.30.125.104), copy the manifest below, remembering to replace the NFS server address in spec.nfs.server and in the NFS_SERVER env var.

vim harbor-storage.yaml
## create a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: harbor-data  # must match the PROVISIONER_NAME env var in the Deployment below
parameters:
  archiveOnDelete: "true"  ## archive (rather than delete) the data when a PV is removed

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #    limits:
          #      cpu: 10m
          #    requests:
          #      cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: harbor-data
            - name: NFS_SERVER
              value: 172.30.125.104 ## your NFS server address
            - name: NFS_PATH
              value: /data/harbor  ## the directory exported by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.30.125.104
            path: /data/harbor
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
  
kubectl apply -f harbor-storage.yaml
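
Before installing Harbor, it's worth confirming that dynamic provisioning actually works. The check below is a sketch, not from the original article; the test-claim name is arbitrary, and the PVC should reach Bound within a few seconds:

kubectl get storageclass                        # nfs-storage should be listed as the default
kubectl get pods -l app=nfs-client-provisioner  # should be Running

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: nfs-storage
  resources:
    requests:
      storage: 1Mi
EOF
kubectl get pvc test-claim
kubectl delete pvc test-claim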

4. Generate self-signed certificates

1. Create the CA certificate

  • The myharbor2.com domain is used as the example throughout.
# generate the CA private key
$ openssl genrsa -out ca.key 4096
# generate the CA certificate
$ openssl req -x509 -new -nodes -sha512 -days 3650 \
 -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=myharbor2.com" \
 -key ca.key \
 -out ca.crt

2. Create the domain certificate

  • Generate the private key
openssl genrsa -out myharbor2.com.key 4096
  • Generate the certificate signing request (CSR)
openssl req -sha512 -new \
    -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=*.myharbor2.com" \
    -key myharbor2.com.key \
    -out myharbor2.com.csr
  • Generate the x509 v3 extensions file
cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1=myharbor2.com
DNS.2=*.myharbor2.com
DNS.3=hostname
EOF
  • Create the Harbor access certificate
openssl x509 -req -sha512 -days 3650 \
    -extfile v3.ext \
    -CA ca.crt -CAkey ca.key -CAcreateserial \
    -in myharbor2.com.csr \
    -out myharbor2.com.crt
  • Convert the crt to cert for Docker to use
openssl x509 -inform PEM -in myharbor2.com.crt -out myharbor2.com.cert
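
To double-check that the certificate carries the wildcard SAN before handing it to Kubernetes, plain openssl inspection works:

openssl x509 -in myharbor2.com.crt -noout -subject -dates
openssl x509 -in myharbor2.com.crt -noout -text | grep -A1 'Subject Alternative Name'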
  • The working directory should now contain the following files:
[root@k8s-master1 crt]# ll
total 32
-rw-r--r-- 1 root root 2033 Feb 24 18:21 ca.crt
-rw-r--r-- 1 root root 3243 Feb 24 18:21 ca.key
-rw-r--r-- 1 root root   17 Feb 24 18:24 ca.srl
-rw-r--r-- 1 root root 2110 Feb 24 18:21 myharbor2.com.cert
-rw-r--r-- 1 root root 2110 Feb 24 18:22 myharbor2.com.crt
-rw-r--r-- 1 root root 1708 Feb 24 18:22 myharbor2.com.csr
-rw-r--r-- 1 root root 3243 Feb 24 18:22 myharbor2.com.key
-rw-r--r-- 1 root root  269 Feb 24 18:22 v3.ext


5. Deploy Harbor

  • Create the namespace
kubectl create ns harbor2
  • Create the TLS secret
kubectl create secret tls myharbor2.com --key myharbor2.com.key --cert myharbor2.com.cert -n harbor2
kubectl get secret myharbor2.com -n harbor2
  • Option 1: install with command-line overrides on top of the unmodified harbor/values.yaml
helm install myharbor2 --namespace harbor2 ./harbor \
  --set expose.ingress.hosts.core=myharbor2.com \
  --set expose.ingress.hosts.notary=notary.harbor2.service.com \
  --set-string expose.ingress.annotations.'nginx\.org/client-max-body-size'="1024m" \
  --set expose.tls.secret.secretName=myharbor2.com \
  --set persistence.persistentVolumeClaim.registry.storageClass=nfs-storage \
  --set persistence.persistentVolumeClaim.jobservice.storageClass=nfs-storage \
  --set persistence.persistentVolumeClaim.database.storageClass=nfs-storage \
  --set persistence.persistentVolumeClaim.redis.storageClass=nfs-storage \
  --set persistence.persistentVolumeClaim.trivy.storageClass=nfs-storage \
  --set persistence.persistentVolumeClaim.chartmuseum.storageClass=nfs-storage \
  --set persistence.enabled=true \
  --set externalURL=https://myharbor2.com \
  --set harborAdminPassword=Harbor12345
  • Option 2: deploy using the harbor/values.yaml edited earlier
  • Option 2 is used here, since harbor/values.yaml is already fully configured; either approach works
helm install myharbor2 --namespace harbor2 ./harbor
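
Once the release is installed, helm can report on it while the pods come up (standard helm subcommands):

helm list -n harbor2
helm status myharbor2 -n harbor2
kubectl get pods -n harbor2 -w   # watch until everything is Running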
kubectl get ingress,all -n harbor2
[root@k8s-master1 crt]# kubectl get ingress,pod -n harbor2
NAME                                                 HOSTS                        ADDRESS   PORTS     AGE
ingress.extensions/myharbor2-harbor-ingress          myharbor2.com                          80, 443   86m
ingress.extensions/myharbor2-harbor-ingress-notary   notary.harbor2.service.com             80, 443   86m

NAME                                                  READY   STATUS    RESTARTS   AGE
pod/myharbor2-harbor-chartmuseum-7986455b69-w4j94     1/1     Running   1          67m
pod/myharbor2-harbor-core-bf48fb4d5-4ncrc             1/1     Running   3          67m
pod/myharbor2-harbor-database-0                       1/1     Running   1          86m
pod/myharbor2-harbor-jobservice-b4bbc8c59-rwb2r       1/1     Running   2          67m
pod/myharbor2-harbor-notary-server-659d575c-5drrk     1/1     Running   2          67m
pod/myharbor2-harbor-notary-signer-5bdd58f5dd-wtqp2   1/1     Running   1          67m
pod/myharbor2-harbor-portal-6596f98bd7-szg78          1/1     Running   1          86m
pod/myharbor2-harbor-redis-0                          1/1     Running   1          86m
pod/myharbor2-harbor-registry-67ffbc74b-xdfp4         2/2     Running   2          67m
pod/myharbor2-harbor-trivy-0                          1/1     Running   1          86m
[root@k8s-master1 crt]# 

For comparison, output from a separate run deploying into the default namespace:

[root@master01 ~]# helm install my-harbor ./harbor/ # optionally append --namespace harbor
[root@master01 ~]# kubectl get po
NAME                                              READY   STATUS    RESTARTS       AGE
my-harbor-harbor-chartmuseum-648ddc6cc7-f6jf7     1/1     Running   3 (38m ago)    57m
my-harbor-harbor-core-787997f69-wwm8m             1/1     Running   4 (35m ago)    57m
my-harbor-harbor-database-0                       1/1     Running   3 (38m ago)    5h36m
my-harbor-harbor-jobservice-b6c898d8b-ktb9c       1/1     Running   4 (36m ago)    57m
my-harbor-harbor-nginx-5c7999cd9f-fxqwr           1/1     Running   3 (38m ago)    150m
my-harbor-harbor-notary-server-78bd56d784-vkdzd   1/1     Running   4 (38m ago)    57m
my-harbor-harbor-notary-signer-69bbf5b848-8f45n   1/1     Running   4 (38m ago)    57m
my-harbor-harbor-portal-7f965b49cd-hmhwc          1/1     Running   3 (38m ago)    5h36m
my-harbor-harbor-redis-0                          1/1     Running   3 (38m ago)    5h36m
my-harbor-harbor-registry-f566858b6-9q7df         2/2     Running   6 (38m ago)    57m
my-harbor-harbor-trivy-0                          1/1     Running   4 (35m ago)    5h36m
nfs-client-provisioner-659758485d-brdw7           1/1     Running   18 (38m ago)   9h


6. Access configuration

  • Check the ingress-nginx controller; here it is deployed as a DaemonSet with hostNetwork: true
[root@k8s-master1 crt]# kubectl get po -n ingress-nginx -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP           NODE          NOMINATED NODE   READINESS GATES
nginx-ingress-controller-z4rw6   1/1     Running   7          7h22m   10.244.0.6   k8s-master1   <none>           <none>


[root@k8s-master1 crt]# kubectl get node k8s-master1 -o wide
NAME          STATUS   ROLES    AGE   VERSION    INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-master1   Ready    master   21h   v1.16.15   172.30.125.104   <none>        CentOS Linux 7 (Core)   3.10.0-1160.53.1.el7.x86_64   docker://20.10.12


[root@k8s-master1 crt]# kubectl get ingress -n harbor2
NAME                              HOSTS                        ADDRESS   PORTS     AGE
myharbor2-harbor-ingress          myharbor2.com                          80, 443   93m
myharbor2-harbor-ingress-notary   notary.harbor2.service.com             80, 443   93m


  • Therefore, configure /etc/hosts name resolution on every k8s node
echo 172.30.125.104  myharbor2.com >> /etc/hosts
echo 172.30.125.104  notary.harbor2.service.com >> /etc/hosts
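
A quick curl from any node then confirms the hostname routes through the ingress to Harbor's core service (the /api/v2.0/ping endpoint exists on Harbor 2.x and returns Pong; -k skips verification of the self-signed chain):

curl -k https://myharbor2.com/api/v2.0/ping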
  • Configure /etc/docker/daemon.json (for reference only; optional)
cat > /etc/docker/daemon.json << EOF
{
 "exec-opts":["native.cgroupdriver=systemd"],
 "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
 "insecure-registries": ["https://myharbor2.com"]
}
EOF
  • Restart docker
systemctl daemon-reload
systemctl restart docker
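
Rather than listing the registry as insecure, Docker can also be made to trust the self-signed CA directly. Docker reads per-registry CA certificates from /etc/docker/certs.d/<registry-host>/ (a documented Docker convention), so copying ca.crt there removes the need for the insecure-registries entry:

mkdir -p /etc/docker/certs.d/myharbor2.com
cp ca.crt /etc/docker/certs.d/myharbor2.com/ca.crt
# certs.d is re-read on each connection; no docker restart required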

7. Access Harbor from inside the cluster

  • Harbor login credentials: admin/Harbor12345
  • Log in: docker login -u admin -p Harbor12345 myharbor2.com
[root@k8s-master1 crt]# kubectl get ingress -n harbor2
NAME                              HOSTS                        ADDRESS   PORTS     AGE
myharbor2-harbor-ingress          myharbor2.com                          80, 443   96m
myharbor2-harbor-ingress-notary   notary.harbor2.service.com             80, 443   96m

[root@k8s-master1 crt]# docker login myharbor2.com
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
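
With login working, a tag-and-push round trip verifies the whole path through the ingress, core, and registry (this assumes Harbor's stock library project and uses busybox purely as an example image):

docker pull busybox:latest
docker tag busybox:latest myharbor2.com/library/busybox:test
docker push myharbor2.com/library/busybox:test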


8. Access from a browser

  • On your local Windows machine, edit the hosts file

  • C:\WINDOWS\System32\drivers\etc\hosts

  • Add: 47.96.252.251 myharbor2.com

    Note: 47.96.252.251 is the public IP of the node where ingress-nginx runs as a DaemonSet.

Appendix:

  • Diff between ./harbor/values.yaml (modified) and values.yaml (original)
[root@k8s-master1 juwei]# diff ./harbor/values.yaml values.yaml 
30c30
<       secretName: "myharbor2.com"
---
>       secretName: ""
38,39c38,39
<       core: myharbor2.com
<       notary: notary.myharbor2.domain
---
>       core: core.harbor.domain
>       notary: notary.harbor.domain
48d47
<       
50c49
<       ingress.kubernetes.io/proxy-body-size: "1024m"
---
>       ingress.kubernetes.io/proxy-body-size: "0"
52,53c51
<       nginx.ingress.kubernetes.io/proxy-body-size: "1024m"
<       nginx.org/client-max-body-size: "1024m"
---
>       nginx.ingress.kubernetes.io/proxy-body-size: "0"
114c112
< externalURL: https://myharbor2.com
---
> externalURL: https://core.harbor.domain
199c197
<       storageClass: "nfs-storage"
---
>       storageClass: ""
205c203
<       storageClass: "nfs-storage"
---
>       storageClass: ""
211c209
<       storageClass: "nfs-storage"
---
>       storageClass: ""
219c217
<       storageClass: "nfs-storage"
---
>       storageClass: ""
227c225
<       storageClass: "nfs-storage"
---
>       storageClass: ""
233c231
<       storageClass: "nfs-storage"
---
>       storageClass: ""
