
Installing Harbor, GitLab, Jenkins, Nexus, Sonar and more on a Highly Available Rancher (to be continued)

Initialize the environment:

  #!/bin/bash
  timedatectl set-timezone Asia/Shanghai
  gpasswd -a $USER docker
  #newgrp docker
  docker stop $(docker ps -a -q)
  docker rm $(docker ps -a -q)
  docker system prune -f
  docker volume rm $(docker volume ls -q)
  docker image rm $(docker image ls -q)
  rm -rf /etc/ceph \
         /etc/cni \
         /etc/kubernetes \
         /opt/cni \
         /opt/rke \
         /run/secrets/kubernetes.io \
         /run/calico \
         /run/flannel \
         /var/lib/calico \
         /var/lib/etcd \
         /var/lib/cni \
         /var/lib/kubelet \
         /var/lib/rancher/rke/log \
         /var/log/containers \
         /var/log/pods \
         /var/run/calico
  echo "
  net.bridge.bridge-nf-call-ip6tables=1
  net.bridge.bridge-nf-call-iptables=1
  net.ipv4.ip_forward=1
  net.ipv4.conf.all.forwarding=1
  net.ipv4.neigh.default.gc_thresh1=4096
  net.ipv4.neigh.default.gc_thresh2=6144
  net.ipv4.neigh.default.gc_thresh3=8192
  net.ipv4.neigh.default.gc_interval=60
  net.ipv4.neigh.default.gc_stale_time=120
  # See https://github.com/prometheus/node_exporter#disabled-by-default
  kernel.perf_event_paranoid=-1
  # sysctls for k8s node config
  net.ipv4.tcp_slow_start_after_idle=0
  net.core.rmem_max=16777216
  fs.inotify.max_user_watches=524288
  kernel.softlockup_all_cpu_backtrace=1
  kernel.softlockup_panic=0
  kernel.watchdog_thresh=30
  fs.file-max=2097152
  fs.inotify.max_user_instances=8192
  fs.inotify.max_queued_events=16384
  vm.max_map_count=262144
  fs.may_detach_mounts=1
  net.core.netdev_max_backlog=16384
  net.ipv4.tcp_wmem=4096 12582912 16777216
  net.core.wmem_max=16777216
  net.core.somaxconn=32768
  net.ipv4.ip_forward=1
  net.ipv4.tcp_max_syn_backlog=8096
  net.ipv4.tcp_rmem=4096 12582912 16777216
  net.ipv6.conf.all.disable_ipv6=1
  net.ipv6.conf.default.disable_ipv6=1
  net.ipv6.conf.lo.disable_ipv6=1
  kernel.yama.ptrace_scope=0
  vm.swappiness=0
  # Whether core dump file names include the PID as an extension
  kernel.core_uses_pid=1
  # Do not accept source routing
  net.ipv4.conf.default.accept_source_route=0
  net.ipv4.conf.all.accept_source_route=0
  # Promote secondary addresses when the primary address is removed
  net.ipv4.conf.default.promote_secondaries=1
  net.ipv4.conf.all.promote_secondaries=1
  # Enable hard and soft link protection
  fs.protected_hardlinks=1
  fs.protected_symlinks=1
  # Reverse-path (source route) validation
  # see details in https://help.aliyun.com/knowledge_detail/39428.html
  net.ipv4.conf.all.rp_filter=0
  net.ipv4.conf.default.rp_filter=0
  net.ipv4.conf.default.arp_announce=2
  net.ipv4.conf.lo.arp_announce=2
  net.ipv4.conf.all.arp_announce=2
  # see details in https://help.aliyun.com/knowledge_detail/41334.html
  net.ipv4.tcp_max_tw_buckets=5000
  net.ipv4.tcp_syncookies=1
  net.ipv4.tcp_fin_timeout=30
  net.ipv4.tcp_synack_retries=2
  kernel.sysrq=1
  " >> /etc/sysctl.conf
  sysctl -p
  cat >> /etc/security/limits.conf <<EOF
  * soft nofile 65535
  * hard nofile 65536
  EOF
  touch /etc/docker/daemon.json
  cat > /etc/docker/daemon.json <<EOF
  {
    "log-driver": "json-file",
    "log-opts": {
      "max-size": "100m",
      "max-file": "3"
    },
    "max-concurrent-downloads": 10,
    "max-concurrent-uploads": 10,
    "registry-mirrors": ["https://7bezldxe.mirror.aliyuncs.com"],
    "storage-driver": "overlay2",
    "storage-opts": [
      "overlay2.override_kernel_check=true"
    ]
  }
  EOF
  systemctl daemon-reload && systemctl restart docker
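
After the restart you can confirm the daemon picked up the new settings; docker info exposes the active logging and storage drivers:

  docker info --format '{{.LoggingDriver}} / {{.Driver}}'
  # expected output: json-file / overlay2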

Download the required tools:

  [ec2-user@ip-10-6-217-126 toolket]$ ll
  total 66376
  -rwxr-xr-x 1 ec2-user ec2-user     1911 Aug 25 14:08 clean.sh
  -rw-r--r-- 1 ec2-user ec2-user 12741413 Aug 25 14:07 helm-v3.3.0-linux-amd64.tar.gz
  drwxr-xr-x 3 ec2-user ec2-user       20 Mar 25 23:33 kubernetes
  -rw-r--r-- 1 ec2-user ec2-user 13231874 Aug 25 14:20 kubernetes-client-linux-amd64.tar.gz
  drwxr-xr-x 2 ec2-user ec2-user       50 Aug 12 05:41 linux-amd64
  -rw-r--r-- 1 ec2-user ec2-user     8710 Aug 25 14:07 rancher-2.4.5.tgz
  -rw-r--r-- 1 ec2-user ec2-user     7447 Aug 25 14:07 rancher-cluster.yml
  -rwxr-xr-x 1 ec2-user ec2-user 41966325 Aug 25 14:07 rke_linux-amd64

Extract kubectl and copy the binaries to the bin directory:

  [ec2-user@ip-10-6-217-126 toolket]$ tar zxvf kubernetes-client-linux-amd64.tar.gz
  kubernetes/
  kubernetes/client/
  kubernetes/client/bin/
  kubernetes/client/bin/kubectl
  [ec2-user@ip-10-6-217-126 toolket]$ sudo cp kubernetes/client/bin/kubectl /usr/local/bin/
  [ec2-user@ip-10-6-217-126 toolket]$ sudo cp rke_linux-amd64 /usr/local/bin/rke
  [ec2-user@ip-10-6-217-126 toolket]$ tar zxvf helm-v3.3.0-linux-amd64.tar.gz
  linux-amd64/
  linux-amd64/README.md
  linux-amd64/helm
  linux-amd64/LICENSE
  [ec2-user@ip-10-6-217-126 toolket]$ sudo cp linux-amd64/helm /usr/local/bin/
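
A quick check that all three binaries are on the PATH and executable (standard version flags):

  kubectl version --client --short
  rke --version
  helm version --short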

If Kubernetes was previously installed on the node, run the clean.sh script to clean up the environment:

  # Stop services
  systemctl disable kubelet.service
  systemctl disable kube-scheduler.service
  systemctl disable kube-proxy.service
  systemctl disable kube-controller-manager.service
  systemctl disable kube-apiserver.service
  systemctl stop kubelet.service
  systemctl stop kube-scheduler.service
  systemctl stop kube-proxy.service
  systemctl stop kube-controller-manager.service
  systemctl stop kube-apiserver.service
  # Remove all containers
  docker rm -f $(docker ps -qa)
  # Remove all container volumes
  docker volume rm $(docker volume ls -q)
  # Unmount kubelet and rancher directories
  for mount in $(mount | grep tmpfs | grep '/var/lib/kubelet' | awk '{ print $3 }') /var/lib/kubelet /var/lib/rancher; do umount $mount; done
  # Back up directories
  mv /etc/kubernetes /etc/kubernetes-bak-$(date +"%Y%m%d%H%M")
  mv /var/lib/etcd /var/lib/etcd-bak-$(date +"%Y%m%d%H%M")
  mv /var/lib/rancher /var/lib/rancher-bak-$(date +"%Y%m%d%H%M")
  mv /opt/rke /opt/rke-bak-$(date +"%Y%m%d%H%M")
  # Remove leftover paths
  rm -rf /etc/ceph \
         /etc/cni \
         /opt/cni \
         /run/secrets/kubernetes.io \
         /run/calico \
         /run/flannel \
         /var/lib/calico \
         /var/lib/cni \
         /var/lib/kubelet \
         /var/log/containers \
         /var/log/pods \
         /var/run/calico
  # Clean up network interfaces
  network_interface=`ls /sys/class/net`
  for net_inter in $network_interface;
  do
    if ! echo $net_inter | grep -qiE 'lo|docker0|eth*|ens*'; then
      ip link delete $net_inter
    fi
  done
  # Kill leftover processes listening on cluster ports
  port_list='80 443 6443 2376 2379 2380 8472 9099 10250 10254'
  for port in $port_list
  do
    pid=`netstat -atlnup | grep $port | awk '{print $7}' | awk -F '/' '{print $1}' | grep -v - | sort -rnk2 | uniq`
    if [[ -n $pid ]]; then
      kill -9 $pid
    fi
  done
  pro_pid=`ps -ef | grep -v grep | grep kube | awk '{print $2}'`
  if [[ -n $pro_pid ]]; then
    kill -9 $pro_pid
  fi

Generate an SSH key:

  [ec2-user@ip-10-6-217-126 toolket]$ ssh-keygen
  Generating public/private rsa key pair.
  Enter file in which to save the key (/home/ec2-user/.ssh/id_rsa):
  Enter passphrase (empty for no passphrase):
  Enter same passphrase again:
  Your identification has been saved in /home/ec2-user/.ssh/id_rsa.
  Your public key has been saved in /home/ec2-user/.ssh/id_rsa.pub.
  The key fingerprint is:
  SHA256:dUw+9uau7x+Nhof+r3V0Nay0ChufZNRF9M5eLd4KcBM ec2-user@ip-10-6-217-126.cn-northwest-1.compute.internal
  The key's randomart image is:
  +---[RSA 2048]----+
  | . o+ |
  | +. o .|
  | ..Eo oo|
  | ..o.+oo+|
  | So.ooo+ B|
  | BooB *+|
  | . ++ *.*|
  | . = oo|
  | o=B+o|
  +----[SHA256]-----+

Edit the .ssh/authorized_keys file and add the contents of the .ssh/id_rsa.pub public key (or append it with the one-liner below):

[ec2-user@ip-10-6-217-126 ~]$ vim .ssh/authorized_keys
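
Equivalently, the public key can be appended in one step, with the same effect as editing the file by hand:

  cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  chmod 600 ~/.ssh/authorized_keys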

Edit the rancher-cluster.yml file used to install the RKE cluster (a single-node install works too):

  nodes:
    - address: 10.6.217.126
      user: ec2-user
      role: [controlplane, worker, etcd]
  services:
    etcd:
      # Enable automatic backups
      ## Available with rke >= 0.2.x or rancher >= v2.2.0
      backup_config:
        enabled: true        # true enables automatic etcd backups, false disables them
        interval_hours: 12   # snapshot interval; without this parameter the default is 5 minutes
        retention: 3         # number of etcd backups to retain
    kube-api:
      extra_args:
        watch-cache: true
        default-watch-cache-size: 1500
        # Event retention time, default 1h
        event-ttl: 1h0m0s
        # Default 400; 0 means unlimited. As a rule of thumb, 15 in-flight requests per 25~30 Pods
        max-requests-inflight: 800
        # Default 200; 0 means unlimited
        max-mutating-requests-inflight: 400
        # kubelet operation timeout, default 5s
        kubelet-timeout: 5s
    kube-controller:
      extra_args:
        # Per-node subnet size (CIDR mask length). Default 24 (254 usable IPs); 23 gives 510, 22 gives 1022
        node-cidr-mask-size: "24"
        feature-gates: "TaintBasedEvictions=false"
        # How often the controller checks node connectivity, default 5s
        node-monitor-period: "5s"
        ## After node communication fails, wait this long before marking the node NotReady.
        ## Must be a multiple of the kubelet's nodeStatusUpdateFrequency (default 10s),
        ## where N is the number of retries allowed for the kubelet to sync node status. Default 40s.
        node-monitor-grace-period: "20s"
        ## After communication keeps failing this long, the node is judged unhealthy. Default 1m0s.
        node-startup-grace-period: "30s"
        ## After the node stays unreachable this long, Kubernetes starts evicting its Pods. Default 5m0s.
        pod-eviction-timeout: "1m"
        # Default 5. Number of deployments synced concurrently.
        concurrent-deployment-syncs: 5
        # Default 5. Number of endpoints synced concurrently.
        concurrent-endpoint-syncs: 5
        # Default 20. Number of garbage collector workers running concurrently.
        concurrent-gc-syncs: 20
        # Default 10. Number of namespaces synced concurrently.
        concurrent-namespace-syncs: 10
        # Default 5. Number of replicasets synced concurrently.
        concurrent-replicaset-syncs: 5
        # Default 5m0s. Resource quota sync period. (Deprecated in newer versions)
        # concurrent-resource-quota-syncs: 5m0s
        # Default 1. Number of services synced concurrently.
        concurrent-service-syncs: 1
        # Default 5. Number of service account tokens synced concurrently.
        concurrent-serviceaccount-token-syncs: 5
        # Default 30s. Deployment sync period.
        deployment-controller-sync-period: 30s
        # Default 15s. PV and PVC sync period.
        pvclaimbinder-sync-period: 15s
    kubelet:
      extra_args:
        feature-gates: "TaintBasedEvictions=false"
        # Pause image
        pod-infra-container-image: "rancher/pause:3.1"
        # MTU passed to the network plugin to override the default; 0 means use the default 1460
        network-plugin-mtu: "1500"
        # Maximum number of Pods per node
        max-pods: "250"
        # Secret and ConfigMap sync frequency, default 1 minute
        sync-frequency: "3s"
        # Number of files the kubelet process may open (default 1000000); tune to the node
        max-open-files: "2000000"
        # Burst when talking to the apiserver, default 10
        kube-api-burst: "30"
        # QPS when talking to the apiserver, default 5; QPS = concurrency / average response time
        kube-api-qps: "15"
        # The kubelet pulls one image at a time by default; set to false to pull several in parallel.
        # Requires the overlay2 storage driver, and Docker's download concurrency must be raised
        # accordingly (see the docker configuration at /rancher2x/install-prepare/best-practices/docker/)
        serialize-image-pulls: "false"
        # Maximum concurrency for image pulls; registry-burst must not exceed registry-qps.
        # Only takes effect when registry-qps > 0 (default 10); registry-qps of 0 means unlimited (default 5).
        registry-burst: "10"
        registry-qps: "0"
        cgroups-per-qos: "true"
        cgroup-driver: "cgroupfs"
        # Node resource reservation
        enforce-node-allocatable: "pods"
        system-reserved: "cpu=0.25,memory=200Mi"
        kube-reserved: "cpu=0.25,memory=1500Mi"
        # Pod eviction; these parameters only support memory and disk.
        ## Hard eviction thresholds
        ### When available resources on the node fall below the reserved values, hard eviction triggers.
        ### Hard eviction force-kills Pods without waiting for them to exit on their own.
        eviction-hard: "memory.available<300Mi,nodefs.available<10%,imagefs.available<15%,nodefs.inodesFree<5%"
        ## Soft eviction thresholds
        ### The following four parameters work together. When available resources drop below these values
        ### but stay above the hard thresholds, the kubelet waits for eviction-soft-grace-period,
        ### checking every 10s; if the last check still crosses the soft threshold, eviction starts.
        ### Soft eviction does not kill Pods outright: it first sends a stop signal and waits
        ### eviction-max-pod-grace-period; if the Pod has not exited by then, it is force-killed.
        eviction-soft: "memory.available<500Mi,nodefs.available<50%,imagefs.available<50%,nodefs.inodesFree<10%"
        eviction-soft-grace-period: "memory.available=1m30s,nodefs.available=1m30s,imagefs.available=1m30s,nodefs.inodesFree=1m30s"
        eviction-max-pod-grace-period: "30"
        eviction-pressure-transition-period: "30s"
        # How often the kubelet posts node status to the master. Note: it must be coordinated with
        # nodeMonitorGracePeriod in kube-controller. (Default 10s)
        node-status-update-frequency: 10s
        # Interval of cAdvisor's global housekeeping, which discovers new containers mainly
        # through kernel events. Default 1m0s
        global-housekeeping-interval: 1m0s
        # Data collection frequency for each discovered container. Default 10s
        housekeeping-interval: 10s
        # Timeout for all runtime requests except long-running pull, logs, exec and attach.
        # On timeout the kubelet cancels the request, raises an error and retries. (Default 2m0s)
        runtime-request-timeout: 2m0s
        # Interval at which the kubelet computes and caches disk usage for all pods and volumes. Default 1m0s
        volume-stats-agg-period: 1m0s
      # Optionally bind extra volumes into the service
      extra_binds:
        - "/usr/libexec/kubernetes/kubelet-plugins:/usr/libexec/kubernetes/kubelet-plugins"
        - "/etc/iscsi:/etc/iscsi"
        - "/sbin/iscsiadm:/sbin/iscsiadm"
    kubeproxy:
      extra_args:
        # iptables is used for forwarding by default; to enable ipvs, set this to `ipvs`
        # and add the `extra_binds` below
        proxy-mode: ""
        # Burst when talking to the kubernetes apiserver, default 10
        kube-api-burst: 20
        # QPS when talking to the kubernetes apiserver, default 5; QPS = concurrency / average response time
        kube-api-qps: 10
      extra_binds:
        - "/lib/modules:/lib/modules"
    scheduler:
      extra_args:
        kube-api-burst:
      extra_binds: []
      extra_env: []
  # Required when using external TLS termination with ingress-nginx v0.22 or above.
  ingress:
    provider: nginx
    options:
      use-forwarded-headers: "true"

Minimal configuration:

  nodes:
    - address: 10.6.217.126
      user: ec2-user
      role: [controlplane, worker, etcd]
  services:
    etcd:
      # Enable automatic backups
      ## Available with rke >= 0.2.x or rancher >= v2.2.0
      backup_config:
        enabled: true        # true enables automatic etcd backups, false disables them
        interval_hours: 12   # snapshot interval; without this parameter the default is 5 minutes
        retention: 3         # number of etcd backups to retain
  # Required when using external TLS termination with ingress-nginx v0.22 or above.
  ingress:
    provider: nginx
    options:
      use-forwarded-headers: "true"

Bring up the cluster with RKE:

  [ec2-user@ip-10-6-217-126 toolket]$ rke up --config ./rancher-cluster.yml
  INFO[0000] Running RKE version: v1.1.4
  INFO[0000] Initiating Kubernetes cluster
  INFO[0000] [dialer] Setup tunnel for host [10.6.217.126]
  INFO[0000] Checking if container [cluster-state-deployer] is running on host [10.6.217.126], try #1
  INFO[0000] Image [rancher/rke-tools:v0.1.59] exists on host [10.6.217.126]
  INFO[0000] Starting container [cluster-state-deployer] on host [10.6.217.126], try #1
  INFO[0000] [state] Successfully started [cluster-state-deployer] container on host [10.6.217.126]
  INFO[0000] [certificates] Generating CA kubernetes certificates
  INFO[0000] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates
  INFO[0001] [certificates] GenerateServingCertificate is disabled, checking if there are unused kubelet certificates
  INFO[0001] [certificates] Generating Kubernetes API server certificates
  INFO[0001] [certificates] Generating Service account token key
  INFO[0001] [certificates] Generating Kube Controller certificates
  INFO[0001] [certificates] Generating Kube Scheduler certificates
  INFO[0001] [certificates] Generating Kube Proxy certificates
  INFO[0001] [certificates] Generating Node certificate
  INFO[0001] [certificates] Generating admin certificates and kubeconfig
  INFO[0002] [certificates] Generating Kubernetes API server proxy client certificates
  INFO[0002] [certificates] Generating kube-etcd-10-6-217-126 certificate and key
  INFO[0002] Successfully Deployed state file at [./rancher-cluster.rkestate]
  INFO[0002] Building Kubernetes cluster
  INFO[0002] [dialer] Setup tunnel for host [10.6.217.126]
  INFO[0002] [network] Deploying port listener containers
  INFO[0002] Image [rancher/rke-tools:v0.1.59] exists on host [10.6.217.126]
  INFO[0002] Starting container [rke-etcd-port-listener] on host [10.6.217.126], try #1
  INFO[0003] [network] Successfully started [rke-etcd-port-listener] container on host [10.6.217.126]
  INFO[0003] Image [rancher/rke-tools:v0.1.59] exists on host [10.6.217.126]
  INFO[0003] Starting container [rke-cp-port-listener] on host [10.6.217.126], try #1
  INFO[0004] [network] Successfully started [rke-cp-port-listener] container on host [10.6.217.126]
  INFO[0004] Image [rancher/rke-tools:v0.1.59] exists on host [10.6.217.126]
  INFO[0004] Starting container [rke-worker-port-listener] on host [10.6.217.126], try #1
  INFO[0005] [network] Successfully started [rke-worker-port-listener] container on host [10.6.217.126]
  INFO[0005] [network] Port listener containers deployed successfully
  INFO[0005] [network] Running control plane -> etcd port checks
  INFO[0005] Image [rancher/rke-tools:v0.1.59] exists on host [10.6.217.126]
  INFO[0005] Starting container [rke-port-checker] on host [10.6.217.126], try #1
  INFO[0005] [network] Successfully started [rke-port-checker] container on host [10.6.217.126]
  INFO[0006] Removing container [rke-port-checker] on host [10.6.217.126], try #1
  INFO[0006] [network] Running control plane -> worker port checks
  INFO[0006] Image [rancher/rke-tools:v0.1.59] exists on host [10.6.217.126]
  INFO[0006] Starting container [rke-port-checker] on host [10.6.217.126], try #1
  INFO[0006] [network] Successfully started [rke-port-checker] container on host [10.6.217.126]
  INFO[0006] Removing container [rke-port-checker] on host [10.6.217.126], try #1
  INFO[0006] [network] Running workers -> control plane port checks
  INFO[0006] Image [rancher/rke-tools:v0.1.59] exists on host [10.6.217.126]
  INFO[0006] Starting container [rke-port-checker] on host [10.6.217.126], try #1
  INFO[0007] [network] Successfully started [rke-port-checker] container on host [10.6.217.126]
  INFO[0007] Removing container [rke-port-checker] on host [10.6.217.126], try #1
  INFO[0007] [network] Checking KubeAPI port Control Plane hosts
  INFO[0007] [network] Removing port listener containers
  INFO[0007] Removing container [rke-etcd-port-listener] on host [10.6.217.126], try #1
  INFO[0007] [remove/rke-etcd-port-listener] Successfully removed container on host [10.6.217.126]
  INFO[0007] Removing container [rke-cp-port-listener] on host [10.6.217.126], try #1
  INFO[0007] [remove/rke-cp-port-listener] Successfully removed container on host [10.6.217.126]
  INFO[0007] Removing container [rke-worker-port-listener] on host [10.6.217.126], try #1
  INFO[0008] [remove/rke-worker-port-listener] Successfully removed container on host [10.6.217.126]
  INFO[0008] [network] Port listener containers removed successfully
  INFO[0008] [certificates] Deploying kubernetes certificates to Cluster nodes
  INFO[0008] Checking if container [cert-deployer] is running on host [10.6.217.126], try #1
  INFO[0008] Image [rancher/rke-tools:v0.1.59] exists on host [10.6.217.126]
  INFO[0008] Starting container [cert-deployer] on host [10.6.217.126], try #1
  INFO[0008] Checking if container [cert-deployer] is running on host [10.6.217.126], try #1
  INFO[0013] Checking if container [cert-deployer] is running on host [10.6.217.126], try #1
  INFO[0013] Removing container [cert-deployer] on host [10.6.217.126], try #1
  INFO[0013] [reconcile] Rebuilding and updating local kube config
  INFO[0013] Successfully Deployed local admin kubeconfig at [./kube_config_rancher-cluster.yml]
  INFO[0013] [certificates] Successfully deployed kubernetes certificates to Cluster nodes
  INFO[0013] [file-deploy] Deploying file [/etc/kubernetes/audit-policy.yaml] to node [10.6.217.126]
  INFO[0013] Image [rancher/rke-tools:v0.1.59] exists on host [10.6.217.126]
  INFO[0014] Starting container [file-deployer] on host [10.6.217.126], try #1
  INFO[0014] Successfully started [file-deployer] container on host [10.6.217.126]
  INFO[0014] Waiting for [file-deployer] container to exit on host [10.6.217.126]
  INFO[0014] Waiting for [file-deployer] container to exit on host [10.6.217.126]
  INFO[0014] Container [file-deployer] is still running on host [10.6.217.126]
  INFO[0015] Waiting for [file-deployer] container to exit on host [10.6.217.126]
  INFO[0015] Removing container [file-deployer] on host [10.6.217.126], try #1
  INFO[0015] [remove/file-deployer] Successfully removed container on host [10.6.217.126]
  INFO[0015] [/etc/kubernetes/audit-policy.yaml] Successfully deployed audit policy file to Cluster control nodes
  INFO[0015] [reconcile] Reconciling cluster state
  INFO[0015] [reconcile] This is newly generated cluster
  INFO[0015] Pre-pulling kubernetes images
  INFO[0015] Image [rancher/hyperkube:v1.18.6-rancher1] exists on host [10.6.217.126]
  INFO[0015] Kubernetes images pulled successfully
  INFO[0015] [etcd] Building up etcd plane..
  INFO[0015] Image [rancher/rke-tools:v0.1.59] exists on host [10.6.217.126]
  INFO[0015] Starting container [etcd-fix-perm] on host [10.6.217.126], try #1
  INFO[0016] Successfully started [etcd-fix-perm] container on host [10.6.217.126]
  INFO[0016] Waiting for [etcd-fix-perm] container to exit on host [10.6.217.126]
  INFO[0016] Waiting for [etcd-fix-perm] container to exit on host [10.6.217.126]
  INFO[0016] Container [etcd-fix-perm] is still running on host [10.6.217.126]
  INFO[0017] Waiting for [etcd-fix-perm] container to exit on host [10.6.217.126]
  INFO[0017] Removing container [etcd-fix-perm] on host [10.6.217.126], try #1
  INFO[0017] [remove/etcd-fix-perm] Successfully removed container on host [10.6.217.126]
  INFO[0017] Image [rancher/coreos-etcd:v3.4.3-rancher1] exists on host [10.6.217.126]
  INFO[0017] Starting container [etcd] on host [10.6.217.126], try #1
  INFO[0018] [etcd] Successfully started [etcd] container on host [10.6.217.126]
  INFO[0018] [etcd] Running rolling snapshot container [etcd-snapshot-once] on host [10.6.217.126]
  INFO[0018] Image [rancher/rke-tools:v0.1.59] exists on host [10.6.217.126]
  INFO[0018] Starting container [etcd-rolling-snapshots] on host [10.6.217.126], try #1
  INFO[0018] [etcd] Successfully started [etcd-rolling-snapshots] container on host [10.6.217.126]
  INFO[0023] Image [rancher/rke-tools:v0.1.59] exists on host [10.6.217.126]
  INFO[0023] Starting container [rke-bundle-cert] on host [10.6.217.126], try #1
  INFO[0024] [certificates] Successfully started [rke-bundle-cert] container on host [10.6.217.126]
  INFO[0024] Waiting for [rke-bundle-cert] container to exit on host [10.6.217.126]
  INFO[0024] Container [rke-bundle-cert] is still running on host [10.6.217.126]
  INFO[0025] Waiting for [rke-bundle-cert] container to exit on host [10.6.217.126]
  INFO[0025] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [10.6.217.126]
  INFO[0025] Removing container [rke-bundle-cert] on host [10.6.217.126], try #1
  INFO[0025] Image [rancher/rke-tools:v0.1.59] exists on host [10.6.217.126]
  INFO[0025] Starting container [rke-log-linker] on host [10.6.217.126], try #1
  INFO[0026] [etcd] Successfully started [rke-log-linker] container on host [10.6.217.126]
  INFO[0026] Removing container [rke-log-linker] on host [10.6.217.126], try #1
  INFO[0026] [remove/rke-log-linker] Successfully removed container on host [10.6.217.126]
  INFO[0026] [etcd] Successfully started etcd plane.. Checking etcd cluster health
  INFO[0026] [controlplane] Building up Controller Plane..
  INFO[0026] Checking if container [service-sidekick] is running on host [10.6.217.126], try #1
  INFO[0026] Image [rancher/rke-tools:v0.1.59] exists on host [10.6.217.126]
  INFO[0026] Image [rancher/hyperkube:v1.18.6-rancher1] exists on host [10.6.217.126]
  INFO[0026] Starting container [kube-apiserver] on host [10.6.217.126], try #1
  INFO[0027] [controlplane] Successfully started [kube-apiserver] container on host [10.6.217.126]
  INFO[0027] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [10.6.217.126]
  INFO[0035] [healthcheck] service [kube-apiserver] on host [10.6.217.126] is healthy
  INFO[0035] Image [rancher/rke-tools:v0.1.59] exists on host [10.6.217.126]
  INFO[0035] Starting container [rke-log-linker] on host [10.6.217.126], try #1
  INFO[0036] [controlplane] Successfully started [rke-log-linker] container on host [10.6.217.126]
  INFO[0036] Removing container [rke-log-linker] on host [10.6.217.126], try #1
  INFO[0036] [remove/rke-log-linker] Successfully removed container on host [10.6.217.126]
  INFO[0036] Image [rancher/hyperkube:v1.18.6-rancher1] exists on host [10.6.217.126]
  INFO[0036] Starting container [kube-controller-manager] on host [10.6.217.126], try #1
  INFO[0037] [controlplane] Successfully started [kube-controller-manager] container on host [10.6.217.126]
  INFO[0037] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [10.6.217.126]
  INFO[0042] [healthcheck] service [kube-controller-manager] on host [10.6.217.126] is healthy
  INFO[0042] Image [rancher/rke-tools:v0.1.59] exists on host [10.6.217.126]
  INFO[0042] Starting container [rke-log-linker] on host [10.6.217.126], try #1
  INFO[0042] [controlplane] Successfully started [rke-log-linker] container on host [10.6.217.126]
  INFO[0042] Removing container [rke-log-linker] on host [10.6.217.126], try #1
  INFO[0043] [remove/rke-log-linker] Successfully removed container on host [10.6.217.126]
  INFO[0043] Image [rancher/hyperkube:v1.18.6-rancher1] exists on host [10.6.217.126]
  INFO[0043] Starting container [kube-scheduler] on host [10.6.217.126], try #1
  INFO[0043] [controlplane] Successfully started [kube-scheduler] container on host [10.6.217.126]
  INFO[0043] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [10.6.217.126]
  INFO[0048] [healthcheck] service [kube-scheduler] on host [10.6.217.126] is healthy
  INFO[0048] Image [rancher/rke-tools:v0.1.59] exists on host [10.6.217.126]
  INFO[0049] Starting container [rke-log-linker] on host [10.6.217.126], try #1
  INFO[0049] [controlplane] Successfully started [rke-log-linker] container on host [10.6.217.126]
  INFO[0049] Removing container [rke-log-linker] on host [10.6.217.126], try #1
  INFO[0049] [remove/rke-log-linker] Successfully removed container on host [10.6.217.126]
  INFO[0049] [controlplane] Successfully started Controller Plane..
  INFO[0049] [authz] Creating rke-job-deployer ServiceAccount
  INFO[0049] [authz] rke-job-deployer ServiceAccount created successfully
  INFO[0049] [authz] Creating system:node ClusterRoleBinding
  INFO[0049] [authz] system:node ClusterRoleBinding created successfully
  INFO[0049] [authz] Creating kube-apiserver proxy ClusterRole and ClusterRoleBinding
  INFO[0049] [authz] kube-apiserver proxy ClusterRole and ClusterRoleBinding created successfully
  INFO[0049] Successfully Deployed state file at [./rancher-cluster.rkestate]
  INFO[0049] [state] Saving full cluster state to Kubernetes
  INFO[0049] [state] Successfully Saved full cluster state to Kubernetes ConfigMap: full-cluster-state
  INFO[0049] [worker] Building up Worker Plane..
  INFO[0049] Checking if container [service-sidekick] is running on host [10.6.217.126], try #1
  INFO[0049] [sidekick] Sidekick container already created on host [10.6.217.126]
  INFO[0049] Image [rancher/hyperkube:v1.18.6-rancher1] exists on host [10.6.217.126]
  INFO[0049] Starting container [kubelet] on host [10.6.217.126], try #1
  INFO[0050] [worker] Successfully started [kubelet] container on host [10.6.217.126]
  INFO[0050] [healthcheck] Start Healthcheck on service [kubelet] on host [10.6.217.126]
  INFO[0055] [healthcheck] service [kubelet] on host [10.6.217.126] is healthy
  INFO[0055] Image [rancher/rke-tools:v0.1.59] exists on host [10.6.217.126]
  INFO[0055] Starting container [rke-log-linker] on host [10.6.217.126], try #1
  INFO[0055] [worker] Successfully started [rke-log-linker] container on host [10.6.217.126]
  INFO[0055] Removing container [rke-log-linker] on host [10.6.217.126], try #1
  INFO[0056] [remove/rke-log-linker] Successfully removed container on host [10.6.217.126]
  INFO[0056] Image [rancher/hyperkube:v1.18.6-rancher1] exists on host [10.6.217.126]
  INFO[0056] Starting container [kube-proxy] on host [10.6.217.126], try #1
  INFO[0056] [worker] Successfully started [kube-proxy] container on host [10.6.217.126]
  INFO[0056] [healthcheck] Start Healthcheck on service [kube-proxy] on host [10.6.217.126]
  INFO[0056] [healthcheck] service [kube-proxy] on host [10.6.217.126] is healthy
  INFO[0056] Image [rancher/rke-tools:v0.1.59] exists on host [10.6.217.126]
  INFO[0056] Starting container [rke-log-linker] on host [10.6.217.126], try #1
  INFO[0057] [worker] Successfully started [rke-log-linker] container on host [10.6.217.126]
  INFO[0057] Removing container [rke-log-linker] on host [10.6.217.126], try #1
  INFO[0057] [remove/rke-log-linker] Successfully removed container on host [10.6.217.126]
  INFO[0057] [worker] Successfully started Worker Plane..
  INFO[0057] Image [rancher/rke-tools:v0.1.59] exists on host [10.6.217.126]
  INFO[0057] Starting container [rke-log-cleaner] on host [10.6.217.126], try #1
  INFO[0058] [cleanup] Successfully started [rke-log-cleaner] container on host [10.6.217.126]
  INFO[0058] Removing container [rke-log-cleaner] on host [10.6.217.126], try #1
  INFO[0058] [remove/rke-log-cleaner] Successfully removed container on host [10.6.217.126]
  INFO[0058] [sync] Syncing nodes Labels and Taints
  INFO[0058] [sync] Successfully synced nodes Labels and Taints
  INFO[0058] [network] Setting up network plugin: canal
  INFO[0058] [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes
  INFO[0058] [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes
  INFO[0058] [addons] Executing deploy job rke-network-plugin
  INFO[0078] [addons] Setting up coredns
  INFO[0078] [addons] Saving ConfigMap for addon rke-coredns-addon to Kubernetes
  INFO[0078] [addons] Successfully saved ConfigMap for addon rke-coredns-addon to Kubernetes
  INFO[0078] [addons] Executing deploy job rke-coredns-addon
  INFO[0083] [addons] CoreDNS deployed successfully
  INFO[0083] [dns] DNS provider coredns deployed successfully
  INFO[0083] [addons] Setting up Metrics Server
  INFO[0083] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes
  INFO[0083] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes
  INFO[0083] [addons] Executing deploy job rke-metrics-addon
  INFO[0088] [addons] Metrics Server deployed successfully
  INFO[0088] [ingress] Setting up nginx ingress controller
  INFO[0088] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
  INFO[0088] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
  INFO[0088] [addons] Executing deploy job rke-ingress-controller
  INFO[0093] [ingress] ingress controller nginx deployed successfully
  INFO[0093] [addons] Setting up user addons
  INFO[0093] [addons] no user addons defined
  INFO[0093] Finished building Kubernetes cluster successfully
  [ec2-user@ip-10-6-217-126 toolket]$ docker ps
  CONTAINER ID   IMAGE                                 COMMAND                  CREATED              STATUS              PORTS   NAMES
  183fc321581e   rancher/pause:3.1                     "/pause"                 10 seconds ago       Up 10 seconds               k8s_POD_nginx-ingress-controller-zptj2_ingress-nginx_6f45cdd8-bf43-4c3c-bac4-0a807ae06264_0
  b3a17b94c215   rancher/pause:3.1                     "/pause"                 24 seconds ago       Up 23 seconds               k8s_POD_canal-hs8gc_kube-system_014f34fe-d5ce-48b9-b2a9-de1a7d62ff98_0
  a6dc1c024f7f   rancher/hyperkube:v1.18.6-rancher1    "/opt/rke-tools/entr…"   44 seconds ago       Up 43 seconds               kube-proxy
  72b70d6d69f7   rancher/hyperkube:v1.18.6-rancher1    "/opt/rke-tools/entr…"   50 seconds ago       Up 50 seconds               kubelet
  a4c09ee15b54   rancher/hyperkube:v1.18.6-rancher1    "/opt/rke-tools/entr…"   57 seconds ago       Up 56 seconds               kube-scheduler
  9d0c1c618ba4   rancher/hyperkube:v1.18.6-rancher1    "/opt/rke-tools/entr…"   About a minute ago   Up About a minute           kube-controller-manager
  4559750e14e6   rancher/hyperkube:v1.18.6-rancher1    "/opt/rke-tools/entr…"   About a minute ago   Up About a minute           kube-apiserver
  d5557332a025   rancher/rke-tools:v0.1.59             "/opt/rke-tools/rke-…"   About a minute ago   Up About a minute           etcd-rolling-snapshots
  fa63f04f2550   rancher/coreos-etcd:v3.4.3-rancher1   "/usr/local/bin/etcd…"   About a minute ago   Up About a minute           etcd

Check the installation status:

  [ec2-user@ip-10-6-217-126 toolket]$ cp kube_config_rancher-cluster.yml ~/.kube/config
  [ec2-user@ip-10-6-217-126 toolket]$ kubectl get nodes
  NAME           STATUS   ROLES                      AGE     VERSION
  10.6.217.126   Ready    controlplane,etcd,worker   3m17s   v1.18.6
  [ec2-user@ip-10-6-217-126 toolket]$ kubectl get pods --all-namespaces
  NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
  ingress-nginx   default-http-backend-598b7d7dbd-ch4gt     1/1     Running     0          3m4s
  ingress-nginx   nginx-ingress-controller-zptj2            1/1     Running     0          3m4s
  kube-system     canal-hs8gc                               2/2     Running     0          3m18s
  kube-system     coredns-849545576b-58qdp                  1/1     Running     0          3m14s
  kube-system     coredns-autoscaler-5dcd676cbd-qb67v       1/1     Running     0          3m14s
  kube-system     metrics-server-697746ff48-6gczh           1/1     Running     0          3m9s
  kube-system     rke-coredns-addon-deploy-job-z8gzv        0/1     Completed   0          3m15s
  kube-system     rke-ingress-controller-deploy-job-x9nqh   0/1     Completed   0          3m5s
  kube-system     rke-metrics-addon-deploy-job-tfxqp        0/1     Completed   0          3m10s
  kube-system     rke-network-plugin-deploy-job-dt9sr       0/1     Completed   0          3m35s

Save copies of the following files in a safe place (see the backup sketch after this list):

  • rancher-cluster.yml: the RKE cluster configuration file.
  • kube_config_rancher-cluster.yml: the kubeconfig file for the cluster; it contains credentials for accessing the cluster.
  • rancher-cluster.rkestate: the Kubernetes cluster state file; it contains credentials for full access to the cluster.
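
For example, all three can be bundled into a single timestamped archive (run from the toolket directory used above):

  tar czf rancher-cluster-backup-$(date +%Y%m%d%H%M).tgz \
    rancher-cluster.yml kube_config_rancher-cluster.yml rancher-cluster.rkestate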

Add the Helm chart repository:

  [ec2-user@ip-10-6-217-126 toolket]$ helm repo add rancher-stable http://rancher-mirror.oss-cn-beijing.aliyuncs.com/server-charts/stable
  "rancher-stable" has been added to your repositories

Create the Rancher namespace:

  [ec2-user@ip-10-6-217-126 toolket]$ kubectl create namespace cattle-system
  namespace/cattle-system created

Render the Rancher install templates, specifying the hostname and external load balancer SSL termination:

  [ec2-user@ip-10-6-217-126 toolket]$ helm template rancher ./rancher-2.4.5.tgz --output-dir . \
  > --namespace cattle-system \
  > --set hostname=rancher.example.cn \
  > --set tls=external
  wrote ./rancher/templates/serviceAccount.yaml
  wrote ./rancher/templates/clusterRoleBinding.yaml
  wrote ./rancher/templates/service.yaml
  wrote ./rancher/templates/deployment.yaml
  wrote ./rancher/templates/ingress.yaml
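
Rendering with helm template and applying the output is one option; with Helm 3 the same values can also be installed in a single step, as a sketch of an equivalent alternative:

  helm install rancher ./rancher-2.4.5.tgz \
    --namespace cattle-system \
    --set hostname=rancher.example.cn \
    --set tls=external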

Install Rancher:

  [ec2-user@ip-10-6-217-126 toolket]$ kubectl -n cattle-system apply -R -f ./rancher
  clusterrolebinding.rbac.authorization.k8s.io/rancher created
  deployment.apps/rancher created
  ingress.extensions/rancher created
  service/rancher created
  serviceaccount/rancher created

Watch the rollout progress and status:

  [ec2-user@ip-10-6-217-126 toolket]$ kubectl -n cattle-system rollout status deploy/rancher
  Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
  Waiting for deployment "rancher" rollout to finish: 1 of 3 updated replicas are available...
  Waiting for deployment spec update to be observed...
  Waiting for deployment "rancher" rollout to finish: 1 of 3 updated replicas are available...
  Waiting for deployment "rancher" rollout to finish: 2 of 3 updated replicas are available...
  deployment "rancher" successfully rolled out
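
With external TLS termination, the ingress serves Rancher over plain HTTP on the node, so a quick sanity probe against Rancher's /ping endpoint (assuming the hostname and node IP used above) should return pong:

  curl -s -H "Host: rancher.example.cn" http://10.6.217.126/ping
  # expected output: pong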

 

Install GitLab:

gitlab/gitlab-ce:latest

Create data volumes for the following container paths (a run sketch follows the list):

  /var/log/gitlab
  /var/opt/gitlab
  /etc/gitlab
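
If the image is run directly with Docker rather than as a Rancher workload, a minimal sketch mapping those three paths to host directories (the /opt/gitlab/* host paths are hypothetical placeholders):

  docker run -d --name gitlab \
    -p 80:80 \
    -v /opt/gitlab/config:/etc/gitlab \
    -v /opt/gitlab/logs:/var/log/gitlab \
    -v /opt/gitlab/data:/var/opt/gitlab \
    gitlab/gitlab-ce:latest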

Map port 80 for external access, and add the following annotations to the ingress to fix HTTP 413 (Request Entity Too Large) errors:

  metadata:
    annotations:
      nginx.ingress.kubernetes.io/proxy-body-size: 1024m
      nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
      nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
      nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
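
For reference, a minimal sketch of a complete Ingress carrying these annotations; the hostname gitlab.example.cn and the Service name gitlab are hypothetical placeholders:

  apiVersion: extensions/v1beta1   # the Ingress API group available on Kubernetes 1.18
  kind: Ingress
  metadata:
    name: gitlab
    annotations:
      nginx.ingress.kubernetes.io/proxy-body-size: 1024m
      nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
      nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
      nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
  spec:
    rules:
      - host: gitlab.example.cn        # hypothetical hostname
        http:
          paths:
            - backend:
                serviceName: gitlab    # hypothetical Service exposing GitLab on port 80
                servicePort: 80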

It is best to pin the workload to a specific host, since the data volumes live on that host.

 

Install Jenkins:

jenkins/jenkins:lts

Map the data volume:

/var/jenkins_home

Map the host's docker.sock; the container needs access permission, which can be granted with: sudo chmod o+rw /var/run/docker.sock

/var/run/docker.sock

Copy the docker binary into the jenkins_home working directory so the command can be used inside the container (here into the Maven tool's bin directory, which the PATH+EXTRA variable below puts on the PATH):

sudo cp /usr/bin/docker /opt/jenkins-home/tools/hudson.tasks.Maven_MavenInstallation/maven3/bin/docker

Copy the host's .docker/config.json credentials file so the container can connect to the private image registry:

sudo cp .docker/config.json /opt/jenkins-home/.docker/config.json
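
Putting the pieces together, a minimal sketch of running the image directly with Docker; the /opt/jenkins-home host path follows the copy commands above, and 50000 is the optional agent port:

  sudo chmod o+rw /var/run/docker.sock
  docker run -d --name jenkins \
    -p 8080:8080 -p 50000:50000 \
    -v /opt/jenkins-home:/var/jenkins_home \
    -v /var/run/docker.sock:/var/run/docker.sock \
    jenkins/jenkins:lts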


 

Jenkins configuration:

1. Environment variable:
   PATH+EXTRA  /var/jenkins_home/tools/hudson.tasks.Maven_MavenInstallation/maven3/bin
2. JDK path:
   /usr/local/openjdk-8

Map port 8080 for public access.

It is best to pin the workload to a specific host.

 
