Setting up a basic k3s server was covered in the previous article.
k3s supports a wide range of systems and kernels. I tested it on Ubuntu 22.04 across three different architectures: amd64 (a Dell laptop and a VM), arm64 (a VM on an M1 Mac), and arm64 (an NVIDIA Orin running the JetPack 6 DP release). The two arm64 machines run different kernels, so some of the configuration comes with quite a few pitfalls.
On amd64 and arm64 laptops or VMs, the k3s server and agent are easy to set up by following the official documentation. The only real difficulty in building a k3s cluster is getting the devices, each in its different role, to communicate with one another. Normally no extra settings in the deployment YAML and no additional network configuration files are needed before the cluster nodes can talk to each other.

In my project, however, I also tested k3s on a Raspberry Pi 4B and an NVIDIA Orin 64GB, and as one might expect, both require extra configuration. According to the official documentation, the Raspberry Pi needs additional kernel modules installed (tutorials are available online). On the NVIDIA Orin 64GB, the bundled Ubuntu 22.04 defaults to cgroup v2; every other Ubuntu 22.04 system I checked also uses cgroup v2, so I assume cgroup v2 is simply the Ubuntu 22.04 default. With cgroup v2, however, containers on the NVIDIA Orin could not reach their corresponding service APIs; following online tutorials, the system can be switched to cgroup v1 instead. Because the NVIDIA Orin does not use the GRUB boot files of an ordinary Ubuntu system, modifying the boot configuration involved many detours. Moreover, in my testing, the k3s service on the NVIDIA Orin only installs, starts, and accepts new cluster nodes correctly with the host-gw flannel backend. The flannel backend mode must also be consistent between the server and the agents, and the k3s default is vxlan, so the network configuration needs special attention when building a k3s cluster on an NVIDIA Orin.
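A rough sketch of those two Orin-specific changes, stated as assumptions rather than a verified recipe: the `extlinux.conf` path is the typical JetPack location, and the token matches the install commands used later in this article:

```shell
# 1) Switch to cgroup v1: Jetson boards have no GRUB; instead, append the
#    following kernel parameter to the APPEND line of
#    /boot/extlinux/extlinux.conf (back it up first), then reboot:
#      systemd.unified_cgroup_hierarchy=0

# 2) Install the k3s server with the host-gw flannel backend instead of the
#    default vxlan (agents then join this server as usual):
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --token 12345 --docker --flannel-backend=host-gw" sh -s -
```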
Next, a quick walkthrough of installing the k3s server and agent. The test environment:

| role | OS | platform | IP | workload |
| --- | --- | --- | --- | --- |
| master | Ubuntu 22.04 | VM amd64 | 192.168.0.105 | ros2-talker |
| agent | Ubuntu 22.04 | VM arm64 | 192.168.0.244 | ros2-listener |
Docker needs to be installed at version 20.10 using the script given on the k3s website; as mentioned in the previous article, a wrong version may cause images not to be found.
First, add host entries on both devices:

```shell
sudo -i
vim /etc/hosts
```
Add the following two lines to the hosts file:

```
192.168.0.105 master
192.168.0.244 agent
```
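As a quick sanity sketch of those entries, written to a temporary file so it is safe to run anywhere (on the actual machines the lines go into /etc/hosts instead):

```shell
# Write the two cluster host entries from this walkthrough to a temp file
HOSTS_FILE=$(mktemp)
cat >> "$HOSTS_FILE" <<'EOF'
192.168.0.105 master
192.168.0.244 agent
EOF
grep -c '^192\.168\.0\.' "$HOSTS_FILE"   # both entries present -> prints 2
```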
How to change the hostname was covered in the previous article. Next, install k3s on each of the two devices.

On the k3s server:
```shell
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --token 12345 --docker" sh -s -
```
On the k3s agent:
```shell
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="agent --server https://192.168.0.105:6443 --token 12345 --docker" sh -s -
```
Port 6443 must be given explicitly, otherwise the agent cannot connect to the server.
The result:

```
root@master:~# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   13h   v1.28.5+k3s1
agent    Ready    <none>                 13h   v1.28.5+k3s1
```
As described in the previous article, the ros2 node image needs to be added to Docker on both devices.
Next, we need to edit the deployment.yaml file. Before that, however, the ros2 environment has to be configured (I tried several approaches; the one below is relatively convenient). Add a ros2-config.yaml file:
```yaml
# Fast-DDS specific configuration for ROS 2 environment variables
apiVersion: v1
kind: ConfigMap
metadata:
  name: fastdds-config
  namespace: default
data:
  ROS_DOMAIN_ID: "5"
  RMW_IMPLEMENTATION: rmw_fastrtps_cpp
```
Run:

```shell
kubectl apply -f ros2-config.yaml
```
This puts the ros2 DOMAIN_ID and DDS implementation into a Kubernetes ConfigMap. The result:

```
root@master:~# kubectl get configmap
NAME               DATA   AGE
kube-root-ca.crt   1      13h
fastdds-config     2      13h
```
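As a side note, the same ConfigMap could also be created imperatively, without a YAML file; a sketch, using the same names and values as above:

```shell
# Equivalent one-off creation of the ConfigMap used by both deployments
kubectl create configmap fastdds-config \
  --from-literal=ROS_DOMAIN_ID=5 \
  --from-literal=RMW_IMPLEMENTATION=rmw_fastrtps_cpp
```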
Now edit the deployment.yaml file:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ros2-talker
  labels:
    app: ros2-talker
spec:
  selector:
    matchLabels:
      app: ros2-talker
  template:
    metadata:
      labels:
        app: ros2-talker
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/hostname: master
      containers:
      - name: ros2-talker
        image: ros2_example:latest
        envFrom:
        - configMapRef:
            name: fastdds-config
        command: ["/bin/sh", "-c"]
        args: [". /opt/ros/humble/setup.sh && . install/local_setup.sh && ros2 run my_ros2_package talker"]
        imagePullPolicy: IfNotPresent
        tty: true
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ros2-listener
  labels:
    app: ros2-listener
spec:
  selector:
    matchLabels:
      app: ros2-listener
  template:
    metadata:
      labels:
        app: ros2-listener
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/hostname: agent
      containers:
      - name: ros2-listener
        image: ros2_example:latest
        envFrom:
        - configMapRef:
            name: fastdds-config
        command: ["/bin/sh", "-c"]
        args: [". /opt/ros/humble/setup.sh && . install/local_setup.sh && ros2 run my_ros2_package listener"]
        imagePullPolicy: IfNotPresent
        tty: true
```
Run:

```shell
kubectl apply -f deployment.yaml
```
Note the hostNetwork parameter here: for ros2 to communicate between containers on different devices, the containers must use the host's network. This is the same reason we add --network=host --ipc=host --pid=host when running ros2 docker containers directly on different devices.
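For comparison, a sketch of that bare-docker equivalent on a single host, with the image name, environment values, and package name taken from the examples above:

```shell
# Run the talker directly with docker, sharing the host's network, IPC, and
# PID namespaces (the deployment's hostNetwork: true covers the network part)
docker run -it --rm \
  --network=host --ipc=host --pid=host \
  -e ROS_DOMAIN_ID=5 \
  -e RMW_IMPLEMENTATION=rmw_fastrtps_cpp \
  ros2_example:latest \
  /bin/sh -c ". /opt/ros/humble/setup.sh && . install/local_setup.sh && ros2 run my_ros2_package talker"
```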
The current state of the k3s cluster pods:

```
root@master:~# kubectl get pods
NAME                             READY   STATUS    RESTARTS      AGE
ros2-talker-64ff77c87d-zg9ns     1/1     Running   1 (12h ago)   13h
ros2-listener-546f6cdb78-2vvps   1/1     Running   0             12h
```
Then, from the master, you can enter the ros2-talker container running on master and the ros2-listener container running on agent to check whether the ros2 nodes are working correctly, or view the logs directly with kubectl logs <pod name>.
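Those checks look roughly like this (pod names are from this cluster; yours will differ):

```shell
# Stream the talker's output
kubectl logs -f ros2-talker-64ff77c87d-zg9ns

# Or open a shell inside the pod and inspect the ros2 environment directly
kubectl exec -it ros2-talker-64ff77c87d-zg9ns -- /bin/bash
# inside the container, e.g.:  printenv ROS_DOMAIN_ID RMW_IMPLEMENTATION
```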
The results:
```
root@master:~# kubectl logs ros2-talker-64ff77c87d-zg9ns
[INFO] [1706435392.176366834] [minimal_publisher]: Publishing: "Hello World: 1900"
[INFO] [1706435392.676367804] [minimal_publisher]: Publishing: "Hello World: 1901"
[INFO] [1706435393.176378474] [minimal_publisher]: Publishing: "Hello World: 1902"
[INFO] [1706435393.676374415] [minimal_publisher]: Publishing: "Hello World: 1903"
[INFO] [1706435394.176426054] [minimal_publisher]: Publishing: "Hello World: 1904"
[INFO] [1706435394.676442297] [minimal_publisher]: Publishing: "Hello World: 1905"
[INFO] [1706435395.176579077] [minimal_publisher]: Publishing: "Hello World: 1906"
[INFO] [1706435395.676366825] [minimal_publisher]: Publishing: "Hello World: 1907"
[INFO] [1706435396.176454787] [minimal_publisher]: Publishing: "Hello World: 1908"
[INFO] [1706435396.676478928] [minimal_publisher]: Publishing: "Hello World: 1909"
[INFO] [1706435397.175153734] [minimal_publisher]: Publishing: "Hello World: 1910"
root@master:~# kubectl logs ros2-listener-546f6cdb78-2vvps
[INFO] [1706435378.035473043] [minimal_subscriber]: I heard: "Hello World: 1900"
[INFO] [1706435378.569445876] [minimal_subscriber]: I heard: "Hello World: 1901"
[INFO] [1706435379.081392464] [minimal_subscriber]: I heard: "Hello World: 1902"
[INFO] [1706435379.594442447] [minimal_subscriber]: I heard: "Hello World: 1903"
[INFO] [1706435380.036335949] [minimal_subscriber]: I heard: "Hello World: 1904"
[INFO] [1706435380.618460924] [minimal_subscriber]: I heard: "Hello World: 1905"
[INFO] [1706435381.037352203] [minimal_subscriber]: I heard: "Hello World: 1906"
[INFO] [1706435381.605553638] [minimal_subscriber]: I heard: "Hello World: 1907"
[INFO] [1706435382.051838589] [minimal_subscriber]: I heard: "Hello World: 1908"
[INFO] [1706435382.537467720] [minimal_subscriber]: I heard: "Hello World: 1909"
[INFO] [1706435383.035677202] [minimal_subscriber]: I heard: "Hello World: 1910"
```
The containers running on the two devices:

On the k3s server:
```
root@master:~# docker ps
CONTAINER ID   IMAGE                        COMMAND                  CREATED          STATUS          PORTS     NAMES
99f498a64254   af74bd845c4a                 "entry"                  19 minutes ago   Up 18 minutes             k8s_lb-tcp-443_svclb-traefik-3cc741f7-8rd4j_kube-system_342a0eb8-3844-4a77-af80-cdc7cd45dd9d_1
68e123d2d304   cc365cbb0397                 "/entrypoint.sh --gl…"   19 minutes ago   Up 19 minutes             k8s_traefik_traefik-f4564c4f4-tkfb8_kube-system_9b49f7cd-015c-4d17-abbd-14c5389e4ca3_1
0089638bd8d5   ead0a4a53df8                 "/coredns -conf /etc…"   19 minutes ago   Up 19 minutes             k8s_coredns_coredns-6799fbcd5-8rzsk_kube-system_2313e7e2-cdcc-4de5-a8a7-77fd0d42fd83_1
f88b2ef4e540   817bbe3f2e51                 "/metrics-server --c…"   19 minutes ago   Up 19 minutes             k8s_metrics-server_metrics-server-67c658944b-7clwf_kube-system_6264b960-c666-4926-985c-42ccec12adca_1
87d47464e31e   b29384aeb4b1                 "local-path-provisio…"   19 minutes ago   Up 19 minutes             k8s_local-path-provisioner_local-path-provisioner-84db5d44d9-rvb8s_kube-system_7a5a9220-c1a6-4947-9f85-2f264c526064_1
8f7099b5b014   af74bd845c4a                 "entry"                  19 minutes ago   Up 19 minutes             k8s_lb-tcp-80_svclb-traefik-3cc741f7-8rd4j_kube-system_342a0eb8-3844-4a77-af80-cdc7cd45dd9d_1
22626ee89746   2637c201e7d1                 "/bin/sh -c '. /opt/…"   19 minutes ago   Up 18 minutes             k8s_ros2-talker_ros2-talker-64ff77c87d-zg9ns_default_3426da78-6ec1-4690-893e-33d24094efa8_1
2d4cba0b94ed   rancher/mirrored-pause:3.6   "/pause"                 19 minutes ago   Up 19 minutes             k8s_POD_coredns-6799fbcd5-8rzsk_kube-system_2313e7e2-cdcc-4de5-a8a7-77fd0d42fd83_1
e03bf410c144   rancher/mirrored-pause:3.6   "/pause"                 19 minutes ago   Up 19 minutes             k8s_POD_ros2-talker-64ff77c87d-zg9ns_default_3426da78-6ec1-4690-893e-33d24094efa8_1
9caa849e3ac8   rancher/mirrored-pause:3.6   "/pause"                 19 minutes ago   Up 19 minutes             k8s_POD_traefik-f4564c4f4-tkfb8_kube-system_9b49f7cd-015c-4d17-abbd-14c5389e4ca3_1
aa9a84e967f2   rancher/mirrored-pause:3.6   "/pause"                 19 minutes ago   Up 19 minutes             k8s_POD_svclb-traefik-3cc741f7-8rd4j_kube-system_342a0eb8-3844-4a77-af80-cdc7cd45dd9d_1
3ce51a743a0d   rancher/mirrored-pause:3.6   "/pause"                 19 minutes ago   Up 19 minutes             k8s_POD_metrics-server-67c658944b-7clwf_kube-system_6264b960-c666-4926-985c-42ccec12adca_1
b4591c50d03b   rancher/mirrored-pause:3.6   "/pause"                 19 minutes ago   Up 19 minutes             k8s_POD_local-path-provisioner-84db5d44d9-rvb8s_kube-system_7a5a9220-c1a6-4947-9f85-2f264c526064_1
```
On the k3s agent:
```
root@agent:~# docker ps
CONTAINER ID   IMAGE                        COMMAND                  CREATED                  STATUS                  PORTS     NAMES
34353e26a5ab   dfc5cfc7b6d7                 "entry"                  Less than a second ago   Up Less than a second             k8s_lb-tcp-443_svclb-traefik-3cc741f7-8hpl4_kube-system_da721bba-36cf-4e82-982c-01fb1395debb_3
bb15d9ba1b62   dfc5cfc7b6d7                 "entry"                  Less than a second ago   Up Less than a second             k8s_lb-tcp-80_svclb-traefik-3cc741f7-8hpl4_kube-system_da721bba-36cf-4e82-982c-01fb1395debb_3
e975096b5026   44f117cf6f2d                 "/bin/sh -c '. /opt/…"   Less than a second ago   Up Less than a second             k8s_ros2-listener_ros2-listener-546f6cdb78-2vvps_default_371cdd2d-c688-47e4-9d26-78dac834f9f5_0
49a1ffe7187e   rancher/mirrored-pause:3.6   "/pause"                 Less than a second ago   Up Less than a second             k8s_POD_ros2-listener-546f6cdb78-2vvps_default_371cdd2d-c688-47e4-9d26-78dac834f9f5_0
a61a3aaffb18   rancher/mirrored-pause:3.6   "/pause"                 Less than a second ago   Up Less than a second             k8s_POD_svclb-traefik-3cc741f7-8hpl4_kube-system_da721bba-36cf-4e82-982c-01fb1395debb_1
```
At this point, running simple ros2 nodes as docker containers inside a k3s cluster is complete. Of course, more practical problems will follow, such as how to acquire data from real hardware and how to mount serial ports into the cluster. All of that, however, presumes a working k3s cluster environment; without it, everything else is just talk. If you have other findings or better solutions, please share them.