HoloOcean: An Underwater Robotics Simulator

Journal: IEEE Access (Q1)

Abstract

Due to the difficulty and expense of underwater field trials, a high fidelity underwater simulator is a necessity for testing and developing algorithms. To fill this need, we present HoloOcean, an open source underwater simulator, built upon Unreal Engine 4 (UE4). HoloOcean comes equipped with multiagent support, various sensor implementations of common underwater sensors, and simulated communications support. We also implement a novel sonar sensor model that leverages an octree representation of the environment for efficient and realistic sonar imagery generation. Due to being built upon UE4, new environments are straightforward to add, enabling easy extensions to be built. Finally, HoloOcean is controlled via a simple python interface, allowing simple installation via pip, and requiring few lines of code to execute simulations.

I. INTRODUCTION

An effective simulation environment for autonomous underwater vehicles (AUVs) can accelerate the development of algorithms and applications. This is true for all robotics systems, but is particularly necessary for AUVs where field testing is expensive and high-risk.

There are many modern day applications of AUVs that can dramatically improve scientific knowledge, quality of life, and safety. These applications include inspection of marine infrastructure such as dams, ship hulls, and communication lines, as well as exploration of oceans that can lead to discoveries in the fields of geology, marine biology, and medicine. However, all these applications require complex algorithms to be designed and tested, which can be costly and unreasonably challenging without a virtual environment on which to first develop them.

We present HoloOcean, an open source underwater simulator. It is built upon the reinforcement learning simulator Holodeck [1] and the capabilities of Unreal Engine 4 (UE4) [2]. UE4 in particular provides accurate simulation dynamics, high-fidelity imagery, a mature environment construction editor, and a C++ interface to add custom sensors, agents, etc. Specifically, HoloOcean has the following features:

  1. A simple python interface, allowing for quick installation and effortless use.
  2. Ease of adding new environments. UE4 is a well documented game building engine with many marine and underwater assets already made.
  3. A novel and efficient imaging sonar implementation built upon an octree structure that results in realistic sonar imagery.
  4. Full support for multi-agent missions, including implemented acoustic and optical modem sensors for realistic cooperative simulations.

The paper is organized as follows. Section II reviews current underwater robotics simulators and sonar sensor models. In Section III, we describe our novel sonar imagery algorithm, followed by Section IV where we review the other various implementations and measurement models for common underwater sensors such as a Doppler velocity log (DVL), inertial measurement unit (IMU), GPS, camera, and depth sensor. Multi-agent missions and communications via optical and acoustic modems are described in Section V. Customization of missions and the simple python interface is laid out in Section VI. Various environments made in UE4 are shown in Section VII, and Section VIII summarizes the article and proposes future work.

II. RELATED WORK

Creating a realistic underwater simulator requires many features to be useful for algorithm development including, but not limited to, multi-agent support; realistic sensor simulations; accurate underwater dynamics; ease of use; integration with existing systems; and, preferably, open source [3]. Another requirement is a lack of heavy dependencies. Heavy dependencies, such as Robot Operating System (ROS) [4], can make installation and use cumbersome, particularly when the simulator is only being used for data generation. Various attempts at these features are listed in Table I, but as far as we know there are none that match all of these criteria.

TABLE I: Comparison of common underwater simulators. ✓ denotes that the simulator has a feature, ∗ that it has a limited implementation or is unknown, and × that it does not. There are many other common robotics simulators that are not listed here, but most don't have support for underwater robotics.

UUV Simulator [5] is one of the more mature options, built upon the popular open source robotics simulator Gazebo [6]. It has accurate modeling of hydrostatic and hydrodynamic effects, multi-agent support, a preliminary sonar implementation [7], and is easily configurable. However, it requires the installation of ROS, lacks multi-agent communications, and does not appear to be actively maintained.

UWSim [8] is built on OpenSceneGraph and osgOcean [9]. It also has multi-agent support and is open source, but depends on ROS, is difficult to configure, is not being actively maintained, and has a sonar model that is more akin to a LiDAR.

Built upon USARSim [10], MarineSIM [11] is another simulator for underwater navigation, but is not open source, which limits the possibility for future development. Other work in USARSim includes an implementation of acoustic multi-agent communications [12], but lacks other common underwater sensors.

An efficient and accurate implementation of a multi-beam imaging sonar is also essential for research in underwater perception and localization. The UUV Simulator sonar model leverages a simulated depth camera and GPU computations [7, 13] that appears promising, but has drawbacks due to the depth camera field of view not matching that of a true imaging sonar. Others leverage ray tracing [14] or more accurate acoustic physics [15], which can be computationally inefficient and currently have no integration with existing simulators.

Holodeck [1] is an open source reinforcement learning and robotics simulator. It’s built upon UE4 [2], providing it with high-fidelity imagery, accurate dynamics built upon the PhysX physics engine [16], and a mature community with many environment assets already made. Further, Holodeck has a simple python interface, allowing for easy installation and use on a variety of systems. In this work, we propose HoloOcean that builds on Holodeck and augments it with accurate underwater dynamics, multi-agent communications, a realistic imaging sonar implementation, and other underwater sensor models.

III. IMAGING SONAR

An imaging sonar is a common underwater sensor used to generate imagery of the environment. This imagery can be used for localization, mapping, visualization, or various other algorithms. In this section we cover how an imaging sonar functions and present our algorithm for generating realistic imagery, which is summarized in Fig. 2.

Fig. 2: Pseudocode of our sonar imagery algorithm. Lines 10-17 correspond to recursively searching our octree to find leaves in our field of view, lines 18-22 correspond to removing leaves that may be in shadows, and lines 23-24 to the final computation of the image.

A. Operation

Multi-beam imaging sonar sensors use acoustic waves to capture imagery of their environment. A wave is emitted and upon encountering an object, part of the wave is reflected back to the sonar, where intensity is recorded and beamforming techniques are used to determine the direction of the return. This intensity will be dependent on the surface normal vector it encountered as well as the normal of the impacting beam. This is because a surface that is aligned with the beam will reflect more energy than one that is perpendicular. Time from sending to receiving the wave is also measured, and from this the range r is calculated using the speed of sound underwater.

The sonar reading in a given horizontal direction forms a beam. Each beam has a horizontal angle, known as the azimuth ϕ, and a vertical width, known as the elevation θ, as seen in Fig. 3.

Fig. 3: Geometry of an imaging sonar. Shown are a single beam and the elevation θ, azimuth ϕ, and range r of a single point. All elevation data will be lost as objects are projected onto the azimuth-range plane and then binned accordingly.

Each beam will correspond to a column in the resulting 2D image, while each row will correspond to a range interval with its resulting quantity being given by its echo intensity. In this way, a sonar gives a projection of 3D space onto a 2D image, with the elevation as the dimension that is lost.

B. Projection Model

To simulate sonar imagery, we discretize the environment using an octree implementation. In each leaf of the octree, both location and surface normals are stored. This octree allows us to project the points onto the sonar image, rather than using expensive ray tracing or relying on the square field of view of a depth camera. When octrees are generated, nodes are only added if the box is only partially occupied. This eliminates the need to store areas of free space, as well as areas inside of objects.

To find the octree leaves in our field of view, we first recursively search the root octree to find all mid-level nodes in our field of view. Note, to allow for imaging of other agents during operation, a small octree is made for each agent and updated after each time step. In parallel, we then recursively search the agent octrees along with all previously found mid-level nodes to find all the leaves in the field of view.

Once the leaves are found, the dot product $d$ between the surface unit normal $n_s$ and impact unit normal $n_i$ of each is calculated. This gives us the cosine of the angle $\psi$ between the two normals as follows,

$$d \triangleq n_s \cdot n_i = \cos(\psi) > 0 \implies |\psi| < \frac{\pi}{2}. \quad (1)$$

If the angle is greater than $\frac{\pi}{2}$, this implies the leaf must be on the backside of an object and it isn't kept. Thus, we keep all leaves with a dot product greater than 0.
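The backface test above can be sketched in a few lines of numpy. This is a minimal sketch, not HoloOcean's actual internals; the array layout and example normals are illustrative.

```python
import numpy as np

def cull_backfaces(surface_normals, impact_normals):
    """Keep only octree leaves whose surface faces the sonar.

    surface_normals: (N, 3) unit normals n_s of the leaves
    impact_normals:  (N, 3) unit vectors n_i of the beam hitting each leaf
    Returns the dot products d (cos(psi) per leaf) and a keep mask (d > 0).
    """
    d = np.einsum("ij,ij->i", surface_normals, impact_normals)  # row-wise dot product
    return d, d > 0

# A leaf facing the beam head-on is kept; one on the backside of an object is culled.
n_s = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
n_i = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
d, keep = cull_backfaces(n_s, n_i)
```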

C. Shadows

Once all leaves on the front side of objects are found, we must remove the ones that lie in shadows. To do this, we first place each of the leaves in their corresponding $\phi, \theta$ bin. We have found that an elevation bin size of $0.03^\circ$ provides the best results.

In parallel, each of these bins is sorted by ascending range. We keep the first cluster of a bin. We do this by iterating through the sorted bin, checking if the difference between two adjacent elements’ range value is less than some predefined ϵ. Once a gap larger than ϵ is found, everything after it is then removed, as it is part of a shadow.

There exist more accurate shadowing algorithms, but at a significant computational cost. This algorithm has provided us with a good combination of both accuracy and efficiency.
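The first-cluster shadow heuristic above amounts to a sort followed by a single gap scan. A minimal sketch for one $(\phi, \theta)$ bin (the `eps` value is illustrative):

```python
def first_cluster(ranges, eps=0.5):
    """Keep only the first range cluster of one (phi, theta) bin.

    ranges: range values of the leaves that fell in this bin (any order)
    eps:    gap threshold; a jump larger than eps marks the start of a shadow
    Returns the sorted ranges up to (but excluding) the first gap > eps.
    """
    r = sorted(ranges)
    for i in range(1, len(r)):
        if r[i] - r[i - 1] > eps:
            return r[:i]  # everything past the gap lies in shadow
    return r

# Leaves near 5m form the first cluster; those near 9m are shadowed and dropped.
kept = first_cluster([5.1, 5.0, 5.3, 9.0, 9.1], eps=0.5)
```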

The resulting leaves are then sorted into their corresponding $\phi, r$ bins. The following calculation is then performed to determine the pixel value $z^s_{ij}$ of the $i,j$-th bin,

$$z^s_{ij} = \left(\frac{1}{n}\sum_{k=1}^{n} d_k\right)\left(1 + w^{sm}\right) + w^{sa}, \qquad w^{sm} \sim \mathcal{N}(0, \sigma^{sm}),\ w^{sa} \sim \mathcal{R}(\sigma^{sa}) \quad (2)$$

where $d_k$ was computed previously in eqn. (1), $w^{sa}$ and $w^{sm}$ provide additive and multiplicative noise, respectively, and $\mathcal{R}$ is the Rayleigh distribution. These steps are all summarized in Fig. 2.
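The per-bin intensity of eqn. (2) can be sketched directly in numpy. The noise scales below are illustrative defaults, not HoloOcean's values.

```python
import numpy as np

rng = np.random.default_rng(0)

def pixel_intensity(d, sigma_sm=0.05, sigma_sa=0.05):
    """Pixel value of one (phi, r) bin per eqn. (2).

    d: dot products d_k of the leaves that landed in this bin
    Multiplicative noise w_sm is Gaussian; additive noise w_sa is Rayleigh.
    """
    d = np.asarray(d, dtype=float)
    w_sm = rng.normal(0.0, sigma_sm)   # multiplicative Gaussian noise
    w_sa = rng.rayleigh(sigma_sa)      # additive Rayleigh noise
    return d.mean() * (1.0 + w_sm) + w_sa

# With the noise scales set to zero, the pixel is just the mean dot product.
z = pixel_intensity([0.5, 1.0], sigma_sm=0.0, sigma_sa=0.0)
```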

D. Efficiency & Memory

Unfortunately, as many of our environments are 2km×2km, generating a full octree representation all at once requires an unwieldy amount of time, and storing it all in memory is impossible. To combat this, we generate, cache, and load octree leaves in real time, as follows.

On startup, the root octree down to the mid-level nodes is either made and cached, or loaded from file. Then, each mid-level node within some $r_{max}$ of our vehicle that isn't already cached is created and saved to file.

During the recursive search, when a mid-level node is found, we check if it’s been loaded; if not, then it’s loaded from the cache or made and cached. Each mid-level node that isn’t in the field of view is deleted from memory.

We’ve found that loading from our cache, a directory of json files, is fast and has negligible impact on performance since the number of new mid-level nodes in the field of view each iteration is generally small. In addition, the cache is persistent between simulations, thus it can be reused with each new mission. This method also removes the need to generate the full 2km×2km octree representation on first startup, instead only generating the leaves that are likely to be used.
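The load-or-generate scheme above can be sketched as follows. The node id and field names are illustrative; HoloOcean's actual cache is simply a directory of json files as noted.

```python
import json
from pathlib import Path

def get_mid_level_node(node_id, cache_dir, generate):
    """Load a mid-level octree node from the json cache, or build and cache it.

    node_id:   identifier of the node (illustrative naming)
    cache_dir: directory of json files, persistent between simulations
    generate:  callback that builds the node's leaves from the environment
    """
    path = Path(cache_dir) / f"{node_id}.json"
    if path.exists():
        return json.loads(path.read_text())  # fast path: reuse the cached leaves
    node = generate(node_id)                 # slow path: build once, cache forever
    path.write_text(json.dumps(node))
    return node

# The expensive generator runs only on the first request for a node.
import tempfile
with tempfile.TemporaryDirectory() as cache:
    calls = []
    def build(node_id):
        calls.append(node_id)
        return {"id": node_id, "leaves": []}
    a = get_mid_level_node("node_3_7", cache, build)
    b = get_mid_level_node("node_3_7", cache, build)  # served from the cache
```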

This process produces realistic sonar images, as can be seen in Fig. 4. At a 5.12m mid-level node size, 2cm minimum leaf size, and 50m max distance we can sample 2 sonar images per second, and at 2.56m mid-level, 1cm minimum, 10m distance we can sample about 14 images per second.


Fig. 4: Example of imaging sonar simulation. (a) Shows the environment, with green rays showing an approximation of the field of view of the sonar. The visible range is given by an azimuth of 120◦, elevation of 20◦, and a range from 1 to 50 meters. An octree resolution of 2cm is used. (b) Shows the resulting sonar image, including minor multiplicative and additive noise. The image has 512 azimuth beams and 1024 range bins.

IV. SENSOR MODELS

HoloOcean comes built in with a variety of sensors used in underwater robotics. They've been implemented with real world sensors in mind, and thus come with many configuration options and noise models. Note, all sensors have been configured with their own sensor frame, defined with respect to the robot frame as $R_s^r, t_s^r$. These are represented in the world frame as $R_s^w, t_s^w$ as follows,

$$R_s^w = R_r^w R_s^r, \qquad t_s^w = R_r^w t_s^r + t_r^w \quad (3)$$

Each sensor has various configurations that can be set in a simple json file, as shown in Fig. 6. These configurations include $R_s^r$, $t_s^r$, the sample rate in Hz, covariance for noise models, visualization tools as seen in Fig. 5, and various other sensor-specific configurations. Further, when desired, a Lightweight Communications and Marshalling (LCM) [17] wrapper has been included to allow for an interface with real code. A ROS wrapper could also easily be added and is under active development.

Fig. 5: Demonstration of various sensor visualization tools included in HoloOcean. (a) Shows two vehicles transmitting over optical modem, with their available line of sight highlighted as cones. (b) Shows a sonar simulation with an octree generated at 10cm. Highlighted are the field of view of the sonar in green, and each octree leaf inside the field of view in red. (c) Shows the beams of a DVL highlighted in green.


Fig. 6: Example of a mission configuration. The sensor socket is a predefined frame on the vehicle body where a sensor can be placed. The sensor location and rotation parameters are then used as an offset from the chosen socket.

In this section, we cover the measurement model used for each sensor, as well as various other implementation details.

A. Doppler Velocity Log

A DVL functions by sending out four acoustic waves and upon their return, uses incoming and outgoing wave velocities to calculate the sensor velocity in the sensor frame by leveraging the Doppler effect. We denote the angle of the acoustic waves from the negative z-axis as $\alpha$, and the calculated velocity of the beams as the 4-vector $v_b$. The sensor measurement $z^v$ is then modeled as,

$$z^v = R_b^s (v_b + w^v) = R_w^s v_w + R_b^s w^v, \qquad w^v \sim \mathcal{N}(0, \Sigma^v) \quad (4)$$

$$R_b^s = \begin{bmatrix} \frac{1}{2\sin(\alpha)} & 0 & -\frac{1}{2\sin(\alpha)} & 0 \\ 0 & \frac{1}{2\sin(\alpha)} & 0 & -\frac{1}{2\sin(\alpha)} \\ \frac{1}{4\cos(\alpha)} & \frac{1}{4\cos(\alpha)} & \frac{1}{4\cos(\alpha)} & \frac{1}{4\cos(\alpha)} \end{bmatrix}$$

where $w^v$ is Gaussian noise, $R_b^s$ is the transformation from the beam frame to the sensor frame [18], and $v_w$ is the velocity in the world frame that we pull from UE4.
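A minimal numpy sketch of the measurement model in eqn. (4), assuming an illustrative beam angle; the function name and defaults are ours, not HoloOcean's API.

```python
import numpy as np

def dvl_measurement(v_w, R_w_s, alpha_deg=22.5, cov=None, rng=None):
    """DVL velocity measurement per eqn. (4).

    v_w:       true velocity in the world frame (pulled from the simulator)
    R_w_s:     rotation from the world frame to the sensor frame
    alpha_deg: beam angle from the negative z-axis (illustrative value)
    cov:       optional 4x4 covariance of the per-beam noise w_v
    """
    rng = rng or np.random.default_rng(0)
    a = np.radians(alpha_deg)
    s, c = 1.0 / (2.0 * np.sin(a)), 1.0 / (4.0 * np.cos(a))
    # Beam-to-sensor transformation for the four Janus-configured beams
    R_b_s = np.array([[s, 0, -s,  0],
                      [0, s,  0, -s],
                      [c, c,  c,  c]])
    w_v = rng.multivariate_normal(np.zeros(4), cov) if cov is not None else np.zeros(4)
    return R_w_s @ v_w + R_b_s @ w_v

# Noise-free case: the measurement is the world velocity rotated into the sensor frame.
z = dvl_measurement(np.array([1.0, 0.0, 0.0]), np.eye(3))
```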

Additionally, a DVL also calculates the distance each wave traveled, resulting in a 4-vector of ranges that can be used to assemble a sparse point cloud.

B. Inertial Measurement Unit

An IMU sensor measures angular velocity and linear acceleration in the sensor frame. A time-varying bias is commonly found in both measurements of an IMU, and we include it in the sensor model as follows [19],

$$z^a = R_w^s a_w + b^a + w^a, \qquad w^a \sim \mathcal{N}(0, \Sigma^a)$$
$$z^\omega = \omega + b^\omega + w^\omega, \qquad w^\omega \sim \mathcal{N}(0, \Sigma^\omega)$$
$$b^a_{k+1} = b^a_k + w^{ba}, \qquad w^{ba} \sim \mathcal{N}(0, \Sigma^{ba})$$
$$b^\omega_{k+1} = b^\omega_k + w^{b\omega}, \qquad w^{b\omega} \sim \mathcal{N}(0, \Sigma^{b\omega}) \quad (5)$$

HoloOcean can also be configured to return the bias for ground truth purposes in cases where it’s also being tracked.
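The random-walk bias model of eqn. (5) can be sketched as a small stateful class; the noise scales below are illustrative, not HoloOcean's defaults.

```python
import numpy as np

class ImuModel:
    """IMU measurement with random-walk biases, per eqn. (5)."""

    def __init__(self, sigma_a=0.02, sigma_w=0.01, sigma_ba=1e-4, sigma_bw=1e-4, seed=0):
        self.rng = np.random.default_rng(seed)
        self.b_a = np.zeros(3)   # accelerometer bias b^a
        self.b_w = np.zeros(3)   # gyroscope bias b^omega
        self.sig = (sigma_a, sigma_w, sigma_ba, sigma_bw)

    def step(self, a_s, omega):
        """a_s: true acceleration in the sensor frame; omega: true angular velocity."""
        sa, sw, sba, sbw = self.sig
        z_a = a_s + self.b_a + self.rng.normal(0.0, sa, 3)
        z_w = omega + self.b_w + self.rng.normal(0.0, sw, 3)
        # Both biases drift as a random walk between samples
        self.b_a = self.b_a + self.rng.normal(0.0, sba, 3)
        self.b_w = self.b_w + self.rng.normal(0.0, sbw, 3)
        return z_a, z_w

# With all noise scales zeroed the model returns the true signals unchanged.
imu = ImuModel(sigma_a=0.0, sigma_w=0.0, sigma_ba=0.0, sigma_bw=0.0)
z_a, z_w = imu.step(np.array([0.0, 0.0, 9.81]), np.zeros(3))
```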

C. Depth Sensor

Underwater pressure is directly proportional to depth, and is often used as a z-axis position measurement in the global frame. We measure it as follows, given our global z position is denoted by $p_z$ [20],

$$z^d = p_z + w^d, \qquad w^d \sim \mathcal{N}(0, \sigma^d) \quad (6)$$

D. Camera

An underwater camera is also included that allows for imagery of the environment to be taken from the camera frame. UE4 has long been known for its high-fidelity imagery, providing games with realistic graphics for many years. This can be fine tuned to give photorealistic underwater imagery [21], as is the case in HoloOcean.

E. GPS Sensor

While GPS is not available underwater, when near the surface AUVs can gain connection and receive GPS measurements to aid in localization. We model this by the following, letting global position be denoted as $p$,

$$z^g = p + w^g, \qquad w^g \sim \mathcal{N}(0, \Sigma^g) \quad (7)$$

Further, we only allow GPS measurements to be received when within a certain distance d of the surface, where d can be configured to be a random variable distributed as N (d, Σ).
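The depth-gated GPS model can be sketched as follows. The noise scales, threshold mean, and function name are illustrative assumptions, not HoloOcean's configuration values.

```python
import numpy as np

def gps_measurement(p, depth, sigma=0.5, d_mean=1.0, d_sigma=0.1, rng=None):
    """Noisy GPS fix per eqn. (7), or None when the vehicle is too deep.

    p:     true global position
    depth: current depth below the surface (positive down)
    The reception threshold d is itself drawn from N(d_mean, d_sigma^2),
    so connectivity near the cutoff is stochastic.
    """
    rng = rng or np.random.default_rng(0)
    d = rng.normal(d_mean, d_sigma)      # randomized surface threshold
    if depth > d:
        return None                      # no GPS fix below the threshold
    return p + rng.normal(0.0, sigma, size=3)

fix_deep = gps_measurement(np.zeros(3), depth=50.0)  # well below the surface
fix_near = gps_measurement(np.zeros(3), depth=0.2)   # near the surface
```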

F. Pose Sensor

A pose sensor is also included for use as ground truth. It returns an element of SE(3), of the form,

$$z^p = \begin{bmatrix} R_s^w & t_s^w \\ 0_{1\times 3} & 1 \end{bmatrix} \quad (8)$$

V. MULTI-AGENT MISSIONS & COMMUNICATIONS

HoloOcean also allows for an arbitrary number of agents to be used in a scenario. Adding agents, with any kind of sensors attached, is as simple as adding a few lines to a json file. HoloOcean comes with two agent models: our in-house custom hovering AUV based on the BlueROV2 by BlueRobotics, and a generic AUV. Both follow a simple buoyancy dynamics model, while the hovering AUV has forces applied at the thruster locations, and the generic AUV has a thruster along with fin dynamics as defined in [22].

In many multi-agent missions, communications between agents is essential for cooperative efforts. For underwater robotics, this becomes a difficult task as communications are restricted to optical or acoustic modems. To better simulate these scenarios, these modems are also included in HoloOcean as described below.

A. Acoustic Modem

Our acoustic modem model is based on the capabilities of the Blueprint Subsea SeaTrac X150 [23]. This means that to communicate, an acoustic wave is sent from one beacon to another. In HoloOcean, when a message is sent from an acoustic modem, it's sent with a message type along with its data payload. We check if any messages are sent at the same time; if there are, all messages are dropped to simulate interference of acoustic waves.

We calculate how many time steps it will take for the acoustic message to arrive. Upon arrival, the payload is received, and depending on the message type, a return message is possibly sent. Along with the payload, other data is received such as bearing, range, and depth as specified by the message type.
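The propagation delay above reduces to a small calculation: travel time at the speed of sound, rounded up to whole simulation ticks. A sketch (1500 m/s is a typical underwater speed of sound; the exact value HoloOcean uses is not stated here):

```python
import math

def ticks_until_arrival(distance_m, tick_dt_s, speed_of_sound=1500.0):
    """Number of simulation ticks before an acoustic message arrives.

    distance_m: range between the sending and receiving beacons
    tick_dt_s:  simulated seconds per tick
    """
    travel_time = distance_m / speed_of_sound   # seconds in transit
    return math.ceil(travel_time / tick_dt_s)   # delivered on a whole tick

# A message over 150 m with 0.01 s ticks arrives 10 ticks after it was sent.
n = ticks_until_arrival(150.0, 0.01)
```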

B. Optical Modem

The optical modem is roughly based on the Hydromea LumaX [24]. It operates similarly to the acoustic modem, with a few notable differences. In HoloOcean, when an optical modem sends a message, it first checks to make sure the sending and receiving beacons are oriented properly to see each other and that no object is obstructing the view.

Once a connection is verified, the message and its payload are sent, in this case with no delay.

VI. PYTHON INTERFACE

HoloOcean is essentially built as a python wrapper around a compiled UE4 game binary. This allows for a simple python interface for controlling simulations, as well as easy installation.

A. Installation

Installation is performed by installing the python package; a simple command from within the package then downloads the packaged UE4 binary. Note, as more environments for HoloOcean become available, each environment will be available for download individually as a packaged UE4 binary. First, from the command line, install the python package (python 3.6+ is required) via `pip install holoocean`.

B. Python Interface

HoloOcean’s python interface is modeled after OpenAI Gym [25]. This means controlling the robots can be done in a few lines of code. For example, the following code creates the predefined mission “PierHarbor-Hovering” which includes our custom in-house Hovering AUV loaded with its sensors at the pier environment. It then sends the AUV commands, one for each of its eight thrusters, to push it upwards for 200 ticks. At the end of each tick, a “state” dictionary is returned that has all the sensor information sampled at that time step.
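The code listing itself did not survive extraction; the following is a sketch consistent with the description above. The environment name, thruster count, and tick count come from the text, while the helper function and the thrust value of 10 are illustrative.

```python
import numpy as np

def make_thrust_command(force=10.0, num_thrusters=8):
    # One force value per thruster on the hovering AUV (illustrative magnitude)
    return np.full(num_thrusters, force)

def run_mission(num_ticks=200):
    import holoocean  # requires the installed package and a downloaded environment
    env = holoocean.make("PierHarbor-Hovering")
    command = make_thrust_command()
    states = []
    for _ in range(num_ticks):
        # Each tick returns a "state" dict with all sensor data sampled at that step
        states.append(env.step(command))
    return states
```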

C. Configuring Missions

Configuring missions is easily done by defining a custom configuration in json format, as shown in Fig. 6. Note, the configuration takes in a possible array of agents, as well as an array of sensors objects for each agent. This json, either loaded from a file or inserted directly in the python code, is passed to HoloOcean for mission creation. This allows for painless customization of missions, with any variation of sensors and agents as required.
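Since the configuration can be inserted directly in the python code, it is natural to express it as a dict. The field names below are illustrative (see holoocean.readthedocs.io for the authoritative schema), but mirror the structure described above: an array of agents, each with an array of sensor objects.

```python
# Hypothetical scenario configuration; field names are assumptions for illustration.
scenario = {
    "name": "ExampleMission",
    "world": "PierHarbor",
    "main_agent": "auv0",
    "agents": [
        {
            "agent_name": "auv0",
            "agent_type": "HoveringAUV",
            "sensors": [
                {"sensor_type": "DVLSensor", "socket": "DVLSocket", "Hz": 20},
                {"sensor_type": "IMUSensor", "socket": "IMUSocket", "Hz": 200},
            ],
            "location": [0.0, 0.0, -10.0],
        }
    ],
}
```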

HoloOcean also comes with a number of other features such as headless mode, configurable time steps, frame rate capping, and others. For further information, documentation is available at holoocean.readthedocs.io.

VII. ENVIRONMENTS

To provide a wide range of realistic scenarios, we have built various environments for HoloOcean as well. Currently, these include a dam, pier, and open water environment. We have configured each of these with realistic underwater imagery, as well as large underwater areas for use in multiagent missions. Examples can be seen in Fig. 7.

Fig. 7: Examples of the environments that we have created in UE4 for use in HoloOcean. (a) Shows the full dam environment with size 650m×650m, with (d) showing a closeup of one of the pipes. (b) Shows an overview of the pier environment, including all 3 sizes of piers, with a total size of 2km×2km, and (e) showing a closeup of a pier in the top left of (b). Finally, (c) shows the open water environment which includes a number of sunken submarines and planes for inspection with a total size of 2km×2km, with one of the submarines shown in (f).

As one of the most popular game engines, UE4 has a large marketplace full of environments and meshes for purchase by independent users at reasonable prices. This makes rapid development of high quality environments possible. In addition, UE4 has great documentation and a large community behind it, resulting in an abundance of online resources for new users, significantly lessening the initial learning curve.

HoloOcean has already aided us in our research, providing realistic data for verification of the invariant extended Kalman filter for underwater navigation [26].

VIII. CONCLUSION

Building upon Holodeck and UE4, we created a new open source underwater simulator, allowing for painless building of custom environments from UE4. It also features a simple to install and use python interface. Further, we have implemented a novel imaging sonar model, allowing for realistic real-time imagery. Several common underwater sensors such as a DVL, IMU, camera, GPS, and depth sensor have also been implemented. Multi-agent missions and communications are also supported through acoustic and optical modems. All together, this results in a mature underwater simulator that’s extensible and easy to use. Future work will include additional sonar sensors, a GPU implementation of our imaging sonar algorithm, a ROS wrapper to publish sensor data, additional agents including autonomous surface vehicles, more accurate sensor noise simulations, and additional custom environments.
在Holodeck和UE4的基础上,我们创建了一个新的开源水下模拟器,可以在UE4中轻松构建自定义环境。它还具有一个易于安装和使用的Python接口。此外,我们实现了一种新的成像声纳模型,可以实时生成逼真的图像。一些常见的水下传感器,如DVL、IMU、相机、GPS和深度传感器,也已经实现。通过声学和光学调制解调器还支持多智能体任务和通信。总之,这是一个成熟、可扩展且易于使用的水下模拟器。未来的工作将包括更多的声纳传感器、成像声纳算法的GPU实现、用于发布传感器数据的ROS包装器、包括自主水面船只在内的更多智能体、更精确的传感器噪声模拟,以及更多的自定义环境。
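The conclusion above mentions that simulations are driven through a simple Python interface. As a rough illustration only, a minimal control loop might look like the sketch below; the scenario name `"PierHarbor-Hovering"`, the agent name `"auv0"`, the sensor key `"IMUSensor"`, and the eight-thruster command layout are assumptions for illustration, not details drawn from this text.

```python
# Minimal HoloOcean-style simulation loop (sketch, not the paper's code).
# Names such as "PierHarbor-Hovering", "auv0", and "IMUSensor" are assumed.

def make_command(num_thrusters=8, thrust=10.0):
    """Build a constant thrust command, one value per thruster."""
    return [thrust] * num_thrusters

def run(steps=300):
    import holoocean  # assumed installed via `pip install holoocean`
    with holoocean.make("PierHarbor-Hovering") as env:  # assumed scenario name
        command = make_command()
        for _ in range(steps):
            env.act("auv0", command)  # queue the control command for the agent
            state = env.tick()        # advance physics, get sensor readings
            if "IMUSensor" in state:  # sensor keys depend on the scenario config
                print(state["IMUSensor"])

if __name__ == "__main__":
    try:
        run()
    except ImportError:
        pass  # holoocean not installed; make_command above is still usable
```

The loop mirrors the pattern common to gym-style simulators [25]: build a command, apply it, and step the environment to receive the next sensor dictionary.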

REFERENCES

[1] J. Greaves, M. Robinson, N. Walton, M. Mortensen, R. Pottorff, C. Christopherson, D. Hancock, J. Milne, and D. Wingate, “Holodeck: A high fidelity simulator,” https://github.com/BYU-PCCL/holodeck, 2018.
[2] Epic Games, “Unreal engine,” https://www.unrealengine.com, 2019.
[3] D. Cook, A. Vardy, and R. Lewis, “A survey of AUV and robot simulators for multi-vehicle operations,” in Proc. IEEE/OES Autonomous Underwater Vehicles Conf., Oct. 2014, pp. 1–8.
[4] M. Quigley, K. Conley, B. P. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng, “ROS: an open-source robot operating system,” in ICRA Workshop on Open Source Software, 2009.
[5] M. M. M. Manhães, S. A. Scherer, M. Voss, L. R. Douat, and T. Rauschenbach, “UUV Simulator: A Gazebo-based package for underwater intervention and multi-robot simulation,” in Proc. IEEE/MTS OCEANS Conf. Exhib., Sep. 2016, pp. 1–8.
[6] N. Koenig and A. Howard, “Design and use paradigms for Gazebo, an open-source multi-robot simulator,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots and Syst., Sep. 2004, pp. 2149–2154.
[7] R. Cerqueira, T. Trocoli, G. Neves, S. Joyeux, J. Albiez, and L. Oliveira, “A novel GPU-based sonar simulator for real-time applications,” Computers & Graphics, vol. 68, Nov. 2017.
[8] M. Prats, J. Pérez, J. J. Fernández, and P. J. Sanz, “An open source tool for simulation and supervision of underwater intervention missions,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots and Syst., Oct. 2012, pp. 2577–2582.
[9] “osgOcean,” https://github.com/kbale/osgocean, 2018.
[10] S. Carpin, M. Lewis, J. Wang, S. Balakirsky, and C. Scrapper, “USARSim: A robot simulator for research and education,” IEEE Robot. and Automation Letters, Apr. 2007.
[11] P. Namal Senarathne, W. S. Wijesoma, K. W. Lee, B. Kalyan, M. Moratuwage, N. M. Patrikalakis, and F. S. Hover, “MarineSIM: Robot simulation for marine environments,” in Proc. IEEE/MTS OCEANS Conf. Exhib., 2010, pp. 1–5.
[12] A. Sehgal and D. Cernea, “A multi-AUV missions simulation framework for the USARSim robotics simulator,” in Mediterranean Conference on Control and Automation, 2010, pp. 1188–1193.
[13] D.-H. Gwon, J. Kim, M. H. Kim, H. G. Park, T. Y. Kim, and A. Kim, “Development of a side scan sonar module for the underwater simulator,” in 2017 14th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Jun. 2017, pp. 662–665.
[14] K. J. DeMarco, M. E. West, and A. M. Howard, “A computationally-efficient 2D imaging sonar model for underwater robotics simulations in Gazebo,” in Proc. IEEE/MTS OCEANS Conf. Exhib., Oct. 2015, pp. 1–7.
[15] A. Rascon, “Forward-looking sonar simulation model for robotic applications,” Thesis, Monterey, CA: Naval Postgraduate School, Sep. 2020.
[16] NVIDIA, “PhysX,” https://github.com/NVIDIAGameWorks/PhysX.
[17] A. S. Huang, E. Olson, and D. C. Moore, “LCM: Lightweight Communications and Marshalling,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots and Syst., Oct. 2010, pp. 4057–4062.
[18] J. Y. Taudien, “Doppler velocity log algorithms: detection, estimation, and accuracy,” Ph.D. dissertation, Pennsylvania State University, 2018.
[19] R. Hartley, M. Ghaffari, R. M. Eustice, and J. W. Grizzle, “Contact-aided invariant extended Kalman filtering for robot state estimation,” Int. J. Robot. Res., vol. 39, no. 4, pp. 402–430, 2020.
[20] S. Aravamudhan and S. Bhansali, “Reinforced piezoresistive pressure sensor for ocean depth measurements,” Sensors and Actuators A: Physical, vol. 142, no. 1, pp. 111–117, 2008.
[21] T. Manderson, I. Karp, and G. Dudek, “Aqua underwater simulator,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots and Syst., 2018.
[22] Z. J. Harris and L. L. Whitcomb, “Preliminary Evaluation of Cooperative Navigation of Underwater Vehicles without a DVL Utilizing a Dynamic Process Model,” in Proc. IEEE Int. Conf. Robot. and Automation, 2018, pp. 4897–4904.
[23] Blueprint, “SeaTrac X150 USBL Transponder Beacon,” https://www.blueprintsubsea.com/pages/product.php?PN=BP00795, 2020.
[24] Hydromea, “LumaX,” https://www.hydromea.com/underwater-wireless-communication/, 2020.
[25] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba, “OpenAI gym,” CoRR, vol. abs/1606.01540, 2016.
[26] E. Potokar, K. Norman, and J. G. Mangelson, “Invariant Extended Kalman Filtering for Underwater Navigation,” IEEE Robot. and Automation Letters, vol. 6, no. 3, Jul. 2021.
