Terrain-Aided Navigation for Underwater Robots (English)
Research on Underwater Robot Navigation Based on Sliding Mode Control

1. Overview of Underwater Robot Navigation Technology

Underwater robots, also known as unmanned underwater vehicles (UUVs), are robotic systems that can operate underwater either autonomously or under remote control.
Driven by growing demand in marine resource development, ocean science, and underwater engineering inspection, underwater robot technology has advanced rapidly.
Navigation is one of the key enabling technologies of an underwater robot and directly determines how efficiently and safely it can carry out its missions.

1.1 Core Properties of Underwater Navigation

The core properties of an underwater navigation system are accuracy, robustness, adaptability, and autonomy.
Accuracy means the system provides reliable position information so that the vehicle can reach its intended location in a complex underwater environment.
Robustness means the system maintains stable navigation performance despite uncertainties such as environmental changes and sensor faults.
Adaptability means the system can automatically adjust its navigation strategy to changing conditions and mission requirements.
Autonomy means the system can make decisions on its own and navigate complex missions without operator intervention.
1.2 Application Scenarios

Underwater navigation technology is applied in a wide range of scenarios, including but not limited to the following:
- Seafloor topographic mapping: with accurate navigation, an underwater robot can chart seafloor terrain and provide baseline data for marine geological research.
- Marine resource exploration: the robot can navigate to designated areas to prospect for resources such as oil, gas, and minerals.
- Underwater structure inspection: the robot can navigate to submerged structures such as bridge foundations and pipelines for inspection and maintenance.
- Underwater search and rescue: in emergencies such as shipwrecks, the robot can navigate to the accident site to assist in search and rescue.
2. Underwater Robot Navigation Based on Sliding Mode Control

Sliding mode control (SMC) is a nonlinear control strategy whose strong robustness and fast response have led to its wide use in underwater robot navigation systems.

2.1 Basic Principle of Sliding Mode Control

The basic idea of SMC is to design a sliding surface such that, once the system state lies on that surface, the closed-loop system exhibits the desired dynamics.
An appropriately designed control law then drives the state onto the sliding surface and keeps it there, achieving effective control of the system.
Because the sliding motion is insensitive to parameter variations and external disturbances, SMC is a promising approach for underwater robot navigation.

2.2 Applications of SMC in Underwater Navigation

In underwater robot navigation, SMC can be applied to path following, obstacle avoidance, attitude control, and related tasks.
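The sliding-surface idea can be made concrete with a small example. The sketch below is a minimal first-order sliding mode heading controller, assuming a nominal double-integrator yaw model; the surface slope, switching gain, and boundary-layer width are illustrative values, not parameters of any specific vehicle.

```python
import numpy as np

def smc_heading_control(psi, psi_dot, psi_ref, lam=1.0, k=2.0, phi=0.05):
    """First-order sliding mode heading controller (illustrative sketch).

    psi, psi_dot : current yaw angle [rad] and yaw rate [rad/s]
    psi_ref      : desired yaw angle [rad]
    lam          : slope of the sliding surface s = e_dot + lam * e
    k            : switching gain (must dominate the disturbances)
    phi          : boundary-layer width used to soften the sign() term
    """
    e = psi - psi_ref            # heading error
    e_dot = psi_dot              # psi_ref assumed constant
    s = e_dot + lam * e          # sliding surface
    # Equivalent control for the nominal double-integrator yaw model,
    # plus a switching term smoothed by tanh to reduce chattering.
    u = -lam * e_dot - k * np.tanh(s / phi)
    return u

# Example: regulate heading from 30 deg to 0 deg with a crude Euler simulation.
psi, psi_dot, dt = np.deg2rad(30.0), 0.0, 0.01
for _ in range(2000):
    u = smc_heading_control(psi, psi_dot, psi_ref=0.0)
    psi_dot += u * dt            # nominal model: psi_ddot = u
    psi += psi_dot * dt
print(f"final heading error: {np.degrees(psi):.3f} deg")
```

The tanh term replaces the discontinuous sign function; this is a common practical modification to reduce chattering in the control signal.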
Research on Novel Positioning and Navigation Algorithms for Underwater Robots

As technology continues to advance, underwater robots play an increasingly important role in marine resource exploration, ocean science, and ocean engineering.
Their positioning and navigation algorithms are a key factor in whether they can carry out missions accurately.
This article reviews progress in novel positioning and navigation algorithms for underwater robots and discusses their application prospects.

1. The Importance of Positioning and Navigation Algorithms

As autonomously operating intelligent systems, underwater robots need accurate positioning and stable navigation algorithms.
Because the underwater environment is complex, traditional positioning and navigation methods cannot fully meet these needs.
Research into novel positioning and navigation algorithms is therefore essential for improving the accuracy and reliability of underwater robots.
2. Inertial Navigation and Sonar-Based Positioning

Inertial navigation is one of the most commonly used methods for underwater robot positioning and navigation.
Using inertial sensors such as accelerometers and gyroscopes, it estimates the robot's position and heading by dead reckoning.
Inertial navigation runs in real time and needs no external signals, but because inertial sensors drift, its accuracy degrades as time goes on.
Combining inertial dead reckoning with sonar measurements is therefore an important way to improve positioning accuracy.
Fusing acoustic information from sonar with the inertial navigation solution can significantly improve the positioning accuracy of an underwater robot.
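As a minimal illustration of this kind of fusion, the sketch below runs a one-dimensional Kalman filter that propagates a dead-reckoned position (which drifts) and corrects it with occasional acoustic position fixes. The noise levels, fix interval, and measurement model are assumptions chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, steps = 0.1, 600
q_drift = 0.05      # assumed dead-reckoning noise level -- illustrative
r_sonar = 0.5       # assumed sonar fix standard deviation (m) -- illustrative

x_true = 0.0
x_est, p_est = 0.0, 1.0          # state estimate and its variance
velocity = 0.5                   # true forward speed (m/s)

for k in range(steps):
    # --- truth and dead-reckoned prediction ---
    x_true += velocity * dt
    x_est += (velocity + rng.normal(0, q_drift)) * dt   # noisy odometry
    p_est += (q_drift * dt) ** 2                        # variance grows without aiding

    # --- occasional acoustic position fix (every 5 s) ---
    if k % 50 == 0:
        z = x_true + rng.normal(0, r_sonar)             # sonar-derived fix
        innov = z - x_est
        s = p_est + r_sonar ** 2                        # innovation variance
        gain = p_est / s                                # Kalman gain
        x_est += gain * innov
        p_est *= (1 - gain)

print(f"final position error: {abs(x_true - x_est):.2f} m "
      f"(1-sigma {np.sqrt(p_est):.2f} m)")
```

Without the periodic fixes the variance grows monotonically; each acoustic update pulls the estimate back toward truth and shrinks the uncertainty, which is the essential behaviour of INS/sonar integration.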
3. Visual Navigation and Image Processing

With the rapid development of computer vision, visual navigation has become a research hotspot in underwater robot positioning and navigation.
An underwater robot can acquire underwater imagery with cameras and analyze it with image processing algorithms to support positioning and navigation.
Visual navigation can estimate position from environmental feature points, for example by building a map of the underwater environment with SLAM (Simultaneous Localization and Mapping) and then localizing the robot against that map.
In addition, advanced image processing techniques such as deep learning can be applied to underwater imagery to improve the robot's perception and scene understanding.
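A basic building block of feature-based visual navigation is detecting and matching feature points between consecutive frames. The sketch below uses OpenCV's ORB detector and a brute-force matcher as one common, lightweight choice; the file names are placeholders and the matching threshold is illustrative.

```python
import cv2

# Placeholder file names -- substitute two consecutive frames from the vehicle camera.
img1 = cv2.imread("frame_prev.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_curr.png", cv2.IMREAD_GRAYSCALE)
if img1 is None or img2 is None:
    raise SystemExit("replace the placeholder frame paths with real images")

orb = cv2.ORB_create(nfeatures=1000)             # ORB: fast binary features
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance is the appropriate metric for ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

good = [m for m in matches if m.distance < 40]   # illustrative threshold
print(f"{len(good)} putative correspondences")

# The matched pixel coordinates would typically feed an essential-matrix or
# SLAM front end to estimate the camera (and hence vehicle) motion.
pts1 = [kp1[m.queryIdx].pt for m in good]
pts2 = [kp2[m.trainIdx].pt for m in good]
```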
4. Wireless Communication and Underwater Acoustic Communication

An underwater robot needs to communicate with the outside world in real time while carrying out its tasks, so communication technology is another important direction in positioning and navigation research.
Conventional radio communication performs very poorly underwater, whereas underwater acoustic communication can transmit data through the water column.
Building a Visualization Model of Seabed Data from a Remotely Operated Vehicle (ROV)

Seabed topography is fundamental reference data for marine geology, marine geophysics, physical oceanography, and marine biology.
The complexity of seabed topography is one of the important factors shaping the distribution of oceanographic variables; it is a major reason why ocean currents are so varied, it influences the origin and properties of water masses, it affects the quantity and diversity of marine resources, and it has an especially direct effect on the spatial distribution and thickness of marine sediment types.
Seabed topographic surveying and visualization are key to interpreting terrain data, and seabed video survey is an important technique for observing seabed topography.
Using an ROV (Remotely Operated Vehicle) for seabed video observation is a highly efficient visualization approach, and the large volume of video and image data an ROV collects has great potential for further data mining.
This article describes ROV video survey methods, reviews a data processing workflow based on industrial software, and explains in detail how 3D visual models can be generated from ROV video and image data; the approach offers marine geological surveys a new means of visual seabed terrain exploration.
1. Background

There are many ways to survey seabed topography. Seabed video survey has been widely applied across marine science and has become one of the important seabed surveying methods.
A seabed video survey uses underwater imaging equipment to make direct visual measurements of seabed targets or local terrain, with the goal of determining the shape, size, position, and nature of the imaged targets, or the relief of the local terrain.
The ROV (Remotely Operated Vehicle) is a remotely controlled underwater vehicle with a degree of onboard intelligence.
Fitted with cameras and multi-function manipulators, and carrying acoustic instruments and specialized tools for a variety of purposes, an ROV can perform many kinds of complex underwater work.
Using the ROV's video to survey the seabed and make direct visual observations of seabed targets is regarded as one of its most important modes of operation.
Traditionally, ROVs have mostly been used for fine-scale work such as visual observation, operating dedicated sensors, and recovering physical samples.
When high-precision sampling is required, sample collection usually depends on target positions supplied by an underwater positioning system and on the live video feed from the cameras.
Because an ROV works at fixed points, the video it collects covers only a limited area; without a comprehensive survey of the work area beforehand, researchers cannot be confident of observing or sampling the most scientifically or technically relevant locations.
Research on Positioning and Navigation Technology for Underwater Robots

With the continuing progress of modern technology, underwater robots are used in an ever wider range of fields, including scientific research, exploration, and rescue.
Positioning and navigation is one of the core technologies of an underwater robot.
Underwater navigation relies mainly on signals such as laser and acoustic waves for positioning, and positioning accuracy is critical to the success of underwater operations.

Development of underwater positioning technology

In the early days, underwater robots lacked mature positioning technology, so seabed survey missions often produced overlapping passes and incomplete maps.
As technology progressed, underwater positioning improved considerably.
In the 1950s, underwater vehicles first used acoustic signals for ranging and positioning, marking a breakthrough in underwater positioning technology.
Later, in the early 21st century, the widespread adoption of the Global Positioning System (GPS) gave strong support to the development of underwater robot navigation, chiefly through surface position fixes.
With the gradual development of sonar, laser, and radio positioning techniques, modern underwater robots now achieve fairly high accuracy and reliability.
Positioning methods for underwater robots

The main positioning methods are inertial navigation, sonar navigation, and visual navigation.
Inertial navigation obtains the robot's position from accelerometers and gyroscopes that measure its motion; because it needs no external supporting equipment, it can provide positioning on its own for extended periods, although its error accumulates over time.
Sonar navigation exploits the propagation of acoustic waves for positioning; since acoustic propagation in seawater is affected by water quality, currents, and waves, it is susceptible to environmental interference.
Visual navigation relies on images collected by cameras, using spatial filtering and target tracking to localize and guide the robot.
Navigation methods for underwater robots

The main navigation methods are point-to-point navigation, autonomous navigation, and cooperative navigation.
In point-to-point navigation, target waypoints are defined and the robot follows a preset path toward the waypoint nearest its current position, using inertial, acoustic, or similar means; a minimal waypoint-following sketch is given after this list.
Autonomous navigation is more highly automated: the robot navigates on its own according to the mission requirements, although in the complex seabed environment it still faces many bottlenecks.
Cooperative navigation is a mode in which several underwater robots navigate together to accomplish a shared goal.
Through cooperation, the vehicles support one another, improving the efficiency and success rate of the mission.
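To make the point-to-point idea concrete, the sketch below implements a very simple 2D waypoint follower: it steers toward the current waypoint and switches to the next one when within an acceptance radius. The kinematic model, gains, and waypoint list are illustrative assumptions, not a specific vehicle's guidance law.

```python
import math

def step_towards_waypoints(x, y, psi, waypoints, wp_index,
                           speed=0.5, k_psi=1.0, accept_radius=1.0, dt=0.1):
    """Advance a simple unicycle-model vehicle one time step toward its waypoints."""
    if wp_index >= len(waypoints):
        return x, y, psi, wp_index                      # mission complete
    wx, wy = waypoints[wp_index]
    if math.hypot(wx - x, wy - y) < accept_radius:
        return x, y, psi, wp_index + 1                  # waypoint reached, switch
    desired_psi = math.atan2(wy - y, wx - x)            # bearing to waypoint
    err = math.atan2(math.sin(desired_psi - psi),       # wrap heading error to [-pi, pi]
                     math.cos(desired_psi - psi))
    psi += k_psi * err * dt                             # proportional heading control
    x += speed * math.cos(psi) * dt
    y += speed * math.sin(psi) * dt
    return x, y, psi, wp_index

# Example run over three illustrative waypoints.
state = (0.0, 0.0, 0.0, 0)
route = [(10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
for _ in range(3000):
    state = step_towards_waypoints(*state[:3], route, state[3])
print(f"final position ({state[0]:.1f}, {state[1]:.1f}), waypoints completed: {state[3]}")
```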
Navigation and Positioning Technology for Autonomous Underwater Vehicles

Du Xiaohai, Naval Equipment Department (originally published in Science and Technology, 2022, No. 18)
Abstract: As a principal platform for developing and exploiting marine resources, the autonomous underwater vehicle (AUV) needs accurate positioning information when carrying out its missions.
Existing AUVs mainly rely on a strapdown inertial navigation system (SINS), supplemented by acoustic navigation and geophysical-field matching navigation.
This paper briefly introduces the basic principles, advantages and disadvantages, and applicable scenarios of the underwater navigation modes, and discusses the key technologies in each mode for improving the accuracy and stability of integrated navigation.
By analyzing the problems at the current stage, it looks ahead to future trends in underwater navigation.
Keywords: autonomous underwater vehicle; intelligent navigation; intelligent positioning
This paper surveys the key technologies of mainstream AUV underwater navigation, including DVL velocity measurement, LBL/SBL/USBL acoustic positioning and navigation, terrain-aided navigation, geomagnetic-aided navigation, gravity-aided navigation, and cooperative navigation. It introduces the basic principles and development of each technology, analyzes and summarizes the key problems and technical difficulties of each in underwater autonomous navigation, and concludes with an outlook on the future of AUV underwater navigation.
1 SINS/DVL Positioning

The Doppler velocity log (DVL) is a navigation instrument that measures vehicle velocity using the acoustic Doppler effect.
Depending on the distance between the AUV and the seabed, a DVL operates in one of two modes: bottom tracking and water tracking.
When the AUV is close enough to the seabed for the acoustic beams to reach it, the DVL works in bottom-track mode; when the distance exceeds the instrument's range and the beams cannot reach the bottom, it falls back to water-track mode.
Depending on the number of transmitted beams, DVLs are classified as single-beam, dual-beam, or four-beam instruments.
1.1 SINS/DVL Alignment

Inertial navigation provides the AUV with real-time attitude, velocity, and position information.
However, initial alignment must be performed before use, and the quality of the initial alignment largely determines the final accuracy of the integrated solution.
Usually, an AUV receives GPS signals for initial alignment while moored or sailing on the surface.
In some mission scenarios, however, the AUV must complete initial alignment while moving underwater, and many researchers have therefore proposed DVL-aided moving-base (in-motion) alignment methods.
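One simple way to picture the alignment problem is velocity matching: while GPS is still available at the surface, the heading can be initialized by comparing the GPS-derived velocity in the navigation frame with the DVL-measured velocity in the body frame. The sketch below shows only this coarse idea; it ignores lever arms, sensor biases, and attitude components other than heading, all of which a real SINS/DVL alignment must handle.

```python
import numpy as np

def coarse_heading_from_velocity(v_gps_ne, v_dvl_xy):
    """Coarse heading estimate by matching GPS (navigation-frame) and DVL (body-frame) velocity.

    v_gps_ne : (north, east) velocity over ground from GPS at the surface (m/s)
    v_dvl_xy : (surge, sway) velocity from the DVL in the body frame (m/s)
    """
    course = np.arctan2(v_gps_ne[1], v_gps_ne[0])       # direction of travel over ground
    drift = np.arctan2(v_dvl_xy[1], v_dvl_xy[0])        # direction of travel in the body frame
    psi = course - drift                                # heading = course minus body-frame drift angle
    return (psi + np.pi) % (2 * np.pi) - np.pi          # wrap to [-pi, pi]

# Illustrative numbers: travelling north-east over ground with a small sway component.
print(f"coarse heading: {np.degrees(coarse_heading_from_velocity((1.0, 1.0), (1.40, 0.1))):.1f} deg")
```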
A Survey of Underwater Aided Navigation

This article reviews the state of research in underwater aided navigation systems and analyzes the aided-navigation algorithms in common use, with particular attention to underwater terrain-aided navigation. Terrain-aided navigation is already a widely used underwater navigation technique, and as underwater robots develop it will attract ever more attention from researchers; the article also discusses future research hotspots in terrain-aided navigation.
Keywords: underwater navigation; terrain-aided navigation; survey

1 Overview

Today's air and land navigation systems depend heavily on GPS, which provides accurate and continuous position information over wide areas.
GPS is therefore used on all kinds of mobile platforms, including aircraft, ground vehicles, and other robotic systems.
Nevertheless, there are special environments in which GPS cannot be used and other navigation means must be considered.
Such environments include underwater, in space, underground, indoors, and anywhere else GPS reception is restricted (for example, jammed or shielded areas in wartime).
In water especially, GPS and other electromagnetic sensors are unusable, so alternative aided positioning methods are required.
A variety of underwater positioning methods exist today, based on acoustics, optics, geomagnetism, and terrain.
As marine resources draw intense attention from many nations and maritime disputes continue, coastal countries are investing heavily in underwater robotic equipment.
Underwater navigation is a key component of an underwater robot; because of the special underwater environment, GPS cannot be used, and underwater aided navigation is needed instead.
With advances in science and technology, various underwater navigation techniques have played important roles in different applications and are attracting increasing attention.
When GPS is unavailable, underwater robot positioning techniques fall into two categories: dead reckoning and aided navigation.
Dead reckoning computes the robot's position mainly from velocity and acceleration sensors; because measurement errors are integrated, the result drifts, and the error grows with time.
A typical dead-reckoning relation has the form
Δx_n = R(q_b) v_b Δt,  Δq_b = T(q_b) ω_b Δt,  (1)
where Δx_n and Δq_b are the changes in the robot's position and orientation, R(q_b) and T(q_b) are the rotation and attitude-kinematics matrices, and v_b, a_b, and ω_b denote the body-frame velocity, acceleration, and angular rate (the velocity v_b may itself be obtained by integrating the measured acceleration a_b).
Here n denotes the n-th inertial-navigation time step and b denotes the robot body frame.
In practice, a robot typically computes its position by fusing accelerometer, gyroscope, magnetometer, and velocity-sensor information.
The navigation sensors commonly used at sea include pressure (depth) sensors, magnetic compasses, gyroscopes, accelerometers, IMUs (Inertial Measurement Units), AHRSs (Attitude, Heading and Reference Systems), and DVLs (Doppler Velocity Logs); their operating principles differ, and so does their measurement accuracy.
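The dead-reckoning relation in (1) can be implemented directly. The sketch below integrates body-frame velocity and gyro yaw rate into a planar position and heading estimate; the sensor log used here is synthetic and the noise levels are assumptions for illustration only.

```python
import numpy as np

def dead_reckon(v_body, yaw_rate, dt, x0=np.zeros(2), psi0=0.0):
    """Integrate body-frame velocity and yaw rate into planar position/heading.

    v_body   : (N, 2) array of [surge, sway] velocities in the body frame (m/s)
    yaw_rate : (N,) array of yaw rates (rad/s)
    """
    x, psi = x0.astype(float).copy(), psi0
    track = [x.copy()]
    for v, w in zip(v_body, yaw_rate):
        psi += w * dt                                   # planar form of dq_b = T(q_b) w_b dt
        c, s = np.cos(psi), np.sin(psi)
        R = np.array([[c, -s], [s, c]])                 # body-to-navigation rotation R(q_b)
        x = x + R @ v * dt                              # dx_n = R(q_b) v_b dt
        track.append(x.copy())
    return np.array(track), psi

# Synthetic log: constant 1 m/s surge while turning gently -- illustrative only.
N, dt = 600, 0.1
v_log = np.tile([1.0, 0.0], (N, 1)) + np.random.default_rng(1).normal(0, 0.02, (N, 2))
w_log = np.full(N, np.deg2rad(2.0)) + np.random.default_rng(2).normal(0, 0.001, N)
path, heading = dead_reckon(v_log, w_log, dt)
print(f"end position {path[-1]}, heading {np.degrees(heading):.1f} deg "
      f"(errors accumulate because sensor noise is integrated)")
```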
Experiment Report on Underwater Robot Positioning and Navigation Technology

1. Introduction

Underwater robots play an increasingly important role in ocean exploration, resource development, and scientific research.
Positioning and navigation technology is key to enabling their autonomous operation and precise manipulation.
This experiment studies and evaluates different positioning and navigation technologies for underwater robots, providing a reference for their practical application.

2. Objectives

The main objectives of the experiment are to:
1. Compare the accuracy and reliability of different positioning and navigation technologies in the underwater environment.
2. Analyze how each technology performs under different water-quality and current conditions.
3. Explore ways to improve the accuracy and stability of underwater robot positioning and navigation.

3. Equipment and Environment

(1) Underwater robot: the experiment used a [model] underwater robot with [its main functions and characteristics].
(2) Positioning and navigation systems: 1. an inertial navigation system (INS); 2. an acoustic positioning system; 3. a satellite navigation system (as an aid at the surface).
(3) Environment: the experiment was conducted in a large indoor pool measuring [length] x [width] x [depth], in which different water qualities (clear, turbid) and current conditions (slow, fast) were simulated.
4. Method and Procedure

(1) Preparation: 1. Fully inspect and debug the underwater robot to confirm that all functions are normal. 2. Install and calibrate the positioning and navigation systems and set their parameters.
(2) Procedure: 1. Under each combination of water quality and current, launch the robot and have it follow a preset trajectory. 2. Simultaneously record data from the inertial navigation system, the acoustic positioning system, and (while at the surface) the satellite navigation system.
(3) Data collection and processing: 1. Collect data from each positioning and navigation system in real time during the experiment. 2. Pre-process the collected data with filtering and noise reduction (a minimal filtering sketch follows this section). 3. Analyze the data with dedicated algorithms and software to obtain positioning and navigation accuracy and error.
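As one simple example of the pre-processing mentioned above, the sketch below applies a moving-average filter to a noisy measurement log; the window length and synthetic data are illustrative.

```python
import numpy as np

def moving_average(signal, window=5):
    """Smooth a 1D measurement log with a centred moving-average filter."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

# Synthetic noisy range log -- illustrative only.
t = np.linspace(0, 10, 200)
raw = 5.0 + 0.5 * np.sin(0.6 * t) + np.random.default_rng(3).normal(0, 0.15, t.size)
smoothed = moving_average(raw, window=9)
print(f"raw std {raw.std():.3f} m -> smoothed std {smoothed.std():.3f} m")
```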
5. Results and Analysis

(1) Inertial navigation system: 1. Over short periods, the INS provided fairly accurate position and attitude information. 2. As time passed, accumulated errors gradually degraded its positioning accuracy.
(2) Acoustic positioning system: 1. In clear water and slow currents, the acoustic system performed well, with high positioning accuracy. 2. In turbid water and fast currents, however, acoustic propagation was disturbed and accuracy dropped.
(3) Satellite navigation (surface aid): at the surface, the satellite system provided very accurate position information and was effective for calibrating and correcting the underwater robot's position.
A Brief Introduction to Underwater Robot Navigation Methods

2. Dead Reckoning Navigation Systems

Dead reckoning is one of the important navigation methods for intelligent underwater robots. It was proposed as early as the 16th century, although at that time it was rarely applied underwater. In underwater navigation it is the most basic method; Cotter defined it as "the navigation process of deducing the position at the next instant from a given initial position and the vehicle's speed, course, and elapsed time at that point."
Dead reckoning is simple and economical and remains an important underwater navigation technique. Equipped only with a depth gauge, a speed log, and attitude sensors, and given an initial position, an underwater robot can run the reckoning and obtain a reliable, real-time autonomous navigation solution of a certain accuracy [14]-[15]. However, the accuracy of dead reckoning is limited: it depends strongly on the measurement accuracy of the sensors, it accumulates error over time, and it is fairly sensitive to sea conditions. The attitude sensor and the speed log are the two most important sensors in a dead-reckoning system. Attitude sensing currently relies mainly on fibre-optic gyroscopes, which are more accurate and more compact than conventional compasses but also very expensive. Speed is measured mainly with Doppler velocity logs (DVLs); the United States, the United Kingdom, and other developed countries have built high-accuracy DVLs, for example the 3040 and 3050 models from the US company EDO, with a velocity accuracy of about 0.2%, and the COVELIA from the UK company MA, whose maximum absolute speed error is below 0.005 kn. A DVL's operating range scales with its size, so in practice the instrument should be chosen according to the task [3].
9. Visual Navigation

With increases in computing power and advances in image processing, acoustic or optical "vision" can now be used to help navigate underwater robots. Common visual navigation sensors include imaging sonar, cameras, and underwater television.
Visual navigation has attracted attention in many developed countries. The US AUSS underwater vehicle, an early high-performance AUV, carried a forward-looking sonar, a still camera, and an underwater video camera to provide auxiliary information for navigation. The Australian Kambara underwater robot carried an optical vision system consisting of a Pulnix TMC-73 camera and a Sony EVI-D30 camera. However, the ocean environment is complex: light attenuates quickly in water, so optical vision has limited range, while sound travels much farther but imaging sonar is easily disturbed by ambient ocean noise. Visual navigation therefore still needs further technical progress before it can be widely applied in practice.
Advanced Robotics,Vol.15,No.5,pp.533–549(2001)ÓVSP and Robotics Society of Japan2001.Full paper Towards terrain-aided navigation for underwater roboticsSTEFAN WILLIAMS¤,GAMINI DISSANA YAKEand HUGH DURRANT-WHYTEAustralian Centre for Field Robotics,Department of Mechanical and Mechatronic Engineering, University of Sydney,NSW2006,AustraliaReceived27July2000;accepted19November2000Abstract—This paper describes an approach to autonomous navigation for an undersea vehicle that uses information from a scanning sonar to generate navigation estimates based on a simultaneous localization and mapping algorithm.Development of low-speed platform models for vehicle control and the theoretical and practical details of mapping and position estimation using sonar are provided. An implementation of these techniques on a small submersible vehicle‘Oberon’are presented.Keywords:Terrain-aided navigation;localization;mapping;uncertainty;autonomous underwater vehicle.1.INTRODUCTIONCurrent work on undersea vehicles at the Australian Centre for Field Robotics concentrates on the development of terrain-aided navigation techniques,sensor fusion and vehicle control architectures for real-time platform control.Position and attitude estimation algorithms that use information from scanning sonar to complement a vehicle dynamic model and unobservable environmental disturbances are invaluable in the subsea environment.Key elements of the current research work include the development of sonar feature models,the tracking and use of these models in mapping and position estimation,and the development of low-speed platform models for vehicle control.While many land-based robots use GPS or maps of the environment to provide accurate position updates for navigation,a robot operating underwater does not typically have access to this type of information.In underwater scienti c missions, a priori maps are seldom available and other methods for localisation must be considered.Many underwater robotic systems rely on xed acoustic transponders that are surveyed into the robot’s work area[1].These transponders are then¤To whom correspondence should be addressed.E-mail:stefanw@.au534S.Williams et al.interrogated to triangulate the position of the vehicle.The surveying of these transponders can be a costly and time consuming affair—especially at the depths at which these vehicles often operate and their performance can vary with conditions within the water column in which the vehicle is operating.As an alternative to beacon-based navigation,a vehicle can use its on-board sensors to extract terrain information from the environment in which it is operating. One of the key technologies being developed in the context of this work is an algorithm for Simultaneous Localization and Map Building(SLAM)to estimate the position of an underwater vehicle.SLAM is the process of concurrently building up a feature based map of the environment and using this map to obtain estimates of the location of the vehicle[2–6].In essence,the vehicle relies on its ability to extract useful navigation information from the data returned by its sensors. The robot typically starts at an unknown location with no a priori knowledge of landmark locations.From relative observations of landmarks,it simultaneously computes an estimate of vehicle location and an estimate of landmark locations. 
While continuing in motion,the robot builds a complete map of landmarks and uses these to provide continuous estimates of the vehicle location.The potential for this type of navigation system for subsea robots is enormous considering the dif culties involved in localization in underwater environments.This paper presents the results of the application of SLAM to estimate the motion of an underwater vehicle.This work represents the rst instance of a deployable underwater implementation of the SLAM algorithm.Section2introduces the Oberon submersible vehicle developed at the Centre and brie y describes the sensors and actuators used.Section3summarizes the stochastic mapping algorithm used for SLAM,while Section4presents the feature extraction and data association techniques used to generate the observations for the SLAM algorithm.In Section5, a series of trials are described and the results of applying SLAM during eld trials in a natural terrain environment along Sydney’s coast are presented.Finally,Section 6concludes the paper by summarizing the results and discussing future research topics as well as on-going work.2.THE OBERON VEHICLEThe experimental platform used for the work reported in this paper is a mid-size submersible robotic vehicle called Oberon designed and built at the Australian Cen-tre for Field Robotics(see Fig.1).The vehicle is equipped with two scanning low-frequency terrain-aiding sonars and a color CCD camera,together with bathy-ometric depth sensors,a ber optic gyroscope and a magneto-inductive compass with integrated two-axis tilt sensor[7].This vehicle is intended primarily as a re-search platform upon which to test novel sensing strategies and control methods. Autonomous navigation using the information provided by the vehicle’s on-board sensors represents one of the ultimate goals of the project[8].Towards terrain-aided navigation535Figure1.Oberon at sea.3.FEATURE-BASED POSITION ESTIMATIONThis section presents a feature based localisation and mapping technique used for generating vehicle position estimates.By tracking the relative position between thevehicle and identi able features in the environment,both the position of the vehicle and the position of the features can be estimated simultaneously.The correlation information between the estimates of the vehicle and feature locations is maintained to ensure that consistent estimates of these states are generated.3.1.The estimation processThe localization and map building process consists of a recursive,three-stage procedure comprising prediction,observation and update steps using an extended Kalman lter(EKF)[3].The Kalman lter is a recursive,least-squares estimator and produces at time k a minimum mean-squared error estimate O x.k j k/of the state x.k/given a series of observations,Z k D[z.1/:::z.k/]:O x.k j k/D E£x j Z k¤:(1) The lter fuses a predicted state estimate O x.k j k¡1/with an observation z.k/ofthe state x.k/to produce the updated estimate O x.k j k/.For the SLAM algorithm the EKF is used to estimate the pose of the vehicle x v.k/along with the positions of the N observed features x i.k/;i D1:::N.In the current implementation,the vehicle pose is made up of the two-dimensional position.x v;y v/and orientation Ãv of the vehicle.An estimate of vehicle ground speed,V v,slip angle,°v,and the gyro rate bias,PÃbias,is also generated by the algorithm.The‘slip angle’°v is the angle between the vehicle axis and the direction of the velocity vector.Although the thrusters that drive the vehicle are oriented in the 
direction of the vehicle axis,the slip angle is often non-zero due to disturbances caused by the deployed tether and currents.A schematic diagram of the vehicle model is shown in Fig.2.3.1.1.Prediction.The prediction stage uses a model of the motion of the vehicle to predict the vehicle position,O x v.k j k¡1/,at instant k given the information536S.Williams et al.Figure 2.The vehicle model currently employed with the submersible vehicle.The positioning lter estimates the vehicle position,.X v ;Y v /,orientation,Ãv ,velocity,V v and slip angle °v .The frame of reference used is based on the north-east-down alignment commonly used in aeronauticalengineering applications.The x-axis is aligned with the compass generated north reading.available to instant k ¡1.A constant acceleration model,shown in (2),is used for this purpose:P x vD V v cos .Ãv C °v /C v x ;P y vD V v sin .Ãv C °v /C v y ;(2)P Ãv D P Ãgyro ¡P Ãbias C v Ã;P Vv D v V ;P °v D v °;P Ãbias D v bias ;where v x ,v y ,v Ã,v V ,v °and v bias are assumed to be zero-mean,temporally uncorrelated gaussian process noise errors with variance ¾2x ,¾2y ,¾2Ã,¾2V ,¾2°and ¾2bias respectively.The standard deviations for these noise parameters are shown in Table 1.The rate of change of vehicle ground speed,P V v ,and slip angle,P °v ,are assumed to be driven by white noise.The ber-optic gyroscope measures the vehicle yaw rate and is used as a control input to drive the orientation estimate.Given the small submerged inertia,relatively slow motion and large drag-coef cients induced by the open frame structure of the vehicle and the deployed tether,the model described by (2)is able to capture the motion of the vehicle.In order to implement the lter,the discrete form of the vehicle model is used to predict the vehicle state O x v .k j k ¡1/given the previous estimate O x v .k ¡1j k ¡1/.The discrete,non-linear vehicle prediction equation,F v ,is shown in (4):O x v .k j k ¡1/D F v .O x v .k ¡1j k ¡1/;u.k//;(3)Towards terrain-aided navigation537 Table1.SLAM lter parametersSampling period1T0:1sVehicle x process noise SD¾x0:025mVehicle y process noise SD¾y0:025mVehicle heading process noise SD¾Ã0:6oVehicle velocity SD¾v0.01m/sVehicle slip angle SD¾°1:4oGyro bias SD¾bias0:3o/sGyro measurement SD¾gyro0:6o/sCompass SD¾compass2:9oRange measurement SD¾R0:1mBearing measurement SD¾B1:4oSonar range20mSonar resolution0:1mwhere F v is de ned by:O x v D O x v C1T O V v cos.OÃv C O°v/;O y v D O y v C1T O V v sin.OÃv C O°v/;(4)OÃv D OÃv C1T.PÃgyro¡PÃbias/;O V v D O V v;O°v D O°v;O PÃbias D O PÃbias;with the discrete timestamps.k j k¡1/and.k¡1j k¡1/omitted for conciseness. 
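The vehicle prediction equations above translate almost directly into code. The sketch below is a minimal planar version of the SLAM prediction step for the state [x, y, psi, V, gamma, psi_bias] described in the paper; the noise values are taken loosely from Table 1 and the handling of the map states is simplified for brevity, so it should be read as an illustration rather than the authors' implementation.

```python
import numpy as np

DT = 0.1  # sampling period (s), roughly as in Table 1

def slam_predict(x, P, gyro_rate, q_diag, sigma_gyro):
    """One EKF-SLAM prediction step for the vehicle model described in the text.

    x      : state [x, y, psi, V, gamma, psi_bias, f1x, f1y, f2x, f2y, ...]
    P      : covariance of the full state
    q_diag : assumed process-noise variances for the six vehicle states
    """
    n = len(x)
    x_pred = x.copy()
    px, py, psi, V, gamma, bias = x[:6]

    # Discrete constant-velocity vehicle model; map features are static.
    x_pred[0] = px + DT * V * np.cos(psi + gamma)
    x_pred[1] = py + DT * V * np.sin(psi + gamma)
    x_pred[2] = psi + DT * (gyro_rate - bias)

    # Jacobian of the prediction w.r.t. the full state (identity for the features).
    F = np.eye(n)
    F[0, 2] = F[0, 4] = -DT * V * np.sin(psi + gamma)
    F[0, 3] = DT * np.cos(psi + gamma)
    F[1, 2] = F[1, 4] = DT * V * np.cos(psi + gamma)
    F[1, 3] = DT * np.sin(psi + gamma)
    F[2, 5] = -DT

    # Gyro (control) noise enters the heading equation; process noise acts on vehicle states only.
    G = np.zeros((n, 1))
    G[2, 0] = DT
    Q = np.zeros((n, n))
    Q[:6, :6] = np.diag(q_diag)

    P_pred = F @ P @ F.T + (G * sigma_gyro**2) @ G.T + Q
    return x_pred, P_pred

# Illustrative call with a vehicle-only state and no features yet.
x0, P0 = np.zeros(6), np.eye(6) * 0.1
q = [0.025**2, 0.025**2, np.deg2rad(0.6)**2,
     0.01**2, np.deg2rad(1.4)**2, np.deg2rad(0.3)**2]
x1, P1 = slam_predict(x0, P0, gyro_rate=0.02, q_diag=q, sigma_gyro=np.deg2rad(0.6))
```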
The features that are tracked in the map are assumed to be stationary over time.This is not a necessary condition given the formulation of the SLAM algorithm using an EKF.However,tracking moving features in the environment is not considered feasible at this time given the available sensors nor is it likely to aid in navigation since accurate models of bodies moving underwater are not likely to be available.This assumption yields a simple feature map prediction equation,F m:O x i.k j k¡1/D O x i.k¡1j k¡1/:(5) The covariance matrix of the vehicle and feature states,O P.k j k¡1/,is predicted us-ing the non-linear state prediction equation.The predicted covariance is computed using the gradient of the state propagation equation linearized about the current vehicle state estimate,r F v,and about the control input model,r F u,the process noise model,Q,and the control noise model,U.The lter parameters used in this application are shown in Table1.O P.k j k¡1/D r F v O P.k¡1j k¡1/r F T v C r F u U.k j k¡1/r F T u C Q.k j k¡1/;(6)538S.Williams et al.withU.k j k¡1/D diag £¾2gyro¤;(7)andQ.k j k¡1/D diag £¾2x¾2y¾2þ2V¾2°¾2bias¤:(8)3.1.2.Observation.There are two types of observations involved in the map building process as implemented on the vehicle.The rst is the observation of the orientation from the output of the magneto-inductive compass.The lter generates an estimate of the current yaw of the vehicle by fusing the predicted yaw estimate with the compass output.A shaping state that estimates the yaw rate bias of the gyroscope is also generated.The yaw measurements are incorporated into the SLAM lter using the yaw observation estimate,O zÃ.k j k¡1/,as shown in(9):O zÃ.k j k¡1/D O xÃ.k j k¡1/:(9) The compass observations are assumed to be corrupted by zero-mean,temporally uncorrelated white noise with variance¾compass.z compass.k/DÃ.k/C w compass:(10) There is always a danger that a compass will be affected by ferrous objects in the environment and transient magnetic elds induced by large currents,such as those generated by the vehicle’s thrusters.While this may the case,in practice the compass does not seem to be affected to a large degree by the vehicle’s thrusters. 
In addition,the unit is equipped with a magnetic eld strength alarm.When the strength of the magnetic eld increases,the alarm is signaled,indicating that the current observation may be in doubt.Terrain feature observations are made using an imaging sonar that scans the horizontal plane around the vehicle.Point features are extracted from the sonar scans and are matched against existing features in the map.The feature matching algorithm will be described in more detail in Section4.The observation consists of a relative distance and orientation from the vehicle to the feature.The terrain feature observations are assumed to be corrupted by zero-mean,temporally uncorrelated white noise with variance¾R and¾B respectively.The predicted observation, O z i.k j k¡1/,when observing landmark‘i’located at O x i can be computed using the non-linear observation model H i.O x v.k j k¡1/;O x i.k j k¡1//:O z i.k j k¡1/D H i.O x v.k j k¡1/;O x i.k j k¡1//;(11) where H i is de ned by:O z i R D p.O x v¡O x i/2C.O y v¡O y i/2;Towards terrain-aided navigation539O z iµD arctan.O y v¡O y i/.O x v¡O x i/´¡OÃv;with the discrete timestamps.k j k¡1/once again omitted for conciseness.For both types of observation,the difference between the actual observation,z.k/, and the predicted observation,O z.k j k¡1/,is termed the innovationº.k j k¡1/:º.k j k¡1/D z.k/¡O z.k j k¡1/:(12)The innovation covariance,S.k j k¡1/,is computed using the current state covari-ance estimate,O P.k j k¡1/,the gradient of the observation model,r H.k j k¡1/,and the covariance of the observation model R.k j k¡1/:S.k j k¡1/D r H.k j k¡1/O P.k j k¡1/r H.k j k¡1/T C R.k j k¡1/;(13)with:RÃ.k j k¡1/D diag £¾2compass¤;(14)and:R i.k j k¡1/D diag £¾2R¾2B¤:(15)3.1.3.Update.The state estimate can now be updated using the optimal gain matrix W.k/.This gain matrix provides a weighted sum of the prediction and observation,and is computed using the innovation covariance,S.k j k¡1/,and the predicted state covariance,O P.k j k¡1/.This is used to compute the state update O x.k j k/as well as the updated state covariance O P.k j k/:O x.k j k/D O x.k j k¡1/C W.k j k¡1/º.k j k¡1/;(16) O P.k j k/D O P.k j k¡1/¡W.k j k¡1/S.k j k¡1/W.k j k¡1/T;(17) where:W.k j k¡1/D O P.k j k¡1/r H.k j k¡1/S¡1.k j k¡1/:(18)4.FEATURE EXTRACTION FOR LOCALIZATIONThe development of autonomous map-based navigation relies on the ability of the system to extract appropriate and reliable features with which to build maps.Point features are identi ed from the sonar scans returned by the imaging sonar and are used to build up a map of the environment.The extraction of point features from the sonar data is essentially a three-stage process.The range to the principal return must rst be identi ed in individual pings. 
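The observation and update equations (11)-(18) above can likewise be sketched compactly. The function below forms the predicted range-bearing observation to feature i, the innovation and its covariance, and the Kalman gain, and applies the update; it is a simplified planar illustration rather than the authors' code, it uses the standard vehicle-to-feature bearing convention, and the angle-wrapping helper and the Mahalanobis gate (which the paper describes for feature matching in Section 4.4) are included as common practical safeguards.

```python
import numpy as np

def wrap(a):
    """Wrap an angle to [-pi, pi] (added helper, not from the paper)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def slam_update_feature(x, P, z, feat_idx, sigma_r=0.1, sigma_b=np.deg2rad(1.4)):
    """EKF update with a range/bearing observation of map feature `feat_idx`.

    x : state [x, y, psi, V, gamma, psi_bias, f1x, f1y, ...]
    z : measured (range, bearing) from the sonar feature extractor
    """
    j = 6 + 2 * feat_idx                      # index of the feature's x coordinate
    dx, dy = x[j] - x[0], x[j + 1] - x[1]
    r2 = dx**2 + dy**2
    r = np.sqrt(r2)

    # Predicted observation: range and relative bearing to the feature.
    z_hat = np.array([r, wrap(np.arctan2(dy, dx) - x[2])])

    # Jacobian of the observation w.r.t. the full state.
    H = np.zeros((2, len(x)))
    H[0, 0], H[0, 1], H[0, j], H[0, j + 1] = -dx / r, -dy / r, dx / r, dy / r
    H[1, 0], H[1, 1], H[1, 2] = dy / r2, -dx / r2, -1.0
    H[1, j], H[1, j + 1] = -dy / r2, dx / r2

    R = np.diag([sigma_r**2, sigma_b**2])     # observation noise, roughly Table 1 values
    nu = np.array([z[0] - z_hat[0], wrap(z[1] - z_hat[1])])   # innovation
    S = H @ P @ H.T + R                       # innovation covariance
    if nu @ np.linalg.solve(S, nu) > 9.21:    # ~99% chi-square gate for 2 dof
        return x, P                           # reject an ambiguous or outlier observation
    W = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x + W @ nu
    x_new[2] = wrap(x_new[2])
    P_new = P - W @ S @ W.T
    return x_new, P_new

# Illustrative call: vehicle at the origin, one feature believed to be at (10, 5).
x0 = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 10.0, 5.0])
P0 = np.eye(8) * 0.5
x1, P1 = slam_update_feature(x0, P0, z=(11.0, np.arctan2(5, 10) + 0.02), feat_idx=0)
```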
This represents the range to the object that has produced the return.The principal returns must then be grouped into clusters.Small,distinct clusters can be identi ed as point features,and the range and bearing to the target estimated.Finally,the range and bearing information must be matched against existing features in the map.This540S.Williams et al.Figure3.An image captured from the submersible of one of the sonar targets deployed at the eld test site.section provides more details of the feature identi cation algorithms used to provide observations for the lter.4.1.Sonar targetsSonar targets are currently introduced into the environment in which the vehicle will operate(see Fig.3)in order to obtain identi able and stable features.A prominent portion of the reef wall or a rocky outcropping might also be classi ed as a point feature.If the naturally occurring point features are stable they will also be incorporated into the map.Development of techniques to extract terrain aiding information from more complex natural features,such as coral reefs and the natural variations on the sea oor,is an area of active research.The ability to use natural features will allow the submersible to be deployed in a larger range of environments without the need to introduce arti cial beacons.The sonar targets produce strong sonar returns that can be charaterized as point features for the purposes of mapping(see Fig.5a).The lighter sections in the scan indicate stronger intensity returns.In this scan,two sonar targets are clearly visible and can easily be characterized as point features.(The features extracted by the algorithm are shown in Fig.5b.)More details of the feature extraction algorithms are presented in the following subsections.4.2.Principal returnsThe data returned by the SeaKing imaging sonar consists of the complete time history of each sonar ping in a discrete set of bins scaled over the desired range. 
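The per-ping processing that Section 4.2 goes on to describe (taking the principal return as the start of the strongest part of the ping above a noise threshold) can be approximated with a few lines of array processing. The sketch below smooths one ping with a moving average and returns the range bin where the strongest above-threshold energy begins; the bin size, window, threshold, and synthetic ping are illustrative assumptions, and a real implementation would also need the near-range blanking and target/terrain discrimination the paper mentions.

```python
import numpy as np

def principal_return(ping, bin_size=0.1, window=5, noise_factor=3.0, min_range=2.0):
    """Estimate the range (m) of the principal return in a single sonar ping.

    ping        : 1D array of echo intensities per range bin
    bin_size    : metres per bin (0.1 m matches the sonar resolution in Table 1)
    noise_factor: threshold = noise_factor * median intensity (illustrative choice)
    min_range   : ignore returns closer than this, e.g. transducer-housing echoes
    """
    smoothed = np.convolve(ping, np.ones(window) / window, mode="same")
    threshold = noise_factor * np.median(smoothed)
    start_bin = int(min_range / bin_size)

    candidates = np.where(smoothed[start_bin:] > threshold)[0]
    if candidates.size == 0:
        return None                                   # no confident return in this ping
    # Take the strongest above-threshold bin, then walk back to the start of its region.
    peak = start_bin + candidates[np.argmax(smoothed[start_bin:][candidates])]
    while peak > start_bin and smoothed[peak - 1] > threshold:
        peak -= 1
    return peak * bin_size

# Synthetic ping: background noise plus a sharp target echo in the 7.3-7.7 m bins.
rng = np.random.default_rng(4)
ping = rng.rayleigh(1.0, 200)                         # 200 bins * 0.1 m = 20 m range
ping[73:78] += 25.0
print(f"principal return at about {principal_return(ping):.1f} m")
```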
The rst task in extracting reliable features is to identify the principal return from the ping data.The principal return is considered to be the start of the maximum energy component of the signal above a certain noise threshold.Figure4shows a single ping taken from a scan in the eld.This return is a re ection from one of the sonar targets and the principal return is clearly visible.The return exhibits veryTowards terrain-aided navigation541Figure4.(a)A single SeaKing ping showing the raw ping,the moving average and the computed principal return.This ping is a re ection from one of the sonar targets and shows very good signal to noise ratio.The dashed line marks the principal return.(b)A single SeaKing ping re ected from the reef surrounding the vehicle showing the raw ping and the moving average.The terrain returns are distinguishable from the target returns by the fact that the high energy returns are spread over a much wider section of the ping.The large amplitude return at low range in this ping results from the interface between the oil- lled sonar transducer housing and the surrounding sea rge amplitude returns are ignored if they are below2.0m from the vehicle.good signal to noise ratio making the extraction of the principal returns relatively straightforward.At present the vehicle relies on the sonar targets as its primary source of navigation information.It is therefore paramount for the vehicle to reliably identify returns originating from the sonar targets.Examination of the returns generated by the targets shows that they typically have a large magnitude return concentrated over a very short section of the ping.This differs from returns from other objects in the environment such as rocks and the reef walls that tend tend have high energy returns spread over a much wider section of the ping as seen in Fig.4b.4.3.Identi cation of point featuresFollowing the extraction of the principal return from individual pings,these returns are then processed to nd regions of constant depth within the scan that can be classi ed as point features.Sections of the scan are examined to nd consecutive pings from which consistent principal return ranges are located.The principal returns are classi ed as a point feature if the width of the cluster is small enough to be characterised as a point feature and the region is spatially distinct with respect to other returns in the scan[9].The bearing to the feature is computed using the center542S.Williams et al.Figure5.(a)A scan in the eld showing sonar targets.(b)The principal returns(+)and the extracted point features(e)from the scan in(a).of the distribution of principal returns.The range is taken to be the median range of the selected principal returns.A scan taken in the eld is shown in Fig.5a.Two targets are clearly visible in the scan along with a section of the reef wall.Figure5b shows the principal returns selected from the scan along with the point features extracted by the algorithm.Both targets are correctly classi ed as point features while the returns originating from the reef are ignored.Future work concentrates on using the information available from the unstructured natural terrain to aid in navigation.4.4.Feature matchingOnce a point feature has been extracted from a scan,it must be matched against known targets in the environment.A two-step matching algorithm is used in order to reduce the number of targets that are added to the map(see Fig.6).When a new range and bearing observation is received from the feature extraction process,the 
estimated position of the feature is computed using the current estimate of vehicle position.This position is then compared with the estimated positions of the features in the map using the Mahanabolis distance[3].If the observation can be associated to a single feature the EKF is used to generate a new state estimate. An observation that can be associated with multiple targets is rejected since false observations can destroy the integrity of the estimation process.If the observation does not match to any targets in the current map,it is compared against a list of tentative targets.Each tentative target maintains a counter indicating the number of associations that have been made with the feature as well as the last observed position of the feature.If a match is made,the counter is incremented and the observed position is updated.When the counter passes a threshold value, the feature is considered to be suf ciently stable and is added to the map.If theFigure6.The feature-matching algorithm.potential feature cannot be associated with any of the tentative features,a new tentative feature is added to the list.Tentative features that are not reobserved are removed from the list after a xed time interval has elapsed.5.EXPERIMENTAL RESULTSThe SLAM algorithms have been tested during deployment in a natural environment off the coast of Sydney,Australia.The submersible was deployed in a natural inlet with the sonar targets positioned in a straight line at intervals of10m.The vehicle controls were set to maintain a constant heading and altitude during the run.Once the vehicle had reached the end of its tether(approximately50m)it was turned around and returned along the line of targets.The slope of the inlet in which the vehicle was deployed meant that the depth of the vehicle varied between approximately1and5m over the course of the run.The plot of the nal map obtained by the SLAM algorithm(shown in Fig.7) clearly shows the position of the sonar feature targets along with a number of tentative targets that are still not con rmed as suf ciently reliable.Some of the tentative targets are from the reef wall,while others come from returns off of the tether.These returns are typically not very stable and therefore do not get incorporated into the SLAM map.The absolute location of all the potential point targets identi ed based on the sonar principal returns are also shown in this map. These locations were computed using the estimated vehicle location at the instantof the corresponding sonar return.The returns seen near the top and bottom of theFigure7.Path of robot shown against nal map of the environment.The vehicle position estimates are spaced evenly in time over the run.It is evident that the vehicle speed changes during the run as a function of the tether deployment.The estimated position of the features are shown as circles with the covariance ellipses showing their95%con dence bounds.Tentative targets that have not yet been added to the map are shown as‘+’.The series of tentative targets towards the top of the image occur from the reef wall.These natural point features tend not to be very stable,though,and are thus not incorporated into the map.map are from the reef walls.As can be seen,large clusters of returns have been successfully identi ed as targets.Since there is currently no absolute position sensor on the vehicle,the perfor-mance of the positioning lter cannot be measured against ground truth at this time. 
In previous work,it was shown that the estimator yields consistent results in the controlled environment of the swimming pool at the University of Sydney[10]. To verify the performance of the lter,the innovation sequence can be monitored to check the consistency of the estimates.Figure8shows that the innovation se-quences are within the covariance bounds computed by the algorithm.The state estimates can also be monitored to ensure they are yielding sensible estimates.The vehicle is attached to an on-shore command station via a tether.This tether is deployed during the mission and a number of oating buoys keep it from dragging on the ground.The tether catenary creates a force directed back along its length.The effects of this force are evident in the slip angle experienced by the vehicle.When the vehicle executes a large turn,the slip angle tends to change direction.Figure9shows the slip angle estimates throughout the run.Shortly after the sharp turn at360s,the mean slip angle estimate changes sign—re ecting the fact that the tether has changed its position relative to the vehicle.In addition to the identi ed target returns,strong energy returns from the reefwalls and the sea oor can also be extracted from the sonar pings.In Fig.10theseThe innovation is plotted as a solid line while the con dence bounds are the dash-dot lines.Figure9.(a)The vehicle orientation and(b)slip angle computed by the algorithm.The vehicle executes a180±turn at approximately the360s.The mean slip estimate changes from a positive to negative value at this time re ecting the change in the force induced by the tether catenary.strong returns have been plotted.The return points are color coded to re ect the depth at which the observation was taken.The shape of the inlet can be clearly seen and it is evident that the vehicle is observing the sea oor behind itself as it moves deeper along the inlet.Figure11shows a close up view of the same scene showingthe vehicle position estimates more clearly.Figure10.Path of robot shown against nal map of the environment.The estimated position of the features are shown as a vertical column of circles.The strong sonar returns are color coded with the depth at which they were observed with a darker point indicating a deeper depth.The shape of the inlet is clearly visible from this plot.The estimated vehicle positions are shown spaced evenly in time.6.SUMMARY AND CONCLUSIONSIn this paper,it has been shown that SLAM is practically feasible using arti cial targets introduced into a natural terrain environment on Sydney’s shore-line.By using terrain information as a navigational aid,the vehicle is able to detect unmodeled disturbances in its motion induced by the tether drag and the effect of currents.The focus of future work is on representing natural terrain in a form suitable for incorporation into SLAM.This will enable the vehicle to be deployed in a broader range of environments without the need to introduce arti cial beacons.Another outstanding issue is that of map management.As the number of calculations required to maintain the state covariance estimates increases with the square of the number of beacons in the map,criteria for eliminating features from the map as well as for partitioning the map into submaps becomes important.This is especially true for longer missions in which the number of available landmarks is potentially quite large.Finally,integration of the localisation and map building with mission planning is under consideration.This will allow decisions concerning 
sensing strategies to be made in light of the desired mission objectives.