A Matlab-based Simulator for Autonomous Mobile Robots (in English)


Introduction to RAMSIS

RAMSIS is an acronym for the German "Rechnergestütztes Anthropologisch-Mathematisches System zur Insassen-Simulation", meaning "computer-aided anthropological-mathematical system for occupant simulation".

RAMSIS is an efficient CAD tool for occupant simulation and automotive ergonomic design.

The software provides engineers with a detailed CAD manikin for simulating driver behaviour.

It enables designers to carry out extensive ergonomic analyses early in the product development process, when only CAD data are available, thereby avoiding expensive design changes at later stages.

RAMSIS was developed in the 1980s by TECMATH AG of Kaiserslautern, Germany (now Human Solutions GmbH) together with the ergonomics department of the Technical University of Munich.

The initial development of RAMSIS was initiated and funded by the German automotive industry as a whole.

The goal was to overcome the shortcomings of the mostly two-dimensional ergonomic tools then in use, such as the SAE J826 template, and to raise vehicle ergonomic quality beyond the legally required baseline.

RAMSIS has become the global standard for ergonomic design in the automotive industry.

It is currently used by more than 70% of the world's passenger-car manufacturers, including Audi, Volkswagen, BMW, Porsche, DaimlerChrysler, Ford, General Motors, Honda, Mazda, Opel, Renault, Peugeot, Citroën, Rover, Saab, Volvo, Daewoo, Seat, Skoda and Fiat.

In China its customers include Pan Asia Technical Automotive Center, Shanghai Volkswagen, SAIC, SAIC Motor Passenger Vehicle, FAW-Volkswagen, FAW R&D Center, FAW Car, Brilliance Jinbei, Chery, Changan Automobile, SAIC-GM-Wuling, Beiqi Foton, BAIC Research Institute, Hainan Mazda and Dongfeng Motor, among others.

Manufacturers of trucks, buses, forklifts and other industrial vehicles, such as Freightliner, Iveco, MAN Commercial Vehicles and Liebherr-Werk Ehingen GmbH, are also using RAMSIS.

Co-simulation of a Self-balancing Robot Based on Matlab and Adams

Xu Jianzhu; Diao Yan; Luo Hua; Gao Shan

Abstract: In order to test the accuracy and the static and dynamic performance of the control system of a self-balancing robot, a virtual prototype was built with Matlab/Simulink and Adams. A full-state feedback controller was designed by establishing the robot's state-space equations and placing the system poles with the LQR method. The control system and the mechanical model of the robot were built in Simulink and Adams respectively, and the two were used together to co-simulate the robot. The simulation results show that the proposed control method keeps the robot balanced and that the whole system has good static and dynamic performance.

Journal: Modern Electronics Technique, 2012, 35(6), pp. 90-92 (3 pages)
Keywords: self-balancing robot; Matlab/Simulink; Adams; dynamics simulation
Affiliation: School of Manufacturing Science and Engineering, Sichuan University, Chengdu 610065, China
Language: Chinese; CLC number: TP242-34

Research on two-wheeled self-balancing robots has been a hot topic among scholars at home and abroad in recent years. Examples include JOE, developed by Felix Grasser et al. at the Swiss Federal Institute of Technology; nBot, developed at Southern Methodist University in the United States; and the well-known two-wheeled personal transporter Segway, invented by Dean Kamen.
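The abstract names LQR pole placement for a full-state feedback controller, but the paper's model matrices are not reproduced here. A minimal MATLAB sketch of that design step (Control System Toolbox assumed; the A and B matrices below are illustrative placeholders, not the paper's model):

```matlab
% Minimal sketch: LQR full-state feedback u = -K*x for a linearized
% self-balancing robot x' = A*x + B*u. A and B are assumed placeholders.
A = [0    1  0  0;        % state: [tilt; tilt rate; position; speed]
     9.8  0  0  0;
     0    0  0  1;
    -1    0  0  0];
B = [0; 1; 0; 0.5];
Q = diag([100 1 10 1]);   % state weighting
R = 1;                    % input weighting
K = lqr(A, B, Q, R);      % optimal state-feedback gain
disp(K)
```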

Application of MATLAB Simulation in Multimedia-Assisted Teaching of Automatic Control Principles

Keywords: MATLAB; SIMULINK; control systems

Automatic control principles is a specialist foundation course for electrical automation and related majors at vocational colleges. Because it mainly studies the general laws of automatic control systems, it is general and abstract, and beginners find it hard to grasp its basic problems, ideas and methods. In classroom teaching, teachers have to draw many curves on the blackboard; when several parameters are involved it is hard to draw accurate, presentable curves, and only a rough qualitative shape can be sketched, which hampers students' understanding. We therefore introduce the control-system simulation software MATLAB for explanation and demonstration. Given that the course is highly theoretical and that physical models are difficult to set up in a laboratory, running simulation experiments in MATLAB can deepen students' understanding of the course, stimulate their motivation to learn, strengthen their ability to combine theory with practice, and greatly improve their capacity for deep thinking and innovation.

MATLAB, short for Matrix Laboratory, is currently the most popular control-system simulation software. It is commercial mathematical software from MathWorks in the United States: a high-level technical computing language and interactive environment for algorithm development, data visualization, data analysis and numerical computation, consisting mainly of two parts, MATLAB and Simulink. In the past, electrical automation students doing graduation projects often needed a large amount of mathematical computation, usually by writing programs in high-level languages such as Basic, Fortran or C and feeding them to a computer for approximate calculation. That requires a firm grasp of the language's syntax rules and programming conventions, and writing such programs is laborious and time-consuming. MATLAB shows its great superiority and simplicity especially in the computer-simulation parts of graduation projects in electrical automation.

SIMULINK is a companion product of MATLAB, a package for modelling all kinds of physical and mathematical systems. It represents each element of a system with a graphical block and uses arrows to indicate the input/output relations between elements. With a block diagram similar to those commonly used in automatic control, a complex system model can easily be entered into the computer.

To start SIMULINK, simply type the simulink command in the MATLAB command window; a SIMULINK window appears containing the model libraries: the sources library, the sinks library, the discrete-systems library, the linear-systems library, the nonlinear-systems library and the extensions library.

1. Sources library

Includes step, sine wave, clock, constant, file and signal-generator sources, among others; the signal generator can produce sine, square, sawtooth and random waveforms.
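As one concrete classroom demonstration of the kind described above, a short script (Control System Toolbox assumed) can overlay the step responses of a second-order system for several damping ratios, exactly the curves that are tedious to draw accurately on a blackboard:

```matlab
% Step responses of G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2)
% for several damping ratios zeta.
wn = 1;                               % natural frequency (rad/s)
figure; hold on
for zeta = [0.2 0.5 0.7 1.0]
    G = tf(wn^2, [1 2*zeta*wn wn^2]); % second-order transfer function
    step(G, 15)                       % response over 15 seconds
end
grid on
legend('\zeta = 0.2', '\zeta = 0.5', '\zeta = 0.7', '\zeta = 1.0')
```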

MATLAB Robot Simulation Programs

Appendix: MATLAB Robotics Toolbox simulation programs (classic link/robot API).

1) Kinematic model (Rob1.m):

```matlab
L1 = link([pi/2 150 0 0]);
L2 = link([0 570 0 0]);
L3 = link([pi/2 130 0 0]);
L4 = link([-pi/2 0 0 640]);
L5 = link([pi/2 0 0 0]);
L6 = link([0 0 0 95]);
r = robot({L1 L2 L3 L4 L5 L6});
r.name = 'MOTOMAN-UP6';   % model name
drivebot(r)               % interactive joint-drive window
```

2) Forward kinematics simulation (Rob2.m):

```matlab
L1 = link([pi/2 150 0 0]);
L2 = link([0 570 0 0]);
L3 = link([pi/2 130 0 0]);
L4 = link([-pi/2 0 0 640]);
L5 = link([pi/2 0 0 0]);
L6 = link([0 0 0 95]);
r = robot({L1 L2 L3 L4 L5 L6});
r.name = 'MOTOMAN-UP6';
t = 0:0.01:10;                           % time vector
qA = [0 0 0 0 0 0];                      % initial joint angles
qAB = [-pi/2 -pi/3 0 pi/6 pi/3 pi/2];    % final joint angles
figure('Name','UP6 forward kinematics demo');
q = jtraj(qA, qAB, t);                   % joint-space trajectory
T = fkine(r, q);                         % forward kinematics
plot(r, q);                              % animate the robot
figure('Name','UP6 end-effector displacement');
subplot(3,1,1); plot(t, squeeze(T(1,4,:))); xlabel('Time (s)'); ylabel('X (m)');
subplot(3,1,2); plot(t, squeeze(T(2,4,:))); xlabel('Time (s)'); ylabel('Y (m)');
subplot(3,1,3); plot(t, squeeze(T(3,4,:))); xlabel('Time (s)'); ylabel('Z (m)');
x = squeeze(T(1,4,:)); y = squeeze(T(2,4,:)); z = squeeze(T(3,4,:));
figure('Name','UP6 end-effector path'); plot3(x, y, z);
```

3) Joint angle simulation (Rob3.m):

```matlab
L1 = link([pi/2 150 0 0]);
L2 = link([0 570 0 0]);
L3 = link([pi/2 130 0 0]);
L4 = link([-pi/2 0 0 640]);
L5 = link([pi/2 0 0 0]);
L6 = link([0 0 0 95]);
r = robot({L1 L2 L3 L4 L5 L6});
r.name = 'motoman-up6';
t = 0:0.01:10;
qA = [0 0 0 0 0 0];
qAB = [pi/6 pi/6 pi/6 pi/6 pi/6 pi/6];
q = jtraj(qA, qAB, t);
plot(r, q);
figure;
for i = 1:6
    subplot(6,1,i); plot(t, q(:,i));
    title(sprintf('Revolute joint %d', i));
    xlabel('Time (s)'); ylabel('Angle (rad)');
end
```

4) Joint angular velocity simulation (Rob4.m):

```matlab
% Assumes the robot object r built in Rob3.m is still in the workspace.
t = 0:0.01:10;
qA = [0 0 0 0 0 0];                                    % initial joint angles
qAB = [1.5709 -0.8902 -0.0481 -0.5178 1.0645 -1.0201]; % final joint angles
[q, qd, qdd] = jtraj(qA, qAB, t);                      % trajectory and derivatives
plot(r, q);
figure;
for i = 1:6
    subplot(6,1,i); plot(t, qd(:,i));
    title(sprintf('Revolute joint %d', i));
    xlabel('Time (s)'); ylabel('Velocity (rad/s)');
end
```

5) Joint angular acceleration simulation (Rob5.m):

```matlab
t = 0:0.01:10;                                         % time vector
qA = [0 0 0 0 0 0];
qAB = [1.5709 -0.8902 -0.0481 -0.5178 1.0645 -1.0201];
[q, qd, qdd] = jtraj(qA, qAB, t);
figure('Name','UP6 joint acceleration curves');
for i = 1:6
    subplot(6,1,i); plot(t, qdd(:,i));
    title(sprintf('Joint %d', i));
    xlabel('Time (s)'); ylabel('Acceleration (rad/s^2)');
end
```

How to Control a Mobile Robot in MATLAB

In the field of modern robotics, mobile robot control is a very important research direction.

A mobile robot is a robot that can move autonomously in its environment and carry out tasks.

Controlling mobile robots with MATLAB not only helps us better understand the laws of robot motion, but also enables precise control of the robot.

This article discusses how to control a mobile robot in MATLAB from the perspectives of control theory and MATLAB programming.

## 1. Basic Theory of Mobile Robot Control

An important issue to consider in mobile robot control is the robot's kinematics and dynamics.

Kinematics studies the robot's motion and pose in space, while dynamics studies the relationship between the robot's motion and the forces that produce it.

Understanding the robot's kinematics and dynamics is essential for precise control.

In mobile robot control, a trajectory-planning algorithm is usually used to generate the robot's motion trajectory.

Given the robot's start position, goal position and environmental constraints, the algorithm generates a trajectory for the robot to follow.

Common trajectory-planning approaches include minimum-time trajectories, shortest paths and obstacle-avoiding paths.

Another important element of control theory is the PID controller.

A PID controller is a classical feedback controller: it adjusts the control command according to the error between the robot's current state and its target state.

PID controllers are widely used in mobile robot control because they are simple to understand and tune well.
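As a hedged illustration of the idea (the gains and the one-dimensional kinematic model below are assumed, not prescribed), a discrete PID loop steering a robot toward a target heading can be written in a few lines of MATLAB:

```matlab
% Discrete PID regulating the heading of a simple kinematic robot.
Kp = 2.0; Ki = 0.1; Kd = 0.05;   % assumed controller gains
dt = 0.05;                       % control period (s)
theta = 0; theta_ref = pi/4;     % current and target heading (rad)
eInt = 0; ePrev = 0;
for k = 1:200
    e    = theta_ref - theta;         % heading error
    eInt = eInt + e*dt;               % integral of the error
    eDer = (e - ePrev)/dt;            % derivative of the error
    w    = Kp*e + Ki*eInt + Kd*eDer;  % angular-velocity command
    theta = theta + w*dt;             % kinematic update
    ePrev = e;
end
fprintf('Final heading error: %.4f rad\n', theta_ref - theta);
```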

## 2. Applying MATLAB to Mobile Robot Control

MATLAB is a popular numerical computing and scientific programming environment that is widely used in the field of robot control.

MATLAB provides many toolboxes and functions that can be used for mobile robot control within the MATLAB environment.

First, MATLAB's robotics toolbox can be used to build robot models and run simulations.

The toolbox provides a set of functions and methods that help us construct robot models and simulate them.

Through simulation experiments we can predict a robot's motion and behaviour, and verify and optimize control algorithms.

Second, the robotics toolbox also offers common trajectory-planning algorithms, such as minimum-time and shortest-path planning.

These algorithms can be used to generate robot trajectories and to run simulation and control experiments.
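To make the discussion concrete, the sketch below (all parameters assumed) simulates the kinematics of a differential-drive mobile robot, the model underlying most of the control and planning methods mentioned above:

```matlab
% Differential-drive kinematics integrated with Euler steps.
r  = 0.05;  L = 0.30;            % wheel radius and wheel base (m), assumed
wL = 8.0;   wR = 10.0;           % left/right wheel speeds (rad/s)
dt = 0.01;  T = 10;              % step size and duration (s)
pose = [0; 0; 0];                % [x; y; theta]
N = round(T/dt);
traj = zeros(3, N);
for k = 1:N
    v = r*(wR + wL)/2;           % forward velocity
    w = r*(wR - wL)/L;           % turning rate
    pose = pose + dt*[v*cos(pose(3)); v*sin(pose(3)); w];
    traj(:,k) = pose;
end
plot(traj(1,:), traj(2,:)); axis equal
xlabel('x (m)'); ylabel('y (m)'); title('Differential-drive trajectory');
```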

Robot Modeling and Motion Simulation Based on CoppeliaSim and MATLAB

(1. Key Laboratory of Networked Control Systems, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China; 2. Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China)

Based on the D-H parameter table above, the total transformation matrix of the robot is determined and written out as follows. First, rotate about the z_i axis by θ_{i+1} so that x_i and x_{i+1} become parallel: since x_i and x_{i+1} are both perpendicular to z_i, rotating about z_i by θ_{i+1} makes them parallel and coplanar. Next, translate along z_i by d_{i+1} so that x_i and x_{i+1} become collinear: being parallel and perpendicular to z_i, they can be brought into coincidence by a move along z_i. Then translate along x_i by a_{i+1} so that the origins of the two frames coincide. Finally, rotate about the x_{i+1} axis by α_{i+1} so that the z_i axis comes onto the same line as z_{i+1}, making the frames before and after the transformation coincide completely. Composing these steps gives the link transformation:
$$
{}^{i}T_{i+1} =
\begin{bmatrix}
\cos\theta_{i+1} & -\sin\theta_{i+1}\cos\alpha_{i+1} & \sin\theta_{i+1}\sin\alpha_{i+1} & a_{i+1}\cos\theta_{i+1} \\
\sin\theta_{i+1} & \cos\theta_{i+1}\cos\alpha_{i+1} & -\cos\theta_{i+1}\sin\alpha_{i+1} & a_{i+1}\sin\theta_{i+1} \\
0 & \sin\alpha_{i+1} & \cos\alpha_{i+1} & d_{i+1} \\
0 & 0 & 0 & 1
\end{bmatrix}
$$
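The link transformation above maps directly into code; a minimal MATLAB function (function name illustrative) that builds it from one row of the D-H table:

```matlab
% Homogeneous transform of one link from standard D-H parameters,
% matching the matrix given above.
function T = dhTransform(theta, d, a, alpha)
    T = [cos(theta) -sin(theta)*cos(alpha)  sin(theta)*sin(alpha)  a*cos(theta);
         sin(theta)  cos(theta)*cos(alpha) -cos(theta)*sin(alpha)  a*sin(theta);
         0           sin(alpha)             cos(alpha)             d;
         0           0                      0                      1];
end
```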

MATLAB/SimMechanics Mechanism Dynamic Simulation

… (rotation about three axes), Bushing (translation along three axes and rotation about three axes), Custom Joint (user-defined joint), Cylindrical (cylindrical joint), Gimbal (rotation about three angles), In-Plane (translation within a plane), Planar (planar joint), Prismatic (single-DOF translational joint), Revolute (single-DOF rotational joint), Screw (screw joint), …

A model may have any topology, but at least one component must be a Ground block, and a machine environment block must be connected directly to it.

A body may be attached to more than one joint, i.e. branches are allowed; but one joint can only connect two bodies.

(3) Configure the Body blocks: double-click a block to open its parameter dialog and set the mass properties (mass and moments of inertia), then define how the Body and Ground blocks relate to the world coordinate system or other coordinate …
• 1. Double-click Disassembled Joints to open the block group shown in the figure. Its blocks are disassembled joints; unlike the corresponding joints in the Joints library, they have different base points.
• 2. Double-click Massless Connectors to open the block group shown in the figure. Its blocks are combinations of the corresponding joints in the Joints library.
4.2.6 Sensor and actuator block group (Sensors & Actuators)

• Double-click the module to open the block group shown in the figure. The blocks in this group exchange data with ordinary Simulink blocks.
• Body Actuator: drives a rigid body with generalized forces or torques.
• Body Sensor: senses the motion of a rigid body.
• Constraint & Driver Sensor: measures the force or torque between a pair of constrained rigid bodies.
• Driver Actuator: applies a relative motion to a pair of mutually constrained rigid bodies …

Robot Control, MATLAB Edition

Robot control means governing a robot's motion and behaviour through programming and algorithms.

As powerful scientific computing software, MATLAB is widely applicable to the field of robot control.

This article introduces robot control with MATLAB, covering robot models, control algorithms and example applications.

I. Robot models

Before controlling a robot, a robot model must first be established.

A robot model is a mathematical representation of the robot that describes its structure and dynamic characteristics.

Common robot models include end-effector models, rigid-body models and joint models.

In MATLAB, the Robotics System Toolbox can be used to build robot models.

The toolbox provides a set of functions and classes for conveniently creating robot models and performing forward and inverse kinematics.
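For instance (a minimal sketch, with the link length and joint layout assumed), a two-link arm can be assembled as a rigidBodyTree and queried for its forward kinematics:

```matlab
% Build a planar 2-link arm with Robotics System Toolbox and compute
% the forward kinematics at a sample configuration.
robot = rigidBodyTree('DataFormat','row');
link1 = rigidBody('link1');
link1.Joint = rigidBodyJoint('joint1','revolute');
addBody(robot, link1, 'base');
link2 = rigidBody('link2');
j2 = rigidBodyJoint('joint2','revolute');
setFixedTransform(j2, trvec2tform([0.5 0 0]));   % 0.5 m link length (assumed)
link2.Joint = j2;
addBody(robot, link2, 'link1');
T = getTransform(robot, [pi/4 pi/6], 'link2');   % pose of the link2 frame
disp(T)
```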

II. Control algorithms

The core of robot control is the control algorithm.

Control algorithms fall into two areas: motion control and trajectory planning.

1. Motion control. Motion control algorithms make the robot execute specific motions.

Common motion-control algorithms include PID control, feedback linearization and adaptive control.

In MATLAB, the Control System Toolbox can be used to design and implement motion-control algorithms.

The toolbox provides a rich set of functions and methods for controller design, system modeling and simulation.
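As a brief example (the plant below is an assumed motor-like transfer function, not a specific robot), pidtune can produce a PID controller and the closed loop can be checked with a step response:

```matlab
% Tune a PID controller for an assumed joint/motor model and
% inspect the closed-loop step response.
s  = tf('s');
P  = 1/(s*(0.5*s + 1));   % assumed plant: integrator + first-order lag
C  = pidtune(P, 'PID');   % automated PID tuning
cl = feedback(C*P, 1);    % unity-feedback closed loop
step(cl), grid on
stepinfo(cl)              % overshoot, settling time, etc.
```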

2. Trajectory planning. Trajectory planning drives the robot from an initial position to a goal position within a given time.

Common trajectory-planning algorithms include linear interpolation, cubic-spline interpolation and shortest-path planning.

In MATLAB, the trajectory-generator functions in the Robotics System Toolbox can be used for trajectory planning.

The toolbox provides a family of interpolation functions that make it easy to generate smooth motion trajectories.
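For example (waypoints and timing assumed), cubic-polynomial interpolation through joint-space waypoints takes only a few calls:

```matlab
% Smooth joint trajectories through waypoints with cubicpolytraj
% (Robotics System Toolbox).
wayPoints  = [0 pi/4 pi/2;    % joint 1 waypoints (rad), assumed
              0 pi/6 pi/3];   % joint 2 waypoints (rad), assumed
timePoints = [0 2 4];         % waypoint times (s)
tSamples   = 0:0.01:4;        % evaluation instants
[q, qd, qdd] = cubicpolytraj(wayPoints, timePoints, tSamples);
plot(tSamples, q); xlabel('Time (s)'); ylabel('Joint angle (rad)');
legend('joint 1', 'joint 2');
```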

III. Example applications

Robot control is applied very widely, in fields including industrial manufacturing, healthcare and service robots.

Taking industrial manufacturing as an example, robot control can automate the operation of production lines.

With MATLAB's robot-control toolboxes, robots can be modeled, controlled and simulated to plan and optimize automated production lines.

Robot control is also applied in the field of healthcare.

For example, robot-control programs written in MATLAB can drive surgical robots to perform precise operations, improving the safety and accuracy of surgery.


A Matlab-based Simulator for Autonomous Mobile Robots

Abstract

Matlab is a powerful software development tool that can dramatically reduce the programming workload during algorithm development and theoretical research. Unfortunately, most commercial robot simulators do not support Matlab. This paper presents a Matlab-based simulator for the development of 2D indoor robot navigation algorithms. It provides a simple user interface for constructing robot models and indoor environment models, including the visual observations supplied to the algorithms under test. Experimental results are presented to show the feasibility and performance of the proposed simulator.

Keywords: Mobile robot, Navigation, Simulator, Matlab

1. Introduction

Navigation is an essential capability of a mobile robot. During the development of new navigation algorithms, it is necessary to test them on simulated robots and environments before testing them on real robots in the real world. This is because (i) robots are expensive; (ii) an untested algorithm may damage the robot during the experiment; (iii) system models are difficult to construct and modify under noisy conditions; (iv) the transient state is difficult to track precisely; and (v) the measurements of external beacons are hidden during an experiment, although this information is often helpful for debugging and updating the algorithms.

A software simulator is a good solution to these problems. A good simulator provides many different environments that help researchers find problems in their algorithms on different kinds of mobile robots. To address the problems listed above, such a simulator must be able to monitor system states closely, and it should offer a flexible and friendly user interface for developing all kinds of algorithms.

Up to now, many commercial simulators with good performance have been developed. For instance, MOBOTSIM is a 2D simulator for Windows that provides a graphical interface for building environments [1], but it supports only limited robot models (differentially driven robots with distance sensors only) and cannot handle vision-based algorithms. Bugworks is a very simple simulator providing a drag-and-place interface [2], but its functions are primitive and it is more a demonstration than a simulator. Other robot simulators, such as Ropsim [3], ThreeDimSim [5], and RPG Kinematix [6], are not specifically designed for developing autonomous navigation algorithms for mobile robots and have very limited functions.

Among the commercial simulators, Webots from Cyberbotics [4] and MRS from Microsoft are the more powerful and better-performing simulators for mobile robot navigation. Both provide powerful interfaces for building mobile robots and environments, excellent 3D display, accurate performance simulation, and programming languages for robot control. Perhaps because of these powerful functions, they are difficult for a new user. For instance, building an environment with visual utilities is quite tedious, involving shape construction, material selection, and illumination design. Moreover, some robot development kits include a built-in simulator for particular kinds of robots; for example, Aria from ActivMedia has a 2D indoor simulator for Pioneer mobile robots [8].
That simulator uses convenient text files to configure the environment, but supports only limited robot models.

However, the majority of commercial simulators do not currently support Matlab. Matlab programming, on the other hand, provides good support for matrix computation, image processing, fuzzy logic, neural networks, etc., and can dramatically reduce coding time in the research stage of new navigation algorithms. For example, a matrix inversion may require a function of hundreds of lines in a general-purpose language, but is a single command in Matlab. Using Matlab at this stage avoids wasting time re-implementing existing algorithms and lets researchers focus on new theory and algorithm development.

This paper presents a Matlab-based simulator that is fully compatible with Matlab code and makes it possible for robotics researchers to debug their code and run experiments conveniently in the first stage of their research. Algorithm development is based on Matlab subroutines with designated parameter variables, which are stored in a file accessed by the simulator. Using the simulator, users can build the environment, select parameters, write subroutines and display outputs on the screen. Data are recorded during the whole procedure, and some basic analyses are also performed.

The rest of the paper is organized as follows. The software structure of the proposed simulator is explained in Section II. Section III describes the user interface. Some experimental results are given in Section IV to show the system performance. Finally, Section V presents a brief conclusion and potential future work.

2. Software architecture

To make algorithm design and debugging easier, the Matlab-based simulator has been designed to provide the following functions:

● Easy environment model building, including walls, obstacles, beacons and visual scenes.
● Robot model building, including the driving and control system and the noise level.
● Observation model setting: the simulator calculates the image frame the robot can see, according to the precise robot pose, the camera parameters, and the environment.
● Bumping reaction simulation: if the robot touches a "wall", the simulator stops the robot even when other modules command it to move forward. This prevents the robot from passing through a "wall" like a ghost, and makes the simulation run like an experiment on a real robot.
● Real-time display of the running process and the observations, so that users can track the navigation procedure and find bugs.
● Statistical results of the whole run, including the transient and average localization errors; these detailed navigation results support offline analysis, and some basic analysis is performed in these modules.

The architecture shown in Fig. 1 was developed to implement the functions above. The rest of this section explains the modules of the simulator in detail.

2.1 User interface

The simulator provides an interface for building the environment and setting the noise models, and a few separate subroutines are available for users to implement observation and localization algorithms. Parameters and settings defined by users are picked up by the interface modules and files. As shown in Fig. 1, the modules above the dashed line form the user interface.
Using customer configure files (CCFs), users can describe environments (walls, corridors, doorways, obstacles and beacons), define system and control models, specify the noise injected at different steps, and adjust simulator settings.

The customer subroutines are a series of source files with required input/output parameters. The simulator calls these subroutines and uses their results to control the mobile robot. The algorithms in the customer subroutines are therefore tested in the system defined by the CCFs. The grey blocks in Fig. 1 are the customer subroutines integrated into the simulator.

Fig. 1 Software structure of the simulator

The environment is described by a configure file in which the corners of walls are given as Cartesian coordinate pairs. Each pair defines a point in the environment; the program configure module connects the points with straight lines in series and treats these lines as walls. Each beacon is defined by a four-element vector [x, y, ω, P]T, where (x, y) is the beacon's Cartesian position, ω is the direction the beacon faces, and P is a pointer to an image file reflecting the view in front of the beacon. For a non-visual beacon, e.g. a reflective pole for a laser scanner, P is given a value that is illegal for an image pointer.

Some parameters are specified in a CCF, such as the robot data (shape, radius, driving method, wheelbase, maximum translation and rotation speeds, noise, etc.) and the observation characteristics (maximum and minimum observing ranges, observing angles, observing noise, etc.). These data are used by the inner modules to build the system and observation models. The robot and environment drawn in the real-time video also rely on these parameters.

The CCF also defines settings related to the simulation run, e.g. the modes of robot motion and tracking display, the switches of the observation display, and the strategy of random motion.

2.2 Behaviour controlling modules

A navigation algorithm normally consists of a few modules such as obstacle avoidance, route planning and localization (and mapping, if the map is not given manually). Although the obstacle avoidance module (OAM, a safety module) is important, it is not discussed in this paper. The simulator provides a built-in OAM so that users can focus on their own algorithms, but it also allows users to switch this function off and build their own OAM as one of the customer subroutines. A bumping reaction function is integrated into this module and is always on, even when the OAM has been switched off. Without this function, the robot could go through a wall like a ghost whenever the user switched off the OAM and the program contained bugs.

The OAM has the flowchart shown in Fig. 2. The robot pose is expressed as X = [x y θ]T, where x, y and θ are the Cartesian coordinates and the orientation respectively. The (x, y) pair is used to calculate the distance to the wall line segments using basic analytic geometry. The user's navigation algorithm is the Matlab function under test, which is called by the OAM; it outputs the driving information defined by the robot model, for example the left and right wheel speeds of a differentially driven robot.

Fig. 2 Obstacle avoidance module
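The paper does not list the geometry routine itself; a minimal sketch of the point-to-segment distance test that such an OAM needs (function name illustrative) is:

```matlab
% Shortest distance from robot position p = [x y] to the wall
% segment with endpoints a = [x1 y1] and b = [x2 y2].
function d = pointToSegment(p, a, b)
    ab = b - a;
    t  = dot(p - a, ab) / dot(ab, ab);  % projection onto the segment line
    t  = max(0, min(1, t));             % clamp to the segment
    c  = a + t*ab;                      % closest point on the segment
    d  = norm(p - c);
end
```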
2.3 Data fusion subroutines

Data fusion is another subroutine of the simulator available to users. The simulator provides all the information required and receives the outputs of this subroutine, such as the localization result and the mapping data.

Normally, a robot acquires data using its onboard sensors, such as internal odometers, external sonars, CCD cameras, etc. In the simulator, these sensor data should be delivered to the subroutine as close to those of a real robot as possible; the observation simulation module (OSM) was developed for this purpose. The internal data comprise the precise pose plus noise generated with the parameters set in the CCFs, which is easy to produce.

According to the true robot pose and the arrangement of the beacons, it is straightforward to deduce which beacons can be detected by the robot, as well as the distance and direction of each observation. The information on all observed non-visual beacons is selected according to the CCFs and transferred to the data fusion subroutines. For the simulation of vision-based algorithms, the CCFs of the environment contain the image files of the scenes at different places. Combining the camera parameters defined in the CCFs, the beacon orientation ω, and the observation data such as distance and direction, the OSM calculates and generates zoomed images to simulate the observations at a given position. The user's observation subroutines therefore receive images just as they would from an onboard camera in the real world.

2.4 Simulator output module

The "Video Demo & Data Result" is the output module of the simulator. The real-time video gives a direct view of how the algorithm performs, while the output data give a precise record of the simulation. Fig. 3(a) shows a frame of the real-time video, i.e. the whole view, while Fig. 3(b) is an enlarged view of the middle part of Fig. 3(a).

Fig. 3 The view of the simulator: (a) whole view; (b) enlarged part

The wide straight lines denote the walls of the environment; the circle on the left in Fig. 3(b) is the real position of the robot and the one on the right is the localization result. The thin straight lines are the feature observations at that moment, and the ellipses with crosses at their centres express the uncertainties of mapping. The ellipse around the centre of the localization result indicates the uncertainty of the localization. The plotting code is based on Bailey's open source [7]. Note that the output data contain the estimated pose, the true pose, and the covariance matrices of each step, which can be processed and evaluated precisely after the experiment.

The video is actually implemented by quickly updating a series of static images. Every 40 milliseconds, the simulator calculates all the state parameters, such as the true pose and the localization result of the robot, the current observations, and the current mapping result; it draws the image for the current frame with these data and refreshes the output image. Since the image is refreshed 25 times per second, it looks like a real video. The calculation and drawing of the current frame follow the method shown in Fig. 4. In each loop cycle, the DrawRobot function translates and rotates the shape stored in the vector Rob, according to the true pose and the localization result respectively, and draws the results with different fill shades or colours. During the processing cycles in Fig. 4, all data and parameters, e.g. lpose, t_pose, map, etc., are recorded by another thread in a file. After the navigation, these data are output together with some basic statistical results.

Fig. 4 The output video
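The internals of DrawRobot are not given in the paper; a plausible minimal version (name and argument layout assumed) rotates and translates the stored outline and fills it:

```matlab
% Draw a robot outline Rob (2xN vertex array) at pose = [x y theta].
function DrawRobot(Rob, pose, faceColor)
    R = [cos(pose(3)) -sin(pose(3));
         sin(pose(3))  cos(pose(3))];   % planar rotation matrix
    P = R*Rob + [pose(1); pose(2)];     % rotate, then translate
    fill(P(1,:), P(2,:), faceColor);    % filled outline
end
```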
3. Experimental results

The purpose of the experiments is to test the performance of the simulator. The experiments therefore exercise the functional modules of the simulator separately, and then run a real SLAM algorithm in the simulator to test the overall performance.

First of all, the OAM is switched off, and the user's navigation module provides only a constant speed on both wheels; in other words, the robot can only move forward. In this experiment, when the robot bumped into the wall in front of it, it stopped and wriggled in place because of the driving noise. The bumping reaction module works as designed and stops the robot when it bumps into any object in the environment.

Secondly, all the beacons are removed from the environment and the robot runs entirely on its internal sensors. Analysis of the data and observations shows that the real-time video and the noise generation module work perfectly and provide the expected results.

Thirdly, the OAM is switched on while the user's navigation module stays the same. That is, the robot moves forward unless the built-in avoidance module takes control to avoid nearby obstacles. During the experiment, the robot kept moving for more than 10 minutes, its route covered every corner of the environment, and it avoided all the obstacles reliably. The path shown in Fig. 3(a) also clearly demonstrates the performance of the obstacle avoidance module.

Then, the observation simulation module is tested by off-line processing of the images recorded during navigation. Using the triangulation method [9], the estimated position for each recorded image is deduced and compared with the true position. The error is acceptable, considering the uncertainties of triangulation, which means the image zoom and projective operations in this module are reliable.

Finally, a simple simultaneous localization and mapping (SLAM) algorithm presented in [10] was run in the developed simulator. All the functional modules are involved in this step. The simulator produced all the running results as designed, and these results fit well with the results on a real robot given in the reference. The transient localization error output by the simulator is shown in Fig. 5.

Fig. 5 Transient error output

4. Conclusion and future work

This paper presents a novel simulator that is based on Matlab code and allows users to debug their navigation algorithms with Matlab, build an indoor environment, set observation models with noise, and build robot models with different driving mechanisms, internal sensors and external sensors. Visual observations are calculated by projective and zoom operations on the built environment view; this function is important for experiments with vision-based algorithms. To save time, the simulator also provides functional modules that implement navigation tasks such as obstacle avoidance and bumping reactions. All the above functions have been tested in designed experiments, which show that the simulator is feasible, useful and accurate for 2D indoor robot navigation.

The current version needs further improvement in the next stage, since (i) the simulator cannot run 3D experiments; (ii) the input interface is text-file based, which is easy for experts but difficult for new users;
a graphical drag-and-set interface is needed; (iii) in environments with complex visual views, the observation simulation is computationally expensive and makes the simulation very slow; (iv) the scene of the onboard camera is not displayed in real time; and (v) only one robot is supported in this version.
