Foreign-Literature Translations: Mobile Robots Based on Fuzzy Control


Automation Foreign-Literature Translation: Fuzzy Logic Control for Robot Maze Traversal

FUZZY LOGIC CONTROL FOR ROBOT MAZE TRAVERSAL: AN UNDERGRADUATE CASE STUDY

James Wolfer, Chad A. George

Abstract
As previously reported, Indiana University South Bend has deployed autonomous robots in its Computer Organization course to facilitate introducing computer science students to the basics of logic, embedded systems, and assembly language. The robots help to provide effective, real-time feedback on program operation and to make assembly language less abstract. As a part of their coursework, students are required to program a sensor-based traversal of a maze. This paper details one solution to this problem, employing a fuzzy logic controller to create linguistic rules.

Key words: fuzzy logic, pedagogy, robots, student projects

INTRODUCTION
Assembly language programming in a computer science environment is often taught using abstract exercises to illustrate concepts and encourage student proficiency. To augment this approach, we have elected to provide hands-on, real-world experience to our students by introducing robots into our assembly language class. Observing the physical action of robots generates valuable feedback and has real-world consequences: robots hitting walls, for example, make students instantly aware of program errors. It also provides insight into the realities of physical machines, such as motor control, sensor calibration, and noise.

To help provide a meaningful experience for our computer organization students, we reviewed the course with the following objectives in mind:
• Expand the experience of our students in a manner that enhances their insight, provides a hands-on, visual environment in which to learn, and forms an integrated component for future classes.
• Remove some of the abstraction inherent in the assembly language class; specifically, help enhance the error-detection environment.
• Provide a kinesthetic aspect to our pedagogy.
• Build student expertise early in the program that could lead to research projects and advanced classroom activities later; specifically, in this case, build expertise to support later coursework in intelligent systems and robotics.

As one component in meeting these objectives we, in cooperation with the Computer Science department, the Intelligent Systems Laboratory, and the University Center for Excellence in Teaching, designed a robotics laboratory to support the assembly language portion of the computer organization class, as described in [1]. The balance of this report describes one example project resulting from this environment. Specifically, we describe the results of a student project developing an assembly language fuzzy engine, membership function creation, a fuzzy controller, and the resulting robot behavior in a Linux-based environment. We also describe subsequent software development in C# under Windows, including graphical membership tuning, real-time display of sensor activation, and fuzzy controller system response. Collectively these tools allow for robust controller development, assembly language support, and an environment suitable for effective classroom and public display.

BACKGROUND
Robots have long been recognized for their potential educational utility, with examples ranging from abstract, simulated robots, such as Karel [2] and Turtle [3] for teaching programming and geometry respectively, to competitive events such as robotic soccer tournaments [4]. As the cost of robotics hardware has decreased, their migration into the classroom has accelerated [5, 6].
Driven by the combined goals for this class and future research objectives, as well as software availability, we chose to use off-the-shelf Khepera II robots from K-Team [7].

SIMULATED ROBOT DIAGRAM
The K-Team Khepera II is a small, two-motor robot which uses differential wheel speed for steering. Figure 1 shows a functional diagram of the robot. In addition to the two motors, it includes a series of eight infrared sensors, six along the "front" and two on the "back" of the robot. The robot also comes with an embedded system-call library, a variety of development tools, and several available simulators. The embedded code in the Khepera includes a relatively simple, but adequate, command-level interface which communicates with the host via a standard serial port. This allows students to write their programs using the host instruction set (Intel Pentium in this case), send commands, and receive responses such as sensor values, motor speed, and relative wheel position.

We also chose to provide a Linux-based programming environment to our students by adapting and remastering the Knoppix Linux distribution [9]. Our custom distribution supplemented Knoppix with modified simulators for the Khepera, the interface library (including source code), manuals, and assembler documentation. Collectively, this provides a complete development platform.

The SIM Khepera simulator [8] includes source code in C and provides a workable subset of the native robot command language. It also has the ability to redirect input and output from the graphics display to the physical robot. Figure 2 shows the simulated Khepera robot in a maze environment, and Figure 3 shows an actual Khepera in a physical maze. To provide a seamless interface to the simulator and robots, we modified the original simulator to communicate more effectively through a pair of Linux pipes, and we developed a small custom subroutine library callable from the students' assembly language programs. Assignments for the class range from initial C programs that call the robot routines to assembly language assignments culminating in the robot traversing the maze.

FUZZY CONTROLLER
One approach to robot control, fuzzy logic, attempts to encapsulate important aspects of human decision making. By forming a representation tolerant of vague, imprecise, ambiguous, and perhaps missing information, fuzzy logic enhances the ability to deal with real-world problems. Furthermore, by empirically modeling a system, engineering experience and intuition can be incorporated into the final design.

Typical fuzzy controller design [10] consists of:
• defining the control objectives and criteria;
• determining the input and output relationships;
• creating fuzzy membership functions, along with subsequent rules, to encapsulate a solution from input to output;
• applying any necessary input/output conditioning;
• testing, evaluating, and tuning the resulting system.

Figure 4 illustrates the conversion from sensor input to a fuzzy-linguistic value. Given three fuzzy possibilities, 'too close', 'too far', and 'just right', along with a sensor reading, we can ascertain the degree to which the reading belongs to each of these fuzzy terms. Note that while Figure 4 illustrates a triangular membership set, trapezoids and other shapes are also common.
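To make the fuzzification step concrete, here is a minimal C sketch assuming triangular sets like those of Figure 4. The break points and the raw reading are invented for illustration (Khepera-class IR proximity sensors report roughly 0-1023, larger meaning closer); they are not values from the paper.

    #include <stdio.h>

    /* Degree of membership of x in a triangular set that rises from a
       to its peak at b and falls back to zero at c. */
    double tri_member(double x, double a, double b, double c)
    {
        if (x <= a || x >= c) return 0.0;
        if (x <= b)           return (x - a) / (b - a);
        return (c - x) / (c - b);
    }

    int main(void)
    {
        double reading = 310.0;   /* hypothetical raw IR proximity value */

        /* Illustrative break points; the paper's sets were hand-tuned. */
        double too_far    = tri_member(reading,  -1.0,    0.0,  300.0);
        double just_right = tri_member(reading, 200.0,  400.0,  600.0);
        double too_close  = tri_member(reading, 500.0, 1023.0, 1024.0);

        printf("too_far=%.2f just_right=%.2f too_close=%.2f\n",
               too_far, just_right, too_close);
        return 0;
    }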
Once the inputs are mapped to their corresponding fuzzy sets, the fuzzy attributes are used, expert-system style, to trigger rules governing the consequent actions, in this case, of the robot. For example, a series of rules for a robot may include:
• If the left sensor is too close and the right sensor is too far, then turn right.
• If the left sensor is just right and the forward sensor is too far, then drive straight.
• If the left sensor is too far and the forward sensor is too far, then turn left.
• If the forward sensor is close, then turn right sharply.

The logical operators 'and', 'or', and 'not' are calculated as follows: 'and' represents set intersection and is calculated as the minimum value; 'or' represents set union and is calculated as the maximum value; and 'not' finds the inverse of the set, calculated as 1.0 minus the fitness.

Once inputs have been processed and rules applied, the resulting fuzzy actions must be mapped to real-world control outputs. Figure 5 illustrates this process: the output is computed as the coordinate, along the horizontal axis, of the centroid of the aggregate area of the individual membership sets.
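Read together, the operator definitions and the centroid rule of Figure 5 fit in a few lines of C. The sketch below is illustrative only: the output sets, their range, and the discretisation step are assumptions, not the paper's hand-tuned values.

    #include <stdio.h>

    double f_and(double a, double b) { return a < b ? a : b; } /* intersection: min */
    double f_or (double a, double b) { return a > b ? a : b; } /* union: max        */
    double f_not(double a)           { return 1.0 - a; }       /* inverse: 1 - a    */

    /* Triangular membership, as in the fuzzification sketch earlier. */
    double tri(double x, double a, double b, double c)
    {
        if (x <= a || x >= c) return 0.0;
        return x <= b ? (x - a) / (b - a) : (c - x) / (c - b);
    }

    /* Three illustrative output sets for one wheel speed in [-10, 10]. */
    enum { REV, STOP, FWD, NSETS };
    double out_member(int s, double x)
    {
        if (s == REV)  return tri(x, -11.0, -10.0,  0.0);
        if (s == STOP) return tri(x,  -5.0,   0.0,  5.0);
        return tri(x, 0.0, 10.0, 11.0);
    }

    /* Centroid defuzzification: clip each output set at the fitness the
       rules assigned it, aggregate with max, and return the centroid of
       the aggregate area along the horizontal axis. */
    double centroid(const double fit[NSETS])
    {
        double num = 0.0, den = 0.0;
        for (double x = -10.0; x <= 10.0; x += 0.05) {
            double agg = 0.0;
            for (int s = 0; s < NSETS; s++) {
                double m = out_member(s, x);
                if (m > fit[s]) m = fit[s];  /* clip at rule fitness */
                if (m > agg)    agg = m;     /* aggregate by union   */
            }
            num += x * agg;   /* moment of this slice of area */
            den += agg;       /* the area itself              */
        }
        return den > 0.0 ? num / den : 0.0;
    }

    int main(void)
    {
        /* "If left is too close and right is too far then turn right":
           the rule fires this wheel's FWD set at min(0.7, 0.9). */
        double fit[NSETS] = { 0.0, 0.25, 0.0 };
        fit[FWD] = f_and(0.7, 0.9);
        printf("wheel speed = %.2f\n", centroid(fit));
        return 0;
    }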
ASSEMBLY LANGUAGE IMPLEMENTATION
Two implementations of the fuzzy robot controller were produced. The first was written in assembly language for the Intel CPU architecture under the Linux operating system, the second in C# under Windows to provide a visually intuitive interface for membership set design and public demonstration.

Figure 6 shows an excerpt of the pseudo-assembly language program. The actual program consists of approximately eight hundred lines of hand-coded assembly language. In the assembly language program, subroutine calls are structured with parameters pushed onto the stack. Note that the code for pushing parameters has been edited from this example to conserve space and to illustrate the overall role of the controller. In this code fragment the 'open_pipes' routine establishes contact with the simulator or robot. Once communication is established, a continuous loop obtains sensor values, encodes them as fuzzy inputs, and interprets them through the rule base to linguistic output members, which are then converted to control outputs and sent to the robot. The bulk of the remaining code implements the fuzzy engine itself.

FUZZY CONTROLLER MAIN LOOP
Membership sets were manually defined to allow the robot to detect and track walls, avoid barriers, and negotiate void spaces in its field of operation. Using this controller, both the simulated robot and the actual Khepera successfully traversed a variety of maze configurations.

ASSEMBLY LANGUAGE OBSERVATIONS
While implementing the input fuzzification and output defuzzification in assembly language was tedious compared with the same task in a high-level language, the logic engine proved to be well suited to description in assembly language. The logic rules were defined in a type of pseudo-code using 'and', 'or', and 'not' as operators, with the fuzzy input and output membership sets as parameters. With the addition of input, output, and flow-control operators, the assembly language logic engine simply had to evaluate these pseudo-code expressions in order to map fuzzy input memberships to fuzzy output memberships. Other than storage for the current membership fitness values from the input fuzzification, the only data structure needed by the logic engine is a stack to hold intermediate calculations. This is convenient in assembly language, since the CPU's stack is immediately available along with the necessary stack operators.

There were seven commands implemented by the logic rule interpreter: IN, OUT, AND, OR, NOT, DONE, and EXIT.
• IN reads the current fitness from an input membership set and places the value on the stack.
• OUT assigns the value on the top of the stack as the fitness value of an output membership set if it is greater than the existing fitness value for that set.
• AND performs the intersection operation by replacing the top two elements on the stack with their minimum.
• OR performs the union operation by replacing the top two elements on the stack with their maximum.
• NOT replaces the top value on the stack with its complement.
• DONE pops the top value off the stack to prepare for the next rule.
• EXIT signals the end of the logic rule definition and exits the interpreter.

As an example, the logic rule "If the left sensor is too close and the right sensor is too far, then turn right" might be defined by the following fuzzy logic pseudo-code:

    IN,  left_sensor[TOO_CLOSE]
    IN,  right_sensor[TOO_FAR]
    AND
    OUT, left_wheel[FWD]
    OUT, right_wheel[STOP]
    DONE
    EXIT

By utilizing the existing CPU stack and implementing the logic engine as a pseudo-code interpreter, the assembly language version is capable of handling arbitrarily complicated fuzzy rules composed of the simple logical operators provided.
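A C rendering of that interpreter may make the dispatch loop easier to see. The opcode encoding, table layout, and indices below are assumptions for illustration; the original engine was hand-coded assembly operating on the CPU stack directly.

    #include <stdio.h>

    enum op { IN, OUT, AND, OR, NOT, DONE, EXIT };

    /* One instruction: an opcode plus an index into the input or output
       fitness tables (the index is unused by AND/OR/NOT/DONE/EXIT). */
    struct instr { enum op code; int arg; };

    /* Evaluate one rule program, mapping input fitnesses to output
       fitnesses.  The explicit stack mirrors the CPU stack used by the
       assembly version. */
    void run_rules(const struct instr *p, const double *in, double *out)
    {
        double stk[32];
        int sp = 0;

        for (;; p++) {
            switch (p->code) {
            case IN:   stk[sp++] = in[p->arg];                   break;
            case OUT:  if (stk[sp - 1] > out[p->arg])   /* keep the max */
                           out[p->arg] = stk[sp - 1];            break;
            case AND:  sp--; if (stk[sp] < stk[sp - 1]) /* min of top 2 */
                           stk[sp - 1] = stk[sp];                break;
            case OR:   sp--; if (stk[sp] > stk[sp - 1]) /* max of top 2 */
                           stk[sp - 1] = stk[sp];                break;
            case NOT:  stk[sp - 1] = 1.0 - stk[sp - 1];          break;
            case DONE: sp--;  /* drop value, ready for next rule */ break;
            case EXIT: return;
            }
        }
    }

    int main(void)
    {
        enum { L_TOO_CLOSE, R_TOO_FAR };      /* input set indices  */
        enum { LW_FWD, RW_STOP, NOUT };       /* output set indices */

        /* The same rule as the pseudo-code listing above. */
        struct instr rule[] = {
            { IN, L_TOO_CLOSE }, { IN, R_TOO_FAR }, { AND, 0 },
            { OUT, LW_FWD }, { OUT, RW_STOP }, { DONE, 0 }, { EXIT, 0 },
        };
        double in[] = { 0.7, 0.9 }, out[NOUT] = { 0.0, 0.0 };

        run_rules(rule, in, out);
        printf("left_wheel[FWD]=%.2f right_wheel[STOP]=%.2f\n",
               out[LW_FWD], out[RW_STOP]);
        return 0;
    }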
IMPLEMENTATION
While the assembly language programming was the original focus of the project, ultimately we felt that a more polished user interface was desirable for membership set design, fuzzy rule definition, and controller response monitoring. To provide these facilities, the fuzzy controller was reimplemented in C# under Windows. Figures 7 through 10 illustrate the capabilities of the resulting software. Specifically, Figure 7 illustrates the user interface for membership definition, in this case 'near'. Figure 8 illustrates the interface for defining the actual fuzzy rules. Figure 9 profiles the output response with respect to a series of simulated inputs. Finally, real-time monitoring of the system is also implemented, as illustrated in Figure 10, which shows the robot sensor input values. Since the Khepera simulator was operating-system specific, the C# program controls the robot directly. Again, the robot was successful at navigating the maze using a controller specified with this interface.

SUMMARY
To summarize, we have developed a student-centric development environment for teaching assembly language programming. As one illustration of its potential, we profiled a project implementing a fuzzy-logic engine and controller, along with a subsequent implementation in the C# programming language. Together these projects help to illustrate the viability of a robot-enhanced environment for assembly language programming.

REFERENCES
[1] Wolfer, J. and Rababaah, H. R. A., "Creating a Hands-On Robot Environment for Teaching Assembly Language Programming", Global Conference on Engineering and Technology Education, 2005.
[2] Pattis, R. E., Karel the Robot: A Gentle Introduction to the Art of Programming, 2nd edition, Wiley, 1994.
[3] Abelson, H. and diSessa, A., Turtle Geometry: The Computer as a Medium for Exploring Mathematics, MIT Press, 1996.
[4] Amirijoo, M., Tesanovic, A., and Nadjm-Tehrani, S., "Raising motivation in real-time laboratories: the soccer scenario", in SIGCSE Technical Symposium on Computer Science Education, pp. 265-269, 2004.
[5] Epp, E. C., "Robot control and embedded systems on inexpensive Linux platforms workshop", in SIGCSE Technical Symposium on Computer Science Education, p. 505, 2004.
[6] Fagin, B. and Merkle, L., "Measuring the effectiveness of robots in teaching computer science", in SIGCSE Technical Symposium on Computer Science Education, pp. 307-311, 2003.
[7] K-Team, Khepera robots, accessed 09/06/05.
[8] Michel, O., Khepera Simulator package version 2.0: freeware mobile robot simulator written at the University of Nice Sophia-Antipolis by Olivier Michel. http://diwww.epfl.ch/lami/team/michel/khep-sim, accessed 09/06/05.
[9] Knoppix official site, accessed 09/06/05.
[10] Cox, E., The Fuzzy Systems Handbook, Academic Press, New York, 1999.

English Translation: Mobile Robot

Real-Time Reaction of a Mobile Robot in Unknown Environments Based on a Fuzzy Neural Network

Abstract
Based on a combination of fuzzy inference and neural network systems, this paper proposes a hybrid intelligent method for achieving real-time reaction of a mobile robot in an unknown environment. A neural network organized under fuzzy inference (a fuzzy neural network, FNN) can effectively improve the learning speed of the neural network. The method can be used to control the mobile robot in real time on the basis of its current operating situation. That situation comprises the distances, in several directions, between the robot and the obstacles around it, obtained from ultrasonic sensors; the target source is sensed by simple optical detectors together with the robot's direction of motion. Simulation results show that the method can rapidly capture the fuzzy relations between the inputs and outputs of the robot control system.

1. Introduction
With the rapid development of computing, sensing, actuation, intelligence, and related technologies, autonomous robots with intelligent operating capability have emerged and found wide application in aerospace, underwater, industrial, defense, and other sectors. To improve the efficiency of autonomous operation, the following capabilities are essential: (1) rapidly perceiving the external environment; (2) determining, from the perceived information, the robot's position and the structure of the environment; and (3) in an unknown environment, autonomously planning a path that lets the robot move from one place to another without colliding with obstacles. How to handle uncertain information is therefore one of the great challenges in enabling a robot to avoid obstacles automatically, safely, and effectively.

Current approaches to this problem fall into three broad categories: model-based methods, sensor-based methods, and hybrid methods. Model-based methods include roadmaps [2, 3], cell decomposition [3, 4], potential fields [3], neural networks [5], and so on. With these techniques, an autonomous robot can reliably find a collision-free path between a start position and a goal in an environment containing obstacles. However, such methods depend on an accurate model of the environment and are not suited to dynamically changing or unknown environments. Sensor-based methods comprise edge detection [6], wall following [7], fuzzy logic [8], and the like. On the basis of sensor data, an autonomous robot can travel safely in dynamic or unknown environments; however, this approach is overly sensitive to sensor accuracy, and the frequent misreadings of ultrasonic sensors can produce unstable robot behavior. Hybrid methods [9] combine the reliability of the model-based and sensor-based approaches: typically a path is first planned with a model-based method, after which a sensor-based controller guides the robot to follow or modify that path while avoiding obstacles in the unknown environment. Clearly, the key issue in applying a hybrid method is how to combine the path information with the sensor data.

Foreign-Literature Translation: The Intelligent Logistics Handling Robot (AGV)

Intelligent Logistics Handling Robot: The AGV

Handling is one of the functional elements of logistics; it occurs with high frequency in logistics systems and accounts for an important part of logistics cost. In the United States, handling accounts for 20-30% of industrial production cost; in German logistics enterprises, material handling accounts for one third of turnover; in Japan, logistics handling costs account for % of GNP; and in China, production-logistics handling accounts for about % of manufacturing cost. Around the world, mechanized and intelligent handling technology and equipment have therefore long been sought. The AGV (automated guided vehicle), a flexible and intelligent logistics handling robot, began to be used in the warehousing industry in the 1950s and is now in general use in manufacturing, ports, terminals, and other areas.

The most notable feature of the AGV is that it is unmanned. The AGV is equipped with an automatic guidance system which keeps it moving automatically along a prescribed route without a human driver, delivering goods or materials from a starting point to a destination automatically. A further feature of the AGV is good flexibility together with a high degree of automation and intelligence: the vehicle's running path can be changed flexibly as storage locations or production processes change, and the cost of changing the path is low compared with traditional carousels and rigid transmission lines. The AGV is generally equipped with a handling mechanism and an automatic interface to other logistics equipment, so that the pickup, transport, and delivery of goods and materials are fully automated. Moreover, the AGV has the character of clean production: it is powered by an on-board battery, runs without noise or pollution, and can be used in many places that require a clean working environment.

Ⅰ. AGV types
Since its invention the AGV has a history of some fifty years, and with the expansion of its application areas its types and forms have diversified. AGVs are commonly classified by navigation method into the following categories.

1. Electromagnetic-induction-guided AGV
In electromagnetic-induction guidance, a cable is buried in the ground along the predetermined route. When a high-frequency current flows through the wire, an alternating electromagnetic field arises around it. Two electromagnetic sensors mounted symmetrically on the AGV receive this field, and the difference in the intensity of their signals reflects how far the AGV has deviated from the path. The control system steers the vehicle on the basis of this deviation, and continuous dynamic closed-loop control ensures stable tracking of the prescribed path (a sketch of one such control step follows this list of types). This electromagnetic-induction navigation is used in the vast majority of commercial AGV systems, and applies particularly to large and medium-sized AGVs.

2. Laser-guided AGV
This type of AGV carries a rotating laser scanner, and highly reflective positioning markers are installed on walls or pillars along the running path. The AGV relies on the laser beam emitted by the scanner and reflected back by the surrounding markers; the on-board computer calculates the vehicle's current position and direction of motion, and corrects the position by comparison with a built-in digital map, thereby achieving automatic transport. This type of AGV is increasingly prevalent. On the same guidance principle, if the laser scanner is replaced with infrared or ultrasonic transmitters, the laser-guided AGV becomes an infrared-guided or ultrasound-guided AGV.

3. Vision-guided AGV
The vision-guided AGV is a type undergoing rapid development toward maturity. It is equipped with a CCD camera and sensors, and its on-board computer holds an image database of the environment along the intended route. As the AGV moves, the camera dynamically acquires images of the surroundings, which are compared with the image database to determine the current position and to decide on the next stretch of travel. Such an AGV needs no physical path to be laid, so in theory it has the best guidance flexibility, and with the rapid development of computer image acquisition, storage, and processing, practical AGVs of this kind are growing in number.

In addition, there are ferromagnetic-gyro inertial-guided AGVs, optically guided AGVs, and various other forms.
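The closed-loop steering described for electromagnetic guidance comes down to a proportional law on the difference between the two coil signals. The C sketch below is an illustration under assumed conventions; the gain, speed units, and sign of the error are invented, not taken from any vendor's controller.

    #include <stdio.h>

    /* One control cycle of electromagnetic line guidance: the difference
       between the two symmetrically mounted coil signals measures the
       drift off the buried wire; differential wheel speeds steer back. */
    void guidance_step(double kp, double base_speed,
                       double left_coil, double right_coil,
                       double *left_wheel, double *right_wheel)
    {
        double error = left_coil - right_coil;  /* 0 when centred on wire */
        double steer = kp * error;

        *left_wheel  = base_speed - steer;
        *right_wheel = base_speed + steer;
    }

    int main(void)
    {
        double lw, rw;
        guidance_step(0.5, 1.0, 0.8, 0.6, &lw, &rw); /* drifted right of wire */
        printf("left=%.2f right=%.2f\n", lw, rw);    /* steers back to the left */
        return 0;
    }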
Ⅱ. Applications of the AGV
1. Warehousing
Warehousing was the first place the AGV was applied. In 1954 the first AGV was put into service for the automatic transport of stored goods in the warehouse of the Mercury Motor Freight company in South Carolina, in the United States. At present some 20,000 AGVs of various kinds operate in about 2,100 large and small warehouses around the world. In 2000 the Videocon Group put nine AGVs into operation in its warehouse, forming a flexible automatic handling system that successfully completed the daily task of conveying and handling 23,400 items of goods and parts.

2. Manufacturing
AGVs serve manufacturing production lines by completing material-handling tasks efficiently, accurately, and flexibly. Multiple AGVs can form a flexible handling and logistics system whose handling route can be adjusted promptly as the production process is adjusted, so that one production line can make more than ten types of product, greatly improving production flexibility and enterprise competitiveness. In 1974, to improve the flexibility of its transport system, Volvo's Kalmar car assembly plant based its automatic car assembly line on AGVs, using AGVs capable of carrying car bodies on the assembly line; assembly time fell by 20%, assembly faults by 39%, the investment payback period by 57%, and labor by 5%. Today AGVs are widely used in the manufacturing and assembly lines of the world's major car makers, such as General Motors, Toyota, Chrysler, and Volkswagen.

In recent years, as the basic handling tool of CIMS, the AGV has spread into machining, home-appliance production, microelectronics manufacturing, tobacco, and other industries, and production and processing has become the area in which AGVs are most widely applied.

3. Post offices, libraries, ports, and airports
In post offices, libraries, terminals, and airports, the items to be moved vary, the work is dynamic, the workflow is adjusted repeatedly, and the handling process itself is uniform; the AGV's concurrent operation, automation, intelligence, and flexibility suit the handling requirements of these settings well. Sweden began using AGVs at the Stockholm post office in 1983, Japan at the Tama post office in Tokyo in 1988, and China at the Shanghai postal hub in 1990, to carry postal items. At the Port of Rotterdam in the Netherlands, 50 AGVs known as "yard tractors" complete the repetitive task of carrying containers from the ship's side to warehouses several hundred yards away.
4. Tobacco, medicine, food, and chemicals
In industries such as tobacco, pharmaceuticals, food, and chemicals, where handling must be clean, safe, and free of polluting emissions, the AGV is also a focus of application. Many cigarette makers, such as Philip Morris and the royal tobacco companies, use laser-guided AGVs to handle palletized goods.

5. Dangerous places and special applications
Militarily, AGVs based on automatic driving and integrated with detection, demolition, and other equipment can be used for battlefield reconnaissance of minefields and positions; the British military has been developing the MINDER Recce, a reconnaissance vehicle with mine-detection and destruction capability and automatic route verification. In steel plants, AGVs deliver furnace charge, reducing labor intensity. In nuclear power plants, and in storage sites preserved by nuclear radiation, AGVs make deliveries so that people avoid the danger of radiation. In film and photographic-material warehouses, AGVs transport materials and semi-finished products accurately and reliably in the dark.

Ⅲ. AGV routing and scheduling methods
Route optimization and real-time scheduling of AGVs is a current research hotspot in the field. The methods in practical use are the following.

1. Mathematical programming
Choosing the best task and the best path for an AGV can be cast as a task-scheduling problem, and mathematical programming is the traditional way of obtaining optimal solutions to scheduling problems. The solution process is in fact an optimization under resource constraints. Practical methods include integer programming, dynamic programming, and Petri-net methods. For small-scale scheduling these methods give good results, but as the scale of operations grows the solution time grows exponentially, which limits their application to large-scale online optimization and scheduling.

2. Simulation
Simulation models the actual scheduling environment so that a candidate AGV scheduling scheme can be exercised on a computer. Users and researchers can test and monitor a scheduling scheme in simulation and, on that basis, change the selection and scheduling strategy. Practical techniques include discrete-event simulation, object-oriented simulation, and three-dimensional simulation; much AGV software supports scheduling simulation, and among it Lanner Group's Witness software can quickly build simulation models, perform 3D simulation, and display and analyse the results.

3. Intelligent methods
These describe the AGV scheduling process as a search for an optimal solution within the set of solutions that satisfy the constraints. Knowledge-representation techniques are used to encode what is known about the problem, and search techniques are applied to find a satisfactory solution. Specific methods include expert systems, genetic algorithms, heuristic algorithms, and neural-network algorithms. Of these, the expert system sees the most practical use: it abstracts the experience of scheduling experts into rules the system can understand and execute, and uses conflict-resolution techniques to handle the conflicts that arise as the rule set for large-scale AGV scheduling expands.

Because a neural network offers parallel computation, distributed storage of knowledge, and strong adaptability, it is a very promising approach to large-scale AGV scheduling. Neural-network methods have already been applied successfully to the NP-hard travelling-salesman problem.
A neural network solves such optimization problems by encoding the candidate solutions into the energy function of a discrete dynamical system and seeking the optimizing solution by minimising that energy function.

A genetic algorithm imitates the natural process of biological evolution and genetic variation to form an optimal solution. Applied to the optimization of AGV scheduling, it first encodes a certain number of candidate scheduling schemes as chromosomes and computes each chromosome's fitness (for example, the shortest running path); then, through repeated reproduction, crossover, and mutation, it finds the chromosomes of high fitness, which are the optimal solutions of the AGV scheduling problem. A toy version is sketched below.

Using any single method to solve the scheduling problem has its shortcomings, and at present the integration of several methods for AGV scheduling is itself a research hotspot. For example, an expert system can be integrated with a genetic algorithm, with the expert knowledge folded into the initial population of chromosomes to accelerate the solution and improve its quality.
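To make the genetic-algorithm formulation concrete, here is a toy C sketch: each chromosome encodes the order in which one AGV visits a set of handling tasks, fitness is the total path length (shorter is better), and new generations arise through selection, order crossover, and swap mutation. The task coordinates, population size, and rates are invented for illustration.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <math.h>

    #define NTASKS 8
    #define POP    30
    #define GENS   200

    /* Illustrative pickup points an AGV must visit, as grid coordinates. */
    static const double px[NTASKS] = { 0, 4, 1, 6, 3, 8, 2, 7 };
    static const double py[NTASKS] = { 0, 2, 5, 1, 7, 4, 8, 6 };

    typedef struct { int order[NTASKS]; double len; } chrom;

    /* Fitness: total path length of visiting the tasks in chromosome
       order (shorter is fitter, matching "the shortest running path"). */
    static double path_len(const int *o)
    {
        double d = 0.0;
        for (int i = 1; i < NTASKS; i++)
            d += hypot(px[o[i]] - px[o[i-1]], py[o[i]] - py[o[i-1]]);
        return d;
    }

    static void mutate(int *o)          /* swap two tasks at random */
    {
        int i = rand() % NTASKS, j = rand() % NTASKS, t = o[i];
        o[i] = o[j]; o[j] = t;
    }

    /* Order crossover: copy a prefix from parent a, then fill the rest
       with the remaining tasks in the order they appear in parent b. */
    static void crossover(const int *a, const int *b, int *child)
    {
        int cut = 1 + rand() % (NTASKS - 1), n = cut;
        memcpy(child, a, cut * sizeof *a);
        for (int i = 0; i < NTASKS && n < NTASKS; i++) {
            int used = 0;
            for (int j = 0; j < cut; j++) if (child[j] == b[i]) used = 1;
            if (!used) child[n++] = b[i];
        }
    }

    static const chrom *tournament(const chrom *pop)  /* selection */
    {
        const chrom *a = &pop[rand() % POP], *b = &pop[rand() % POP];
        return a->len < b->len ? a : b;
    }

    int main(void)
    {
        static chrom pop[POP], next[POP];
        srand(1);
        for (int i = 0; i < POP; i++) {      /* random initial schedules */
            for (int j = 0; j < NTASKS; j++) pop[i].order[j] = j;
            for (int j = 0; j < NTASKS; j++) mutate(pop[i].order);
            pop[i].len = path_len(pop[i].order);
        }
        for (int g = 0; g < GENS; g++) {
            for (int i = 0; i < POP; i++) {
                crossover(tournament(pop)->order, tournament(pop)->order,
                          next[i].order);
                if (rand() % 100 < 20) mutate(next[i].order);
                next[i].len = path_len(next[i].order);
            }
            memcpy(pop, next, sizeof pop);
        }
        chrom *best = pop;
        for (int i = 1; i < POP; i++)
            if (pop[i].len < best->len) best = &pop[i];
        printf("best schedule length %.2f\n", best->len);
        return 0;
    }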

Foreign-Literature Translation: A Mobile Robot Based on Fuzzy Control

1998 IEEE International Conference on Robotics and Automation, Leuven, Belgium, May 1998

A Practical Approach to Feedback Control for a Mobile Robot with Trailer

F. Lamiraux and J. P. Laumond
LAAS-CNRS, Toulouse, France
{florent, jpl}@laas.fr

Abstract: This paper presents an effective method for controlling a mobile robot with trailer. Both trajectory tracking and path following are addressed. Trajectory tracking is then made iterative, and perturbations are taken into account within path following. Experimental results with the mobile robot Hilare illustrate the effectiveness of our method.

1 Introduction
Over the past eight years a great deal of work has been devoted to the motion control of nonholonomic systems. Brockett [2] posed a challenging task for such systems, stabilization about a configuration, proving that it cannot be achieved by a simple continuous state feedback. As alternatives, time-varying feedback [10, 4, 11, 13, 14, 15, 18] and discontinuous feedback [3] have been proposed; a survey of the motion control of mobile robots can be found in [5]. Trajectory tracking for nonholonomic systems, on the other hand, is not subject to Brockett's condition, which makes it an easier task, and many works have addressed this problem for the special case of mobile robots [6, 7, 8, 12, 16]. All of these control laws work under the same assumption: the evolution of the system is perfectly known, and no perturbation makes the system deviate from its trajectory.

Few papers dealing with mobile robot control take perturbations of the kinematic equations into account. However, [1] proposed a scheme for stabilizing a car about a configuration that is robust against a class of perturbation vector fields, built on iterative trajectory tracking. The presence of obstacles makes the task of reaching a prescribed path harder, and requires path planning before any motion is executed. In this paper we propose, on the basis of iterative trajectory tracking, a robust scheme that leads a robot with a trailer along a prescribed path. The trajectories are computed by the motion planner described in [17] and therefore avoid the obstacles that were given as input. In the following we give no development concerning motion planning, referring the reader to that reference for details. Moreover, we assume that the execution of a given trajectory is subject to perturbations. The model we chose for these perturbations is very simple and very general; it has some points in common with [1].

Fuzzy Control and Fuzzy Adaptive PID Control for Mobile Robots

A mobile robot is an integrated system combining environment perception, dynamic decision-making and planning, and behavior control and execution. It can accept human commands, run pre-set programs, and even act with a degree of autonomy according to principles formulated with artificial-intelligence techniques. It represents the highest achievement of mechatronics, bringing together sensor technology, information technology, electronic engineering, automatic control, artificial intelligence, and other disciplines. Mobile robots are applied in industry, agriculture, medicine, services, national defense, and many other fields, and as their performance keeps improving, their range of application will widen further.

However, mobile robots still face problems and challenges while moving. The first problem a moving robot faces is acquiring environment and map data; the next is being able, while moving, to recognise and avoid obstacles and to replan its route. Because the obstacles met along the way are not fixed, complex and changing obstacles create great difficulty for avoidance, and in a constantly changing environment the robot may lose its way. At the same time, the robot's own capacity for rigorous logical reasoning still needs improvement, and keeping the robot within its prescribed activity area and route remains a problem to be solved. In the future, intelligent robots will cooperate with humans more and more, extending and partly replacing human mental labor while improving their own adaptability and autonomy.

2 Fuzzy control strategies for mobile robots
2.1 The development of mobile robot control theory
Classified by mode of locomotion, mobile robots divide into wheeled, legged, tracked, and other types. During development it was found that a mobile robot's actual wheel speed differs from the target value, making it hard for the robot to advance in the designed direction and at the designed speed. Take a wheeled mobile robot as an example: the robot studied here has two driven front wheels and two castor wheels; the front wheels are controlled by separate DC motors, and movement is controlled according to the motor speeds and the difference between the left and right wheels. However, even setting aside the differences in the parameters of the two DC motors themselves ... Today the main automatic-control methods for mobile robots are of three kinds: classical control, modern control, and intelligent control. Classical control is the principal approach for linear or time-invariant systems, in which the system's variables are fixed; it achieves model-based control mainly through transfer functions, and its most classical form is PID control (a sketch follows below). However, besides linear systems there are also time-varying systems, in which the relation between the system's input and output changes over time, and from this, modern control theory arose. Both theories are put into practice by establishing state equations.
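For reference, the PID law mentioned above sums proportional, integral, and derivative action on the control error. A minimal discrete-time sketch in C for one wheel's speed loop follows; the gains, sample time, and toy wheel response are illustrative assumptions, not values from this text.

    #include <stdio.h>

    typedef struct {
        double kp, ki, kd;   /* controller gains         */
        double dt;           /* sample period in seconds */
        double integral;     /* accumulated error        */
        double prev_error;   /* error at the last sample */
    } pid;

    /* One control step: target and measured speeds in, drive command out. */
    double pid_step(pid *c, double target, double measured)
    {
        double error = target - measured;
        c->integral += error * c->dt;
        double deriv = (error - c->prev_error) / c->dt;
        c->prev_error = error;
        return c->kp * error + c->ki * c->integral + c->kd * deriv;
    }

    int main(void)
    {
        pid c = { 1.2, 0.8, 0.05, 0.01, 0.0, 0.0 };  /* illustrative gains */
        double speed = 0.0;
        for (int i = 0; i < 5; i++) {
            double cmd = pid_step(&c, 1.0, speed);   /* chase 1.0 unit/s   */
            speed += 0.3 * cmd * c.dt;               /* toy wheel response */
            printf("step %d: cmd=%.3f speed=%.4f\n", i, cmd, speed);
        }
        return 0;
    }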

Foreign-Literature Translation: Industrial Robots

INDUSTRIAL ROBOTS

There are a variety of definitions of the term robot. Depending on the definition used, the number of robot installations worldwide varies widely. Numerous single-purpose machines used in manufacturing plants might appear to be robots. These machines are hardwired to perform a single function and cannot be reprogrammed to perform a different one. Such single-purpose machines do not fit the definition of industrial robots that is becoming widely accepted. This definition was developed by the Robot Institute of America:

A robot is a reprogrammable multifunctional manipulator designed to move material, parts, tools, or specialized devices through variable programmed motions for the performance of a variety of tasks.

Note that this definition contains the words reprogrammable and multifunctional. It is these two characteristics that separate the true industrial robot from the various single-purpose machines used in modern manufacturing firms. The term "reprogrammable" implies two things: the robot operates according to a written program, and this program can be rewritten to accommodate a variety of manufacturing tasks. The term "multifunctional" means that the robot can, through reprogramming and the use of different end-effectors, perform a number of different manufacturing tasks. Definitions written around these two critical characteristics are becoming the accepted definitions among manufacturing professionals.

The first articulated arm came about in 1951 and was used by the U.S. Atomic Energy Commission. In 1954, the first programmable robot was designed by George Devol. It was based on two important technologies: (1) numerical control (NC) technology and (2) remote manipulation technology. Numerical control technology provided a form of machine control ideally suited to robots. It allowed for the control of motion by stored programs. These programs contain data points to which the robot sequentially moves, timing signals to initiate action and to stop movement, and logic statements to allow for decision making. Remote manipulator technology allowed a machine to be more than just another NC machine. It allowed such machines to become robots that can perform a variety of manufacturing tasks in both inaccessible and unsafe environments. By merging these two technologies, Devol developed the first industrial robot, an unsophisticated programmable materials-handling machine.

The first commercially produced robot was developed in 1959. In 1962, the first industrial robot to be used on a production line was installed by General Motors Corporation. This robot was produced by Unimation. A major step forward in robot control occurred in 1973 with the development of the T-3 industrial robot, controlled by a minicomputer.

Numerical control and remote manipulator technology prompted the wide-scale development and use of industrial robots. But major technological developments do not take place simply because of such new capabilities; something must provide the impetus for taking advantage of them. In the case of industrial robots, the impetus was economics. The rapid inflation of wages experienced in the 1970s tremendously increased the personnel costs of manufacturing firms. At the same time, foreign competition became a serious problem for U.S. manufacturers. Foreign manufacturers who had undertaken automation on a wide-scale basis, such as those in Japan, began to gain an increasingly large share of the U.S. and world market for manufactured goods, particularly automobiles. Through a variety of automation techniques, including robots, Japanese manufacturers, beginning in the 1970s, were able to produce better automobiles more cheaply than nonautomated U.S. manufacturers. Consequently, in order to survive, U.S. manufacturers were forced to consider any technological development that could help improve productivity. It became imperative to produce better products at lower costs in order to be competitive with foreign manufacturers. Other factors, such as the need to find better ways of performing dangerous manufacturing tasks, contributed to the development of industrial robots. However, the principal rationale has always been, and still is, improved productivity.

One of the principal advantages of robots is that they can be used in settings that are dangerous to humans. Welding and painting are examples of applications where robots can be used more safely than humans. Even though robots are closely associated with safety in the workplace, they can, in themselves, be dangerous. Robots and robot cells must be carefully designed and configured so that they do not endanger human workers and other machines. Robot work envelopes should be accurately calculated, and a danger zone surrounding the envelope clearly marked off. Red flooring strips and barriers can be used to keep human workers out of a robot's work envelope. Even with such precautions it is still a good idea to have an automatic shutdown system in situations where robots are used. Such a system should have the capacity to sense the need for an automatic shutdown of operations. Fault-tolerant computers and redundant systems can be installed to ensure proper shutdown of robotic systems and a safe environment.

The components of a robot system can be discussed either from a physical point of view or from a systems point of view. Physically, we would divide the system into the robot, the power system, and the controller (computer). Likewise, the robot itself could be partitioned anthropomorphically into base, shoulder, elbow, wrist, gripper, and tool. Most of these terms require little explanation. Consequently, we will describe the components of a robot system from the point of view of information transfer: what information or signal enters the component, what logical or arithmetic operation the component performs, and what information or signal the component produces. It is important to note that the same physical component may perform many different information-processing operations (e.g., a central computer performs many different calculations on different data); likewise, two physically separate components may perform identical information operations (e.g., the shoulder and elbow actuators both convert signals to motion in very similar ways).

Actuator. Associated with each joint on the robot is an actuator which causes that joint to move. Typical actuators are electric motors and hydraulic cylinders. Typically, a robot system will contain six actuators, since six are required for full control of position and orientation. Many robot applications do not require this full flexibility, and consequently robots are often built with five or fewer actuators.

Sensor. To control an actuator, the computer must have information regarding the position and possibly the velocity of the actuator. In this context, the term position refers to a displacement from some arbitrary zero reference point for that actuator. For example, in the case of a rotary actuator, "position" would really be the angular position, measured in radians. Many types of sensors can provide indications of position and velocity. The various types of sensors require different mechanisms for interfacing to the computer. In addition, the industrial use of the manipulator requires that the interface be protected from the harsh electrical environment of the factory. Sources of electrical noise such as arc welders and large motors can easily make a digital system useless unless care is taken in the design and construction of the interface.

Computation. We could easily have labeled the computation module "computer", as most of the functions to be described are typically performed by a digital computer. However, many of the functions may be performed in dedicated custom hardware or in networks of computers. We will thus discuss the computational component as if it were a simple computer, recognizing that the need for real-time control may require special equipment and that some of this equipment may even be analog, although the current trend is toward fully digital systems. One further note: we will tend to avoid the term microprocessor in this book and simply say computer, although many current robot manufacturers use one or more microprocessors in their systems. The computation component performs the following operations:

Servo. Given the current position and/or velocity of an actuator, determine the appropriate drive signal to move that actuator toward its desired position. This operation must be performed for each actuator.

Kinematics. Given the current state of the actuators (position and velocity), determine the current state of the gripper. Conversely, given a desired state of the hand, determine the desired state for each actuator. (A sketch follows this list.)

Dynamics. Given knowledge of the loads on the arm (inertia, friction, gravity, acceleration), use this information to adjust the servo operation to achieve better performance.

Workplace Sensor Analysis. Given knowledge of the task to be performed, determine appropriate robot motion commands. This may include analyzing a TV picture of the workplace or measuring and compensating for forces applied at the hand.

In addition to these easily identified components, there are also supervisory operations such as path planning and operator interaction.
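As one concrete instance of the kinematics operation described above, actuator states in and gripper state out, here is forward kinematics for a hypothetical planar two-link arm in C. The two-joint geometry is an assumption made for brevity; as noted above, full control of position and orientation takes six actuators.

    #include <math.h>
    #include <stdio.h>

    /* Forward kinematics of a planar two-link arm: joint angles q1, q2
       (radians) and link lengths l1, l2 (metres) to gripper position. */
    void fk2(double l1, double l2, double q1, double q2,
             double *x, double *y)
    {
        *x = l1 * cos(q1) + l2 * cos(q1 + q2);
        *y = l1 * sin(q1) + l2 * sin(q1 + q2);
    }

    int main(void)
    {
        double x, y;
        fk2(0.5, 0.4, 0.785, 0.524, &x, &y);  /* about 45 and 30 degrees */
        printf("gripper at (%.3f, %.3f)\n", x, y);
        return 0;
    }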

Fuzzy Control of Mobile Robot Path Tracking with a Temporary Path

Fuzzy Control of Mobile Robot Path Tracking with a Temporary Path
Sun Tao, Shi Wuxi

Abstract: A fuzzy control scheme of path tracking for mobile robots with uncertainty is presented. A temporary path is designed for the path tracking, which makes the robot move along the designed temporary path first; when it comes near the desired path, the robot then follows the desired path. The whole tracking process is controlled by one fuzzy controller, which reduces the computation burden, and an integration term is added to the controller to eliminate the steady-state error. The performance of the proposed approach is demonstrated through simulation and experimental examples.

Journal: Computer Engineering and Applications, 2013, issue 13, pp. 228-233 (6 pages)
Keywords: temporary path; tracking control; fuzzy control
Authors: Sun Tao and Shi Wuxi, School of Electrical Engineering and Automation, Tianjin Polytechnic University, Tianjin 300387, China
Text language: Chinese; CLC classification: TP242.6

In recent years many researchers have studied the tracking-control problem of nonholonomic mobile robot systems extensively [1-6]. According to whether the reference trajectory is a function of time, tracking control is divided into trajectory tracking and path tracking.

Foreign-Literature Translation (English and Chinese): A Visual-Sensor Model for Mobile Robot Localisation

A Visual-Sensor Model for Mobile Robot Localisation

Matthias Fichtner, Axel Großmann
Artificial Intelligence Institute, Department of Computer Science
Technische Universität Dresden
Technical Report WV-03-03/CL-2003-02

Abstract
We present a probabilistic sensor model for camera-pose estimation in hallways and cluttered office environments. The model is based on the comparison of features obtained from a given 3D geometrical model of the environment with features present in the camera image. The techniques involved are simpler than state-of-the-art photogrammetric approaches. This allows the model to be used in probabilistic robot localisation methods. Moreover, it is very well suited for sensor fusion. The sensor model has been used with Monte Carlo localisation to track the position of a mobile robot in a hallway navigation task. Empirical results are presented for this application.

1 Introduction
The problem of accurate localisation is fundamental to mobile robotics. To solve complex tasks successfully, an autonomous mobile robot has to estimate its current pose correctly and reliably. The choice of the localisation method generally depends on the kind and number of sensors, the prior knowledge about the operating environment, and the computing resources available. Recently, vision-based navigation techniques have become increasingly popular [3]. Among the techniques for indoor robots, we can distinguish methods that were developed in the field of photogrammetry and computer vision, and methods that have their origin in AI robotics.

An important technical contribution to the development of vision-based navigation techniques was the work by [10] on the recognition of 3D objects from unknown viewpoints in single images using scale-invariant features. Later, this technique was extended to global localisation and simultaneous map building [11].

The FINALE system [8] performed position tracking by using a geometrical model of the environment and a statistical model of uncertainty in the robot's pose given the commanded motion. The robot's position is represented by a Gaussian distribution and updated by Kalman filtering. The search for corresponding features in camera image and world model is optimised by projecting the pose uncertainty into the camera image.

Monte Carlo localisation (MCL) based on the condensation algorithm has been applied successfully to tour-guide robots [1]. This vision-based Bayesian filtering technique uses a sampling-based density representation. In contrast to FINALE, it can represent multi-modal probability distributions. Given a visual map of the ceiling, it localises the robot globally using a scalar brightness measure. [4] presented a vision-based MCL approach that combines visual distance features and visual landmarks in a RoboCup application. As their approach depends on artificial landmarks, it is not applicable in office environments.

The aim of our work is to develop a probabilistic sensor model for camera-pose estimation. Given a 3D geometrical map of the environment, we want to find an approximate measure of the probability that the current camera image has been obtained at a certain place in the robot's operating environment. We use this sensor model with MCL to track the position of a mobile robot navigating in a hallway.
Possibly, it can be used also for localisation in cluttered office environments and for shape-based object detection.

On the one hand, we combine photogrammetric techniques for map-based feature projection with the flexibility and robustness of MCL, such as the capability to deal with localisation ambiguities. On the other hand, the feature-matching operation should be sufficiently fast to allow sensor fusion. In addition to the visual input, we want to use the distance readings obtained from sonars and laser to improve localisation accuracy.

The paper is organised as follows. In Section 2, we discuss previous work. In Section 3, we describe the components of the visual sensor model. In Section 4, we present experimental results for position tracking using MCL. We conclude in Section 5.

2 Related Work
In classical approaches to model-based pose determination, we can distinguish two interrelated problems. The correspondence problem is concerned with finding pairs of corresponding model and image features. Before this mapping takes place, the model features are generated from the world model using a given camera pose. Features are said to match if they are located close to each other. The pose problem, in turn, consists of finding the 3D camera coordinates with respect to the origin of the world model, given the pairs of corresponding features [2]. Apparently, the one problem requires the other to be solved beforehand, which renders any solution to the coupled problem very difficult [6].

The classical solution to the problem above follows a hypothesise-and-test approach:
(1) Given a camera pose estimate, groups of best-matching feature pairs provide initial guesses (hypotheses).
(2) For each hypothesis, an estimate of the relative camera pose is computed by minimising a given error function defined over the associated feature pairs.
(3) Now that a more accurate pose estimate is available for each hypothesis, the remaining model features are projected onto the image using the associated camera pose. The quality of the match is evaluated using a suitable error function, yielding a ranking among all hypotheses.
(4) The highest-ranking hypothesis is selected.
Note that the correspondence problem is addressed by steps (1) and (3), and the pose problem by (2) and (4).

The performance of the algorithm will depend on the type of features used, e.g., edges, line segments, or colour, and on the choice of the similarity measure between image and model, here referred to as the error function. Line segments are the feature type of our choice, as they can be detected comparatively reliably under changing illumination conditions. As world model, we use a wire-frame model of the operating environment, represented in VRML. The design of a suitable similarity measure is far more difficult.

In principle, the error function is based on the differences in orientation between corresponding line segments in image and model, their distance, and their difference in length, in order of decreasing importance, in consideration of all feature pairs present. This has been established in the following three common measures [10]. e3D is defined by the sum of distances between model line endpoints and the corresponding plane given by the camera origin and an image line; this measure strongly depends on the distance to the camera, due to back-projection. e2D,1, referred to as infinite image lines, is the sum over the perpendicular distances of projected model line endpoints to the corresponding, infinitely extended lines in the image plane.
The dual measure, e2D,2, referred to as infinite model lines, is the sum over all distances of image line endpoints to the corresponding, infinitely extended model lines in the image plane.

To restrict the search space in the matching step, [10] proposed to constrain the number of possible correspondences for a given pose estimate by combining line features into perceptual, quasi-invariant structures beforehand. Since these initial correspondences are evaluated by e2D,1 and e2D,2, high demands are imposed on the accuracy of the initial pose estimate and of the image-processing operations, including the removal of distortion and noise and the feature extraction; it is assumed that all visible model lines are obtained at full length. [12, 9] demonstrated that a few outliers can already severely affect the initial correspondences in Lowe's original approach, due to frequent truncation of lines caused by bad contrast, occlusion, or clutter.

3 Sensor Model
Our approach was motivated by the question whether a solution to the correspondence problem can be avoided in the estimation of the camera pose. Instead, we propose to perform a relatively simple, direct matching of image and model features, and we want to investigate the level of accuracy and robustness one can achieve this way.

The processing steps involved in our approach are depicted in Figure 1. After removing the distortion from the camera image, we use the Canny operator to extract edges. This operator is relatively tolerant to changing illumination conditions. From the edges, line segments are identified. Each line is represented as a single point (ρ, θ) in the 2D Hough space given by ρ = x cos θ + y sin θ; the coordinates of the endpoints are neglected. In this representation, truncated or split lines will have similar coordinates in the Hough space. Likewise, the lines in the 3D map are projected onto the image plane, using an estimate of the camera pose and taking the visibility constraints into account, and are represented as coordinates in the Hough space as well. We have designed several error functions to be used as similarity measures in the matching step. They are described in the following.

Centred match count (CMC)
The first similarity measure is based on the distance of line segments in the Hough space. We consider as possible matches only those image features that lie within a rectangular cell in the Hough space centred around the model feature. The matches are counted and the resulting sum is normalised:

    sCMC = #{ he in He : there exists hm in Hm with p(he, hm) } / |He|,

where the predicate p defines a valid match using the distance parameters (tρ, tθ), i.e., p(he, hm) holds iff |ρe − ρm| ≤ tρ and |θe − θm| ≤ tθ, and # counts the matches. The mapping from the expectation (model features) to the measurement (image features) accounts for the fact that the measure should be invariant with respect to objects not modelled in the 3D map or unexpected changes in the operating environment; invariance to the number of visible features is obtained by normalisation. Generally speaking, this similarity measure computes the proportion of expected model Hough points he in He that are confirmed by at least one measured image Hough point hm in Hm falling within the tolerance (tρ, tθ). Note that neither endpoint coordinates nor lengths are considered here. A small C sketch of this measure follows below.
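The sketch below is a direct rendering of the CMC computation; the Hough-point struct and tolerance values are illustrative, and angle wrap-around at the ends of the θ range is ignored for brevity.

    #include <stdio.h>

    typedef struct { double rho, theta; } hpoint;  /* line as Hough point */

    /* Centred match count: the fraction of expected (model) Hough points
       confirmed by at least one measured (image) Hough point within the
       rectangular tolerance (t_rho, t_theta). */
    double cmc(const hpoint *exp_pts, int ne,
               const hpoint *mea_pts, int nm,
               double t_rho, double t_theta)
    {
        int hits = 0;
        for (int i = 0; i < ne; i++)
            for (int j = 0; j < nm; j++) {
                double dr = exp_pts[i].rho   - mea_pts[j].rho;
                double dt = exp_pts[i].theta - mea_pts[j].theta;
                if (dr < 0) dr = -dr;
                if (dt < 0) dt = -dt;
                if (dr <= t_rho && dt <= t_theta) { hits++; break; }
            }
        return ne > 0 ? (double)hits / ne : 0.0;
    }

    int main(void)
    {
        hpoint model[] = { { 10.0, 0.50 }, { 42.0, 1.60 } };
        hpoint image[] = { { 11.5, 0.52 }, { 80.0, 0.10 } };
        printf("sCMC = %.2f\n", cmc(model, 2, image, 2, 5.0, 0.1));
        return 0;
    }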
Grid length match (GLM)
The second similarity measure is based on a comparison of the total length values of groups of lines. Split lines in the image are grouped together using a uniform discretisation of the Hough space, similar to the Hough transform for straight lines. The same is done for the line segments obtained from the 3D model. Let lm(i,j) be the sum of the lengths of the measured lines in the image falling into grid cell (i, j), and likewise le(i,j) for the expected lines according to the model. Then the grid length match measure sGLM is defined, over all grid cells containing model features, as the ratio of the total line length of the measured lines to the total line length of the expected lines. Again, the mapping is directional, i.e., the model is used as the reference, to obtain invariance to noise, clutter, and dynamic objects.

Nearest neighbour and Hausdorff distance
In addition, we experimented with two generic methods for the comparison of two sets of geometric entities: the nearest-neighbour distance and the Hausdorff distance; for details see [7]. Both rely on the definition of a distance function, which we based on the coordinates in the Hough space, i.e., the line parameters ρ and θ, and optionally the length, in a linear or exponential manner. See [5] for a complete description.

Common error functions
For comparison, we also implemented the commonly used error functions e3D, e2D,1, and e2D,2. As they are defined in Cartesian space, we represent lines in the Hessian notation, x sin θ − y cos θ = d. Using the generic error function f, we defined the similarity measure (Equation 1) by summing f over the set M of measured lines and the set E of expected lines. In the case of e2D,1, f is given by the perpendicular distances between the two model line endpoints e1, e2 and the infinitely extended image line m. Likewise, the dual similarity measure, using e2D,2, is based on the perpendicular distances between the image line endpoints and the infinitely extended model line. Recalling that the error function e3D is proportional to the distances of the model line endpoints to the view plane through an image line and the camera origin, we can instantiate Equation 1 using f3D(m, e) based on the normal vector n_m of the view plane given by the image line endpoints m_i = [mx, my, w]^T in camera coordinates.

Obtaining probabilities
Ideally, we want the similarity measure to return monotonically decreasing values as the pose estimate used for projecting the model features departs from the actual camera pose. As we aim at a generally valid yet simple visual-sensor model, the idea is to abstract from specific poses and environmental conditions by averaging over a large number of different, independent situations. For commensurability, we want to express the model in terms of relative robot coordinates instead of absolute world coordinates. In other words, we assume

    P(m | lm, le, w) = P(m | Δl, w)

to hold, i.e., the probability of the measurement m, given the pose lm at which the image was taken, the pose estimate le, and the world model w, is equal to the probability of this measurement given the three-dimensional pose deviation Δl and the world model w. The probability returned by the visual-sensor model is then obtained by simple scaling.

4 Experimental Results
We have evaluated the proposed sensor model and similarity measures in a series of experiments. Starting with artificially created images under idealised conditions, we then added distortion and noise to the tested images. Subsequently, we used real images from the robot's camera obtained in a hallway. Finally, we used the sensor model to track the position of the robot while it was travelling through the hallway.
In all these cases, a three-dimensional visualisation of the model was obtained, which was then used to assess the solutions.

Simulations using artificially created images
As a first kind of evaluation, we generated synthetic image features by computing a view of the model from a certain camera pose; generally speaking, we duplicated the right-hand branch of Figure 1 onto the left-hand side. By introducing a pose deviation Δl, we can directly demonstrate its influence on the similarity values. For visualisation purposes, the translational deviations Δx and Δy are combined into a single spatial deviation Δt; initial experiments showed only insignificant differences when they were considered independently.

Fig. 2: Performance of CMC on artificially created images.

For each similarity measure given above, at least 15 million random camera poses were coupled with a random pose deviation within the range Δt < 440 cm and Δθ < 90°, yielding a model pose.

The results obtained for the CMC measure are depicted in Figure 2. The surface of the 3D plot was obtained using GNUPLOT's smoothing operator dgrid3d. We notice a unique, distinctive peak at zero deviation, with monotonically decreasing similarity values as the error increases. Note that this simple measure considers neither endpoint coordinates nor line lengths; nevertheless, we already obtain a decent result.

While the resulting curve for the GLM measure resembles that of CMC, the peak is considerably more distinctive. This conforms to our expectation, since taking the lengths of the image and model lines into account is very significant here. In contrast to the CMC measure, incidental false matches are penalised by this method, due to the differing lengths.

The nearest-neighbour measure turned out not to be of use. Although linear and exponential weighting schemes were tried, even taking the lengths of the line segments into account, no distinctive peak was obtained, which caused its exclusion from further consideration.

The measure based on the Hausdorff distance performed not as well as the first two, CMC and GLM, though it behaved in the desired manner. Its moderate performance does not pay off the longest computation time among all presented measures, and it is subsequently disregarded.

So far, we have shown how our own similarity measures perform. Next, we demonstrate how the commonly used error functions behave in this framework. The function e2D,1 performed very well in our setting; the resulting curve closely resembles that of the GLM measure. Both methods exhibit a unique, distinctive peak at the correct location of zero pose deviation. Note that the length of line segments has a direct effect on the similarity value returned by the GLM measure, while this attribute contributes only implicitly to the measure e2D,1, though in both cases linearly. Surprisingly, the other two error functions, e2D,2 and e3D, performed poorly.

Toward more realistic conditions
In order to learn the effect of distorted and noisy image data on our sensor model, we conducted another set of experiments, described here. To this end, we applied the following error model to all synthetically generated image features before matching them against the model features. Each original line is duplicated with a small probability (p = 0.2) and shifted in space. Any line longer than 30 pixels is split with probability p = 0.3. A small distortion is applied to the parameters (ρ, θ, l) of each line according to a random, zero-mean Gaussian.
Furthermore, features not present in the model, as well as noise, are simulated by adding random lines uniformly distributed over the image. Here, the orientation is drawn according to the current distribution of angles, to yield fairly 'typical' features.

The results obtained in these simulations do not differ significantly from the first set of experiments. While the maximum similarity value at zero deviation decreased, the shape and characteristics of all similarity measures still under consideration remained the same.

Using real images from the hallway

Since the results obtained in the simulations above might be questionable with respect to real-world conditions, we conducted another set of experiments, replacing the synthetic feature measurements by real camera images. To compare the results for various parameter settings, we gathered images with a Pioneer 2 robot in the hallway off-line and recorded the line features. For two different locations in the hallway, exemplifying typical views, the three-dimensional space of robot poses (x, y, θ) was virtually discretized. After placing the robot manually at each vertex (x, y, 0), it performed a full turn on the spot, recording images stepwise. This ensures maximum accuracy of the pose coordinates associated with each image. In this way, more than 3200 images were collected from 64 different (x, y) locations. Similarly to the simulations above, pairs of poses (l_e, l_m) were systematically chosen from within the range covered by the measurements. The values computed by the sensor model referring to the same discretized value of the pose deviation Δl were averaged, according to the assumption in Equation 2.

Fig. 3: Performance of GLM on real images from the hallway.

The resulting visualisation of the similarity measure over spatial (x and y combined) and rotational deviations from the correct camera pose for the CMC measure exhibits a unique peak at approximately zero deviation. Of course, due to the much smaller number of data samples compared to the simulations using synthetic data, the shape of the curve is much more bumpy, but this is in accordance with our expectation.

The result of employing the GLM measure in this setting is shown in Figure 3. As it reveals a more distinctive peak than the curve for the CMC measure, it demonstrates the increased discrimination between more and less similar feature maps when the lengths of lines are taken into account.

Monte Carlo localisation using the visual-sensor model

Recalling that our aim is to devise a probabilistic sensor model for a camera mounted on a mobile robot, we continue with the results of an application to mobile robot localisation. The generic interface of the sensor model allows it to be used in the correction step of Bayesian localisation methods, for example the standard version of the Monte Carlo localisation (MCL) algorithm. Since statistical independence among sensor readings is one of the underlying assumptions of MCL, our hope is to gain improved accuracy and robustness by using the camera instead of, or in addition to, commonly used distance sensors such as sonars or laser range-finders.

Fig. 4: Image and projected models during localisation.

In the experiment, the mobile robot, equipped with a fixed-mounted CCD camera, had to follow a pre-programmed route in the shape of a double loop in the corridor. On its way, it had to stop at eight pre-defined positions, turn to a nearby corner or open view, take an image, turn back, and proceed.
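The correction and resampling steps described in the next paragraph can be sketched in a few lines. This is an illustration under assumptions, not the authors' implementation: `sensor_model(measurement, pose)` stands for the probability function built from the similarity measures above.

```python
import random

# Sketch of an MCL correction + resampling step driven by a
# visual-sensor model. `particles` is a list of (x, y, theta) pose
# hypotheses; `weights` holds their importance weights.

def correct(particles, weights, measurement, sensor_model):
    """Reweight every pose hypothesis by the visual-sensor model."""
    new_w = [w * sensor_model(measurement, p)
             for p, w in zip(particles, weights)]
    total = sum(new_w)
    return [w / total for w in new_w] if total > 0 else weights

def resample(particles, weights):
    """Draw a new sample set; density concentrates at probable poses."""
    n = len(particles)
    return [particles[i] for i in random.choices(range(n), weights, k=n)]
```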
Each image capture initiated the so-called correction step of MCL, and the weights of all samples were recomputed according to the visual-sensor model, yielding the highest density of samples at the potentially correct pose coordinates in the subsequent resampling step. In the prediction step, the whole sample set is shifted in space according to the robot's motion model and the current odometry readings.

Our preliminary results look very promising. During the position-tracking experiments, i.e., with the robot given an estimate of its starting position, the best hypothesis for the robot's pose was approximately at the correct pose most of the time. In this experiment we used the CMC measure. Figure 4 shows a typical camera view while the robot follows the requested path. The grey-level image depicts the visual input for feature extraction after distortion removal and pre-processing; the extracted line features are also displayed. Furthermore, the world model is projected according to two poses, the odometry-tracked pose and the estimate computed by MCL. The latter approximately corresponds to the correct pose, and between the two we observe translational and rotational error.

The picture also shows that rotational error has a strong influence on the degree of coinciding feature pairs. This effect corresponds to the results presented above, where the figures exhibit a much higher gradient along the axis of rotational deviation than along that of translational deviation. The finding can be explained by the effect of motion on features in the Hough space. Hence, the strength of our camera sensor model lies in detecting rotational disagreement. This property makes it especially suitable for two-wheel-driven robots like our Pioneer, which exhibit a much higher rotational than translational odometry error.

5 Conclusions and Future Work

We have presented a probabilistic visual-sensor model for camera-pose estimation. Its generic design makes it suitable for sensor fusion with distance measurements perceived by other sensors. We have reported extensive simulations under ideal and realistic conditions and identified appropriate similarity measures. The application of the sensor model in a localisation task for a mobile robot met our expectations. Within the paper we have pointed out considerable scope for improvement.

We are working on suitable techniques to quantitatively evaluate the performance of the devised sensor model in a localisation algorithm for mobile robots. This will enable us to experiment with cluttered environments and dynamic objects. Combining the camera sensor model with distance-sensor information using sensor fusion is the next step toward robust navigation. Because the number of useful features varies significantly as the robot traverses an indoor environment, the idea of steering the camera toward richer views (active vision) offers a promising research path toward robust navigation.

References

[1] F. Dellaert, W. Burgard, D. Fox, and S. Thrun. Using the condensation algorithm for robust, vision-based mobile robot localisation. In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, 1999.
[2] D. DeMenthon, P. David, and H. Samet. SoftPOSIT: An algorithm for registration of 3D models to noisy perspective images combining Softassign and POSIT. Technical report, University of Maryland, MD, 2001.
[3] G. N. DeSouza and A. C. Kak. Vision for mobile robot navigation: A survey. IEEE Trans. on Pattern Analysis and Machine Intelligence, 24(2):237-267, 2002.
[4] S. Enderle, M. Ritter, D. Fox, S. Sablatnög, G. Kraetzschmar, and G. Palm. Soccer-robot localisation using sporadic visual features. In Intelligent Autonomous Systems 6, pages 959-966. IOS, 2000.
[5] M. Fichtner. A camera sensor model for sensor fusion. Master's thesis, Dept. of Computer Science, TU Dresden, Germany, 2002.
[6] S. A. Hutchinson, G. D. Hager, and P. I. Corke. A tutorial on visual servo control. IEEE Trans. on Robotics and Automation, 12(5):651-670, 1996.
[7] D. P. Huttenlocher, G. A. Klanderman, and W. J. Rucklidge. Comparing images using the Hausdorff distance. IEEE Trans. on Pattern Analysis and Machine Intelligence, 15(9):850-863, 1993.
[8] A. Kosaka and A. C. Kak. Fast vision-guided mobile robot navigation using model-based reasoning and prediction of uncertainties. Computer Vision, Graphics, and Image Processing - Image Understanding, 56(3):271-329, 1992.
[9] R. Kumar and A. R. Hanson. Robust methods for estimating pose and a sensitivity analysis. Computer Vision, Graphics, and Image Processing - Image Understanding, 60(3):313-342, 1994.
[10] D. G. Lowe. Three-dimensional object recognition from single two-dimensional images. Artificial Intelligence, 31(3):355-395, 1987.
[11] S. Se, D. G. Lowe, and J. Little. Vision-based mobile robot localization and mapping using scale-invariant features. In Proc. of the IEEE Int. Conf. on Robotics and Automation, pages 2051-2058, 2001.
[12] G. D. Sullivan, L. Du, and K. D. Baker. Quantitative analysis of the viewpoint consistency constraint in model-based vision. In Proc. of the 4th Int. IEEE Conf. on Computer Vision, pages 632-639, 1993.

A Visual-Sensor Model for Mobile Robot Localisation
Matthias Fichtner, Axel Großmann
Artificial Intelligence Institute, Department of Computer Science, Technische Universität Dresden
Technical Report WV-03-03/CL-2003-02

Abstract: We present a probabilistic sensor model for camera-pose estimation in hallways and cluttered office environments.


Proc. of the 1998 IEEE International Conference on Robotics and Automation, Leuven, Belgium, May 1998

A Practical Approach to Feedback Control for a Mobile Robot with Trailer

F. Lamiraux and J.P. Laumond
LAAS-CNRS, Toulouse, France
{florent, jpl}@laas.fr

Abstract: This paper presents a robust method to control a mobile robot towing a trailer. Both trajectory tracking and path following are addressed. The latter is solved through iterated trajectory tracking, taking perturbations into account. Experimental results obtained with the mobile robot Hilare illustrate the validity of our approach.

1 Introduction

Over the past eight years, a great deal of work has been devoted to the motion control of nonholonomic systems. Brockett [2] set out a challenging task for such systems, the stabilization of a configuration, by proving that it cannot be achieved by a simple continuous state feedback. Time-varying feedback [10, 4, 11, 13, 14, 15, 18] and discontinuous feedback [3] have been proposed as alternatives. A survey of motion control for mobile robots can be found in [5]. Trajectory tracking for nonholonomic systems, on the other hand, does not fall under Brockett's condition, which makes this task easier, and many works have addressed it in the special case of mobile robots [6, 7, 8, 12, 16]. All these control laws work under the same assumption: the evolution of the system is perfectly known, and no perturbation makes the system deviate from its trajectory.

Few papers dealing with mobile robot control take perturbations of the kinematic equations into account. However, [1] proposed a scheme stabilizing the configuration of a car-like robot that is robust with respect to a class of perturbation vector fields, and that is based on iterated trajectory tracking. The presence of obstacles makes the task of reaching a given configuration harder and requires a path-planning step before any motion execution.

In this paper, we propose a robust scheme based on iterated trajectory tracking to steer a robot with a trailer along a planned path. The trajectories are computed by the motion planner described in [17] and therefore avoid the obstacles given as input. In the following we shall not present any development concerning planning, and we refer the reader to that paper for details. Moreover, we consider that the execution of a given trajectory is subject to perturbations. The model we chose for these perturbations is very simple and very general, and it shares some common points with [1].

The paper is organized as follows. Section 2 describes our experimental system, Hilare and its trailer; two hitching systems are considered (Figure 1). Section 3 presents the control scheme and analyses its stability and robustness. Section 4 presents our experimental results.

Figure 1: Hilare with its trailer

2 System Description

Hilare is a mobile robot with two driving wheels. A trailer is hitched onto this robot, defining two different systems depending on the hitching device: in system A, the trailer is hitched above the centre of the robot's wheel axle (Figure 1, top), whereas in system B it is hitched behind this axle (Figure 1, bottom). System A is a special case of system B, with hitch offset l_r = 0; from the control point of view, however, it requires more complex computations. For this reason, we deal with the two hitching systems separately. Two motors enable control of the linear and angular velocities (v_r, ω_r) of the robot. These velocities are moreover measured by sensors, and the angle φ between the robot and the trailer is given by an optical encoder. The position and orientation (x_r, y_r, θ_r) of the robot are computed by integrating the former velocities.

With this notation, control system B reads:

    x_r' = v_r cos θ_r
    y_r' = v_r sin θ_r
    θ_r' = ω_r
    φ'  = −ω_r − (v_r / l_t) sin φ − (l_r / l_t) ω_r cos φ        (1)

where l_t is the distance from the hitch point to the trailer wheel axle.
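System (1), as reconstructed above, can be simulated directly; the following is a minimal Euler-integration sketch, with geometry values chosen arbitrarily for illustration.

```python
import math

# Euler integration of system (1) for the robot-trailer system B.
# l_r: robot wheel-axle centre to hitch point; l_t: hitch point to
# trailer wheel axle. Numeric values are assumed, not Hilare's.
L_R, L_T = 0.3, 0.8   # metres

def step(state, v_r, w_r, dt=0.01):
    """One integration step of (x_r, y_r, theta_r, phi)."""
    x, y, th, phi = state
    x += v_r * math.cos(th) * dt
    y += v_r * math.sin(th) * dt
    th += w_r * dt
    phi += (-w_r - (v_r / L_T) * math.sin(phi)
            - (L_R / L_T) * w_r * math.cos(phi)) * dt
    return (x, y, th, phi)
```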

3 Global Control Scheme

3.1 Motivation

When considering real systems, one has to take into account the perturbations arising during motion execution. These may have many sources, such as defective motors, wheel slippage, or inertial effects. Such perturbations can be modelled by adding a term to control system (1), yielding a new system of the form

    x' = f(x, u) + ε

where ε may be deterministic or a random variable. In the first case, the perturbation accounts only for irregularities in the evolution of the system, whereas in the second case it arises from genuinely random behaviour of the system. We shall see later that the second model is better suited to our experimental system.

To steer a robot from a start configuration to a goal, many works consider that the initial perturbation is simply the distance between the robot and the goal, the ensuing evolution of the system being perfectly known. To solve this problem, they design inputs as functions of time and state that make the goal an asymptotically stable equilibrium of the closed-loop system. Now, if we introduce the perturbation term ε defined above into this closed-loop system, we do not know what will happen. We can conjecture, however, that if ε is small and deterministic, the equilibrium point (if one still exists) will be close to the goal; if the perturbation is a random variable, the equilibrium point will become an equilibrium subset. But we know nothing about the location of these new equilibrium points or subsets.

Moreover, time-varying methods are not very convenient when dealing with obstacles. They can only be used in a neighbourhood of the goal, and this neighbourhood has to be properly defined to ensure collision-free trajectories of the closed-loop system. Note also that discontinuous state feedback cannot be applied to a real robot, because discontinuities in velocity lead to infinite accelerations.

The approach we propose for reaching a given configuration amidst obstacles is the following. We first build a collision-free path between the current configuration and the goal using the motion planner described in [17]; then we execute this trajectory with a simple tracking control law. At the end of the motion, because of the various perturbations, the robot never reaches the goal exactly, but ends up somewhere near it. If the configuration reached is too far from the goal, we compute another trajectory and execute it as we did the previous one. We now describe our trajectory-tracking control law, and then address the robustness of this global iterative scheme.

3.2 The trajectory-tracking control law

In this section we deal only with system A; the computations for system B are similar (see Section 3.4).

Figure 2: The tracking control law for a single robot

Many tracking control laws have already been proposed for wheeled mobile robots. Among them, the law presented in [16], although very simple, gives excellent results. If (x, y, θ) are the coordinates of the reference robot expressed in the frame of the real robot (Figure 2), and if (v_r0, ω_r0) are the inputs of the reference trajectory, this control law reads:

    v_r = v_r0 cos θ + k_1 x
    ω_r = ω_r0 + k_2 v_r0 y + k_3 sin θ        (2)

The key idea of our control law is the following: when the robot moves forward, the trailer does not need to be stabilized (see below), so we apply (2) directly to the robot.
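A sketch of law (2), as reconstructed above, in code; the gains are arbitrary illustration values, and the error coordinates are obtained by expressing the reference pose in the real robot's frame.

```python
import math

# Tracking control law (2); gains chosen arbitrarily for illustration.
K1, K2, K3 = 1.0, 4.0, 2.0

def track(real, ref, v_ref, w_ref):
    """Compute (v, w) commands from the real and reference poses,
    each given as (x, y, theta) in world coordinates."""
    dx, dy = ref[0] - real[0], ref[1] - real[1]
    c, s = math.cos(real[2]), math.sin(real[2])
    # reference pose expressed in the frame of the real robot
    x = c * dx + s * dy
    y = -s * dx + c * dy
    th = ref[2] - real[2]
    v = v_ref * math.cos(th) + K1 * x
    w = w_ref + K2 * v_ref * y + K3 * math.sin(th)
    return v, w
```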

When the robot moves backward, we define a virtual robot (x̃_r, ỹ_r, θ̃_r) (Figure 3) that is symmetric to the real one with respect to the trailer wheel axle:

    x̃_r = x_r − 2 l_t cos θ_t
    ỹ_r = y_r − 2 l_t sin θ_t
    θ̃_r = 2 θ_t + π − θ_r

Then, when the real robot moves backward, the virtual robot moves forward, and the virtual system (x̃_r, ỹ_r, θ̃_r, −φ) is kinematically equivalent to the real one. We therefore apply the tracking control law (2) to the virtual robot.
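A sketch of this reflection, following the equations as reconstructed above; the sign convention θ_t = θ_r + φ for the trailer orientation is an assumption.

```python
import math

def virtual_robot(x_r, y_r, th_r, phi, l_t):
    """Mirror the real robot pose about the trailer wheel axle."""
    th_t = th_r + phi                       # trailer orientation (assumed)
    x_v = x_r - 2.0 * l_t * math.cos(th_t)
    y_v = y_r - 2.0 * l_t * math.sin(th_t)
    th_v = 2.0 * th_t + math.pi - th_r      # mirrored, reversed heading
    return x_v, y_v, th_v                   # track (2) on this pose, with -phi
```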

Figure 3: The virtual robot

The question now is: when the robot moves forward, is the trailer really stable? The next section answers this question.

3.3 Stability analysis of the trailer

Here we consider the case of forward motion (v_r > 0); backward motion is handled equivalently through the virtual robot. Let (x_r0, y_r0, θ_r0, v_r0, ω_r0) denote the reference trajectory and (x_r, y_r, θ_r, φ, v_r, ω_r) the real motion of the system. We assume that the robot follows its reference trajectory perfectly:

    (x_r, y_r, θ_r, v_r, ω_r) = (x_r0, y_r0, θ_r0, v_r0, ω_r0)

and we focus our attention on the trailer deviation φ̂ = φ − φ0. The evolution of this deviation is easily derived from system (1) with l_r = 0 (system A):

    φ̂' = −(v_r / l_t)(sin φ − sin φ0)
        = −(2 v_r / l_t) cos(φ0 + φ̂/2) sin(φ̂/2)

the second equality following from the identity sin φ − sin φ0 = 2 cos((φ + φ0)/2) sin((φ − φ0)/2). Hence |φ̂| is decreasing as long as

    −π/2 < φ0 + φ̂/2 < π/2   (mod 2π)        (3)

Our system is moreover constrained by the inequalities

    −π/2 < φ < π/2,   −π/2 < φ0 < π/2        (4)

Therefore −π < φ0 + φ̂/2 < π, and (3) is equivalent to

    −π − 2φ0 < φ̂ < π − 2φ0        (5)

Figure 4 shows the range of φ̂ over which the deviation decreases, for given values of φ0. We can see that this range contains all the trailer positions permitted by (4). Moreover, the above computation easily shows that 0 is an asymptotically stable value of the variable φ̂. Therefore, if the real or virtual robot follows its reference trajectory moving forward, the trailer is stable and converges toward its own reference trajectory.

Figure 4: The stability range of φ̂

3.4 The virtual robot for system B

When the trailer is hitched behind the robot, the previous construction is even simpler: we can replace the virtual robot by the trailer itself.

In this case, the hitching device induces a one-to-one mapping between the velocities (v_r, ω_r) of the robot and those (v_t, ω_t) of the trailer. The virtual robot system is then expressed as follows:

    x̃_r = x_r − l_r cos θ_r − l_t cos(θ_r + φ)
    ỹ_r = y_r − l_r sin θ_r − l_t sin(θ_r + φ)
    θ̃_r = θ_r + φ + π

and the previous stability analysis carries over by considering the motion of the hitch point. The next section addresses the robustness of our iterative scheme.

3.5 Robustness of the iterative scheme

We now establish the robustness of the iterative scheme presented above. For this purpose, we need a model of the perturbations arising while the robot moves. In [1], perturbations are modelled as irregularities of the system inducing deterministic changes of the vector field. In our experiments, we observed perturbations of a random nature, caused for instance by play in the suspension system. Such perturbations are very difficult to model precisely. For this reason, we make only two simple assumptions about them:

    d_CS(q(s), q0(s)) ≤ δ s
    d_CS(q(s), q0(s)) ≤ Δ

where s is the arc length along the planned path, q and q0 are respectively the real and the reference configurations, d_CS is a distance over the configuration space of the system, and δ and Δ are positive constants. The first inequality means that the distance between the real and reference configurations grows at most proportionally to the distance covered along the planned path. The second one expresses that the trajectory-tracking control law prevents the system from moving too far away from its reference trajectory. Let us point out that these assumptions are very realistic and fit a large class of perturbation models.

We now need to bound the length of the path computed at each iteration. The steering method we use to compute these paths verifies a topological property accounting for small-time controllability [17]: if the goal is sufficiently close to the initial configuration, the computed trajectory remains in a neighbourhood of the initial configuration. In [9], we gave an estimate of this property in terms of distance: if q_1 and q_2 are two sufficiently close configurations, the length of the planned path between them verifies

    length(Path(q_1, q_2)) < η d_CS(q_1, q_2)^(1/4)

where η is a positive constant. Therefore, if (q_i), i = 1, 2, ..., is the sequence of configurations successively reached, we have the following inequalities:

    d_CS(q_1, q_goal) ≤ Δ
    d_CS(q_(i+1), q_goal) ≤ δ η d_CS(q_i, q_goal)^(1/4)

These inequalities ensure that d_CS(q_i, q_goal) is bounded from above by the sequence of positive numbers (d_i), i = 1, 2, ..., defined by

    d_1 = Δ,   d_(i+1) = δ η d_i^(1/4)

which converges, after sufficiently many iterations, to the fixed point (δη)^(4/3) of the map d → δη d^(1/4).
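A quick numerical illustration of this contraction, with arbitrary constants; the fixed point (δη)^(4/3) follows from solving d = δη d^(1/4).

```python
# Iterating the bound d_{i+1} = delta * eta * d_i**0.25 from d_1 = Delta.
# The constants are arbitrary illustration values.
delta, eta, Delta = 0.05, 2.0, 1.0

d = Delta
for i in range(1, 11):
    d = delta * eta * d ** 0.25
    print(i, round(d, 6))

# The sequence settles at the fixed point (delta*eta)**(4/3):
print((delta * eta) ** (4.0 / 3.0))
```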

Thus we do not obtain asymptotic stability of the goal configuration, but this result ensures the existence of a stability domain around it. This is essentially a consequence of our deliberately conservative choice of perturbation model; let us repeat that including such a perturbation model in a time-varying control law would undoubtedly make it lose its asymptotic stability. The experimental results of the following section show that this convergence domain is in practice very small.

4 Experimental Results

We now present the experimental results obtained with the robot Hilare towing a trailer, for both systems A and B. Figures 5 and 6 show a first example of a planned path between an initial configuration (black) and a goal configuration (grey).
