A robot exploration and mapping strategy based on a semantic hierarchy of spatial representations


English essay on future technology


In the realm of future technology, there are myriad possibilities that can revolutionize the way we live, work, and interact with the world around us. Here are some key areas where advancements are expected to make a significant impact:

1. Artificial Intelligence (AI): AI is poised to become an integral part of our daily lives, from personal assistants that can anticipate our needs to advanced machine learning algorithms that can solve complex problems in fields such as medicine, finance, and environmental science.
2. Quantum Computing: The development of quantum computers will enable us to process information at unprecedented speeds, opening up new frontiers in cryptography, materials science, and complex system modeling.
3. Renewable Energy Technologies: As the world moves towards sustainability, advancements in solar, wind, and other renewable energy sources will become more efficient and cost-effective, reducing our reliance on fossil fuels and mitigating the effects of climate change.
4. Biotechnology and Genetic Engineering: The ability to edit genes will not only revolutionize medicine by allowing for the treatment and prevention of genetic disorders but also has implications for agriculture, where crops can be engineered to be more resistant to pests and environmental stress.
5. Space Exploration and Colonization: With the potential for manned missions to Mars and the establishment of lunar bases, space technology will advance our understanding of the universe and possibly provide solutions to some of Earth's most pressing issues, such as resource scarcity.
6. Autonomous Vehicles: Self-driving cars, drones, and other autonomous transportation systems will transform the way we commute, reducing traffic congestion and improving safety on the roads.
7. Virtual Reality (VR) and Augmented Reality (AR): These technologies will become more immersive and integrated into our daily lives, offering new ways to learn, work, and experience entertainment.
8. Advanced Robotics: Robots will become more sophisticated, capable of performing complex tasks in manufacturing, healthcare, and even in our homes, assisting with daily chores and providing companionship.
9. Nanotechnology: The manipulation of matter on an atomic and molecular scale will lead to breakthroughs in materials science, medicine, and electronics, potentially leading to stronger, lighter, and more efficient products.
10. 5G and Beyond: The next generation of wireless technology will enable faster, more reliable internet connections, facilitating the Internet of Things (IoT) and smart cities, where devices and infrastructure communicate seamlessly to improve efficiency and quality of life.
11. Blockchain Technology: Beyond cryptocurrencies, blockchain's secure, decentralized nature will be applied to various sectors, including supply chain management, voting systems, and healthcare records management.
12. Advanced Materials: The development of new materials with unique properties, such as superconductors, graphene, and metamaterials, will drive innovation in electronics, energy storage, and construction.

As we venture further into the future, the convergence of these technologies will likely lead to innovations that we cannot yet imagine, continually reshaping our world in ways that are as exciting as they are challenging. The key to harnessing the potential of future technology lies in ethical considerations, education, and a proactive approach to integrating these advancements into society for the betterment of all.

English essay on the functions of robots


Robots are increasingly becoming an integral part of our daily lives, performing a variety of tasks that enhance efficiency, safety, and convenience. Here is an overview of the functions that modern robots are capable of:

1. Manufacturing and Assembly: Industrial robots are widely used in assembly lines for tasks such as welding, painting, assembling, and packaging. They can work tirelessly with high precision, reducing the risk of human error.
2. Domestic Assistance: Home robots, such as vacuum cleaners and lawn mowers, perform routine household chores, freeing up time for other activities. Some advanced models can even assist with cooking, cleaning, and companionship.
3. Medical Applications: Robots in healthcare settings can assist in surgeries, providing precision that is difficult for human hands to achieve. They can also help in patient care, such as lifting and moving patients, and in delivering medication.
4. Disaster Response: Search-and-rescue robots are designed to navigate through dangerous environments, such as earthquake-hit areas or burning buildings, where human rescuers may be at risk. They can locate survivors and provide vital information to rescue teams.
5. Agricultural Automation: Agricultural robots can perform tasks such as planting, watering, and harvesting crops. They can also monitor crop health and optimize farming practices for better yields.
6. Space Exploration: Robots are used in space missions to explore planets and other celestial bodies. They can withstand harsh conditions and perform tasks that would be too risky for human astronauts.
7. Education and Research: Educational robots can help students learn programming, engineering, and other STEM subjects. Research robots can assist in scientific experiments, data collection, and analysis.
8. Security and Surveillance: Security robots can patrol areas, detect intruders, and alert authorities. They can be equipped with cameras and sensors to monitor environments continuously.
9. Entertainment: Robots are used in the entertainment industry for performances, theme park attractions, and interactive experiences. They can mimic human movements and expressions, providing a unique form of entertainment.
10. Environmental Monitoring: Robots can be deployed to monitor environmental conditions, such as pollution levels, wildlife populations, and climate change indicators. They can collect data over large areas more efficiently than human researchers.
11. Transportation: Autonomous vehicles, including drones and self-driving cars, are changing the way we think about transportation. They can reduce traffic congestion, improve road safety, and provide personalized travel options.
12. Customer Service: Service robots in retail, banking, and hospitality sectors can assist customers, answer queries, and perform transactions, improving the customer experience and reducing wait times.

Robots are not just tools; they are evolving into intelligent systems that can learn, adapt, and interact with humans in increasingly sophisticated ways. As technology advances, the capabilities of robots will continue to expand, offering new possibilities and transforming various aspects of life.

English PowerPoint courseware for robots


Error Handling
Developing robust error handling mechanisms to handle unexpected events and failures during robot operation
Components of a Robot
Sensors and Perception Systems
Vision Sensors
Cameras and image processing algorithms to detect and
recognize objects, faces, and environments
Motors and Actuators
Control Systems
Electric motors, pneumatic actuators, and hydraulic actuators that convert energy into mechanical motion
Microcontrollers, PLCs, or PCs that regulate the robot's movements based on sensor inputs and programmed instructions
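As a hedged sketch of the regulation loop described above (all names and gains here are invented for illustration, not taken from any particular controller), a minimal proportional feedback controller reads a sensor value, compares it to a setpoint, and issues an actuator command:

```python
def p_controller(setpoint, reading, gain=0.5):
    """Return an actuator command proportional to the error."""
    error = setpoint - reading
    return gain * error

# Simulate a joint angle converging toward a 90-degree setpoint.
angle = 0.0
for _ in range(20):
    command = p_controller(90.0, angle)
    angle += command  # the actuator moves the joint by the commanded amount

print(round(angle, 1))  # approaches 90.0
```

Real robot controllers typically add integral and derivative terms (PID) and run at a fixed sample rate, but the sense-compare-actuate structure is the same.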
• Integration with AI: The integration of artificial intelligence (AI) with robotics is expected to lead to even more sophisticated robots that can learn, adapt, and make decisions on their own

FLTRP Senior High English Selective Compulsory Book 3 courseware, Unit 4 writing guide: continuing a science-fiction story


4. I just stood there enjoying this warm scene rather than interrupting them.
5. It was at that moment that I knew he understood the meaning of my work.
6. To my great relief, he realised that what I did was noble and selfless.

(II) Useful sentence patterns
1. Driven by my curiosity and confusion/doubt, I gently pushed the door and entered the room.
2. I walked into the room wanting/hoping/wondering to figure out what was happening.
3. It was truly amazing to see them interact, as if they were old, close friends.
Understanding the genre

The writing task of this unit is to continue a science-fiction story, i.e., a reading-and-continuation task. The plot usually develops along shifts of time and place. When writing the story, use rich imagination, construct the plot appropriately, and make the storyline vivid and interesting while keeping it logical. In addition, the continuation must connect smoothly with the original text and remain faithful to its theme, content, and form.

Memorize the writing materials

English essay on future robots and space travel


Robot Space Explorers of the Future!

Hi there! My name is Sam and I'm going to tell you all about how awesome robots will be for exploring space in the future. Just imagine – mighty metal explorers blasting off to unveil the secrets of other planets, galaxies, and who knows what else is out there! It's going to be so cool.

First off, why use robots at all for space travel instead of human astronauts? Well, there are a bunch of really good reasons. Robots don't need food, water or oxygen like we do. That means you don't have to pack tons of supplies for them on a spacecraft. They also don't get tired or need to sleep. Robots can work 24/7 without a break! Human astronauts get worn out after a while in space and have to come back home. But a robot could keep on trucking for years and years, exploring deeper and deeper into the unknown.

Another huge advantage is that robots aren't in danger like people would be. Outer space is an extreme environment with radiation, extreme temperatures, and all sorts of hazards that could harm or even kill a human. But a robot explorer couldn't care less! It has no life to lose, and its tough metal body can withstand just about anything. We could send robots to places that would be way too dangerous for people.

So what will these future robot space explorers actually be like? First off, they'll probably look pretty different from the robots we have today. Most current robots are built for things like manufacturing cars, loading trucks, or doing tasks in controlled environments. But space robots will need to be extra rugged to survive the harsh conditions of alien worlds. Their bodies and limbs will likely be made out of super-strong materials like titanium so they don't get damaged during travel or landings. They'll be designed to operate in extreme hot and cold temperatures too. Their sensors and cameras will be toughened up to withstand radiation and put up with environmental extremes. Just picture a heavy-duty robot decked out for extreme sport - but in space!

These robots will also be way smarter than any we have today. Current robots are pretty limited and can only do specific tasks they've been programmed for. But future space robots will have powerful AI that lets them learn, adapt their behavior, and make smart decisions on their own without step-by-step instructions from humans. That self-driving autonomy will be key when they're millions of miles away from Earth and can't be remote controlled in real time.

So what kind of awesome explorer missions will these ultra-advanced robots carry out? One of the first goals will probably be to map out and study planets, moons, asteroids and comets up close in our own solar system. They could land on Mars and drive around studying the surface and soil for signs of past life. Or they could rappel down towering cliffs on icy moons like Europa to explore the deep alien oceans below. How cool would that be?!

Once we've thoroughly explored our celestial backyard, the robots could turn their sensors toward deeper space to investigate newly discovered planets around other stars. A telescope orbiting a distant planet could scan its atmosphere for potential signs of life before a robotic lander touches down on the surface. That would be the first step toward finding aliens!

I bet these space robots will even construct bases and habitats to support longer-term exploration. They could use materials found on a planet to 3D print shelters, rovers, solar panels and other equipment they need. Maybe they'd build underground bases to get protection from radiation or meteorites. Basically, wherever these robots go, they could establish a human outpost without us even being there!

Eventually, having robot explorers stake out claims on multiple planets and moons could pave the way for human colonization of other worlds. The robots could do all the dangerous prep work before people ever arrive - extracting resources, manufacturing construction materials, and even starting to grow crops for our future off-world homes.

We also still have a lot to learn about the places we want to eventually send robots. For example, we know very little about the actual surface and makeup of planets in other solar systems since we've never visited them up close. Every new robotic mission to places like Mars helps teach us more to make future missions even better.

But one thing's for sure - having awesome, super-smart robots leading the way will be crucial as we expand our footprint through the solar system and beyond in the decades and centuries to come. These metallic pioneers will go where no human has gone before, making discoveries that could change everything. Twenty or fifty years from now, you might see headlines about robots establishing the first human settlement on Mars or finding basic alien lifeforms on a distant exoplanet's ocean! Wouldn't that be amazing?

So what do you think? I don't know about you, but I can't wait to see what these future robot space explorers will accomplish. Keep your eyes on the stars, because I have a feeling there are going to be some awesome robotic adventures happening out there!

Robotics and Applications


History
The 20th century: Research into the functionality and potential uses of robots grew substantially.
Today: Robotics is a rapidly growing field.
Components
Sensors
Sensors allow robots to receive information about some measurable aspect of the environment or of their internal components.
Applications
Exploration
Contents
Definition
History Components Applications
Definition
Robotics is the branch of technology that deals with the design, construction, operation, and application of robots, as well as computer systems for their control, sensory feedback, and information processing.
The robots are able to carry cameras and other instruments so that they can collect information and send it back to their human operators.

Mind control of others becomes a reality

X-Men mind control becomes a reality

Taking control of another person's body with your mind is something that has long been dreamed of in comic books and films like X-Men, but now scientists have achieved it in real life.

Researchers used electromagnets and computers to transmit a person's brainwaves, allowing them to control the hand of another person sitting in a different building a mile away. The technology recorded the brain signals of a computer gamer and then fired them into the brain of another volunteer, triggering the nerves that controlled their hand muscles. This allowed the gamer, who had no physical computer controls themselves, to use the other person to play a computer game.

The technology makes it possible to control the body of another person with thoughts - something that Professor Xavier was able to do in the X-Men. The researchers behind the project believe it may eventually lead to new ways of helping rehabilitate stroke patients and those who have suffered brain damage. It could also be used to pass information between people, allow skilled surgeons to help others perform difficult operations from miles away, or allow pilots to take control of a plane from the ground in an emergency.

Dr Rajesh Rao, a computer scientist and engineer at the University of Washington who led the work, said: "Our results show that information extracted from one brain can be transmitted to another brain, ultimately allowing two humans to cooperatively perform a task using only a direct brain-to-brain interface. Such devices, which have been long cherished by science fiction writers, have the potential to not only revolutionize how humans communicate and collaborate, but also open a new avenue for investigating brain function."

In the study, which is published in the journal Public Library of Science One, three pairs of volunteers played a rudimentary computer game in which they had to fire a cannon at passing pirate ships. While one volunteer could see what was happening on the screen, the other, in a different building, could not see the screen but had their hand placed over a keypad needed to fire the cannon.

The gamer who could see the screen was fitted with an electroencephalograph (EEG) to record the minute electrical signals given off by their brain. With this, the distinctive signal for tapping the keypad could be transmitted over the internet to a device that stimulated the brain of the second volunteer in the other room. The signal was delivered through transcranial magnetic stimulation (TMS), which generates an electromagnetic pulse that triggers neurons in the brain. Positioned over the part of the brain that controls movement in the right hand, this caused the muscles in the volunteer's hand to move and tap the keypad.

The researchers said the technology needed little training for the volunteers to use, and that it was possible to trigger movement in the receiving person's hand less than 650 milliseconds after the sender gave the command to fire. They found that accuracy varied between the pairs, with one managing to hit 83 per cent of the pirate ships. Most of the misses were due to the person sending the thoughts failing to accurately execute the thought needed to send the "fire" command.

Early brain-to-computer communication devices required many hours of training and surgical implants. Scientists have already enabled paralysed patients to control robotic arms and play computer games. However, Dr Rao and his team believe their technology can be used by people who have just walked into their lab and does not require any invasive implants.

Recently, scientists in Spain and France showed that they could send words to colleagues in India using similar set-ups. Those researchers recorded electrical activity from the brain and converted the words "hola" and "ciao" into a digital signal before electromagnetic pulses transmitted them into the receiver's brain, so that the receiver saw flashes of light forming a kind of Morse code.

Dr Rao and his team have now been given a $1 million grant from the W. M. Keck Foundation to transmit more complex brain processes. They believe it may be possible to transmit other information, such as concepts, abstract thoughts, and rules, as well as physical movement.

Dr Andrea Stocco, a psychologist at the University of Washington who was also involved in the study, said: "We have envisioned many scenarios in which brain-to-brain technology could be useful. A skilled surgeon could remotely control an inexperienced assistant's hands to perform medical procedures even if not on site, or a skilled pilot could remotely help a less experienced one control a plane during a difficult situation. It could help with neuro-rehabilitation. After brain damage, patients need to painfully and slowly re-learn simple motor actions, such as walking, grasping, or swallowing. We suspect that the re-learning phase could be greatly sped up if we could provide the damaged brain with a 'motor template', copied from a healthy person, or from the healthy part of the patient's brain, of what the intended action should look like. It could also help with tutoring. Imagine that we could extract a teacher's richer representation of a difficult concept and deliver it to his or her students in terms of neural activity."
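A highly simplified sketch of the sender side of the pipeline the article describes: read an EEG feature, detect the "fire" intent, and send a trigger over the network to the machine driving the TMS coil. Every name, threshold, and port here is invented for illustration; the real system used trained classifiers and dedicated stimulation hardware.

```python
import socket
import struct

FIRE_THRESHOLD = 0.8  # assumed normalized EEG feature level (made up)

def detect_fire(eeg_feature):
    """Naive intent detector: a bare threshold, where real systems
    classify patterns in the EEG signal."""
    return eeg_feature > FIRE_THRESHOLD

def send_trigger(host="127.0.0.1", port=9999):
    """Send a one-byte trigger datagram to the (hypothetical) TMS host."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(struct.pack("!B", 1), (host, port))

# Usage sketch: if detect_fire(sample) is true, call send_trigger().
```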

IELTS reading: insects and biological robots

ROBOTS are getting smarter and more agile all the time. They disarm bombs, fly combat missions, put together complicated machines, even play football. Why, then, one might ask, are they nowhere to be seen, beyond war zones, factories and technology fairs? One reason is that they themselves cannot see very well. And people are understandably wary of purblind contraptions bumping into them willy-nilly in the street or at home.

All that a camera-equipped computer "sees" is lots of picture elements, or pixels. A pixel is merely a number reflecting how much light has hit a particular part of a sensor. The challenge has been to devise algorithms that can interpret such numbers as scenes composed of different objects in space. This comes naturally to people and, barring certain optical illusions, takes no time at all as well as precious little conscious effort. Yet emulating this feat in computers has proved tough.

In natural vision, after an image is formed in the retina it is sent to an area at the back of the brain, called the visual cortex, for processing. The first nerve cells it passes through react only to simple stimuli, such as edges slanting at particular angles. They fire up other cells, further into the visual cortex, which react to simple combinations of edges, such as corners. Cells in each subsequent area discern ever more complex features, with those at the top of the hierarchy responding to general categories like animals and faces, and to entire scenes comprising assorted objects. All this takes less than a tenth of a second.

The outline of this process has been known for years and in the late 1980s Yann LeCun, now at New York University, pioneered an approach to computer vision that tries to mimic the hierarchical way the visual cortex is wired. He has been tweaking his "convolutional neural networks" (ConvNets) ever since.

Seeing is believing

A ConvNet begins by swiping a number of software filters, each several pixels across, over the image, pixel by pixel. Like the brain's primary visual cortex, these filters look for simple features such as edges. The upshot is a set of feature maps, one for each filter, showing which patches of the original image contain the sought-after element. A series of transformations is then performed on each map in order to enhance it and improve the contrast. Next, the maps are swiped again, but this time rather than stopping at each pixel, the filter takes a snapshot every few pixels. That produces a new set of maps of lower resolution. These highlight the salient features while reining in computing power. The whole process is then repeated, with several hundred filters probing for more elaborate shapes rather than just a few scouring for simple ones. The resulting array of feature maps is run through one final set of filters. These classify objects into general categories, such as pedestrians or cars.

Many state-of-the-art computer-vision systems work along similar lines. The uniqueness of ConvNets lies in where they get their filters. Traditionally, these were simply plugged in one by one, in a laborious manual process that required an expert human eye to tell the machine what features to look for, in future, at each level. That made systems which relied on them good at spotting narrow classes of objects but inept at discerning anything else.

Dr LeCun's artificial visual cortex, by contrast, lights on the appropriate filters automatically as it is taught to distinguish the different types of object. When an image is fed into the unprimed system and processed, the chances are it will not, at first, be assigned to the right category. But, shown the correct answer, the system can work its way back, modifying its own parameters so that the next time it sees a similar image it will respond appropriately. After enough trial runs, typically 10,000 or more, it makes a decent fist of recognising that class of objects in unlabelled images.

This still requires human input, though. The next stage is "unsupervised" learning, in which instruction is entirely absent. Instead, the system is shown lots of pictures without being told what they depict. It knows it is on to a promising filter when the output image resembles the input. In a computing sense, resemblance is gauged by the extent to which the input image can be recreated from the lower-resolution output. When it can, the filters the system had used to get there are retained.

In a tribute to nature's nous, the lowest-level filters arrived at in this unaided process are edge-seeking ones, just as in the brain. The top-level filters are sensitive to all manner of complex shapes. Caltech-101, a database routinely used for vision research, consists of some 10,000 standardised images of 101 types of just such complex shapes, including faces, cars and watches. When a ConvNet with unsupervised pre-training is shown the images from this database it can learn to recognise the categories more than 70% of the time. This is just below what top-scoring hand-engineered systems are capable of—and those tend to be much slower.

This approach (which Geoffrey Hinton of the University of Toronto, a doyen of the field, has dubbed "deep learning") need not be confined to computer vision. In theory, it ought to work for any hierarchical system: language processing, for example. In that case individual sounds would be low-level features akin to edges, whereas the meanings of conversations would correspond to elaborate scenes.

For now, though, ConvNet has proved its mettle in the visual domain.
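The swipe-and-subsample pass described above can be sketched with a hand-coded edge filter. This is a toy stand-in for a ConvNet's learned first-layer filters; the array sizes and values are invented for illustration.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide `kernel` over `image` pixel by pixel (valid mode)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y+kh, x:x+kw] * kernel)
    return out

# A vertical-edge filter, like the edge-seeking first-layer filters
# that both the brain and a trained ConvNet arrive at.
edge_filter = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])

# Toy image: dark left half, bright right half, so the filter responds
# strongly (large magnitude) at the boundary between the two.
image = np.zeros((5, 5))
image[:, 3:] = 1.0

feature_map = convolve2d(image, edge_filter)
# Subsampling ("taking a snapshot every few pixels") lowers the resolution:
pooled = feature_map[::2, ::2]
```

A real ConvNet stacks many such filter-and-subsample stages and learns the kernel values from labelled (or, as the article describes, unlabelled) images rather than hard-coding them.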
Google has been using it to blot out faces and licence plates in its Streetview application. It has also come to the attention of DARPA, the research arm of America's Defence Department. This agency provided Dr LeCun and his team with a small roving robot which, equipped with their system, learned to detect large obstacles from afar and correct its path accordingly - a problem that lesser machines often, as it were, trip over. The scooter-sized robot was also rather good at not running into the researchers. In a selfless act of scientific bravery, they strode confidently in front of it as it rode towards them at a brisk walking pace, only to see it stop in its tracks and reverse. Such machines may not quite yet be ready to walk the streets alongside people, but the day they can is surely not far off.

CATTI English Translation Practice (Level 3) companion training, Unit 8, English-Chinese translation (1): Robot

Robot

Even before the first robot was built, the subject of robotics was controversial. The word "robot" was coined in 1921 by a Czech playwright who wrote about a colony of machines endowed with artificial intelligence that eventually turned against their human creators. Although that account was fictional, the first industrial robots were in use by the early 1960s. Today, we continue to be intrigued by robots and their potential for both good and evil.

Basically, a robot is any machine that performs work or other actions normally done by humans. Most robots are used in factories to make products such as cars and electronics. Others are used to explore underwater, in volcanoes and even on other planets.

Robots consist of three main components: a brain, which is usually a computer; actuators and mechanical parts such as motors, wheels and gears; and sensors for detecting images, sound, temperature, motion and light. With these basic components, robots can interact with their environment and perform the tasks they are designed to carry out.

The advantages are obvious - robots can do things humans just don't want to do, and they are usually more cost-effective. Robots can also do things more precisely than humans and allow progress in medical science and other useful advances.

But, as with any machine, a robot can break down and even cause disaster. There's also the possibility that wicked people will use robots for evil purposes. Yet this is also true with other forms of technology such as weapons and biological material.

Robots will probably be used even more in the future. They will continue to do tasks where danger, repetition, cost or the need for precision prevents humans from performing them. As to whether they will be used for good or evil, that depends on the nature of the humans who create them.

Vocabulary: robotics; coin (a word, to invent); intrigue (to fascinate); actuator

Reference translation (excerpt, rendered back from the Chinese): "Robot: Even before the first robot was built, robotics was already a controversial subject."

Unit 3 Sea exploration, Using language (Selective Compulsory Book 4)


1. Sea exploration is important for our future.
∙ Scientific research ships can help address important issues like climate change.
2. Understanding more about the sea
learn for language
1. When people talk of exploring the sea more, they usually mean exploiting it.
=when talking of/ when it comes to
2. Plastic pollution is also bad, killing many birds and fish, and has even been found in our tap water. (result adverbial) Imitation: People have released waste water into the ocean, damaging the ecosystem.
(facts) 2. The key to learning English well is to keep practicing. Just as the saying goes, "Practice makes perfect." (quote)
3. There are many ways to learn English with fun, such as listening to English songs and watching English movies.

In the remainder of the paper, section 2 reviews other approaches to spatial exploration and map-learning. Section 3 presents our hierarchical representation and its use in detail. Section 4 describes the specific instance of the hierarchical approach, the NX robot, that we have used in our research. Section 5 demonstrates the performance of NX as it explores a complex, large-scale environment, defines distinctive places and paths, links them into a topological network description,
1 Introduction and Overview
We have developed a robust qualitative method for robot exploration, mapping, and navigation in large-scale spatial environments. An environment is large-scale if its spatial structure is at a significantly larger scale than the sensory horizon of the observer. Experiments with a simulated robot in a variety of 2-D environments have demonstrated that our method can build an accurate map of an unknown environment in spite of substantial random and systematic sensorimotor error. Most current approaches to robot exploration and mapping analyze sensor input to build a geometric map of the environment, then extract topological structure from the geometric description.
Cumulative location error is essentially eliminated while traveling among distinctive places in the topological network by alternating between path-following and hill-climbing control algorithms. Feedback-guided motion control can draw on the full range of control algorithms and performance analysis methods in the fields of control engineering and control theory (e.g. [D'Azzo and Houpis, 1988]) to mitigate the effects of sensor and motor uncertainty on navigation ability. Successful navigation is not dependent on geometric accuracy, since the control and topology levels do not depend on the geometric description. However, when geometric information is available, it can be used to optimize route-planning or to resolve topological ambiguities. Geometric sensor fusion methods [Chatila and Laumond, 1985; Durrant-Whyte, 1988; Moravec and Elfes, 1985; Smith and Cheeseman, 1986] can be naturally incorporated as methods for acquiring local geometric descriptions of places and paths in the topological network. (A global geometric description can be derived by global relaxation of local metrical relations into a single frame of reference.) Indistinguishable places (i.e. places with identical local sensory characteristics) can be identified correctly, except in the most pathological environments, using a topological matching procedure to test hypotheses about the places' neighbors.
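To make the network structure concrete, here is a minimal sketch (illustrative only, not the authors' code; all place names and the class layout are invented) of a topological map in which distinctive places are nodes and distinctive paths are edges annotated with the control strategy that traverses them:

```python
class TopologicalMap:
    """Distinctive places linked by distinctive paths; local geometric
    information can later be attached to each place's dict."""

    def __init__(self):
        self.places = {}  # place name -> local geometric info (added later)
        self.paths = []   # (from_place, to_place, control_strategy)

    def add_place(self, name):
        self.places.setdefault(name, {})

    def link(self, a, b, control_strategy):
        """Record a distinctive path traversable by a control strategy."""
        self.add_place(a)
        self.add_place(b)
        self.paths.append((a, b, control_strategy))

    def neighbors(self, name):
        return [b for a, b, _ in self.paths if a == name]

m = TopologicalMap()
m.link("corner-A", "junction-B", "follow-left-wall")
m.link("junction-B", "corner-C", "follow-the-midline")
```

A matching procedure for indistinguishable places would compare candidate nodes' neighbor sets in such a graph; geometric annotations would be accumulated in the per-place dictionaries.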
sensors → geometry → topology
In our qualitative method, location-specific control algorithms are dynamically selected to control the robot's interaction with its environment. These algorithms define distinctive places and paths, which are linked to form a topological network description. Finally, geometric knowledge is assimilated onto the elements of the network (Fig. 1):
Kuipers & Byun, Robotics & Autonomous Systems 8: 47-63, 1991.
1. The Control Level. Distinctive places and paths are defined in terms of the control strategies and sensory measures (called distinctiveness measures, or d-measures) which support convergence to them from anywhere within a local neighborhood. A distinctive place is defined as the local maximum found by a hill-climbing control strategy, given an appropriate distinctiveness measure. A distinctive path is defined by the distinctiveness measure and control strategy (e.g. follow-the-midline or follow-left-wall) which allows the robot to follow it.

2. The Topological Level. A topological network description of the global environment is created before the global geometric map, by identifying and linking distinctive places and distinctive paths in the environment.

3. The Geometric Level. Once a topological map is in place, the geometric map can be incrementally created by accumulating, first, local geometric information about places and paths, then global metrical relations among these elements within a common frame of reference.

Our approach, based on the spatial semantic hierarchy, provides a coherent framework for exploiting the strengths of a variety of powerful spatial reasoning methods while minimizing the robot's vulnerability to their weaknesses.
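As a hedged illustration of the Control Level idea, the following sketch hill-climbs a made-up distinctiveness measure (distance to the nearest wall of a square room, a stand-in for the paper's d-measures) until no neighboring position scores higher; the local maximum plays the role of a distinctive place:

```python
def hill_climb(d_measure, start, step=0.1, max_iters=1000):
    """Move to the best neighboring position until no neighbor improves."""
    pos = start
    for _ in range(max_iters):
        neighbors = [(pos[0] + dx, pos[1] + dy)
                     for dx in (-step, 0, step) for dy in (-step, 0, step)]
        best = max(neighbors, key=d_measure)
        if d_measure(best) <= d_measure(pos):
            return pos  # local maximum of the d-measure: a distinctive place
        pos = best
    return pos

# Illustrative d-measure: distance to the nearest wall of a 10x10 room,
# maximized at the room's center (an "equidistant from walls" criterion).
def d(p):
    x, y = p
    return min(x, 10 - x, y, 10 - y)

place = hill_climb(d, (2.0, 3.0))  # converges near the center (5, 5)
```

In the paper's scheme, such convergence from anywhere in a local neighborhood is what makes a place reliably re-identifiable, which is why cumulative position error does not accumulate across the topological network.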
This work has taken place in the Qualitative Reasoning Group at the Artificial Intelligence Laboratory, The University of Texas at Austin. Research of the Qualitative Reasoning Group is supported in part by NSF grants IRI-8602665, IRI-8905494, and IRI-8904454, and by NASA grants NAG 2-507 and NAG 9-200.