机器人和机器人传感器中英文对照外文翻译文献
传感器技术论文中英文对照资料外文翻译文献

传感器技术论文中英文对照资料外文翻译文献Development of New Sensor TechnologiesSensors are devices that can convert physical。
chemical。
logical quantities。
etc。
into electrical signals。
The output signals can take different forms。
such as voltage。
current。
frequency。
pulse。
etc。
and can meet the requirements of n n。
processing。
recording。
display。
and control。
They are indispensable components in automatic n systems and automatic control systems。
If computers are compared to brains。
then sensors are like the five senses。
Sensors can correctly sense the measured quantity and convert it into a corresponding output。
playing a decisive role in the quality of the system。
The higher the degree of n。
the higher the requirements for sensors。
In today's n age。
the n industry includes three parts: sensing technology。
n technology。
and computer technology。
机器人技术发展中英文对照外文翻译文献

机器人技术发展中英文对照外文翻译文献(文档含英文原文和中文翻译)外文资料:RobotsFirst, I explain the background robots, robot technology development. It should be said it is a common scientific and technological development of a comprehensive results, for the socio-economic development of a significant impact on a science and technology. It attributed the development of all countries in the Second World War to strengthen the economic input on strengthening the country's economic development. But they also demand the development of the productive forces the inevitable result of human development itself is the inevitable result then with the development of humanity, people constantly discuss the natural process, in understanding and reconstructing the natural process, people need to be able to liberate a slave. So this is the slave people to be able to replace the complex and engaged in heavy manual labor, People do not realize right up to the world's understanding and transformation of this technology as well as people in the development process of an objective need.Robots are three stages of development, in other words, we are accustomed to regarding robots are divided into three categories. is a first-generation robots, also known as teach-type robot, it is through a computer, to control over one of a mechanical degrees of freedom Through teaching and information stored procedures, working hours to read out information, and then issued a directive so the robot can repeat according to the people at that time said the results show this kind of movement again, For example, the car spot welding robots, only to put this spot welding process, after teaching, and it is always a repeat of a work It has the external environment is no perception that the force manipulation of the size of the work piece there does not exist, welding 0S It does not know, then this fact from the first generation robot, it will exist this shortcoming, it in the 20th century, the late 1970s, people started to study the second-generation robot, called Robot with the feeling that This feeling with the robot is similar in function of a certain feeling, for instance, force and touch, slipping, visual, hearing and who is analogous to that with all kinds of feelings, say in a robot grasping objects, In fact, it can be the size of feeling out, it can through visual, to be able to feel and identify its shape, size, color Grasping an egg, it adopted a acumen, aware of its power and the size of the slide.Third-generation robots, we were a robotics ideal pursued by the most advanced stage, called intelligent robots, So long as tell it what to do, not how to tell it to do, it will be able to complete the campaign, thinking and perception of this man-machine communication function and function Well, this current development or relative is in a smart part of the concept and meaning But the real significance of the integrity of this intelligent robot did not actually exist, but as we continued the development of science and technology, the concept of intelligent increasingly rich, it grows ever wider connotations.Now I have a brief account of China's robot development of the basic profiles. As our country there are many other factors that problem. Our country in robotics research of the 20th century the late 1970s. At that time, we organized at the national, a Japanese industrial automation products exhibition. In this meeting, there are two products, is a CNC machine tools, an industrial robot, this time, our country's many scholars see such a direction, has begun to make a robot research But this time, are basically confined to the theory of phase .Then the real robot research, in 7500 August 5, 1995, 15 nearly 20 years of development, The most rapid development, in 1986 we established a national plan of 863 high-technology development plan, As robot technology will be an important theme of the development of The state has invested nearly Jiganyi funds begun to make a robot, We made the robot in the field quickly and rapid development.At present, units like the CAS ShenYng Institute of Automation, the original machinery, automation of the Ministry, as of Harbin Industrial University, Beijing University of Aeronautics and Astronautics, Qinghua University, Chinese Academy of Sciences, also includes automation of some units, and so on have done a very important study, also made a lot of achievements Meanwhile, in recent years, we end up in college, a lot of flats in robot research, Many graduate students and doctoral candidates are engaged in robotics research, we are more representative national study Industrial robots, underwater robots, space robots, robots in the nuclear industry are on the international level should be taking the lead .On the whole of our country Compared with developed countries, there is still a big gap, primarily manifested in the We in the robot industry, at present there is no fixed maturity product, but in theseunderwater, space, the nuclear industry, a number of special robots, we have made a lot of achievements characteristics.Now, I would like to briefly outline some of the industrial robot situation. So far, the industrial robot is the most mature and widely used category of a robot, now the world's total sales of 1.1 million Taiwan, which is the 1999 statistics, however, 1.1 million in Taiwan have been using the equipment is 75 million, this volume is not small. Overall, the Japanese industrial robots in this one, is the first of the robots to become the Kingdom, the United States have developed rapidly. Newly installed in several areas of Taiwan, which already exceeds Japan, China has only just begun to enter the stage of industrialization, has developed a variety of industrial robot prototype and small batch has been used in production.Spot welding robot is the auto production line, improve production efficiency and raise the quality of welding car, reduce the labor intensity of a robot. It is characterized by two pairs of robots for spot welding of steel plate, bearing a great need for the welding tongs, general in dozens of kilograms or more, then its speed in meters per second a 5-2 meter of such high-speed movement. So it is generally five to six degrees of freedom, load 30 to 120 kilograms, the great space, probably expected that the work of a spherical space, a high velocity, the concept of freedom, that is to say, Movement is relatively independent of the number of components, the equivalent of our body, waist is a rotary degree of freedom We have to be able to hold his arm, Arm can be bent, then this three degrees of freedom, Meanwhile there is a wrist posture adjustment to the use of the three autonomy, the general robot has six degrees of freedom. We will be able to space the three locations, three postures, the robot fully achieved, and of course we have less than six degrees of freedom. Have more than six degrees of freedom robot, in different occasions the need to configure.The second category of service robots, with the development of industrialization, especially in the past decade, Robot development in the areas of application are continuously expanding, and now a very important characteristic, as we all know, Robot has gradually shifted from manufacturing to non-manufacturing and service industries, we are talking about the car manufacturer belonging to the manufacturing industry, However, the services sector including cleaning, refueling, rescue, rescue,relief, etc. These belong to the non-manufacturing industries and service industries, so here is compared with the industrial robot, it is a very important difference. It is primarily a mobile platform, it can move to sports, there are some arms operate, also installed some as a force sensor and visual sensors, ultrasonic ranging sensors, etc. It’s surrounding environment for the conduct of identification, to determine its campaign t o complete some work, this is service robot’s one of the basic characteristics.For example, domestic robot is mainly embodied in the example of some of the carpets and flooring it to the regular cleaning and vacuuming. The robot it is very meaningful, it has sensors, it can furniture and people can identify, It automatically according to a law put to the ground under the road all cleaned up. This is also the home of some robot performance.The medical robots, nearly five years of relatively rapid development of new application areas. If people in the course of an operation, doctors surgery, is a fatigue, and the other manually operated accuracy is limited. Some universities in Germany, which, facing the spine, lumbar disc disease, the identification, can automatically use the robot-aided positioning, operation and surgery Like the United States have been more than 1,000 cases of human eyeball robot surgery, the robot, also including remote-controlled approach, the right of such gastrointestinal surgery, we see on the television inside. a manipulator, about the thickness fingers such a manipulator, inserted through the abdominal viscera, people on the screen operating the machines hand, it also used the method of laser lesion laser treatment, this is the case, people would not have a very big damage to the human body.In reality, this right as a human liberation is a very good robots, medical robots it is very complex, while it is fully automated to complete all the work, there are difficulties, and generally are people to participate. This is America, the development of such a surgery Lin Bai an example, through the screen, through a remote control operator to control another manipulator, through the realization of the right abdominal surgery A few years ago our country the exhibition, the United States has been successful in achieving the right to the heart valve surgery and bypass surgery. This robot has in the area, caused a great sensation, but also, AESOP's surgical robot, In fact, it through some equipment to some of the lesions inspections, through amanipulator can be achieved on some parts of the operation Also including remotely operated manipulator, and many doctors are able to participate in the robot under surgery Robot doctor to include doctors with pliers, tweezers or a knife to replace the nurses, while lighting automatically to the doctor's movements linked, the doctor hands off, lighting went off, This is very good, a doctor's assistant.We regard this country excel, it should be said that the United States, Russia and France, in our nation, also to the international forefront, which is the CAS ShenYang Institute of Automation of developing successful, 6,000 meters underwater without cable autonomous underwater robot, the robot to 6,000 meters underwater, can be conducted without cable operations. His is 2000, has been obtained in our country one of the top ten scientific and technological achievements. This indicates that our country in this underwater robot, have reached the advanced international level, 863 in the current plan, the development of 7,000 meters underwater in a manned submersible to the ocean further development and operation, This is a great vote of financial and material resources.In this space robotics research has also been a lot of development. In Europe, including 16 in the United States space program, and the future of this space capsule such a scheme, One thing is for space robots, its main significance lies in the development of the universe and the benefit of mankind and the creation of new human homes, Its main function is to scientific investigation, as production and space scientific experiments, satellites and space vehicles maintenance and repair, and the construction of the space assembly. These applications, indeed necessary, for example, scientific investigation, as if to mock the ground some physical and chemical experiments do not necessarily have people sitting in the edge of space, because the space crew survival in the day the cost is nearly one million dollars. But also very dangerous, in fact, some action is very simple, through the ground, via satellite control robot, and some regularly scheduled completion of the action is actually very simple. Include the capsule as control experiments, some switches, buttons, simple flange repair maintenance, Robot can be used to be performed by robots because of a solar battery, then the robot will be able to survive, we will be able to work, We have just passed the last robot development on the application of the different areas ofapplication, and have seen the robots in industry, medical, underwater, space, mining, construction, service, entertainment and military aspects of the application .Also really see that the application is driven by the development of key technologies, a lack of demand, the robot can not, It is because people in understanding the natural transformation of the natural process, the needs of a wide range of robots, So this will promote the development of key technologies, the robot itself for the development of From another aspect, as key technology solutions, as well as the needs of the application, on the promotion of the robot itself a theme for the development of intelligent, and from teaching reappearance development of the current local perception of the second-generation robot, the ultimate goal, continuously with other disciplines and the development of advanced technology, the robot has become rich, eventually achieve such an intelligent robot mainstream.Robot is mankind's right-hand man; friendly coexistence can be a reliable friend. In future, we will see and there will be a robot space inside, as a mutual aide and friend. Robots will create the jobs issue. We believe that there would not be a "robot appointment of workers being laid off" situation, because people with the development of society, In fact the people from the heavy physical and dangerous environment liberated, so that people have a better position to work, to create a better spiritual wealth and cultural wealth.译文:机器人首先我介绍一下机器人产生的背景,机器人技术的发展,它应该说是一个科学技术发展共同的一个综合性的结果,同时,为社会经济发展产生了一个重大影响的一门科学技术,它的发展归功于在第二次世界大战中各国加强了经济的投入,就加强了本国的经济的发展。
机器人类外文文献翻译穿越深渊的机器人中英文翻译、外文翻译

英文原文The Abyss Transit System- James Cameron commissions the making of robots for a return to theTitanicBy Gary StixAt the beginning of the movie that made Leonardo DiCaprio a megastar, a camera-toting unmanned robot ventured into a cavernous hole in the wreck that sits on the bottom of the Atlantic, 12,640 feet from the surface. The 500-pound vehicle, christened Snoop Dog, could move only about 30 feet along a lower deck, hampered by its bulky two-inch-diameter tether hitched to a submarine that waited above. The amount of thrust needed to move its chunky frame stirred up a thick cloud. “The vehicle very quickly silted out the entire place and made imaging impossible,” director James Cameron recalls.But the eerie vista revealed by Snoop Dog on that 1995 expedition made Cameron hunger for more. He vowed to return one day with technology that could negotiate anyplace within the Titanic's interior.In the past six months two documentaries—one for IMAX movie theaters called Ghosts of the Abyss, the other, Expedition: Bismarck, for the DiscoveryChannel—demonstrated the fruits of a three-year effort that Cameron financed with $1.8 million of his own money to make this vision materialize. The payoff was two 70-pound robots, named after Blues Brothers Jake and Elwood, that had the full run of two of the world's most famous wrecks, the Titanic and the Bismarck, which they visited on separate expeditions.The person who took Jake and Elwood from dream to robot is Mike Cameron, James's brother, an aerospace engineer who once designed missiles and who also possesses a diverse background as a helicopter pilot, stunt photographer and stuntman. (Remember the corpse in the movie The Abyss, from whose mouth a crab emerges?) Giving the remotely operated vehicles freedom of movement required that they be much smaller than Snoop Dog and that the tether's width be tapered dramatically so as not to catch on vertical ship beams.Mike Cameron took inspiration from the wire-guided torpedoes used by the military that can travel for many miles. His team created vehicles operable to more than 20,000 feet (enough to reach as much as 85 percent of the ocean floor). The dimensions of the front of the robot are 16 inches high by 17 inches across, small enough to fit in a B deck window of the Titanic. The bots have an internal battery so that they do not need to be powered through a tether. Instead the tether—fifty-thousandths of an inch in diameter—contains optical fibers that relaycontrol signals from a manned submersible vehicle hovering outside and that also send video images in the other direction. The tether pays out from the robot, a design that prevents it from snagging on objects in the wreck.James Cameron thought the project would be a straightforward engineering task, not much harder than designing a new camera system. “This turned out to be a whole different order of magnitude,” he says. “There was no commercial off-the-shelf hardware that wo uld work in the vehicles. Everything had to be built from scratch.” If the team had known this early on, he added, “we wouldn't have bothered.” Water pressure on the cable that carried the optical fibers could create microscopic bends in the data pipe, completely cutting off the control signals from the submersibles. Dark Matter in Valencia, Calif. (Mike Cameron's company), had to devise a fluid-filled sheath around the fiber to displace the minuscule air pockets in the cable that could lead to the microbending.To save weight, the frame—similar to a monocoque body of a race car—was made up of small glass hollow spheres contained in an epoxy matrix. The thruster contained a large-diameter, slowly rotating blade with nozzles that diffused the propulsive flow, minimizing the churning that would otherwise disturb the caked silt.A high-resolution video camera, along with an infrared camera for navigation, was placed in the front of the craft along with three light-emitting-diode arrays for fill lighting and two quartz halogen lamps for spotlighting.The winter of 2001 marked a critical juncture. It was six months before dives to the Titanic could be safely attempted, and James had to determine whether to proceed or wait another year. “Mike was really, really negative on the idea, but I decided to go for it,” the director says. He felt he couldn't afford to wait longer and thought that a fixed deadline would focus the engineering staff at Dark Matter. Forhis part, Mike was contending with an unending series of design challenges. “It was such an overwhelming set of problems that I had very little confidence that certain parts would be solvable in the time we had,” Mike says.A few weeks before the dives commenced in the summer of 2001, the robots' lithium sulfur dioxode-based batteries caught fire while being tested in a pressure tank, destroying what was to have been a third robot. Mike wanted to delay the dives, but James found a supplier of another type of lithium battery and pressed ahead.At the dive site, Jake and Elwood took starring roles with their 2,000-foot tethers, exploring for the first time in about 90 years remote parts of the ships, including the engine room, the firemen's mess hall and the cabins of first-class passengers—even focusing in on a bowler hat, a brass headboard and an intact, upright glass decanter. The images lack the resolution and novel quality of the high-definition, three-dimensional IMAX images, the other major technological innovation of Ghostsof the Abyss. Jake and Elwood's discoveries, however, draw the viewers' interest because of what they convey of the Titanic's mystique. “You actually feel like you're out there in the wreck,” Mike says. He remembers his brother piloting the robots with the helicopter stick that had been installed in the Russian submersible from which the robots were launched. “Jim ended up being a cowboy pilot,” Mike says. “He was far more aggressive with the system than I was.”One scene in Ghosts of the Abyss reveals the tension that sometimes erupted between the brothers. James contemplates moving one of the robots through a cabin window that is still partially occluded by a shard of glass that could damage the vehicle or cut the data tether. When James declares that he is going to take Jake in, moviegoers can hear Mike pleading with his brother not to do it, ultimately relenting once the bot has negotiated the opening.The decision to install a new type of battery at the last minute came to haunt the expedition; Elwood's lithium-polymer battery ignited while in the bowels of the ship. James manipulated the remaining robot into the Titanic to perform a rescue operation by hooking a cord to the grill of the dead bot and towing it out. At the surface—on the deck of the Russian scientific vessel the Keldysh, from which the two submarines carrying Jake and Elwood to the Titanic were launched—Mike rebuilt Elwood with a backup battery. During the next dive, the robot caught fire again while it was still mounted on the submarine, endangering the crew. Finally, Mike worked for an 18-hour stretch to adapt a lead-acid gel battery used for devices onboard the mother ship into a power source for Elwood, enabling the expedition to continue.The bots, now fitted with a new, nonflammable battery that Mike designed, may find service beyond motion pictures. The U.S. Navy has funded Dark Matter to help it assess the technology for underwater recovery operations of ships or aircraft. The bots also have potential for scientific exploration of deep-sea trenches. After traveling to the Titanic and the Bismarck, the team went on to probe mid-Atlantic hydrothermal vents, discovering mollusks in a place where scientists had never encountered them before. As adventure aficionados, the brothers speculate that a descendant of Jake and Elwood might even be toted on a mission to Europa, one of Jupiter's moons, to investigate the waters that are suspected to exist below its icy shell. The Cameron siblings, who tinkered with home-built rafts and rockets as children in Ontario near Niagara Falls, hope to be around long enough to witness their robotic twins go from the bottom of the ocean to the depths of space.中文译文穿越深渊的机器--新型的机器人可在数百公尺深的水底残骸间自由穿梭游览作者╱斯蒂克斯( Gary Stix )曾一举捧红超级巨星李奥纳多狄卡皮欧的电影「铁达尼号」中,片头是一台无人驾驶的遥控装置,携带着摄影机深入大西洋,在3852公尺深的铁达尼号残骸里冒险的画面。
机器人外文文献翻译、中英文翻译

外文资料robotThe industrial robot is a tool that is used in the manufacturing environment to increase productivity. It can be used to do routine and tedious assembly line jobs,or it can perform jobs that might be hazardous to the human worker . For example ,one of the first industrial robot was used to replace the nuclear fuel rods in nuclear power plants. A human doing this job might be exposed to harmful amounts of radiation. The industrial robot can also operate on the assembly line,putting together small components,such as placing electronic components on a printed circuit board. Thus,the human worker can be relieved of the routine operation of this tedious task. Robots can also be programmed to defuse bombs,to serve the handicapped,and to perform functions in numerous applications in our society.The robot can be thought of as a machine that will move an end-of-tool ,sensor ,and/or gripper to a preprogrammed location. When the robot arrives at this location,it will perform some sort of task .This task could be welding,sealing,machine loading ,machine unloading,or a host of assembly jobs. Generally,this work can be accomplished without the involvement of a human being,except for programming and for turning the system on and off.The basic terminology of robotic systems is introduced in the following:1. A robot is a reprogrammable ,multifunctional manipulator designed to move parts,material,tool,or special devices through variable programmed motions for the performance of a variety of different task. This basic definition leads to other definitions,presented in the following paragraphs,that give acomplete picture of a robotic system.2. Preprogrammed locations are paths that the robot must follow to accomplish work,At some of these locations,the robot will stop and perform some operation ,such as assembly of parts,spray painting ,or welding .These preprogrammed locations are stored in the robot’s memory and are recalled later for continuousoperation.Furthermore,these preprogrammed locations,as well as other program data,can be changed later as the work requirements change.Thus,with regard to this programming feature,an industrial robot is very much like a computer ,where data can be stoned and later recalled and edited.3. The manipulator is the arm of the robot .It allows the robot to bend,reach,and twist.This movement is provided by the manipulator’s axes,also called the degrees of freedom of the robot .A robot can have from 3 to 16 axes.The term degrees of freedom will always relate to the number of axes found on a robot.4. The tooling and frippers are not part the robotic system itself;rather,they are attachments that fit on the end of the robot’s arm. These attachments connected to the end of the robot’s arm allow the robot to lift parts,spot-weld ,paint,arc-weld,drill,deburr,and do a variety of tasks,depending on what is required of the robot.5. The robotic system can control the work cell of the operating robot.The work cell of the robot is the total environment in which the robot must perform itstask.Included within this cell may be the controller ,the robot manipulator ,a work table ,safety features,or a conveyor.All the equipment that is required in order for the robot to do its job is included in the work cell .In addition,signals from outside devices can communicate with the robot to tell the robot when it should parts,pick up parts,or unload parts to a conveyor.The robotic system has three basic components: the manipulator,the controller,and the power source.A.ManipulatorThe manipulator ,which does the physical work of the robotic system,consists of two sections:the mechanical section and the attached appendage.The manipulator also has a base to which the appendages are attached.Fig.1 illustrates the connectionof the base and the appendage of a robot.图1.Basic components of a robot’s manipulatorThe base of the manipulator is usually fixed to the floor of the work area. Sometimes,though,the base may be movable. In this case,the base is attached to either a rail or a track,allowing the manipulator to be moved from one location to anther.As mentioned previously ,the appendage extends from the base of the robot. The appendage is the arm of the robot. It can be either a straight ,movable arm or a jointed arm. The jointed arm is also known as an articulated arm.The appendages of the robot manipulator give the manipulator its various axes of motion. These axes are attached to a fixed base ,which,in turn,is secured to a mounting. This mounting ensures that the manipulator will in one location.At the end of the arm ,a wrist(see Fig 2)is connected. The wrist is made up of additional axes and a wrist flange. The wrist flange allows the robot user to connect different tooling to the wrist for different jobs.图2.Elements of a work cell from the topThe manipulator’s axes allow it to perform work within a certain area. The area is called the work cell of the robot ,and its size corresponds to the size of the manipulator.(Fid2)illustrates the work cell of a typical assembly ro bot.As the robot’s physical size increases,the size of the work cell must also increase.The movement of the manipulator is controlled by actuator,or drive systems.The actuator,or drive systems,allows the various axes to move within the work cell. The drive system can use electric,hydraulic,or pneumatic power.The energy developed by the drive system is converted to mechanical power by various mechanical power systems.The drive systems are coupled through mechanical linkages.These linkages,in turn,drive the different axes of the robot.The mechanical linkages may be composed of chain,gear,and ball screws.B.ControllerThe controller in the robotic system is the heart of the operation .The controller stores preprogrammed information for later recall,controls peripheral devices,and communicates with computers within the plant for constant updates in production.The controller is used to control the robot manipulator’s movements as well as to control peripheral components within the work cell. The user can program the movements of the manipulator into the controller through the use of a hard-held teach pendant.This information is stored in the memory of the controller for later recall.The controller stores all program data for the robotic system.It can store several differentprograms,and any of these programs can be edited.The controller is also required to communicate with peripheral equipment within the work cell. For example,the controller has an input line that identifies when a machining operation is completed.When the machine cycle is completed,the input line turn on telling the controller to position the manipulator so that it can pick up the finished part.Then ,a new part is picked up by the manipulator and placed into the machine.Next,the controller signals the machine to start operation.The controller can be made from mechanically operated drums that step through a sequence of events.This type of controller operates with a very simple robotic system.The controllers found on the majority of robotic systems are more complex devices and represent state-of-the-art eletronoics.That is,they are microprocessor-operated.these microprocessors are either 8-bit,16-bit,or 32-bit processors.this power allows the controller to be very flexible in its operation.The controller can send electric signals over communication lines that allow it to talk with the various axes of the manipulator. This two-way communication between the robot manipulator and the controller maintains a constant update of the end the operation of the system.The controller also controls any tooling placed on the end of the robot’s wrist.The controller also has the job of communicating with the different plant computers. The communication link establishes the robot as part a computer-assisted manufacturing (CAM)system.As the basic definition stated,the robot is a reprogrammable,multifunctional manipulator.Therefore,the controller must contain some of memory stage. The microprocessor-based systems operates in conjunction with solid-state devices.These memory devices may be magnetic bubbles,random-access memory,floppy disks,or magnetic tape.Each memory storage device stores program information fir or for editing.C.power supplyThe power supply is the unit that supplies power to the controller and the manipulator. The type of power are delivered to the robotic system. One type of power is the AC power for operation of the controller. The other type of power isused for driving the various axes of the manipulator. For example,if the robot manipulator is controlled by hydraulic or pneumatic drives,control signals are sent to these devices causing motion of the robot.For each robotic system,power is required to operate the manipulator .This power can be developed from either a hydraulic power source,a pneumatic power source,or an electric power source.There power sources are part of the total components of the robotic work cell.中文翻译机器人工业机器人是在生产环境中用以提高生产效率的工具,它能做常规乏味的装配线工作,或能做那些对于工人来说是危险的工作,例如,第一代工业机器人是用来在核电站中更换核燃料棒,如果人去做这项工作,将会遭受有害放射线的辐射。
一个有关移动机器人定位的视觉传感器模型外文文献翻译、中英文翻译

XX设计(XX)外文资料翻译A Visual-Sensor Model for Mobile Robot Localisation Matthias Fichtner Axel Gro_mannArti_cial Intelligence InstituteDepartment of Computer ScienceTechnische Universit•at DresdenTechnical Report WV-03-03/CL-2003-02AbstractWe present a probabilistic sensor model for camera-pose estimation in hallways and cluttered o_ce environments. The model is based on the comparison of features obtained from a given 3D geometrical model of the environment with features present in the camera image. The techniques involved are simpler than state-of-the-art photogrammetric approaches. This allows the model to be used in probabilistic robot localisation methods. Moreover, it is very well suited for sensor fusion. The sensor model has been used with Monte-Carlo localisation to track the position of a mobile robot in a hallway navigation task. Empirical results are presented for this application.1 IntroductionThe problem of accurate localisation is fundamental to mobile robotics. To solve complex tasks successfully, an autonomous mobile robot has to estimate its current pose correctly and reliably. The choice of the localization method generally depends on the kind and number of sensors, the prior knowledge about the operating environment, and the computing resources available. Recently, vision-based navigation techniques have become increasingly popular [3]. Among the techniques for indoor robots, we can distinguish methods that were developed in the _eld of photogrammetry and computer vision, and methods that have their origin in AI robotics.An important technical contribution to the development of vision-based navigation techniques was the work by [10] on the recognition of 3D-objects from unknown viewpoints in single images using scale-invariant features. Later, this technique was extended to global localisation and simultaneous map building [11].The FINALE system [8] performed position tracking by using a geometrical model of the environment and a statistical model of uncertainty in the robot's pose given the commanded motion. The robot's position is represented by a Gaussian distribution and updated by Kalman _ltering. The search for corresponding features in camera image and world model is optimized by projecting the pose uncertainty into the camera image.Monte Carlo localisation (MCL) based on the condensation algorithm has been applied successfully to tour-guide robots [1]. This vision-based Bayesian _ltering technique uses a sampling-based density representation. In contrast to FINALE, it canrepresent multi-modal probability distributions. Given a visual map of the ceiling, it localises the robot globally using a scalar brightness measure. [4] presented avision-based MCL approach that combines visual distance features and visual landmarks in a RoboCup application. As their approach depends on arti_cial landmarks, it is not applicable in o_ce environments.The aim of our work is to develop a probabilistic sensor model for camerapose estimation. Given a 3D geometrical map of the environment, we want to find an approximate measure of the probability that the current camera image has been obtained at a certain place in the robot's operating environment. We use this sensor model with MCL to track the position of a mobile robot navigating in a hallway. Possibly, it can be used also for localization in cluttered o_ce environments and for shape-based object detection.On the one hand, we combine photogrammetric techniques for map-based feature projection with the exibility and robustness of MCL, such as the capability to deal with localisation ambiguities. On the other hand, the feature matching operation should be su_ciently fast to allow sensor fusion. In addition to the visual input, we want to use the distance readings obtained from sonars and laser to improve localisation accuracy.The paper is organised as follows. In Section 2, we discuss previous work. In Section 3, we describe the components of the visual sensor model. In Section 4, we present experimental results for position tracking using MCL. We conclude in Section 5.2 Related WorkIn classical approaches to model-based pose determination, we can distinguish two interrelated problems. The correspondence problem is concerned with _nding pairs of corresponding model and image features. Before this mapping takes place, the model features are generated from the world model using a given camera pose. Features are said to match if they are located close to each other. Whereas the pose problem consists of _nding the 3D camera coordinates with respect to the origin of the world model given the pairs of corresponding features [2]. Apparently, the one problem requires the other to be solved beforehand, which renders any solution to the coupled problem very di_cult [6].The classical solution to the problem above follows a hypothesise-and-test approach: (1)Given a camera pose estimate, groups of best matching feature pairs provideinitial guesses (hypotheses).(2)For each hypothesis, an estimate of the relative camera pose is computed byminimising a given error function de_ned over the associated feature pairs. (3)Now as there is a more accurate pose estimate available for each hypothesis, theremaining model features are projected onto the image using the associatedcamera pose. The quality of the match is evaluated using a suitable error function, yielding a ranking among all hypotheses.(4)The highest-ranking hypothesis is selected.Note that the correspondence problem is addressed by steps (1) and (3), and the poseproblem by (2) and (4).The performance of the algorithm will depend on the type of features used, e.g., edges, line segments, or colour, and the choice of the similarity measure between image and model, here referred to as error function. Line segments is the feature type of our choice as they can be detected comparatively reliably under changing illumination conditions. As world model, we use a wire-frame model of the operating environment, represented in VRML. The design of a suitable similarity measure is far more difficult.In principle, the error function is based on the di_erences in orientation between corresponding line segments in image and model, their distance and difference in length, in order of decreasing importance, in consideration of all feature pairs present. This has been established in the following three common measures [10]. e3D is defined by the sum of distances between model line endpoints and the corresponding plane given by camera origin and image line. This measure strongly depends on the distance to the camera due to back-projection. e2D;1, referred to as in_nite image lines, is the sum over the perpendicular distances of projected model line endpoints to corresponding, in_nitely extended lines in the image plane. The dual measure, e2D;2, referred to as in_nite model lines, is the sum over all distances of image line endpoints to corresponding, in_nitely extended model lines in the image plane.To restrict the search space in the matching step, [10] proposed to constrain the number of possible correspondences for a given pose estimate by combining line features into perceptual, quasi-invariant structures beforehand. Since these initial correspondences are evaluated by e2D;1 and e2D;2, high demands are imposed on the accuracy of the initial pose estimate and the image processing operations, includingthe removal of distortions and noise and the feature extraction. It is assumed to obtain all visible model lines at full length. [12, 9] demonstrated that a few outliers already can severely affect the initial correspondences in Lowe's original approach due to frequent truncation of lines caused by bad contrast, occlusion, or clutter.3 Sensor ModelOur approach was motivated by the question whether a solution to the correspondence problem can be avoided in the estimation of the camera pose. Instead, we propose to perform a relatively simple, direct matching of image and model features. We want to investigate the level of accuracy and robustness one can achieve this way.The processing steps involved in our approach are depicted in Figure 1. After removing the distortion from the camera image, we use the Canny operator to extract edges. This operator is relatively tolerant to changing illumination conditions. From the edges, line segments are identi_ed. Each line is represented as a single point (_; _) in the 2D Hough space given by _ = x cos _ + y sin _. The coordinates of the end points are neglected. In this representation, truncated or split lines will have similar coordinates in the Hough space. Likewise, the lines in the 3D map are projected onto the image plane using an estimate of the camera pose and taking into account the visibility constraints, and are represented as coordinates in the Hough space as well. We have designed several error functions to be used as similarity measure in the matching step. They are described in the following.Centred match count (CMC)The first similarity measure is based on the distance of line segments in the Hough space. We consider only those image features as possible matches that lie within a rectangular cell in the Hough space centred around the model feature. The matches are counted and the resulting sum is normalised. The mapping from the expectation (model features) to the measurement (image features) accounts for the fact that the measure should be invariant with respect to objects not modelled in the 3D map or unexpected changes in the operating environment. Invariance of the number of visible features is obtained by normalisation. Speci_cally, the centred match count measure sCMC is defined by:where the predicate p de_nes a valid match using the distance parameters (t_; t_) and the operator # counts the number of matches. Generally speaking, this similarity measure computes the proportion of expected model Hough points hei 2 He that are con_rmed by at least one measured image Hough point hmj 2 Hm falling within tolerance (t_; t_). Note that neither endpoint coordinates nor lengths are considered here.Grid length match (GLM)The second similarity measure is based on a comparison of the total length values of groupes of lines. Split lines in the image are grouped together using a uniform discretisation of the Hough space. This method is similar to the Hough transform for straight lines. The same is performed for line segments obtained from the 3D model. Let lmi;j be the sum of lengths of measured lines in the image falling into grid cell (i; j), likewise lei;j for expected lines according to the model, then the grid length match measure sGLM is de_ned as:For all grid cells containing model features, this measure computes the ratio of the total line length of measured and expected lines. Again, the mapping is directional, i.e., the model is used as reference, to obtain invariance of noise, clutter, and dynamic objects.Nearest neighbour and Hausdorf distanceIn addition, we experimented with two generic methods for the comparison of two sets of geometric entities: the nearest neighbour and the Hausdor_ distance. For details see [7]. Both rely on the de_nition of a distance function, which we based on the coordinates in the Hough space, i.e., the line parameter _ and _, and optionally the length, in a linear and exponential manner. See [5] for a complete description. Common error functionsFor comparisons, we also implemented the commonly used error functions e3D,e2D;1, and e2D;2. As they are de_ned in the Cartesian space, we represent lines in the Hessian notation, x sin _ y cos _ = d. Using the generic error function f, we de_ned the similarity measure as:where M is the set of measured lines and E is the set of expected lines. In case ofe2D;1, f is de_ned by the perpendicular distances between both model line endpoints, e1, e2, and the in_nitely extended image line m:Likewise, the dual similarity measure, using e2D;2, is based on the perpendicular distances between the image line endpoints and the in_nitely extended model line. Recalling that the error function e3D is proportional to the distances of model line endpoints to the view plane through an image line and the camera origin, we can instantiate Equation 1 using f3D(m; e) de_ned as:where ~nm denotes the normal vector of the view plane given by the image endpoints ~mi = [mx;my;w]T in camera coordinates.Obtaining probabilitiesIdeally, we want the similarity measure to return monotonically decreasing values as the pose estimate used for projecting the model features departs from the actual camera pose. As we aim at a generally valid yet simple visual-sensor model, the idea is to abstract from speci_c poses and environmental conditions by averaging over a large number of di_erent, independent situations. For commensurability, we want to express the model in terms of relative robot coordinates instead of absolute world coordinates. In other words, we assumeto hold, i.e., the probability for the measurement m, given the pose lm this image has been taken at, the pose estimate le, and the world model w, is equal to the probability of this measurement given a three-dimensional pose deviation 4l and the world model w.The probability returned by the visual-sensor model is obtained by simple scaling:4 Experimental ResultsWe have evaluated the proposed sensor model and similarity measures in a series of experiments. Starting with arti_cially created images using idealized conditions, we have then added distortions and noise to the tested images. Subsequently, we have used real images from the robot's camera obtained in a hallway. Finally, we have usedthe sensor model to track the position of the robot while it was travelling through the hallway. In all these cases, a three-dimensional visualisation of the model was obtained, which was then used to assess the solutions.Simulations using arti_cially created imagesAs a first kind of evaluation, we generated synthetic image features by generating a view at the model from a certain camera pose. Generally speaking, we duplicated the right-hand branch of Figure 1 onto the left-hand side. By introducing a pose deviation 4l, we can directly demonstrate its inuence on the similarity values. For visualisation purposes, the translational deviations 4x and 4y are combined into a single spatial deviation 4t. Initial experiments have shown only insigni_cant di_erences when they were considered independently.Fig. 2: Performance of CMC on arti_cially created images.For each similarity measure given above, at least 15 million random camera poses were coupled with a random pose deviation within the range of 4t < 440cm and 4_ < 90_ yielding a model pose.The results obtained for the CMC measure are depicted in Figure 2. The surface of the 3D plot was obtained using GNUPLOT's smoothing operator dgrid3d. We notice a unique, distinctive peak at zero deviation with monotonically decreasing similarity values as the error increases. Please note that this simple measure considers neither endpoint coordinates nor lengths of lines. Nevertheless, we obtain already a decent result.While the resulting curve for the GLM measure resembles that of CMC, the peak is considerably more distinctive. This conforms to our anticipation since taking the length of image and model lines into account is very signi_cant here. In contrast to the CMC measure, incidental false matches are penalised in this method, due to the differing lengths.The nearest neighbour measure turned out to be not of use. Although linear and exponential weighting schemes were tried, even taking the length of line segmentsinto account, no distinctive peak was obtained, which caused its exclusion from further considerations.The measure based on the Hausdor_ distance performed not as good as the first two, CMC and GLM, though it behaved in the desired manner. But its moderate performance does not pay off the longest computation time consumed among all presented measures and is subsequently disregarded.So far, we have shown how our own similarity measures perform. Next, we demonstrate how the commonly used error functions behave in this framework.The function e2D;1 performed very well in our setting. The resulting curve closely resembles that of the GLM measure. Both methods exhibit a unique, distinctive peak at the correct location of zero pose deviation. Note that the length of line segments has a direct e_ect on the similarity value returned by measure GLM, while this attribute implicitly contributes to the measure e2D;1, though both linearly. Surprisingly, the other two error functions e2D;2 and e3D performed poorly.Toward more realistic conditionsIn order to learn the e_ect of distorted and noisy image data on our sensor model, we conducted another set of experiments described here. To this end, we applied the following error model to all synthetically generated image features before they are matched against model features. Each original line is duplicated with a small probability (p = 0:2) and shifted in space. Any line longer than 30 pixel is split with probability p=0:3. A small distortion is applied to the parameters (_; _; l) of each line according to a random, zeromean Gaussian. Furthermore, features not present in the model and noise are simulated by adding random lines uniformly distributed in the image. Hereof, the orientation is drawn according to the current distribution of angles to yield fairly `typical' features.The results obtained in these simulations do not di_er significantly from the first set of experiments. While the maximum similarity value at zero deviation decreased, the shape and characteristics of all similarity measures still under consideration remained the same.Using real images from the hallwaySince the results obtained in the simulations above might be questionable with respect to real-world conditions, we conducted another set of experiments replacing the synthetic feature measurements by real camera images.To compare the results for various parameter settings, we gathered images with a Pioneer 2 robot in the hallway o_-line and recorded the line features. For two di_erent locations in the hallway exemplifying typical views, the three-dimensional space of the robot poses (x; y; _) was virtually discretized. After placing the robot manually at each vertex (x; y; 0), it performed a full turn on the spot stepwise recording images. This ensures a maximum accuracy of pose coordinates associated with each image. That way, more than 3200 images have been collected from 64 di_erent (x; y)locations. Similarly to the simulations above, pairs of poses (le; lm) were systematically chosenFig. 3: Performance of GLM on real images from the hallway.from with the range covered by the measurements. The values computed by the sensor model referring to the same discretized value of pose deviation 4l were averaged according to the assumption in Equation 2.The resulting visualisation of the similarity measure over spatial (x and y combined) and rotational deviations from the correct camera pose for the CMC measure exhibits a unique peak at approximately zero deviation. Of course, due to a much smaller number of data samples compared to the simulations using synthetic data, the shape of the curve is much more bumpy, but this is in accordance with our expectation.The result of employing the GLM measure in this setting is shown in Figure 3. As it reveals a more distinctive peak compared to the curve for the CMC measure, it demonstrates the increased discrimination between more and less similar feature maps when taking the lengths of lines into account.Monte Carlo Localisation using the visual-sensor modelRecalling that our aim is to devise a probabilistic sensor model for a camera mounted on a mobile robot, we continue with presenting the results for an application to mobile robot localisation.The generic interface of the sensor model allows it to be used in the correction step of Bayesian localisation methods, for example, the standard version of the Monte Carlolocalisation (MCL) algorithm. Since statistical independence among sensor readings renders one of the underlying assumptions of MCL, our hope is to gain improved accuracy and robustness using the camera instead of or in addition to commonly used distance sensors like sonars or laser.Fig. 4: Image and projected models during localisation.In the experiment, the mobile robot equipped with a _xed-mounted CCD camera had to follow a pre-programmed route in the shape of a double loop in the corridor. On its way, it had to stop at eight pre-de_ned positions, turn to a nearby corner or open view, take an image, turn back and proceed. Each image capture initiated the so-called correction step of MCL and the weights of all samples were recomputed according to the visual-sensor model, yielding the highest density of samples at the potentially correct pose coordinates in the following resampling step. In the prediction step, the whole sample set is shifted in space according to the robot's motion model and the current odometry sensor readings.Our preliminary results look very promising. During the position tracking experiments, i.e., the robot was given an estimate of its starting position, the best hypothesis for the robot's pose was approximately at the correct pose most of the time. In this experiment, we have used the CMC measure. In Figure 4, a typical camera view is shown while the the robots follows the requested path. The grey-level image depicts the visual input for feature extraction after distortion removal andpre-processing. Also the extracted line features are displayed. Furthermore, the world model is projected according to two poses, the odometry-tracked pose and the estimate computed by MCL which approximately corresponds to the correct pose, between which we observe translational and rotational error.The picture also shows that rotational error has a strong inuence on the degree ofcoincidental feature pairs. This effect corresponds to the results presented above, where the figures exhibit a much higher gradient along the axis of rotational deviation than along that of translational deviation. The finding can be explained by the effect of motion on features in the Hough space. Hence, the strength of our camera sensor model lays at detecting rotational disagreement. This property makes it especially suitable for two-wheel driven robots like our Pioneer bearing a much higher rotational odometry error than translational error.5 Conclusions and Future WorkWe have presented a probabilistic visual-sensor model for camera-pose estimation. Its generic design makes it suitable for sensor fusion with distance measurements perceived from other sensors. We have shown extensive simulations under ideal and realistic conditions and identified appropriate similarity measures. The application of the sensor model in a localisation task for a mobile robot met our anticipations. Within the paper we highlighted much scope for improvements.We are working on suitable techniques to quantitatively evaluate the performanceof the devised sensor model in a localisation algorithm for mobile robots. This will enable us to experiment with cluttered environments and dynamic objects. Combining the camera sensor model with distance sensor information using sensor fusion renders the next step toward robust navigation. Because the number of useful features varies significantly as the robots traverses an indoor environment, the idea to steer the camera toward richer views (active vision) offers a promising research path to robust navigation.References[1] F. Dellaert, W. Burgard, D. Fox, and S. Thrun. Using the condensationalgorithm for robust, vision-based mobile robot localisation. In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, 1999.[2] D. DeMenthon, P. David, and H. Samet. SoftPOSIT: An algorithm forregistration of 3D models to noisy perspective images combining Softassign and POSIT. Technical report, University of Maryland, MD, 2001.[3] G. N. DeSouza and A. C. Kak. Vision for mobile robot navigation: A survey.IEEE Trans. on Pattern Analysis and Machine Intelligence, 24(2):237{267,2002.[4] S. Enderle, M. Ritter, D. Fox, S. Sablatn•og, G. Kraetzschmar, and G. Palm.Soccer-robot localisation using sporadic visual features. In IntelligentAutonomous Systems 6, pages 959{966. IOS, 2000.[5] M. Fichtner. A camera sensor model for sensor fusion. Master's thesis, Dept. ofComputer Science, TU Dresden, Germany, 2002.[6] S. A. Hutchinson, G. D. Hager, and P. I. Corke. A tutorial on visual servocontrol. IEEE Trans. on Robotics and Automation, 12(5):651{ 670, 1996.[7] D. P. Huttenlocher, G. A. Klanderman, and W. J. Rucklidge. Comparing imagesusing the Hausdor_ distance. IEEE Trans. on Pattern Analysis and MachineIntelligence, 15(9):850{863, 1993.[8] A. Kosaka and A. C. Kak. Fast vision-guided mobile robot navigation usingmodel-based reasoning and prediction of uncertainties. Com- puter Vision,Graphics, and Image Processing { Image Understanding, 56(3):271{329, 1992. [9] R. Kumar and A. R. Hanson. Robust methods for estimating pose and asensitivity analysis. Computer Vision, Graphics, and Image Processing { Image Understanding, 60(3):313{342, 1994.[10] D. G. Lowe. Three-dimensional object recognition from single twodimensionalimages. Arti_cial Intelligence, 31(3):355{395, 1987.[11] S. Se, D. G. Lowe, and J. Little. Vision-based mobile robot localization andmapping using scale-invariant features. In Proc. of the IEEE Int. Conf. onRobotics and Automation, pages 2051{2058, 2001.[12] G. D. Sullivan, L. Du, and K. D. Baker. Quantitative analysis of the viewpointconsistency constraint in model-based vision. In Proc. of the 4th Int. IEEE Conf.on Computer Vision, pages 632{639, 1993.13一个有关移动机器人定位的视觉传感器模型Matthias Fichtner Axel Gro_mannArti_cial Intelligence InstituteDepartment of Computer ScienceTechnische Universit•at DresdenTechnical Report WV-03-03/CL-2003-02摘要我们提出一个在走廊和传感器模型凌乱的奥西环境下的概率估计。
传感器的基础知识中英文对照外文翻译文献

中英文对照外翻译Basic knowledge of transducersA transducer is a device which converts the quantity being measured into an optical, mechanical, or-more commonly-electrical signal. The energy-conversion process that takes place is referred to as transduction.Transducers are classified according to the transduction principle involved and the form of the measured. Thus a resistance transducer for measuring displacement is classified as a resistance displacement transducer. Other classification examples are pressure bellows, force diaphragm, pressure flapper-nozzle, and so on.1、Transducer ElementsAlthough there are exception ,most transducers consist of a sensing element and a conversion or control element. For example, diaphragms,bellows,strain tubes and rings, bourdon tubes, and cantilevers are sensing elements which respond to changes in pressure or force and convert these physical quantities into a displacement. This displacement may then be used to change an electrical parameter such as voltage, resistance, capacitance, or inductance. Such combination of mechanical and electrical elements form electromechanical transducing devices or transducers. Similar combination can be made for other energy input such as thermal. Photo, magnetic and chemical,giving thermoelectric, photoelectric,electromaanetic, and electrochemical transducers respectively.2、Transducer SensitivityThe relationship between the measured and the transducer output signal is usually obtained by calibration tests and is referred to as the transducer sensitivity K1= output-signal increment / measured increment . In practice, the transducer sensitivity is usually known, and, by measuring the output signal, the input quantity is determined from input= output-signal increment / K1.3、Characteristics of an Ideal TransducerThe high transducer should exhibit the following characteristicsa) high fidelity-the transducer output waveform shape be a faithful reproduction of the measured; there should be minimum distortion.b) There should be minimum interference with the quantity being measured; the presence of the transducer should not alter the measured in any way.c) Size. The transducer must be capable of being placed exactly where it is needed.d) There should be a linear relationship between the measured and the transducer signal.e) The transducer should have minimum sensitivity to external effects, pressure transducers,for example,are often subjected to external effects such vibration and temperature.f) The natural frequency of the transducer should be well separated from the frequency and harmonics of the measurand.4、Electrical TransducersElectrical transducers exhibit many of the ideal characteristics. In addition they offer high sensitivity as well as promoting the possible of remote indication or mesdurement. Electrical transducers can be divided into two distinct groups:a) variable-control-parameter types,which include:i)resistanceii) capacitanceiii) inductanceiv) mutual-inductance typesThese transducers all rely on external excitation voltage for their operation.b) self-generating types,which includei) electromagneticii)thermoelectriciii)photoemissiveiv)piezo-electric typesThese all themselves produce an output voltage in response to the measurand input and their effects are reversible. For example, a piezo-electric transducer normally produces an output voltage in response to the deformation of a crystalline material; however, if an alternating voltage is applied across the material, the transducer exhibits the reversible effect by deforming or vibrating at the frequency of the alternating voltage.5、Resistance TransducersResistance transducers may be divided into two groups, as follows:i) Those which experience a large resistance change, measured by using potential-divider methods. Potentiometers are in this group.ii)Those which experience a small resistance change, measured by bridge-circuit methods. Examples of this group include strain gauges and resistance thermometers.5.1 PotentiometersA linear wire-wound potentiometer consists of a number of turns resistance wire wound around a non-conducting former, together with a wiping contact which travels over the barwires. The construction principles are shown in figure which indicate that the wiperdisplacement can be rotary, translational, or a combination of both to give a helical-type motion. The excitation voltage may be either a.c. or d.c. and the output voltage is proportional to the input motion, provided the measuring device has a resistance which is much greater than the potentiometer resistance.Such potentiometers suffer from the linked problem of resolution and electrical noise. Resolution is defined as the smallest detectable change in input and is dependent on thecross-sectional area of the windings and the area of the sliding contact. The output voltage is thus a serials of steps as the contact moves from one wire to next.Electrical noise may be generated by variation in contact resistance, by mechanical wear due to contact friction, and by contact vibration transmitted from the sensing element. In addition, the motion being measured may experience significant mechanical loading by the inertia and friction of the moving parts of the potentiometer. The wear on the contacting surface limits the life of a potentiometer to a finite number of full strokes or rotations usually referred to in the manufacture’s specification as the ‘number of cycles of life expectancy’, a typical value being 20*1000000 cycles.The output voltage V0 of the unload potentiometer circuit is determined as follows. Let resistance R1= xi/xt *Rt where xi = input displacement, xt= maximum possible displacement, Rt total resistance of the potentiometer. Then output voltage V0= V*R1/(R1+( Rt-R1))=V*R1/Rt=V*xi/xt*Rt/Rt=V*xi/xt. This shows that there is a straight-line relationship between output voltage and input displacement for the unloaded potentiometer.It would seen that high sensitivity could be achieved simply by increasing the excitation voltage V. however, the maximum value of V is determined by the maximum power dissipation P of the fine wires of the potentiometer winding and is given by V=(PRt)1/2 .5.2 Resistance Strain GaugesResistance strain gauges are transducers which exhibit a change in electrical resistance in response to mechanical strain. They may be of the bonded or unbonded variety .a) bonded strain gaugesUsing an adhesive, these gauges are bonded, or cemented, directly on to the surface of the body or structure which is being examined.Examples of bonded gauges arei) fine wire gauges cemented to paper backingii) photo-etched grids of conducting foil on an epoxy-resin backingiii)a single semiconductor filament mounted on an epoxy-resin backing with copper or nickel leads.Resistance gauges can be made up as single elements to measuring strain in one direction only,or a combination of elements such as rosettes will permit simultaneous measurements in more than one direction.b) unbonded strain gaugesA typical unbonded-strain-gauge arrangement shows fine resistance wires stretched around supports in such a way that the deflection of the cantilever spring system changes the tension in the wires and thus alters the resistance of wire. Such an arrangement may be found in commercially available force, load, or pressure transducers.5.3 Resistance Temperature TransducersThe materials for these can be divided into two main groups:a) metals such as platinum, copper, tungsten, and nickel which exhibit and increase in resistance as the temperature rises; they have a positive temperature coefficient of resistance.b) semiconductors, such as thermistors which use oxides of manganese, cobalt, chromium, or nickel. These exhibit large non-linear resistance changes with temperature variation and normally have a negative temperature coefficient of resistance.a) metal resistance temperature transducersThese depend, for many practical purpose and within a narrow temperature range, upon the relationship R1=R0*[1+a*(b1-b2)] where a coefficient of resistance in ℃-1,and R0 resistance in ohms at the reference temperature b0=0℃ at the reference temperature range ℃.The international practical temperature scale is based on the platinum resistance thermometer, which covers the temperature range -259.35℃ to 630.5℃.b) thermistor resistance temperature transducersThermistors are temperature-sensitive resistors which exhibit large non-liner resistance changes with temperature variation. In general, they have a negative temperature coefficient. For small temperature increments the variation in resistance is reasonably linear; but, if large temperature changes are experienced, special linearizing techniques are used in the measuring circuits to produce a linear relationship of resistance against temperature.Thermistors are normally made in the form of semiconductor discs enclosed in glass vitreous enamel. Since they can be made as small as 1mm,quite rapid response times are possible.5.4 Photoconductive CellsThe photoconductive cell , uses a light-sensitive semiconductor material. The resistance between the metal electrodes decrease as the intensity of the light striking the semiconductor increases. Common semiconductor materials used for photo-conductive cells are cadmium sulphide, lead sulphide, and copper-doped germanium.The useful range of frequencies is determined by material used. Cadmium sulphide is mainly suitable for visible light, whereas lead sulphide has its peak response in the infra-red regionand is, therefore , most suitable for flame-failure detection and temperature measurement. 5.5 Photoemissive CellsWhen light strikes the cathode of the photoemissive cell are given sufficient energy to arrive the cathode. The positive anode attracts these electrons, producing a current which flows through resistor R and resulting in an output voltage V.Photoelectrically generated voltage V=Ip.RlWhere Ip=photoelectric current(A),and photoelectric current Ip=Kt.BWhere Kt=sensitivity (A/im),and B=illumination input (lumen)Although the output voltage does give a good indication of the magnitude of illumination, the cells are more often used for counting or control purpose, where the light striking the cathode can be interrupted.6、Capacitive TransducersThe capacitance can thus made to vary by changing either the relative permittivity, the effective area, or the distance separating the plates. The characteristic curves indicate that variations of area and relative permittivity give a linear relationship only over a small range of spacings. Thus the sensitivity is high for small values of d. Unlike the potentionmeter, the variable-distance capacitive transducer has an infinite resolution making it most suitable for measuring small increments of displacement or quantities which may be changed to produce a displacement.7、Inductive TransducersThe inductance can thus be made to vary by changing the reluctance of the inductive circuit. Measuring techniques used with capacitive and inductive transducers:a)A.C. excited bridges using differential capacitors inductors.b)A.C. potentiometer circuits for dynamic measurements.c) D.C. circuits to give a voltage proportional to velocity for a capacitor.d) Frequency-modulation methods, where the change of C or L varies the frequency of an oscillation circuit.Important features of capacitive and inductive transducers are as follows:i)resolution infiniteii) accuracy+- 0.1% of full scale is quotediii)displacement ranges 25*10-6 m to 10-3miv) rise time less than 50us possibleTypical measurands are displacement, pressure, vibration, sound, and liquid level.8、Linear Variable-differential Ttransformer9、Piezo-electric Transducers10、Electromagnetic Transducers11、Thermoelectric Transducers12、Photoelectric Cells13、Mechanical Transducers and Sensing Elements传感器的基础知识传感器是一种把被测量转换为光的、机械的或者更平常的电信号的装置。
机器人外文翻译(中英文翻译)

机器人外文翻译(中英文翻译)机器人外文翻译(中英文翻译)With the rapid development of technology, the use of robots has become increasingly prevalent in various industries. Robots are now commonly employed to perform tasks that are dangerous, repetitive, or require a high level of precision. However, in order for robots to effectively communicate with humans and fulfill their intended functions, accurate translation between different languages is crucial. In this article, we will explore the importance of machine translation in enabling robots to perform translation tasks, as well as discuss current advancements and challenges in this field.1. IntroductionMachine translation refers to the use of computer algorithms to automatically translate text or speech from one language to another. The ultimate goal of machine translation is to produce translations that are as accurate and natural as those generated by human translators. In the context of robots, machine translation plays a vital role in allowing them to understand and respond to human commands, as well as facilitating communication between robots of different origins.2. Advancements in Machine TranslationThe field of machine translation has experienced significant advancements in recent years, thanks to breakthroughs in artificial intelligence and deep learning. These advancements have led to the development of neural machine translation (NMT) systems, which have greatly improved translation quality. NMT models operate by analyzinglarge amounts of bilingual data, allowing them to learn the syntactic and semantic structures of different languages. As a result, NMT systems are capable of providing more accurate translations compared to traditional rule-based or statistical machine translation approaches.3. Challenges in Machine Translation for RobotsAlthough the advancements in machine translation have greatly improved translation quality, there are still challenges that need to be addressed when applying machine translation to robots. One prominent challenge is the variability of language use, including slang, idioms, and cultural references. These nuances can pose difficulties for machine translation systems, as they often require a deep understanding of the context and cultural background. Researchers are currently working on developing techniques to enhance the ability of machine translation systems to handle such linguistic variations.Another challenge is the real-time requirement of translation in a robotic setting. Robots often need to process and translate information on the fly, and any delay in translation can affect the overall performance and efficiency of the robot. Optimizing translation speed without sacrificing translation quality is an ongoing challenge for researchers in the field.4. Applications of Robot TranslationThe ability for robots to translate languages opens up a wide range of applications in various industries. One application is in the field of customer service, where robots can assist customers in multiple languages, providing support and information. Another application is in healthcare settings, where robots can act as interpreters between healthcare professionals and patientswho may speak different languages. Moreover, in international business and diplomacy, robots equipped with translation capabilities can bridge language barriers and facilitate effective communication between parties.5. ConclusionIn conclusion, machine translation plays a crucial role in enabling robots to effectively communicate with humans and fulfill their intended functions. The advancements in neural machine translation have greatly improved translation quality, but challenges such as language variability and real-time translation requirements still exist. With continuous research and innovation, the future of machine translation for robots holds great potential in various industries, revolutionizing the way we communicate and interact with technology.。
外文翻译--机器人技术简介

Introduction to robotics technologyIn the manufacturing field, robot development has focused on engineering robotic arms that perform manufacturing processes. In the space industry, robotics focuses on highly specialized, one-of-kind planetary rovers. Unlike a highly automated manufacturing plant, a planetary rover operating on the dark side of the moon -- without radio communication -- might run into unexpected situations. At a minimum, a planetary rover must have some source of sensory input, some way of interpreting that input, and a way of modifying its actions to respond to a changing world. Furthermore, the need to sense and adapt to a partially unknown environment requires intelligence (in other words, artificial intelligence).Mechanical platforms -- the hardware baseA robot consists of two main parts: the robot body and some form of artificial intelligence (AI) system. Many different body parts can be called a robot. Articulated arms are used in welding and painting; gantry and conveyor systems move parts in factories; and giant robotic machines move earth deep inside mines. One of the most interesting aspects of robots in general is their behavior, which requires a form of intelligence. The simplest behavior of a robot is locomotion. Typically, wheels are used as the underlying mechanism to make a robot move from one point to the next. And some force such as electricity is required to make the wheels turn under command.MotorsA variety of electric motors provide power to robots, allowing them to move material, parts, tools, or specialized devices with variousprogrammed motions. The efficiency rating of a motor describes how much of the electricity consumed is converted to mechanical energy. Let's take a look at some of the mechanical devices that are currently being used in modern robotics technology.Driving mechanismsGears and chains:Gears and chains are mechanical platforms that provide a strong and accurate way to transmit rotary motion from one place to another, possibly changing it along the way. The speed change between two gears depends upon the number of teeth on each gear. When a powered gear goes through a full rotation, it pulls the chain by the number of teeth on that gear.Pulleys and belts:Pulleys and belts, two other types of mechanical platforms used in robots, work the same way as gears and chains. Pulleys are wheels with a groove around the edge, and belts are the rubber loops that fit in that groove.Gearboxes:A gearbox operates on the same principles as the gear and chain, without the chain. Gearboxes require closer tolerances, since instead of using a large loose chain to transfer force and adjust for misalignments, the gears mesh directly with each other. Examples of gearboxes can be found on the transmission in a car, the timing mechanism in a grandfather clock, and the paper-feed of your printer.Power suppliesPower supplies are generally provided by two types of battery. Primary batteries are used once and then discarded; secondary batteries operate from a (mostly) reversible chemical reaction and can be recharged several times. Primary batteries have higher density and a lower self-dischargerate. Secondary (rechargeable) batteries have less energy than primary batteries, but can be recharged up to a thousand times depending on their chemistry and environment. Typically the first use of a rechargeable battery gives 4 hours of continuous operation in an application or robot.SensorsRobots react according to a basic temporal measurement, requiring different kinds of sensors.In most systems a sense of time is built-in through the circuits and programming. For this to be productive in practice, a robot has to have perceptual hardware and software, which updates quickly. Regardless of sensor hardware or software, sensing and sensors can be thought of as interacting with external events (in other words, the outside world). The sensor measures some attribute of the world. The term transducer is often used interchangeably with sensor. A transducer is the mechanism, or element, of the sensor that transforms the energy associated with what is being measured into another form of energy. A sensor receives energy and transmits a signal to a display or computer. Sensors use transducers to change the input signal (sound, light, pressure, temperature, etc.) into an analog or digital form capable of being used by a robot.Microcontroller systemsMicrocontrollers (MCUs) are intelligent electronic devices used inside robots. They deliver functions similar to those performed by a microprocessor (central processing unit, or CPU) inside a personal computer. MCUs are slower and can address less memory than CPUs, but are designed for real-world control problems. One of the major differences between CPUs and MCUs is the number of external components needed tooperate them. MCUs can often run with zero external parts, and typically need only an external crystal or oscillator.Utilities and toolsROBOOP (A robotics object oriented package in C++):This package is an object-oriented toolbox in C++ for robotics simulation. Technical references and downloads are provided in the Resources.CORBA: A real-time communications and object request broker software package for embedding distributed software agents. Each independent piece of software registers itself and its capabilities to the ORB, by means of an IDL (Interface Definition Language). Visit their Web site (see Resources) for technical information, downloads, and documentation for CORBA.TANGO/TACO:This software might be useful for controlling a robotics system with multiple devices and tools. TANGO is an object oriented control system based on CORBA. Device servers can be written in C++ or Java. TACO is object oriented because it treats all(physical and logical) control points in a control system as objects in a distributed environment. All actions are implemented in classes. New classes can be constructed out of existing classes in a hierarchical manner, thereby ensuring a high level of software reuse. Classes can be written in C++, in C (using a methodology called Objects in C), in Python or in LabView (using the G programming language).ControllersTask Control Architecture: The Task Control Architecture (TCA) simplifies building task-level control systems for mobile robots. "Task-level" refers to the integration and coordination of perception, planning, andreal time control to achieve a given set of goals (tasks). TCA provides a general control framework, and is intended to control a wide variety of robots. TCA provides a high-level machine-independent method for passing messages between distributed machines (including between Lisp and C processes). TCA provides control functions, such as task decomposition, monitoring, and resource management, that are common to many mobile robot applications. The Resources section provides technical references and download information for Task Control Architecture.EMC (Enhanced Machine Controller): The EMC software is based on the NIST Real time Control System (RCS) methodology, and is programmed using the NIST RCS Library. The RCS Library eases the porting of controller code to a variety of UNIX and Microsoft platforms, providing a neutral application programming interface (API) to operating system resources such as shared memory, semaphores and timers. The EMC software is written in C and C++, and has been ported to the PC Linux, Windows NT, and Sun Solaris operating systems.Darwin2K: Darwin2K is a free, open source toolkit for robot simulation and automated design. It features numerous simulation capabilities and an evolutionary algorithm capable of automatically synthesizing and optimizing robot designs to meet task-specific performance objectives.LanguagesRoboML (Robotic Markup Language): RoboML is used for standardized representation of robotics-related data. It is designed to support communication language between human-robot interface agents, as well as between robot-hosted processes and between interface processes, and to provide a format for archived data used by human-robot interface agents.ROSSUM: A programming and simulation environment for mobile robots. The Rossum Project is an attempt to help collect, develop, and distribute software for robotics applications. The Rossum Project hopes to extend the same kind of collaboration to the development of robotic software.XRCL (Extensible Robot Control Language): XRCL (pronounced zircle) is a relatively simple, modern language and environment designed to allow robotics researchers to share ideas by sharing code. It is an open source project, protected by the GNU Copyleft.SummaryThe field of robotics has created a large class of robots with basic physical and navigational competencies. At the same time, society has begun to move towards incorporating robots into everyday life, from entertainment to health care. Moreover, robots could free a large number of people from hazardous situations, essentially allowing them to be used as replacements for human beings. Many of the applications being pursued by AI robotics researchers are already fulfilling that potential. In addition, robots can be used for more commonplace tasks such as janitorial work. Whereas robots were initially developed for dirty, dull, and dangerous applications, they are now being considered as personal assistants. Regardless of application, robots will require more rather than less intelligence, and will thereby have a significant impact on our society in the future as technology expands to new horizons.外文出处:Robotic technology / edited by A. Pugh./P. Peregrinus, c1993.附件1:外文资料翻译译文机器人技术简介在制造业领域,机器人的开发集中在执行制造过程的工程机器人手臂上。
- 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
- 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
- 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。
(文档含英文原文和中文翻译)中英文资料外文翻译文献机器人和机器人传感器介绍工业机器人以及它的运行是本文的主题。
工业机器人是应用于制造环境下以提高生产率的一种工具。
它可用于承担常规的、冗长乏味的装配线工作,或执行那些对工人也许有危害的工作。
例如,在第一代工业机器人中,曾有一台被用于更换核电厂的核燃料棒。
从事这项工作的工人可能会暴露在有害量的放射线下。
工业机器人也能够在装配线上操作——安装小型元件,例如将电子元件安装在线路板上。
为此,工人可以从这种冗长乏味任务的常规操作中解放出来。
通过编程的机器人还能去掉炸弹的雷管、为残疾者服务以及在我们社会的众多应用中发挥作用。
机器人可被看作将臂端执行工具、传感器以及/或夹爪移动到某个预定位置的一台机器。
当机器人到达该位置,它将执行某个任务。
该任务可能是焊接、密封、机械装载、机械卸载,或许多装配工作。
除了编程以及打开和关闭系统之外,一般情况下,均不需要人们的参与就能完成这类工作。
机器人专业术语机器人是一台可再编程的多功能机械手,它可通过可编程运动移动零件、物料、工具或特殊装置以执行某种不同任务。
由这项定义可导致下面段落中被阐述的其他定义,它们为机器人系统提供了完整的写照。
预编程位置是机器人为了完成工作必须遵循和通过的途径。
在这些位置的某点,机器人会停下来并执行某种操作,例如装配零件,喷漆或焊接。
这些预编程位置被存储在机器人的记忆装置中供以后继续操作时使用。
此外,当工作的要求发生变化时,不仅其他编程数据而且这些预编程位置均可作修改。
因此,正由于这种编程的特点,一台工业机器人与一台可存储数据、以及可回忆及编辑的计算机十分相似。
机械手是机器人的手臂,它允许机器人俯仰、伸缩和转动。
这种动作是由机械手的轴所提供的,机械手的轴又称为机器人的自由度。
一台机器人可以具有3至16根轴。
在本人的后面部分,自由度这个术语总与一台机器人轴的数目相关联。
工具及夹爪并非属于机器人系统的本身,它们是装在机器人手臂端部的附件。
有了与机器人手臂端部相连接的这些附件,机器人就可以提起零件、点焊、喷漆、弧焊、钻孔、去毛刺,还可以根据所提要求指向各种类型的任务。
机器人系统还可以控制操作机器人的工作单元。
机器人工作单元是一种总体环境,在该环境下机器人必须执行赋予它的任务。
该单元可包容控制器、机器人的机械手、工作台、安全装置,或输送机。
机器人开展工作所需要的所有设备均被包括在这个工作单元中。
此外,来自外界装置的信号能够与机器人进行交流,这样就可以告诉机器人什么时候它该装配零件、捡起零件或将零件卸到输送机。
基本部件机器人系统具有3个基本部件:机械手、控制器及动力源。
在某些机器人系统中可以看到第4个部件,端部执行件,有关这些部件将在下面小节描述。
机械手机械手承担机器人系统的体力工作,它由两部分组成:机械部分及被连接的附属物。
机械手还有一个与附属物相连的底座。
机械手的底座通常被固定在工作领域的地面。
有时,底座也可以移动。
在该情况下,底座被安装到导轨上,这样该机械手就可以从一处移动到另一处。
例如,一台机器人可以为几台机床工作,为每台机床装载和卸载。
正如前面所述,附属物从机器人的底座伸出。
该附属物是机器人的手臂。
它既可以是一个直线型的可动臂,也可以是一个铰接臂。
铰接臂也称关节臂。
机器人机械手的附属物可为机械手提供各种运动轴。
这些轴与固定底座相连接,而该底座又被紧固到机架上。
这个机架能确保该机械手被维持在某个位置上。
在手臂的端部连接着一个手腕。
该手腕由附加轴及手腕法兰组成,有了该手腕法兰,机器人用户就可以根据不同的工作在手腕上安装不同的工具。
机械手的轴允许机械手在一定区域内执行工作。
如前所述,该区域被称为机器人的工作单元,它的尺度与机械手的尺寸相对应。
当机器人的物理尺寸增大时,工作单元的尺寸必然也随之增加。
机械手的运动由驱动器,或驱动系统所控制。
驱动器或驱动系统允许各根轴在工作单元内运动,驱动系统可利用电力的、液压的或气压动力。
驱动系统发出的能量由各种机械驱动装置转换成机械动力。
这些驱动装置通过机械联动机构接合在一起。
这些联动机构依次驱动机器人的不同轴。
机械联动机构由链轮机构,齿轮机构及滚珠丝杠所组成。
控制器机器人系统的控制器是运行的心脏。
控制器存储着为以后回忆所用的预编程信息,控制着外围设备,它还与厂内计算机进行交流以使生产不断更新。
控制器用于控制机器人机械手运动以及工作单元中的外围部件。
工作人员可以利用手递示教盒将机械手的动作编程进入控制器。
这种信息可被存储在控制器的记忆装置中以便以后回忆使用。
控制器存储着机器人系统的所有程序数据。
它可以存储几种不同的程序,并且它们中任一程序均可被编辑。
也可要求控制器与工作单元中外围设备进行交流。
例如,控制器具有一根输入线,该输入线可识别某项机械加工什么时候完成。
当该机械循环完成时,输入线被接通,它会吩咐控制器让机械手到位以便机械手能夹起以加工完的零件。
接着,该机械手再捡起一根新的零件并将它安放到机床上,然后,控制器向该机床发出信号让它开始运转。
控制器可由机械操纵的磁鼓构成,这些鼓按工作发生的先后次序操作。
这类控制器用于非常简单的机器人系统。
在大多数机器人系统中见到的控制器是很复杂的装置,它们体现了现代化的电子科学。
换言之,它们由微信息处理器操纵。
这些微信息处理器不是8位、16位就是32位的信息处理器。
这种功能使控制器的运行具有非常好的柔性。
控制器可通过通讯线路发出电子信号,发出能与机械手各轴线进行沟通的电信号,机器人机械手与控制器之间这种双向交流可使系统的位置及运行维持在不断修正及更新得状态下,控制器还可以控制安装在机器人手腕端部的任意工具。
控制器还有与工厂中不同计算机开展交流的任务,这个通讯网络可使机器人成为计算机辅助制造(CAM)系统的一部分。
根据上述基本定义,机器人是一台可再编程序的多功能机械手。
所以,控制器必须包含某种形式的记忆存储器,以微信息处理器为基础的系统常与固态记忆装置连同运行。
这些记忆装置可以是磁泡、随机存取记忆装置、软塑料磁盘或磁带。
每种记忆存储装置均可存储编程信息以便以后回忆使用。
动力源动力源是向控制器及机械手供给动力得装置,有两类动力供给机器人系统。
一类动力是供控制器运行的交流点动力,另一类被用于驱动机械手各轴。
例如,若机器人的机械手由液压或气压装置控制,则控制信号被发送到这些装置才能使机器人运动。
每个机器人系统均需要动力来驱动机械手,这种动力既可由液压动力源、气压动力源,也可以由电力动力源提供,这些动力源是机器人工作单元总的部件及设备中的一部分。
当液压动力源与及机器人机械手底座相连接,液压源产生液压流体,这些流体输送到机械手各控制元件,于是,使轴绕机器人底座旋转。
压力空气被输送到机械手,使轴沿轨道作直线运动,也可将这种气动源连接到钻床,它可为钻头的旋转提供动力。
一般情况下,可从工厂得供给站获取气动源并做调整,然后将它输入机器人机械手的轴。
电动机可以是交流式的,也可以是直流式的。
控制器发出的脉冲信号被发送到机械手得电机。
这些脉冲为电机提供必要的指令信息以使机械手在机器人底座上旋转。
用于机械手轴的三种动力系统任一种均需要使用反馈监督系统,这种系统会不断地将每个轴位置数据反馈给控制器。
每种机器人系统不仅需要动力来开动机械手的轴,还需要动力来驱动控制器,这种动力可由制造环境的动力源提供。
端部执行件在大部分机器人应用的场合见到的端部执行件均是机械手手腕法兰相连接的一个装置,端部执行件可应用于生产领域中许多不同场合,例如,它可用于捡起零件,用于焊接,或用于喷漆,端部执行件为机器人系统提供了机器人运行时必须的柔性。
通常所设计得端部执行件可满足机器人用户的需要。
这些部件可由机器人制造商或机器人系统的物主制造。
端部执行件事机器人系统中唯一可将一种工作变成另一种工作的部件,例如,即日起可与喷水割机相连,它在汽车生产线上被用于切割板边。
也可要求机器人将零件安放到磁盘中,在这简单的过程中,改变了机器人端部执行件,该机器人就可以用于其它应用场合,端部执行件得变更以及机器人的再编程序可使该系统具有很高的柔性。
机器人传感器尽管机器人有巨大的能力,但很多时候却比不过没有经过一点训练的工人。
例如,工人们能够发现零件掉在地上或发现进料机上没有零件,但没有了传感器,机器人就得不到这些信息,及时使用最尖端的传感器,机器人也比不上一个经验丰富的工人,因此,一个好的机器人系统的设计需要使用许多传感器与机器人控制器相接,使其尽可能接近操作工人得感知能力。
机器人技术最经常使用的传感器分为接触式的与非接触式的。
接触式传感器可以进一步分为触觉传感器、力和扭矩传感器。
触觉或接触传感器可以测出受动器端与其他物体间的实际接触,微型开关就是一个简单的触觉传感器,当机器人得受动气端与其他物体接触时,传感器是机器人停止工作,避免物体间的碰撞,告诉机器人已到达目标;或者在检测时用来测量物体尺寸。
力和扭矩传感器位于机器人得抓手与手腕的最后一个关节之间,或者放在机械手得承载部件上,测量反力与力矩。
力和扭矩传感器有压电传感器和装在柔性部件上的应变仪等。
非接触传感器包括接近传感器、视觉传感器、声敏元件及范围探测器等。
接近传感器和标示传感器附近的物体。
例如,可以用涡流传感器精确地保持与钢板之间的固定的距离。
最简单的机器人接近传感器包括一个发光二极管发射机和一个光敏二极管接收器,接收反射面移近时的反射光线,这种传感器的主要缺点是移近物对光线的反射率会影响接收信号。
其他得接近传感器使用的是与电容和电感相关的原理。
视觉传感系统十分复杂,基于电视摄像或激光扫描的工作原理。
摄像信号经过硬件预处理,以30帧至60帧每秒的速度输入计算机。
计算机分析数据并提取所需的信息,例如,物体是否存在以及物体的特征、位置、操作方向,或者检测元件的组装及产品是否完成。
声敏元件用来感应并解释声波,从基本的声波探测到人们连续讲话的逐字识别,各种声敏元件的复杂程序不等,除了人机语音交流外,机器人还可以使用声敏元件控制弧焊,听到碰撞或倒塌的声音时阻止机器人的运动,预测将要发生的机械破损及检测物体内部缺陷。
还有一种非接触系统使用投影仪和成像设备获取物体的表面形状信息或距离信息。
传感器有静态探测与闭环探测两种使用方法。
当机器人系统的探测和操作动作交替进行时,通常就要使用传感器,也就是说探测时机器人不操作,操作时与传感器无关,这种方法被称为静态探测,使用这种方法,视觉传感器先寻找被捕捉物体的位置与方向,然后机器人径直朝那个地点移动。
相反,闭式探测的机器人在操作运动中,始终受传感器的控制,多数视觉传感器都采用闭环模式,它们随时监测机器人的实际位置与理想位置间的偏差,并驱动机器人修正这一偏差。
在闭环探测中,即使物体在运动,例如在传送带上,机器人也能抓住它并把它送到预定位置。