Humanoid Robots - New Developments_部分12+18_部分2
21英语全能大一轮复习人教话题语篇专攻练 三十二 Robot 含解析

温馨提示:此套题为Word版,请按住Ctrl,滑动鼠标滚轴,调节合适的观看比例,答案解析附后。
关闭Word文档返回原板块。
话题语篇专攻练三十二选修7Unit 2Robots(限时35分钟)Ⅰ. 阅读理解ABest Smart Car ProductsBest GPS Tracker: Vyncs 3G GPS TrackerThe Vyncs tracker updates its location every three minutes. You can also update every 60 seconds, 30 seconds or 15 seconds. It offers up a stable 3G wireless connection in all 50 US states, the US Virgin Islands, Puerto Rico and Canada and then delivers information back to the owner via a Web portal or Vyncs IOS or Android apps.Best Car Charger: Roav VIVA Alexa-Enabled Car ChargerFar more than just a charger, the Roav VIVA is a smart accessory that brings a whole new level of control to your in-car entertainment and navigation experience. The USB ports are created with A nker’s patented Power IQ technology, charging your devices with lightning speed. And the dual-port design means both you and your passenger can plug in simultaneously.Best Smartphone Mount: iOttie Easy One Touch 3The iOttie Easy One Touch 3 attaches to your dashboard or windshield via a sticky gel pad that works on most surfaces, and it holds your smartphone tightly in place no matter where you stick it. The mount has a telescopic arm that brings your phone one inch closer to you and it can turn around 180 degrees. It can work with nearly every smartphone.Best Key Finder: Beets BLU Bluetooth Wireless Key/Item FinderBeets BLU Bluetooth Wireless Key/Item Finder will help you find lost keys and other objects like your purse, wallet and even your dog. This black electronic beeper is designed with a loud sound alarm and red LED light in case of link loss. The portable lost key detector also includes remote commands accessible on the key fob button press so you can send alerts.【文章大意】本文是应用文。
雅思阅读机经人类与机器人

雅思阅读机经人类与机器人大家在备考雅思阅读的时候可以多参考一些机经,让大家对雅思阅读的考试内容和形式有一个大致了解,下面小编给大家带来雅思阅读机经人类与机器人,希望对你们有所帮助。
雅思阅读机经真题解析:人类与机器人Man or MachineADuring July 2003, the Museum of Science in Cambridge, Massachusetts exhibited what Honda calls 'the world's most advanced humanoid robot', AS1MO (the Advanced Step in Innovative Mobility). Honda's brainchild is on tour in North America and delighting audiences wherever it goes. After 17 years in the making, ASIMO stands at four feet tall, weighs around 115 pounds and looks like a child in an astronaut's suit. Though it is difficult to see ASIMO's face at a distance, on closer inspection it has a smile and two large eyes' that conceal cameras. The robot cannot work autonomously - its actions are 'remote controlled' by scientists through the computer in its backpack. Yet watching ASMIO perform at a show in Massachusetts it seemed uncannily human. The audience cheered as ASIMO walked forwards and backwards, side to side and up and downstairs. After the show, a number of people told me that they would like robots to play more of a role in daily life - one even said that the robot would be like 'another person'.BWhile the Japanese have made huge strides in solving some of the engineering problems of human kinetics (n.动力学) and bipedal (adj. 两足动物的)movements, for the past 10 years scientists at MIT's former Artificial Intelligence (Al) lab (recently renamed the Computer Science and Artificial Intelligence Laboratory, CSAIL) have been making robots that can behave likehumans and interact with humans. One of MITs robots, Kismet, is an anthropomorphic (adj.拟人的) head and has two eyes (complete with eyelids), ears, a mouth, and eyebrows. It has several facial expressions, including happy, sad, frightened and disgusted. Human interlocutors are able to read some of the robot's facial expressions, and often change their behavior towards the machine as a result - for example, playing with it when it appears ‘sad’. Kismet is now in MIT’s museum, but the ideas developed here continue to be explored in new robots.CCog (short for Cognition) is another pioneering project from MIT’s former AI lab. Cog has a head, eyes, two arms, ha nds and a torso (n.躯干) - and its proportions were originally measured from the body of a researcher in the lab. The work on Cog has been used to test theories of embodiment and developmental robotics, particularly getting a robot to develop intelligence by responding to its environment via sensors, and to learn through these types of interactions.DMIT is getting furthest down the road to creating human-like and interactive robots. Some scientists argue that ASIMO is a great engineering feat but not an intelligent machine - because it is unable to interact autonomously with unpredictabilities in its environment in meaningful ways, and learn from experience. Robots like Cog and Kismet and new robots at MIT’s CSAIL and media lab, however, are beginning to do this.EThese are exciting developments. Creating a machine that can walk, make gestures and learn from its environment is an amazing achievement. And watch this space: these achievements are likely rapidly to be improved upon. Humanoid robots could have a plethora of uses in society, helping to free people from everyday tasks. In japan, for example, there is an aim to createrobots that can do the tasks similar to an average human, and also act in more sophisticated situations as firefighters, astronauts or medical assistants to the elderly in the workplace and in homes – partly in order to counterbalance the effects of an ageing population.FSuch robots say much about the way in which we view humanity, and they bring out the best and worst of us. On one hand, these developments express human creativity - our ability to invent, experiment, and to extend our control over the world. On the other hand, the aim to create a robot like a human being is spurred on by dehumanized ideas - by the sense that human companionship can be substituted by machines; that humans lose their humanity when they interact with technology; or that we are little more than surface and ritual behaviors, that can be simulated with metal and electrical circuits.Questions 1-6Reading passage 1 has six paragraphs, A-F.Which paragraph contains the following information?Write the correct letter, A-F, in boxes 1-6 on your answer sheet.NB you may use any letter more than once1 different ways of using robots2 a robot whose body has the same proportion as that of an adult3 the fact that human can be copied and replaced by robots4 a comparison between ASIMO from Honda and other robots5 the pros and cons of creating robots6 a robot that has eyebrowsQuestions 7-13Complete the following summary of the paragraphs of Reading Passage 1, using NO MORE THAN TWO WORDS from the Reading Passage for each answer.Write your answers in boxes 7-13 on your answer sheet.In 2003, Massachusetts displayed a robot named ASIMO which was invented by Honda, after a period of 7 in the making. The operating information is stored in the computer in its 8 so that scientists can control ASIMO's movement. While Japan is making great progress, MIT is developing robots that are human-like and can 9 humans. What is special about Kismet is that it has different 10 which can be read by human interlocutors. 11 is another robot from MIT, whose body's proportion is the same as an adult. By responding to the surroundings through 12 ,it could develop its 13 .文章题目:Man or Machine篇章结构体裁议论文题目是人还是机器结构A. ASMID研制成功并向公众展示的社会影响B. CSAIL一直致力于研制拟人机器人C. Cog是有着和人来一样的比例的机器人D. 在创造类人互动机器人方面, MIT走在前端E. 类人机器人的发展空间F. 创造类人机器人的利与弊试题分析Question 1-13题目类型:Information in relevant paragraph定位词文中对应点题目解析1Different ways E段第4句E段开头就引出创造机器人的成就, 随后并提出这些成就有一定的发展空间, 直到第四句说明这些类人机器人have a plethora of uses,用途多样. 因此答案为E2The same proportion...adultC段第2句C段第2句提到cog has a head...and its proportions were originally measured from the body of a researcher in the lab. 表明该机器人是按照成年人人体比例创造的, 因此答案为C3Copied replacedF段第3句F段第三句the aim to create...by the sense human...can be substituted..., that can be simulated 都表示人类可被机器等取代.因此答案为F4ComparisonASIMO... Pther robotsD段第2,3句D段第2句指出ASIMO is...but not an intelligent machine,because it is unable to...learn from experience.第3句又表明robots like...however, are beginning to do this. 体现出其他机器人能做到ASIMO所不能做到的自发学习. 因此答案为D5Pros and consF段第1句F段开头指出这些机器人证明了我们看待人性的方式, bring out the best and worst of us.这半句话体现出创造机器人的利与弊. 因此答案为F6eyebrowsB段倒数第4句B段倒数第四句提到one of MIT’S robots is...and has two eyes...and eyebrows. 因此答案B Question7-13 Summary from Reading Passagesummary参考解题思路: 先跳开空格把该段通读一遍, 了解大意, 发现总体是按照文章段落顺序概括的. (如有所遗忘, 再看原文各段段首句, 大概知道各句在文章的相应段落)解析: 第1句和第2句对应文章A段, 根据after a period of 7___in the making定位该段第3句, 答案为17 years. 然后根据文章倒数第四句its action are...controlled by scientists through...in its backpack.可以判断8答案为backpack. 该题第3, 4句对应文章B段, MIT is inventing robots...with the ability to 8___humans定位该段第2句behave like humans and interact with humans.可以判断9答案为interact with. 根据Kismet ...has various...by human interlocutors 定位原文倒数第2句human interlocutors are able to read some of the robots’ facial expressions得出10答案为facial expressions. 第5,6句对应原文C段, robot from MIT,proportion定位该段第1, 2句得出11答案为Cog/cognition. 最后根据该段最后一句getting a robot to develop intelligence via sensors判断12答案为sensors, 13 答案为intelligence.参考翻译:是人还是机器A在2003年7月,曼彻斯特的剑桥博物馆陈列了Honda称之为“世界最先进的人性机器人”:ASIMO (即“创新移动的进步之举)。
机器人的现状与发展趋势英语作文

机器人的现状与发展趋势英语作文The Current Situation and Development Trends of Robots。
In recent years, robots have become an increasingly prominent presence in our lives. From industrial automation to personal assistants, robots are transforming the way we live and work. In this article, we will explore the current situation of robots and discuss the future development trends in this field.1. The Current Situation of Robots。
Robots have made significant advancements in various industries. In manufacturing, robots have revolutionized production processes by increasing efficiency and precision. They can perform repetitive tasks with high accuracy, reducing the need for human labor and minimizing errors. This has led to increased productivity and cost savings for companies.Furthermore, robots have also made their way into the service industry. From hotels to hospitals, robots are being used to assist with tasks such as cleaning, delivery, and even customer service. For example, in some hotels, robots are employed as concierge staff, providing guests with information and assistance. This not only enhances the customer experience but also reduces the workload of human employees.In addition to industrial and service robots, there has been significant progress in the development of humanoid robots. These robots are designed to resemble humans in appearance and behavior, with the aim of performing tasks that require human-like dexterity and interaction. They have the potential to assist in areas such as healthcare, elderly care, and education.2. Development Trends of Robots。
人和机器人的作文英语

In the heart of the bustling city, amidst the towering skyscrapers and the constant hum of traffic, a new era of companionship is unfolding. The integration of artificial intelligence into our daily lives has taken a significant leap forward with the advent of humanoid robots. These machines, designed to mimic human behavior and emotions, are no longer just the stuff of science fiction but are becoming an integral part of our society.The concept of a robot is not new we have been fascinated by the idea of creating mechanical beings since the time of ancient Greece. However, the modern humanoid robot is a far cry from the simple automata of the past. With advanced algorithms and sophisticated programming, these robots can perform tasks that were once thought to be the exclusive domain of humans.One of the primary benefits of humanoid robots is their ability to assist in tasks that are repetitive, dangerous, or require precision. In industries such as manufacturing, construction, and healthcare, robots can perform tasks that would be hazardous for humans, reducing the risk of injury and increasing efficiency. For example, a robot can work in a radioactive environment without the need for protective gear, or it can perform delicate surgeries with a level of precision that is beyond human capability. Moreover, humanoid robots are also being developed to provide companionship and support for the elderly and those with disabilities. These robots can assist with daily tasks, provide emotional support, and even engage in conversation, offering a sense of companionship that can be invaluable to those who may be isolated or lonely.However, the integration of robots into our society also raises a host of ethical and societal questions. As these machines become more advanced and more humanlike, the line between man and machine begins to blur. Questions about the rights and responsibilities of robots, as well as the potential for job displacement and the dehumanization of certain tasks, are becoming increasingly relevant.Furthermore, the development of humanoid robots also presents challenges in terms of privacy and security. As these robots become more integrated into our daily lives, they will have access to a wealth of personal information, raising concerns about data protection and the potential for misuse.In conclusion, the rise of humanoid robots represents a significant shift in our relationship with technology. While the potential benefits are vast, it is crucial that we approach this new era with caution and consideration, ensuring that the development of these machines is guided by ethical principles and a commitment to the wellbeing of all members of society. As we navigate this brave new world, it is essential that we strike a balancebetween embracing the possibilities of artificial intelligence and preserving the unique qualities that make us human.。
Humanoid Robots - New Developments_部分9

232 Humanoid Robots, New Developmentswhere )0(0O O and )1(1O O . In this equation, we assume that the initial state 0z is a function of certain variables which consist of partial set of the state, namely, a part of the initial state is independent and the other depends on it. Let the independent initial statevar iables be ))0(),0(),0(),0(),0(),0((0D M T M T f f f s s p z c . The r est of the initial state ar e decided by»»»¼º«««¬ª »¼º«¬ª »¼º«¬ª 000~~~~0000000,000000111f f ff T f f f T f f s ss s f s f s f p J J H J J H I I K S h p K M T M T M T M M T T (22)(23)(24)(25)The first three equations are coordinate conversion at the instant of landing and the last isthe condition of per fectly inelastic collision at the instant of landing. Let the impulsiveexternal force at the foot of support leg be f G . The impact force f G is inflicted at the instant of the landing and the gene alized velocity changes discontinuously. F om (1), the generalized momentum after the collision is given byf J x H xH T G ~(26)where x and x denote the gener alized velocities after and befor e collision, r espectively. ],,[~I R J Jis an extended Jacobian. Since it is suppor t phase after the collision, the condition (3) holds, namely,0~xJ (27) Describing (26) and (27) for x and f G , the following equation is obtained.»¼º«¬ª »¼º«¬ª»¼º«¬ª 00~~x H f xJJ H T G (28) Eliminating f G from (28), we havexJ J H J J H I xT T ~~~~111 (29) Here, x corresponds to )0(s xwhich is the gener alized velocity of the suppor t phase at 0 W , and x corresponds to )0(f x which is the generalized velocity of the flight phase at0 W . Taking into account the coor dinate conver sion between left and r ight leg, (29) istransformed into the form of (25).From (21), the following conditions are obtained.f d dz z f z f d d zg Tw w w w w wWO W O O 011(30)(31)(32)Minimum Energy Trajectory Planning for Biped Robots 233Also the gradients are given byO O vf v f v E z z z ETTw w w w w w c w wc w w 00000~~(33)(34)To find the optimal solution, the conjugate gr adient method in infinite dimensionalspace (Hilbert space) is applied to this problem. The procedures of the algorithm are as follows.1)The initial solution ))(,(0W v z c is given.2)The initial state 0z is computed by (22)-(25).3)The differential equation (32) is solved using 0z as the initial condition.4)1O is computed by (30) using the final value 1z .5)The differential equation (31) is backwardly solved using 1z .6)The gradients for 0z c and )(W v are computed by (33)(34) using )(W z ,)(W O , and)(W v .7)The temporary solution ))(,(0W v z c is updated toward the direction of the conjugate gradient.8)If the gradient is not small enough, return to 2.Finally, the input joint torques )(t n , the joint angles )(t T , the posture and position (of thebase link) )(t M ,)(t p , their der ivatives )(t T ,)(t M , and the suppor t phase r atio D areobtained. A general method to compute the partial derivatives in (30)-(34) is proposed in the next section.4. Computational Scheme for Partial DerivativeIt is difficult to calculate the partial derivatives in (30)-(34) symbolically, because basically it costs very much to obtain a symbolic expression of the equation of motion (1). In this section, a computational scheme for the par tial der ivatives based on numer ical r epr esentation of motion equation is pr oposed. Finally we can compute them easily by using for war d-backward recursive Newton-Eular formulation.Each partial derivative appeared in (30)-(34) is represented as follows.»»»»»»»»¼º««««««««¬ªw w w w w w w w u w w 001100100000000f Tf fT ff f s T s s T s s s T f x f x f x If x f x f x I Tz fD D D DD D (35)where234 Humanoid Robots, New Developmentsf f f f f f s s s s sg x C u H f g xC u H f 11s (36)(37)And then,f fi ff fi fffi f f fi fs si s s si s s si s s si s f x H H x g x x C H x f f x H H x g x x C H x f w w ¸¸¹·¨¨©§w w w w w w w w ¸¸¹·¨¨©§w w w w w w 1111 (38)(39)»»»»»»¼º««««««¬ª w w 00100000011f f s s T P TH PTH v fD D (40)where N N T s I P u )3(]0,[,N N T f I P u )6(]0,0,[ are selection matrices. And also,»»¼º««¬ª w w »»»»»»»»»»»»»»»¼º«««««««««««««««¬ªw w f sfT f s T s f s T T vf n n T Tn Tn zf T D T D T T D D 100100000000(41)(42)»»»»»»¼º««««««¬ª/* c w w 1000000~00000I K I z z T (43)Minimum Energy Trajectory Planning for Biped Robots 235»»»»»»»»»»»»»»»»»»¼º««««««««««««««««««¬ª »¼º«¬ª w w »¼º«¬ª w w u w w 02111111111111111111111111111111111f s s f s f s f s f s f s f s s Ts f T s f s s T s f s f s s i s T s s f s Ts f s f si s i s T s T s f s T s f s pJ p h p R R pJ J p R R p h R p J J p h J W z g T M M T T M M T T M M M T T T M M MM M T T T T T (44)where the subscript 1 corresponds to the value at 1 W , and»¼º«¬ªw /w *»¼º«¬ª /»»»¼º«««¬ª f si l fTff f T f f l s s x x I K K J J H J J H I K R J I KK 0000~~~~~~00~111(45)(46)(47)(48)r f fi T f f f T f f fi f f f T f f fi f T f f fiTf Tf f f fi Tf f f T f f fi f f l si K J x J H J J H x H H J J H x J J H x J J H J x J H J J H x H H K x ~~~~~~~~~~~~~~~~111111111:¸¸¹·¨¨©§w w w w w w : w w : :w w :w w w /w (49)»»»¼º«««¬ª : 0000~~~11I K K JH J r Tf f (50)(51)236 Humanoid Robots, New DevelopmentsThe par tial der ivatives appear ed in (38), (39), (44), (48), and (49) ar e computed by usingmodified Newton-Euler formulations.5. Numerical Study of Five-link Planar BipedThe proposed method is applied to a five-link planar biped robot. The specification of therobot and the control parameters are shown in Table 1. The robot is 1.2 [m] height and 60[kg] weight. The coordinates are taken as shown in Fig. 1.The optimal tr ajector ies ar e computed as shown in Fig. 3-Fig. 6. Snapshots of therunning motion are also shown in Fig. 7. The solid lines and the dashed lines show thetr ajector ies for the r ight leg and the left leg, r espectively. In Fig. 4, ther e ar e somediscontinuous velocities due to the impact force at the instant of the landing. In Fig. 5,the peak tor que for the hip joints appear s at the beginnings of the swinging. For theknee joints, the tor que becomes maximal at the instant of the landing and keeps over100N.m. In Fig. 6, the positive peak power for the hip and knee joints appear s dur ingkicking motion. For the hip joints, the power becomes negative peak value at thebeginning of the flight phase. This means that the hip joints absor b the ener gy of thekicking motion. The negative peak power for the knee joints appear s at the instant ofthe landing. Namely, knee joints absor b the impact power between the foot and theground.body length and weight 0.6m, 20kgthigh length and weight 0.3m, 10kgshin length and weight 0.3m, 10kgtotal height & weight 1.2m, 60kgstride S 0.5mperiod of one step T 0.5srunning speed T/S 1m/sTable 1. Specifications of robot and control parameters.knee hippeak angular velocity [rad/s] 14.5 11.6peak torque [N.m] 48.2 117.6peak power (positive) [W] 355 1265peak power (negative) [W] -699 -849consumption power [W] 27.9 -5.37total consumption power [W] 45.1Table 2. Actuator requirement.Table 2 shows requirements for the actuators based on this result. It is found that very bigpower is required for knee joints. However, its total consumption power has small negativevalue. Therefore, the main work is done by the hip joints. Since the negative power is alsoMinimum Energy Trajectory Planning for Biped Robots 237big, for real robots, the introduction of the energy regeneration mechanism such as elastic actuators or combination of high back-drivable actuators and bidirectional power converters is effective to reduce the total consumption power.(a) hip joint (b) knee joint.Fig. 3. Joint angles (solid line: right leg, dashed line: left leg).(a) hip joint (b) knee joint.Fig. 4. Angular velocities of joints (solid line: right leg, dashed line: left leg).(a) hip joint (b) knee joint.Fig. 5. Joint torques (solid line: right leg, dashed line: left leg).-150-100-500 50 100 150-0.4-0.20.2 0.4 0.60.81T o r q u e s o f k n e e j o i n t s [N .m ]Time [s]-150-100-500 50 100 150-0.4-0.20.2 0.4 0.60.81T o r q u e s o f h i p j o i n t s [N .m ]Time [s]-15-10-50 5 10 15-0.4-0.20.2 0.4 0.60.81A n g l u l a r v e l o c i t i e s o f k n e e j o i n t s [r a d /s ]Time [s]-15-10-50 5 10 15-0.4-0.20.2 0.4 0.60.81A n g l u l a r v e l o c i t i e s o f h i p j o i n t s [r a d /s ]Time [s]-2-1.5-1-0.5 00.5 11.5 2-0.4-0.20.2 0.4 0.60.81A n g l e s o f k n e e j o i n t s [r a d ]Time [s]-2-1.5-1-0.5 00.5 11.5 2-0.4-0.20.2 0.4 0.60.81A n g l e s o f h i p j o i n t s [r a d ]Time [s]support phase support phase support phasesupport phase support phase support phasesupport phase support phase support phasesupport phasesupport phase support phasesupport phasesupport phasesupport phasesupport phase support phase support phase238Humanoid Robots, New Developments(a) hip joint (b) knee joint.Fig. 6. Joint powers (solid line: right leg, dashed line: left leg).Fig. 7. Snapshots of running trajectory.6. ConclusionIn this chapter , the method to gener ate a tr ajector y of a r unning motion with minimum energy consumption is proposed. It is useful to know the lower bound of the consumption ener gy when we design the bipedal r obot and select actuator s. The exact and gener al for mulation of optimal contr ol for biped r obots based on numer ical r epr esentation of motion equation is proposed to solve exactly the minimum energy consumption trajectories. Thr ough the numer ical study of a five link planar biped r obot, it is found that big peak power and torque is required for the knee joints but its consumption power is small and the main work is done by the hip joints.8. ReferencesFujimoto, Y. & Kawamu ra, A. (1995). Th ree Dimensional Digital Simulation andAutonomous Walking Contr ol for Eight-axis Biped Robot, Proceedings of IEEE-1500-1000-5000 500 1000 1500-0.4-0.20.2 0.4 0.60.81P o w e r o f k n e e j o i n t s [W ]Time [s]-1500-1000-5000 500 1000 1500-0.4-0.20.2 0.4 0.60.81P o w e r o f h i p j o i n t s [W ]Time [s]support phase support phase support phase support phase support phase support phaseMinimum Energy Trajectory Planning for Biped Robots239Int ernat ional Conference on Robot ics and Aut omat ion , pp. 2877-2884, 0-7803-1965-6, Nagoya, May 1995, IEEE, New YorkFujimoto, Y. & Kawamura, A. (1998). Simulation of an Autonomous Biped Walking RobotIncluding Environmental Force Interaction, IEEE Robotics and Automation Magazine ,Vol. 5, No. 2, June 1998, pp. 33–42, 1070-9932Goswami, A. (1999). Foot-Rotation Indicator (FRI) Point: A New Gait Planning Tool toEvaluate Postu r al Stability of Biped Robot, Proceedings of IEEE In t erna t ional Conference on Robotics and Automation , pp. 47-52, 0-7803-5180-0, Detr oit, May 1999, IEEE, New YorkHirai, K.; Hirose, M.; Haikawa, Y. & Takenaka, T. (1998). TheDevelopment of Honda Humanoid Robot, Proceedings of IEEE International Conference on Robotics and Automation , pp. 1321-1326, 0-7803-4300-X, Leuven, May 1998, IEEE, New YorkHodgins, J. K. (1996). Three-Dimensional Human Running, Proceedings of IEEE InternationalConference on Robot ics and Aut omat ion , pp. 3271-3277, 0-7803-2988-0, Minneapolis, April 1996, IEEE, New York Kajita, S.; Nagasaki, T.; Yokoi, K.; Kaneko, K. & Tanie, K. (2002). Running Patte nGener ation for a Humanoid Robot, Proceedings of IEEE Int ernat ional Conference on Robotics and Automation , pp. 2755-2761, 0-7803-7272-7, Washington DC, May 2002, IEEE, New YorkKajita, S.; Kanehir o, F.; Kaneko, K.; Fujiwar a, K.; Har ada, K.; Yokoi, K. & Hir ukawa, H.(2003). Biped Walking Patter n Gener ation by using Pr eview Contr ol of Zer o-Moment Point,Proceedings of IEEE In t erna t ional Conference on Robo t ics and Automation , pp. 1620–1626, 0-7803-7736-2, Taipei, May 2003, IEEE, New YorkKaneko, K.; Kanehir o, F.; Kajita, S.; Hir ukaka, H.; Kawasaki, T.; Hir ata, M.; Akachi, K. &Isozumi, T. (2004). Humanoid Robot HRP-2, Proceedings of IEEE In erna ional Conference on Robotics and Automation , pp. 1083-1090, 0-7803-8232-3, New Or leans, April 2004, IEEE, New YorkLoffler , K.; Gienger , M. & Pfeiffer , F. (2003). Sensor and Contr ol Design of a DynamicallyStable Biped Robot, Proceedings of IEEE Int ernat ional Conference on Robot ics and Automation , pp. 484-490, 0-7803-7736-2, Taipei, May 2003, IEEE, New YorkNagasaki, T.; Kajita, S.; Kaneko, K.; Yokoi, K. & Tanie, K. (2004). A Running Experiment ofHumanoid Biped, Proceedings of IEEE/RSJ Int ernat ional Conference on Int elligent Robots and Systems , pp. 136-141, 0-7803-8463-6, Sendai, September 2004, IEEE, New YorkNishiwaki, K.; Kagami, S.; Kuffner J. J.; Inaba, M. & Inoue, H. (2003). Online HumanoidWalking Contr ol System and a Moving Goal Tr acking Exper iment, Proceedings of IEEE International Conference on Robotics and Automation , pp. 911-916, 0-7803-7736-2, Taipei, May 2003, IEEE, New YorkRaibert, M., H. (1986). Legged Robots That Balance , MIT Press, 0-262-18117-7, CambridgeRoussel, L.; Canudas-de-Wit, C. & Goswami, A. (1998). Gener ation of Ener gy OptimalComplete Gait Cycles for Biped Robots, Proceedings of IEEE International Conference on Robotics and Automation , pp. 2036–2041, 0-7803-4300-X, Leuven, May 1998, IEEE, New YorkSugahara, Y.; Endo, T.; Lim, H. & Takanishi, A. (2003). Control and Experiments of a Multi-pu r pose Bipedal Locomoto r with Pa rallel Mechanism, Proceedings of IEEE240 Humanoid Robots, New Developments Int ernat ional Conference on Robot ics and Aut omat ion, pp. 4342-4347, 0-7803-7736-2, Taipei, May 2003, IEEE, New YorkVukob atovic, M.; Bo ovac, B. & Su dilovic, D. (2001). Ze o-Moment Point - P ope Inter pr etation and New Applications, Proceedings of Int ernat ional Conference on Humanoids Robots, pp. 237-244, 4-9901025-0-9,Tokyo, November 2001, IEEE, New YorkYamaguchi, J.; Soga, E.; Inoue, S. & Takanishi, A. (1999). Development of a Bipedal Humanoid Robot—Contr ol Method of Whole Body Cooper ative Dynamic Biped Walking,Proceedings of IEEE International Conference on Robotics and Automation, pp.368-374, 0-7803-5180-0, Detroit, May 1999, IEEE, New York14 Real-time Vision Based Mouth Tracking and Parameterization for a Humanoid Imitation TaskSabri Gurbuz a,b, Naomi Inoue a,b and Gordon Cheng c,da NICT Cognitive Information Science Laboratories, Kyoto, Japanb ATR Cognitive Information Science Laboratories, Kyoto, Japanc ATR-CNS Humanoid Robotics and Computational Neuroscience, Kyoto, Japand JST-ICORP Computational Brain Project, Kawaguchi, Saitama, Japan.1.IntroductionRobust real-time stereo facial feature tracking is one of the important research topics for a var iety multimodal Human-Computer, and human r obot Inter face applications, including telepresence, face recognition, multimodal voice recognition, and perceptual user interfaces (Moghaddam et al., 1996; Moghaddam et al., 1998; Yehia et al., 1988). Since the motion of a person's facial features and the direction of the gaze is largely related to person's intention and attention, detection of such motions with their 3D r eal measur ement values can be utilized as a natur al way of communication for human r obot inter action. For example, addition of visual speech information to robot's speech recognizer unit clearly meets at least two pr acticable cr iter ia: It mimics human visual per ception of speech r ecognition, and it may contain information that is not always present in the acoustic domain (Gurbuz et al., 2001). Another application example is enhancing the social interaction between humans and humanoid agents with robots learning human-like mouth movements from human trainers during speech (Gurbuz et al., 2004; Gurbuz et al., 2005).The motivation of this research is to develop an algorithm to track the facial features using ster eo vision system in r eal wor ld conditions without using pr ior tr aining data. We also demonstr ate the ster eo tr acking system thr ough a human to humanoid r obot mouth mimicking task. Vider e ster eo vision har dwar e and SVS softwar e system ar e used for implementing the algorithm.This work is organized as follows. In section 2, related earlier works are described. Section 3 discusses face RIO localization. Section 4 pr esents the 2D lip contour tr acking and its extention to 3D. Experimental results and discussions are presented in Section 5. Conclusion is given in Section 6. Finally, future extention is described in Section 7.2. Related WorkMost pr evious appr oaches to facial featur e tr acking utilize skin tone based segmentation fr om single camer a exclusively (Yang & Waibel, 1996; Wu et al., 1999; Hsu et al., 2002; Ter r illon & Akamatsu, 1999; Chai & Ngan, 1999). However, color infor mation is ver y sensitive to lighting conditions, and it is ver y difficult to adapt the skin tone model to a dynamically changing environment in real-time.242 Humanoid Robots, New Developments Kawato and Tetsutani (2004) proposed a mono camera based eye tracking technique based on six-segmented filter (SSR) which oper ates on integr al images (Viola & Jones, 2001). Support vector machine (SVM) classification is employed to verify pattern between the eyes passed fr om the SSR filter. This appr oach is ver y attr active and fast. However, it doesn't benefit from stereo depth information. Also SVM verification fails when the eyebrows are covered by the hair or when the lighting conditions are significantly different than the SVM training conditions.Newman et al., (2000) and Matsumoto et al., (19990) pr oposed to use 3D model fitting technique based on virtual spring for 3D facial feature tracking. In the 3D feature tracking stage each facial feature is assumed to have a small motion between the current frame and the pr evious one, and the 2D position in the pr evious fr ame is utilized to deter mine the search area in the current frame. The feature images stored in the 3D facial model are used as templates, and the right image is used as a search area firstly. Then this matched image in 2D feature tracking is used as a template in left image. Thus, as a result, 3D coordinates of each facial feature are calculated. This approach requires 3D facial model beforehand. For example, error in selection of a 3D facial model for the user may cause inaccurate tracking results.Russakoff and Her man (2000) pr oposed to use ster eo vision system for for egr ound and background segmentation for head tracking. Then, they fit a torso model to the segmented foreground data at each image frame. In this approach, background needs to be modeled first, and then the algorithm selects the largest connected component in the foreground for head tracking.Although all appr oaches r epor ted success under br oad conditions, the pr ior knowledge about the user model or requirement of modeling the background creates disadvantage for many pr actical usages. The pr oposed wor k extends these effor ts to a univer sal 3D facial featu e t acking system by adopting the six-segmented filte app oach Kawato and Tetsutani (2004) for locating the eye candidates in the left image and utilizing the ster eo infor mation for ver ification. The 3D measur ements data fr om the ster eo system allows verifying universal properties of the facial features such as convex curvature shape of the nose explicitly while such infor mation is not pr esent in the 2D image data dir ectly. Thus, stereo tracking not only makes tracking possible in 3D, but also makes tracking more robust. We will also descr ibe an online lip color lear ning algor ithm which doesn't r equir e pr ior knowledge about the user for mouth outer contour tracking in 3D.3. Face ROI LocalizationIn general, face tracking approaches are either image based or direct feature search based methods. Image based (top-down) approaches utilize statistical models of skin color pixels to find the face r egion fir st, accor dingly pr e-stor ed face templates or featur e sear ch algorithms are used to match the candidate face regions as in Chiang et al. (2003). Feature based appr oaches use specialized filter s dir ectly such as templates or Gabor filter of different frequencies and orientations to locate the facial features.Our wor k falls into the latter categor y. That is, fir st we find the eye candidate locations employing the integr al image technique and the six segmented r ectangular filter (SSR) method with SVM. Then, the similarities of all eye candidates are verified using the stereo system. The convex curvature shape of the nose and first and second derivatives around the nose tip are utilized for the verification. The nose tip is then utilized as a reference for theReal-time Vision Based Mouth Tracking and Parameterization for a Humanoid Imitation Task 243selection of the mouth ROI. At the cur r ent implementation, the system tr acks the per sonclosest to the camer a only, but it can be easily extended to a multiple face tr ackingalgorithm.3.1 Eye TrackingThe pattern of the between the eyes are detected and tracked with updated pattern matching.To cope with scales of faces, various scale down images are considered for the detection, andan appr opr iate scale is selected accor ding to the distance between the eyes (Kawato andTetsutani, 2004). The algorithm calculates the intermediate representation of the input imagecalled “Integral image“, described in Viola & Jones (2001). Then, a SSR filter is used for fastfilter ing of br ight-dar k r elations of the eye r egion in the image. Resulting face candidatesaround the eyes are further verified by perpendicular relationship of nose curvature shape aswell as the physical distance between the eyes, and eye level and nose tip.3.2 Nose Bridge and Nose Tip TrackingThe human nose has a convex curvature shape and the ridge of the nose from the eye levelto the tip of the nose lies on a line as depicted in Fig. 1. Our system utilizes the informationin the integr al intensity pr ofile of convex cur vatur e shape. The peak of the pr ofile of asegment that satisfies Eqn. 1 using the filter shown in Fig.2 is the convex hull point. Aconvolution filter with three segments traces the ridge with the center segment greater thanthe side segments, and the sum of the intensities in all thr ee segments gives a maximumvalue on the convex hull point. Fig.2 shows an example filter with three segments that tracesthe convex hull pattern starting from the eye line. The criteria for finding the convex hullpoint on an integral intensity profile of a row segment is as follows,(1) where S i denotes the integral value of the intensity of a segment in the maximum filter shownin Fig. 2, and j is the center location of the filter in the current integral intensity profile. Thefilter is convolved with the integr al intensity profile of ever y r ow segment. A row segmenttypically extends over 5 to 10 rows of the face ROI image, and a face ROI image typicallycontains 20 r ow segments. Integr al intensity pr ofiles of r ow segments ar e pr ocessed to findtheir hull points (see Fig.1 using Equation 1 until either the end of the face ROI is reached oruntil Eqn. 1 is no longer satisfied. For the refinement process, we found that the first derivativeof the 3D sur face data as well as the fir st der ivative of the intensity at the nose tip ar emaximum, and the second derivative is zero at the nostril level (Gurbuz etal., 2004a).Fig. 1. Nose bridge line using its convex hull points from integral intensity projections.244 Humanoid Robots, New Developments Fig. 2. A three-segment filter for nose bridge tracing.4. Lip TrackingThe nose tip location is then utilized for the initial mouth ROI selection. Human mouth has dynamic behavior and even dynamic colors as well as presence or absence of tongue and teeth. Therefore, at this stage, maximum-likelihood estimation of class conditional densities for subsets of lip (w 1) and non-lip (w 2) classes are formed in real-time for the Bayes decision r ule fr om the left camer a image. That is, multivar iate class conditional Gaussian density pa amete s a e estimated fo eve y image f ame using an unsupe vised maximum-likelihood estimation method.4.1 Online Learning and Extraction of Lip and Non-lip Data SamplesIn or der to alleviate the influence of ambient lighting on the sample class data, chromatic color transformation is adopted for color representation (Chiang et al., 2003; Yang et al., 1998). It was pointed out (Yang et al., 1998) that human skin colors are less variant in the chromatic color space than the RGB color space. Although in general the skin-color distr ibution of each individual may be modeled by a multivar iate nor mal distr ibution, the par ameter s of the distr ibution for differ ent people and differ ent lighting conditions ar e significantly differ ent. Ther efor e, online lear ning and sample data extraction are important keys for handling different skin-tone colors and lighting changes. To solve these two issues, the author s pr oposed an adaptation appr oach to t r ansfo r m the p r evious developed colo r model into the new envi ronment by combination of known par ameter s fr om the pr evious fr ames. This appr oach has two drawbacks in general. First, it requires an initial model to start, and second, it may fail in the case of a different user with completely different skin-tone color starts using the system.We propose an online learning approach to extract sample data for lip and non-lip classes to estimate their distribution in real time. Chiang et al. (2003) in their work provides hints for this approach. They pointed out that lip colors are distributed at the lower range of green channel in the (r,g) plane. Fig. 4 shows an example distribution of lip and non-lip colors in the normalized (r,g) space.Utilizing the nose tip, time dependent (r ,g) spaces for lip and non-lip ar e estimated for every fame by allowing H % (typically 10%) of the non-lip points stay within the lip (r,g) space as shown in Fig. 4. Then, using the obtained (r ,g) space infor mation in the initial classification, the pixels below the nostril line that falls within the lip space are considered as lip pixels, and the other pixels are considered as non-lip pixels in the sample data set ext action p ocess, and RGB colo values of pixels a e sto ed as class att ibutes, respectively.。
人工智能专业英语Unit1

– Dialogue: Artificial Intelligence – Listening Comprehension: Thinking Machines – Dictation: Intelligent Agent
Section A: The Turing Test
Section A: The Turing Test
Phrases • turn out to be 证明是,结果是,原来是 • up for grabs 大家有份
Section A: The Turing Test
Exercises I.Read the following statements carefully, and decide whether they are true (T) or false (F) according to the text.
II. Choose the best answer to each of the following questions according to the text.
3. Which of the following is based on whether a computer could fool a human to make him believe that the computer is another human? A. DOCTOR B. Chatbot C. ELIZA D. Turing test
2. When did Alan Turing write a landmark paper that asked the question: Can machines think?? A. 1991 B. 1960 C. 1950 D. None of th
机器人前沿报道英语作文

机器人前沿报道英语作文The field of robotics is rapidly advancing, with new developments and breakthroughs happening all the time. From humanoid robots to autonomous drones, the possibilities seem endless.Robots are being used in a variety of industries, from manufacturing to healthcare. They can perform tasks that are too dangerous or repetitive for humans, and they can do so with great precision and efficiency.One of the most exciting developments in robotics is the use of artificial intelligence. This allows robots to learn from their experiences and make decisions on their own, without human intervention. It's a game-changer for the field, and it's opening up new possibilities for what robots can do.As robots become more advanced, there are also ethical and social considerations to take into account. How do weensure that robots are used for the benefit of humanity, and not to its detriment? These are important questions that need to be addressed as the technology continues to evolve.Despite the challenges, the future of robotics looks incredibly promising. We can expect to see robots playing an even bigger role in our lives in the years to come, and the possibilities for what they can achieve are truly exciting.。
Conceptual Design of Humanoid Robots

Conceptual Design of Humanoid Robots Prof. Dr.-Ing. Dr.h.c. Albert AlbersInstitute of Machine Design and Automotive EngineeringUniversity of Karlsruhe1. IntroductionThe development of a humanoid robot within the scope of special research area 588 has the objective of creating a machine that closely cooperates with humans. This leads to requirements such as little weight, small moving masses (no potential danger for persons in case of collision), as well as appearance, motion space, and work movements after the human model. One reason for the last point is the requirement for the robot to operate in surroundings designed for humans. Another aspect is the acceptance by technologically unskilled users, which is likely to be higher if the robot has a humanoid shape and calculable movements.A humanoid robot is a highly complex mechatronical system, as the required functionality can only be achieved by the interplay of mechanical components with extensive sensor technology, state-of-the-art actuators and highly developed software. The development of mechatronical products is a major point of emphasis for research at our institute. 2. Development of an complex mechatronical system, e.g. humanoid robot2.1 Definition of the term “Mechatronic”In order to distinguish mechatronical systems from electromechanical systems, we define “Mechatronic” as follows [1]:“Mechatronics is concerned with technological systems, consisting of mechanical, electrical/electronical, and information technological subsystems that are characterised by intensive interaction and cannot be developed separately and in independent discipline-oriented processes.”2.2 Product development process in MechatronicSuccessful development of complex mechatronical systems is only possible in close cooperation of specialists of the concerned fields of mechanics, electronics, and information technology (fig. 1). Discipline-oriented partial solutions cannot provide or only with significant delays the desired result.Fig. 1: Product development process in MechatronicFig. 2: V-model. Reference for developing mechatronical productsThe development of technological systems can be carried through according to the V-model (fig. 2) [2]. After analysing all demands on the total system, the subfunctions and subsystems simultaneously being developed by the cooperating development teams are defined (left branch of the V-model). After verifying the subfunctions and testing the subsystems (e. g. the robot wrist including all actuators and sensors), the subsystems are gradually integrated and then the initial operation phase can begin (right branch of the V-model). The working structures with the necessary working surface pairs and connecting channel and support structures are defined according to the element model “working surface pairs & channel and support structures” developed at the Institute of Machine Design and Automotive Engineering [3].The development of technological systems is originally an iterative process involving the development of physical and mathematical models. These models help to verify hypotheses and to simulate and therefore predict properties. Additionally the model helps to gather information, which is not available from the real system, e. g. the tensile stress of certain construction components. Due to the complex hybrid structure, model development and simulation are of even greater significance when the mechatronical product development process is concerned. As tools and software are very much discipline-oriented and can very often not communicate, the process is even more difficult. This is an important research task in the field of mechatronics. The over-all solution, which is still in the conceptual and design phase of the developing process, can be contributed to build up the prototype. This is the current stage of the humanoid robot at the University of Karlsruhe. The construction of the prototype is also an iterative process into which experiences from preceding development stages are to be included.2.3 DIC-method, team-oriented development with internal competition The DIC-method (development by internal competition) is a way to increase the efficiency of team-oriented development processes. The incentive of internal competition between development teams of the same enterprise is used for finding the optimal solution. The competing teams are presented with the same terms of reference.Several development teams consisting of specialists of all the concerned subjects worked in competition in order to develop concepts for several subsystems of a humanoid robot for a period of approximately six months. By using the approaches of concurrent engineering and the DIC-method, a large number of different methods of resolution were developed (fig. 3). Each of these concepts consists of a multitude of component solutions for the mechanical structure of individual joints, sensor, and actuators. This large number of conceptual suggestions is the basis for the currently continuing development.Fig. 3: Different concepts of a humanoidrobot2.4 The demonstratorThe upper body considered optimal for a robot, developed according to the methods described, is currently being assembled at the Institute for Machine Design and Automotive Engineering. Its proportions correspond to those of an average woman with a height of 165 cm. A special emphasis has been put on the development of the arm mechanics. The robot’s arm of the first development stage will be equipped with 7 degrees of flexibility (fig.4). As a principle, only lightweight materials were used and the electric drive units were placed in the thorax in order to design a lighter arm. Three different principles are used to connect the motors and the joints. The power transmission to the wrist will be hydraulic for the first prototype. For the elbow, rope pulls will be used and the shoulder will be driven directly. This concept allows a minimal weight for the arm of only about 2, 5 kg [4]. Three different measuring principlesare applied for measuring the torsion angles, depending on the available construction space and the required accuracy. The torsion angles in the shoulder are measured absolutely by optical encoders, the ones in the elbow by precision rotary potentiometers, the ones in the wrist using a new type of magnetoresistive angular sensors [5].The neck joint (fig. 4) is equipped with four degrees of flexibility. Three rotation axes are situated in the lower neck segment and another one on the upper side of the neck, which allows the nodding of the head. The electrical motors are moved by the others as little as possible.For the pan-tilt units for moving the stereo camera system, a mechanism is implemented that allows each camera to move independently by two degrees of flexibility. It is driven by highly dynamic, brushless electric motors that are also stationary for dynamic reasons. As a high degree of accuracy is required for the angle measurement of the cameras, the high-resolution optical encoder is used here.Fig. 4: Components of the humanoid robot currently being assembled (arm, neck andpan-tilt unit)3. SummaryFor the development of a complex mechatronical system of a humanoid robot, a combination of the development methods concurrent engineering and DIC has proven to be target-oriented. In total, 33 different solutions that all fulfilled the requirements were developed in a brief period of time. The most promising concepts were then selected. They are currently being realised as the first prototype.4. References[1] Albers, A. Einführung in dieMechatronik , Lecture at theUniversity of Karlsruhe, 2001, mkl-Eigenverlag, Karlsruhe.[2] Gausemeier, Jürgen ; Lückel, Joachim.Entwicklungsumgebung Mechatronik ;Methoden und Werkzeuge zurEntwicklung mechatronischer Systeme;Paderborn ; HNI, 2000 (HNI-Verlagsschriftenreihe ; Bd.80).[3] Albers, A.; Matthiesen, S. .KonstruktionsmethodischesGrundmodell zum Zusammenhang vonGestallt und Funktion technischerSysteme – Das Elementmodell…Wirkflächenpaare &Leitstützstruktur“ zur Analyse undSynthese technischer Systeme .Konstruktion, Zeitschrift fürProduktentwicklung; Band 54; Heft7/8 - 2002; Seite 55 - 60; Springer-VDI-Verlag GmbH & Co. KG;Düsseldorf 2002.[4] Behrendt, Matthias . Entwicklung undKonstruktion der Armmechanik undSensorik eines Humanoiden Roboters .Degree Dissertation , Institut fürAngewandte Informatik, UniversitätKarlsruhe 2002.[5] Company Sensitec . Novelmagnetoresistive Angle-Sensors.Product information.。
- 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
- 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
- 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。
342 Humanoid Robots, New Developments Osuka, K.; Sugimoto Y. & Sugie T. (2004). Stabilization of Semi-Passive Dynamic Walking based on Delayed Feedback Contr ol, Journal of the Robotics Society of Japan, Vol.22,No.1, pp.130-139 (in Japanese)Asano, F.; Luo, Z.-W. & Yamakita, M. (2004). Gait Generation and Control for Biped Robots Based on Passive Dynamic Walking, Journal of the Robotics Society of Japan, Vol.22, No.1, pp.130-139Imadu, A. & Ono, K. (1998). Optimum Tr ajector y Planning Method for a System that Includes Passive Joints (1st Report, Proposal of a Function Approximation Method), Transact ions of t he Japan Societ y of Mechanical Engineers, Ser ies C, Vol.64, No.618, pp.136-142 (in Japanese)Ono, K. & Liu, R. (2001). An Optimal Walking Trajectory of Biped Mechanism (1st Report, Optimal Tr ajector y Planning Method and Gait Solutions Under Full-Actuated Condition),Transactions of the Japan Society of Mechanical Engineers, Series C, Vol.67,No.660, pp.207-214 (in Japanese)Liu, R & Ono, K. (2001). An Optimal Trajectory of Biped Walking Mechanism (2nd Report, Effect of Unde-Actuated Condition, No Knee Collision and St ide Length), Transact ions of t he Japan Societ y of Mechanical Engineers, Ser ies C, Vol.67, No.661, pp.141-148 (in Japanese)Ono, K. & Liu, R. (2002). Optimal Biped Walking Locomotion Solved by Trajectory Planning Method, Tr ansactions of the ASME, Journal of Dynamic Syst ems, Measurement andControl, Vol.124, pp.554-565Peng, C. & Ono K. (2003). Numerical Analysis of Energy-Efficient Walking Gait with Flexed Knee for a Four-DOF Planar Biped Model, JSME Int ernat ional Journal, Ser ies C, Vol.46, No.4, pp.1346-1355Hase T. & Huang, Q. (2005). Optimal Trajectory Planning Method for Biped Walking Robot based on Inequality State Constraint, Proceedings. of 36th International Symposium onRobotics, Biomechanical Robots, CD-ROM, WE413, Tokyo, JapanHase, T.; Huang, Q. & Ono, K. (2006). An Optimal Walking Trajectory of Biped Mechanism (3rd Report, Analysis of Upper Body Mass Model under Inequality State Constraintand Expe imental Ve ification), Transac ions of he Japan Socie y of Mechanical Engineers, Series C, Vol.72, No.721, pp.2845-2852 (in Japanese)Huang, Q.. & Hase, T. (2006). Energy-Efficient Trajectory Planning for Biped walking Robot, Proceedings. of t he 2006 IEEE Int ernat ional Conference on Robot ics and Biomimet ics,pp.648-653, Kunming, China20 Geminoid: Teleoperated Android of an ExistingPersonShuichi Nishio*, Hiroshi Ishiguro*̐, Norihiro Hagita**ATR Intelligent Robotics and Communication Laboratories̐Department of Adaptive Machine Systems, Osaka UniversityJapan1.Intro ductio nWhy are people attracted to humanoid robots and androids? The answer is simple: because human beings ar e attuned to under stand or inter pr et human expr essions and behavior s, especially those that exist in their surroundings. As they grow, infants, who are supposedly born with the ability to discriminate various types of stimuli, gradually adapt and fine-tune their interpretations of detailed social clues from other voices, languages, facial expressions, or behaviors (Pascalis et al., 2002). Perhaps due to this functionality of nature and nurture, people have a strong tendency to anthropomorphize nearly everything they encounter. This is also tr ue for computer s or r obots. In other wor ds, when we see PCs or r obots, some automatic process starts running inside us that tries to interpret them as human. The media equation theor y (Reeves & Nass, 1996) fir st explicitly ar ticulated this tendency within us. Since then, r esear cher s have been pur suing the key element to make people feel mor e comfor table with computer s or cr eating an easier and mor e intuitive inter face to var ious information devices. This pursuit has also begun spreading in the field of robotics. Recently, r esear cher’s inter ests in r obotics ar e shifting fr om tr aditional studies on navigation and manipulation to human-r obot inter action. A number of r esear ches have investigated how people respond to robot behaviors and how robots should behave so that people can easily understand them (Fong et al., 2003; Breazeal, 2004; Kanda et al., 2004). Many insights from developmental or cognitive psychologies have been implemented and examined to see how they affect the human r esponse or whether they help r obots pr oduce smooth and natur al communication with humans.However, human-robot interaction studies have been neglecting one issue: the "appearance versus behavior problem." We empirically know that appearance, one of the most significant elements in communication, is a crucial factor in the evaluation of interaction (See Figure 1). The inter active r obots developed so far had ver y mechanical outcomes that do appear as “robots.” Researchers tried to make such interactive robots “humanoid” by equipping them with heads, eyes, or hands so that their appearance more closely resembled human beings and to enable them to make such analogous human movements or gestur es as star ing, pointing, and so on. Functionality was consider ed the pr imar y concer n in impr oving communication with humans. In this manner, many studies have compar ed r obots with different behaviors. Thus far, scant attention has been paid to robot appearances. Although344 Humanoid Robots, New Developmentsthere are many empirical discussions on such very simple static robots as dolls, the design ofa robot’s appearance, particularly to increase its human likeness, has always been the role of industr ial designer s; it has seldom been a field of study. This is a ser ious pr oblem for developing and evaluating inter active r obots. Recent neur oimaging studies show that cer tain br ain activation does not occur when the obser ved actions ar e per for med by non-human agents (Per ani et al., 2001; Han et al., 2005). Appear ance and behavior ar e tightly coupled, and concern is high that evaluation results might be affected by appearance.Fig. 1. Three categories of humanlike robots: humanoid robot Robovie II (left: developed by ATR Intelligent Robotics and Communication Labor ator ies), andr oid Repliee Q2 (middle: developed by Osaka Univer sity and Kokor o cor por ation), geminoid HI-1 and its human source (right: developed by ATR Intelligent Robotics and Communication Laboratories).In this chapter , we intr oduce android science, a cr oss-inter disciplinar y r esear ch fr amewor k that combines two approaches, one in robotics for constructing very humanlike robots and androids, and another in cognitive science that uses androids to explore human nature. Here andr oids ser ve as a platfor m to dir ectly exchange insights fr om the two domains. To proceed with this new framework, several androids have been developed so far, and many r esear ches have been done. At that time, however , we encounter ed ser ious issues that spar ked the development of a new categor y of r obot called geminoid . Its concept and the development of the fir st pr ototype ar e descr ibed. Pr eliminar y findings to date and futur e directions with geminoids are also discussed.2. Android ScienceCurrent robotics research uses various findings from the field of cognitive science, especially in the human-r obot inte r action a r ea, t r ying to adopt findings f r om human-human inter actions with r obots to make r obots that people can easily communicate with. At the same time, cognitive science researchers have also begun to utilize robots. As research fields extend to more complex, higher-level human functions such as seeking the neural basis of social skills (Blakemor e, 2004), expectations will r ise for r obots to function as easily cont rolled appa r atuses with communicative ability. Howeve , the cont ibution f omrobotics to cognitive science has not been adequate because the appearance and behavior ofGeminoid: Teleoperated Android of an Existing Person 345current robots cannot be separately handled. Since traditional robots look quite mechanical and very different from human beings, the effect of their appearance may be too strong to ignore. As a result, researchers cannot clarify whether a specific finding reflects the robot’s appearance, its movement, or a combination of both.We expect to solve this problem using an android whose appearance and behavior closely resembles humans. The same thing is also an issue in robotics research, since it is difficult to clear ly distinguish whether the cues per tain solely to r obot behavior s. An objective, quantitative means of measuring the effect of appearance is required.Andr oids ar e r obots whose behavior and appear ance ar e highly anthr opomor phized. Developing andr oids r equir es contr ibutions fr om both r obotics and cognitive science. To ealize a mo e humanlike and oid, knowledge f om human sciences is also necessar y. At the same time, cognitive science r esear cher s can exploit andr oids for ver ifying hypotheses in under standing human natur e. This new, bi-dir ectional, cr oss-inter disciplinar y r esear ch fr amewor k is called android science (Ishigur o, 2005). Under this f r amewo r k, and r oids enable us to di r ectly sha re knowledge between the development of androids in engineering and the understanding of humans in cognitive science (Figure 2). Fig. 2. Bi-directional feedback in Android Science.The major r obotics issue in constr ucting andr oids is the development of humanlike appea ance, movements, and pe ception functions. On the othe hand, one issue in cognitive science is “conscious and unconscious recognition.” The goal of android science is to r ealize a humanlike r obot and to find the essential factor s for r epr esenting human likeness. How can we define human likeness? Further, how do we perceive human likeness? It is common knowledge that humans have conscious and unconscious recognition. When we observe objects, various modules are activated in our brain. Each of them matches the input sensory data with human models, and then they affect reactions. A typical example is that even if we recognize a robot as an android, we react to it as a human. This issue is fundamental both for engineer ing and scientific appr oaches. It will be an evaluation cr iter ion in andr oid development and will pr ovide cues for under standing the human brain’s mechanism of recognition.So far , sever al andr oids have been developed. Repliee Q2, the latest andr oid (Ishigur o, 2005), is shown in the middle of Figure 1. Forty-two pneumatic actuators are embedded in the android’s upper torso, allowing it to move smoothly and quietly. Tactile sensors, which ar e also embedded under its skin, ar e connected to sensor s in its envir onment, such as omnidirectional cameras, microphone arrays, and floor sensors. Using these sensory inputs, RoboticsSensor technology Mechanical eng. Control sys. A.I.Cognitive sci. PsychologyNeuro sci. Analysis andunderstanding ofhumans Development ofmechanical humans Hypothesis andVerification346 Humanoid Robots, New Developments the autonomous pr ogr am installed in the andr oid can make smooth, natur al inter actions with people near it.Even though these androids enabled us to conduct a variety of cognitive experiments, they ar e still quite limited. The bottleneck in inter action with human is its lack of ability to perform long-term conversation. Unfortunately, since current AI technology for developing humanlike brains is limited, we cannot expect humanlike conversation with robots. When meeting humanoid r obots, people usually expect humanlike conver sation with them. However, the technology greatly lags behind this expectation. AI progress takes time, and such AI that can make humanlike conversation is our final goal in robotics. To arrive at this final goal, we need to use cur r ently available technologies and under stand deeply what a human is. Ou solution fo this p oblem is to integ ate and oid and teleope ation technologies.3. GeminoidFig. 3. Geminoid HI-1 (right).We have developed Geminoid, a new category of robot, to overcome the bottleneck issue. We coined “geminoid” fr om the Latin “geminus,” meaning “twin” or “double,” and added “oides,” which indicates “similarity” or being a twin. As the name suggests, a geminoid is a robot that will work as a duplicate of an existing person. It appears and behaves as a person and is connected to the per son by a computer networ k. Geminoids extend the applicable field of android science. Androids are designed for studying human nature in general. With geminoids, we can study such per sonal aspects as pr esence or per sonality tr aits, tr acing their origins and implemention into robots. Figure 3 shows the robotic part of HI-1, the first geminoid prototype. Geminoids have the following capabilities:Appearance and behavior highly similar to an existing personThe appearance of a geminoid is based on an existing person and does not depend on the imagination of designers. Its movements can be made or evaluated simply by referring to the or iginal per son. The existence of a r eal per son analogous to the r obot enables easy compar ison studies. Mor eover, if a r esear cher is used as the or iginal, we can expect thatGeminoid: Teleoperated Android of an Existing Person 347 individual to offer meaningful insights into the experiments, which are especially important at the ver y fir st stage of a new field of study when beginning fr om established r esear ch methodologies.Teleoperation (remote control)Since geminoids are equipped with teleoperation functionality, they are not only driven by an autonomous pr ogr am. By intr oducing manual contr ol, the limitations in cur r ent AI technologies can be avoided, enabling long-ter m, intelligent conver sational human-r obot interaction experiments. This feature also enables various studies on human characteristics by separ ating “body” and “mind.” In geminoids, the oper ator (mind) can be easily exchanged, while the r obot (body) r emains the same. Also, the str ength of connection, or what kind of infor mation is tr ansmitted between the body and mind, can be easilyreconfigured. This is especially important when taking a top-down approach that adds/deletes elements fr om a per son to discover the “cr itical” elements that compr ise human characteristics. Before geminoids, this was impossible.3.1 System overviewThe current geminoid prototype, HI-1, consists of roughly three elements: a robot, a central controlling server (geminoid server), and a teleoperation interface (Figure 4).Fig. 4. Overview of geminoid system.A robot that resembles a living personThe robotic element has essentially identical structure as previous androids (Ishiguro, 2005). However, efforts concentrated on making a robot that appears—not just to resemble a living person—to be a copy of the original person. Silicone skin was molded by a cast taken from the original person; shape adjustments and skin textures were painted manually based on MRI scans and photographs. Fifty pneumatic actuators drive the robot to generate smooth and quiet movements, which are important attributes when interacting with humans. The allocations of actuator s wer e decided so that the r esulting r obot can effectively show the necessar y movements for human inter action and simultaneously expr ess the or iginal person’s personality traits. Among the 50 actuators, 13 are embedded in the face, 15 in the torso, and the remaining 22 move the arms and legs. The softness of the silicone skin and the compliant natur e of the pneumatic actuator s also pr ovide safety while inter acting with humans. Since this prototype was aimed for interaction experiments, it lacks the capability to walk ar ound; it always r emains seated. Figur e 1 shows the r esulting r obot (r ight) alongside the original person, Dr. Ishiguro (author).Teleoperationinterface The Internet server348 Humanoid Robots, New DevelopmentsTeleoperation interfaceFigur e 5 shows the teleoper ation inter face pr ototype. Two monitor s show the contr olled r obot and its sur r oundings, and micr ophones and a headphone ar e used to captur e and tr ansmit utter ances. The captur ed sounds ar e encoded and tr ansmitted to the geminoid server by IP links from the interface to the robot and vice versa. The operator’s lip corner positions are measured by an infrared motion capturing system in real time, converted to motion commands, and sent to the geminoid ser ver by the networ k. This enables the operator to implicitly generate suitable lip movement on the robot while speaking. However, compared to the large number of human facial muscles used for speech, the current robot only has a limited number of actuator s on its face. Also, r esponse speed is much slower , par tially due to the natur e of the pneumatic actuator s. Thus, simple tr ansmission and playback of the operator’s lip movement would not result in sufficient, natural robot motion. To over come this issue, measur ed lip movements ar e cur r ently tr ansfor med into contr ol commands using heuristics obtained through observation of the original person’s actual lipmovement.Fig. 5. Teleoperation interface.The oper ator can also explicitly send commands for contr olling r obot behavior s using a simple GUI interface. Several selected movements, such as nodding, opposing, or staring in a certain direction can be specified by a single mouse click. This relatively simple interface was pr epar ed because the r obot has 50 degr ees of fr eedom, which makes it one of the wor ld’s most complex r obots, and is basically impossible to manipulate manually in r eal time. A simple, intuitive inter face is necessar y so that the oper ator can concentr ate on inter action and not on r obot manipulation. Despite its simplicity, by cooper ating with the geminoid server, this interface enables the operator to generate natural humanlike motions in the robot.Geminoid serverThe geminoid ser ver r eceives r obot contr ol commands and sound data fr om the r emote cont r olling inte r face, adjusts and me r ges inputs, and sends and r eceives p rimitive contr olling commands between the r obot har dwar e. Figur e 6 shows the data flow in the geminoid system. The geminoid server also maintains the state of human-robot interaction and gener ates autonomous or unconscious movements for the r obot. As descr ibed above, as the r obot’s featur es become mor e humanlike, its behaviorshould also become suitablyGeminoid: Teleoperated Android of an Existing Person 349 sophisticated to retain a “natural” look (Minato et al., 2006). One thing that can be seen in every human being, and that most robots lack, are the slight body movements caused by an autonomous system, such as breathing or blinking. To increase the robot’s naturalness, the geminoid server emulates the human autonomous system and automatically generates these micr o-movements, depending on the state of inter action each time. When the r obot is “speaking,” it shows differ ent micr o-movements than when “listening” to other s. Such automatic r obot motions, gener ated without oper ator’s explicit or der s, ar e mer ged and adjusted with conscious oper ation commands fr om the teleoper ation inter face (Figur e 6). Alongside, the geminoid ser ver gives the tr ansmitted sounds specific delay, taking into account the transmission delay/jitter and the start-up delay of the pneumatic actuators. This adjustment serves synchronizing lip movements and speech, thus enhancing the naturalness of geminoid movement.Fig. 6. Data flow in the geminoid system.3.2 Experiences with the geminoid prototypeThe fir st geminoid pr ototype, HI-1, was completed and pr ess-r eleased in July 2006. Since then, numer ous oper ations have been held, including inter actions with lab member s and exper iment subjects. Also, geminoid was demonstr ated to a number of visitor s and r epor ter s. Dur ing these oper ations, we encounter ed sever al inter esting phenomena. Her e are some discourses made by the geminoid operator:x When I (Dr. Ishiguro, the origin of the geminoid prototype) first saw HI-1 sitting still, it was like looking in a mirror. However, when it began moving, it looked likesomebody else, and I couldn’t r ecognize it as myself. This was str ange, since wecopied my movements to HI-1, and other s who know me well say the r obot accur ately shows my char acter istics. This means that we ar e not objectively recognizing our unconscious movements ourselves.x While oper ating HI-1 with the oper ation inter face, I find myself unconsciously adapting my movements to the geminoid movements. The cur ent geminoid cannot move as freely as I can. I felt that, not just the geminoid but my own body isrestricted to the movements that HI-1 can make.350 Humanoid Robots, New DevelopmentsxIn less than 5 minutes both the visitor s and I can quickly adapt to conver sation through the geminoid. The visitors recognize and accept the geminoid as me whiletalking to each other.x When a visitor pokes HI-1, especially ar ound its face, I get a str ong feeling ofbeing poked myself. This is strange, as the system currently provides no tactilefeedback. Just by watching the monitors and interacting with visitors, I get this feeling.We also asked the visitors how they felt when interacting through the geminoid. Most said that when they saw HI-1 for the very first time, they thought that somebody (or Dr. Ishiguro, if familiar with him) was waiting there. After taking a closer look, they soon realized that HI-1 was a r obot and began to have some weir d and ner vous feelings. But shor tly after having a conver sation thr ough the geminoid, they found themselves concentr ating on the inte action, and soon the st ange feelings vanished. Most of the visito s we e non-researchers unfamiliar with robots of any kind.Does this mean that the geminoid has over come the “uncanny valley”? Befor e talking through the geminoid, the initial response of the visitors seemingly resembles the reactions seen with previous androids: even though at the very first moment they could not recognize the andr oids as ar tificial, they never theless soon become ner vous while being with the androids. Are intelligence or long-term interaction crucial factors in overcoming the valley and arriving at an area of natural humanness?We certainly need objective means to measure how people feel about geminoids and other types of robots. In a previous android study, Minato et al. found that gaze fixation revealed cr iter ia about the natur alness of r obots (Minato et al., 2006). Recent studies have shown different human responses and reactions to natural or artificial stimuli of the same nature. Per ani et al. showed that differ ent br ain r egions ar e activated while watching human or computer gr aphic ar ms movements (Per ani et al., 2001). Kilner et al. showed that body movement entrainment occurs when watching human motions, but not with robot motions (Kilner et al., 2003). By examining these findings with geminoids, we may be able to find some concr ete measur ements of human likeliness and appr oach the “appear ance ver sus behavior” issue.Perhaps HI-1 was recognized as a sort of communication device, similar to a telephone or a TV-phone. Recent studies have suggested a distinction in the b r ain p r ocess that discr iminates between people appear ing in videos and existing per sons appear ing live (Kuhl et al., 2003). While attending TV conferences or talking by cellular phones, however, we often experience the feeling that something is missing from a face-to-face meeting. What is missing here? Is there an objective means to measure and capture this element? Can we ever implement this on robots? 4. Summary and further issuesIn developing the geminoid, our purpose is to study Sonzai-Kan , or human presence, by extending the framework of android science. The scientific aspect must answer questions about how humans r ecognize human existence/presence. The technological aspect must r ealize a teleoper ated andr oid that wor ks on behalf of the per son r emotely accessing it. This will be one of the practical networked robots realized by integrating robots with the Internet.The following are our current challenges:Geminoid: Teleoperated Android of an Existing Person 351 Teleoperation technologies for complex humanlike robotsMethods must be studied to teleoper ate the geminoid to convey existence/pr esence, which is much mor e complex than tr aditional teleoper ation for mobile and industr ial robots. We are studying a method to autonomously control an android by transferring motions of the ope ato measu ed by a motion captu ing system. We a e also developing methods to autonomously control eye-gaze and humanlike small and large movements.Synchronization between speech utterances sent by the teleoperation system and body movementsThe most impor tant technology for the teleoper ation system is synchr onization between speech utter ances and lip movements. We ar e investigating how to pr oduce natur al behaviors during speech utterances. This problem is extended to other modalities, such ashead and ar m movements. Further, we are studying the effects on non-verbalcommunication by investigating not only synchronization of speech and lip movements but also facial expressions, head, and even whole body movements.Psychological test for human existence/presenceWe are studying the effect of transmitting Sonzai-Kan from remote places, such as meeting par ticipation instead of the per son himself. Mor eover, we ar e inter ested in studying existence/presence through cognitive and psychological experiments. For example, we are studying whether the andr oid can r epr esent the author ity of the per son himself by comparing the person and the android.ApplicationAlthough being developed as research apparatus, the nature of geminoids can allow us to extend the use of r obots in the r eal wor ld. The teleoper ated, semi-autonomous facility of geminoids allows them to be used as substitutes for clerks, for example, that can be contr olled by human oper ator s only when non-typical r esponses ar e r equir ed. Since in most cases an autonomous AI response will be sufficient, a few operators will be able to control hundreds of geminoids. Also because their appearance and behavior closely resembles humans, in the next age geminoids should be the ultimate interface device.5. AcknowledgementThis work was supported in part by the Ministry of Internal Affairs and Communications of Japan.6. ReferencesBlakemor e, S. J. & Fr ith, U. (2004). How does the br ain deal with the social wor ld?Neuroreport, 15, 119-128Br eazeal, C. (2004). Social Inter actions in HRI: The Robot View, IEEE Transactions on Man, Cybernetics and Systems: Part C, 34, 181-186Fong, T., Nourbakhsh, I., & Dautenhahn, K. (2003). A survey of socially interactive robots, Robotics and Autonomous Systems, 42, 143–166502 Humanoid Robots, New DevelopmentsFig. 12. Desir ed and r eal angle (degr ee) fr om left ar m and leg joints over 4000 ms as an average over 5 recurrences.Fig. 13 Tilt values for rear-front and right-left pitch as an average over 5 recurrences.Experiments on Embodied Cognition: A Bio-Inspired Approach for Robust Biped Locomotion503Fig. 14. Calculated er r or fr om the balance behavior ’s PID-contr oller as an aver age over 5recurrences.Fig. 15. An architecture integrating learning, representation and robust low-level control.8. ReferencesAlbr echt, M., Backhaus, T., Planthaber , S., Stoeppeler , H., Spenneber g, D., & Kir chner , F.(2005). AIMEE: A Four Legged Robot for RoboCup Rescue. In: Pr oceedings ofCLAWAR 2005. Springer.Ambr ose, Rober t et al "The development of the Robonaut System for Space Oper ations",ICAR 2001, Invited Session on Space Robotics, Budapest, August 20, 2001.B r eazeal, C. and Velasquez, J. “Towa r d Teaching a Robot ‘Infant’ using EmotiveCommunication Acts” In Pr oceedings of 1998 Simulation of Adaptive Behavior ,workshop on Socially Situated Intelligence, Zurich Switzerland. 25-40.B ooks, R. et al “The Cog P oject: Building a Humanoid Robot” In Computation foMetaphors, Analogy and Agents, Vol. 1562 of Springer Lecture Notes in ArtificialIntelligence, Springer-Verlag, 1998.。