Interaction Techniques with Virtual Humans in Mixed Environments
Application and Design of Artificial Intelligence Deep Learning Algorithms in Virtual Reality Interaction Products

Feb. 2019, Vol. 3, No. 1, Innovation and Development Forum. Huang Changzheng (1), Chen Xi (1), Zhou Yanming (2). Application and Design of Artificial Intelligence Deep Learning Algorithms in Virtual Reality Interaction Products (1. Dongguan Yilian Interaction Information Technology Co., Ltd., Dongguan, Guangdong 523808; 2. Guangzhou Huanjing Technology Co., Ltd., Guangzhou, Guangdong 510075). Abstract: With the development of virtual reality technology, the mouse and keyboard of traditional computers no longer suit the interaction needs of virtual reality environments, and the demand for virtual reality interaction has become increasingly prominent.
Through in-depth study of virtual reality interaction requirements, we designed a virtual reality interaction glove based on nine-axis inertial sensors and implemented with an artificial intelligence deep learning algorithm.
Nine-axis sensors capture finger and palm motion data; a deep learning algorithm performs hand gesture and posture recognition; and, with the accompanying software development kit (SDK), the complete motion of the hand is synchronously reproduced in the virtual reality environment and gesture interaction intent is recognized, achieving natural hand-motion interaction in the virtual reality environment.
Keywords: virtual reality; artificial intelligence; deep learning; human-computer interaction. CLC numbers: F124.3; O234. Document code: A. Article ID: 2096-5095(2019)01-0016-09. Received: 2019-01-30. Funding: Dongguan Leading Talent Program for Innovation and Entrepreneurship, "R&D and Industrialization of Natural Interaction Technology and Integrated Products for Virtual Reality". About the authors: Huang Changzheng (b. 1975), male, from Yongzhou, Hunan; bachelor's degree; assistant engineer; research interests: motion capture, gesture interaction, and virtual reality technology. Chen Xi (b. 1977), male, from Zhuzhou, Hunan; master's degree; senior engineer; research interests: human-computer interaction, motion capture, and gesture recognition. Zhou Yanming (b. 1990), male, from Foshan, Guangdong; bachelor's degree; assistant engineer; research interests: motion capture, gesture interaction, and virtual reality technology.
Application and Design of Artificial Intelligence Deep Learning Algorithms in Virtual Reality Interaction Products

Abstract: With the development of virtual reality technology, the mouse-and-keyboard interaction of traditional computers fails to meet the needs of interaction in virtual reality environments, and the requirements of virtual reality interaction have become more and more prominent. Through in-depth study of virtual reality interaction requirements, a virtual reality interaction glove based on nine-axis inertial sensors and an artificial intelligence deep learning algorithm is designed. The finger and palm motion data are captured by the nine-axis sensors, and hand gesture recognition is performed by the deep learning algorithm. Through the supporting software development kit (SDK), the complete motion of the hand is synchronously reproduced in the virtual reality environment and the gesture interaction intent is recognized. The purpose of interacting with natural hand movements in the virtual reality environment is thus achieved.

Key words: virtual reality; artificial intelligence; deep learning; human-machine interaction

0 Introduction

With the development of the virtual reality industry, major smart-device manufacturers have successively released different handheld controllers for virtual reality interaction.
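The abstract does not detail how the nine-axis readings are fused before gesture recognition. As a hedged illustration only (this is not the paper's algorithm), a common pre-processing step for such IMU data is a complementary filter that blends gyroscope integration with the accelerometer's gravity reference into an orientation estimate; all names below are assumptions:

```python
import math

def complementary_filter(gyro_pitch_rates, accel_samples, dt=0.01, alpha=0.98):
    """Fuse gyroscope pitch rate (rad/s) with the accelerometer's gravity
    direction into a pitch-angle estimate (radians).

    gyro_pitch_rates: iterable of angular rates about the pitch axis.
    accel_samples: iterable of (ax, ay, az) accelerometer readings in g units.
    alpha: trust placed in the integrated gyro versus the accel reference.
    """
    pitch = 0.0
    for rate, (ax, ay, az) in zip(gyro_pitch_rates, accel_samples):
        # Pitch implied by gravity alone (valid when the hand is not accelerating).
        accel_pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
        # Blend short-term gyro integration with the long-term accel reference.
        pitch = alpha * (pitch + rate * dt) + (1.0 - alpha) * accel_pitch
    return pitch

# A stationary, level hand: zero rotation rate, gravity along the z axis.
level = complementary_filter([0.0] * 100, [(0.0, 0.0, 1.0)] * 100)
```

In a real glove this kind of per-joint orientation estimate would typically feed the deep learning classifier; the paper itself does not specify this pipeline.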
English Essay: Virtual Reality Technology Applied to Sports Training to Improve Athletes' Skills
With the rapid development of technology, virtual reality (VR) has emerged as a powerful tool that can revolutionize various fields, including sports training. By creating a simulated environment, VR technology allows athletes to enhance their skills and performance in ways that were previously unimaginable. In this article, we will explore the applications of VR technology in sports training and the benefits it brings to athletes.

Firstly, VR technology provides a safe and controlled environment for athletes to practice and train. Traditional sports training often involves physical risks and limitations. However, with VR technology, athletes can simulate various game scenarios without the fear of injuries. For example, a basketball player can practice shooting techniques in a virtual basketball court, where they can repeat shots and receive instant feedback on their form and accuracy. This allows athletes to refine their skills and correct any errors in a risk-free environment.

Secondly, VR technology offers a highly immersive experience that enhances athletes' mental preparation and decision-making abilities. In sports, quick and accurate decision-making is crucial for success. By using VR technology, athletes can be exposed to realistic game situations and challenges, allowing them to develop their cognitive skills and improve their ability to make split-second decisions. For instance, a soccer player can practice their reaction time and decision-making skills by facing virtual opponents who move and react like real players. This immersive experience helps athletes develop a better understanding of the game and enhances their overall performance on the field.

Furthermore, VR technology enables athletes to access personalized training programs and receive real-time feedback. Coaches can create customized training modules tailored to each athlete's needs and goals. Athletes can then engage in virtual training sessions that target specific areas for improvement. During these sessions, VR technology can provide real-time feedback on technique, posture, and performance, allowing athletes to make immediate adjustments and improvements. This personalized approach to training ensures that athletes receive targeted guidance and maximize their potential.

In addition to individual training, VR technology also facilitates team training and collaboration. Team sports require coordination, communication, and synchronization among players. With VR technology, teams can engage in virtual practice sessions where they can simulate game scenarios, practice plays, and improve their teamwork. For example, a basketball team can use VR technology to run through offensive and defensive strategies, allowing players to understand their roles and positions more effectively. This collaborative training enhances team cohesion and performance, ultimately leading to better results on the field.

Moreover, VR technology offers athletes the opportunity to experience and learn from elite athletes and coaches. Through VR simulations, athletes can virtually train and compete with renowned athletes, observing their techniques and strategies up close. This exposure to top-level performance can inspire athletes and provide valuable insights for their own training. Additionally, coaches can use VR technology to analyze and evaluate athletes' performances, identifying areas for improvement and providing targeted feedback. This interaction with virtual mentors and coaches enhances athletes' learning experience and accelerates their skill development.

In conclusion, virtual reality technology has immense potential to revolutionize sports training. From providing a safe and controlled environment to enhancing decision-making abilities and offering personalized training programs, VR technology offers numerous benefits for athletes. By incorporating VR technology into sports training, athletes can improve their skills, performance, and overall success in their respective sports. As technology continues to advance, we can expect even more exciting developments in the field of virtual reality and its applications in sports training.
English Essay: The Computer Is the Greatest Invention

In the pantheon of human invention, the computer stands as a titan, reshaping every facet of our lives with its digital alchemy. From the abacus to the smartphone, the journey of computing devices has been nothing short of miraculous, a testament to human ingenuity and the relentless pursuit of progress.

The computer's inception can be traced back to the need for complex calculations, far beyond the capability of human mental arithmetic. Charles Babbage's Analytical Engine, conceptualized in the 19th century, laid the groundwork for what would become the modern computer. It was an idea ahead of its time, envisioning a machine that could perform a variety of calculations through a series of mechanical instructions.

As the 20th century dawned, the evolution of computers accelerated. The colossal ENIAC, developed in the 1940s, became the herald of the electronic age. It was a behemoth, consuming vast amounts of power and space, yet it unlocked new possibilities in computation, science, and engineering.

The true revolution, however, began with the miniaturization of electronic components. The invention of the transistor and later the integrated circuit paved the way for computers to become accessible to the masses. The 1980s saw the advent of personal computers, bringing the transformative power of computing into homes and offices around the world.

The impact of computers on society is immeasurable. They have become the backbone of modern infrastructure, controlling everything from traffic lights to financial markets. In science, computers enable simulations of complex phenomena, from the behavior of subatomic particles to the formation of galaxies. In medicine, they assist in diagnosing diseases and modeling biological processes.

Perhaps the most profound change has been in communication. The Internet, a global network of computers, has connected humanity in ways previously unimaginable. It has democratized information, fostered global communities, and given rise to new industries and careers. Social media, a byproduct of the Internet, has altered the landscape of human interaction, for better or worse.

The computer has also been a catalyst for creativity. Digital art, music production, and film-making have evolved with the tools provided by computing technology. Writers, artists, and musicians harness software to push the boundaries of their crafts, creating works that blend traditional techniques with digital innovation.

In education, computers have opened doors to knowledge that were once closed to many. Online courses and resources have made learning more accessible, breaking down barriers of geography and socioeconomic status. Students can explore subjects in virtual environments, engage with interactive modules, and collaborate with peers across the globe.

The future of computing promises even greater advancements. Quantum computers, still in their infancy, hint at a new era of processing power, capable of solving problems that are currently intractable. Artificial intelligence, powered by sophisticated algorithms and vast datasets, is poised to redefine what machines can do, blurring the lines between human and computer capabilities.

Yet, with great power comes great responsibility. The proliferation of computers has raised concerns about privacy, security, and the ethical use of technology. Cybersecurity has become a critical field, as individuals, corporations, and nations seek to protect their digital assets. The debate over artificial intelligence and automation touches on fundamental questions about the nature of work and the value of human labor.

In conclusion, the computer, in its myriad forms, is indeed one of the greatest inventions of mankind. It has reshaped the world in countless ways, becoming an indispensable tool for progress and a mirror reflecting our collective aspirations and fears. As we stand on the cusp of new discoveries and challenges, the computer remains a symbol of human potential, a creation that has forever altered the course of history.
Foreign Literature: Virtual Reality

VIRTUAL REALITY
Jae-Jin Kim

Chapter 4: Virtual Reality to Simulate Visual Tasks for Robotic Systems

4.1 Introduction

Virtual reality (VR) can be used as a tool to analyze the interactions between the visual system of a robotic agent and the environment, with the aim of designing the algorithms that solve the visual tasks necessary to behave properly in the 3D world. The novelty of our approach lies in the use of VR as a tool to simulate the behavior of vision systems. The visual system of a robot (e.g., an autonomous vehicle, an active vision system, or a driving-assistance system) and its interplay with the environment can be modeled through the geometrical relationships between the virtual stereo cameras and the virtual 3D world. Differently from conventional applications, where VR is used for the perceptual rendering of visual information to a human observer, in the proposed approach a virtual world is rendered to simulate the actual projections on the cameras of a robotic system. In this way, machine vision algorithms can be quantitatively validated by using the ground-truth data provided by knowledge of both the structure of the environment and the vision system.

In computer vision (Trucco & Verri, 1998; Forsyth & Ponce, 2002), in particular for motion analysis and depth reconstruction, it is important to quantitatively assess progress in the field, but too often researchers have reported only qualitative results on the performance of their algorithms, owing to the lack of calibrated image databases. To overcome this problem, recent works in the literature describe test beds for a quantitative evaluation of vision algorithms by providing both sequences of images and ground-truth disparity and optic flow maps (Scharstein & Szeliski, 2002; Baker et al., 2007).
A different approach is to generate image sequences and stereo pairs by using a database of range images collected by a laser range-finder (Yang & Purves, 2003; Liu et al., 2008). In general, the major drawback of the calibrated data sets is the lack of interactivity: it is not possible to change the scene or the camera point of view. To overcome the limits of these approaches, several authors have proposed robot simulators equipped with visual sensors and capable of acting in virtual environments. Nevertheless, such software tools accurately simulate the physics of robots rather than their visual systems. In many works, stereo vision is left for future developments (Jørgensen & Petersen, 2008; Awaad et al., 2008), whereas other robot simulators in the literature have a binocular vision system (Okada et al., 2002; Ulusoy et al., 2004) but work on stereo image pairs acquired with parallel-axis cameras. More recently, a commercial application (Michel, 2004) and an open-source project for cognitive robotics research (Tikhanoff et al., 2008) have been developed, both capable of fixating a target; nevertheless, ground-truth data are not provided.

4.2 The visual system simulator

Figure 4.1a-b shows the real-world images gathered by a binocular robotic head for different stereo configurations: the visual axes of the cameras are kept parallel (Figure 4.1a) or convergent for fixating an object in the scene (the small tin, see Figure 4.1b). It is worth noting that both horizontal and vertical disparities have quite large values in the periphery, while disparities are zero at the fixation point. Analogously, if we look at the motion field generated by an agent moving in the environment (see Figure 4.1c), where both still and moving objects are present, the resulting optic flow is composed of both ego-motion components, due to the motion of the observer, and the independent movements of the objects in the scene.

Figure 4.1 Binocular snapshots obtained by real-world vision systems. (a)-(b): The stereo image pairs are acquired by a binocular active vision system (http://www.searise.eu/) for different stereo configurations: the visual axes of the cameras are (a) kept parallel, (b) convergent for fixating an object in the scene (the small tin). The anaglyphs are obtained with the left image on the red channel and the right image on the green and blue channels. The interocular distance is 30 cm and the camera resolution is 1392 × 1236 pixels with a focal length of 7.3 mm. The distance between the cameras and the objects is between 4 m and 6 m. It is worth noting that both horizontal and vertical disparities are present. (c): Optic flow superimposed on a snapshot of the relative image sequence, obtained by a car, equipped with a pair of stereo cameras with parallel visual axes, moving in a complex real environment. The resolution of the cameras is 1392 × 1040 pixels with a focal length of 6.5 mm, and the baseline is 33 cm (http://pspc.dibe.unige.it/drivsco/). Different situations are represented: ego-motion (due to the motion of the car) and the translating independent movement of a pedestrian (only the left frame is shown).

The aim of the work described in this chapter is to simulate the active vision system of a robot acting and moving in an environment, rather than the mechanical movements of the robot itself. In particular, we aim to precisely simulate the movements (e.g., vergence and version) of the two cameras and of the robot in order to provide the binocular views and the related ground-truth data (horizontal and vertical disparities and the binocular motion field). Thus, our VR tool can be used for two different purposes (see Figure 4.2):

1. to obtain binocular image sequences with related ground truth, in order to quantitatively assess the performance of computer vision algorithms;
2. to simulate the closed-loop interaction between the visual perception and the action of the robot.

The binocular image sequences provided by the VR engine can be processed by computer vision algorithms in order to obtain the visual features necessary for the control strategy of the robot movements. These control signals act as an input to the VR engine, thus simulating the robot movements in the virtual environment; the updated binocular views are then obtained. In the following, a detailed description of the model of a robotic visual system is presented.

Figure 4.2 The proposed active vision system simulator. Mutual interactions between a robot and the environment can be emulated to validate the visual processing modules in a closed perception-action loop and to obtain calibrated ground-truth data.

4.2.1 Three-dimensional environment

The 3D scene is described by using the VRML format. Together with its successor X3D, VRML has been accepted as an international standard for specifying the vertices and edges of 3D polygons, along with surface color, UV-mapped textures, shininess, and transparency. Though a large number of VRML models are available, e.g. on the web, they usually do not have photorealistic textures and are often characterized by simple 3D structures. To overcome this problem, a dataset of 3D scenes, acquired in controlled but cluttered laboratory conditions, has been created by using a laser scanner.
The results presented in Section 6 are obtained by using the dataset acquired in our laboratory. It is worth noting that the complex 3D VRML models can easily be replaced by simple geometric figures (cubes, cones, planes), with or without textures, at any time, in order to use the simulator as an agile testing platform for the development of complex computer vision algorithms.

4.2.2 Rendering

The scene is rendered in an on-screen OpenGL context (see Section 5 for details). Moreover, the SoOffScreenRenderer class is used for rendering scenes in off-screen buffers and for saving the sequence of stereo pairs to disk. The renderer can produce stereo images of different resolutions, acquired by cameras with different fields of view. In particular, one can set the following parameters:

(1) resolution of the cameras (the maximum possible resolution depends on the resolution of the textures and on the number of points of the 3D model);
(2) horizontal and vertical field of view (HFOV and VFOV, respectively);
(3) distance from the camera position to the near clipping plane in the camera's view volume, also referred to as the viewing frustum (nearDistance);
(4) distance from the camera position to the far clipping plane in the camera's view volume (farDistance);
(5) distance from the camera position to the point of focus (focalDistance).

4.2.3 Binocular head and eye movements

The visual system presented in this section is able to generate the sequence of stereo image pairs of a binocular head moving in 3D space and fixating a 3D point (X_F, Y_F, Z_F). The geometry of the system and the parameters that can be set are shown in Figure 4.3.

Figure 4.3 Schematic representation of the geometry of the binocular active vision system.

The head is characterized by the following parameters (each expressed with respect to the world reference frame (X_W, Y_W, Z_W)):

(1) cyclopean position C = (X_C, Y_C, Z_C);
(2) nose orientation;
(3) fixation point F = (X_F, Y_F, Z_F).

Once the initial position of the head is fixed, different behaviours are possible:

(1) to move the eyes while keeping the head (position and orientation) fixed;
(2) to change the orientation of the head, thus mimicking the movements of the neck;
(3) to change both the orientation and the position of the head, thus generating more complex motion patterns.

These situations imply the study of different perceptual problems, from scene exploration to navigation with ego-motion. Thus, in the following (see Section 6), we will present the results obtained in different situations. For the sake of clarity and simplicity, in the following we will consider the position C = (X_C, Y_C, Z_C) and the orientation of the head fixed; thus only the ocular movements will be considered. In Section 3.3.1 different stereo systems are described (e.g. pan-tilt, tilt-pan, etc.); the simulator can switch among all these different behaviours. The results presented in the following consider a situation in which the eyes can rotate around an arbitrary axis, chosen so as to obtain the minimum rotation that brings the ocular axis from the initial position to the target position (see Section 3.3.1).
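The chapter does not list its code for the minimum-rotation computation; under the standard geometric construction, the axis of the smallest rotation carrying one gaze direction onto another is the cross product of the two unit vectors and the angle is the arccosine of their dot product. A minimal sketch under that assumption:

```python
import math

def minimum_rotation(initial, target):
    """Axis-angle of the smallest rotation taking unit vector `initial`
    onto unit vector `target` (both 3-tuples, assumed normalized)."""
    # Rotation axis: cross product of the two gaze directions.
    ax = (initial[1] * target[2] - initial[2] * target[1],
          initial[2] * target[0] - initial[0] * target[2],
          initial[0] * target[1] - initial[1] * target[0])
    dot = sum(i * t for i, t in zip(initial, target))
    angle = math.acos(max(-1.0, min(1.0, dot)))  # clamp against rounding
    norm = math.sqrt(sum(a * a for a in ax))
    if norm < 1e-12:                 # parallel or anti-parallel directions
        return (1.0, 0.0, 0.0), angle
    return tuple(a / norm for a in ax), angle

# Gaze initially along +Z, target along +X: a quarter-turn about +Y.
axis, angle = minimum_rotation((0.0, 0.0, 1.0), (1.0, 0.0, 0.0))
```

In the simulator described above, this axis-angle pair would drive the rotation of each virtual camera toward the fixation point F.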
English Essay: The Benefits of the Internet for Friendship

In the contemporary era, characterized by rapid technological advancements, the internet has emerged as a ubiquitous force transforming various aspects of human society. Among its profound influences, the internet has significantly reshaped the dynamics of friendship, offering both advantages and challenges.

Advantages of the Internet for Friendship:

1. Enhanced Communication: The internet has revolutionized communication, facilitating instantaneous and seamless exchanges across vast distances. Social media platforms, instant messaging services, and video conferencing tools allow individuals to connect with friends effortlessly, regardless of their physical proximity.

2. Broadened Social Networks: The internet provides a vast virtual space where people can interact with individuals from diverse backgrounds and interests. Online platforms connect like-minded individuals, fostering new friendships that may not have been possible in offline settings.

3. Strengthening Existing Relationships: The internet offers opportunities for friends to maintain and strengthen their bonds. Regular online interactions, shared experiences on social media, and virtual group activities foster a sense of closeness and familiarity.

4. Support and Guidance: Online forums and support groups provide a platform for individuals to seek advice and emotional support from friends and peers. The internet's anonymity often encourages people to open up and share personal experiences, fostering a deeper level of understanding and empathy.

5. Bridging Cultural Divides: The internet transcends geographical and cultural boundaries, connecting people from different parts of the world. Online interactions promote cross-cultural exchanges, fostering tolerance, understanding, and global unity.

Challenges of the Internet for Friendship:

1. Lack of Physical Interaction: While the internet facilitates virtual connections, it cannot fully replace the richness of face-to-face interactions. Nonverbal cues, physical touch, and shared experiences are crucial for building deep and meaningful friendships.

2. Online Harassment and Cyberbullying: The internet can be a breeding ground for negative online behaviors, including harassment, cyberbullying, and identity theft. These behaviors can erode trust and damage friendships.

3. Excessive Use: Overindulgence in internet use can lead to social isolation and neglect of real-world relationships. Individuals who spend excessive time online may prioritize virtual interactions over in-person social activities.

4. Privacy Concerns: Social media platforms collect vast amounts of personal data, which can raise concerns about privacy and data breaches. Fear of having personal information compromised may hinder the formation of genuine friendships online.

Conclusion: The internet has undeniably transformed the nature of friendship, offering both benefits and challenges. While it facilitates communication, broadens social networks, and strengthens existing bonds, it also risks replacing meaningful in-person interactions, fostering negative online behaviors, and raising privacy concerns. By navigating these advantages and challenges responsibly, individuals can harness the power of the internet to enrich their friendships while maintaining balance and real-world connections.
English Essay: The Virtual Orchestra

Title: The Evolution of Music: Embracing the Virtual Orchestra

In the ever-evolving world of music, a new phenomenon has emerged: the virtual orchestra. This innovative approach to musical performance leverages the power of technology to create an immersive experience that transcends traditional boundaries.

A virtual orchestra utilizes software and advanced audio processing techniques to simulate the sound of a traditional ensemble. Instruments are digitally reproduced, and their sounds are blended together to create harmonious melodies. This process allows for endless possibilities in composition, as musicians are no longer limited by the physical constraints of a traditional orchestra.

The benefits of a virtual orchestra are numerous. Firstly, it opens up the world of music to a wider audience. With the internet as a platform, people from all corners of the globe can access and enjoy performances. This democratization of music breaks down geographical barriers and brings cultures closer together.

Moreover, a virtual orchestra provides a platform for experimentation and innovation. Composers and musicians can experiment with new sounds, textures, and arrangements without the need for expensive equipment or a physical ensemble. This flexibility encourages creativity and pushes the boundaries of what is possible in music.

However, it is important to note that the virtual orchestra does not replace the traditional orchestra. Rather, it is a complementary form of expression that adds to the rich tapestry of musical experiences. The emotional connection and the live interaction between musicians in a traditional ensemble are still unparalleled.

In conclusion, the virtual orchestra represents a significant shift in the way we create and consume music. It offers a new platform for experimentation, innovation, and accessibility. As technology continues to progress, we can expect even more exciting developments in this fascinating field of music.
Children's Furniture Design (Chinese and English)

Humanized Design of Children's Furniture (Translated Foreign Literature)

Researches and Development of Interactive Educational Toys for Children
Sun Yingying, Guo Liyan, Zhang Zuyao
Zhejiang Sci-Tech University, Hangzhou, China

Abstract: Oriented by the teaching philosophy of "game-based learning", this paper carried out in-depth research on the interactive modes of children's educational toys. In the research process, it attempted to build a new immersive educational-game scenario for children by using new interactive technology, so as to inspire children's interest in learning and exploration. The research object in this paper was an interactive educational toy, the "funny tap" English learning machine for children. After integrating the design concept of this product from an industrial design perspective, we selected specific interactive technologies and completed the engineering. Moreover, we conducted tests of the working principles and the effect of usage based on the sample machine. The final result indicated that there is promising and huge market potential in applying new interactive technologies to the development of educational toys.

Keywords: Interactive Educational Toys, Interactive Design, Interactive Mode

1. INTRODUCTION

Since the 1980s, human beings, including children, have entered a digital age. Under the influence of advanced information technology, early-education machines, electronic building blocks, electronic wall charts, and other new toys have become children's new favorites. With the influence of the Western teaching philosophy of "game-based learning", parents strongly approve of such toys for children. These modern educational toys will become the mainstream of toy development owing to their promotion of children's learning, practical ability, creativity, and imagination.

Interaction exists in all things with which humans come into contact, and interactive design emerged to design a kind of communication and dialogue between humans and objects so as to minimize "cognitive conflict".
As a new design theory, interactive design has a wide range of applications in designing educational toys.

2. THE PLAN AND BENEFITS OF THE INTERACTIVE MODE OF CHILDREN'S TOYS

The rise of various digital technologies, such as voice recognition, 3D video, and virtual reality technology, gives people new perceptual experiences. The author aimed to apply these new digital technologies to research on the design of interactive educational toys. The plan for the interactive modes of children's educational toys is as follows.

2.1. Voice Interaction

Voice interaction includes touch voice interaction, voice-command interaction, and intelligent voice interaction. Touch voice interaction and voice-command interaction are already very common, as in electronic wall charts and the Televox. With intelligent voice interaction, the author aims to create a genuine dialogue between children and simulation toys through digital technology, to foster children's language ability; particularly in a family with only one child, children need a "partner" to accompany them as they learn and play.

2.2. Video Interaction

Video interaction can be divided into 2D image interaction and 3D video interaction. The former has been broadly used in toys, for example in multimedia courseware, where an image or video of a horse appears when "horse" is mentioned. With 3D video interaction, the author aims to apply 3D projection technology to the "play" process; for instance, when a green grassland is mentioned, a grassland projection appears so that children feel as if they were on the grassland, which enhances children's learning experience. Meanwhile, this enhanced emotional experience prolongs memory retention, possibly even into ultra-long-term memory.

2.3. Narrative Interaction

Narrative interaction is to conceive a story for the toy and offer a task role for children so that they participate in the story. The steps are shown in Figure 1:

Fig. 1.
The steps of narrative interaction.

Based on children's curiosity and imitation psychology, as well as an investigation of the games, the author found that the correct application of story interaction in educational toys can greatly mobilize children's learning enthusiasm. For example, we conceive of an English learning process as a treasure-hunt activity. In this activity, the words are hidden in treasure boxes, and the children themselves are explorers: if they put one or a few words together, they get a treasure box, and they can also compete with peers for treasure boxes. Through the establishment of a game theme, the selection of roles, and plot development in the activities, children not only increase their knowledge of English but also learn how to get along with peers and develop good self-awareness.

2.4. Web Virtual Reality Interaction

Web virtual reality interaction is virtual imaging through network connections, making you feel as if your partners were sitting, playing, and learning with you, to alleviate the loneliness in contemporary families and promote children's learning initiative in a competitive context.

Psychological research shows that, for learners, learning behavior that results in an emotionally pleasurable experience produces a positive emotional resonance, thereby enhancing the learners' initiative and enthusiasm. The realistic educational-game scenario created by interactive educational toys not only brings children an emotionally pleasurable experience, so that learning is no longer boring for them, with the aim of mobilizing their enthusiasm for study and developing creative thinking, but also enhances children's social communication ability, helping them establish a good social character favorable to their lives.

3.
DEVELOPMENT OF THE INTERACTIVE EDUCATIONAL TOY "FUNNY TAP"

Parents worry about their children's learning of English, so we focused on developing an interactive English learning toy to help children remember words in a game scenario, stimulate their interest in learning English, and train the coordination of children's hands and brain.

The development procedure of the interactive toy for children, "funny tap", is shown in Figure 2:

Fig. 2. The development practice procedure of "funny tap"

3.1. The development process of the interactive concept of the interactive educational toy "funny tap"

After investigating the target group of children and parents about their needs for an English learning machine, we summarized six key indicators, such as security, fun, and incentive. Here we mainly describe the three modes of interaction shaded in Figure 3. To meet the requirement of fun, the author designed a narrative interactive process, as shown in Figure 3:

Fig. 3. The narrative interactive process of "funny tap"

The word-learning process is conceived as a game of whack-a-mole: imagine there are N mole holes, and there are M letters in a word (i.e., M moles, each carrying a letter). If you tap down the M jumping moles in the correct order, you get cheers; if a tap is not correct, the toy continues to call "come on". Meanwhile, the action of tapping is not only fun but also effective in training the coordination of children's hands and brain.

Voice interaction was implemented by a microcontroller program that controls the voice modules.
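The firmware itself is not given in the paper; the tap-checking rule described above can be sketched as a hypothetical simulation (the function name and the "keep going" intermediate message are assumptions, not from the source):

```python
def check_taps(word, taps):
    """Simulate the "funny tap" rule: the player must tap the moles
    carrying the word's letters in order. Returns the feedback the toy
    would give after the given sequence of taps.

    word: the target word, one letter per mole.
    taps: the letters the player actually tapped, in order.
    """
    for position, letter in enumerate(taps):
        if position >= len(word) or letter != word[position]:
            return "come on"       # wrong mole: encourage and continue
    if len(taps) == len(word):
        return "cheers"            # whole word tapped in order: reward
    return "keep going"            # correct so far, word not yet finished

feedback = check_taps("cat", ["c", "a", "t"])
```

On the real toy, the same decision would be made by the MCU from the signals of its action-detection module rather than from a list of letters.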
There are two voice features in "funny tap": one is word pronunciation; the other is applause and cheering sounds designed for reward and punishment, which help achieve the goal of incentive. In the first stage, visual interaction was implemented by displaying letters on the buttons through LED dot matrix character display modules controlled by the microcontroller; in the second stage, we equipped the toy with a 3D projector that projects the whole process in the air to construct a 3D emotional scenario, so that the "tap" action becomes tapping projections in the air.

3.2. Principle diagram of "funny tap"

The operating principle of "funny tap" is shown in Figure 4:

Fig. 4. The operation principle of "funny tap"

The system consists of six components: a voice module, LED indicators, an action feedback module, the MCU, a power module, and an LED dot matrix character display module. The first three are connected to the MCU through an 8-bit data bus; the LED dot matrix character display module is connected to the microcontroller through the I2C bus. The voice module stores the English word pronunciation files needed in the game, and the MCU plays a word's pronunciation by controlling the voice module via the bus. The LED dot matrix character display module consists of driver chips and an 8×8 LED matrix; the MCU drives it over the I2C bus to show the corresponding English letters. The action feedback module detects and captures the player's actions during the game, allowing the MCU to judge whether the player performed the correct action and to control the game flow accordingly.

3.3. Appearance design process of "funny tap"

3.4. Interaction test

A sample of N (N odd) preschool children was randomly selected to test the product's availability, usability, and the users' willingness to use it.
We mainly used on-site observation and questionnaire surveys, and then improved the product according to the test results. Testing was repeated at least once in order to find the design with the best interaction. In the final product trial, most parents reported that the toy combined fun and knowledge well, that the whole learning process was smooth, and that the children were very happy while "learning".

4. SUMMARY AND PROSPECT

China is a large toy-manufacturing country, but it still remains at the stage of imitating foreign designs, especially in educational toy design. The research and practice on interactive educational toys in this study is expected to give toy designers some ideas and inspiration, and thereby further promote the development of the Chinese toy industry.

Toy development and design based on the needs of older persons

Abstract: In China, aging and the quality of life of older persons have become important issues of social concern, and how to address them is an important challenge in the design and development of products for the old. The existing approaches vary.
For instance, the design community has put emphasis on the design and development of products for the old, but a large part of these designs were for medical care and medical products. Designs for the vast majority of healthy older people are rarely involved. Given the function of toys, I think the emphasis on supporting the physical and mental health of older persons is key, so rethinking the development of toys for the old in China is one promising approach.

Keywords: toys for the old; needs design; humane care

At the mention of toys, we unconsciously think of innocent children, as if toys were only children's belongings. With the improvement of living standards, the emphasis placed on toys has constantly risen: to meet the needs of children, various designs have been brought out, and toys have gradually turned from luxury goods into children's necessities. However, the China Toy Association states that toys are no longer only for children: the concept of the toy has been extended, and the functional and practical range of modern toys has further expanded. Toys not only inspire children but have also become recreation products for seniors. The old also need toys that can meet their spiritual needs and enrich their later years.

1. Status of the development and design of toys for seniors

In China, toys for the old are still an industry to be developed. Senior people, as customers, have both needs and purchasing power, but there are no toys targeted at them. There are as many as 130 million seniors in China, a huge potential consumer group, but the research and development of toys for older consumers has lagged behind developed countries by more than 30 years. In America, toys designed for seniors amount to 40% of the toy market; the market there is more mature, with many toy stores for seniors throughout urban and rural areas.
Our neighbor Japan also does well in the development of toys for the old: most toy companies have produced toys for seniors and continue to introduce new products.

2. The meaning of the development of toys for seniors

After retiring from work, senior people have more time than before, but besides watching TV at home they have few alternative entertainments. Some old people live in loneliness for long periods; over time they become prone to depression, anxiety disorders, and Alzheimer's disease, which seriously affects their physical and mental health and makes them a burden to their children and to society. Li Guangqing of the Department of Rehabilitation of Beijing Xuanwu Hospital once said: "With increasing age, the functions of the body gradually degrade, and reactions become clumsier. At the same time, after retirement the opportunities for the old to use their brains decrease, which further brings a decline in attention and cognitive ability. Apart from maintaining good habits and moderate exercise, the most effective way to slow down brain aging is to put the hands and brain to work at the same time, which is exactly the function of toys. For people with Alzheimer's disease, playing with toys can, to some extent, alleviate the condition." Therefore, toys can develop people's thinking ability and help maintain intelligence. If seniors play with toys regularly, the aging of the brain and Alzheimer's disease can be effectively delayed. Medical experts have found that to maintain old people's intelligence, we must first fully protect the brain: in addition to proper nutrition and adequate sleep, seniors should make the most of the brain. As the Chinese saying goes, "running water does not rot, and a door hinge is never worm-eaten": the more one uses one's brain, the more sensitive it becomes, and playing with toys is exactly a good way to use the brain.
With toys, old people not only receive more information but also become more optimistic, thereby enhancing their immune function.

3. Needs analysis of toys for the old

What is a needs analysis? It is an approach that focuses on users' needs, which are the source of many new products. What is needs design? It is the most front-end process in a new product's life cycle and decides the success or failure of the product. Needs design starts from the business's and designers' judgment of the market or of users' needs, and ends with planning proposals or technical specifications describing the product development. Understanding the market and user demand is a high-leverage investment in a product's success. The development and design of toys for old persons should therefore start from the needs of seniors: only a real understanding of older consumers and their psychological and physiological needs can produce toys that give seniors practical physical and spiritual care. Toys for the old should embody humane care, and the design process should integrate this concept: with senior-centered design principles, and with the help of analysis of seniors' physiological and psychological characteristics, educational level, and lifestyle, the design goal is strengthened while humane care for seniors is reflected.

(1) Safety first

To varying degrees, old people's judgment, cognitive ability, and ability to respond weaken, so in the process of using a product they inevitably make mistakes, and when a threat to physical or mental health occurs they are often unable to escape the danger. Toys for seniors should therefore be fault-tolerant, so that even if an old person makes a mistake, there is no danger.
Reducing the number of operation steps and providing clear prompts for safe operation are effective ways to ensure the safety of seniors using toys.

(2) Moderate difficulty

The design of toys for the old should be of moderate difficulty, with the purpose of arousing interest in play. If too simple, a toy will not hold seniors' interest and thus will not exercise the brain; if too difficult, it will be strenuous to learn and will cause a sense of failure that is not conducive to mental health.

(3) Easy to identify

The toy should have a familiar form and an understandable functional principle for the old, and should be equipped with an interface in keeping with seniors' experience and habits. Toys that need interface design should also take into account graphic symbols, size, color, clarity of sound, and light intensity.

(4) Facilitating communication

People's feelings need outlets and exchange, especially for seniors; emotional communication is indispensable for maintaining their vitality and improving their quality of life. When playing with toys, the old can choose among many modes, such as taking turns to participate, working together, and racing in a game. The development of multi-player toys creates a harmonious environment in which seniors can talk while playing, so the core of toy development is to involve as many participants as possible. Among the ways of participating, the collaborative approach is best, as it is more conducive to conversation and to making new friends; in this way seniors can expand their social circle through emotional exchange.

(5) Effects on fitness and intelligence

With increasing age, the decline of people's organs becomes an objective physiological phenomenon.
To maintain good physical function and mental state and improve the quality of life, fitness and mental exercise are very important parts of older persons' lives. Exercise achievable through playing with toys meets one of the most basic needs of older persons, and playing with intellectual toys can help prevent Alzheimer's disease, so maintaining the flexibility of seniors' minds is a main direction of toy development.

(6) Cultural connotations

Life experience gives old people a more comprehensive view of life, so toys with a certain ideological and cultural depth tend to set them recollecting and thinking about issues. Toys for seniors differ from those for children: a child plays with a toy intuitively, while the old emphasize a toy's inherent interest and show great interest in toys with cultural connotations. Of course, this culture must be familiar to the elderly and deeply rooted in their thinking.

Summing up, toys for seniors have a promising market, for each one of us will inevitably grow old. The design industry should make more effort to improve seniors' living standards; one way is to develop toys for the old and help them improve their quality of life with these designs. Care for the old is care for all mankind, and designing from the needs of the old has become an urgent task of today's society.

Humanized Design of Children's Furniture

Abstract: Taking problems in children's furniture design as its starting point, this paper proposes a new interpretation of the concept of humanization in the new era, points out how new humanized design principles can be realized in children's furniture design methods, analyzes the current state of children's furniture, and offers some suggestions.
Brainwave Analysis in Brain-Computer Interface Technology

Hello, I'm thrilled to delve into the fascinating realm of brain-computer interface technology, specifically focusing on the analysis of brainwaves. This cutting-edge field holds immense potential for revolutionizing human-machine interaction.
Brain-computer interfaces, or BCIs, establish a direct link between the brain and an external device, enabling the transmission of neural signals. These signals, primarily manifested as brainwaves, reflect the electrical activity within the brain.
Analyzing brainwaves is crucial in understanding how the brain processes information and how it reacts to stimuli. Scientists use sophisticated techniques to capture and interpret these signals, revealing insights into cognitive processes, emotions, and even motor functions.
Interaction Techniques with Virtual Humans in Mixed Environments

Selim Balcisoy, Marcelo Kallmann, Rémy Torre, Pascal Fua, Daniel Thalmann
Computer Graphics Laboratory, Swiss Federal Institute of Technology Lausanne
ssbalcis, kallmann, rtorre, fua, thalmann@lig.di.epfl.ch

Abstract

Homogeneous virtual environments have evolved into mixed environments thanks to recent Augmented Reality techniques. We have researched novel interaction techniques in virtual environments that exploit the use of 3D virtual space. We briefly present our Augmented Reality set-up for mixed reality experiments, analyze two distinct mixed environment case studies, and present two new interaction techniques in which virtual humans act as mediators between the real and virtual worlds.

Keywords: Interaction Techniques, Mixed Environments, Augmented Reality, Virtual Humans.

1 Introduction

Most of the obstacles to the long-awaited switch from a 2D computing environment to a 3D one are diminishing: fast workstations with high-end graphics cards are affordable, animation techniques are becoming more complex and realistic, and new input devices are emerging. However, users are still limited to their mouse, keyboard, and screen for computerized work and entertainment. In this paper we present two new interaction techniques for entertainment and work using mixed environments populated with virtual humans.

Recent developments in VR and human animation have led to the integration of virtual humans [9] into interactive 3D environments. The emergence of fast workstations allows us to animate them in real time for VR and interactive television applications. Virtual humans can be guided or autonomous. Autonomous virtual humans can respond to perceived information and act on their own using their behavioral mechanisms; they can be used for crowd simulation, behavioral experiments, or playing interactive games. Guided virtual humans are used as avatars, representing a user in a virtual environment.
Guidance of virtual humans is performed using 3D input devices such as magnetic trackers for motion capture, a spaceball, or 2D graphical user interfaces.

Interaction with computers is dominated by the WIMP (Windows, Icons, Menus, Pointer) and desktop metaphors, which were originally developed for 2D environments to match the computing capabilities available in the late 1970s and early 1980s [1]. These interaction techniques are well understood and established in application development, and these successful 2D metaphors are still widely used, with some modifications, in commercial virtual environment applications. Such applications are found mostly in the entertainment industry; the use of virtual environments and VR has not achieved the expected rapid uptake in other areas, such as manufacturing or medicine. One reason is that we do not yet know enough about the nature of user interaction in VEs to create systems that allow people to do real work [2].

Research on user interfaces for 3D interactive virtual environments (VEs) addresses developing interactions with a new medium. In this paper we present two interaction techniques using mixed environments with virtual humans:

- Direct manipulation of objects to interact in a mixed environment.
- Employing a virtual human as an avatar to interact in a mixed environment.

We developed an Augmented Reality [10], [11] set-up to demonstrate our ideas with two case studies.

2 Experimental Augmented Reality Set-up

Our experimental Augmented Reality set-up contains rendering, environment and virtual human simulation, mixing, and computer vision sensor modules. Figure 1 presents an overview of our system.

Figure 1: Overview of the experimental Augmented Reality set-up

Our system captures images of the real environment and processes them on a dedicated workstation that runs vision-based tracking and detection algorithms.
The results from these algorithms are sent to a graphics workstation where a virtual environment simulation is running. This simulation controls the virtual humans and objects in the VE and renders a synthetic image, which is sent to a video mixer and blended into the filmed image. The result, the illusion of a mixed environment, is displayed on a large monitor.

On the graphics workstation, an SGI Onyx2 with an InfiniteReality graphics board, we run an integrated real-time virtual environment simulation application, Virtual Human Director (VHD), which provides:

- Virtual human simulation: speech, facial and body animation with motion generators and keyframed sequences, based on inverse kinematics and motion blending algorithms.
- Rendering: IRIX Performer, OpenGL.
- Connections to external applications: external applications such as AI controllers and GUIs can connect to the VHD kernel over a TCP/IP port. They can send and retrieve simulation data and control virtual humans, objects, and camera parameters using a communication protocol called the virtual human control protocol.

The VHD core is a multiprocessor application whose processes communicate with each other over a shared-memory data structure. Figure 2 depicts the VHD kernel and Figure 3 presents the use of the virtual human control protocol.

Figure 2: VHD core.

Figure 3: Connections with external applications.

In our experiments we used three external applications connecting to VHD. The first is the graphical user interface (GUI), a conventional GUI that controls virtual humans and the camera through 2D menus; a GUI is necessary for overall simulation control.

Our second extension is vision-based sensors. We developed a monocular real-time tracker based on the concept of model-based optimization [5], [12]: a parametric model of a feature, such as a box, is extracted from one or more images by automatically adjusting the model's state variables until a minimum value of an objective function is obtained.
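In the abstract, this adjust-until-minimum loop can be sketched as a tiny coordinate-descent minimizer over the model's state variables. The objective below is a toy stand-in for the real image-based one, and the whole sketch is ours, not the tracker's actual code:

```python
def minimize(objective, x0, step=1.0, tol=1e-3, max_iter=200):
    """Tiny coordinate-descent minimizer standing in for the
    model-based optimization loop: nudge each state variable,
    keep changes that lower the objective, shrink the step."""
    x = list(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                if objective(trial) < objective(x):
                    x, improved = trial, True
        if not improved:
            step *= 0.5          # refine once no unit step helps
            if step < tol:
                break
    return x

# Hypothetical objective: squared distance of the model's state
# (here just a 2D position) from where image features say it is.
observed = (12.0, -3.0)
objective = lambda s: (s[0] - observed[0]) ** 2 + (s[1] - observed[1]) ** 2
state = minimize(objective, [0.0, 0.0])
```

The real tracker replaces the toy objective with a measure of how well the projected parametric model (e.g., a box) matches the image evidence.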
The optimization procedure yields a description that simultaneously satisfies, or nearly satisfies, all constraints and is therefore likely to be a good model of the feature. We also developed a detector that visually scans a 2D array of squares and detects changes in the luminance values of those squares.

Autonomous virtual humans are controlled by external artificial intelligence applications. In our experiments we used two types of such applications: one for game control and one for behavior modeling. The first, a piece of freeware, generates the next game move given the current situation. The second requires a Python language interface, which lets Python scripts generated by behavior modeling applications drive VHD. Both applications are presented in more detail in the case studies.

To create the illusion of a mixed environment, we blend the real video stream into the rendered one with an off-the-shelf video mixer using chroma keying. To combine real and rendered scenes, we must project the synthetic models (virtual humans, objects) at the right locations in the real images. This requires calibrating the camera used to film the scene, that is, computing the internal and external camera parameters that define the projection matrix [6]. In our set-up we calibrate the camera and use our model-based tracker to register an object of interest correctly; we also use this tracker to track the camera motion in real time.

3 Interaction Techniques in Mixed Environments

Interaction is defined as mutual or reciprocal action or influence. In the real world everything and everybody interacts with everything else. The most basic form of interaction is the physical one, where objects follow Newtonian mechanics for collision and support.
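Before moving on, the square-scanning luminance detector of Section 2 can be sketched in a few lines of numpy. The grid size, the threshold, and the assumption that the image dimensions divide evenly are ours, not from the paper:

```python
import numpy as np

def square_means(gray, grid=8):
    """Mean luminance of each cell in a grid x grid partition of a
    grayscale image (height and width assumed divisible by grid)."""
    h, w = gray.shape
    cells = gray.reshape(grid, h // grid, grid, w // grid)
    return cells.mean(axis=(1, 3))  # average within each cell

def changed_squares(prev_gray, cur_gray, grid=8, threshold=20.0):
    """Return (row, col) indices of squares whose mean luminance
    moved by more than `threshold` between two frames."""
    diff = np.abs(square_means(cur_gray, grid) - square_means(prev_gray, grid))
    return list(zip(*np.nonzero(diff > threshold)))
```

Averaging over whole squares makes the detector cheap and robust to pixel noise, which is what makes it usable at frame rate.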
Physical interaction changes the physical properties of an object, such as its shape, size, position, and material properties.

Several methods have been developed for interactions inside a homogeneous virtual environment and between an application and its user. Several research groups [3], [8], [13], [14], [15] have worked on novel interaction techniques and tools for virtual environments:

- adaptation of 2D techniques to 3D,
- use of new input devices and trackers,
- direct manipulation, physical mnemonics, gestural actions.

These techniques have some limitations:

- no reference to actual working habits in a real environment,
- lack of haptic input and feedback,
- cumbersome input devices.

There is a clear need for a natural way of interacting with and designing a virtual environment. A mixed environment allows users to remain in a real environment and perceive the world as it is; studies have shown that using some type of real environment (objects, a room) helps people accomplish real work tasks. For entertainment, on the other hand, production companies are interested in mixed virtual sets. A mixed virtual set allows designers to create an environment in which any given object can be real or virtual, depending on its visual impact and, more importantly, on rendering power: every rendering system has a limited polygon budget, and for interactive productions it is more interesting to spend it on virtual characters than on a completely synthetic background. Using an image-based technique to create the environment solves this problem.

The simulation provides audio-visual feedback through a see-through HMD or a large monitor in the mixed environment. However, both feedback technologies have their own limitations, such as wires, weight, and head tracking in the case of an HMD, or lack of visual realism and precision in the case of a large monitor.
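As a concrete reminder of the registration math from Section 2: once calibration yields the 3x4 projection matrix, placing a synthetic model at the right location in a real image reduces to a matrix product followed by a perspective divide. The matrix values below are toy numbers, not our actual calibration:

```python
import numpy as np

def project_point(P, X_world):
    """Project a 3D world point into pixel coordinates using a
    3x4 camera projection matrix P (as produced by calibration)."""
    X_h = np.append(np.asarray(X_world, dtype=float), 1.0)  # homogeneous coords
    x, y, w = P @ X_h
    return (x / w, y / w)  # perspective divide

# Toy pinhole camera: focal length 100, principal point at the origin
P = np.array([[100.0, 0.0, 0.0, 0.0],
              [0.0, 100.0, 0.0, 0.0],
              [0.0,   0.0, 1.0, 0.0]])
u, v = project_point(P, (1.0, 2.0, 4.0))  # (25.0, 50.0)
```

In the live system the external part of P changes every frame, which is why the model-based tracker must follow the camera motion in real time.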
We need a reference point, in other words a mediator, between the real and virtual worlds. Our approach is that, by using mixed environments and virtual humans as mediators between a real environment and a virtual one, we can achieve a natural form of human-machine interaction. A virtual human can act very much like a real human in terms of animation and ergonomics while at the same time handling a virtual object very precisely.

We define interaction with a virtual human as: triggering some meaningful reaction in the environment in response to actions such as body or social gestures or verbal output from a participant.

3.1 Direct manipulation of objects to interact in a mixed environment

One of our goals is for participants in a mixed environment to interact with the simulation not through a GUI or any kind of input device, but by becoming part of the environment: their natural interaction with objects changes the state of the simulation and triggers meaningful responses.

A good example of becoming part of the environment is a fight simulator application in which a user's actions are captured by a magnetic tracking system and posture recognition is performed [4]. In this system the participant uses his own body to interact with the virtual environment, and an autonomous virtual human responds according to the result of a gesture/posture recognition algorithm.

Building a similarly interactive application in a mixed environment without any cumbersome tracking device requires a real-time, non-invasive tracker for the complete human body. Only a computer-vision-based tracking system would fit this specification; however, current vision trackers are not yet capable of full-body tracking with results comparable to a magnetic tracker. To overcome this limitation, we decided to track objects simpler than a human that still afford a large range of interaction possibilities.
We propose an approach in which the tracked objects have semantic values known both to the real humans and to the mixed environment simulation.

We made several experiments to test the robustness of our algorithm before using it in a large-scale application. Figure 4 presents several snapshots of the tracking experiments.

Figure 4: Snapshots from tracking experiments.

Tracking an object that the user already knows how to operate provides natural interaction with the environment. In the second experiment, a dynamic ball in a box, users already know what to expect when they move the box in 3D; when the ball starts to roll upwards or downwards in a natural way, users accept it immediately, since a real ball would have moved similarly. The first experiment, on the other hand, is more entertainment than physical simulation and requires knowledge of popular culture. We tested our experiments with our laboratory members and visitors, mainly computer scientists. Users were more excited to play with the box-and-ball demo than with the light saber demo, probably because of the realistic dynamic ball simulation.

These experiments showed us that meaningful interaction with a mixed environment without cumbersome trackers is possible. As a large-scale experiment, we developed a checkers game simulator in which a real person plays checkers against a virtual opponent.

3.2 Case Study I: Checkers Game

In a checkers game, as in any other board game, all the interactions happen on or very close to the game board, and several objects on the board are moved by the participants. A minimalist approach would let the user play with his pieces while the virtual pieces change their places automatically, like the chess simulators of the early 1980s. However, in real life we are accustomed to playing against a real opponent.
A virtual human acting as an opponent enriches the experience of playing the game, making it more realistic in several ways:

- Perception of time: even if the checkers simulator produced instant results, the virtual human raises her hand, takes a piece, and makes her move. The time needed for a move is analogous to that of a real human, without considering thinking time.
- Perception of 3D space: in a set-up similar to ours, virtual pieces could change their places by themselves; with a virtual human, we can track her movements and have more time to follow which piece is played from where to where.
- Perception of the opponent: a virtual human can be programmed or guided to speak and to perform facial and body animation according to the situation.

Game play is similar to a real game, with one major difference: the virtual player cannot move real pieces when they are eliminated from the game, so this has to be done by the real player.

To develop our checkers game simulator we need to track the real pieces, and to place the virtual human and virtual pieces correctly we need to track the board. The board is tracked by our model-based tracker, and the result is used to track the camera movements. The AR system uses a calibrated camera for correct registration of the board, and the board coordinates let the virtual camera follow the real one during game play. The virtual human is controlled by a text-based checkers game simulator, which calculates her next move. Figure 5 presents snapshots from game play.

Figure 5: Checkers play between a real and a virtual human.

This experiment shows that, using available computer vision algorithms and virtual humans in a mixed environment, we can develop a natural interaction technique.
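One way the per-square change detection can drive the game logic is to diff the board occupancy between turns and infer which piece was played from where to where. This sketch assumes an 8x8 boolean occupancy grid, which is our illustrative representation, not necessarily the paper's data structure:

```python
def infer_move(prev_board, cur_board):
    """Infer a simple (non-capturing) checkers move from two 8x8
    occupancy grids (True = square occupied): the vacated square is
    the origin, the newly occupied square the destination. Captures
    vacate several squares and need extra game logic (omitted)."""
    vacated = [(r, c) for r in range(8) for c in range(8)
               if prev_board[r][c] and not cur_board[r][c]]
    filled = [(r, c) for r in range(8) for c in range(8)
              if cur_board[r][c] and not prev_board[r][c]]
    if len(vacated) == 1 and len(filled) == 1:
        return vacated[0], filled[0]
    return None  # ambiguous frame: wait for the next detection
```

Returning None on ambiguous frames lets the system keep watching until the player's hand has left the board and the occupancy grid is stable.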
In this case, users manipulate real objects to trigger meaningful actions in a mixed environment. The next example presents another interaction technique, in which semi-autonomous virtual humans manipulate virtual objects and demonstrate meaningful results in a mixed environment.

3.3 Employing a virtual human as an avatar to interact with a mixed environment

Performing precise operations in a virtual environment is a difficult task: input devices are not precise enough, haptic feedback is lacking, and direct modification requires complex interaction techniques such as auto-scaling. To analyze the problem and develop a new interaction technique, consider an imaginary product development scenario in which a CD-Driver has to be added to a workstation tower.

3.4 Case Study II: rapid prototyping in mixed environments

We assume that the workstation tower is already in production and that we have a rough 3D model of it. Our designers have made a 3D model of a CD-Driver which they think suits this tower design. We now have to place this CD-Driver at an appropriate position in the tower and determine whether the design is acceptable in a real work environment.

There are several ways to find the position. One is to manufacture a CD-Driver prototype, mount it on the tower, place it in a real office, and let a test group play with it; but manufacturing costs and the time budget are serious limitations, and if we do not like the design, a new prototype would cost even more. Another solution is to work with a CAD application, placing the CD-Driver in the tower using a GUI. We can work very accurately and try many possible positions, but other tests, such as letting a user group evaluate the solution in a realistic environment, are not easily possible inside a CAD application. A third option is a virtual reality set-up that lets users place the CD-Driver in the tower.
In that case users would be expected to wear an HMD for visual feedback and a glove with sensors to track hand movements, and placement would still be difficult and not precise enough.

There are two important tests to be performed: first, the position and orientation of the CD-Driver relative to the tower; second, the impact of different placements and different work environments on users. We propose to use virtual humans in a mixed environment to carry out these tests and overcome some, if not all, of the limitations of the other solutions: a virtual human places the CD-Driver at a predefined position, pushes the open button, puts a CD in it, and closes it, then performs the same test for another position until a suitable one is found. Employing a virtual human for rapid prototyping has the following advantages:

- Precise positioning: as in a CAD application, multiple positions can be defined before the experiment, and the virtual human can test a set of possible positions.
- High visual realism: we use a real background and position a virtual object on a real one. We can put the real object in different locations and positions, and the virtual human performs the tests like a real human; for example, if we position the O2 on the ground, she bends down.
- Fast testing: such a mixed environment produces multiple results and can reproduce them at any given time.

We predefine several positions of the CD-Driver on the tower using a specific feature modeler in which the behaviors of existing parts in the model, and of the new parts to evaluate, are defined. In our case we define several positions of the CD-Driver and its movement for opening and closing. This modeler, called SOMOD (Smart Object Modeler), is described in detail in [7]. The behaviors are recorded as Python scripts.

We developed a Python interpreter for our experimental AR set-up which allows pre-recorded behavioral scripts to operate within the mixed environment.
The data transfer between VHD and the Python interpreter uses the virtual human transfer protocol. We place the workstation tower, an SGI O2, on a table and register its position using the model-based tracker. The CD-Driver is initially placed at a given position. In our experiment we designed three different positions for the CD-Driver. The virtual human tests the first one: reaching to the CD-Driver, pushing its button so that it opens, taking the CD out of it or putting a CD into it, then pushing the button again to close the driver. The virtual human then takes the driver, puts it into the next position, and performs the same tests. Figure 6 presents two snapshots from these tests. Note that in the upper image pair the driver is in a high position and in the lower pair in a low one; the position of the virtual human also differs between the left and right images.

Figure 6: A virtual human performing position tests for a virtual CD-Driver on a real workstation.

4 Conclusions

In this paper we presented two possible interaction techniques with 3D environments. We used a mixed environment as a simulation environment and virtual humans as mediators between the real and virtual worlds. The first interaction technique detects changes in a real environment, such as the positioning of an object, and triggers actions in the simulation. Note that these actions should be natural responses, otherwise users may become confused; this would be the case, for example, if a virtual object started to move without any visible force applied to it. To avoid such confusion we use virtual humans to operate on the virtual objects: users see a realistic animated human figure performing the operation on the virtual object. We propose a second interaction technique, in which users perform precise operations on real and virtual objects in a mixed environment using a semi-autonomous virtual human.
In this way we can pre-define several operations, such as positions to reach relative to an object. Human-machine interaction remains an open research topic. With the emergence of new technologies such as Virtual Reality and Augmented Reality, we are exploring new ways of interacting with computers in 3D environments. As with successful 2D interaction techniques, some 3D interaction techniques and metaphors will become successful only if they are used to assist real work or to create new entertainment content.

5 References

[1] K. Hinckley, R. Pausch, D. Proffitt, and N. F. Kassell. Two-Handed Virtual Manipulation. ACM Transactions on Computer-Human Interaction, Vol. 5, No. 3, September 1998, pp. 260-302.
[2] R. W. Lindeman, J. L. Sibert, and J. K. Hahn. Towards Usable VR: An Empirical Study of User Interfaces for Immersive Virtual Environments. CHI '99, Pittsburgh, PA, USA.
[3] M. R. Mine, F. P. Brooks Jr., and C. H. Sequin. Moving Objects in Space: Exploiting Proprioception in Virtual-Environment Interaction. Proceedings of SIGGRAPH 97, Computer Graphics Proceedings, Annual Conference Series, pp. 19-26, August 1997, Los Angeles, California.
[4] L. Emering et al. Interacting with Virtual Humans through Body Actions. IEEE Computer Graphics & Applications, Vol. 18, No. 1, pp. 8-11, 1998.
[5] P. Fua. RADIUS: Image Understanding for Intelligence Imagery, chapter Model-Based Optimization: An Approach to Fast, Accurate, and Consistent Site Modelling from Imagery. Morgan Kaufmann, 1997. O. Firschein and T. M. Strat, Eds. Available as Tech Note 570, Artificial Intelligence Center, SRI International.
[6] J.-P. Tarel and J.-M. Vezien. Camcal Manual: A Complete Software Solution for Camera Calibration. Technical Report 0196, INRIA, September 1996.
[7] M. Kallmann and D. Thalmann. Modeling Objects for Interaction Tasks. EGCAS '98, 9th Eurographics Workshop on Animation and Simulation, Lisbon, Portugal, pp. 73-86, 1998.
[8] A. Wexelblat. An Approach to Natural Gesture in Virtual Environments. ACM Transactions on Computer-Human Interaction, Vol. 2, No. 3, September 1995, pp. 179-200.
[9] N. Badler and B. L. Webber. Simulating Humans: Computer Graphics Animation and Control. Oxford University Press, New York, 1993.
[10] R. T. Azuma. A Survey of Augmented Reality. Presence: Teleoperators and Virtual Environments, 6(4):355-385, August 1997.
[11] G. Klinker, K. Ahlers, D. Breen, P.-Y. Chevalier, C. Crampton, D. Greer, D. Koller, A. Kramer, E. Rose, M. Tuceryan, and R. Whitaker. Confluence of Computer Vision and Interactive Graphics for Augmented Reality. Presence: Teleoperators and Virtual Environments, 6(4):433-451, 1997.
[12] E. Marchand, P. Bouthemy, F. Chaumette, and V. Moreau. Robust Real-Time Visual Tracking Using a 2D-3D Model-Based Approach. International Conference on Computer Vision, pp. 262-268, Corfu, Greece, September 1999.
[13] Z. Szalavari, E. Eckstein, and M. Gervautz. Collaborative Gaming in Augmented Reality. ACM Symposium on Virtual Reality Software and Technology, pp. 195-204, Taipei, Taiwan, November 1998.
[14] M. Anabuki, H. Kakuta, H. Yamamoto, and H. Tamura. Welbo: An Embodied Conversational Agent Living in Mixed Reality Space. CHI 2000, pp. 10-11.
[15] J. Underkoffler, B. Ullmer, and H. Ishii. Emancipated Pixels: Real-World Graphics in the Luminous Room. SIGGRAPH '99.