Multi-monitor mouse

Multi-Monitor Mouse
Hrvoje Benko, Steven Feiner
Department of Computer Science, Columbia University
500 West 120th Street, 450 CS Building, New York, NY
{benko, feiner}@

ABSTRACT
Multiple-monitor computer configurations significantly increase the distances that users must traverse with the mouse when interacting with existing applications, resulting in increased time and effort. We introduce the Multi-Monitor Mouse (M3) technique, which virtually simulates having one mouse pointer per monitor when using a single physical mouse device. M3 allows for conventional control of the mouse within each monitor's screen, while permitting immediate warping across monitors when desired to increase mouse traversal speed. We report the results of a user study in which we compared three implementations of M3 and two cursor placement strategies. Our results suggest that using M3 significantly increases interaction speed in a multi-monitor environment. All eight study participants strongly preferred M3 to the regular mouse behavior.

Author Keywords
Multi-monitor, mouse pointer, interaction technique.

ACM Classification Keywords
H.5.2. [User Interfaces]: Graphical User Interfaces, Input Devices and Strategies.

INTRODUCTION
Increased display size and resolution and the proliferation of multiple-monitor display configurations have significantly extended the amount of desktop space available to computer users. However, increased desktop space forces users to move their mouse cursor over larger distances. To compensate, users can increase pointer speed or acceleration, which have the drawbacks pointed out by Baudisch et al. [3]. In addition, tiling several displays in a row often results in one dimension (typically width) being drastically larger than the other, causing excessive “clutching” when traversing multiple displays with a mouse.

While the operating system considers the multi-monitor desktop as one seamless environment, previous research by Grudin [6] clearly suggests that users treat multi-monitor systems as a means of partitioning their desktop space. This is primarily due to the physical gaps between monitors, but differences in resolution, size, and apparent mouse speed can also contribute to this mental partitioning. As Grudin points out, users have a tendency to distribute tasks among monitors, treating them as separate, but connected, spaces, and only rarely do they straddle an application window across multiple physical displays.

This research inspired us to create Multi-Monitor Mouse (M3), a pointer interaction technique that warps the pointer between screens in a multi-monitor system configuration. M3 simulates having one independent mouse pointer per screen, using a single physical mouse device.

RELATED WORK
Improving target acquisition across multiple monitors has been explored in the context of eliminating warping effects caused by mismatched monitor alignment and differing screen resolutions with mouse ether [1], as well as avoiding the need to cross the bezels by bringing the targets closer to the current cursor location with drag-and-pop [2]. Baudisch et al. also proposed visual enhancements, such as high-density cursor [3], that increase the visibility of cursors at high speeds. Interactions that warp the pointer closer to a target location have previously been explored on a single monitor in combination with eye gaze (e.g., [8] and MAGIC pointing [9]) or hand gestures (e.g., flick [5]).
Zhai, Smith, and Selker compared one- and two-handed techniques for compound tasks of scrolling and pointing [10].

MULTI-MONITOR MOUSE
We have implemented M3 as a Windows application that runs in the background, minimized to the system tray. When M3 is launched, it reads the system’s information about the size, number, and relative location of attached screens and forms a corresponding set of virtual frames to represent the screens. When the user issues a frame switch command, M3 warps the mouse pointer to the new frame (screen). The new location of the cursor is signaled to the user by invoking the “mouse sonar” animation around the pointer (Figure 1b), a built-in Windows option that enhances pointer visibility. Otherwise, pointer movement is completely unaffected by M3. (For example, the user is still free to move the pointer across screen boundaries by physically moving the mouse, but now has the option of directly warping to a different screen.)

Figure 1: a) Using standard pointer movement to move between monitors from S to T. b) M3 warps the cursor (dashed line) to the next screen, reducing the distance traversed by conventional mouse movement. “Sonar” circles are displayed to increase cursor visibility.

M3 segments the pointer space according to screen space divisions, thus allowing for pointer warping across screens. While the techniques presented in this paper have been applied to switching between screens in a multi-monitor configuration, the same techniques, without modification, can be applied to any desktop space by dividing it into a set of virtual rectangular frames. These frames can be of arbitrary number and size, and can even overlap. For example, a large high-resolution monitor could be divided into several virtual frames, each containing the windows for one application. M3 would in this case switch the mouse pointer between different applications.
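The frame bookkeeping that M3 performs can be pictured with a short sketch. The following Python fragment is a minimal illustration, not the authors' implementation: Frame and M3Frames are invented names, while the warp itself is issued through the standard Win32 SetCursorPos call via ctypes (Windows only).

```python
from ctypes import windll  # Win32 API access; Windows only
from dataclasses import dataclass

@dataclass
class Frame:
    # One virtual rectangular frame, e.g., a physical screen.
    left: int
    top: int
    width: int
    height: int

    def center(self):
        return (self.left + self.width // 2, self.top + self.height // 2)

class M3Frames:
    # Cycles a single pointer through a loop of virtual frames.
    def __init__(self, frames):
        self.frames = frames
        self.current = 0

    def switch(self, step=1):
        # Frames form a loop, so one command suffices to reach every screen.
        self.current = (self.current + step) % len(self.frames)
        x, y = self.frames[self.current].center()  # center placement for now
        windll.user32.SetCursorPos(x, y)           # warp the real cursor

# Four 1920x1200 monitors tiled horizontally, as in the paper's test setup:
m3 = M3Frames([Frame(i * 1920, 0, 1920, 1200) for i in range(4)])
# m3.switch(+1)  # e.g., bound to a "forward" switch command
# m3.switch(-1)  # "backward" switch command
```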
M3 FRAME SWITCH ALTERNATIVES
We have experimented with several switch designs. Two of the final four designs (mouse button and keyboard switches) require only standard computer peripherals, thus making them easy for most computer users to adopt. The other two (head orientation and mouse location switches) support more direct switching, but require extra equipment.

Mouse Button Switch
The mouse button switch command is issued by pressing one of the two side buttons (XButtons) on the five-button Microsoft IntelliMouse Explorer mouse. Since multi-monitor configurations are typically side-by-side arrangements, we decided to map the top side button to advance the frames forward (clockwise), and the bottom side button to advance the frames backward (counterclockwise). This decision is technically arbitrary, but we believe that it has ecological validity in that it mimics the behavior of the mouse itself when those side buttons are pushed: pushing the top button would tend to rotate the whole mouse clockwise, while pushing the bottom button would have a counterclockwise effect. The virtual frames form a loop, making it possible to cycle through all the screens using just one of the buttons.

Keyboard Switch
The keyboard switch is modeled after a built-in Windows task-switch command (ALT+TAB). We mapped the forward frame switch to the ALT+“~” key combination, and the backward switch to ALT+SHIFT+“~”. As with the mouse button switch, looping is supported, and it is possible to use just one key combination to switch among all screens. This mode, although implemented, was not evaluated in our tests because of its similarity to the mouse button switch.

Head Orientation Switch
By observing the work of several individuals in a multi-display environment, we noticed that a user’s head position does not change much, but their head orientation changes continuously, depending on the screen on which they are working. At a constant working distance, larger screens subtend a larger horizontal angle per screen, which can be measured reliably with an absolute orientation sensor. We outfitted a pair of headphones with a 3DOF orientation sensor (InterSense InertiaCube2) and measure the user’s head orientation to determine the screen at which they are looking. When the user turns their head towards another screen, the head orientation switch performs a frame switch.

While 6DOF tracking combined with eye-gaze tracking would reduce errors, we wanted to design a minimally invasive switch that would perform well with minimal calibration time. We have noticed that while eye gaze alone is sometimes used to glance at another screen, users tend to align their head with a monitor when performing tasks on its screen (especially with the larger, 24" diagonal, monitors we used), which supports our head orientation switch solution. In general, our technique is similar to MAGIC pointing [9], insofar as it warps the pointer to the area “in focus” (in our case, a screen), followed by fine selection by mouse movement alone. However, while MAGIC pointing warps the pointer within one screen whenever the eye gaze changes, we warp only across frame boundaries and leave all mouse manipulation within a single screen unchanged.

Mouse Location Switch
The last switching method we implemented is based on the idea that every screen could have a corresponding mousepad. The user still manipulates only one physical mouse, but physically placing the mouse on a different pad warps the cursor to the screen corresponding to that pad. We implemented the mouse location switch using a touch-sensitive surface (MERL DiamondTouch table) on which the user can define any axis-aligned rectangle as a pad for a given screen. To aid the user in remembering the locations of the virtual mouse pads, we provided paper mouse-pad cutouts to be placed on the surface (Figure 2). Since the table operates through electrostatic coupling, the mouse is wrapped in aluminum foil to allow it to be tracked by the DiamondTouch surface when held by the user.

Figure 2: M3 test setup consisting of four monitors and four corresponding “mouse pads” used by the mouse location switch. The background is set to an inactive spreadsheet image to simulate a typical noisy working environment.
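Before moving on, here is a rough sketch of how the head orientation switch could map yaw to a frame index, reusing the M3Frames sketch above. Everything here is hypothetical: read_yaw_degrees stands in for the InertiaCube2 driver call, and the screen-center angles assume the 35° display plus 8° separation geometry reported in the user study below (so centers sit 43° apart).

```python
# Hypothetical sketch: pick the frame whose angular center is closest to
# the measured head yaw, then warp only when the viewed screen changes.
SCREEN_YAWS = [-64.5, -21.5, 21.5, 64.5]  # degrees, four screens 43 deg apart

def screen_from_yaw(yaw_deg):
    return min(range(len(SCREEN_YAWS)),
               key=lambda i: abs(SCREEN_YAWS[i] - yaw_deg))

def poll_head_switch(m3, read_yaw_degrees):
    # Called periodically; mouse movement within one screen stays untouched.
    looked_at = screen_from_yaw(read_yaw_degrees())
    if looked_at != m3.current:
        m3.switch(looked_at - m3.current)
```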
M3 POINTER-PLACEMENT STRATEGIES
In addition to deciding how to trigger the frame switch, there are several possibilities for where to warp the mouse cursor in the target frame after the frame switch has occurred. After some preliminary experimentation, three strategies emerged as plausible candidates: fixed-location, frame-relative, and frame-dependent.

Fixed-Location Placement Strategy
Our initial implementation of M3 used the single fixed-location placement strategy of always warping the cursor to the center of the next frame (Figure 3a). While the center location is somewhat arbitrary, it does ensure that the maximum mouse traversal distance after the frame switch will be at most half of the frame’s diagonal. This can be beneficial if users distribute their tasks equally around the center, which is often the case with active working windows. However, it can be a nuisance when the target is located near an edge, which is the case for some frequent selection tasks, such as accessing the taskbar. In our current M3 implementation, it is possible to select any fixed location as the warping target, as long as that location is available on all frames.

Frame-Relative Placement Strategy
Frame-relative placement works by translating the pointer to the next frame at the same location relative to the new frame’s upper left corner as it was relative to its old frame’s upper left corner (Figure 3b). This strategy essentially collapses the entire available space into one frame of mouse movement, and it is the only strategy we implemented in which the effect of pointer movement prior to the frame switch is not negated by the switch itself.

Frame-Dependent Placement Strategy
If all frames are considered as completely independent spaces, the system can remember the last location of the cursor in each frame and warp the incoming cursor to that location. Thus, the last position of the cursor when the user warps out of a frame becomes the starting location when the user eventually warps back to that frame (Figure 3c). This frame-dependent placement strategy, while preferred by some initial test users in the two-monitor setup, was not formally evaluated for four monitors, due to its increasing difficulty as the number of frames increases. This is presumably due to the memory load imposed on the user, who has to remember each frame’s cursor position.

Figure 3: Traversing between S, T1, and T2 locations using different M3 pointer placement strategies: a) fixed-location (center), b) frame-relative, c) one of many possible frame-dependent scenarios. Dashed lines indicate warping; solid lines indicate conventional movement.
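Since the three strategies differ only in how the warp target inside the destination frame is computed, they can be summarized in a few lines. A minimal sketch, building on the Frame class from the earlier fragment (function names are invented; the a/b/c comments correspond to Figure 3):

```python
def fixed_location(new_frame, old_frame, cursor):
    # a) Always warp to the frame's center: at most half the frame
    #    diagonal is left to traverse by hand.
    return new_frame.center()

def frame_relative(new_frame, old_frame, cursor):
    # b) Keep the cursor's offset from the upper-left corner, so movement
    #    made before the switch is not wasted.
    return (new_frame.left + (cursor[0] - old_frame.left),
            new_frame.top + (cursor[1] - old_frame.top))

last_pos = {}  # frame id -> cursor position when the user last left it

def frame_dependent(new_frame, old_frame, cursor):
    # c) Restore the cursor to wherever it was when the user last left
    #    the destination frame (falling back to its center).
    last_pos[id(old_frame)] = cursor
    return last_pos.get(id(new_frame), new_frame.center())
```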
USER STUDY
Eight right-handed participants (6 male, 2 female, ages 23–32), all unfamiliar with the techniques, participated in a target-selection experiment with a counterbalanced within-subject design. We decided to test regular unassisted mouse interaction (CTRL mode) against three M3 frame-switch modes: mouse button (MB), head orientation (HEAD), and mouse location (ML). We tested each switch mode using two pointer-placement strategies: frame-relative (FR) and center fixed-location (C). This resulted in a total of seven different conditions. Each user was allowed to familiarize themselves with all mouse behaviors, and performed a block of 10 practice trials before completing the block for each condition. Each block consisted of five trials for each of three different start-target distances and two directions (right and left), for a total of 30 movements per block.

Our hypothesis was that participants would acquire targets faster when using M3 relative to the control mode as the number of screen bezels that needed to be crossed increased. In addition, we speculated that users would be faster using the frame-relative strategy than the center fixed-location strategy, because of the utility of movement prior to the frame switch in the former strategy.

The experiment was conducted on a dual Xeon (2.6 GHz, 2 GB RAM) computer running Windows XP, with four monitors tiled in a horizontal arrangement, driven by two ATI Radeon 9800 and 9000 graphics cards. All four monitors were Samsung SyncMaster 240T (24" diagonal, 1920×1200 resolution, 60 Hz refresh), for a total desktop space of 7680×1200 pixels. The monitors were arranged in a semicircle of 80 cm radius, with 12 cm horizontal separation (including bezels) between each monitor’s display, and the user was seated in the center to ensure equal distance and viewing angle to all monitors (Figure 2). Thus, each display occupied a 35° horizontal viewing angle with 8° separation. The mouse speed was set to the medium setting.

The task was based on a Fitts’ Law target acquisition task [7], but without any variation of start and target sizes (fixed at 30 pixels square). To eliminate target discovery overhead, we presented the participant with both start and target buttons at the same time, asked them to locate both before starting a trial, and recorded the time between clicking on the start button and clicking on the target button. We selected distances of 2134, 4047, and 5956 pixels, such that each required crossing one, two, or three bezels, respectively. To eliminate strategy bias, each target was located exactly halfway between the center of the screen and the frame-relative location of the start button on that target screen (Figure 4). The direction varied between left-to-right and right-to-left.

Figure 4: Start (S) and target (T) button layout in our user study. To eliminate strategy bias, T is located at the halfway point between the warping locations for the frame-relative (fr) and center fixed-location (c) strategies. The control path (ctrl) is also shown.

Results
Movement times were first cleaned by removing outliers (movement times more than two standard deviations larger than the mean for each condition), which accounted for less than 3.5% of all trials. All data analysis was performed on the median movement time for each participant, distance, direction, and condition combination.
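The cleaning step just described can be sketched in a few lines of Python with NumPy. This is illustrative only: the grouping of raw trials into conditions and analysis cells is assumed to happen elsewhere.

```python
import numpy as np

def remove_outliers(times):
    # Drop movement times more than two standard deviations above the
    # mean of their condition (under 3.5% of trials in the study).
    t = np.asarray(times, dtype=float)
    return t[t <= t.mean() + 2 * t.std()]

def cell_median(times):
    # One analysis value per participant x distance x direction x condition.
    return float(np.median(times))
```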
We performed a 7 (Condition) × 2 (Direction) × 3 (Distance) ANOVA, with subjects as a random variable. There were significant main effects for all three factors. The Direction factor had a significant main effect, F(1,7)=82.72, p<0.001: transitioning left-to-right was on average 0.2 s faster than going right-to-left, which is consistent with previous research showing that right-handed users perform mouse movements faster from left to right than from right to left when using their right hand [4].

The mean movement times across Condition, F(6,42)=3.88, p<0.01, are shown in Figure 5(a). The interaction of Distance and Condition (Figure 5b) also had a significant effect, F(12,84)=9.509, p<0.001. As predicted, in all cases the FR strategy outperformed the C strategy, by an average of 0.1 s, which we believe is due to the C strategy discarding pointer movement prior to the frame switch. Of the M3 modes using the FR strategy, MB (1584 ms) was the fastest compared with CTRL (1906 ms), presenting a 17% improvement in performance. MB was followed by HEAD (1698 ms) and ML (1710 ms), which were both significantly faster than CTRL. It is interesting to note that for the shortest distance (2134 pixels) there were no significant differences in movement times across conditions. All of the M3 performance gains come from time saved when traversing two bezels or more, with the biggest gains (up to 29%) present at the largest distance (5956 pixels).

In the post-experiment questionnaire, all subjects strongly preferred some M3 mode to the CTRL condition, and 6 out of 8 users preferred the FR strategy over the C strategy. While 7 out of 8 users preferred the MB mode over all the other modes, some mentioned that they would actually prefer a combination of the HEAD and MB modes, in which the screen switch is triggered with the mouse button, but the cursor warps directly to the screen at which they are looking. This combination would resolve the “Midas touch” issue of the pure HEAD mode, and also eliminate the multiple clicks necessary to switch across more than one screen.

Figure 5: Aggregated movement times (ms) with 95% confidence intervals: a) Condition factor, b) interaction of Distance and Condition.

CONCLUSIONS
Our user study confirmed that M3 can improve target acquisition performance in multi-monitor systems; as predicted, the effect was greater with increased distance and the FR strategy. Subjective evaluations show a strong preference for M3 over regular mouse pointing, and we believe that the performance advantage could increase significantly with experience. In future work, we would like to evaluate additional M3 modes and strategies, as well as implement a hybrid mouse button + head orientation switch mode.

ACKNOWLEDGMENTS
This work is funded in part by NSF Grant IIS-01-21239, ONR Contract N00014-04-1-0005, and a gift from Mitsubishi Electric Research Labs.

REFERENCES
1. Baudisch, P., Cutrell, E., Hinckley, K. and Gruen, R. Mouse Ether: Accelerating the Acquisition of Targets Across Multi-Monitor Displays. CHI '04 Extended Abstracts, ACM Press (2004), 1379–1382.
2. Baudisch, P., Cutrell, E., Robbins, D., Czerwinski, M., Tandler, P., Bederson, B. and Zierlinger, A. Drag-and-Pop and Drag-and-Pick: Techniques for Accessing Remote Screen Content on Touch- and Pen-operated Systems. Proc. INTERACT '03 (2003), 57–64.
3. Baudisch, P., Cutrell, E. and Robertson, G. High-Density Cursor: A Visualization Technique that Helps Users Keep Track of Fast-Moving Mouse Cursors. Proc. INTERACT '03 (2003), 236–243.
4. Boritz, J., Booth, K.S. and Cowan, W.B. Fitts's Law Studies of Directional Mouse Movement. Proc. Graphics Interface (1991), 216–223.
5. Dulberg, M.S., Amant, R.S. and Zettlemoyer, L.S. An Imprecise Mouse Gesture for the Fast Activation of Controls. Proc. INTERACT '99, IOS Press (1999), 375–382.
6. Grudin, J. Partitioning Digital Worlds: Focal and Peripheral Awareness in Multiple Monitor Use. Proc. CHI '01, ACM Press (2001), 458–465.
7. MacKenzie, I.S. Fitts' Law as a Research and Design Tool in Human-Computer Interaction. Human-Computer Interaction 7 (1992), 91–139.
8. Sibert, L.E. and Jacob, R.J.K. Evaluation of Eye Gaze Interaction. Proc. CHI '00, ACM Press (2000), 281–288.
9. Zhai, S., Morimoto, C. and Ihde, S. Manual and Gaze Input Cascaded (MAGIC) Pointing. Proc. CHI '99, ACM Press (1999), 246–253.
10. Zhai, S., Smith, B.A. and Selker, T. Dual Stream Input for Pointing and Scrolling. CHI '97 Extended Abstracts, ACM Press (1997).
Emotional Interaction

Abstract
The extensive proliferation and use of computers, cell phones, PDAs, MP3 players, etc. in our everyday lives (and particularly in almost any situation) increasingly creates opportunities for interacting with computers away from the desktop [4], with the consequence that the role of the user and his interactions in this environment become more and more variable. In my PhD thesis I will propose a software framework, with corresponding hardware, to support the interaction between human and computer in an implicit (non-invasive) way by measuring and processing biological signals (the intention is to detect emotional states and react to them), with the goal of controlling computing devices in this manner.

Keywords
Emotional Computing, Similarity Analysis, Emotional Interaction, Human-Centered Computing, Aesthetics and Computation.

Problem Statement and Research Question
Computers need to be able to sense and interpret emotions in order to respond intelligently to (complex) human interactions.
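As a deliberately naive illustration of the implicit-interaction pipeline the abstract describes (every name, feature, and state label here is a hypothetical placeholder, not the proposed framework itself), windowed biosignal features could be matched against per-user baseline states by a simple similarity measure:

```python
import numpy as np

def features(window):
    # Hypothetical feature vector for one window of biosignal samples
    # (e.g., heart rate or skin conductance): mean level and variability.
    w = np.asarray(window, dtype=float)
    return np.array([w.mean(), w.std()])

def nearest_state(window, baselines):
    # Similarity analysis sketched as nearest-centroid matching: baselines
    # maps a state label (e.g., "calm", "aroused") to a feature centroid
    # recorded for this user during calibration.
    f = features(window)
    return min(baselines, key=lambda s: float(np.linalg.norm(f - baselines[s])))
```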
IJSH-2007-01-02-Foreword_EB

Foreword of Part 1
With the proliferation of wireless technologies and electronic devices, there is fast-growing interest in Mobile and Ubiquitous Computing (MUC). MUC enables the creation of a human-oriented computing environment in which computer chips are embedded in everyday objects and interact with the physical world. Through MUC, people can get online even while moving around, thus having almost permanent access to their preferred services. Along with its great potential to revolutionize our lives, MUC also poses new research challenges. This special issue is composed of six papers, which are closely related to various theories and practical applications in MUC. In particular, four of them are extended versions of the best papers presented at the International Workshop on Interactive Multimedia & Intelligent Services in Mobile and Ubiquitous Computing (IMIS 2007). We hope that this issue will be a trigger for further related research and technology improvements in this important subject.

In the paper "An Energy-Efficient Sensor Routing with low latency, scalability for Smart Home Networks," H. Oh and K. Chae present a novel energy-efficient sensor routing scheme for wireless sensor networks, namely EESR (Energy-Efficient Sensor Routing). The authors show that the proposed scheme provides energy-efficient data delivery to the base station with low latency and scalability.

The paper "Novel Mechanism to Defend DDoS Attacks Caused by Spam," written by D. Nagamalai, C. Dhinakaran and J. Lee, introduces a multi-layer approach to defend against DDoS attacks caused by spam mails. This approach combines fine tuning of source filters, content filters, strict implementation of mail policies, user education, network monitoring, and logical solutions to the ongoing attack. The experimental results show a 60% reduction in spam traffic after implementing the defense mechanism.

F. E. Sandnes and Y. Huang, in the paper "From Smart Light Dimmers to the IPOD: Text-Input with Circular Gestures on Wheel-Controlled Devices," present a uni-stroke text input strategy for wheel input controls. The presented uni-strokes are based on circular motions that follow the contour of the wheel. While spatial mnemonics based on the shapes of the alphabetic characters are used to minimize the time and effort of learning the uni-strokes, an approximate distance-based method is used for robust character recognition of uni-stroke patterns.

In the paper "A CPU Usage Control Mechanism for Processes with Execution Resource for Mitigating CPU DoS Attack," T. Tabata et al. propose an access control model for CPU resources based on an execution resource. The proposed model can control the usage ratio of CPU resources appropriately for each user and each program domain. Through the results of experiments employing the Apache web server, the authors show that the proposed method can mitigate DoS attacks and does not adversely affect the performance of a target service.

The paper "Sentry@Home - Leveraging the Smart Home for Privacy in Pervasive Computing," written by S. A. Bagüés et al., introduces a new infrastructure component for smart homes: a privacy proxy, named Sentry@HOME, as part of the authors' User-centric Privacy Framework (UCPF). Its main task and responsibility is to take care of privacy-related data when accessed from the outside.
Based on a set of privacy policies defined by the user, it controls and enforces privacy for individuals roaming freely in pervasive computing environments.

In the paper "User Authentication Using Neural Network in Smart Home Networks," S. Z. Reyhani and M. Mahdavi present a new authentication scheme based on the Radial Basis Function (RBF) neural network. This scheme can produce the corresponding encrypted password according to the entered username, and it could be used to replace the password table or verification table stored in common authentication systems.

Finally, we would like to extend special thanks to all authors as well as reviewers for their enthusiasm and dedication, which have made this issue a reality.

Guest Editors of Part 1
Ilsun You, School of Information Science, Korean Bible University, South Korea
Byoung-Soo Koh, DigiCAPS Co., Ltd, South Korea

Foreword of Part 2
Recent advances in computer and communication technologies have offered people an unprecedented level of convenience in living, making life more comfortable and enjoyable. To allow people to better perform their daily living activities, improve their quality of life, and enjoy entertainment and leisure activities, one must first understand the services that are in demand in a smart living environment, and then develop key technologies for supporting such demands. This special issue contains four selected papers from the 2007 International Workshop on Smart Living Space and focuses on emerging technologies and innovative solutions for intelligent living spaces.

We strongly believe that the selected papers make a significant contribution to researchers, practitioners, and students working in the areas of the smart living space. We are grateful to the authors for their research contributions to this special issue. Our special thanks go to the IJSH editorial board and Dr. Jong Hyuk Park for his support throughout the whole publication process. Finally, the Guest Editors wish to gratefully acknowledge all those who have generously given their time to review the papers submitted to the workshop.

Guest Editors of Part 2
Hsu-Chun Yen, Dept. of Electrical Engineering, National Taiwan University, Taiwan
Han-Chieh Chao, Dept. of Electronic Engineering, National Ilan University, Taiwan
Whai-En Chen, Institute of Computer Science & Information Engineering, National Ilan University, Taiwan

Editorial Board of IJSH

Editor in Chief
Sajal K. Das, University of Texas at Arlington, USA (Email: ***********)

Managing Editor
Jong Hyuk Park, Kyungnam University, Korea (Email: **********************)

Associate Editor
Taihoon Kim, Hannam University, Korea (Email: ******************)
Ilsun You, Korean Bible University, Korea (Email: ****************)

Advisory Board (AB)
Ching-Hsien Hsu (Chung Hua University, Taiwan)
Daqing Zhang (Institute for Infocomm Research (I2R), Singapore)
Javier Lopez (University of Malaga, Spain)
Jianhua Ma (Hosei University, Japan)
Jiannong Cao (The Hong Kong Polytechnic University, Hong Kong)
Laurence T. Yang (St Francis Xavier University, Canada)
Witold Pedrycz (University of Alberta, Canada)

General Editors (GE)
Eun-Sun Jung (Samsung Advanced Institute of Technology, Korea)
George Roussos (University of London, UK)
Hesham H. Ali (University of Nebraska at Omaha, USA)
Im Yeong Lee (SoonChunHyang University, Korea)
Kia Makki (Florida International University, USA)
Michael Beigl (University of Karlsruhe, Germany)
Niki Pissinou (Florida International University, USA)

Editorial Board (EB)
Alex Zhaoyu Liu (University of North Carolina at Charlotte, USA)
Ali Shahrabi (Glasgow Caledonian University, UK)
Andry Rakotonirainy (Queensland University of Technology, Australia)
Anind K. Dey (Carnegie Mellon University, USA)
Antonio Coronato (ICAR-CNR, Italy)
Antonio Pescapé (University of Napoli "Federico II", Italy)
Arek Dadej (University of South Australia, Australia)
Bessam Abdulrazak (University of Florida, USA)
Biplab K. Sarker (University of New Brunswick, Fredericton, Canada)
Bo Yang (University of Electronic Science and Technology of China)
Bo-Chao Cheng (National Chung-Cheng University, Taiwan)
Borhanuddin Mohd Ali (University of Putra Malaysia, Malaysia)
Byoung-Soo Koh (DigiCAPS Co., Ltd, Korea)
Chunming Rong (University of Stavanger, Norway)
Damien Sauveron (University of Limoges, France)
Debasish Ghose (Indian Institute of Science, India)
Deok-Gyu Lee (ETRI, Korea)
Eung-Nam Ko (Baekseok University, Korea)
Fabio Martinelli (National Research Council - C.N.R., Italy)
Fevzi Belli (University of Paderborn, Germany)
Gerd Kortuem (Lancaster University, UK)
Geyong Min (University of Bradford, UK)
Giuseppe De Pietro (ICAR-CNR, Italy)
Hakan Duman (British Telecom, UK)
Hans-Peter Dommel (Santa Clara University, USA)
Hongli Luo (Indiana University, USA)
Huirong Fu (Oakland University, USA)
Hung-Chang Hsiao (National Cheng Kung University, Taiwan)
HwaJin Park (Sookmyung Women's University, Korea)
Hyoung Joong Kim (Korea University, Korea)
Ibrahim Kamel (University of Sharjah, UAE)
Irfan Awan (University of Bradford, UK)
Jiann-Liang Chen (National Dong Hwa University, Taiwan)
Jianzhong Li (Harbin Inst. of Technology, China)
Jin Wook Lee (Samsung Advanced Institute of Technology, Korea)
Joohun Lee (Dong-Ah Broadcasting College, Korea)
Jordi Forne (Universitat Politècnica de Catalunya, Spain)
Juan Carlos Augusto (University of Ulster at Jordanstown, UK)
Karen Henricksen (NICTA, Australia)
Kuei-Ping Shih (Tamkang University, Taiwan)
LF Kwok (City University of Hong Kong, HK)
Liudong Xing (University of Massachusetts Dartmouth, USA)
Marc Lacoste (France Télécom Division R&D, France)
Mei-Ling Shyu (University of Miami, USA)
Mounir Mokhtari (INT/GET, France)
Nicolas Sklavos (Technological Educational Institute of Mesolonghi, Greece)
Paris Kitsos (Hellenic Open University, Greece)
Pedro M. Ruiz Martinez (Univ. of Murcia, Spain)
Phillip G. Bradford (The University of Alabama, USA)
Pilar Herrero (Universidad Politécnica de Madrid, Spain)
Qi Shi (Liverpool John Moores University, UK)
Rodrigo de Mello (University of Sao Paulo, Brazil)
Serge Chaumette (Université Bordeaux 1, France)
Shaohua Tang (South China University of Technology, China)
Stefanos Gritzalis (University of the Aegean, Greece)
Tatsuya Yamazaki (NICT, Japan)
Toshihiro Tabata (Okayama University, Japan)
Tsung-Chuan Huang (National Sun Yat-sen University, Taiwan)
Tsutomu Terada (Osaka University, Japan)
Umberto Villano (Università del Sannio, Italy)
Vincenzo De Florio (University of Antwerp, Belgium)
Vipin Chaudhary (Wayne State University / University at Buffalo, SUNY)
Wen-Shenq Juang (Shih Hsin University, Taiwan)
Xinwen Fu (Dakota State University, USA)
Yang Guangwen (Tsinghua University, P.R. China)
Yoshiaki Hori (Kyushu University, Japan)
Young Yong Kim (Yonsei University, Korea)
technology tools

R. Gobbi, Interacting with Computers 9 (1998) 275–285
introduced cultural models. Besides crunching numbers, IT tools incorporate sophisticated text processing, graphics, voice recognition, data analysis, virtual reality, and knowledge engineering, thereby permeating the sociocultural aspects of human cognition far more than computing's original calculating scope. However, the methods of analysis used in the design of information systems do not reflect the social and anthropological reality of the human environment. They only address the technicality of the implementation. IT tool designers concentrate on reflecting task-oriented processes [1], thus avoiding unfamiliar issues of cultural context. Such methods are centred on the user-adaptation side of IT tool manipulation, neglecting cultural context. Traditional IT system analysis follows rationalistic management theories originating from Taylor et al. [2], which are currently used in many organizational environments.

Task specialization and symbolic coordination in manufacturing, learning and using tools are not phenomena confined to industrial societies. They are evident in today's hunting and gathering societies in relation to tool manufacturing and learning practices. The face-to-face task group activities within a social structure which uses tools cooperatively in a sub-assembly fashion [3] are an integral part of human technical progress. Language, gestures and tool manipulation are characterized by sequential patterns, with evident links among language, tool use and cognition, as has appeared in recent studies of neuronal mapping of the human brain [4][5].

The reductionism of current system analysis methodologies simply follows historically and culturally derived methods for the interpretation and description of human sequential activities. These methods address a particular task or data path following a humanly determined working sequence. However, the modus operandi of current system analysis presents major problems because whole areas of cultural, social and cognitive narrative at the metaphysical level are omitted. A historical analysis of interface design shows a continuing evolution [6] from hardware to software and from software to higher-level cognitive processes, extended into social processes. Computer application tools provide [7][8] an extension to (and sometimes incorporate) pre-computer human activities. Human activities involving IT tools follow the cultural patterns of a variety of other tools used by humans.

The cultural evolution of agricultural tools shows how the human capacity for culture [8] evolved by means of natural selection in a Darwinian fashion. At this point, it is important to clarify the fundamental distinction between cultural fitness and adaptation in relation to the cultural environment. Cultural traits may spread not because they are adaptive to environmental needs, but because they indirectly increase the relative fitness of the individual in a specific cultural environment. The development of cultural behaviour follows a long-term pattern of natural selection influencing human survival. On the other hand, the operational adaptation experienced in using a tool should be considered a short-term intentional adjustment to tool functionality. Human adaptation, together with operational adaptation, can facilitate [9] but not cause cultural evolution. It is the human communication process that can integrate human adaptations and transform them into cultural traits; adaptive events are historically unique to each individual.
Social communication and learning processes, together with historical and cultural time involvement, represent the challenging factors in the research of user