Parallel Graphics and Interactivity with the Scaleable Graphics Engine

Kenneth A. Perrine, Donald R. Jones
William R. Wiley Environmental Molecular Sciences Laboratory
Pacific Northwest National Laboratory
PO Box 999, MS K1-96
Richland, WA 99352
kenneth.perrine@, dr.jones@

Abstract

A parallel rendering environment is being developed to utilize the IBM Scaleable Graphics Engine (SGE), a hardware frame buffer for parallel computers. Goals of this software development effort include finding efficient ways of producing and displaying graphics generated on IBM SP nodes and assisting programmers in adapting or creating scientific simulation applications that use the SGE. Four software development phases that utilize the SGE are discussed: tunneling, SMP rendering, development of an OpenGL API implementation that utilizes the SGE in parallel environments, and additions to the SGE-enabled OpenGL implementation that use threads. The performance observed in software tests shows that programmers can use the SGE to output interactive graphics in a parallel environment.

Keywords: parallel hardware frame buffer OpenGL multithreaded SMP visualization framework

Copyright is held by the author/owner. SC2001 November 2001, Denver. 1-58113-293-X/01/0011 $5.00.

1. Introduction

Researchers at Pacific Northwest National Laboratory (PNNL), William R. Wiley Environmental Molecular Sciences Laboratory (EMSL), have a growing need to visualize distributed data generated by the 512-node IBM SP system at the Molecular Sciences Computing Facility (MSCF). Currently, researchers collect information generated by simulations performed on the SP and visualize the data on a graphics workstation. Transferring the data from the SP to a workstation is time-consuming and does not offer interactive capabilities, such as steering an ongoing simulation computation in real time.

Alternatively, graphics data generated across many SP nodes can be consolidated onto one node and viewed across the network on an X terminal. This approach can be slow, both from the time required to consolidate the data and from the time required to transfer the images to the X terminal.

To address these problems, PNNL recently collaborated with the IBM Watson Research Center to produce the IBM Scaleable Graphics Engine (SGE) [6]. The SGE is a high-performance frame buffer for parallel computers. By generating graphics on SP nodes and eliminating the reliance on workstations, graphics can be rendered more rapidly. There is now an effort to write software to support the SGE and utilize its capabilities.

This paper describes PNNL's software development approach for utilizing the SGE to create a parallel rendering environment. The first phase is an interactive simulation of a vibrating spring mesh that builds graphical output from pre-rendered primitives. The second phase uses threads on SMP nodes to decode a number of MPEG video streams and output the streams to the SGE in parallel. The third phase is a set of extensions for the Mesa and GLUT libraries to support OpenGL rendering in the parallel environment and accelerated rendering using automatically assigned viewport clipping planes on all nodes. The fourth phase further modifies Mesa and GLUT to allow multiple graphics frames to be rendered simultaneously through the use of threads. Future work includes incorporating image compositing and geometry sorting into the rendering environment.
In the descriptions of the programming projects, SP and SGE performance is addressed along with considerations for optimally utilizing the available resources, including SMP and the high-speed switch links. First, a description of the SGE architecture [5] is given to provide a basic understanding of its capabilities.

2. SGE Architecture

The SGE is a hardware component that connects to the SP switch. It can also connect to a Linux cluster through Gigabit Ethernet interfaces. It receives pixel fragments directly from any number of nodes. Multiple high-speed links to the switch allow it to receive pixel data in parallel from multiple nodes. The SGE can support up to 16 links to the switch.

Disjoint pixel fragments are joined within the SGE frame buffer and displayed as a high-resolution, contiguous image. As an example, Figure 1 shows quarters of an image rendered on 4 nodes being displayed on the SGE as a contiguous image. The joining of pixel fragments from multiple switch links is done within the SGE through a router backplane that pipes incoming graphics data from the switch links to the multi-banked back-buffer memory. The router maintains multiple paths of pixel data from the input link interfaces to the memory banks, enabling the streams of incoming data to be stored in memory in parallel rather than serially. The frame buffer can support up to 16 million pixels and can output to multiple, arbitrarily tiled display units through the display driver hardware. Both 24-bit and 16-bit color are supported, and stereo output is provided for use with stereo LCD glasses.

Concurrent read and write bandwidth of each memory bank is 45 megapixels per second. Each of the 8 memory cards contains 2 banks. The SGE is capable of updating 8 display driver cards with a peak performance of 720 megapixels per second. Display driver cards are available for analog and digital outputs. With digital video output hardware, the SGE can drive the new IBM 204-DPI, 3840x2400-pixel "Bertha" LCD display.

The SGE software includes an X server (X11R6.4) and a subroutine library that allows graphics applications to communicate directly with the SGE. The use of this library to transfer graphics data to the SGE is called tunneling. X11 extensions coordinate with and support the high-bandwidth tunneling libraries.

Figure 2 shows the structure of a basic SGE parallel application. The application runs in parallel on multiple nodes and maintains a buffer for outgoing graphics data on each node. The application performs X11 window creation and event handling setup on one node and utilizes the SGE tunneling library to distribute events to the other nodes. The tunneling library is told which pixel areas of the SGE output window are to be updated by each node. In addition to supporting evenly divided regions as pictured in Figure 1, the SGE also supports arbitrarily placed regions of any complexity, including multiple, disjoint regions from each node and nonrectangular regions. These regions may also be overlapped and sequenced to produce animation.
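The structure just described can be sketched in C. The SGE tunneling library's actual function names and signatures are not reproduced in this paper, so the sge_* calls below are hypothetical placeholders, and the use of MPI to obtain a node rank and count is an assumption about how an SP application identifies its nodes. The sketch only illustrates the shape of a basic SGE parallel application: a per-node pixel buffer, node 0 owning the X window, and synchronous tunneling of each node's region.

    /* Sketch of a basic SGE parallel application (cf. Figure 2).
     * All sge_* functions are hypothetical placeholders, not the real
     * tunneling API; MPI is assumed only for node rank and count. */
    #include <mpi.h>
    #include <stdint.h>
    #include <stdlib.h>

    void sge_window_setup(int width, int height);          /* node 0: create X window, enable tunneling */
    void sge_register_region(int x, int y, int w, int h);  /* pixel area this node will update          */
    void sge_tunnel_pixels(const uint32_t *buf,
                           int x, int y, int w, int h);    /* synchronous transfer to the SGE           */
    void sge_distribute_events(void);                      /* node 0 forwards X events to other nodes   */

    int main(int argc, char **argv)
    {
        int rank, nodes;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nodes);

        const int win_w = 800, win_h = 800;
        int strip_h = win_h / nodes;            /* evenly divided horizontal strips */
        int y0 = rank * strip_h;

        if (rank == 0)
            sge_window_setup(win_w, win_h);     /* window creation and event setup on one node */
        sge_register_region(0, y0, win_w, strip_h);

        /* Per-node buffer of outgoing pixel data for this node's strip. */
        uint32_t *pixels = malloc((size_t)win_w * strip_h * sizeof *pixels);

        for (int frame = 0; frame < 1000; ++frame) {
            /* ... application fills `pixels` for its strip ... */
            sge_tunnel_pixels(pixels, 0, y0, win_w, strip_h);
            if (rank == 0)
                sge_distribute_events();
        }

        free(pixels);
        MPI_Finalize();
        return 0;
    }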
The SGE differs from PixelFlow [12][4], a graphics hardware device that is designed to work in conjunction with a parallel computer. Although the SGE contains parallel frame buffer memory and has the ability to perform image masking for arbitrarily shaped regions, it does not contain hardware for geometry transformation, shading, anti-aliasing, depth buffering, or the use of alpha pixels to perform compositing. When using the SGE for graphics output, all of these steps must be performed on the parallel computer's nodes. The SGE is also unlike the Lightning-2 [16], a passive external compositing device that interacts with digital video interfaces to combine multiple video outputs (in the form of DVI video signals) into a single video output. The SGE's hardware is targeted at framebuffer-related functionality at the pixel level and provides flexibility for use with a variety of applications and rendering algorithms. For example, parallel software rendering algorithms for volume rendering, polygonal rendering, and large image manipulation can all utilize the SGE for parallel output. Rendering algorithms that produce blocks of rendered image data, such as some ray-tracing or bucketed polygonal rendering [2] algorithms, can output graphics directly to the SGE, whereas other algorithms may require a software compositing step in order to create ready-to-display graphics data. The SGE can also be used in conjunction with hardware graphics accelerators installed in Linux clusters. The use of off-the-shelf graphics cards in Linux clusters is a growing area of research; an example is Sandia National Laboratory's Ric cluster [17]. It is possible for such clusters to output regions of rendered images to the SGE using graphics accelerator framebuffer readbacks.

The SGE at PNNL is currently connected by 4 switch links to an IBM SP with 24 Winterhawk-2 nodes. Each node is equipped with 4 375-MHz Power3 processors. The SGE is connected to the Panoram Technologies PV290 monitor, which houses three 1280x1024-pixel flat-panel displays in a single unit. The three displays output the SGE frame buffer, which is configured for 3840x1024 pixels. One of the displays can be directed to a CRT video projector, where stereo LCD glasses may be used to view stereo graphics.

The SGE will soon be configured with another SP that has 4 Nighthawk-2 nodes with 16 processors on each node. Although the SP "Colony" switch will be used in that SP, Gigabit Ethernet links will connect each node to the SGE.

3. Application Development

The following sections describe software development projects for the SGE. These projects all contribute to the final objective of having a set of tools and libraries, a parallel rendering environment, that developers and scientists can use to utilize the SGE.

So far, the SGE code developed at PNNL performs software-only 2D and 3D rendering. Before development began, the decision to render in software was made mainly because PNNL does not have hardware-accelerated rendering resources for the 4-processor Winterhawk-2 nodes or the 16-processor Nighthawk-2 nodes. The number of processors on each node of these SPs also encourages the use of multiple threads to generate geometry and to render images, an activity that may not be readily feasible with single-pipeline hardware graphics accelerators.

3.1 SGE Tunneling

Stretch is a sample application written by Peter Hochschild at IBM and provided with the SGE. It is designed to test the operation of SP systems with the SGE. The application simulates a spring-coupled mesh of vibrating spheres and renders pixel-level images representing each step of the simulation. Figure 3 shows the steps each node performs as the demonstration application runs. In the application setup stage, shaded primitive graphics elements are pre-rendered and stored in buffers. These primitives include rods to represent the springs and colored spheres to represent the connections between springs.
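Pre-rendering a primitive of this kind amounts to rasterizing a small shaded sprite once and reusing it every frame. The routine below is an illustrative sketch of how a diffusely shaded sphere sprite might be produced; it is not Stretch's actual code, and the 32-bit pixel format, light direction, and function name are assumptions.

    #include <math.h>
    #include <stdint.h>

    /* Pre-render a shaded sphere sprite of diameter d into a d*d pixel buffer.
     * Pixels outside the sphere are left as 0 (treated as transparent). */
    void prerender_sphere(uint32_t *buf, int d, uint8_t r, uint8_t g, uint8_t b)
    {
        double radius = d / 2.0;
        for (int y = 0; y < d; ++y) {
            for (int x = 0; x < d; ++x) {
                double dx = (x + 0.5 - radius) / radius;
                double dy = (y + 0.5 - radius) / radius;
                double rr = dx * dx + dy * dy;
                if (rr > 1.0) {                    /* outside the sphere */
                    buf[y * d + x] = 0;
                    continue;
                }
                double dz = sqrt(1.0 - rr);        /* z component of the surface normal */
                /* Diffuse (Lambert) shading with a light direction of (-1, -1, 1)/sqrt(3). */
                double shade = (-dx - dy + dz) / sqrt(3.0);
                if (shade < 0.1)
                    shade = 0.1;                   /* ambient floor */
                uint32_t R = (uint32_t)(r * shade);
                uint32_t G = (uint32_t)(g * shade);
                uint32_t B = (uint32_t)(b * shade);
                buf[y * d + x] = 0xFF000000u | (R << 16) | (G << 8) | B;
            }
        }
    }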
These primitives are later copied out of their buffers into a larger common buffer as each complete frame of the simulation is composed. During setup, node 0 also creates an X window for graphics output. It calls a synchronous SGE library function that places the X window into "tunneled" mode. This enables the window to receive graphics data from multiple nodes via the SGE tunneling library. Each node proceeds to the main calculation and rendering loop when the setup stage is complete. For each loop of the simulation, spring forces and sphere positions are calculated.

A representation of the simulation is pieced together in a large pixel buffer using the pre-rendered primitives. Scaleable rendering performance is achieved by assigning a proportionately divided horizontal strip of the output window area to each node. For example, when the Stretch application is run on three nodes, each node renders one-third of the complete image. The tunneling library is then called; it reads the pixel data from the large pixel buffer and transfers them to the SGE. The SGE displays the horizontal strips from all of the nodes as one contiguous image. The tunneling function call is synchronous across all of the nodes.
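The composition step described above is essentially a sprite copy: each pre-rendered rod or sphere is written into the node's large pixel buffer at its computed position. A routine of roughly this shape (an illustrative sketch rather than Stretch's actual code, with an assumed 32-bit pixel format in which 0 means transparent) is:

    #include <stdint.h>

    /* Copy a pre-rendered primitive (src) into the node's output buffer (dst)
     * at position (x, y), clipping against the buffer boundaries. */
    void blit_primitive(uint32_t *dst, int dst_w, int dst_h,
                        const uint32_t *src, int src_w, int src_h,
                        int x, int y)
    {
        for (int row = 0; row < src_h; ++row) {
            int dy = y + row;
            if (dy < 0 || dy >= dst_h)
                continue;                          /* clip vertically */
            for (int col = 0; col < src_w; ++col) {
                int dx = x + col;
                if (dx < 0 || dx >= dst_w)
                    continue;                      /* clip horizontally */
                uint32_t p = src[row * src_w + col];
                if (p != 0)                        /* skip transparent pixels */
                    dst[dy * dst_w + dx] = p;
            }
        }
    }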
The user can interact with the simulation by moving the mouse cursor over the spheres. In the simulation, kinetic energy is added to the mesh of springs at the spheres where mouse movement occurs. As the application runs, node 0 processes incoming X events (node 0 is the "owner" of the output window). Node 0 uses a synchronous SGE library call to distribute the events to all of the other nodes.

The SGE application achieves a frame rate of greater than 30 frames per second (rendering to an 800x800-pixel window), resulting in smooth animation. The simulation is also responsive to user interaction. Had this application been output to an X terminal, overhead would have been incurred in copying pixel data to a concentrator node (to join the disjoint horizontal strips from all of the nodes) and in sending the joined pixel data to the X terminal over the network.

3.2 SMP Rendering: Parallel MPEG Player

In Stretch, the multiple processors on the SMP nodes were not utilized in rendering graphics data. An experimental application was constructed to evaluate SGE performance when all processors on each node are utilized in image creation. This application is called the SGE MPEG Player and consists of modifications to the Berkeley MPEG Player [1]. It simultaneously decodes many MPEG video streams, positions their outputs at locations within the final pixel buffer, and then uses the SGE to display all of the streams. Figure 4a shows a diagram of the simplified steps that the original Berkeley MPEG Player performs in decoding an MPEG stream (the frame read/decode process actually involves further looping). Figure 4b shows the steps performed by all of the threads on each node in the modified version, where each thread decodes a unique stream.

When the player starts its setup procedure, it reads a text file that lists all of the MPEG streams to be decoded. Each node identifies the MPEG streams it is responsible for and determines where each MPEG stream should be positioned in the output window. Figure 5 shows an example of the placement of MPEG stream outputs for a set of 3 nodes, each node decoding 2 MPEG streams. A complex region mask is constructed, consisting of a set of rectangles. Each rectangle is positioned according to where the pixels of a decoded MPEG stream are located in the pixel buffer. Threads are then generated; each thread opens its MPEG stream and performs the functions needed to decode the stream data into RGB pixels. The main thread waits until all decoding threads have finished decoding their frames and then initiates a tunneling operation. A double-buffering scheme is employed to allow decoding threads to begin decoding a new frame while tunneling takes place in the main thread.

An experiment was conducted that involved decoding 48 MPEG video streams to a 3840x960 screen area. Eight nodes were utilized, each decoding 6 MPEG streams. Each video stream was 320x240 pixels in size. The output frame rate averaged 31.0 frames per second (FPS), with a variance of +/- 2.4 FPS. On average, 114 million pixels per second were displayed.

The experiment was done with 4 switch links established between the SGE and the switch. An earlier experiment was performed with a similar setup but with only 2 switch links connecting the SGE to the SP; the maximum frame rate of that experiment was around 18 frames per second. From this result, it can be reasoned that the dominant limiting factor for MPEG player performance with 2 links is the SP switch. It also demonstrates that the SGE scales favorably when the number of switch links is increased from 2 to 4. At this time, no other switch configurations have been tested at PNNL.

This experiment was intended to require an amount of processing similar to what could occur in a simple multithreaded scientific simulation or large image manipulation application. The results show that multithreaded algorithms running in parallel can produce pixel data covering a large screen area on the SGE, and that the screen area can be updated rapidly enough to display full motion. Such performance is needed for PNNL's future plans for utilizing IBM's new 204-DPI flat panel display.
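The decode/display overlap described above can be expressed with POSIX threads and two pixel buffers per stream: worker threads decode the next frame into the back buffer while the main thread tunnels the front buffer. The sketch below shows only this synchronization skeleton; the decoder and the SGE output call are stubbed out, and all names are illustrative rather than taken from the SGE MPEG Player source.

    #include <pthread.h>
    #include <stdint.h>
    #include <stdlib.h>

    #define STREAMS_PER_NODE 6
    #define FRAMES           300

    static pthread_barrier_t frame_done;   /* decoders have finished a frame  */
    static pthread_barrier_t buffer_free;  /* decoders may start the next one */

    typedef struct {
        int       stream_id;
        uint32_t *buf[2];                  /* double buffer for this stream */
    } stream_t;

    static void decode_next_frame(stream_t *s, int which) { (void)s; (void)which; /* MPEG decode stub */ }
    static void tunnel_buffers(int which)                 { (void)which;          /* SGE output stub  */ }

    static void *decoder(void *arg)
    {
        stream_t *s = arg;
        for (int f = 0; f < FRAMES; ++f) {
            decode_next_frame(s, f & 1);          /* decode frame f into buffer f&1   */
            pthread_barrier_wait(&frame_done);    /* tell the main thread it is ready */
            pthread_barrier_wait(&buffer_free);   /* wait until it is safe to go on   */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[STREAMS_PER_NODE];
        stream_t  st[STREAMS_PER_NODE];

        pthread_barrier_init(&frame_done,  NULL, STREAMS_PER_NODE + 1);
        pthread_barrier_init(&buffer_free, NULL, STREAMS_PER_NODE + 1);

        for (int i = 0; i < STREAMS_PER_NODE; ++i) {
            st[i].stream_id = i;
            st[i].buf[0] = calloc(320 * 240, sizeof(uint32_t));
            st[i].buf[1] = calloc(320 * 240, sizeof(uint32_t));
            pthread_create(&tid[i], NULL, decoder, &st[i]);
        }

        for (int f = 0; f < FRAMES; ++f) {
            pthread_barrier_wait(&frame_done);    /* all streams have decoded frame f      */
            pthread_barrier_wait(&buffer_free);   /* release decoders to work on frame f+1 */
            tunnel_buffers(f & 1);                /* ... while frame f is tunneled         */
        }

        for (int i = 0; i < STREAMS_PER_NODE; ++i)
            pthread_join(tid[i], NULL);
        return 0;
    }

Because the main thread releases the decoders before tunneling, decoding of frame f+1 proceeds concurrently with the tunneling of frame f, which is the overlap the double-buffering scheme is meant to provide.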
3.3 Graphics API Development: SGE/Mesa

A goal at PNNL for SGE software development is to create tools that allow programmers to create and customize graphics and simulation applications to render in the parallel environment and output graphics to the SGE. It is also desirable to provide a way for users to interact with the rendered graphics. To achieve that goal, it was logical to begin development with an existing graphics API with which programmers were already familiar. The Mesa [9] and GLUT [14] libraries were utilized for this purpose because the OpenGL API is a commonly used 3D graphics standard and GLUT is often used for simple, platform-independent window preparation and event handling. Mesa, an implementation of the OpenGL API, has a driver interface that allows programmers to supply code to support new graphics devices. A new driver, called SGE/Mesa [15], has been developed at PNNL to support the SGE and facilitate the handling of pixel buffers in parallel. GLUT was also altered to isolate window-creation calls to node 0, automatically distribute events caught on node 0 to the other nodes, swap buffers using SGE function calls, handle distributed color buffer reallocation on window resize events, and handle complex event-driven tasks such as generating and displaying pop-up menus in interactive parallel applications.

A guiding principle for adding parallel SGE support to Mesa was to require no changes to existing OpenGL application code in order to let the code run in parallel and achieve improved rendering performance. The approach is to divide the output into a set of regions, each sized proportionately to the number of nodes running the OpenGL application. For example, a four-node execution divides the output into quarters, as seen in Figure 1. The OpenGL viewports and frustums are then set automatically according to the boundaries of the regions, giving each node clipping planes that reduce rendering to only its respective region. This scheme works if all geometry to be rendered is provided on all nodes, or if geometry is prefiltered (or bucketed [2]) and provided to the nodes maintaining the viewports in which it is to be displayed, similar to what WireGL [7][8] and its successor, Chromium [3], can do.
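SGE/Mesa performs this viewport and frustum setup itself, but the effect can be illustrated with standard OpenGL calls. The sketch below shows how a node responsible for one tile of an evenly divided W x H output could restrict its software rendering to that region by narrowing the frustum and rendering into a tile-sized viewport; it is an illustration of the subdivision idea, not SGE/Mesa's internal code.

    #include <GL/gl.h>

    /* Restrict rendering on this node to one tile of a W x H output window.
     * (tx, ty) is the tile origin in window pixels and (tw, th) its size.
     * (l, r, b, t, n, f) is the frustum that would cover the full window. */
    void set_tile_view(int W, int H, int tx, int ty, int tw, int th,
                       double l, double r, double b, double t,
                       double n, double f)
    {
        /* Sub-frustum covering just this tile of the image plane. */
        double l2 = l + (r - l) * tx / W;
        double r2 = l + (r - l) * (tx + tw) / W;
        double b2 = b + (t - b) * ty / H;
        double t2 = b + (t - b) * (ty + th) / H;

        glViewport(0, 0, tw, th);            /* render into the tile-sized buffer     */
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glFrustum(l2, r2, b2, t2, n, f);     /* clip away geometry outside this tile  */
        glMatrixMode(GL_MODELVIEW);
    }

Geometry falling entirely outside the tile is rejected by these clipping planes, which is what allows each node's software renderer to touch only its share of the pixels.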
Advantages for programmers are gained by using Mesa and GLUT to drive the SGE rather than programming the SGE directly. First, GLUT hides the details of window creation, SGE window tunnel enabling, event distribution, and display updating for parallel SGE applications. Unlike the Stretch demo, the programmer does not need to know low-level SGE library details, such as how to configure windows for use with parallel tunneling. Second, Mesa does not require programmers to perform pixel buffer allocation and direct pixel manipulation in the SGE pixel format. For example, the Stretch demonstration application allocates an output buffer in which direct pixel manipulation takes place in the form of pixel copying. Mesa, however, performs all of the OpenGL polygon-drawing operations into a pixel buffer that it maintains, and the contents of that pixel buffer are tunneled to the SGE when the GLUT swapping function call is made.

Applications that utilize GLUT and the SGE/Mesa library are structured so that execution flow proceeds as seen in Figure 6. While library functions handle much of the system-related functionality, the application utilizing GLUT and SGE/Mesa performs the next-step calculations (such as object positions or states) and does the rendering. Simple OpenGL applications can achieve increased rendering performance when the code is run in parallel. Most applications in the Mesa distribution's "/demos" directory compile with the SGE/Mesa library and exhibit performance increases when run on more than one node.
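The shape of such an application is that of an ordinary GLUT program. The skeleton below uses only standard OpenGL and GLUT calls; under SGE/Mesa and the modified GLUT, the same source is simply recompiled against those libraries, which perform the window tunneling, event distribution, and per-node viewport setup behind these calls.

    #include <GL/glut.h>

    static float angle = 0.0f;

    static void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glPushMatrix();
        glRotatef(angle, 0.0f, 1.0f, 0.0f);
        glutSolidTeapot(0.5);          /* any ordinary OpenGL drawing */
        glPopMatrix();
        glutSwapBuffers();             /* with SGE/Mesa, the pixel buffer is tunneled here */
    }

    static void idle(void)
    {
        angle += 1.0f;                 /* next-step calculation */
        glutPostRedisplay();
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);         /* window creation is isolated to node 0 */
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
        glutInitWindowSize(1280, 1024);
        glutCreateWindow("sge demo");
        glEnable(GL_DEPTH_TEST);
        glutDisplayFunc(display);
        glutIdleFunc(idle);
        glutMainLoop();
        return 0;
    }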
A preliminary experiment was conducted on two of these applications ("gears" and "gloss") and on a customized version of the "gears" demo called "cgears". The "gears" demonstration handles 357 quadrilateral surfaces and 74 triangles, flat-shaded with z-buffering and backface culling. The "cgears" application draws 100 times the geometry of "gears" with backface culling disabled (so that all geometry is drawn, regardless of which side of the geometry faces the viewer) and z-buffering disabled; effectively, 35,700 quadrilateral surfaces and 7,400 triangles are drawn per frame. The "gloss" demonstration draws a glossy teapot through software rendering with diffuse and specular texture mapping, at around 5000 drawn polygons. All tests were run with graphics displayed in a 1280x1024-pixel window on the SGE.

Table 1 lists the node count, average frame rate, and rendering latency for each test. The overhead incurred in tunneling the rendered images is not included in the latency measurement. For tests run on more than one node, the rendering latencies for the busiest and most idle nodes are listed as "maximum latency/minimum latency".

Table 1: Frame rates (FPS) and maximum/minimum rendering latency (seconds) for SGE/Mesa tests

                 "gears"                       "cgears"
    Nodes   FPS      latency (max/min)    FPS       latency (max/min)
    1       14 Hz    0.045                0.41 Hz   2.4
    2       20       0.038/0.026          0.75      1.3/1.3
    4       25       0.031/0.020          1.2       0.80/0.57
    8       29       0.024/0.018          1.4       0.64/0.085
    16      28       0.023/0.005          2.1       0.46/0.051

                 "gloss"
    Nodes   FPS      latency (max/min)
    1       3.5 Hz   0.26
    2       6.3      0.14/0.14
    4       8.8      0.10/0.090
    8       7.5      0.10/0.040
    16      8.6      0.10/0.040

It can be seen from the benchmark data that utilizing more than 8 nodes with the "gloss" demo and more than 4 nodes with the "gears" demo produces insignificant performance increases and even slightly decreases performance in some cases. It is estimated that this is partially caused by poorly distributed geometry across the fixed viewport regions. The minimum rendering latency values for the high node counts show that at least one node is mostly idle because of this poor load balancing; some of the corner subdivisions do not contain much geometry to be rendered.

Although further work can be done to achieve proper load balancing and faster frame rates for increased polygon counts, the preliminary experiment shows promising results for SGE operation: increased performance is observed when multiple nodes render different regions of one image.

3.4 SMP Rendering Threads with SGE/Mesa

The performance gained by utilizing multiple processors on each node in the SGE MPEG Player prompted additions to SGE/Mesa to facilitate the use of multiple rendering threads. The utilization of multiple processors was determined to be the next step in development in order to increase the software-rendered polygon count while maintaining interactivity.

For the next development phase, functionality was added to allow multiple frames and/or multiple regions to be rendered simultaneously on one or more nodes. It was expected that allowing multiple frames to be rendered simultaneously would increase the frame rate while keeping the rendering latency about the same as seen earlier with SGE/Mesa. Other multithreaded accelerations of geometry transformation, rendering, or compositing were not addressed in this stage of SGE/Mesa development. Figure 7 shows the typical application steps in rendering simultaneous frames on each node.

The timing diagram in Figure 8a shows the progression of execution steps for the main thread and two rendering threads, each using its own pixel buffer. The main thread creates the rendering threads and waits for the first rendering thread to report that its pixel buffer is ready. A rendering thread's pixel buffer is ready for tunneling when the thread calls the "Post" function. A frame sequence counter is utilized to associate the first rendering thread with the first, third, etc. frames and the second rendering thread with the second, fourth, etc. frames. When a frame is ready, its pixel buffer is tunneled to the SGE. While tunneling occurs, the corresponding rendering thread is suspended, since modifications cannot be made to a pixel buffer while it is being tunneled. A double-buffering scheme can be employed to allow one buffer to be tunneled while another is being rendered, as seen in Figure 8b. This eliminates the need to suspend the rendering thread while its buffer is tunneled.

When multiple nodes are utilized, each rendering thread may be configured to render a specific frame in a time sequence. As an example, consider two nodes each running two rendering threads. Between the two threads on both nodes, a sequence of four frames can be rendered simultaneously: the two threads on the first node render the first two frames of the sequence, and the two threads on the second node render the last two frames. In addition, multiple threads can be configured to render a proportionately sized region of a frame (the region being sized according to the number of threads across all nodes assigned to render the same frame in the time sequence). Each thread utilizes an automatically set viewport and frustum to create clipping planes along the borders of its region. For example, four nodes could each employ two rendering threads; each thread would render one quarter of a frame, and one-quarter of the output area of the two-frame sequence would be maintained on each node. Since the SGE takes advantage of multiple connections to the switch, dividing the display area across multiple nodes offers optimal switch utilization when tunneling.
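The Post/suspend handshake of Figure 8a can be sketched with POSIX threads. In the sketch below, thread i renders frames i, i+T, i+2T, ... and hands each finished buffer to the main thread, which tunnels the buffers in frame-sequence order; the rendering and tunneling calls are stubs, and the names are illustrative rather than the SGE/Mesa thread API documented at [15].

    #include <pthread.h>
    #include <stddef.h>
    #include <stdint.h>

    #define T       4                    /* rendering threads per node */
    #define FRAMES  240

    typedef struct {
        int             id;
        int             ready;           /* a finished frame is waiting to be tunneled */
        uint32_t       *pixels;          /* this thread's pixel buffer                 */
        pthread_mutex_t lock;
        pthread_cond_t  cond;
    } slot_t;

    static slot_t slot[T];

    static void render_frame(slot_t *s, int frame) { (void)s; (void)frame; /* software rendering stub */ }
    static void tunnel(slot_t *s)                  { (void)s;              /* SGE output stub         */ }

    static void *render_thread(void *arg)
    {
        slot_t *s = arg;
        for (int frame = s->id; frame < FRAMES; frame += T) {   /* frame sequence assignment */
            render_frame(s, frame);
            pthread_mutex_lock(&s->lock);
            s->ready = 1;                         /* "Post": hand the buffer to the main thread */
            pthread_cond_signal(&s->cond);
            while (s->ready)                      /* suspended while the buffer is tunneled     */
                pthread_cond_wait(&s->cond, &s->lock);
            pthread_mutex_unlock(&s->lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[T];
        for (int i = 0; i < T; ++i) {
            slot[i].id = i;
            slot[i].ready = 0;
            slot[i].pixels = NULL;
            pthread_mutex_init(&slot[i].lock, NULL);
            pthread_cond_init(&slot[i].cond, NULL);
            pthread_create(&tid[i], NULL, render_thread, &slot[i]);
        }
        for (int frame = 0; frame < FRAMES; ++frame) {   /* consume frames in sequence order */
            slot_t *s = &slot[frame % T];
            pthread_mutex_lock(&s->lock);
            while (!s->ready)
                pthread_cond_wait(&s->cond, &s->lock);
            tunnel(s);
            s->ready = 0;                         /* release the thread for its next frame */
            pthread_cond_signal(&s->cond);
            pthread_mutex_unlock(&s->lock);
        }
        for (int i = 0; i < T; ++i)
            pthread_join(tid[i], NULL);
        return 0;
    }

With the double-buffering variant of Figure 8b, each slot would carry two pixel buffers so that the rendering thread could begin its next frame instead of blocking while the previous buffer is tunneled.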
This indicates that the multiple threads in this experiment are rendering at about the same rate as the processes in the earlier experiment. For the “agears” test application, it is estimated that the 61-Hz frame rate limitation may be caused by limited switch link bandwidth to the SGE and overhead in the tunneling process.Although the frame rate is increased by utilizing multiple rendering threads, there are still limitations that keep SGE/Mesa from efficiently handling the rendering of large numbers of polygons. Most importantly, the load-balancing and geometry sorting limitations of the earlier single-process SGE/Mesa version are present in SGE/Mesa for multiple rendering threads. 4. Future WorkFuture work will be done to utilize multiple threads in order to achieve an interactive frame rate, but to also decrease rendering latency.The Chromium library [3] (the successor to WireGL [7][8]) will be evaluated to determine its compatibility with SGE/Mesa. Chromium allows geometry to be distributed from one or more nodes to a series of rendering nodes, each in charge of a region of the output rendered image. While Chromium and its predecessor can distribute geometry to multiple rendering nodes and drive multiple-monitor displays, the same geometry-sorting mechanism can be utilized with the viewport-division scheme employed by SGE/Mesa.The ability for the SP nodes to effectively handle multiple rendering threads calls for experimentation with partial-geometry compositing functionality. Work will be done to allow graphic fragments generated with multiple threads and nodes to be composited together in sort-last fashion [11]. Compositing will be carried out using a binary-swap approach, similar to what is described in [10]. It is anticipated that the compositing capability will improve load balancing and rendering speed by allowing overlapping fragments of an image to be simultaneously rendered and joined together as a complete frame.At a future time when PNNL tests the SGE with a Linux cluster, the possibility of utilizing hardware accelerators with the SGE will be evaluated.5. ConclusionThe SGE’s ability to accept pixel data from multiple nodes simultaneously makes it a viable tool for use with Table 2: Frame rates of multithreaded SGE/Mesa applications“agears” “acgears” Nodes FPS latency(max/min)FPS latency(max/min)1 30 Hz 0.052 sec 1.6 Hz 2.5 sec2 61 0.047/0.035 2.9 1.4/1.44 61 0.038/0.026 4.8 0.83/0.618 61 0.033/0.024 5.3 0.73/0.096 16 61 0.030/0.024 8.2 0.46/0.058“agloss”Nodes FPS lat. (max/min)1 14 Hz 0.28 sec2 23 0.17/0.164 31 0.11/0.118 32 0.11/0.05116 33 0.11/0.051。