Virtual Heritage at iGrid 2000
A New Spatial Data Indexing Method

He Yunbin; Zhou Fan
[Journal] Journal of Harbin University of Science and Technology
[Year (Volume), Issue] 2009, 14(4)
[Abstract] To address the large overlap between node regions in the traditional R-tree, a new spatial index structure, the R0-tree, is proposed. The main idea is to store outlying objects in internal tree nodes: if such objects are kept at a higher level of the tree, the minimum bounding rectangles (MBRs) of the lower-level nodes are smaller, and the index therefore performs better. The results show that this method improves space utilization, reduces the number of I/O accesses, and substantially improves indexing performance.
[Pages] 4 (pp. 9-11, 16)
[Authors] He Yunbin; Zhou Fan
[Affiliation] School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, Heilongjiang, China (both authors)
[Language] Chinese
[CLC Classification] TP311.131
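The abstract gives only the guiding idea of the R0-tree, not its algorithms. As a rough, hypothetical C++ sketch of that idea (keep an entry at a higher level of the tree when pushing it into the best-fitting child would enlarge that child's MBR too much, so that lower-level MBRs stay tight), the node layout and the enlargement threshold below are assumptions, not the paper's actual method.

```cpp
#include <algorithm>
#include <cstddef>
#include <limits>
#include <vector>

// Axis-aligned minimum bounding rectangle (MBR).
struct Rect {
    double xmin, ymin, xmax, ymax;
    double area() const { return (xmax - xmin) * (ymax - ymin); }
    Rect merged(const Rect& o) const {
        return { std::min(xmin, o.xmin), std::min(ymin, o.ymin),
                 std::max(xmax, o.xmax), std::max(ymax, o.ymax) };
    }
};

struct Entry { Rect mbr; int objectId; };

// Internal node that, unlike a classic R-tree node, may also hold
// "outlying" entries directly instead of pushing them into a child.
struct Node {
    std::vector<Node*> children;
    std::vector<Rect>  childMbr;      // MBR of each child subtree
    std::vector<Entry> elevated;      // outliers kept at this (higher) level
    bool isLeaf = false;
    std::vector<Entry> leafEntries;
};

// Hypothetical insertion rule: if routing the entry into the best child
// would enlarge that child's MBR by more than maxEnlargement, keep the
// entry at the current level so lower-level MBRs stay small.
void insert(Node* node, const Entry& e, double maxEnlargement) {
    if (node->isLeaf) { node->leafEntries.push_back(e); return; }

    std::size_t best = 0;
    double bestGrowth = std::numeric_limits<double>::max();
    for (std::size_t i = 0; i < node->children.size(); ++i) {
        double growth = node->childMbr[i].merged(e.mbr).area()
                      - node->childMbr[i].area();
        if (growth < bestGrowth) { bestGrowth = growth; best = i; }
    }
    if (bestGrowth > maxEnlargement) {
        node->elevated.push_back(e);   // treat as an outlier for this subtree
    } else {
        node->childMbr[best] = node->childMbr[best].merged(e.mbr);
        insert(node->children[best], e, maxEnlargement);
    }
}
```

A full implementation would also need node splitting and a search routine that checks both the elevated entries and the child subtrees at every level.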
An Empirical Study of Wiki-Based English Writing Instruction in Independent Colleges

Abstract: As a hypertext system that is open on the web and supports collaborative authoring by many users, the wiki is collaborative, simple, and open, and can be applied to English writing instruction in independent colleges. English majors at an independent college carried out a collaborative English writing experiment through a wiki space. The results confirm that wiki-based collaborative writing improves writing proficiency more than traditional individual writing, helps stimulate learners' motivation, raises learner autonomy, and develops the ability to learn cooperatively. The results also show that teachers should play an active role in the process, designing tasks skillfully and guiding students.
…centered on network tools such as Email and BBS. With the development of Web 2.0 technology, new modes of communication such as Blogs and wikis have swept the Internet and are increasingly applied by educators to language writing instruction. As a new Web 2.0 technology, the wiki has great potential and application value in English writing teaching. Its community-oriented collaborativeness and openness can help teachers and learners overcome the limitations of traditional English writing instruction, strengthen interaction, participation, and cooperation among learners, and ultimately improve learners' written expression in English. Abroad, theoretical exploration and teaching practice of wiki-assisted writing instruction began earlier and is by now fairly widespread.
NVIDIA GRID vGPU (Virtual GPU Technology) for Autodesk Revit Power Users

Solution Guide: Balancing Graphics Performance, User Density & Concurrency with NVIDIA GRID™ vGPU™ (Virtual GPU Technology) for Autodesk Revit Power Users, V1.0

Table of Contents
The GRID vGPU Benefit
Understanding GRID vGPU Profiles
Benchmarking as a Proxy for Real World Workflows
Methodology
Fully Engaged Graphics Workloads?
Analyzing the Performance Data to Understand How User Density Affects Overall Performance
Server Configuration & GRID Resources

The GRID vGPU Benefit

The inclusion of GRID vGPU™ support in XenDesktop 7.1 allows businesses to leverage the power of NVIDIA's GRID™ technology to create a whole new class of virtual machines designed to provide end users with a rich, interactive graphics experience. By allowing multiple virtual machines to access the power of a single GPU within the virtualization server, enterprises can now maximize the number of users with access to true GPU-based graphics acceleration in their virtual machines. Because each physical GPU within the server can be configured with a specific vGPU profile, organizations have a great deal of flexibility in how to best configure their server to meet the needs of various types of end users. Up to 8 VMs can connect to the physical GRID GPU via vGPU profiles controlled by the NVIDIA vGPU Manager.

While the flexibility and power of vGPU system implementations provide improved end user experience and productivity benefits, they also provide server administrators with direct control of GPU resource allocation for multiple users. Administrators can balance user density and performance, maintaining high GPU performance for all users. While user density requirements can vary from installation to installation based on specific application usage, concurrency of usage, vGPU profile characteristics, and hardware variation, it is possible to run standardized benchmarking procedures to establish user density and performance baselines for new vGPU installations.

Understanding GRID vGPU Profiles

Within any given enterprise the needs of individual users vary widely; a one-size-fits-all approach to graphics virtualization does not take these differences into account. One of the key benefits of NVIDIA GRID vGPU is the flexibility to utilize various vGPU profiles designed to serve the needs of different classes of end users. While the needs of end users can be quite diverse, for simplicity we can group them into the following categories: Knowledge Workers, Designers, and Power Users.

For knowledge workers, key areas of importance include office productivity applications, a rich web experience, and fluid video playback. Knowledge workers have the lightest graphics demands, but they expect a similarly smooth, fluid experience to the one that exists natively on today's graphics-accelerated devices such as desktop PCs, notebooks, tablets and smart phones.

Power Users are those users with the need to run more demanding office applications; examples include office productivity software, image editing software like Adobe Photoshop, mainstream CAD software like Autodesk Revit, and product lifecycle management (PLM) applications. These applications are more demanding and require additional graphics resources with full support for APIs such as OpenGL and Direct3D.

Designers are those users within an organization running demanding professional applications such as high-end CAD software and professional digital content creation (DCC) tools. Examples include Autodesk Inventor, PTC Creo, Autodesk Revit and Adobe Premiere.
Historically designers have utilized desktop workstations andhave been a difficult group to incorporate into virtual deployments due to the needfor high end graphics, and the certification requirements of professional CAD andDCC software.The various NVIDIA GRID vGPU profiles are designed to serve the needs of these three categories of users:Each GPU within a system must be configured to provide a single vGPU profile, however separate GPU’s on the same GRID board can each be configured separately. For example a single K2 board could be configured to serve eight K200 enabled VM’s on one GPU and two K260Q enabled VM’s on the other GPU.The key to effi cient utilization of a system’s GRID resources requires understanding the correct end user workload to properly configure the installed GRID cards with the ideal vGPU profiles maximizing both end user productivity and vGPU user density.The vGPU profiles with the “Q” suffix (K140Q, K240Qand K260Q), offer additional benefits not available inthe non-Q profiles, the primary of which is that Qbased vGPU profiles will be certified for professionalapplications. These profiles offer additional supportfor professional applications by optimizing thegraphics driver settings for each application usingNVIDIA’s Application Configuration Engine (ACE),ACE offers dedicated profiles for most professionalworkstation applications, once ACE detects thelaunch of a supported application it verifies that thedriver is optimally tuned for the best userexperience in the application. Benchmarking as a Proxy for Real World WorkflowsIn order to provide data that offers a positive correlation to the workloads we can expect to see in actual use, benchmarking test case should serve as a reasonable proxy for the type of work we want to measure. A benchmark test workload will be different based on the end user category we are looking to characterize. For knowledge worker workloads a reasonable benchmark is the Windows Experience Index, and for Power Users we can use the Revit benchmark for Autodesk Revit. The SPEC Viewperf benchmark is a good proxy for Designer use cases.To illustrate how we can use benchmark testing to help determine the correct ratio between total user density and workload performance we’ll look at a Power User workload using t he Revit benchmark, which tests performance within Autodesk Revit 2014. The benchmark tests various aspects of Revit performance by running through a series of common workloads used in the creation of a Revit project. These workloads include viewport rotation and viewport refresh using realistic and hidden line visual styles. These areas have been identified in particular as pain points within the average users Revit workflow. The benchmark creates a detailed model and then automates interacting with this model within the application viewports in real-time.The Revit benchmark is an excellent proxy for end user workloads, it is designed to test the creation of an actual real world model and test performance using various graphic display styles and return a benchmark score which isolates the various performance categories. Because the benchmark runs without user interaction once started it is an ideal candidate for multi-instance testing. 
As an industry standard benchmark, it has the benefit of being a credible test case, and since the benchmark shows positive scaling with higher end GPU’s it allows us to test various vGPU profiles to understand how profile selection affects both performance and density.MethodologyBy utilizing test automation scripting tools, we can automate launching the benchmark on the target VM’s. We can then automate launching the VM’s so that the benchmark is running on the target number of VM’s concurrently. Starting with a single active user per physical GPU, the benchmark is launched by the client VM and the results of the test are recorded. This same procedure is repeated by simultaneously launching the benchmark on additional VM’s and continuing to repeat these steps until the maximum number of vGPU accelerated VMs per GRID card (K1 or K2) is reached for that particular vGPU profile.Fully Engaged Graphics Workloads?When running benchmark tests, we need to determine whether our test nodes should be fully engaged with a graphics load or not. In typical real-world configurations the number o f provisioned VM’s actively engaged in performing graphically intensive tasks will vary based on need within the enterpriseenvironment. While possible, it is highly unlikely that every single provisioned VM is going to be under a high demand workload at any given moment in time.In setting up our benchmarking framework we have elected to utilize a scenario that assumes that every available node is fully engaged. While such heavy loading is unlikely to occur in a real world environment, it allows us to use a “worst case scenario” to plot our density vs. performance data.Analyzing the Performance Data to Understand How User Density Affects Overall PerformanceTo analyze the benchmark result data it’s important to understand that we are less interested in individual performance results than we are in looking for the relationship between overall performance and total user load. By identifying trends within the results where performance shows a rapid falloff we can begin to make an educated determination about the maximum number of Revit users we can support per server. Because we are most interested in maintaining interactivity within the viewport, we’ll focus on the benchmark results from the Rotate View test. To measure scalability we take the sum of the individual result scores from each VM and total them. The total is then divided by the total number of active VM’s to obtain an Average Score Per VM. In determining the impacts of density on overall benchmarking performance we plot the benchmark as seen in the graphs below. For each plot we record the average results for each portion of the benchmark score result, and indicate the percentage drop in performance compared to the same profile with a single active VM. Because Revit is an application which certifies professional graphics for use with the application, we can focus on the professional “Q” profiles, 140Q , 240Q and 260Q which are certified options for Revit.All our testing is done with 2 x GRID boards installed in the server (2x K1 or 2x K2).In Example 1 below we analyze the data for the K240Q vGPU profile, one of the professional profiles available on the K2 GRID board. The K240Q profile provide 1028MB of framebuffer on the virtual GPU. 
The performance trend for the K240Q profile show a performance falloff of 109% between a single fully engaged K240Q VM and the maximum number of K240Q fully engaged VM’s supported on the server (16).We can see the superior performance offered by vGPU in the Revit benchmark when running the maximum number of VMs on a dual K2 boards (16), completes the benchmark rotation test 192% faster than a server running a single VM instance of the benchmark using CPU emulated graphics and is 614% faster than CPU emulated graphics running the same number of active VMs (16). As the number of active VM’s increases on the server, the results show a performance falloff of 109% between a single fully engaged K240Q VM and the maximum number of K240Q fully engaged VM’s supported on the server (16).Example 1 – Dual K2 boards allocated with K240Q vGPU profile (1024MB Framebuffer), each K2 board can support up to 8K240Q vGPU accelerated VMs.In Example 2 below is the Revit performance profile for the K140Q the professional profile for the K1 GRID board. The K140Q profile is configured with 1024MB of framebuffer per accelerated VM, the same as the K240Q. On a single K1 GRID board the performance profile is extremely similar between theK140Q and the K240Q profiles up to 8 active VMs, which is the maximum number of VMs supported on the K240Q. Moving beyond 8 VM’s we see that although the average benchm ark scores continue to decline the decline continues at a gradual pace until we get beyond 16 active VM’s. Beyond 16 active VM’s we see a much more rapid falloff in terms of performance until at around 24 active VM’s we see a performance level that falls below the performance of a single CPU emulated graphics VM for the first time, although performance is still significantly better than a CPU emulated graphics configuration running a matching number of active VM’s.Example 2 Dual K1 boards allocated with K140Q vGPU profile (1024MB Framebuffer), each K1 board can support up to 16 K140Q vGPU accelerated VMs for a total of 32 VMs in the tested configuration.Example 3 below shows the combined performance profiles for both the K2 GRID based K240Q andK260Q profiles and the GRID K1 based K140Q profile compared to CPU emulated graphics showing the results of the Revit benchmark rotate view portion of the test. The performance data for all three GRID profiles are virtually identical. It’s worth noting that the trend of performance falloff is similar between the vGPU results and the CPU graphics results. The similarity in falloff is likely an indication that the falloff represents a lack of enough system resources on the server as the number of fully engaged VMs increases past at certain point (for our hardware configuration that point is seen around 16 VMs). 
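The analysis described above reduces each run to an Average Score Per VM and a percentage falloff relative to the single-VM baseline. The short, self-contained sketch below shows that bookkeeping with placeholder scores; it is illustrative only and does not use measured data.

```cpp
#include <cstdio>
#include <numeric>
#include <vector>

// One benchmark run: N concurrently active VMs, one benchmark score per VM.
struct Run { int activeVMs; std::vector<double> perVmScores; };

double averageScorePerVm(const Run& r) {
    double total = std::accumulate(r.perVmScores.begin(), r.perVmScores.end(), 0.0);
    return total / r.activeVMs;   // sum of all scores divided by active VM count
}

int main() {
    std::vector<Run> runs = {
        {1,  {100.0}},                         // single fully engaged VM (baseline)
        {8,  std::vector<double>(8, 70.0)},    // placeholder scores, not real data
        {16, std::vector<double>(16, 48.0)},
    };
    double baseline = averageScorePerVm(runs.front());
    for (const Run& r : runs) {
        double avg = averageScorePerVm(r);
        double dropPct = (baseline - avg) / baseline * 100.0;  // falloff vs. 1 VM
        std::printf("%2d active VMs: avg score %.1f, falloff %.1f%%\n",
                    r.activeVMs, avg, dropPct);
    }
    return 0;
}
```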
The results show that regardless of profile used vGPU offers a significant performance increase over CPU emulated graphics under the same workload.Example 3 – K260Q, K240Q, and K140Q vGPU profiles show very similar performance and falloff curve matches the CPU falloff curve indicating that system resources are likely the limiting factor.Board Profile Maximum VMs per Board Recommended range of VM'sP a g e | 11 Server ConfigurationDell R720Intel® Xeon® CPU E5-2670 2.6GHz, Dual Socket (16 Physical CPU, 32 vCPU with HT)Memory 384GBXenServer 6.2 + SP1Virtual Machine ConfigurationVM Vcpu : 4 Virtual CPUMemory : 5GBXenDesktop 7.1 RTM HDX 3D ProRevit 2014Revit BenchmarkNVIDIA Driver: 332.07Guest Driver: 331.30Additional NVIDIA GRID ResourcesWebsite –/vdiNVIDIA GRID Forums - https://Certified Platform List –/wheretobuyISV Application Certification –/gridcertificationsGRID YouTube Playlist –/gridvideosHave issues or questions? Contact us through the NVIDIA GRID Forums or via Twitter @NVIDIAGRID。
Translated Foreign Literature: Designing an Immersive Virtual Reality Interface for Layout Planning

Designing an Immersive Virtual Reality Interface for Layout Planning

Abstract. This paper discusses why production layout planning is considered a suitable new field for applying virtual reality, develops a framework for a virtual layout planning tool, and reports a study that compares an immersive virtual reality system with a monitor-based system for detecting layout design problems. The proposed framework was applied in a study that did not yet include a specification for interactively changing the spatial layout. The main aim of the study was to compare an immersive system with a monitor-based virtual reality system for factory layout analysis. Participants examined a shop-floor environment containing three main layout problems (equipment arrangement, space utilization, and equipment location) and gave assessments of feasible improvements. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: virtual reality, HMD, layout design, manufacturing cell

1. Introduction

Virtual reality (VR), unlike many other new technologies, has not suffered from a lack of publicity. Ever since the BBC featured VR briefly on the Nine O'Clock News on 19 January 1993, including a VR model of a jet engine viewed with an HMD, the technology has attracted enormous enthusiasm and attention. Today, however, the situation has changed somewhat, and the jargon of virtual reality is no longer hard to understand. Virtual reality has acquired a negative image, having promised much and delivered little. This is partly the result of exaggerated expectations, and it motivates a closer investigation of where virtual reality can deliver significant, measurable benefits. Current VR research therefore looks for quality engineering applications in which superior visualization and interaction capabilities outweigh the drawbacks of the technology.

2. Background

Shop-floor layout is not a new topic in manufacturing environments. Layout scenarios are traditionally selected on the basis of user-defined criteria such as travel frequency, travel distance, and the physical attributes of parts, equipment, and operators [1]. As studies have shown, the first stage of planning is the overall block design, the so-called block layout [2]. However, such data are of limited practical use once the exact positions and orientations of equipment are fixed in the detailed layout. Current approaches to factory equipment, for example the formation of manufacturing cells, have exposed problems that call for a new kind of design tool [3, 4]. The aim of manufacturing cells is to combine, within a cell or module, the efficiency of a flow line (typically a car assembly line) with the flexibility of a functional layout (turning section, grinding section, assembly section, and so on).
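The traditional criteria mentioned above (travel frequency, travel distance, physical attributes) are usually folded into a single material-handling score for each layout scenario. The following sketch, with invented equipment positions and flow frequencies, shows one conventional way of computing such a score; it is an illustration, not the tool proposed in the paper.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Equipment { const char* name; double x, y; };   // position on the shop floor
struct Flow { int from, to; double tripsPerDay; };     // material flow between machines

// Score a layout scenario as the sum of (travel frequency x travel distance).
double travelCost(const std::vector<Equipment>& layout,
                  const std::vector<Flow>& flows) {
    double cost = 0.0;
    for (const Flow& f : flows) {
        double dx = layout[f.from].x - layout[f.to].x;
        double dy = layout[f.from].y - layout[f.to].y;
        cost += f.tripsPerDay * std::hypot(dx, dy);
    }
    return cost;
}

int main() {
    std::vector<Equipment> cell = { {"lathe", 0, 0}, {"mill", 4, 0}, {"assembly", 4, 6} };
    std::vector<Flow> flows = { {0, 1, 30}, {1, 2, 30}, {0, 2, 5} };
    std::printf("material-handling score: %.1f distance-trips per day\n",
                travelCost(cell, flows));
    return 0;
}
```

Comparing such a score across candidate layouts is the kind of user-defined evaluation that an immersive walkthrough is meant to complement, not replace.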
"Cultural Filtering" in the English Translation of Ji Hua (《极花》) in the Context of the Belt and Road Initiative

Liu Jia
[Journal] Journal of Shanxi Institute of Energy
[Year (Volume), Issue] 2022, 35(1)
[Abstract] In the current context of the Belt and Road Initiative, the British translator Han Bin (Nicky Harman) has devoted herself to translating Jia Pingwa's novels, and in 2019 she translated and introduced Jia Pingwa's Ji Hua (《极花》). Although Harman used a variety of translation strategies to remain faithful to the original, the translation still shows that the mechanism of "cultural filtering" is unavoidable in literary translation; it is the reason why literature inevitably undergoes variation, loss, and misreading in the course of its dissemination. Making good use of cultural distance is therefore the key to successful literary translation and dissemination.
[Pages] 3 (pp. 93-95)
[Author] Liu Jia
[Affiliation] Xi'an Innovation College of Yan'an University
[Language] Chinese
[CLC Classification] H315.9
Virtual Reality on a WIM: Interactive Worlds in Miniature

Virtual Reality on a WIM: Interactive Worlds in Miniature

Richard Stoakley, Matthew J. Conway, Randy Pausch
The University of Virginia, Department of Computer Science, Charlottesville, VA 22903
{rws2v | conway | pausch}@
(804) 982-2200

KEYWORDS
virtual reality, three-dimensional interaction, two-handed interaction, information visualization

ABSTRACT
This paper explores a user interface technique which augments an immersive head tracked display with a hand-held miniature copy of the virtual environment. We call this interface technique the Worlds in Miniature (WIM) metaphor. By establishing a direct relationship between life-size objects in the virtual world and miniature objects in the WIM, we can use the WIM as a tool for manipulating objects in the virtual environment.

In addition to describing object manipulation, this paper explores ways in which Worlds in Miniature can act as a single unifying metaphor for such application independent interaction techniques as object selection, navigation, path planning, and visualization. The WIM metaphor naturally offers multiple points of view and multiple scales at which the user can operate, all without requiring explicit modes or commands.

Informal user observation indicates that users adapt to the Worlds in Miniature metaphor quickly and that physical props are helpful in manipulating the WIM and other objects in the environment.

INTRODUCTION
Many benefits have been claimed formally and informally for using immersive three dimensional displays. While virtual reality technology has the potential to give the user a better understanding of the space he or she inhabits, and can improve performance in some tasks [18], it can easily present a virtual world to the user that is just as confusing, limiting and ambiguous as the real world. We have grown accustomed to these real world constraints: things we cannot reach, things hidden from view, things beyond our sight and behind us, and things which appear close to each other because they line up along our current line of sight. Our virtual environments should address these constraints and with respect to these issues be "better" than the real world.

In particular, we notice that many implementations of virtual environments only give the user one point of view (an all-encompassing, immersive view from within the head mounted display) and a single scale (1:1) at which to operate. A single point of view prohibits the user from gaining a larger context of the environment, and the 1:1 scale in which the user operates puts most of the world out of the user's immediate reach.

To address these two concerns, we propose allowing a virtual reality user to hold in his or her hands a three-dimensional interactive model – a miniature copy of the life-size virtual world (figure 1). The objects in the model each correspond to a life-size object; the positions and orientations of these objects in the real world "shadow" those of their proxies in the miniature.
Moving an object on the model moves an object in the real world and vice versa.This World In Miniature (WIM) gives the user another point of view from which to observe the scene, and the ability to change that point of view under direct manipulation as rap-idly as the user can turn the model in his or her hands.As an adjunct to the WIM, we have explored the advantages and disadvantages of grounding the user’s perception of the model with a physical prop; in this case, a clipboard.The rest of this paper discusses previous work in the realm of miniature worlds used for three dimensional interfaces, a description of our WIM implementation, the basic interac-tion techniques we have used to demonstrate the effective-ness of the WIM concept, and the importance of asymmetric two-handed interaction. We conclude with results from informal user observation of the WIM interface and a dis-cussion of future work.PREVIOUS WORKMany researchers have dealt with the open questions of three dimensional object manipulation and navigation in virtual environments. The World in Miniature metaphor draws on these previous experiences, and attempts to syn-thesize an intuitive, coherent model to help address these questions. Most previous work falls into two categories: (1) object manipulation and (2) navigation in virtual environ-ments.We use the term “navigation” to mean allowing the user to move in his or her virtual environment and helping the user maintain orientation while there.Previous Work in Object ManipulationWare’s Bat [23] interface demonstrates the use of a 6 degree-of-freedom (DOF) input device (a position and ori-entation tracker) to grab and place objects in a virtual envi-ronment. In this work, Ware used the bat to pick up and manipulate the virtual objects themselves, not miniature, proxy objects. Ware found that users easily understood the 1:1 mapping between translations and rotations on the input device and the object being manipulated. This study was a unimanual task and did not place the user’s hands in the same physical space as the graphics.In Sachs’s 3-Draw [20], we see two hands used asymmetri-cally in a three-dimensional drawing and designing task. In addition to this, Sachs used props for each of the user’s hands and found that relative motion between hands was better than a fixed single object and one free mover. 3-Draw was not implemented in an immersive, head-tracked envi-ronment and the system did not provide multiple, simulta-neous views. The input props controlled the point of view by rotating the object’s base plane.Hinkley’s [13] work with props exploited the asymmetric use of hands, which follows from work by Guiard [12]. This work showed how a prop in the non-dominant hand can be used to specify a coordinate system with gross ori-entation, while the user’s preferred hand can be used for fine grain positioning relative to that coordinate system. This work is also three dimensional but non-immersive and directly manipulates an object at 1:1 scale in a “fishtank”paradigm.3DM [2] was an immersive three dimensional drawing package, but provided only one point of view at a time and required the user to change scale or fly explicitly to manip-ulate objects which were currently out of arm’s reach. But-terworth states that users sometimes found the scaling disorienting.Schmandt’s [21] early explorations of Augmented Reality (AR) used a half-silvered mirror over a stationary drafting tablet in order to specify both a base plane and a slicing plane in computer generated VLSI models. 
He found this surface invaluable in constraining the user’s input to a plane. The scene was not immersive and the system only displayed one scale view at a time.Previous Work in NavigationDarken’s [7] discussion of navigating virtual environments enumerates many important techniques and compares their relative strengths and weaknesses. Several of the naviga-tion techniques presented were WIM-like maps, but were primarily two-dimensional in nature. Through the WIM interface, some of these techniques have been extended into the third dimension.Ware [24] explored the possibilities of holding the three-dimensional scene in hand for the purpose of quickly navi-gating the space. He found this scene in hand metaphor par-ticularly good for quickly viewing the bounding-cube edges of a scene. The scene in hand task was a unimanual opera-tion which employed ratcheting to perform large rotations. The work most closely resembling the WIM interface was Fisher’s map cube in virtual reality [9]. The NASA VIEW system used a three dimensional miniature map of the immersive world to help navigate. In addition, it used mul-tiple two dimensional viewports to jump from one place in the virtual environment to another. A user’s manipulation of the “map cube” was unimanual. A similar map-cube concept was referred to as the God’s-eye-view in the super cockpit project [11].Previous Work in Object SelectionMany researchers have explored methods for selecting objects in a virtual world. Common approaches include raycasting [10] [23] and selection cones [15]. Both of these techniques suffer from object occlusion and therefore need to be tied closely with some mechanism that can quickly establish different points of view.Put-That-There [3] used selection via a combination of pointing and naming (or description). Pointing in this two dimensional application is analogous to raycasting in vir-tual environments.SYSTEM DESCRIPTIONTo explore the benefits and limitations of the WIM meta-phor, we built a simple three dimensional modeling pack-age that could be used as a design tool for a traditional architecture design project called a Kit of Parts.We outfitted the user’s non-dominant hand with a clipboard attached to a Polhemus position sensor. In his or her other hand, the user holds a tennis ball in which we have installedtwo buttons and another Polhemus™ sensor. This button-ball was used as the selection and manipulation tool for all of our user observation and WIM development. The first button on the buttonball was used for selection of objects,and the second was left open for application-specified actions. Thus equipped, the user’s view from inside the HMD is exactly like that in any other immersive virtual environment, except that the user can raise the clipboard to view a miniature copy of the world in which he or she is standing and can lower the WIM graphics out of sight toremove them from his or her field of view (figure 2).The WIMgraphics attached to the clipboard are nothing more than a miniature copy of all the surrounding graphics in the immersive environment. Each of the objects in the WIM copy are tied to their counterparts in the immersive environment through pointers and vice versa at the point of WIM creation. In this way, when an object responds to a method call, the object has enough information to ensure that the same method gets called on its “shadow” object.Thus the user can manipulate the objects in the WIM and the objects in the world will follow (video figure 1 - The WIM Interface). 
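As a hypothetical illustration of the pointer tie just described, the sketch below forwards a method call from a WIM proxy to its life-size counterpart (and vice versa); the class and method names are invented and do not reflect the authors' Alice-based implementation.

```cpp
#include <cstdio>

// Each object holds a pointer to its "shadow" counterpart, so a method
// applied to one copy is re-applied to the other exactly once.
class SceneObject {
public:
    explicit SceneObject(const char* n) : name(n) {}
    void setCounterpart(SceneObject* other) { counterpart = other; }

    void setPosition(float x, float y, float z, bool fromShadow = false) {
        px = x; py = y; pz = z;
        std::printf("%s moved to (%.1f, %.1f, %.1f)\n", name, x, y, z);
        if (counterpart && !fromShadow)
            counterpart->setPosition(x, y, z, /*fromShadow=*/true);
    }

private:
    const char* name;
    float px = 0, py = 0, pz = 0;
    SceneObject* counterpart = nullptr;
};

int main() {
    SceneObject chair("life-size chair"), chairProxy("WIM chair");
    chair.setCounterpart(&chairProxy);
    chairProxy.setCounterpart(&chair);
    chairProxy.setPosition(1.0f, 0.0f, 2.5f);   // dragging the miniature...
    return 0;                                   // ...moves the real chair too
}
```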
The environment itself (in miniature)becomes its own widget for manipulating objects in the environment [5].Software and EquipmentThe Kit of Parts modeler was implemented using the Alice Rapid Prototyping System [6] running the simulation on a Sun Microsystems Sparc 10™ and rendering on a Silicon Graphics Onyx Reality Engine 2™. Typical rendering rates were about 25 frames per second (FPS ), while simulation rates were typically 6FPS . A Virtual Research Flight Hel-met™ was used for the display and was tracked with a Pol-hemus Isotrak magnetic tracker. The buttonball and clipboard each carried a Polhemus tracker sensor for posi-tion and orientation information.INTERACTION TECHNIQUES USING THE WIMIn this section, we discuss basic application independent WIM -based interaction techniques we have built using the Alice Rapid Prototyping System [6].Quickly Changing the POVBeing able to see objects from many different angles allows us to quickly remove or reduce occlusion and improves the sense of the three-dimensional space it occupies [22].Because the WIM is a hand-held model, the user can quickly establish different points of view by rotating the WIM in both hands. Note that this form of “WIM fly-by” can often give the user all the information that he or she needs with-out destroying the point of view established in the larger,immersive point of view. We believe that this interaction technique can establish a new viewpoint more quickly and with less cognitive burden than a technique that requires an explicit “flight” command and management of the flight path.Object Selection: Overcoming Range and OcclusionIf the virtual, immersive environment is very large, there will be objects that are out of physical arm’s reach. If the user must touch an object to select it, the user would have to employ a separate flying mechanism, which means mov-ing the camera; a sometimes disorienting or otherwise inap-propriate approach. Armed with a World In Miniature ,the user now has the choice of selecting objects either by point-ing to the object itself (as before) or by pointing to its proxy on the WIM . By turning the model in his or her hands, the user can even view and pick objects that are obscured by his or her current line of sight from the immersive camera viewpoint. The WIM provides a second (often “bird’s eye”)point of view from which to examine the scene.Object ManipulationOnce objects are selected, the WIM allows us to manipulate those objects at either the scale offered by the WIM or the one-to-one scale offered by the immersive environment. If the scale of the WIM is smaller than that of the immersive world, manipulating objects on the WIM necessarily gives the user far-reaching coarse-grained control of objects.The WIM can also display objects at a greater than one-to-one scale, implementing a three dimensional magnifying glass of sorts. This gives the user very fine grain control of objects through the WIM at the expense of range. Though we have not implemented zooming in our current system,we clearly see the need for allowing the user to get more detail on the WIM or to zoom out to view more context. We are currently pursuing this avenue of research.We speculate that because the WIM is clearly a model attached to the user’s hand, it is seen as something separate from the rest of the immersive environment. 
The WIM therefore naturally offers two different scales to the user without requiring explicit modes or commands.An Example: Hanging a PicturePutting these ideas together, we can consider an example task: hanging a picture on a wall. This task is typical of abroad class of two-person tasks in which the proximity required to manipulate an object interferes with the desire to see those effects in a larger context. With a WIM, a single user can stand at a comfortable distance to view the picture in context, while at the same time reaching into the WIM to manipulate it.Of course, the user could choose to use the WIM the other way around: fly close to the wall to stand next to the pic-ture, then use the WIM to view the entire room in miniature to determine if the picture is straight. Examining relative strengths and weaknesses of each of these approaches is an area of further study.Mixing Scales and OperationsViewing, selection, and manipulation are independent oper-ations. Because the WIM gives the user another scale at which to operate, the user can choose the most appropriate scale for any given subtask, and even switch scales in the middle to suit the requirements of the task. For example: the user can reach into the WIM to select a distant object (taking advantage of the greater than 1:1 scale of the WIM), and then reach out to the immersive world to move the WIM-selected object at a distance in 1:1: scale [23] [15] all the while viewing the scene in the WIM.RotationOur current implementation allows users to rotate objects, through ratcheting (repeated grabbing, rotating and releas-ing) [25] and is therefore more awkward than a rotation done with just the fingers [15]. Interestingly, some users found it just as effective to grab the object and to counter-rotate the entire WIM.In our current implementation, rotation is gridded to 30 degree increments, primarily to assist in aligning rectilinear objects [15]. We found that if the rotation grid is too course (greater than about 45 degrees), some people assume that they cannot rotate at all and if set to 15 degrees or less, users report that rotation behaves as if it had no gridded increments at all.Navigation: Flight with a WIMTo make the view travel through the immersive environ-ment, the most common user interface technique in virtual environments is probably “flying.” If the WIM includes some representation of the user as an object in the scene, the user can simply reach into the WIM and “pick himself up” to change his location in the environment. This raises the question of when to update the immersive world as objects in the WIM are manipulated. We enumerate three possibilities.Updating after Manipulation:immediate / post-mortem / batchWhen changes are made on the WIM, we usually move the real object and the proxy object simultaneously, something we refer to as immediate update. Under some conditions, immediate update is either not desirable (due to visual clut-ter or occlusion) or impossible (the complexity of the object prevents changes to it from being updated in real time). In these situations, we use post-mortem update,where the immersive environment updates only after the user is finished with the WIM interaction and has released the proxy.A good special case of post-mortem update is the case of moving the user’s viewpoint. 
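Before the camera special case discussed next, the following sketch summarizes the three update-timing options just enumerated (immediate, post-mortem, batch) as a small, hypothetical policy class; the types and names are assumptions rather than the paper's code.

```cpp
#include <vector>

enum class UpdateMode { Immediate, PostMortem, Batch };

struct Edit { int objectId; float x, y, z; };   // a pending change made on the WIM

class WimSession {
public:
    explicit WimSession(UpdateMode m) : mode(m) {}

    // Called continuously while the user drags a proxy object in the WIM.
    void onProxyMoved(const Edit& e) {
        if (mode == UpdateMode::Immediate) applyToWorld(e);  // real object follows live
        else pending.push_back(e);                           // defer the change
    }

    // Post-mortem: commit when the user releases the proxy.
    void onRelease() { if (mode == UpdateMode::PostMortem) flush(); }
    // Batch: commit only on an explicit command (e.g. the second button).
    void onCommit()  { if (mode == UpdateMode::Batch) flush(); }
    void onRevert()  { pending.clear(); }   // transaction-like rollback for batch mode

private:
    void flush() { for (const Edit& e : pending) applyToWorld(e); pending.clear(); }
    void applyToWorld(const Edit&) { /* move the life-size counterpart here */ }

    UpdateMode mode;
    std::vector<Edit> pending;
};
```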
We find that immediate update of the camera while the user is manipulating the camera proxy is highly disorienting, so instead we wait until the user has stopped moving the camera, and then use a smooth slow in / slow out animation [16] to move the camera to its new position. This animated movement helps maintain visual continuity [15].Another useful form of update delay is batch update. Here, the user makes several changes to the WIM and then issues an explicit command (e.g. pressing the second button on the buttonball) to cause the immersive environment to com-mit to the current layout of the WIM. This is useful for two reasons. First, before the user commits his or her changes, the user has two independent views of the environment (the “as-is” picture in the immersive world and the “proposed”picture in the WIM). Secondly, it might be the case that moving one object at a time might leave the simulation in an inconsistent state, and so “batching” the changes like this gives the user a transaction-like commit operation on the changes to objects in the scene (with the possibility of supporting rollback or revert operations if the changes seem undesirable halfway through the operation). VISUALIZATIONThe Worlds in Miniature metaphor supports several kinds of displays and interaction techniques that fall loosely under the heading of visualization. These techniques exploit the WIM’s ability to provide a different view of the immersive data with improved context. It would seem that the WIM is good for visualization for all the same reasons that a map is good for visualization:Spatially locating and orienting the user:the WIM can pro-vide an indicator showing where the user is and which way he or she is facing relative to the rest of the environment. Path planning: with a WIM we can easily plan a future camera path in three dimensions to prepare for an object fly-by. The user can even preview the camera motion before committing him or herself to the change in the larger, immersive viewpoint.History: if the user leaves a trail behind as he or she travels from place to place, the WIM can be used like a regular 2D map to see the trail in its entirety. Dropping a trail of crumbs is not as useful if you cannot see the trail in context. Measuring distances:the WIM can be configured to display distances between distant (or very closely spaced) points that are difficult to reach at the immersive one-to-one scale. The WIM also provides a convenient tool for measuring areas and volumes.Viewing alternate representations:the immersive environ-ment may be dense with spatially co-located data (i.e. topo-logical contours, ore deposits, fault lines). The user candisplay data like this on the WIM, showing an alternate view of the immersive space in context. The improved context can also facilitate the observation of more implicit relation-ships in a space (i.e. symmetry, primary axis) and can dis-play data not shown in the immersive scene (circulation patterns, wiring and plumbing paths). Here, the WIM acts more like a three dimensional version of Beir’s “magic lenses” [1] or one of Fitzmaurice’s “active maps” [10]. Three Dimensional Design: the WIM, being a small three dimensional model, serves the same functions that architec-tural models have traditionally served.MULTIPLE WIMSUntil now, we have considered only a single instantiation of a WIM in a virtual environment, but clearly there might be a reason to have more than one such miniature active at a time. 
Multiple WIM s could be used to display:•widely separated regions of the same environment •several completely different environments•worlds at different scales•the same world displayed at different points in time This last option allows the user to do a side by side compar-ison of several design ideas (video figure 2 –Multiple WIM s). A logical extension of this notion is that these snap-shots can act as jump points to different spaces or times, much the same way hypertext systems sometimes have thumbnail pictures of previously visited documents [14]. Selecting a WIM would cause the immersive environment to change to that particular world [9].Multiple WIM s enable users to multiplex their attention much the same way Window Managers allow this in 2D. These multiple views into the virtual world, allow the user to visually compare different scales and/or different loca-tions [8].MANIPULATING THE WIMThrough the exploration of the previous interfaces, several issues arose concerning the interface between the human and the WIM tool.The Importance of PropsOne of our early implementations of the WIM work did not use physical props; the user grasped at the WIM graphics as he or she would any other graphical object in the scene. As long the user continued the grasping gesture, the WIM fol-lowed the position and orientation of the user’s hand and when released, it would remain hovering in space wherever it was dropped. While this was sufficient for many tasks, we found that rotating the WIM without the benefit of haptic feedback was extremely difficult. Invariably, users would contort themselves into uncomfortable positions rather than let go of the WIM to grab it again by another, more comfort-able corner.After Sachs [20], we decided to use physical props to assist the user’s manipulation of the WIM itself. To represent the WIM, we chose an ordinary clipboard to which we attached a Polhemus 6 DOF tracker for the user’s non-dominant hand. For the user’s preferred hand, we used a tennis ball with a Polhemus tracker and two buttons (figure 3). Props: The ClipboardThis prop allows the user to rotate the WIM using a two-handed technique that passes the clipboard quickly from one hand to the other and back when the rotation of the WIM is greater than can be done comfortably with one hand. Interestingly, some users hold the clipboard from under-neath, rotating the clipboard deftly with one hand. Both of these techniques are hard to imagine doing in the absence of haptic feedback provided by a physical prop.Props: The ButtonballBefore we settled on the buttonball as our primary pointing device, we experimented with a pen interface to the WIM. This technique is most appropriate for manipulation of objects when they are constrained to a plane [21] (the base plane being the default). When manipulation of objects in three dimensions is called for, a pen on the surface of the clipboard does not appear to be expressive enough to cap-ture object rotation well.Two Handed InteractionOur implementation of the WIM metaphor takes advantage of several previously published results in the field of motor behavior that have not been fully exploited in a head tracked virtual environment. The most important of these results state that a human’s dominant (preferred) hand makes its motions relative to the coordinate system speci-fied by the non-dominant hand, and the preferred hand’s motion is generally at a finer grain [12]. 
In our case, the non-dominant hand establishes a coordinate system with the clipboard and the dominant hand performs fine grained picking and manipulation operations.While the dominant hand may be occupied with a pointing device of some kind, it is still sufficiently free to help the other hand spin the WIMquickly when necessary.Shape of PropsLike all real world artifacts, the shape of the props and the users’ experience suggest things about the usage of the props [17]. For example, the shape of the clipboard says something to users about its preferred orientation. The cur-sor’s physical prop is spherical, indicating that it has no preferred orientation, and in fact it does not matter how the cursor is wielded since rotation is relative to the plane spec-ified with the non-dominant hand, which holds the clip-board.The clipboard also provides a surface that the user can bear down on when necessary. This is similar to the way an artist might rest his or her hand on a paint palette or a guitarist might rest a finger on the guitar body.PROBLEMS WITH PHYSICAL PROPSThe use of the clipboard as a prop presents some problems of its own.FatigueHolding a physical clipboard, even a relatively light one, can cause users to fatigue rather quickly. To overcome this problem, we created a simple clutching mechanism that allows the user to alternately attach and detach the WIM from the physical prop with the press of a button. When detached, the WIM “floats” in the air, permitting the user to set the prop down (video figure 3 – Prop Clutching). This clutching mechanism extended well to multiple WIM s: when the user toggles the clutch, the closest WIM snaps to the user’s clipboard. Toggling the clutch again disengages the current WIM and allows the user to pick up another WIM. Another technique for relieving arm stress is to have the user sit at a physical table on which the clipboard could be set. Users can also rest their arms on the table while manip-ulating the model. The presence of the table clearly pre-sents a mobility problem for the user because it prevents the user from moving or walking in the virtual environ-ment, and so may not be ideal for all applications. Limitations of Solid ObjectsIn our experience, one of the first things a user of the WIM is likely to try is to hold the WIM close to his or her face in order to get a closer, more dynamic look at the world. Users quickly discover that this is an easy, efficient way to estab-lish many different points of view from inside the minia-ture. Unfortunately, many times the physical prop itself gets in the way, preventing the user from putting the tracker in the appropriate position to get a useful viewpoint. Fortu-nately, the ability to disengage the WIM, leaving it in space without the clipboard helps alleviate this problem. INFORMAL OBSERVATIONWe observed ten people using the WIM. Some had previous experience with virtually reality, and some had three dimensional design experience (e.g. architecture students). Users were given a simple architectural modeler and asked to design an office space. We proceeded with a rapid “observe, evaluate, revise” methodology to learn about the Worlds in Miniature interface.The user was given a miniature copy of the room he or she was standing in, and a series of common furniture pieces on a shelf at the side of the WIM (figure 4). 
The user was then asked to design an office space by positioning the furniture in the room.In many ways, this design task replicates the traditional architectural design project known as a Kit of Parts. The furniture pieces (half-height walls, shelves, tables and table corner pieces) represent the kit of manipulable objects. Moving the traditional kit of parts project into virtual real-ity was particularly appealing for several reasons:•It constrains the space in which the user can work.•Since users are designing a life-size space, clearly see-ing that space at 1:1 scale is helpful.•The WIM succinctly replaces the plans, elevations and models typically associated with this type of design project.The WIM that we used was a 1/4” scale version of the immersive world, with a snap spacing of 1/8” (0.5 scale feet). In addition to the translation snap, rotation was con-strained to be about the Z axis in increments of 30 degrees.User ReactionsWe observed users in order to see how viable a solution the WIM interface was to several types of tasks. While it was not our intention for the study to produce concrete num-bers, we were after what Brooks refers to as interesting “Observations” [14]. We hoped to gain some sense of:1 )How quickly do users take to the WIM metaphor?2 )How do users like the weight and maneuverability ofthe physical clipboard?3 )Do users like clutching? Do they take to it easily?4 )How do users feel about moving themselves via thecamera proxy?None of the users expressed problems establishing the map-。
The ViFi Tribe: Visual High Fidelity

Che Kuli
[Journal] China Internet Week (互联网周刊)
[Year (Volume), Issue] 2005, (12)
[Abstract] HiFi, a product of the analog home-appliance era, belongs to a withdrawn and bewildered middle class: people who would rather sink into a wide, comfortable sofa and listen to the complete sound of a glass shattering through a pair of wooden boxes worth a hundred thousand yuan than spend 100 yuan on a dozen glasses and hear the more authentic sound for themselves. Precisely preserving and replaying the stimulation of sound waves on the eardrum and the skin is "the original sin of HiFi." The consumer psychology of the HiFi tribe leans toward middle-class men of blood type B or the Virgo sign; they do not necessarily love music, but they are certainly nitpicking, self-righteous types.
[Pages] 1 (p. 71)
[Author] Che Kuli
[Affiliation] None
[Language] Chinese
[CLC Classification] F274
A Virtual Restoration Method for Axisymmetric Relic Fragments

Journal of Computer-Aided Design & Computer Graphics, Vol. 18, No. 5, May 2006

A Virtual Restoration Method for Axisymmetric Relic Fragments
Li Chunlong (1), Zhou Mingquan (2), Cheng Xin (1), Cheng Ribin (1)
(1) Institute of Visualization Technology, Northwest University, Xi'an 710069
(2) College of Information Science and Technology, Beijing Normal University, Beijing 100875
(lichlong-box@sina.com)

Abstract: Based on the geometric criterion that the normal vectors of the vertices of a surface of revolution intersect its axis, a geometric method is proposed for estimating the axis of an axisymmetric broken relic. From the axis, the profile (generatrix) of the fragment is recovered, and a restoration model of the relic with both inner and outer surfaces is generated. Finally, the textures of the broken relic are captured, repaired where necessary, and used to render the restored model, producing a three-dimensional virtual relic for exhibition in the digital museum of archaeology.

Keywords: relic restoration; virtual heritage; surface of revolution; digital museum of archaeology
CLC classification: TP391.41

Virtual Restoration of Axisymmetric Relic Fragments
Li Chunlong (1), Zhou Mingquan (2), Cheng Xin (1), Cheng Ribin (1)
(1) Institute of Visualization Technology, Northwest University, Xi'an 710069
(2) College of Information Science and Technology, Beijing Normal University, Beijing 100875

Abstract: Based on the principle that the normal vector of a point on a surface of revolution intersects its rotational axis, a geometric method is proposed to estimate the axis of a fragmented relic and restore its profile curve. Revolving the profile around its axis generates the original model with both inner and outer surfaces. Textures of the relic are collected and painted onto the model for exhibition in the Digital Museum of Archaeology.

Key words: relic repairing; virtual heritage; surface of revolution; digital museum of archaeology

0 Introduction

The emergence of digital technology provides a new opportunity for the protection of cultural heritage. Using computer graphics, image processing, and virtual reality technology to digitally preserve and exhibit China's rich ancient cultural heritage is of great practical significance. The emerging digital museum of archaeology is a network platform that uses modern technology (mainly computer technology) to exhibit, protect, and preserve the world's natural and cultural heritage; an important part of it is the exhibition of three-dimensional virtual relics, which gives the public the opportunity to appreciate relics online. However, most of the large number of ancient relics already excavated have been damaged by external forces, both natural and human, and restoring these fragments is time-consuming and laborious work for archaeologists. Manual restoration involves cleaning the fragments, gluing them together by hand, and filling in missing parts, after which the relic is shown to the public through photographs, the physical object, or replicas. The appearance of the digital museum of archaeology has broadened the concept of relic restoration to include virtual restoration aimed at computer-based exhibition. Research on computer-aided restoration of broken relics, which provides three-dimensional restored models for display, therefore has great practical value. Computer-aided restoration systems use the shape features of fragments to reassemble scattered pieces automatically and virtually, generating assembly steps that guide the work of the restorer. The work in this paper differs from [1]: it does not aim to guide the actual restoration steps, but only to generate virtual relics suitable for exhibition. The objects to be restored are relic fragments that are axisymmetric in shape and have lost part of their information.

Restoring an axisymmetric relic fragment is in effect the problem of reconstructing a surface of revolution, for which the axis and the generatrix of the surface must first be obtained. Reference [2] first proposed the concept of surface-of-revolution reconstruction, defining the surface with NURBS and discussing reconstruction in two cases (axis known and axis unknown); both cases, however, require complete information about the surface and do not apply to the incomplete data considered here, where only some fragments of the relic are available. Calio et al. [3], when determining the axis of symmetry, require the measured object to be adjusted manually into a canonical pose. Pottmann et al. [4] use line geometry to solve the reconstruction of surfaces of revolution. For partially sampled surfaces, reference [5] uses the average of the derivative of curve curvature. Guided by the practical application, this paper uses a geometric method to restore the complete relic model from a single fragment of the broken object.

1 The Restoration Process

The fragment is first digitized, giving the three-dimensional point cloud of the fragment to be restored together with the geometry and topology of its surface (all models in this paper are triangle meshes). A portable 3D laser scanner is recommended for digitizing the physical object; we use the Polhemus FASTSCAN handheld scanner, which is suitable for excavation sites with large numbers of fragments. Note that laser scan lines are very sensitive to reflections, and ceramic fragments are particularly prone to specular highlights that degrade the scan; when scanning ceramics, special measures are therefore required (usually coating the surface with a non-reflective substance) to reduce the highlights as far as possible.

Once the data model of the broken relic is available, the axis of symmetry and the generatrix are computed; this paper proposes a geometric method for estimating the axis. A digital camera is used to capture the surface textures of the relic; after any necessary repair they are mapped onto the inner and outer surfaces of the restored model, the lighting is adjusted, and a realistically rendered relic is produced and delivered to the digital museum of archaeology for exhibition.

Calio et al. estimate the axis from the centers of planar circles and then perform the necessary optimization and adjustment. Pottmann et al. [4] compute the axis of a surface of revolution using line geometry: each point of the surface is regarded as undergoing a helical motion, and the surface can be parameterized as [x(u)cos t - y(u)sin t, x(u)sin t + y(u)cos t, z(u)], where the point [x(u), y(u), z(u)] lies on the generatrix. The normal vector at each sample point is then estimated; by a result of line geometry, the normals of all points of the surface lie in a linear line complex C [4], and the normal direction of the plane of that complex is also the tangent direction of each point's helical motion. Finally, the Plucker coordinates (a, a^) of the axis of symmetry can be computed from the complex (c, c^) as a = c/||c|| and a^ = c^/||c||. Estimating the axis thus becomes the problem of determining a linear line complex from the vertex normals, which reduces to finding the smallest eigenvalue and its corresponding eigenvector. For partially sampled surfaces, reference [5] intersects the surface with a family of parallel planes to obtain a series of curves and computes the average of the derivative of their curvature; the plane orientation is varied and the computation repeated until the average tends to zero, at which point the plane normal coincides with the axis of the surface of revolution.

Because the restored model is not intended for industrial manufacture but mainly for appreciation, this paper estimates the axis by a manual step followed by automatic correction. First, two planar circles are fixed by hand at the two ends of the object; their centers determine an approximate axis OO', the axis to be optimized (Figure 2). Since OO' is set by hand, it still needs adjustment. Let the true axis of the surface of revolution be PP' (Figure 3); in general PP' and OO' are two skew lines, and the problem is how to modify OO' so that it approaches PP'.

(Received 2005-04-01; revised 2005-08-29. Supported by the National Natural Science Foundation of China (60372072) and a State Administration of Cultural Heritage project (20040311).)
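Once the axis and the profile have been estimated, the restoration step itself is a sweep of the profile around the axis. The sketch below revolves a profile given as (radius, height) samples into a triangle mesh; the profile values are placeholders, and a real pipeline would also build the inner surface and map the repaired textures, as described above.

```cpp
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

struct Vec3 { double x, y, z; };
struct Tri  { int a, b, c; };

// Sweep a profile curve (pairs of radius r and height z along the axis)
// around the z axis to produce vertices and triangles of the restored surface.
void revolveProfile(const std::vector<std::pair<double,double>>& profile,
                    int slices, std::vector<Vec3>& verts, std::vector<Tri>& tris) {
    const double kPi = 3.14159265358979323846;
    const double step = 2.0 * kPi / slices;
    for (int s = 0; s < slices; ++s)
        for (const auto& p : profile)
            verts.push_back({ p.first * std::cos(s * step),
                              p.first * std::sin(s * step),
                              p.second });
    const int ring = static_cast<int>(profile.size());
    for (int s = 0; s < slices; ++s) {
        int next = (s + 1) % slices;             // wrap around the axis
        for (int i = 0; i + 1 < ring; ++i) {
            int a = s * ring + i,    b = s * ring + i + 1;
            int c = next * ring + i, d = next * ring + i + 1;
            tris.push_back({a, c, b});
            tris.push_back({b, c, d});
        }
    }
}

int main() {
    std::vector<std::pair<double,double>> profile = { {3.0, 0.0}, {4.0, 5.0}, {2.5, 9.0} };
    std::vector<Vec3> verts; std::vector<Tri> tris;
    revolveProfile(profile, 36, verts, tris);
    std::printf("restored mesh: %zu vertices, %zu triangles\n", verts.size(), tris.size());
    return 0;
}
```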
Virtual Heritage at iGrid 2000

Dave Pape (1), Josephine Anstey (2), Bryan Carter (3), Jason Leigh (1), Maria Roussou (4), Tim Portlock (1)
1 University of Illinois at Chicago, USA; 2 University at Buffalo, USA; 3 Central Missouri State University, USA; 4 Foundation of the Hellenic World, Greece
contact: pape@

Abstract

As part of the iGrid Research Demonstration at INET 2000, we created two Virtual Cultural Heritage environments – "Virtual Harlem" and "Shared Miletus". The purpose of these applications was to explore possibilities in using the combination of high-speed international networks and virtual reality (VR) displays for cultural heritage education. Our ultimate goal is to enable the construction of tele-immersive museums and classes. In this paper we present an overview of the infrastructure used for these applications, and some details of their construction.

Introduction

Cultural heritage is becoming an important application for virtual reality technology. An EC/NSF Advanced Research Workshop identified it as one of the key application domains for driving the development of new human-computer interfaces [1]. Virtual heritage applications use the immersive and interactive qualities of VR to give students or museum visitors access to computer reconstructions of historical sites that would normally be inaccessible, due to location or fragile condition. They also provide the possibility of visiting places that no longer exist at all, or of viewing how the places would have appeared at different times in history. A number of research projects have delved into the problems of building accurate digital reconstructions of ancient artifacts. For example, Boulanger et al describe the work done in creating models of the tombs of Nefertari and Tutankhamun, and presenting a VR tour of the models in a public museum [2]. Similarly, a recent joint project by the Israel Antiquities Authority and UCLA created an interactive simulation of the Herodian Mount for the Jerusalem Archaeological Park [3].

Tele-immersion is defined as the synthesis of collaborative virtual environments, audio and video conferencing, and supercomputing resources and massive data stores, all interconnected and running over high-speed national or worldwide networks [4]. Tele-immersion enables people at distant locations to work together in a common virtual space, particularly on problems in highly compute-intensive areas such as scientific visualization, computational steering, and design engineering. With tele-immersion technologies, it will be possible to take the cultural heritage work that has been done so far, and expand it beyond a single location to create a virtual museum or educational environment, which can be visited at any time by users on the Internet.

Shared Miletus and Virtual Harlem emerged from very different communities, with different goals and motivations. However the combination of VR and tele-immersion enriches and extends both projects. Both projects document places that no longer exist – the ruins of Miletus are sunk in a swamp near the Turkish coast, the Harlem of the Harlem Renaissance exists only on celluloid, paper and in music. In both cases a carefully documented virtual environment can bring back a sense of the place, the history, the architecture, and be used as an educational tool.

Shared Miletus is a reconstruction of the ancient city of Miletus, created by the Foundation of the Hellenic World (FHW), a non-profit, privately funded museum and cultural research institution in Athens, Greece. The FHW is an interpretive museum. Its mission is to preserve and present Hellenic history and culture; it seeks to use state-of-the-art technology to accomplish these goals [5].

The Virtual Harlem Project is a collaborative learning network designed to supplement African American literature courses studying the Harlem Renaissance [6]. The project was originally conceived in 1998 by Bryan Carter at Central Missouri State University and the first prototype was initiated in collaboration with Bill Plummer at the Advanced Technology Center at the University of Missouri. In August of 1999, the University of Illinois at Chicago joined the project, to translate the Harlem experience into a fully immersive environment.

Background

CAVE Virtual Reality

Shared Miletus and Harlem were shown on the CAVE™ and ImmersaDesk™ VR systems at INET 2000. The CAVE (CAVE Automatic Virtual Environment) is a projection-based virtual reality system [7]. In contrast to head-mounted display VR systems, where the user views a virtual world through small video screens attached to a helmet, in projection-based VR large, fixed screens are used to provide a panoramic display without encumbering the user. The CAVE is a 10 foot-cubed room. Stereoscopic images are rear-projected onto the walls creating the illusion that 3D objects exist with the user in the room. The user wears liquid crystal shutter glasses to resolve the stereoscopic imagery. An electromagnetic tracking sensor attached to the glasses allows the CAVE system to determine the location and orientation of the user's head. This information is used by the Silicon Graphics Onyx that drives the CAVE to render the imagery from the user's point of view. The user can physically walk around an object that appears to exist in 3D in the middle of the CAVE. The user holds a wand which is also tracked and has a joystick and three buttons for interaction with the virtual environment. Typically the joystick is used to navigate through environments that are larger than the CAVE itself. The buttons can be used to change modes, or bring up menus in the CAVE, or to grab virtual objects. Speakers are mounted to the top corners of the CAVE structure to provide sounds from the virtual environment. The ImmersaDesk is a smaller VR system with a 6 by 4 foot angled screen, resembling a drafting table, that is also capable of displaying rear-projected stereoscopic images [8]. It has a similar tracking system and the same wand interface. The CAVE and the ImmersaDesk can run the same VR applications.

VR applications displayed on the CAVE and ImmersaDesk systems can be linked over high-speed networks. In these tele-immersive experiences users can share the same virtual world from remote locations. They can interact with each other and with the objects in the virtual world. Users see each other as avatars: simple physical models like puppets, receiving tracking information over the network. For example, a user in a CAVE in Boston navigates through the virtual space, turns his head or points, and a user in a linked CAVE in Chicago sees the avatar move, turn and point. This simple body language makes collaboration in the virtual environment much more effective. The users also speak to each other using audio-conferencing tools. Tele-immersion supports high-quality interaction between small groups of participants involved in many fields including design, training, and education.

CAVERNsoft

CAVERNsoft is a toolkit for building tele-immersive VR applications [9].
Its purpose is to enable rapid generation of tele-immersive applications, without the application authors needing to worry about network protocols and architectures. It is a C++ library that provides a wide range of tools at different levels of complexity. It includes low-level network classes that form interfaces to TCP, UDP, and multicast socket functions, and other classes for threading and cross-platform data conversion. Built on top of these are middle-level modules for such things as remote transfer of very large files, HTTP communications, and remote procedure calls. Above these are database modules that can be used to emulate a distributed shared memory system.

The database module provides a simple two-field database, associating arbitrary chunks of binary data with character string keys. The keys are treated like Unix directory paths, so that a hierarchical arrangement of data is possible. When a client connects to a CAVERN database, it can make asynchronous requests to fetch particular keys' values, and it can store new values for keys. Stored data is automatically reflected to all other clients by the database server. The database client class can also be used without a server, in which case it operates in a standalone mode, making it transparent to the application whether the database is network-shared or not. An additional feature of the database is that data may be stored using either a reliable or an unreliable network connection, under control of the application. This allows one to store state changes, such as a switch being turned on or off, reliably, so that all clients will be sure to receive the change, while storing data that may be a continuous stream, such as avatar positions, unreliably, so that it can be delivered to other clients more quickly.
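The database module just described pairs path-like string keys with opaque binary values, supports asynchronous fetches, and lets the application choose reliable or unreliable delivery per store. The toy interface below mirrors that shape in C++; it is an illustration only and is not the CAVERNsoft G2 API.

```cpp
#include <cstdint>
#include <functional>
#include <map>
#include <string>
#include <vector>

using Bytes = std::vector<std::uint8_t>;

class SharedDb {
public:
    // Store a value under a path-like key. 'reliable' would map to TCP-style
    // delivery for state changes, 'unreliable' to UDP-style delivery for
    // streamed data such as avatar positions.
    void put(const std::string& key, const Bytes& value, bool reliable) {
        local[key] = value;
        // Networking code would reflect (key, value) to the server here,
        // choosing the transport according to 'reliable'.
        (void)reliable;
    }

    // Ask for a key; the callback fires when the value arrives (immediately
    // here, since this standalone sketch has no server).
    void fetch(const std::string& key,
               std::function<void(const std::string&, const Bytes&)> onValue) {
        auto it = local.find(key);
        if (it != local.end()) onValue(key, it->second);
    }

private:
    std::map<std::string, Bytes> local;   // standalone-mode copy of the database
};

int main() {
    SharedDb db;
    db.put("/miletus/delfinio/door", {1}, /*reliable=*/true);       // a state change
    db.put("/users/guide1/avatar/pos", {0, 0, 0}, /*reliable=*/false);
    db.fetch("/miletus/delfinio/door",
             [](const std::string& k, const Bytes& v) { (void)k; (void)v; });
    return 0;
}
```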
iGrid 2000

The International Grid (iGrid) is a series of research demonstrations highlighting the value of international high-speed computer networks in science, media communications, and education [10]. iGrid 2000 took place at the INET 2000 conference in Yokohama, Japan, and featured twenty-four applications from North America, Europe, and Asia displayed on a CAVE, two ImmersaDesks, and the Access Grid presentation environment. iGrid was connected to the JGN, the WIDE Project Network (in cooperation with NTT, TTNet and PNJC), APAN, and the APAN/TransPAC (100 Mbps) link to STARTAP℠ in Chicago, Illinois. STARTAP serves as an international connection point for several research networks in America, Europe, and Asia. APAN, the Asia-Pacific Advanced Network, is a non-profit international consortium that provides an advanced networking environment for the Asia-Pacific research community and promotes international collaboration.

Our virtual heritage applications at iGrid 2000 were networked between the VR displays in Yokohama, a CAVE at the Electronic Visualization Laboratory in Chicago, and the Advanced Technology Center in Missouri.

Ygdrasil

Miletus and Harlem are both based on Ygdrasil. Ygdrasil is a framework that we are developing as a tool for creating networked virtual environments. It is focused on building the behaviors of virtual objects from re-usable components, and on sharing the state of an environment through a distributed scene graph mechanism. It is presently being used in the construction of several artistic and educational applications.

Ygdrasil is built in C++, around SGI's IRIS Performer visual simulation toolkit [11] and the CAVERNsoft G2 library. Performer provides a hierarchical representation of the virtual world database, called a scene graph. The scene graph is a tree that encodes the grouping of objects and the nesting of 3D transformations, and provides tools for operations such as switching elements on or off. For example, Figure 1 shows a basic scene graph for a robotic puppet, consisting of three geometric models and some transformations.

Ygdrasil focuses on constructing dynamic, interactive virtual worlds, so in addition to the basic graphical data used in Performer, its scene graph nodes can have behaviors (i.e. functions that update the nodes' state) attached to them. Tying the behaviors to the nodes has made it easy to assemble large worlds from simple components. A behavior is added by taking one of the core classes, such as a transformation node, and deriving a subclass with the new features. Individual node classes are compiled into dynamically loaded objects (DSOs), so that they can be rapidly added to a world or modified. The system includes a number of pre-made classes (also DSOs) that implement common virtual world tools – sounds, users' avatars, navigation controls, triggers that detect when a user enters an area, etc. These built-in tools simplify the quick construction of many basic applications.

Figure 1. A puppet and its scene graph (a root node with head, body, and hand transforms above the corresponding geometric models)

Figure 2. Ygdrasil – scene graph as a shared database

The Ygdrasil framework further extends the scene graph to be shared across the network. The concept of a shared scene graph has also been explored by other recent VR development systems, such as Avango (formerly known as Avocado) [12]. In a shared system, each participant has his own local copy of the common scene graph. Node data, such as lists of nodes' children, transformation matrices, and model information, are automatically distributed among these participants as the data changes or when participants first join the shared world.

In Ygdrasil, the scene graph is shared using CAVERNsoft's distributed database model. The data to be shared for any node in the scene graph are stored in the database, keyed by the node name and the data members' names (see Figure 2). The scene can be dynamic, so whenever a client first learns about a new node (by a reference to the node's name), it looks up the node and its type in the database. The client can then create an appropriate local copy of the node, and retrieve all the other data as needed.

Each particular node is considered to be owned by the host that creates it. The owning host maintains the master version of the node, and executes any behavior associated with it. All other hosts create proxy versions of the node, and only receive data for it through CAVERNsoft; they do not directly modify the node (they can, however, send messages to the master copy to request changes). The proxy version of a node is typically of the simpler parent class type, without the added behavior code. Thus, if all of the main behaviors for a virtual world are executed by a single master version of the scene, remote sites can join in this world without needing anything beyond the standard, core node types.
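The pattern of deriving a behavior from a core node class can be sketched as follows. The class names and the per-frame update hook are hypothetical (the real Ygdrasil node interface and its Performer types are not reproduced here); the point is that the behavior lives in a subclass that the framework could load as a DSO and call each frame on the master host, while remote hosts only instantiate the plain parent class as a proxy and receive the resulting state through the shared database.

    // Illustrative sketch of a behavior attached to a scene-graph node.
    // Class names and the per-frame hook are hypothetical, not the real
    // Ygdrasil interface.
    #include <cstdio>

    class TransformNode {                      // stand-in for a core transform node
    public:
        virtual ~TransformNode() = default;
        virtual void update(float dt) {}       // called once per frame on the master
        void setRotation(float degrees) { rotation_ = degrees; }
        float rotation() const { return rotation_; }
    protected:
        float rotation_ = 0.0f;
    };

    // A behavior: the master copy spins its subtree; proxies on remote hosts
    // stay plain TransformNodes and simply receive the shared rotation value.
    class Spinner : public TransformNode {
    public:
        explicit Spinner(float degPerSec) : rate_(degPerSec) {}
        void update(float dt) override {
            setRotation(rotation() + rate_ * dt);
            // A real node would also write the new value into the shared
            // database under a key such as "<nodeName>/rotation" (hypothetical).
        }
    private:
        float rate_;
    };

    int main() {
        Spinner s(90.0f);                              // 90 degrees per second
        for (int frame = 0; frame < 3; ++frame) s.update(1.0f / 60.0f);
        std::printf("rotation after 3 frames: %.2f deg\n", s.rotation());
    }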
Shared Miletus

The FHW owns two virtual reality systems, an ImmersaDesk and a ReaCTor (a CAVE-like system), that are used to present a variety of content created by Foundation staff. One of the first VR applications shown at FHW was a reconstruction of Miletus (see Figure 3). Detailed models of some of the buildings of Miletus were created, and museum visitors can explore the city as it was in antiquity. Experienced museum guides lead visitors through the exhibits; the guides have both the technical skills to operate the VR displays and the museum education skills to explain the history of the city.

Figure 3. The Delfinio of Miletus

The objective of the Shared Miletus application was to take the content that would normally be shown in the controlled environment of FHW's museum, and let remote, networked people visit it. In particular, we did not want to simply make it something like a VRML model that visitors would download and then play with on their own; instead, it was to be a dynamically shared world, "hosted" by the conference demonstrators.

In creating the demonstration of the Shared Miletus environment, we focused on two issues – guiding visitors through the city, and providing them with information about what they were seeing. These features needed to work in an internationally distributed environment, where users could come and go from the space at will.

Many museum-based VR exhibits lead visitors through the virtual world on a pre-selected path, so that users do not have to learn any special controls or know where they should be going. In our case, we wanted to give the visitors the freedom to explore Miletus at their own pace. They were given a 3D wand for simple joystick-driven navigation; a recorded introduction explained how to use the wand. To make it easier to get to places of interest, we also gave them a dynamic, virtual map. This map showed the layout of the city, the user's position in it, and also the positions of any other visitors. This helped them drive to particular buildings, or meet up with other visitors or guides from the museum. The map also served as a navigation shortcut – clicking on a particular building summoned a magic carpet that automatically brought the user to the building's entrance.

The first stage of providing visitors with information about Miletus was to include expert human guides. Official guides could enter the shared world, just like ordinary visitors. Through their avatars, and streaming network audio connections, the guides could then interact with the visitors, pointing out special details and answering questions.

Given the international scope of the shared space, we felt that a few human guides alone would not be enough. So, we placed automated information kiosks within the various buildings of Miletus. These kiosks contained recorded audio commentaries describing each building and its history. This audio was available in multiple languages; for the iGrid demo we provided English and Japanese commentaries, but given enough time and translation personnel, any number of languages could be supported. The multi-lingual capability was implemented by having each visitor carry his own virtual audio tool. The tool was effectively a part of the user's avatar, and kept track of his preferred language. When the user approached a kiosk, it detected the presence of an audio tool and sent the tool messages informing it of what recordings the kiosk could provide. If the user chose to listen to one of them, the tool would send a request back to the kiosk, asking for the sound file in the desired language. Other tools at the entrance to the world could be used to switch languages – clicking on a Japanese flag icon would send a message to the user's audio tool to use Japanese, for example. The audio tool also provided the introduction and navigation instructions in the appropriate language.
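The kiosk-to-tool exchange can be sketched with two small classes. The class names, methods, and file names below are invented for illustration; the actual Miletus nodes exchange messages through the Ygdrasil scene graph rather than through direct method calls.

    // Illustrative kiosk / audio-tool exchange; names, messages, and file
    // names are invented for illustration.
    #include <iostream>
    #include <map>
    #include <string>
    #include <utility>

    class AudioTool {
    public:
        explicit AudioTool(std::string lang) : language_(std::move(lang)) {}
        const std::string& language() const { return language_; }
        void setLanguage(std::string lang) { language_ = std::move(lang); } // e.g. flag icon clicked
        void play(const std::string& file) { std::cout << "playing " << file << "\n"; }
    private:
        std::string language_;   // the visitor's preferred language
    };

    class Kiosk {
    public:
        void addRecording(const std::string& lang, const std::string& file) {
            recordings_[lang] = file;
        }
        // Called when a user's avatar (carrying its audio tool) enters the
        // kiosk's trigger volume and asks to listen to the commentary.
        void serve(AudioTool& tool) const {
            auto it = recordings_.find(tool.language());
            if (it != recordings_.end()) tool.play(it->second);
        }
    private:
        std::map<std::string, std::string> recordings_;  // language -> sound file
    };

    int main() {
        Kiosk delfinio;
        delfinio.addRecording("en", "delfinio_en.aiff");   // hypothetical file names
        delfinio.addRecording("ja", "delfinio_ja.aiff");
        AudioTool tool("en");
        delfinio.serve(tool);      // plays the English commentary
        tool.setLanguage("ja");
        delfinio.serve(tool);      // now plays the Japanese commentary
    }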
Virtual Harlem

The goal of the Harlem project is to provide an environment that contextualizes the study of the Harlem Renaissance, an important period in African American literary history, through the construction of a virtual reality scenario that represents Harlem, New York, as it existed circa 1925-35. Networked students can navigate through the streets and buildings of Harlem, and see the shops, homes, theaters, churches, and businesses that the writers of the period experienced in their everyday lives. Students can enter the Cotton Club and listen to African American entertainers like Fats Waller or Duke Ellington. They can experience the fact that while the entertainers and servers were African American, the clientele was white. In Virtual Harlem, students can hear the music of the time, watch the dances of the period, and even listen to political speeches of figures like Marcus Garvey or enjoy a poetry reading by Langston Hughes. An instructor at one site can lead students through the environment, explaining things and answering questions, and students can present their own research.

Figure 4. Virtual Harlem on the ImmersaDesk at iGrid

The people working on Virtual Harlem had many discussions about what type of narrative could be developed to reconcile the presence of early 21st-century students and teachers in an early 20th-century environment. For the short term we loosely settled on the tour-guide narrative. The avatars for Virtual Harlem were made to function as realistic representations of the actual people leading the tours of Harlem, replete with early 21st-century style clothing.

The particular features of the Harlem environment that were implemented for iGrid 2000 were a period trolley car for transportation, some basic virtual characters, and movies in the Cotton Club. The trolley car moves automatically through the streets. Visitors can board the car by entering it, at which point they are 'attached' to it and move with it; they can exit by stepping outside of the car. Distributed along the city sidewalks are historical characters – Langston Hughes, Marcus Garvey, a group of women headed to a rent party, etc. These characters are billboarded, cutout images of the people from 1920s photographs. When a user approaches the characters, recorded speeches or conversations are played. Visitors can also enter the Cotton Club; inside is a static re-creation of patrons and staff in the main hall, and an interactive movie screen on the stage. By clicking wand buttons, any visitor can play back QuickTime movies of various performances that were given at the Cotton Club. The movies are displayed as animated textures attached to a large square that is part of the 3D environment.

Use of the Network

In running the shared versions of Miletus and Harlem, the network was used for three purposes: distributing data files, sharing the scene graph, and audio conferencing.

A VR environment is typically made up of many individual 3D models (buildings, characters, etc.), texture images for the models, and recorded sound files. The complete Miletus environment comprised roughly 650 files, for a total of 228 megabytes. Harlem was similar – 610 files, totaling 212 megabytes. All of these files needed to be downloaded to the local VR system for any remote user to join the world.
The initial version of Ygdrasil did not include support for distributing the data files automatically, so users would download them in an archive file along with the client program. Newer versions of Ygdrasil can transfer files from web servers when they do not exist locally, allowing a user to join a world with little more than the client program. In this case, the user will see the virtual world gradually build up bit by bit, with objects appearing as they are downloaded.

The first thing that a client must do when joining a shared world is to get a copy of the scene graph. This is done by giving it the network address of the machine running the CAVERN database server or reflector. The client requests information about the root node, and from there builds up a view of the complete scene. Once the program is running, it continues to receive updates for any dynamic objects (e.g. the moving trolley car in Harlem), and sends its own updates for the user's avatar, through the shared database. The larger part of these environments is static – the amount of data that is changing at any one time is small compared to the size of the entire scene.

Live audio connections are important for distributed users to talk to each other and collaborate. To enable this, we ran an audio-conferencing tool in parallel with the VR programs. This tool is a standard part of the CAVERNsoft library. It is a unicast UDP/IP-based program (multicast connections are not available between many of the sites that we work with); a central reflector is used to allow more than two hosts to connect. In addition to its use within the virtual environment, this audio link is often very useful in demo situations such as iGrid, providing a back-channel for setup and problem-solving communications between sites.

Assessment of Networked Demonstration

The "Shared Miletus" and "Virtual Harlem" projects ran successfully, networked between Yokohama, Japan and North America. Bryan Carter and Maria Roussou acted as expert networked guides and educators in the virtual worlds, discussing the history, architecture, and issues of cultural importance that the virtual environments provoked. Roussou also used the pre-recorded Japanese-language explanations as she guided tour groups who did not speak English or Greek.

Our most significant problem was the startup time for the applications. It typically took several minutes for client programs simply to create their copy of the shared scene graph, before the user could do anything. This was due to high network latency – the Yokohama-to-Chicago round-trip ping time was 150 milliseconds. The problem is related to the long fat pipe (LFP) problem in TCP/IP networking [13]. The LFP problem occurs when TCP is using a small window, and the system must wait a long time for an acknowledgment of each window of data. In our case, although we were using UDP/IP, there were similar acknowledgment delays during startup. To build the local scene graph, a proxy for every existing remote node must be created. This requires waiting for a response from the remote system with the node's type information. The creation of nodes was also effectively serialized – until one node's type was received and its proxy created, no subsequent nodes could be created. With a 150 ms round-trip time, at best 6 nodes could be created per second. The Miletus scene graph consists of 1025 nodes, thus leading to the extremely slow startup.
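Taking the figures above at face value gives a rough lower bound on the startup delay:

    1025 nodes / 6 nodes per second ≈ 171 seconds ≈ 3 minutes

which is consistent with the multi-minute startups observed at iGrid, before any model or texture data has even begun to load.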
Ygdrasil has since been redesigned to address this problem. Dummy proxies of nodes are created first, without waiting for a response from the shared database. When the type information is actually received, the dummy proxy is transparently replaced by the correct proxy. This allows many key requests to be outstanding at once. Furthermore, as all nodes can have a "children" key, this data is also requested immediately, along with the "type" key. The net result is that initialization time is proportional to the depth of the scene graph, rather than to the total number of nodes, a significant improvement.

A second difficulty also had to do with high network latencies. Certain activities can involve nodes controlled by separate hosts interacting closely. For example, when a user rides on the trolley car in Harlem, his avatar must constantly update its position based on that of the car, which is being updated by a remote host. Locally, everything will look fine to the user, but other people may see the user's avatar bobbing back and forth within the car, as the position data for the two is received from the different hosts at different times. This problem can be resolved by dynamically manipulating the scene graph: if the user's avatar node is attached below the car node, then everyone will see him moving relative to the car's coordinate system, and the avatar will not need to constantly update its position just to stay with the car.
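The fix can be sketched in scene-graph terms. The node interface below is invented for illustration and is not the actual Ygdrasil or Performer API; the point is that once the avatar is reparented under the trolley, its shared position is expressed in the car's coordinate system, so remote viewers see it carried along even when the avatar's own updates arrive late.

    // Illustrative reparenting fix; the node interface is invented.
    #include <algorithm>
    #include <vector>

    struct Node {
        float localPos[3] = {0, 0, 0};        // position relative to the parent
        Node* parent = nullptr;
        std::vector<Node*> children;

        void addChild(Node* c) { children.push_back(c); c->parent = this; }
        void removeChild(Node* c) {
            children.erase(std::remove(children.begin(), children.end(), c),
                           children.end());
            c->parent = nullptr;
        }
    };

    // Move 'avatar' from the world node to the trolley node, converting its
    // position so it does not jump at the moment of reparenting. (A full
    // implementation would convert the whole transform, not just translation.)
    void boardTrolley(Node& world, Node& trolley, Node& avatar) {
        for (int i = 0; i < 3; ++i)
            avatar.localPos[i] -= trolley.localPos[i];  // world -> trolley-relative
        world.removeChild(&avatar);
        trolley.addChild(&avatar);
        // From now on the avatar only shares a small, slowly changing offset;
        // the trolley's owning host is the one streaming the car's motion.
    }

    int main() {
        Node world, trolley, avatar;
        world.addChild(&trolley);
        world.addChild(&avatar);
        trolley.localPos[0] = 12.0f;   // car somewhere along the street
        avatar.localPos[0]  = 12.5f;   // user standing just inside it
        boardTrolley(world, trolley, avatar);   // avatar now at +0.5 relative to the car
    }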
In the case of "Virtual Harlem," a lecturer will be able to record a guided tour or a series of notes at one point in time, and students can come to listen to the information and record their own comments at a different time.AcknowledgmentsWe would like to thank the entire iGrid 2000 staff, especially Hiroshi Esaki and Goro Kunito of the University of Tokyo and Akihiro Tsutsui of NTT, for their help and extreme dedication to making the show a success.The virtual reality research, collaborations, and outreach programs at the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago are made possible by major funding from the。