A Real-Time 3D Animation Environment for Storm Surge


Workflow and Methodologies for Reality Capture Modeling


Reality capture modeling, often referred to as 3D scanning or photogrammetry, is an advanced technique that allows for the creation of three-dimensional models from real-world objects or environments. This process converts 2D images into a 3D representation, offering a detailed and accurate replication of the subject. The following outlines the detailed steps and methodologies involved in this fascinating field.

1. Data Acquisition

The initial step involves capturing data from the real-world environment or object. This can be achieved through various techniques such as photogrammetry, where overlapping images are captured using cameras or drones. These images are then processed to generate 3D point clouds and meshes. Another method is the use of LiDAR (Light Detection and Ranging) technology, which measures the distance to targets by illuminating them with laser pulses and analyzing the reflected light.

2. Alignment of Photos

After capturing the necessary images, they need to be aligned to ensure accuracy. This involves matching the overlapping features in the photos to create a coherent 3D model. Software tools like Agisoft PhotoScan or Autodesk ReCap 360 can automate this process, but manual adjustments may be required for optimal results.

3. Generation of Dense Point Cloud

Once the photos are aligned, the next step is to generate a dense point cloud. This involves converting the aligned photos into a cloud of millions of 3D points representing the surface of the object or environment. The density of the point cloud determines the level of detail in the final model.

4. Generation of Mesh

The dense point cloud is then converted into a 3D mesh. The mesh is a collection of connected triangles that approximate the shape of the object or environment. Software tools allow for the adjustment of mesh density and smoothing for a more refined model.

5. Texture Generation

To provide a realistic representation, textures need to be applied to the mesh. Textures are essentially 2D images that are mapped onto the 3D mesh, providing color, detail, and material properties. These textures can be captured using high-resolution cameras or generated from the same set of photos used for 3D reconstruction.

6. Model Editing and Optimization

Once the mesh and textures are generated, the model may require further editing and optimization. This involves removing any unwanted artifacts, smoothing edges, and adjusting the overall geometry for better accuracy. Software tools like Blender or ZBrush provide powerful editing capabilities, allowing for detailed adjustments to the model.

7. Export and Delivery

The final step involves exporting the model in a suitable format for further use or presentation. Common formats include OBJ, STL, and FBX, which are compatible with various 3D printing, animation, and rendering software. The model can then be shared, printed, animated, or integrated into other applications as needed.

In conclusion, reality capture modeling is a complex but fascinating field that allows for the creation of highly detailed and accurate 3D models from real-world objects or environments. The process involves capturing data, aligning photos, generating dense point clouds, meshes, and textures, followed by model editing and optimization, and finally, export and delivery. With advancements in technology, this field is constantly evolving, opening up new possibilities in various industries such as architecture, engineering, entertainment, and beyond.
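As a concrete illustration of how point-cloud density (step 3 above) can be controlled before meshing, the sketch below bins points into cubic voxels and keeps one centroid per voxel, a common downsampling step in reality-capture pipelines. This is a simplified, illustrative pure-Python sketch, not the internal algorithm of any particular tool; the function and variable names are assumptions.

```python
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Reduce point-cloud density by keeping one centroid per cubic voxel.

    points: iterable of (x, y, z) tuples; voxel_size: edge length of the
    cells used to bucket points. Larger voxels give a sparser cloud and
    therefore a less detailed mesh.
    """
    buckets = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        buckets[key].append((x, y, z))
    # Replace every bucket of raw points with its centroid.
    return [
        tuple(sum(coord) / len(pts) for coord in zip(*pts))
        for pts in buckets.values()
    ]

cloud = [(0.1, 0.1, 0.0), (0.2, 0.15, 0.05), (5.0, 5.0, 5.0)]
thinned = voxel_downsample(cloud, voxel_size=1.0)
# The two nearby points collapse into one centroid; the distant point survives.
```

In practice the voxel size trades file size and processing time against the level of detail preserved in the final model.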

Application of Substance Painter Software in the Teaching of 3D Scene Courses in Higher Vocational Education


Application of Substance Painter Software in the Teaching of 3D Scene Courses in Higher Vocational Education
HUANG Yaxin (Jianghan Art Vocational College, Qianjiang, Hubei Province, 433100, China)

Abstract: With the rapid development of China's 3D animation industry, the industry's technical requirements keep rising and software meeting these varied needs keeps emerging, so higher vocational colleges should likewise teach students in line with industry demand. Substance Painter is a texture-painting software package that supports physically based rendering (PBR) and offers real-time rendering, letting users watch texture changes live in the painting viewport and simplifying the model-production workflow. Because it helps users produce high-quality materials through a more efficient process, it is widely favored by practitioners in 3D animation production and also plays an important role in the teaching of 3D animation scenes. Taking the functions of Substance Painter and the concept of 3D animation scene teaching as its starting point, this paper discusses the application of Substance Painter in animation scene teaching.

Key Words: Substance Painter; Animation scene teaching; Software application; Teaching application

CLC Number: G64  Document Code: A  Article ID: 1672-3791(2023)11-0208-04

1 Basic Concepts of Substance Painter and Animation Scene Teaching
1.1 Basic concepts and uses of Substance Painter
Substance Painter is a new type of texture-painting software with a built-in 3D painting workflow and a distinctive particle-painting tool [1], made specifically for 3D practitioners.
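The PBR workflow that Substance Painter previews in real time rests on shading models evaluated per surface point. As a minimal, hedged illustration (not Substance Painter's actual shader), the sketch below computes only the energy-conserving Lambertian diffuse term; real PBR shaders layer a specular term such as GGX on top.

```python
import math

def lambert_diffuse(albedo, normal, light_dir, light_intensity=1.0):
    """Minimal physically based diffuse term: (albedo / pi) * max(0, n.l).

    albedo: (r, g, b) base color in [0, 1]; normal and light_dir are unit
    3-vectors. Dividing by pi keeps the surface from reflecting more
    energy than it receives (energy conservation).
    """
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(a / math.pi * n_dot_l * light_intensity for a in albedo)

# Surface facing the light head-on: full diffuse contribution.
rgb = lambert_diffuse((0.8, 0.2, 0.2), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
```

Because the term is so cheap to evaluate, a painting tool can re-shade the whole viewport every time a brush stroke changes the base-color map, which is exactly the real-time feedback the abstract describes.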

3ds Max 3D Animation Design and Production (2nd Edition), Courseware — 11. Environmental Effects Animation

Figure 11-27 Opening the scene and setting parameters
11.4 Atmospheric Effects
② Press "8" on the keyboard to open the Environment and Effects dialog. In the Atmosphere rollout, click the Add button, select Volume Light in the Add Atmosphere or Effect dialog that pops up, and click OK. In the Volume Light Parameters rollout, click the Pick Light button in the Lights group, then select the spotlight in the viewport to add the volume light effect to it; the result is shown in Figure 11-28. In the Volume group,
Min: sets the lowest value to be measured and represented in the rendering; values at or below it are mapped to the leftmost display color.
Max: sets the highest value to be measured and represented in the rendering; values at or above it are mapped to the rightmost display color.
Physical Scale: sets the physical scale used for exposure control.
11.3 The "Exposure Control" Rollout
11.3.3 Linear Exposure Control
Linear Exposure Control samples the rendered image, computes the scene's average luminance, and converts it to RGB values; it is suitable for low-dynamic-range scenes. Its parameters are similar to those of "Exposure Control"; for the parameter options, see "Automatic Exposure Control" (Figure 11-11).
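The behavior described above, sampling the image and deriving one global exposure from the scene's average luminance, can be sketched as follows. This is an illustrative approximation, not 3ds Max's actual implementation; the Rec. 709 luminance weights and the mid-gray target are assumptions.

```python
def linear_exposure(pixels, target_gray=0.5):
    """Scale an image so its average luminance maps to mid-gray.

    pixels: list of (r, g, b) values in linear space. The whole image
    receives a single multiplier, which is why this style of exposure
    control suits low-dynamic-range scenes.
    """
    def luma(p):
        # Rec. 709 luminance weights (an assumption for this sketch).
        return 0.2126 * p[0] + 0.7152 * p[1] + 0.0722 * p[2]

    avg = sum(luma(p) for p in pixels) / len(pixels)
    scale = target_gray / avg if avg > 0 else 1.0
    return [tuple(min(1.0, c * scale) for c in p) for p in pixels]

img = [(0.1, 0.1, 0.1), (0.3, 0.3, 0.3)]   # a dim, low-dynamic-range scene
adjusted = linear_exposure(img)
```

A high-dynamic-range scene would clip under a single global multiplier, which is why 3ds Max also offers logarithmic and automatic exposure controls.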
Figure 11-26 Noise effects with different Amount values
11.4.4 Case Study: Creating a Light Effect
Learning objective: learn to create a light effect using volume light.
Key points: complete the light effect through the use and parameter settings of the Volume Light effect.
Result file location: companion file package for this book > Chapter 11 > Case: Creating a Light Effect.
① Open the initial scene file from the book's companion file package; the starting state is shown in the left image of Figure 11-27. Select the spotlight, go to the Modify panel, and in the spotlight's parameters set Hotspot/Beam to 20 and Falloff/Field to 22; the result is shown in the right image of Figure 11-27.
Figure 11-18 Flame effects with Flame Detail values of 1, 2, and 5 (Flame Size: 50)
Figure 11-19 Flame effects with Density values of 10, 60, and 120

List of Game Engines


A game engine is the core component of a game or interactive application that drives its graphical output.

The engines below are listed alphabetically.

Contents: 1 Free engines; 2 Commercial engines (2.1 Mobile game engines, 2.2 PSP, 2.3 Games and the engines they use); 3 Engine overview (A–G); 4 Engine overview (G–Q); 5 Engine overview (R–Z); 6 See also

Free engines
- Agar – an advanced graphics application framework for 2D and 3D games.
- Allegro library – a C/C++-based game library supporting graphics, sound, input, game timers, floating point, compressed files, and GUI.
- Axiom Engine – a derivative of OGRE.
- Baja Engine – a professional-quality graphics engine, used for The Lost Mansion.
- Boom – a branch of the Doom source code, developed by TeamTNT.
- Build Engine – a first-person shooter engine, used for Duke Nukem 3D.
- BYOND – short for "Build Your Own Net Dream"; supports games of many genres, including MMORPGs.
- Ca3D-Engine – a fairly mature engine with its own SDK, world editor, and more.
- Cadabra 3D Engine – for rapid development of 3D games.
- Catmother – a BSD-licensed engine for personal, non-commercial use only; an open-sourced engine from a game company.
- CheapHack – an outdated TomazQuake-derived engine.
- Crystal Entity Layer – an extension of the Crystal Space 3D engine.
- Crystal Space – a general framework for 3D applications.
- Cube – powers the computer game of the same name.
- DarkPlaces – one of the more advanced free engines.
- Delta3d – integrates several other well-known free engines; originally developed by the US military.
- DGD – an object-oriented MUD engine.

Render


Introduction

In the world of technology and graphics, the concept of render holds significant importance. Render refers to the process of generating an image or a sequence of images from a model, utilizing computer programs or special software. This document aims to provide an in-depth understanding of rendering, its various types, applications, and the technologies involved in the process.

I. Understanding Rendering

1.1 Definition of Rendering
Rendering is the process of converting a virtual 3D model or scene into a 2D image or animation. It involves taking inputs such as lighting conditions, object properties, and camera angles to produce a high-quality visual representation. Rendering allows designers, artists, and animators to turn their ideas into reality by creating realistic images and animations.

1.2 Importance of Rendering
Rendering plays a crucial role in various industries, including architecture, entertainment, gaming, virtual reality, and film-making. It allows architects to visualize buildings before construction, helps game developers create immersive and visually pleasing games, and enables filmmakers to bring their stories to life through realistic visual effects.

II. Types of Rendering

2.1 Real-Time Rendering
Real-time rendering focuses on generating images or animations in real time, typically at interactive frame rates. It is widely used in video games, virtual reality applications, and simulations. Real-time rendering requires efficient algorithms and hardware acceleration to render frames quickly, allowing for smooth user interaction and immersive experiences.

2.2 Offline Rendering
Unlike real-time rendering, offline rendering aims to produce the highest-quality images or animations, disregarding the time it takes to render each frame. It is commonly used in film-making and computer-generated imagery (CGI). Offline rendering techniques, such as ray tracing and global illumination, accurately simulate light behavior and produce realistic and visually stunning results.

III. Rendering Techniques

3.1 Rasterization
Rasterization is a fast and efficient rendering technique used in real-time graphics. It works by converting 3D objects into 2D images by projecting them onto the screen. Rasterization utilizes the graphics processing unit (GPU) to calculate the lighting, shading, and colors of each pixel, resulting in real-time rendering suitable for video games and interactive applications.

3.2 Ray Tracing
Ray tracing is an advanced rendering technique used in offline rendering to produce highly realistic images. It simulates the behavior of light by tracing the path of virtual rays from the camera through the scene. Ray tracing accurately calculates reflections, refractions, and shadows, resulting in realistic lighting and photorealistic imagery. However, ray tracing is computationally intensive and may require hours or even days to render a single frame.

IV. Rendering Software and Tools

4.1 3D Modeling and Rendering Software
Various software packages, such as Autodesk 3ds Max, Blender, and Cinema 4D, provide comprehensive 3D modeling and rendering capabilities. These tools allow artists to create and manipulate 3D models, apply textures and materials, set up lighting, and render high-quality images or animations.

4.2 GPU Rendering Engines
To accelerate the rendering process, GPU rendering engines, such as NVIDIA's CUDA and AMD's Radeon ProRender, leverage the power of graphics cards. These engines utilize the parallel processing capabilities of GPUs to distribute the rendering workload, resulting in faster render times compared to traditional CPU-based rendering.

V. Conclusion

Render is an essential process in the world of technology and graphics. It converts virtual models into realistic images or animations, enabling designers, artists, and animators to bring their ideas to life. Real-time rendering facilitates interactive experiences in video games and virtual reality, while offline rendering produces high-quality imagery for films and CGI. Understanding rendering techniques and utilizing the right software and tools can significantly enhance the artistic and technical capabilities of professionals in various industries.
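The core geometric step of the ray tracing described in Section 3.2, intersecting a camera ray with scene geometry, can be sketched with the classic ray–sphere test. This is a minimal illustration, not production renderer code; a full tracer repeats this test per pixel against every object, then recurses for reflections and shadows.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the nearest positive hit distance along the ray, or None.

    Solves |o + t*d - c|^2 = r^2 for t; 'direction' must be a unit vector,
    so the quadratic's leading coefficient is 1.
    """
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                      # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0     # smaller root = nearer surface
    return t if t > 0 else None          # ignore hits behind the camera

# Camera at the origin looking down +z at a unit sphere centered at z = 5.
t = ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
# Nearest intersection is the sphere's front surface, at distance 4.
```

The cost noted in the article comes from running millions of such tests (and their recursive bounces) per frame, which is what rasterization avoids by projecting triangles instead.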

Simulink 3D Animation


Visualization of Simulink based applications, clockwise from bottom left: self-balancing robot, aircraft over terrain, automotive vehicle dynamics, and wind farm.

Authoring and Importing 3D Worlds
Simulink 3D Animation provides two editors for authoring and importing virtual reality worlds: V-Realm Builder and 3D World Editor.

Building 3D Worlds
V-Realm Builder in Simulink 3D Animation is a native VRML authoring tool that enables you to create 3D views and images of physical objects using VRML. 3D World Editor offers a hierarchical, tree-style view of the VRML objects that make up the virtual world. It contains a set of object, texture, transform, and material libraries that are stored locally for reuse.

3D World Editor showing a hierarchical, tree-style view (left) and scene preview (right) of components of a lunar module.

Importing 3D Content from the Web
You can build 3D worlds with several 3D authoring tools and export them to the VRML97 format for use with Simulink 3D Animation. In addition, you can download 3D content from the Web and use it to assemble detailed 3D scenes.

Importing CAD Models
3D World Editor lets you manipulate 3D VRML objects imported from most CAD packages for developing detailed 3D worlds that animate dynamic systems modeled in Simscape™, SimMechanics™, and Aerospace Blockset™. Simulink 3D Animation enables you to process VRML files created by CAD tools such as SolidWorks® and Pro/ENGINEER®. You can use the SimMechanics Link utility to automatically create SimMechanics models from CAD tools and add associated Simulink 3D Animation visualization to them.

3D animation of the dynamics of an internal combustion engine modeled in SimMechanics (top) and trajectory trace of an aircraft computed using coordinate transformations from Aerospace Blockset (bottom).

Animating 3D Worlds
Simulink 3D Animation provides bidirectional MATLAB and Simulink interfaces to 3D worlds.

MATLAB Interface to 3D Worlds
From MATLAB, you can read and change the positions and other properties of VRML objects, read signals from VRML sensors, create callbacks from graphical tools, record animations, and map data onto 3D objects. You can use MATLAB Compiler™ to generate standalone applications with Simulink 3D Animation functionality for royalty-free deployment.

MATLAB based 3D application compiled as an executable using MATLAB Compiler and deployed on an end-user machine running MATLAB Compiler Runtime.

Simulink Interface to 3D Worlds
You can control the position, rotation, and size of a virtual object in a scene to visualize its motion and deformation. During simulation, VRML object properties in the scene can also be read into Simulink. A set of vector and matrix utilities for axis transformations enables associations of Simulink signals with properties of objects in your virtual world. You can adjust views relative to objects and display Simulink signals as text in the virtual world. You can also trace the 3D trajectory, generated using Curve Fitting Toolbox™, of an object in the associated virtual scene. For example, you can perform flight-path visualization for the launch of a spacecraft.

Modeling and simulation in Simulink of a multi-agent system animated with Simulink 3D Animation. The virtual world is linked through the VR Sink block (middle) and viewed with the Simulink 3D Animation viewer (bottom).

Viewing and Interacting with 3D Worlds
Simulink 3D Animation provides VRML viewers that display your virtual worlds and record scene data. It also provides Simulink blocks and MATLAB functions for user interaction and virtual prototyping with 3D input devices, including 3D mice and force-feedback joysticks.

VRML Viewers
Simulink 3D Animation includes viewers that let you navigate the virtual world by zooming, panning, moving sideways, and rotating about points of interest known as viewpoints. In the virtual world, you can establish viewpoints that emphasize areas of interest, guide visitors, or observe an object in motion from different positions. During a simulation, you can switch between these viewpoints.

Integrating with MATLAB Handle Graphics
The Simulink 3D Animation viewer integrates with MATLAB figures so that you can combine virtual scenes with MATLAB Handle Graphics® and multiple views of one or more virtual worlds.

Example of a graphical interface authored with MATLAB Handle Graphics. The screen shows a car suspension test on a race track that combines multiple 3D views (top), including speed data and visualizations of the steering wheel and force triads, with 2D graphics for trend analysis (bottom).

Recording and Sharing Animations
Simulink 3D Animation enables you to record scene data and share your work.

Recording Scene Data
Simulink 3D Animation enables you to control frame snapshots (captures) of a virtual scene, or record animations into video files. You can save a frame snapshot of the current viewer scene as a TIFF or PNG file. You can schedule and configure recordings of animation data into AVI video files and VRML animation files for future playback. You can use video and image processing techniques on frame snapshot and animation data. These approaches enable the development of control algorithms using a visual feedback loop through the link with a virtual reality environment instead of physical experimental setups.

Enabling Collaborative Environments
Simulink 3D Animation lets you view and interact with simulated virtual worlds on one machine that is running Simulink or on networked computers that are connected locally or via the Internet. In a collaborative work environment, you can view an animated virtual world on multiple client machines connected to a host server through the TCP/IP protocol. When you work in an individual (nonnetworked) environment, your modeled system and the 3D visualization run on the same host.

Visualizing Real-Time Simulations
Simulink 3D Animation contains functionality to visualize real-time simulations and connect with input hardware. You can use C code generated from Simulink models using Simulink Coder™ to drive animations. This approach enhances your hardware-in-the-loop simulations or rapid prototyping applications on xPC Target™ and Real-Time Windows Target™ by providing a visual animation of your dynamic system model as it connects with real-time hardware.

Components of an xPC Target real-time testing environment that includes Simulink 3D Animation for rapid prototyping (top) and hardware-in-the-loop simulation (bottom).
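The "vector and matrix utilities for axis transformations" mentioned above amount to applying rotation (and translation) matrices to object coordinates before writing them into the virtual world. The sketch below shows the underlying math in plain Python; it is illustrative only, since Simulink 3D Animation's actual utilities are MATLAB functions, and the names here are assumptions.

```python
import math

def rot_z(angle_rad):
    """3x3 rotation matrix about the z-axis (right-handed convention)."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def apply(matrix, vec):
    """Multiply a 3x3 matrix by a 3-vector."""
    return tuple(sum(m * v for m, v in zip(row, vec)) for row in matrix)

# Rotate a point on the x-axis by 90 degrees about z: it lands on the y-axis.
p = apply(rot_z(math.pi / 2), (1.0, 0.0, 0.0))
```

In a visualization loop, a matrix like this would be rebuilt from the simulated heading at each time step and used to update the object's orientation in the scene.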

An Introduction to 3D Technology (Popular Science English)


Three-dimensional technology, also known as 3D technology, refers to the creation and manipulation of three-dimensional digital objects and environments. It involves the use of computer graphics and imaging techniques to create realistic and interactive virtual worlds that can be experienced through various devices such as computers, smartphones, and specialized glasses or headsets.

There are several types of 3D technology, including:

1. Stereoscopy: This technique uses two slightly different images to create a 3D effect, simulating the way human eyes perceive depth in the real world.
2. Volumetric display: This technology creates a three-dimensional image by projecting light in all directions, allowing viewers to see the image from any angle without the need for special glasses.
3. Augmented reality (AR): AR technology overlays digital information onto the real world, creating an enhanced experience for users. For example, AR apps can be used to visualize furniture in a room before making a purchase.
4. Virtual reality (VR): VR technology immerses users in a completely virtual environment, often using a headset to block out the real world and enhance the sense of presence in the virtual world.
5. 3D printing: This technology allows for the creation of physical objects from digital files, using a process called additive manufacturing. Objects are built layer by layer, with each layer being printed on top of the previous one.

3D technology has numerous applications across various industries, including entertainment, gaming, education, healthcare, architecture, and engineering. For example, 3D animation is commonly used in movies and video games to create lifelike characters and environments, while 3D printing is used in the medical field to create prosthetics and implants.
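Stereoscopy (type 1 above) works because each eye receives a horizontally shifted view of the scene. The shift, called disparity, follows the standard pinhole-stereo relation disparity = baseline × focal length / depth, so nearer objects shift more. The sketch below uses assumed example numbers purely for illustration.

```python
def disparity_px(baseline_m, focal_px, depth_m):
    """Horizontal pixel disparity between left/right views of a point.

    Standard pinhole stereo relation: disparity = baseline * focal / depth.
    Nearer points shift more between the two images, which is the depth
    cue a stereoscopic display reproduces.
    """
    return baseline_m * focal_px / depth_m

near = disparity_px(0.065, 800, 2.0)    # object 2 m away, ~65 mm eye baseline
far = disparity_px(0.065, 800, 20.0)    # same object ten times farther away
# 'near' is ten times larger than 'far': depth and disparity are inversely related
```

The same relation, run in reverse, is how stereo cameras recover depth maps from a pair of images.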

Introduction to New Features in MotionBuilder 2012


Autodesk MotionBuilder® is the leading 3D character animation software package for games, film, broadcast, and multimedia production.

This award-winning software combines a unique real-time architecture, animation layering, the Story timeline nonlinear editing environment, and intuitive workflows and tools that streamline the character animation pipeline.

Autodesk MotionBuilder places great emphasis on workflow efficiency, enabling traditional animators and technical directors to take on the most demanding high-volume animation projects.

The software runs on the Windows® and Mac® operating systems and fully supports the platform-independent FBX 3D authoring and interchange format, which lets MotionBuilder integrate with any FBX-enabled software in a production pipeline.

Feature Highlights

Real-time animation tools: With tools designed to maximize speed and efficiency, you can play back character animation instantly, reducing the need to preview or render your work. MotionBuilder's real-time capabilities make it the character animation software of choice for individual artists and the animation backbone of large production pipelines.

Innovative HumanIK character technology: With its powerful full-body FK/IK rigging tools, Autodesk MotionBuilder provides unmatched automated character setup. You can quickly set up powerful rigs regardless of a character's size or proportions, and easily customize their look and feel without writing scripts or setting up constraints. You can also use real-time motion retargeting to reuse animation across different characters.

Unified nonlinear editing environment: The Story timeline is the only nonlinear environment that combines multiple types of content in a single streamlined editor, built from the ground up on MotionBuilder's real-time architecture.

Previsualization and layout: The Story timeline lets you easily mix, edit, and sequence tracks containing animation, camera shots, digital video, and audio. It also lets you set up shots and dynamically resequence and retime them, just like traditional nonlinear video editing.

FBX: Autodesk MotionBuilder fully supports the platform-independent FBX® high-end 3D authoring and interchange format, letting you quickly and easily acquire and exchange 3D assets and media from a wide variety of sources.

Non-ground measurements are identified based on an elevation difference threshold. The elevation threshold dh_k can be given by

    dh_k = dh_0                        if w_k <= 3
    dh_k = s(w_k - w_{k-1})c + dh_0    if w_k > 3
    dh_k = dh_max                      if dh_k > dh_max        (4)

where w_k is the filtering window size at the kth iteration and dh_max is a predefined maximum threshold.
ABSTRACT This paper describes an approach to develop a high performance animation environment for storm surge. The system provides the capability to simulate the storm surge effects in the physical world by (1) modeling a region using the airborne light Detection And Ranging (LIDAR) data, USGS orthophotos, RLG road data and photos; (2) animating the storm impact by using the features of this model; and (3) providing the capability for users to explore the animation environment. We present our system by modeling the dataset collected from Ft. Lauderdale, a region in South Florida, USA. 1. INTRODUCTION With the availability of digital data archive, the exponential growth of the affordable computational power and maturation of computer graphics technology, real time animations of the locations and events in the physical world become possible. Real time modeling of the physical world has many uses, such as disaster impact prediction, disaster recovery planning and training, urban planning and virtual tourism. However, the current state of technology lacks the capability to translate storm damage predictions into a meaningful form by depicting the actual damage estimate at a location efficiently to be understandable by the general public who has little knowledge of computations, meteorology and mechanics. In order to address this issue, we use the progressive morphology filter, developed by our group, to process the LIDAR data to acquire a highresolution Digital Terrain Model (DTM) automatically [1], utilize the OpenGL technology [6], 3D Studio tools [7] and the Virtual Terrain Project (VTP) [5] to create the 3D interactive environments, and extend the capability to animate buildings, vegetation and flooding as it pertains to storm surge effects.
Figure 1. System architecture. The dataset processing module applies the progressive morphology filter to the LIDAR data and corrects the USGS/RLG data for different projections; the model construction module builds the building models; and the animation module produces the building, vegetation, and flooding animations.
The DTM generating steps using the progressive morphological filter are shown in Table 1.

Table 1. DTM generation steps
1. Given a set of LIDAR measurements P = {p1, p2, …, pn}, where n is the total number of points in the measurements and pi = (xi, yi, zi), sample them into 2 m × 2 m cells. If the number of sampled points in a cell is m > 1, select the point p with the minimum elevation zp; if m = 0, use nearest-neighborhood interpolation to derive an elevation.
2. Using the initial filtering window size w0 and elevation threshold dh0 according to Equations (3) and (4), apply the morphological filter, whose major component is an opening operation, to the measurements.
3. Obtain the non-ground points pi,k with zi,k > dhk and the approximate surface model.
4. Continue to calculate the next values for wk and dhk, applying the morphological filter to the surface model obtained from the previous iteration, until wk is greater than a predefined maximum value.
5. Generate the DTM from the dataset after the non-ground measurements have been removed.
This paper is organized as follows. Section 2 outlines the system architecture and introduces three modules in this system. Section 3 concludes this paper. 2. SYSTEM PARADIGM The high-level system architecture of our proposed approach is outlined in Figure 1. As can be seen from this figure, there are three modules in this system: dataset processing module, model construction module and animation module.
The goal of the dataset processing module is to generate a high-resolution DTM for the area by processing the datasets automatically. The methodology adopted in this system is the progressive morphological filter [1] for removing the non-ground measurements from the LIDAR data. The LIDAR data were derived using an Optech 1210 LIDAR mapping system. Each flight surveyed a 600 m wide swath with a 0.3 m diameter laser footprint spaced approximately every 2.5 m. The raw points are sampled into 2 m × 2 m cells. If more than one measurement falls within a cell, the point with the minimum elevation is selected. If there is no measurement for a cell, nearest-neighborhood interpolation is used to derive an elevation. In the progressive morphological filter, two fundamental operations based on set theory, dilation and erosion, are extended to remove non-ground measurements from the LIDAR data. Considering a LIDAR measurement p(x, y, z), the dilation of elevation z at x and y is defined as

    d_p = max (z_p),  for (x_p, y_p) in window w

where the points (x_p, y_p, z_p) are the neighbors of p within the filtering window w.
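The dilation/erosion operations and the iterative procedure of Table 1 can be sketched, in simplified one-dimensional form, as follows. This is an illustrative reconstruction, not the authors' code; the window sizes and thresholds (w_k, dh_k) are passed in directly rather than derived from Equations (3) and (4), and the grid is 1-D instead of the paper's 2 m × 2 m raster.

```python
def opening(z, half_window):
    """Morphological opening (erosion then dilation) on a 1-D elevation grid."""
    n = len(z)
    def erode(a):
        return [min(a[max(0, i - half_window):i + half_window + 1]) for i in range(n)]
    def dilate(a):
        return [max(a[max(0, i - half_window):i + half_window + 1]) for i in range(n)]
    return dilate(erode(z))

def progressive_filter(z, windows, thresholds):
    """Flag non-ground cells by comparing elevations against an opened surface.

    windows/thresholds: increasing half-window sizes w_k and elevation
    thresholds dh_k. A cell is marked non-ground once its elevation exceeds
    the opened surface by more than dh_k at any scale; each iteration then
    filters the surface produced by the previous one, as in Table 1.
    """
    surface = list(z)
    ground = [True] * len(z)
    for w, dh in zip(windows, thresholds):
        opened = opening(surface, w)
        for i, (zi, oi) in enumerate(zip(surface, opened)):
            if zi - oi > dh:
                ground[i] = False    # too abrupt to be terrain: building, tree, ...
        surface = opened             # next iteration works on the opened surface
    return ground

# Flat terrain at 10 m elevation with a 6 m-tall "building" at cells 4-5.
cells = [10, 10, 10, 10, 16, 16, 10, 10, 10]
mask = progressive_filter(cells, windows=[1, 2], thresholds=[0.5, 1.0])
# Cells 4 and 5 are flagged as non-ground; the DTM keeps the rest.
```

Growing the window while raising the threshold is what lets the filter remove large buildings without also flattening genuine terrain slopes, which is the key property of the progressive variant.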