Geometric Modeling

Geometric modeling is an essential aspect of computer-aided design (CAD) and computer graphics. It involves creating digital representations of physical objects and environments using mathematical algorithms and geometric shapes. Geometric modeling is used in various industries, including architecture, engineering, manufacturing, and entertainment. It enables designers and engineers to visualize and analyze complex structures, simulate real-world scenarios, and create realistic animations and visual effects.

One of the primary challenges in geometric modeling is accurately representing the intricate details of real-world objects. This requires the use of advanced mathematical techniques and algorithms to create smooth surfaces, sharp edges, and complex geometries. Additionally, geometric modeling must account for physical properties such as material properties, lighting, and environmental factors to create realistic and visually appealing simulations.

From an engineering perspective, geometric modeling plays a crucial role in product design and development. Engineers use geometric modeling to create 3D models of components and assemblies, analyze their structural integrity and performance, and optimize their designs for manufacturing. Geometric modeling also enables engineers to simulate the behavior of products under various conditions, such as stress, heat, and fluid flow, allowing them to identify potential issues and make informed design decisions.

In the field of architecture, geometric modeling is used to create detailed 3D models of buildings, landscapes, and urban environments. Architects and urban planners use geometric modeling to visualize their designs, analyze spatial relationships, and communicate their ideas to clients and stakeholders. Geometric modeling also allows architects to simulate the impact of natural light, shadows, and environmental factors on their designs, helping them create sustainable and energy-efficient buildings.
In the manufacturing industry, geometric modeling is essential for creating digital prototypes of products and production processes. Manufacturers use geometric modeling to design and optimize manufacturing equipment, plan production workflows, and simulate the assembly and operation of complex machinery. Geometric modeling also enables manufacturers to identify potential issues in the production process, such as interference between components or inefficient material usage, and make necessary adjustments before physical production begins.

In the entertainment industry, geometric modeling is used to create realistic 3D models of characters, props, and environments for movies, video games, and virtual reality experiences. Artists and animators use geometric modeling to bring their creative visions to life, adding intricate details and textures to their digital creations. Geometric modeling also plays a crucial role in simulating physical phenomena such as fluid dynamics, cloth simulation, and particle effects, allowing for immersive and visually stunning visual effects.

Overall, geometric modeling is a versatile and indispensable tool for a wide range of industries, enabling professionals to create, analyze, and simulate complex objects and environments with a high degree of accuracy and realism. As technology continues to advance, the capabilities of geometric modeling are expected to expand, allowing for even more sophisticated and immersive digital experiences in the future.
Screened Poisson Surface Reconstruction

Screened Poisson Surface Reconstruction
MICHAEL KAZHDAN, Johns Hopkins University
and HUGUES HOPPE, Microsoft Research

Poisson surface reconstruction creates watertight surfaces from oriented point sets. In this work we extend the technique to explicitly incorporate the points as interpolation constraints. The extension can be interpreted as a generalization of the underlying mathematical framework to a screened Poisson equation. In contrast to other image and geometry processing techniques, the screening term is defined over a sparse set of points rather than over the full domain. We show that these sparse constraints can nonetheless be integrated efficiently. Because the modified linear system retains the same finite-element discretization, the sparsity structure is unchanged, and the system can still be solved using a multigrid approach. Moreover, we present several algorithmic improvements that together reduce the time complexity of the solver to linear in the number of points, thereby enabling faster, higher-quality surface reconstructions.

Categories and Subject Descriptors: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling

Additional Key Words and Phrases: screened Poisson equation, adaptive octree, finite elements, surface fitting

ACM Reference Format: Kazhdan, M., and Hoppe, H. Screened Poisson surface reconstruction.
ACM Trans. Graph. NN, N, Article NN (Month YYYY), PP pages. DOI = 10.1145/XXXXXXX.YYYYYYY

1. INTRODUCTION

Poisson surface reconstruction [Kazhdan et al. 2006] is a well-known technique for creating watertight surfaces from oriented point samples acquired with 3D range scanners. The technique is resilient to noisy data and misregistration artifacts. However, as noted by several researchers, it suffers from a tendency to over-smooth the data [Alliez et al. 2007; Manson et al. 2008; Calakli and Taubin 2011; Berger et al. 2011; Digne et al. 2011].

In this work, we explore modifying the Poisson reconstruction algorithm to incorporate positional constraints. This modification is inspired by the recent reconstruction technique of Calakli and Taubin [2011]. It also relates to recent work in image and geometry processing [Nehab et al. 2005; Bhat et al. 2008; Chuang and Kazhdan 2011], in which a data fidelity term is used to "screen" the associated Poisson equation. In our surface reconstruction context, this screening term corresponds to a soft constraint that encourages the reconstructed isosurface to pass through the input points.

The approach we propose differs from the traditional screened Poisson formulation in that the position and gradient constraints are defined over different domain types. Whereas gradients are constrained over the full 3D space, positional constraints are introduced only over the input points, which lie near a 2D manifold.
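A minimal 1D analogue may help make this hybrid constraint concrete. The sketch below is not the paper's octree/B-spline implementation: it uses pure Python and finite differences on a regular grid, and the variable names, grid size, and the choice of the screening weight are our own illustrative assumptions. Gradients are fit over the whole interval, while the screening term is applied only at two sample locations where the indicator should vanish:

```python
# Dense Gaussian elimination with partial pivoting (toy helper).
def solve(A, b):
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

n = 64
h = 1.0 / n
k1, k2 = n // 4, 3 * n // 4      # sample locations: the "surface" points

# Ground-truth indicator: +1/2 inside, -1/2 outside, zero at the samples.
chi_true = [0.5 if k1 < i < k2 else (0.0 if i in (k1, k2) else -0.5)
            for i in range(n + 1)]
# Oriented samples provide a vector field = gradient of the indicator.
v = [(chi_true[i + 1] - chi_true[i]) / h for i in range(n)]

# Normal equations of E(chi) = sum_i h*((chi[i+1]-chi[i])/h - v[i])^2 ...
N = n + 1
A = [[0.0] * N for _ in range(N)]
b = [0.0] * N
for i in range(n):
    A[i][i] += 1.0 / h
    A[i + 1][i + 1] += 1.0 / h
    A[i][i + 1] -= 1.0 / h
    A[i + 1][i] -= 1.0 / h
    b[i] -= v[i]
    b[i + 1] += v[i]

# ... plus the screening term alpha * chi(p)^2, applied only at the samples.
alpha = 4.0
for k in (k1, k2):
    A[k][k] += alpha

x = solve(A, b)   # chi crosses zero at the samples, ~ +/-1/2 elsewhere
```

The screening rows also make the otherwise rank-deficient gradient-only system positive definite, so the solver simultaneously interpolates the samples and fixes the additive constant that a pure gradient fit leaves free.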
We show how these two types of constraints can be efficiently integrated, so that we can leverage the original multigrid structure to solve the linear system without incurring a significant overhead in space or time.

To demonstrate the benefits of screening, Figure 1 compares results of the traditional Poisson surface reconstruction and the screened Poisson formulation on a subset of 11.4M points from the scan of Michelangelo's David [Levoy et al. 2000]. Both reconstructions are computed over a spatial octree of depth 10, corresponding to an effective voxel resolution of 1024^3. Screening generates a model that better captures the input data (as visualized by the surface cross-sections overlaid with the projection of nearby samples), even though both reconstructions have similar complexity (6.8M and 6.9M triangles respectively) and required similar processing time (230 and 272 seconds respectively, without parallelization).¹

Another contribution of our work is to modify both the octree structure and the multigrid implementation to reduce the time complexity of solving the Poisson system from log-linear to linear in the number of input points. Moreover, we show that hierarchical point clustering enables screened Poisson reconstruction to attain this same linear complexity.

2. RELATED WORK

Reconstructing surfaces from scanned points is an important and extensively studied problem in computer graphics. The numerous approaches can be broadly categorized as follows.

Combinatorial Algorithms. Many schemes form a triangulation using a subset of the input points [Cazals and Giesen 2006].
Space is often discretized using a tetrahedralization or a voxel grid, and the resulting elements are partitioned into inside and outside regions using an analysis of cells [Amenta et al. 2001; Boissonnat and Oudot 2005; Podolak and Rusinkiewicz 2005], eigenvector computation [Kolluri et al. 2004], or graph cut [Labatut et al. 2009; Hornung and Kobbelt 2006].

Implicit Functions. In the presence of sampling noise, a common approach is to fit the points using the zero set of an implicit function, such as a sum of radial bases [Carr et al. 2001] or piecewise polynomial functions [Ohtake et al. 2005; Nagai et al. 2009]. Many techniques estimate a signed-distance function [Hoppe et al. 1992; Bajaj et al. 1995; Curless and Levoy 1996]. If the input points are unoriented, an important step is to correctly infer the sign of the resulting distance field [Mullen et al. 2010].

¹The performance of the unscreened solver is measured using our implementation with screening weight set to zero. The implementation of the original Poisson reconstruction runs in 412 seconds.

Fig. 1: Reconstruction of the David head‡, comparing traditional Poisson surface reconstruction (left) and screened Poisson surface reconstruction which incorporates point constraints (center). The rightmost diagram plots pixel depth (z) values along the colored segments together with the positions of nearby samples. The introduction of point constraints significantly improves fit accuracy, sharpening the reconstruction without amplifying noise.

Our work extends Poisson surface reconstruction [Kazhdan et al. 2006], in which the implicit function corresponds to the model's indicator function χ. The function χ is often defined to have value 1 inside and value 0 outside the model. To simplify the derivations, in this paper we define χ to be 1/2 inside and −1/2 outside, so that its zero isosurface passes near the points. The function χ is solved using a Laplacian system
discretized over a multiresolution B-spline basis, as reviewed in Section 3.

Alliez et al. [2007] form a Laplacian system over a tetrahedralization, and constrain the solution's biharmonic energy; the desired function is obtained as the solution to an eigenvector problem. Manson et al. [2008] represent the indicator function χ using a wavelet basis, and efficiently compute the basis coefficients using simple local sums over an adapted octree. Calakli and Taubin [2011] optimize a signed-distance function to have value zero at the points, have derivatives that agree with the point normals, and minimize a Hessian smoothness norm. The resulting optimization involves a bilaplacian operator, which requires estimating derivatives of higher order than in the Laplacian. The reconstructed surfaces are shown to have good accuracy, strongly suggesting the importance of explicitly fitting the points within the optimization. This motivated us to explore whether a Laplacian system could be extended in this respect, and also be compatible with a multigrid solver.

Screened Poisson Surface Fitting. The method of Nehab et al. [2005], which simultaneously fits position and normal constraints, may also be viewed as the solution of a screened Poisson equation. The fitting algorithm assumes that a 2D parametric domain (i.e., a plane or triangle mesh) is already established. The position and derivative constraints are both defined over this 2D domain. In contrast, in Poisson surface reconstruction the 2D domain manifold is initially unknown, and therefore the goal is to infer an indicator function χ rather than a parametric function. This leads to a hybrid problem with derivative (Laplacian) constraints defined densely over 3D and position constraints defined sparsely on the set of points sampled near the unknown 2D manifold.

3. REVIEW OF POISSON SURFACE RECONSTRUCTION

The approach of Poisson surface reconstruction is based on the observation that the (inward pointing) normal field of the boundary of a solid can be interpreted
as the gradient of the solid's indicator function. Thus, given a set of oriented points sampling the boundary, a watertight mesh can be obtained by (1) transforming the oriented point samples into a continuous vector field in 3D, (2) finding a scalar function whose gradients best match the vector field, and (3) extracting the appropriate isosurface. Because our work focuses primarily on the second step, we review it here in more detail.

Scalar Function Fitting. Given a vector field V : R^3 → R^3, the goal is to solve for the scalar function χ : R^3 → R minimizing:

    E(\chi) = \int \|\nabla\chi(p) - \vec{V}(p)\|^2 \, dp.    (1)

Using the Euler-Lagrange formulation, the minimum is obtained by solving the Poisson equation: \Delta\chi = \nabla \cdot \vec{V}.

System Discretization. The Galerkin formulation is used to transform this into a finite-dimensional system [Fletcher 1984]. First, a basis {B_1, ..., B_N} : R^3 → R is chosen, namely a collection of trivariate (usually triquadratic) B-spline functions. With respect to this basis, the discretization becomes:

    \langle \Delta\chi, B_i \rangle_{[0,1]^3} = \langle \nabla \cdot \vec{V}, B_i \rangle_{[0,1]^3},    1 ≤ i ≤ N,

where \langle \cdot, \cdot \rangle_{[0,1]^3} is the standard inner product on the space of (scalar- and vector-valued) functions defined on the unit cube:

    \langle F, G \rangle_{[0,1]^3} = \int_{[0,1]^3} F(p) \cdot G(p) \, dp,
    \langle \vec{U}, \vec{V} \rangle_{[0,1]^3} = \int_{[0,1]^3} \langle \vec{U}(p), \vec{V}(p) \rangle \, dp.

Since the solution is itself expressed in terms of the basis functions:

    \chi(p) = \sum_{i=1}^{N} x_i B_i(p),

finding the coefficients {x_i} of the solution reduces to solving the linear system Ax = b where:

    A_{ij} = \langle \nabla B_i, \nabla B_j \rangle_{[0,1]^3}  and  b_i = \langle \vec{V}, \nabla B_i \rangle_{[0,1]^3}.    (2)

The basis functions {B_1, ..., B_N} are chosen to be compactly supported, so most pairs of functions do not have overlapping support, and thus the matrix A is sparse.

Because the solution is expected to be smooth away from the input samples, the linear system is discretized by first adapting an octree to the input samples and then associating an (appropriately
scaled and translated) trivariate B-spline function to each octree node. This provides high-resolution detail in the vicinity of the surface while reducing the overall dimensionality of the system.

System Solution. Given the hierarchy defined by an octree of depth D, a multigrid approach is used to solve the linear system. The basis functions are partitioned according to the depths of their associated nodes and, for each depth d, a linear system A^d x^d = b^d is defined using the corresponding B-splines {B^d_1, ..., B^d_{N_d}}, such that

    \chi(p) = \sum_{d=0}^{D} \sum_i x^d_i B^d_i(p).

Because the octree-selected B-spline functions do not form a complete grid at each depth, it is generally not possible to prolong the solution x^d at depth d into the solution x^{d+1} at depth d+1. (The B-spline associated with a given node is a sum of B-spline functions associated not only with its own child nodes, but also with child nodes of its neighbors.) Instead, the constraints at depth d+1 are adjusted to account for the part of the solution already realized at coarser depths. Pseudocode for a cascadic solver, where the solution is only relaxed on the up-stroke of the V-cycle, is given in Algorithm 1.

Algorithm 1: Cascadic Poisson Solver
1  For d ∈ {0, ..., D}                 Iterate from coarse to fine
2    For d' ∈ {0, ..., d−1}            Remove the constraints
3      b^d = b^d − A^{dd'} x^{d'}        met at coarser depths
4    Relax A^d x^d = b^d               Adjust the system at depth d

Here, A^{dd'} is the N_d × N_{d'} matrix used to transform solution coefficients at depth d' into constraints at depth d:

    A^{dd'}_{ij} = \langle \nabla B^d_i, \nabla B^{d'}_j \rangle_{[0,1]^3}.

Note that, by definition, A^d = A^{dd}.

Isosurface Extraction. Solving the Poisson equation, one obtains a function χ that approximates the indicator function. Ideally, the function's zero level-set should therefore correspond to the desired surface. In practice however, the function χ can differ from the true indicator function due to several sources of error:
—The point sampling may be noisy, possibly containing outliers.
—The Galerkin discretization is only an approximation of the continuous problem.
—The
point sampling density is approximated during octree construction.
To mitigate these errors, in [Kazhdan et al. 2006] the implicit function is adjusted by globally subtracting the average value of the function at the input samples.

4. INCORPORATING POINT CONSTRAINTS

The original Poisson surface reconstruction algorithm adjusts the implicit function using a single global offset such that its average value at all points is zero. However, the presence of errors can cause the implicit function to drift so that no global offset is satisfactory. Instead, we seek to explicitly interpolate the points.

Given the set of input points P with weights w : P → R≥0, we add to the energy of Equation 1 a term that penalizes the function's deviation from zero at the samples:

    E(\chi) = \int \|\vec{V}(p) - \nabla\chi(p)\|^2 \, dp + \alpha \cdot \frac{\mathrm{Area}(P)}{\sum_{p \in P} w(p)} \sum_{p \in P} w(p) \, \chi^2(p)    (3)

where α is a weight that trades off the importance of fitting the gradients and fitting the values, and Area(P) is the area of the reconstructed surface, estimated by computing the local sampling density as in [Kazhdan et al. 2006]. In our implementation, we set the per-sample weights w(p) = 1, although one can also use confidence values if these are available.

The energy can be expressed concisely as

    E(\chi) = \langle \vec{V} - \nabla\chi, \vec{V} - \nabla\chi \rangle_{[0,1]^3} + \alpha \, \langle \chi, \chi \rangle_{(w,P)}    (4)

where \langle \cdot, \cdot \rangle_{(w,P)} is the bilinear, symmetric, positive, semi-definite form on the space of functions in the unit cube, obtained by taking the weighted sum of function values:

    \langle F, G \rangle_{(w,P)} = \frac{\mathrm{Area}(P)}{\sum_{p \in P} w(p)} \sum_{p \in P} w(p) \cdot F(p) \cdot G(p).

4.1 Interpretation as a Screened Poisson Equation

The energy in Equation 4 combines a gradient constraint integrated over the spatial domain with a value constraint summed at discrete points. As shown in the appendix, its minimization can be interpreted as a screened Poisson equation (\Delta - \alpha \tilde{I})\chi = \nabla \cdot \vec{V} with an appropriately defined operator \tilde{I}.

4.2 Discretization

We apply a discretization similar to that in Section 3 to the minimization of the energy in Equation 4. The coefficients of the solution χ with respect to the basis {B_1, ..., B_N} are again obtained by solving a linear
system of the form Ax = b. The right-hand side b is unchanged because the constrained value at the sample points is zero. Matrix A now includes the point constraints:

    A_{ij} = \langle \nabla B_i, \nabla B_j \rangle_{[0,1]^3} + \alpha \, \langle B_i, B_j \rangle_{(w,P)}.    (5)

Note that incorporating the point constraints does not change the sparsity of matrix A because B_i(p) \cdot B_j(p) is nonzero only if the supports of the two functions overlap, in which case the Poisson equation has already introduced a nonzero entry in the matrix.

As in Section 3, we solve this linear system using a cascadic multigrid algorithm: iterating over the octree depths from coarsest to finest, adjusting the constraints, and relaxing the system. Similar to Equation 5, the matrix used to transform a solution at depth d' into a constraint at depth d is expressed as:

    A^{dd'}_{ij} = \langle \nabla B^d_i, \nabla B^{d'}_j \rangle_{[0,1]^3} + \alpha \, \langle B^d_i, B^{d'}_j \rangle_{(w,P)}.

Fig. 2: Visualizations of the reconstructed implicit function along a planar slice through the cow‡ (shown in blue on the left), for the original Poisson solver, and for the screened Poisson solver without and with scale-independent screening.

This operator adjusts the constraint b^d (line 3 of Algorithm 1) not only by removing the Poisson constraints met at coarser resolutions, but also by modifying the constrained values at points where the coarser solution does not evaluate to zero.

4.3 Scale-Independent Screening

To balance the two energy terms in Equation 3, it is desirable to adjust the screening parameter α such that (1) the reconstructed surface shape is invariant under scaling of the input points with respect to the solver domain, and (2) the prolongation of a solution at a coarse depth is an accurate estimate of the solution at a finer depth in the cascadic multigrid approach. We achieve both these goals by adjusting the relative weighting of position and gradient constraints across the different octree depths. Noting that the magnitude of the gradient constraint scales
with resolution, we double the weight of the interpolation constraint with each depth:

    A^{dd'}_{ij} = \langle \nabla B^d_i, \nabla B^{d'}_j \rangle_{[0,1]^3} + 2^d \alpha \, \langle B^d_i, B^{d'}_j \rangle_{(w,P)}.

The adaptive weight of 2^d is chosen to keep the Laplacian and screening constraints around the surface in balance. To see this, assume that the points are locally planar, and consider the row of the system matrix corresponding to an octree node overlapping the points. The coefficients of the system in that row are the sum of Laplacian and screening terms. If we consider the rows corresponding to the child nodes that overlap the surface, we find that the contribution from the Laplacian constraints scales by a factor of 1/2 while the contribution from the screening term scales by a factor of 1/4.² Thus, scaling the screening weights by a factor of two with each resolution keeps the two terms in balance.

²For the Laplacian term, the Laplacian scales by a factor of 4 with refinement, and volumetric integrals scale by a factor of 1/8. For the screening term, area integrals scale by a factor of 1/4.

Figure 2 shows the benefit of scale-independent screening in reconstructing a cow model. The leftmost image shows a plane passing through the bounding cube of the cow, and the images to the right show the values of the computed indicator function along that plane, for different implementations of the solver. As the figure shows, the unscreened Poisson solver provides a good approximation of the indicator functions, with values inside (resp. outside) the surface approximately 1/2 (resp. −1/2). However, applying the same solver to the screened Poisson equation (second from right) provides a solution that is only correct near the input samples and returns to zero near the faces of the bounding cube, potentially resulting in spurious surface sheets away from the surface. It is only with scale-independent screening (right) that we obtain a high-quality solution to the screened Poisson equation.

Using this resolution-adaptive weighting, our system has the property that the reconstruction
obtained by solving at depth D is identical to the reconstruction that would be obtained by scaling the point set by 1/2 and solving at depth D+1.

To see this, we consider the two energies that guide the reconstruction, E_{\vec{V}}(\chi) measuring the extent to which the gradients of the solution match the prescribed vector field, and E_{(w,P)}(\chi) measuring the extent to which the solution meets the screening constraint:

    E_{\vec{V}}(\chi) = \int \|\vec{V}(p) - \nabla\chi(p)\|^2 \, dp,
    E_{(w,P)}(\chi) = \frac{\mathrm{Area}(P)}{\sum_{p \in P} w(p)} \sum_{p \in P} w(p) \, \chi^2(p).

Scaling by 1/2, we obtain a new point set (\tilde{w}, \tilde{P}) with positions scaled by 1/2, unchanged weights, \tilde{w}(p) = w(2p), and scaled area, \mathrm{Area}(\tilde{P}) = \mathrm{Area}(P)/4; a new scalar field, \tilde{\chi}(p) = \chi(2p); and a new vector field, \tilde{V}(p) = 2\vec{V}(2p). Computing the corresponding energies, we get:

    E_{\tilde{V}}(\tilde{\chi}) = \frac{1}{2} E_{\vec{V}}(\chi)  and  E_{(\tilde{w},\tilde{P})}(\tilde{\chi}) = \frac{1}{4} E_{(w,P)}(\chi).

Thus, scaling the screening weight by a factor of two with each successive depth ensures that the sum of energies is unchanged (up to multiplication by a constant), so the minimizer remains the same.

4.4 Boundary Conditions

In order to define the linear system, it is necessary to define the behavior of the function space along the boundary of the integration domain. In the original Poisson reconstruction the authors imposed Dirichlet boundary conditions, forcing the implicit function to have a value of −1/2 along the boundary. In the present work we extend the implementation to support Neumann boundary conditions as well, forcing the normal derivative to be zero along the boundary.

In principle these two boundary conditions are equivalent for watertight surfaces, since the indicator function has a constant negative value outside the model. However, in the presence of missing data we find Neumann constraints to be less restrictive because they only require that the implicit function have zero derivative across the boundary of the integration domain, a property that is compatible with the gradient constraint since the guiding vector field \vec{V} is set to zero away from the samples. (Note that when
the surface does cross the boundary of the domain, the Neumann boundary constraints create a bias to crossing the domain boundary orthogonally.)

Figure 3 shows the practical implications of this choice when reconstructing the Angel model, which was only scanned from the front. The left image shows the original point set and the reconstructions using Dirichlet and Neumann boundary conditions are shown to the right. As the figure shows, imposing Dirichlet constraints creates a watertight surface that closes off before reaching the boundary while using Neumann constraints allows the surface to extend out to the boundary of the domain.

Fig. 3: Reconstructions of the Angel point set‡ (left) using Dirichlet (center) and Neumann (right) boundary conditions.

Similar results can be seen at the bases of the models in Figures 1 and 4a, with the original Poisson reconstructions obtained using Dirichlet constraints and the screened reconstructions obtained using Neumann constraints.

5. IMPROVED ALGORITHMIC COMPLEXITY

In this section we discuss the efficiency of our reconstruction algorithm. We begin by analyzing the complexity of the algorithm described above. Then, we present two algorithmic improvements.
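The boundary-condition contrast of Section 4.4 can be illustrated with a 1D toy problem (a sketch under simplifying assumptions, not the paper's implementation; names and constants are ours). The "surface" crosses the domain boundary, so the inside region touches the right end of the interval. We compare pinning χ = −1/2 at both ends (Dirichlet) against the natural zero-derivative condition (Neumann, with a single screening constraint to fix the free additive constant):

```python
# Dense Gaussian elimination with partial pivoting (toy helper,
# repeated so the sketch is self-contained).
def solve(A, b):
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

n = 64
h = 1.0 / n
mid = n // 2
# The inside region (+1/2) extends all the way to the right boundary.
chi_true = [-0.5 if i < mid else (0.0 if i == mid else 0.5)
            for i in range(n + 1)]
v = [(chi_true[i + 1] - chi_true[i]) / h for i in range(n)]

def gradient_system():
    # normal equations of the gradient-fitting energy
    N = n + 1
    A = [[0.0] * N for _ in range(N)]
    b = [0.0] * N
    for i in range(n):
        A[i][i] += 1.0 / h
        A[i + 1][i + 1] += 1.0 / h
        A[i][i + 1] -= 1.0 / h
        A[i + 1][i] -= 1.0 / h
        b[i] -= v[i]
        b[i + 1] += v[i]
    return A, b

# Neumann: natural (zero-derivative) boundaries; one screening constraint
# at the sample node pins the otherwise-free additive constant.
A, b = gradient_system()
A[mid][mid] += 4.0
x_neu = solve(A, b)

# Dirichlet: pin chi = -1/2 at both ends by replacing the boundary rows.
A, b = gradient_system()
for j in (0, n):
    A[j] = [0.0] * (n + 1)
    A[j][j] = 1.0
    b[j] = -0.5
x_dir = solve(A, b)
# x_dir is forced back to -1/2 at the right boundary ("closes off"),
# while x_neu extends out to ~ +1/2 with zero slope there.
```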
The first describes how hierarchical clustering can be used to reduce the screening overhead at coarser resolutions. The second applies to both the unscreened and screened solver implementations, showing that the asymptotic time complexity in both cases can be reduced to be linear in the number of input points.

5.1 Efficiency of basic solver

Let us begin by analyzing the computational complexity of the unscreened and screened solvers. We assume that the points P are evenly distributed over a surface, so that the depth of the adapted octree is D = O(log |P|) and the number of octree nodes at depth d is O(4^d). We also note that the number of nonzero entries in matrix A^{dd'} is O(4^d), since the matrix has O(4^d) rows and each row has at most 5^3 nonzero entries. (Since we use second-order B-splines, basis functions are supported within their one-ring neighborhoods and the support of two functions will overlap only if one is within the two-ring neighborhood of the other.)

Assuming that the matrices A^{dd'} have already been computed, the computational complexity for the different steps in Algorithm 1 is:

Step 3: O(4^d), since A^{dd'} has O(4^d) nonzero entries.
Step 4: O(4^d), since A^d has O(4^d) nonzero entries and the number of relaxation steps performed is constant.
Steps 2-3: \sum_{d'=0}^{d-1} O(4^d) = O(4^d \cdot d).
Steps 2-4: O(4^d \cdot d + 4^d) = O(4^d \cdot d).
Steps 1-4: \sum_{d=0}^{D} O(4^d \cdot d) = O(4^D \cdot D) = O(|P| \cdot \log |P|).
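The step-by-step tallies above can be sanity-checked numerically. The snippet below is a bookkeeping sketch only (constants normalized to 1, function names ours), not a runtime measurement:

```python
# Tally the work in Algorithm 1 for an octree of depth D, assuming
# O(4^d) nonzero entries in each matrix A^{dd'} (unit constants).
def algorithm1_cost(D):
    total = 0
    for d in range(D + 1):              # step 1: coarse to fine
        total += d * 4 ** d             # steps 2-3: subtract d coarser solutions
        total += 4 ** d                 # step 4: relax at depth d
    return total

# Cost of building all A^{dd'} for the screened solver, O(|P|) each.
def screened_setup_cost(D, P):
    return sum(P for d in range(D + 1) for dp in range(d))

D = 10
P = 4 ** D                              # points evenly sampled on a surface
# dominant term 4^D * D, i.e. O(|P| log |P|) for the solve itself ...
assert 4 ** D * D <= algorithm1_cost(D) <= 2 * 4 ** D * D
# ... versus O(|P| * D^2) = O(|P| log^2 |P|) for the naive screened setup
assert screened_setup_cost(D, P) == P * D * (D + 1) // 2
```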
There still remains the computation of the matrices A^{dd'}.

For the unscreened solver, the complexity of computing A^{dd'} is O(4^d), since each entry can be computed in constant time. Thus, the overall time complexity remains O(|P| \cdot \log |P|).

For the screened solver, the complexity of computing A^{dd'} is O(|P|) since defining the coefficients requires accumulating the screening contribution from each of the points, and each point contributes to a constant number of rows. Thus, the overall time complexity is dominated by the cost of evaluating the coefficients of A^{dd'}, which is:

    \sum_{d=0}^{D} \sum_{d'=0}^{d-1} O(|P|) = O(|P| \cdot D^2) = O(|P| \cdot \log^2 |P|).

5.2 Hierarchical Clustering of Point Constraints

Our first modification is based on the observation that since the basis functions at coarser resolutions are smooth, it is unnecessary to constrain them at the precise sample locations. Instead, we cluster the weighted points as in [Rusinkiewicz and Levoy 2000]. Specifically, for each depth d, we define (w^d, P^d) where p_i ∈ P^d is the weighted average position of the points falling into octree node i at depth d, and w^d(p_i) is the sum of the associated weights.³ If all input points have weight w(p) = 1, then w^d(p_i) is simply the number of points falling into node i.

³Note that the weight w^d(p) is unrelated to the screening weight 2^d introduced in Section 4.3 for scale-independent screening.

This alters the computation of the system matrix coefficients:

    A^{dd'}_{ij} = \langle \nabla B^d_i, \nabla B^{d'}_j \rangle_{[0,1]^3} + 2^d \alpha \, \langle B^d_i, B^{d'}_j \rangle_{(w^d, P^d)}.

Note that since d > d', the value \langle B^d_i, B^{d'}_j \rangle_{(w^d, P^d)} is obtained by summing over points stored with the finer resolution.

In particular, the complexity of computing A^{dd'} for the screened solver becomes O(|P^d|) = O(4^d), which is the same as that of the unscreened solver, and both implementations now have an overall time complexity of O(|P| \cdot \log |P|). On typical examples, hierarchical clustering reduces execution time by a factor of almost two, and the reconstructed surface is visually indistinguishable.

5.3 Conforming Octrees

To account for the adaptivity of the octree, Algorithm 1 subtracts off the constraints met at all coarser resolutions before relaxing at a given
depth (steps 2-3), resulting in an algorithm with log-linear time complexity. We obtain an implementation with linear complexity by forcing the octree to be conforming. Specifically, we define two octree cells to be mutually visible if the supports of their associated B-splines overlap, and we require that if a cell at depth d is in the octree, then all visible cells at depth d−1 must also be in the tree. Making the tree conforming requires the addition of new nodes at coarser depths, but this still results in O(4^d) nodes at depth d.

While the conforming octree does not satisfy the condition that a coarser solution can be prolonged into a finer one, it has the property that the solution obtained at depths {0, ..., d−1} that is visible to a node at depth d can be expressed entirely in terms of the coefficients at depth d−1. Using an accumulation vector to store the visible part of the solution, we obtain the linear-time implementation in Algorithm 2.

Here, P^d_{d-1} is the B-spline prolongation operator, expressing a solution at depth d−1 in terms of coefficients at depth d. The number of nonzero entries in P^d_{d-1} is O(4^d), since each column has at most 4^3 nonzero entries, so steps 2-5 of Algorithm 2 all have complexity O(4^d). Thus, the overall complexity of both the unscreened and screened solvers becomes O(|P|).

Algorithm 2: Conforming Cascadic Poisson Solver
1  For d ∈ {0, ..., D}                       Iterate from coarse to fine
2    x̂^{d−1} = P^{d−1}_{d−2} x̂^{d−2}         Upsample coarser accumulation vector
3    x̂^{d−1} = x̂^{d−1} + x^{d−1}             Add in coarser solution
4    b^d = b^d − A^d_{d−1} x̂^{d−1}           Remove constraints met at coarser depths
5    Relax A^d x^d = b^d                     Adjust the system at depth d

5.4 Implementation Details

The algorithm is implemented in C++, using OpenMP for multi-threaded parallelization. We use a conjugate-gradient solver to relax the system at each
multigrid level.With the exception of the octree construction,most of the operations involved in the Poisson reconstruction can be categorized as operations that either“accu-mulate”or“distribute”information[Bolitho et al.2007,2009].The former do not introduce write-on-write conflicts and are trivial to parallelize.The latter only involve linear operations,and are par-allelized using a standard map-reduce approach:in the map phase we create a duplicate copy of the data for each thread to distribute values into,and in the reduce phase we merge the copies by taking their sum.6.RESULTSWe evaluate the algorithm(Screened)by comparing its accuracy and computational efficiency with several prior methods:the original Poisson reconstruction of Kazhdan et al.[2006](Poisson), the Wavelet reconstruction of Manson et al.[2008](Wavelet),and the Smooth Signed Distance reconstruction of Calakli and Taubin [2011](SSD).For the new algorithm,we set the screening weight toα=4and use Neumann boundary conditions in all experiments.(Numerical results obtained using Dirichlet boundaries were indistinguishable.) 
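The "distribute" parallelization described in Section 5.4 can be sketched as follows. This is our own minimal illustration of the map-reduce pattern, not the authors' code: it uses std::thread rather than OpenMP, and a flat array scatter stands in for the actual matrix/vector operations; the function name and signature are hypothetical.

```cpp
#include <cstddef>
#include <thread>
#include <vector>

// "Distribute" pass parallelized map-reduce style: each thread
// scatters contributions into its own private copy of the output
// (map phase), and the copies are then summed into the final result
// (reduce phase), avoiding write-on-write conflicts without locks.
std::vector<double> distributeSum(const std::vector<int>& targets,
                                  const std::vector<double>& values,
                                  std::size_t outSize,
                                  unsigned numThreads) {
    std::vector<std::vector<double>> copies(
        numThreads, std::vector<double>(outSize, 0.0));
    std::vector<std::thread> pool;
    const std::size_t n = targets.size();
    for (unsigned t = 0; t < numThreads; ++t) {
        pool.emplace_back([&, t] {
            // map phase: thread t handles a contiguous slice of the input
            std::size_t lo = n * t / numThreads;
            std::size_t hi = n * (t + 1) / numThreads;
            for (std::size_t i = lo; i < hi; ++i)
                copies[t][targets[i]] += values[i];
        });
    }
    for (auto& th : pool) th.join();
    // reduce phase: merge the per-thread copies by summation
    std::vector<double> out(outSize, 0.0);
    for (const auto& c : copies)
        for (std::size_t j = 0; j < outSize; ++j)
            out[j] += c[j];
    return out;
}
```

Each thread writes only to its private copy, so no synchronization is needed during the map phase; the cost is one extra pass over the per-thread buffers in the reduce step.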
For the prior methods, we set algorithmic parameters to the values recommended by the authors, using Haar wavelets in the Wavelet reconstruction and setting the value/normal/Hessian weights to 1/1/0.25 in the SSD reconstruction. For Poisson, SSD, and Screened we set the "samples-per-node" parameter to 1 and the "bounding-box-scale" parameter to 1.1. (For Wavelet the bounding box scale is hard-coded at 1 and there is no parameter to adjust the sampling density.)

6.1 Accuracy

We run three different types of experiments.

Real Scanner Data. To evaluate the accuracy of the different reconstruction algorithms on real-world data, we gathered several scanned datasets: the Awakening (10M points), the Stanford Bunny (0.2M points), the David (11M points), the Lucy (1.0M points), and the Neptune (2.4M points). For each dataset, we randomly partitioned the points into two equal-sized subsets: input points for the reconstruction algorithms, and validation points to measure point-to-reconstruction distances.

Figure 4a shows reconstruction results for the Neptune and David models at depth 10. It also shows surface cross-sections overlaid with the validation points in their vicinity. These images reveal that the Poisson reconstruction (far left), and to a lesser extent the SSD reconstruction (center right), over-smooth the data, while the Wavelet reconstruction (center left) has apparent derivative discontinuities. In contrast, our screened Poisson approach (far right) provides a reconstruction that faithfully fits the samples without introducing noise.

Figure 4b shows quantitative results across all datasets, in the form of RMS errors, measured using the distances from the validation points to the reconstructed surface. (We also computed the maximum error, but found that its sensitivity to individual outlier points made it an unreliable and unindicative statistic.) As the figure indicates, the Screened Poisson reconstruction (blue) is always more accurate than both the original Poisson reconstruction algorithm (red) and the Wavelet reconstruction (purple), and generates reconstructions whose RMS errors are comparable to or smaller than those of the SSD reconstruction (green).

Clean Uniformly Sampled Data. To evaluate reconstruction accuracy on clean data, we used the approach of Osada et al. [2001] to generate oriented point sets by uniformly sampling the surfaces of the Fandisk, Armadillo Man, Dragon, and Raptor models. For each model, we generated datasets of 100K and 1M points and reconstructed surfaces from each point set using the four different reconstruction algorithms.

As an example, Figure 5a shows the reconstructions of the fandisk and raptor models using 1M point samples at depth 10. Despite the lack of noise in the input data, the Wavelet reconstruction has spurious high-frequency detail. Focusing on the sharp edges in the model, we also observe that the screened Poisson reconstruction introduces less smoothing, providing a reconstruction that is truer to the original data than either the original Poisson or the SSD reconstructions.

Figure 5b plots RMS errors across all models, measured bidirectionally between the original surface and the reconstructed surface using the Metro tool [Cignoni and Scopigno 1998]. As in the case of real scanner data, screened Poisson reconstruction always outperforms the original Poisson and Wavelet reconstructions, and is comparable to or better than the SSD reconstruction.

Reconstruction Benchmark. We use the benchmark of Berger et al. [2011] to evaluate the accuracy of the algorithms under different simulations of scanner error, including nonuniform sampling, noise, and misalignment. The dataset consists of multiple virtual scans of implicit surfaces representing the Anchor, Dancing Children, Daratech, Gargoyle, and Quasimodo models.
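As a concrete note on the error metric used throughout this section: the reported numbers are root-mean-square point-to-surface distances over a set of evaluation points. A minimal sketch of that computation follows; this is our own illustration, with the distance query abstracted behind a callback, since in practice it would be a closest-point query against the reconstructed mesh (e.g. via a BVH).

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

// Root-mean-square of point-to-surface distances over a set of
// evaluation points. `distToSurface` stands in for an actual
// point-to-mesh distance query against the reconstructed surface.
double rmsError(
    const std::vector<std::array<double, 3>>& points,
    const std::function<double(const std::array<double, 3>&)>& distToSurface) {
    double sumSq = 0.0;
    for (const auto& q : points) {
        double d = distToSurface(q);
        sumSq += d * d; // accumulate squared distances
    }
    return std::sqrt(sumSq / double(points.size()));
}
```

Squaring before averaging makes the statistic sensitive to large deviations while remaining far more robust than the maximum error, which, as noted above, is dominated by individual outliers.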
As an example, Figure 6a visualizes the error in the reconstructions of the anchor model from a virtual scan consisting of 210K points (demarked with a dashed rectangle in Figure 6b) at depth 9. The error is visualized using a red-green-blue scale, with red signifying…
Computer Graphics, 26, 2, July 1992

Surface Modeling with Oriented Particle Systems

Richard Szeliski† and David Tonnesen‡

†Digital Equipment Corp., Cambridge Research Lab, One Kendall Square, Bldg. 700, Cambridge, MA 02139
‡Dept. of Computer Science, University of Toronto, Toronto, Canada M5S 1A4

Abstract

Splines and deformable surface models are widely used in computer graphics to describe free-form surfaces. These methods require manual preprocessing to discretize the surface into patches and to specify their connectivity. We present a new model of elastic surfaces based on interacting particle systems, which, unlike previous techniques, can be used to split, join, or extend surfaces without the need for manual intervention. The particles we use have long-range attraction forces and short-range repulsion forces and follow Newtonian dynamics, much like recent computational models of fluids and solids. To enable our particles to model surface elements instead of point masses or volume elements, we add an orientation to each particle's state. We devise new interaction potentials for our oriented particles which favor locally planar or spherical arrangements. We also develop techniques for adding new particles automatically, which enables our surfaces to stretch and grow. We demonstrate the application of our new particle system to modeling surfaces in 3-D and the interpolation of 3-D point sets.

Keywords: Surface interpolation, particle systems, physically-based modeling, oriented particles, self-organizing systems, simulation.

CR Categories and Subject Descriptors: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling — Curve, surface, solid, and object representations; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism — Animation.

1 Introduction

The modeling of free-form surfaces is one of the central issues of computer graphics. Spline models [3, 8] and deformable surface models [25] have been very successful in creating and animating such surfaces. However, these methods either require the discretization of the surface into patches (for spline surfaces) or the specification of local connectivity (for spring-mass systems). These steps can involve a significant amount of manual preprocessing before the surface model can be used.

For shape design and rapid prototyping applications, we require a highly interactive system which does not force the designer to think about the underlying representation or be limited by its choice [18]. For example, we require the basic abilities to join several surfaces together, to split surfaces along arbitrary lines, or to extend existing surfaces, without specifying exact connectivity. For scientific visualization, data interpretation, and robotics applications, we require a modeling system that can interpolate a set of scattered 3-D data without knowing the topology of the surface.

To construct such a system, we will keep the ideas of deformation energies from elastic surface models, but use interacting particles to build our surfaces. Particle systems have been used in computer graphics by Reeves [16] and Sims [21] to model natural phenomena such as fire and waterfalls. In these models, particles move under the influence of force fields and constraints but do not interact with each other. More recent particle systems borrow ideas from molecular dynamics to model liquids and solids [12, 26, 29]. In these models, which have spherically symmetric potential fields, particles arrange themselves into volumes rather than surfaces.

In this paper, we develop oriented particles, which overcome this natural tendency to form solids and prefer to form surfaces instead. Each particle has a local coordinate frame which is updated during the simulation [17]. We design new interaction potentials which favor locally planar or locally spherical arrangements of particles. These interaction potentials are used in conjunction with more traditional long-range attraction forces and short-range repulsion forces which control the average inter-particle spacing.

Our new surface model thus shares characteristics of both deformable surface models and particle systems. Like traditional spline models, it can be used to model free-form surfaces and to smoothly interpolate sparse data. Like interacting particle models of solids and liquids, our surfaces can be split, joined, or extended without the need for reparameterization or manual intervention. We can thus use our new technique as a tool for modeling a wider range of surface shapes.

The remainder of the paper is organized as follows. In Section 2 we review traditional splines and deformable surface models, as well as particle systems and the potential functions traditionally used in molecular dynamics. In Section 3 we present our new oriented particle model and the new interaction potentials which favor locally planar and locally spherical arrangements. Section 4 presents the dynamics…
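To make the flavor of these interaction potentials concrete, here is a minimal sketch of two pairwise energy terms of the kind described above: a Lennard-Jones-style spacing term (short-range repulsion, long-range attraction) and a co-planarity term that penalizes neighbors lying off a particle's tangent plane. The specific functional forms, the Gaussian weighting, and all names are our illustrative choices, not the paper's exact formulation.

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

static double dot(const Vec3& a, const Vec3& b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Lennard-Jones-style pair potential controlling inter-particle
// spacing: strongly repulsive for r < r0, weakly attractive for
// r > r0, with its minimum value -e0 at r = r0.
double spacingEnergy(double r, double r0, double e0) {
    double s6 = std::pow(r0 / r, 6);
    return e0 * (s6 * s6 - 2.0 * s6);
}

// Co-planarity term for an oriented particle with unit normal ni at
// position pi: penalizes the squared out-of-plane offset of a
// neighbor at pj. A Gaussian falloff localizes the interaction
// (an illustrative weighting choice).
double coplanarityEnergy(const Vec3& ni, const Vec3& pi, const Vec3& pj,
                         double sigma) {
    Vec3 rij = {pj[0] - pi[0], pj[1] - pi[1], pj[2] - pi[2]};
    double offPlane = dot(ni, rij); // signed distance off the tangent plane
    double w = std::exp(-dot(rij, rij) / (2.0 * sigma * sigma));
    return offPlane * offPlane * w;
}
```

Summing such terms over particle pairs yields an energy whose minima favor evenly spaced, locally planar arrangements; gradients of these energies would supply the forces for the Newtonian dynamics described in the abstract.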