Efficient and Accurate Delaunay Triangulation Protocols under Churn
[美]R·格伦·哈伯德《宏观经济学》R.GlennHubbard,AnthonyP
![[美]R·格伦·哈伯德《宏观经济学》R.GlennHubbard,AnthonyP](https://img.taocdn.com/s3/m/c5cc42cdd05abe23482fb4daa58da0116c171f30.png)
Macroeconomics R. GLENN HUBBARD COLUMBIA UNIVERSITY ANTHONY PATRICK O’BRIEN LEHIGH UNIVERSITY MATTHEW RAFFERTY QUINNIPIAC UNIVERSITY Boston Columbus Indianapolis New York San Francisco Upper Saddle RiverAmsterdam Cape Town Dubai London Madrid Milan Munich Paris Montreal Toronto Delhi Mexico City So Paulo Sydney Hong Kong Seoul Singapore Taipei TokyoAbout the AuthorsGlenn Hubbard Professor Researcher and Policymaker R. Glenn Hubbard is the dean and Russell L. Carson Professor of Finance and Economics in the Graduate School of Business at Columbia University and professor of economics in Columbia’s Faculty of Arts and Sciences. He is also a research associate of the National Bureau of Economic Research and a director of Automatic Data Processing Black Rock Closed- End Funds KKR Financial Corporation and MetLife. Professor Hubbard received his Ph.D. in economics from Harvard University in 1983. From 2001 to 2003 he served as chairman of the White House Council of Economic Advisers and chairman of the OECD Economy Policy Commit- tee and from 1991 to 1993 he was deputy assistant secretary of the U.S. Treasury Department. He currently serves as co-chair of the nonpar-tisan Committee on Capital Markets Regulation and the Corporate Boards Study Group. ProfessorHubbard is the author of more than 100 articles in leading journals including American EconomicReview Brookings Papers on Economic Activity Journal of Finance Journal of Financial EconomicsJournal of Money Credit and Banking Journal of Political Economy Journal of Public EconomicsQuarterly Journal of Economics RAND Journal of Economics and Review of Economics and Statistics.Tony O’Brien Award-Winning Professor and Researcher Anthony Patrick O’Brien is a professor of economics at Lehigh University. He received a Ph.D. from the University of California Berkeley in 1987. He has taught principles of economics money and banking and interme- diate macroeconomics for more than 20 years in both large sections and small honors classes. He received the Lehigh University Award for Distin- guished Teaching. He was formerly the director of the Diamond Center for Economic Education and was named a Dana Foundation Faculty Fel- low and Lehigh Class of 1961 Professor of Economics. He has been a visit- ing professor at the University of California Santa Barbara and Carnegie Mellon University. Professor O’Brien’s research has dealt with such issues as the evolution of the U.S. automobile industry sources of U.S. economiccompetitiveness the development of U.S. trade policy the causes of the Great Depression and thecauses of black–white income differences. His research has been published in leading journals in-cluding American Economic Review Quarterly Journal of Economics Journal of Money Credit andBanking Industrial Relations Journal of Economic History Explorations in Economic History andJournal of PolicyHistory.Matthew Rafferty Professor and Researcher Matthew Christopher Rafferty is a professor of economics and department chairperson at Quinnipiac University. He has also been a visiting professor at Union College. He received a Ph.D. from the University of California Davis in 1997 and has taught intermediate macroeconomics for 15 years in both large and small sections. Professor Rafferty’s research has f ocused on university and firm-financed research and development activities. In particular he is interested in understanding how corporate governance and equity compensation influence firm research and development. His research has been published in leading journals including the Journal of Financial and Quantitative Analysis Journal of Corporate Finance Research Policy and the Southern Economic Journal. He has worked as a consultantfor theConnecticut Petroleum Council on issues before the Connecticut state legislature. He has alsowritten op-ed pieces that have appeared in several newspapers including the New York Times. iii Brief Contents Part 1: Introduction Chapter 1 The Long and Short of Macroeconomics 1 Chapter 2 Measuring the Macroeconomy 23 Chapter 3 The Financial System 59 Part 2: Macroeconomics in the Long Run: Economic Growth Chapter 4 Determining Aggregate Production 105 Chapter 5 Long-Run Economic Growth 143 Chapter 6 Money and Inflation 188 Chapter 7 The Labor Market 231 Part 3: Macroeconomics in the Short Run: Theory and Policy Chapter 8 Business Cycles 271 Chapter 9 IS–MP: A Short-Run Macroeconomic Model 302 Chapter 10 Monetary Policy in the Short Run 363 Chapter 11 Fiscal Policy in the Short Run 407 Chapter 12 Aggregate Demand Aggregate Supply and Monetary Policy 448 Part 4: Extensions Chapter 13 Fiscal Policy and the Government Budget in the Long Run 486 Chapter 14 Consumption and Investment 521 Chapter 15 The Balance of Payments Exchange Rates and Macroeconomic Policy 559 Glossary G-1 Index I-1ivContentsChapter 1 The Long and Short of Macroeconomics 1WHEN YOU ENTER THE JOB MARKET CAN MATTER A LOT ........................................................ 11.1 What Macroeconomics Is About........................................................................... 2 Macroeconomics in the Short Run and in the Long Run .................................................... 2 Long-Run Growth in the United States ............................................................................. 3 Some Countries Have Not Experienced Significant Long-Run Growth ............................... 4 Aging Populations Pose a Challenge to Governments Around the World .......................... 5 Unemployment in the United States ................................................................................. 6 How Unemployment Rates Differ Across Developed Countries ......................................... 7 Inflation Rates Fluctuate Over Time and Across Countries................................................. 7 Econo mic Policy Can Help Stabilize the Economy .. (8)International Factors Have Become Increasingly Important in Explaining Macroeconomic Events................................................................................. 91.2 How Economists Think About Macroeconomics ............................................. 11 What Is the Best Way to Analyze Macroeconomic Issues .............................................. 11 Macroeconomic Models.................................................................................................. 12Solved Problem 1.2: Do Rising Imports Lead to a Permanent Reductionin U.S. Employment. (12)Assumptions Endogenous Variables and Exogenous Variables in EconomicModels ........................................................................................................ 13 Forming and Testing Hypotheses in Economic Models .................................................... 14Making the Connection: What Do People Know About Macroeconomicsand How Do They KnowIt .............................................................................................. 151.3 Key Issues and Questions of Macroeconomics ............................................... 16An Inside Look: Will Consumer Spending Nudge Employers to Hire................................ 18Chapter Summary and Problems ............................................................................. 20 Key Terms and Concepts Review Questions Problems and Applications Data Exercise Theseend-of-chapter resource materials repeat in all chapters.Chapter 2 Measuring the Macroeconomy 23HOW DO WE KNOW WHEN WE ARE IN ARECESSION ........................................................... 23Key Issue andQuestion .................................................................................................... 232.1 GDP: Measuring Total Production and Total Income ..................................... 25 How theGovernment Calculates GDP (25)Production and Income (26)The Circular Flow of Income (27)An Example of Measuring GDP (29)National Income Identities and the Components of GDP (29)vvi CONTENTS Making the Connection: Will Public Employee Pensions Wreck State and Local Government Budgets.................................................................... 31 The Relationship Between GDP and GNP........................................................................ 33 2.2 Real GDP Nominal GDP and the GDP Deflator.............................................. 33 Solved Problem 2.2a: Calculating Real GDP . (34)Price Indexes and the GDP Deflator (35)Solved Problem 2.2b: Calculating the Inflation Rate ..........................................................36 The Chain-Weighted Measure of Real GDP ....................................................................37 Making the Connection: Trying to Hit a Moving Target: Forecasting with “Real-Time Data” .................................................................................. 37 Comparing GDP Across Countries................................................................................... 38 Making the Connection: The Incredible Shrinking Chinese Economy ................................ 39 GDP and National Income .............................................................................................. 40 2.3 Inflation Rates and Interest Rates ....................................................................... 41 The Consumer Price Index .............................................................................................. 42 Making the Connection: Does Indexing Preserve the Purchasing Power of Social Security Payments ................................................................ 43 How Accurate Is theCPI ............................................................................................... 44 The Way the Federal Reserve Measures Inflation ............................................................ 44 InterestRates .................................................................................................................. 45 2.4 Measuring Employment and Unemployment .. (47)Answering the Key Question ............................................................................................ 49 An Inside Look: Weak Construction Market Persists.......................................................... 50 Chapter 3 The Financial System 59 THE WONDERFUL WORLD OFCREDIT ................................................................................... 59 Key Issue and Question .................................................................................................... 59 3.1 Overview of the Financial System ...................................................................... 60 Financial Markets and Financial Intermediaries ................................................................ 61 Making the Connection: Is General Motors Making Cars or Making Loans .................... 62 Making the Connection: Investing in the Worldwide Stock Market . (64)Banking and Securitization (67)The Mortgage Market and the Subprime Lending Disaster (67)Asymmetric Information and Principal–Agent Problems in Financial Markets...................68 3.2 The Role of the Central Bank in the Financial System (69)Central Banks as Lenders of Last Resort ..........................................................................69 Bank Runs Contagion and Asset Deflation ....................................................................70 Making the Connection: Panics Then and Now: The Collapse of the Bank of United States in 1930 and the Collapse of Lehman Brothers in2008 (71)3.3 Determining Interest Rates: The Market for Loanable Funds and the Market forMoney .......................................................................................... 76 Saving and Supply in the Loanable Funds Market ........................................................... 76 Investment and the Demand for Loanable Funds ............................................................ 77 Explaining Movements in Saving Investment and the Real Interest Rate (78)CONTENTS .。
MATLAB约束Delaunay三角网

Creating and Editing Delaunay TriangulationsThe Delaunay triangulation is t he most widely use d triangulation i n s cientific com puti ng. T he properties associated with thetriangulation provi de a basis f or s olving a variety of geometric problems. T he following exam ples demonstrate how to create, e dit, and query Delaunay triangulations usi ng t he DelaunayTri class. Construction of constraine d Delaunay triangulations is also dem onstrated, together with an applications covering me dial axis computation and mesh morphing.Contents∙Example One: Create and Plot a 2D Delaunay Triangulation∙Example Two: Create and Plot a 3D Delaunay Triangulation∙Example Three: A ccess t he Triangulation Data Str uct ure∙Example Four: Edit a Delaunay Triangulation t o Insert or Rem ove Points∙Example Five: Create a Constraine d Delaunay Triangulation∙Example Six: Create a Constrained Delaunay Triangulation of a Geographical Map∙Example Seve n: Curve Re construction from a Point Cloud∙Example Eight: Com pute an Approximate Me dial Axis of a Polygonal Domain∙Example Nine: Morph a 2D Mesh to a Modified BoundaryExample One: Create and Plot a 2D Delaunay TriangulationThis example s hows you how to compute a 2D Delaunay triangulation and how t o plot the triangulation together with t he vertex and triangle labels.dt =DelaunayTriProperties:Constraints: []X: [10x2 double]Triangulation: [11x3 double]Example Two: Create and Plot a 3D Delaunay Triangulation This example s hows you how to compute a 3D Delaunay triangulation and how t o plot the triangulation.X =0.6557 0.7060 0.43870.0357 0.0318 0.38160.8491 0.2769 0.7655 0.9340 0.0462 0.7952 0.6787 0.0971 0.1869 0.7577 0.8235 0.4898 0.7431 0.6948 0.4456 0.3922 0.3171 0.6463 0.6555 0.9502 0.7094 0.1712 0.0344 0.7547dt =DelaunayTriProperties:Constraints: []X: [10x3 double] Triangulation: [22x4 double]Example Three: Access the Triangulation Data StructureThere are tw o w ays t o access t he triangulation data structure. O ne way is via the Triangulation pr operty, t he other way is usi ng indexi ng.Create a 2D Delaunay triangulation from 10 random poi nts.X =0.2760 0.75130.6797 0.25510.6551 0.50600.1626 0.69910.1190 0.89090.4984 0.95930.9597 0.54720.3404 0.13860.5853 0.14930.2238 0.2575dt =DelaunayTriProperties:Constraints: []X: [10x2 double] Triangulation: [12x3 double]ans =3 8 9The t hird verte x of the se cond triangle is;ans =9The first three triangles;ans =4 10 13 8 910 4 5Example Four: Edit a Delaunay Triangulation to Insert or Remove PointsThis example s hows you how to use inde x-based s ubs cripting t o i nsert or remove points. It is m ore efficie nt to edit a DelaunayTri to make minor modifications as oppose d t o recreating a new DelaunayTri from s cratch, this is espe cially true if the dataset is large.Properties:Constraints: []X: [15x2 double] Triangulation: [21x3 double]Replace the fifth poi ntdt =DelaunayTriProperties:Constraints: []X: [15x2 double] Triangulation: [21x3 double]Remove the fourth poi ntdt =DelaunayTriProperties:Constraints: []X: [14x2 double] Triangulation: [19x3 double]Example Five: Create a Constrained Delaunay Triangulation This example s hows you how to create a simple constrained Delaunay triangulation and illustrates t he effe ct of the constraints.Example Six: Create a Constrained Delaunay Triangulation of a Geographical MapLoad a map of the perimeter of t he conterminous United States. Construct a constrained Delaunay triangulation representi ng t h e polygon. T his triangulation spans a domai n that is bounded by the conve x hull of t he set of poi nts. Filter out the triangles that are within t he domain of t he polygon and plot them. Note: The dataset contai ns duplicate data poi nts; t hat is two or m ore data points have the same location. The duplicate points are rejected and t he DelaunayTri reformats the constraints accordingly.The Triangulation indices and Constraints are defined withrespect to the unique set of points in DelaunayTri property X.Warning: Intersecting edge constraints have been split, this may have addednew points into the triangulation.Example Seven: Curve Reconstruction from a Point CloudThis example highlights t he use of a Delaunay triangulation to reconstruct a polygonal boundary from a cloud of points. T he reconstruction is base d on the elegant Cr ust algorithm.Reference: N. Ame nta, M. Ber n, and D. Eppstein. The crust and the beta-skeleton: combi natorial cur ve re construction. Graphical Models and Image Processing, 60:125-135, 1998.Example Eight: Compute an Approximate Medial Axis of a Polygonal DomainThis example demonstrates the creation of an approximate Me dial Axis of a polygonal domain usi ng a constrained Delaunay triangulation. T he Medial A xis of a pol ygon is defi ned by the locus of the ce nter of a m aximal disk within the polygon interior.。
ABSTRACT Progressive Simplicial Complexes

Progressive Simplicial Complexes Jovan Popovi´c Hugues HoppeCarnegie Mellon University Microsoft ResearchABSTRACTIn this paper,we introduce the progressive simplicial complex(PSC) representation,a new format for storing and transmitting triangu-lated geometric models.Like the earlier progressive mesh(PM) representation,it captures a given model as a coarse base model together with a sequence of refinement transformations that pro-gressively recover detail.The PSC representation makes use of a more general refinement transformation,allowing the given model to be an arbitrary triangulation(e.g.any dimension,non-orientable, non-manifold,non-regular),and the base model to always consist of a single vertex.Indeed,the sequence of refinement transforma-tions encodes both the geometry and the topology of the model in a unified multiresolution framework.The PSC representation retains the advantages of PM’s.It defines a continuous sequence of approx-imating models for runtime level-of-detail control,allows smooth transitions between any pair of models in the sequence,supports progressive transmission,and offers a space-efficient representa-tion.Moreover,by allowing changes to topology,the PSC sequence of approximations achieves betterfidelity than the corresponding PM sequence.We develop an optimization algorithm for constructing PSC representations for graphics surface models,and demonstrate the framework on models that are both geometrically and topologically complex.CR Categories:I.3.5[Computer Graphics]:Computational Geometry and Object Modeling-surfaces and object representations.Additional Keywords:model simplification,level-of-detail representa-tions,multiresolution,progressive transmission,geometry compression.1INTRODUCTIONModeling and3D scanning systems commonly give rise to triangle meshes of high complexity.Such meshes are notoriously difficult to render,store,and transmit.One approach to speed up rendering is to replace a complex mesh by a set of level-of-detail(LOD) approximations;a detailed mesh is used when the object is close to the viewer,and coarser approximations are substituted as the object recedes[6,8].These LOD approximations can be precomputed Work performed while at Microsoft Research.Email:jovan@,hhoppe@Web:/jovan/Web:/hoppe/automatically using mesh simplification methods(e.g.[2,10,14,20,21,22,24,27]).For efficient storage and transmission,meshcompression schemes[7,26]have also been developed.The recently introduced progressive mesh(PM)representa-tion[13]provides a unified solution to these problems.In PM form,an arbitrary mesh M is stored as a coarse base mesh M0together witha sequence of n detail records that indicate how to incrementally re-fine M0into M n=M(see Figure7).Each detail record encodes theinformation associated with a vertex split,an elementary transfor-mation that adds one vertex to the mesh.In addition to defininga continuous sequence of approximations M0M n,the PM rep-resentation supports smooth visual transitions(geomorphs),allowsprogressive transmission,and makes an effective mesh compressionscheme.The PM representation has two restrictions,however.First,it canonly represent meshes:triangulations that correspond to orientable12-dimensional manifolds.Triangulated2models that cannot be rep-resented include1-d manifolds(open and closed curves),higherdimensional polyhedra(e.g.triangulated volumes),non-orientablesurfaces(e.g.M¨o bius strips),non-manifolds(e.g.two cubes joinedalong an edge),and non-regular models(i.e.models of mixed di-mensionality).Second,the expressiveness of the PM vertex splittransformations constrains all meshes M0M n to have the same topological type.Therefore,when M is topologically complex,the simplified base mesh M0may still have numerous triangles(Fig-ure7).In contrast,a number of existing simplification methods allowtopological changes as the model is simplified(Section6).Ourwork is inspired by vertex unification schemes[21,22],whichmerge vertices of the model based on geometric proximity,therebyallowing genus modification and component merging.In this paper,we introduce the progressive simplicial complex(PSC)representation,a generalization of the PM representation thatpermits topological changes.The key element of our approach isthe introduction of a more general refinement transformation,thegeneralized vertex split,that encodes changes to both the geometryand topology of the model.The PSC representation expresses anarbitrary triangulated model M(e.g.any dimension,non-orientable,non-manifold,non-regular)as the result of successive refinementsapplied to a base model M1that always consists of a single vertex (Figure8).Thus both geometric and topological complexity are recovered progressively.Moreover,the PSC representation retains the advantages of PM’s,including continuous LOD,geomorphs, progressive transmission,and model compression.In addition,we develop an optimization algorithm for construct-ing a PSC representation from a given model,as described in Sec-tion4.1The particular parametrization of vertex splits in[13]assumes that mesh triangles are consistently oriented.2Throughout this paper,we use the words“triangulated”and“triangula-tion”in the general dimension-independent sense.Figure 1:Illustration of a simplicial complex K and some of its subsets.2BACKGROUND2.1Concepts from algebraic topologyTo precisely define both triangulated models and their PSC repre-sentations,we find it useful to introduce some elegant abstractions from algebraic topology (e.g.[15,25]).The geometry of a triangulated model is denoted as a tuple (K V )where the abstract simplicial complex K is a combinatorial structure specifying the adjacency of vertices,edges,triangles,etc.,and V is a set of vertex positions specifying the shape of the model in 3.More precisely,an abstract simplicial complex K consists of a set of vertices 1m together with a set of non-empty subsets of the vertices,called the simplices of K ,such that any set consisting of exactly one vertex is a simplex in K ,and every non-empty subset of a simplex in K is also a simplex in K .A simplex containing exactly d +1vertices has dimension d and is called a d -simplex.As illustrated pictorially in Figure 1,the faces of a simplex s ,denoted s ,is the set of non-empty subsets of s .The star of s ,denoted star(s ),is the set of simplices of which s is a face.The children of a d -simplex s are the (d 1)-simplices of s ,and its parents are the (d +1)-simplices of star(s ).A simplex with exactly one parent is said to be a boundary simplex ,and one with no parents a principal simplex .The dimension of K is the maximum dimension of its simplices;K is said to be regular if all its principal simplices have the same dimension.To form a triangulation from K ,identify its vertices 1m with the standard basis vectors 1m ofm.For each simplex s ,let the open simplex smdenote the interior of the convex hull of its vertices:s =m:jmj =1j=1jjsThe topological realization K is defined as K =K =s K s .The geometric realization of K is the image V (K )where V :m 3is the linear map that sends the j -th standard basis vector jm to j 3.Only a restricted set of vertex positions V =1m lead to an embedding of V (K )3,that is,prevent self-intersections.The geometric realization V (K )is often called a simplicial complex or polyhedron ;it is formed by an arbitrary union of points,segments,triangles,tetrahedra,etc.Note that there generally exist many triangulations (K V )for a given polyhedron.(Some of the vertices V may lie in the polyhedron’s interior.)Two sets are said to be homeomorphic (denoted =)if there ex-ists a continuous one-to-one mapping between them.Equivalently,they are said to have the same topological type .The topological realization K is a d-dimensional manifold without boundary if for each vertex j ,star(j )=d .It is a d-dimensional manifold if each star(v )is homeomorphic to either d or d +,where d +=d:10.Two simplices s 1and s 2are d-adjacent if they have a common d -dimensional face.Two d -adjacent (d +1)-simplices s 1and s 2are manifold-adjacent if star(s 1s 2)=d +1.Figure 2:Illustration of the edge collapse transformation and its inverse,the vertex split.Transitive closure of 0-adjacency partitions K into connected com-ponents .Similarly,transitive closure of manifold-adjacency parti-tions K into manifold components .2.2Review of progressive meshesIn the PM representation [13],a mesh with appearance attributes is represented as a tuple M =(K V D S ),where the abstract simpli-cial complex K is restricted to define an orientable 2-dimensional manifold,the vertex positions V =1m determine its ge-ometric realization V (K )in3,D is the set of discrete material attributes d f associated with 2-simplices f K ,and S is the set of scalar attributes s (v f )(e.g.normals,texture coordinates)associated with corners (vertex-face tuples)of K .An initial mesh M =M n is simplified into a coarser base mesh M 0by applying a sequence of n successive edge collapse transforma-tions:(M =M n )ecol n 1ecol 1M 1ecol 0M 0As shown in Figure 2,each ecol unifies the two vertices of an edgea b ,thereby removing one or two triangles.The position of the resulting unified vertex can be arbitrary.Because the edge collapse transformation has an inverse,called the vertex split transformation (Figure 2),the process can be reversed,so that an arbitrary mesh M may be represented as a simple mesh M 0together with a sequence of n vsplit records:M 0vsplit 0M 1vsplit 1vsplit n 1(M n =M )The tuple (M 0vsplit 0vsplit n 1)forms a progressive mesh (PM)representation of M .The PM representation thus captures a continuous sequence of approximations M 0M n that can be quickly traversed for interac-tive level-of-detail control.Moreover,there exists a correspondence between the vertices of any two meshes M c and M f (0c f n )within this sequence,allowing for the construction of smooth vi-sual transitions (geomorphs)between them.A sequence of such geomorphs can be precomputed for smooth runtime LOD.In addi-tion,PM’s support progressive transmission,since the base mesh M 0can be quickly transmitted first,followed the vsplit sequence.Finally,the vsplit records can be encoded concisely,making the PM representation an effective scheme for mesh compression.Topological constraints Because the definitions of ecol and vsplit are such that they preserve the topological type of the mesh (i.e.all K i are homeomorphic),there is a constraint on the min-imum complexity that K 0may achieve.For instance,it is known that the minimal number of vertices for a closed genus g mesh (ori-entable 2-manifold)is (7+(48g +1)12)2if g =2(10if g =2)[16].Also,the presence of boundary components may further constrain the complexity of K 0.Most importantly,K may consist of a number of components,and each is required to appear in the base mesh.For example,the meshes in Figure 7each have 117components.As evident from the figure,the geometry of PM meshes may deteriorate severely as they approach topological lower bound.M 1;100;(1)M 10;511;(7)M 50;4656;(12)M 200;1552277;(28)M 500;3968690;(58)M 2000;14253219;(108)M 5000;029010;(176)M n =34794;0068776;(207)Figure 3:Example of a PSC representation.The image captions indicate the number of principal 012-simplices respectively and the number of connected components (in parenthesis).3PSC REPRESENTATION 3.1Triangulated modelsThe first step towards generalizing PM’s is to let the PSC repre-sentation encode more general triangulated models,instead of just meshes.We denote a triangulated model as a tuple M =(K V D A ).The abstract simplicial complex K is not restricted to 2-manifolds,but may in fact be arbitrary.To represent K in memory,we encode the incidence graph of the simplices using the following linked structures (in C++notation):struct Simplex int dim;//0=vertex,1=edge,2=triangle,...int id;Simplex*children[MAXDIM+1];//[0..dim]List<Simplex*>parents;;To render the model,we draw only the principal simplices ofK ,denoted (K )(i.e.vertices not adjacent to edges,edges not adjacent to triangles,etc.).The discrete attributes D associate amaterial identifier d s with each simplex s(K ).For the sake of simplicity,we avoid explicitly storing surface normals at “corners”(using a set S )as done in [13].Instead we let the material identifier d s contain a smoothing group field [28],and let a normal discontinuity (crease )form between any pair of adjacent triangles with different smoothing groups.Previous vertex unification schemes [21,22]render principal simplices of dimension 0and 1(denoted 01(K ))as points and lines respectively with fixed,device-dependent screen widths.To better approximate the model,we instead define a set A that associates an area a s A with each simplex s 01(K ).We think of a 0-simplex s 00(K )as approximating a sphere with area a s 0,and a 1-simplex s 1=j k 1(K )as approximating a cylinder (with axis (j k ))of area a s 1.To render a simplex s 01(K ),we determine the radius r model of the corresponding sphere or cylinder in modeling space,and project the length r model to obtain the radius r screen in screen pixels.Depending on r screen ,we render the simplex as a polygonal sphere or cylinder with radius r model ,a 2D point or line with thickness 2r screen ,or do not render it at all.This choice based on r screen can be adjusted to mitigate the overhead of introducing polygonal representations of spheres and cylinders.As an example,Figure 3shows an initial model M of 68,776triangles.One of its approximations M 500is a triangulated model with 3968690principal 012-simplices respectively.3.2Level-of-detail sequenceAs in progressive meshes,from a given triangulated model M =M n ,we define a sequence of approximations M i :M 1op 1M 2op 2M n1op n 1M nHere each model M i has exactly i vertices.The simplification op-erator M ivunify iM i +1is the vertex unification transformation,whichmerges two vertices (Section 3.3),and its inverse M igvspl iM i +1is the generalized vertex split transformation (Section 3.4).Thetuple (M 1gvspl 1gvspl n 1)forms a progressive simplicial complex (PSC)representation of M .To construct a PSC representation,we first determine a sequence of vunify transformations simplifying M down to a single vertex,as described in Section 4.After reversing these transformations,we renumber the simplices in the order that they are created,so thateach gvspl i (a i)splits the vertex a i K i into two vertices a i i +1K i +1.As vertices may have different positions in the different models,we denote the position of j in M i as i j .To better approximate a surface model M at lower complexity levels,we initially associate with each (principal)2-simplex s an area a s equal to its triangle area in M .Then,as the model is simplified,wekeep constant the sum of areas a s associated with principal simplices within each manifold component.When2-simplices are eventually reduced to principal1-simplices and0-simplices,their associated areas will provide good estimates of the original component areas.3.3Vertex unification transformationThe transformation vunify(a i b i midp i):M i M i+1takes an arbitrary pair of vertices a i b i K i+1(simplex a i b i need not be present in K i+1)and merges them into a single vertex a i K i. Model M i is created from M i+1by updating each member of the tuple(K V D A)as follows:K:References to b i in all simplices of K are replaced by refer-ences to a i.More precisely,each simplex s in star(b i)K i+1is replaced by simplex(s b i)a i,which we call the ancestor simplex of s.If this ancestor simplex already exists,s is deleted.V:Vertex b is deleted.For simplicity,the position of the re-maining(unified)vertex is set to either the midpoint or is left unchanged.That is,i a=(i+1a+i+1b)2if the boolean parameter midp i is true,or i a=i+1a otherwise.D:Materials are carried through as expected.So,if after the vertex unification an ancestor simplex(s b i)a i K i is a new principal simplex,it receives its material from s K i+1if s is a principal simplex,or else from the single parent s a i K i+1 of s.A:To maintain the initial areas of manifold components,the areasa s of deleted principal simplices are redistributed to manifold-adjacent neighbors.More concretely,the area of each princi-pal d-simplex s deleted during the K update is distributed toa manifold-adjacent d-simplex not in star(a ib i).If no suchneighbor exists and the ancestor of s is a principal simplex,the area a s is distributed to that ancestor simplex.Otherwise,the manifold component(star(a i b i))of s is being squashed be-tween two other manifold components,and a s is discarded. 3.4Generalized vertex split transformation Constructing the PSC representation involves recording the infor-mation necessary to perform the inverse of each vunify i.This inverse is the generalized vertex split gvspl i,which splits a0-simplex a i to introduce an additional0-simplex b i.(As mentioned previously, renumbering of simplices implies b i i+1,so index b i need not be stored explicitly.)Each gvspl i record has the formgvspl i(a i C K i midp i()i C D i C A i)and constructs model M i+1from M i by updating the tuple (K V D A)as follows:K:As illustrated in Figure4,any simplex adjacent to a i in K i can be the vunify result of one of four configurations in K i+1.To construct K i+1,we therefore replace each ancestor simplex s star(a i)in K i by either(1)s,(2)(s a i)i+1,(3)s and(s a i)i+1,or(4)s,(s a i)i+1and s i+1.The choice is determined by a split code associated with s.Thesesplit codes are stored as a code string C Ki ,in which the simplicesstar(a i)are sortedfirst in order of increasing dimension,and then in order of increasing simplex id,as shown in Figure5. V:The new vertex is assigned position i+1i+1=i ai+()i.Theother vertex is given position i+1ai =i ai()i if the boolean pa-rameter midp i is true;otherwise its position remains unchanged.D:The string C Di is used to assign materials d s for each newprincipal simplex.Simplices in C Di ,as well as in C Aibelow,are sorted by simplex dimension and simplex id as in C Ki. A:During reconstruction,we are only interested in the areas a s fors01(K).The string C Ai tracks changes in these areas.Figure4:Effects of split codes on simplices of various dimensions.code string:41422312{}Figure5:Example of split code encoding.3.5PropertiesLevels of detail A graphics application can efficiently transitionbetween models M1M n at runtime by performing a sequence ofvunify or gvspl transformations.Our current research prototype wasnot designed for efficiency;it attains simplification rates of about6000vunify/sec and refinement rates of about5000gvspl/sec.Weexpect that a careful redesign using more efficient data structureswould significantly improve these rates.Geomorphs As in the PM representation,there exists a corre-spondence between the vertices of the models M1M n.Given acoarser model M c and afiner model M f,1c f n,each vertexj K f corresponds to a unique ancestor vertex f c(j)K cfound by recursively traversing the ancestor simplex relations:f c(j)=j j cf c(a j1)j cThis correspondence allows the creation of a smooth visual transi-tion(geomorph)M G()such that M G(1)equals M f and M G(0)looksidentical to M c.The geomorph is defined as the modelM G()=(K f V G()D f A G())in which each vertex position is interpolated between its originalposition in V f and the position of its ancestor in V c:Gj()=()fj+(1)c f c(j)However,we must account for the special rendering of principalsimplices of dimension0and1(Section3.1).For each simplexs01(K f),we interpolate its area usinga G s()=()a f s+(1)a c swhere a c s=0if s01(K c).In addition,we render each simplexs01(K c)01(K f)using area a G s()=(1)a c s.The resultinggeomorph is visually smooth even as principal simplices are intro-duced,removed,or change dimension.The accompanying video demonstrates a sequence of such geomorphs.Progressive transmission As with PM’s,the PSC representa-tion can be progressively transmitted by first sending M 1,followed by the gvspl records.Unlike the base mesh of the PM,M 1always consists of a single vertex,and can therefore be sent in a fixed-size record.The rendering of lower-dimensional simplices as spheres and cylinders helps to quickly convey the overall shape of the model in the early stages of transmission.Model compression Although PSC gvspl are more general than PM vsplit transformations,they offer a surprisingly concise representation of M .Table 1lists the average number of bits re-quired to encode each field of the gvspl records.Using arithmetic coding [30],the vertex id field a i requires log 2i bits,and the boolean parameter midp i requires 0.6–0.9bits for our models.The ()i delta vector is quantized to 16bitsper coordinate (48bits per),and stored as a variable-length field [7,13],requiring about 31bits on average.At first glance,each split code in the code string C K i seems to have 4possible outcomes (except for the split code for 0-simplex a i which has only 2possible outcomes).However,there exist constraints between these split codes.For example,in Figure 5,the code 1for 1-simplex id 1implies that 2-simplex id 1also has code 1.This in turn implies that 1-simplex id 2cannot have code 2.Similarly,code 2for 1-simplex id 3implies a code 2for 2-simplex id 2,which in turn implies that 1-simplex id 4cannot have code 1.These constraints,illustrated in the “scoreboard”of Figure 6,can be summarized using the following two rules:(1)If a simplex has split code c12,all of its parents havesplit code c .(2)If a simplex has split code 3,none of its parents have splitcode 4.As we encode split codes in C K i left to right,we apply these two rules (and their contrapositives)transitively to constrain the possible outcomes for split codes yet to be ing arithmetic coding with uniform outcome probabilities,these constraints reduce the code string length in Figure 6from 15bits to 102bits.In our models,the constraints reduce the code string from 30bits to 14bits on average.The code string is further reduced using a non-uniform probability model.We create an array T [0dim ][015]of encoding tables,indexed by simplex dimension (0..dim)and by the set of possible (constrained)split codes (a 4-bit mask).For each simplex s ,we encode its split code c using the probability distribution found in T [s dim ][s codes mask ].For 2-dimensional models,only 10of the 48tables are non-trivial,and each table contains at most 4probabilities,so the total size of the probability model is small.These encoding tables reduce the code strings to approximately 8bits as shown in Table 1.By comparison,the PM representation requires approximately 5bits for the same information,but of course it disallows topological changes.To provide more intuition for the efficiency of the PSC repre-sentation,we note that capturing the connectivity of an average 2-manifold simplicial complex (n vertices,3n edges,and 2n trian-gles)requires ni =1(log 2i +8)n (log 2n +7)bits with PSC encoding,versus n (12log 2n +95)bits with a traditional one-way incidence graph representation.For improved compression,it would be best to use a hybrid PM +PSC representation,in which the more concise PM vertex split encoding is used when the local neighborhood is an orientableFigure 6:Constraints on the split codes for the simplices in the example of Figure 5.Table 1:Compression results and construction times.Object#verts Space required (bits/n )Trad.Con.n K V D Arepr.time a i C K i midp i (v )i C D i C Ai bits/n hrs.drumset 34,79412.28.20.928.1 4.10.453.9146.1 4.3destroyer 83,79913.38.30.723.1 2.10.347.8154.114.1chandelier 36,62712.47.60.828.6 3.40.853.6143.6 3.6schooner 119,73413.48.60.727.2 2.5 1.353.7148.722.2sandal 4,6289.28.00.733.4 1.50.052.8123.20.4castle 15,08211.0 1.20.630.70.0-43.5-0.5cessna 6,7959.67.60.632.2 2.50.152.6132.10.5harley 28,84711.97.90.930.5 1.40.453.0135.7 3.52-dimensional manifold (this occurs on average 93%of the time in our examples).To compress C D i ,we predict the material for each new principalsimplex sstar(a i )star(b i )K i +1by constructing an ordered set D s of materials found in star(a i )K i .To improve the coding model,the first materials in D s are those of principal simplices in star(s )K i where s is the ancestor of s ;the remainingmaterials in star(a i )K i are appended to D s .The entry in C D i associated with s is the index of its material in D s ,encoded arithmetically.If the material of s is not present in D s ,it is specified explicitly as a global index in D .We encode C A i by specifying the area a s for each new principalsimplex s 01(star(a i )star(b i ))K i +1.To account for this redistribution of area,we identify the principal simplex from which s receives its area by specifying its index in 01(star(a i ))K i .The column labeled in Table 1sums the bits of each field of the gvspl records.Multiplying by the number n of vertices in M gives the total number of bits for the PSC representation of the model (e.g.500KB for the destroyer).By way of compari-son,the next column shows the number of bits per vertex required in a traditional “IndexedFaceSet”representation,with quantization of 16bits per coordinate and arithmetic coding of face materials (3n 16+2n 3log 2n +materials).4PSC CONSTRUCTIONIn this section,we describe a scheme for iteratively choosing pairs of vertices to unify,in order to construct a PSC representation.Our algorithm,a generalization of [13],is time-intensive,seeking high quality approximations.It should be emphasized that many quality metrics are possible.For instance,the quadric error metric recently introduced by Garland and Heckbert [9]provides a different trade-off of execution speed and visual quality.As in [13,20],we first compute a cost E for each candidate vunify transformation,and enter the candidates into a priority queueordered by ascending cost.Then,in each iteration i =n 11,we perform the vunify at the front of the queue and update the costs of affected candidates.4.1Forming set of candidate vertex pairs In principle,we could enter all possible pairs of vertices from M into the priority queue,but this would be prohibitively expensive since simplification would then require at least O(n2log n)time.Instead, we would like to consider only a smaller set of candidate vertex pairs.Naturally,should include the1-simplices of K.Additional pairs should also be included in to allow distinct connected com-ponents of M to merge and to facilitate topological changes.We considered several schemes for forming these additional pairs,in-cluding binning,octrees,and k-closest neighbor graphs,but opted for the Delaunay triangulation because of its adaptability on models containing components at different scales.We compute the Delaunay triangulation of the vertices of M, represented as a3-dimensional simplicial complex K DT.We define the initial set to contain both the1-simplices of K and the subset of1-simplices of K DT that connect vertices in different connected components of K.During the simplification process,we apply each vertex unification performed on M to as well in order to keep consistent the set of candidate pairs.For models in3,star(a i)has constant size in the average case,and the overall simplification algorithm requires O(n log n) time.(In the worst case,it could require O(n2log n)time.)4.2Selecting vertex unifications fromFor each candidate vertex pair(a b),the associated vunify(a b):M i M i+1is assigned the costE=E dist+E disc+E area+E foldAs in[13],thefirst term is E dist=E dist(M i)E dist(M i+1),where E dist(M)measures the geometric accuracy of the approximate model M.Conceptually,E dist(M)approximates the continuous integralMd2(M)where d(M)is the Euclidean distance of the point to the closest point on M.We discretize this integral by defining E dist(M)as the sum of squared distances to M from a dense set of points X sampled from the original model M.We sample X from the set of principal simplices in K—a strategy that generalizes to arbitrary triangulated models.In[13],E disc(M)measures the geometric accuracy of disconti-nuity curves formed by a set of sharp edges in the mesh.For the PSC representation,we generalize the concept of sharp edges to that of sharp simplices in K—a simplex is sharp either if it is a boundary simplex or if two of its parents are principal simplices with different material identifiers.The energy E disc is defined as the sum of squared distances from a set X disc of points sampled from sharp simplices to the discontinuity components from which they were sampled.Minimization of E disc therefore preserves the geom-etry of material boundaries,normal discontinuities(creases),and triangulation boundaries(including boundary curves of a surface and endpoints of a curve).We have found it useful to introduce a term E area that penalizes surface stretching(a more sophisticated version of the regularizing E spring term of[13]).Let A i+1N be the sum of triangle areas in the neighborhood star(a i)star(b i)K i+1,and A i N the sum of triangle areas in star(a i)K i.The mean squared displacement over the neighborhood N due to the change in area can be approx-imated as disp2=12(A i+1NA iN)2.We let E area=X N disp2,where X N is the number of points X projecting in the neighborhood. To prevent model self-intersections,the last term E fold penalizes surface folding.We compute the rotation of each oriented triangle in the neighborhood due to the vertex unification(as in[10,20]).If any rotation exceeds a threshold angle value,we set E fold to a large constant.Unlike[13],we do not optimize over the vertex position i a, but simply evaluate E for i a i+1a i+1b(i+1a+i+1b)2and choose the best one.This speeds up the optimization,improves model compression,and allows us to introduce non-quadratic energy terms like E area.5RESULTSTable1gives quantitative results for the examples in thefigures and in the video.Simplification times for our prototype are measured on an SGI Indigo2Extreme(150MHz R4400).Although these times may appear prohibitive,PSC construction is an off-line task that only needs to be performed once per model.Figure9highlights some of the benefits of the PSC representa-tion.The pearls in the chandelier model are initially disconnected tetrahedra;these tetrahedra merge and collapse into1-d curves in lower-complexity approximations.Similarly,the numerous polyg-onal ropes in the schooner model are simplified into curves which can be rendered as line segments.The straps of the sandal model initially have some thickness;the top and bottom sides of these straps merge in the simplification.Also note the disappearance of the holes on the sandal straps.The castle example demonstrates that the original model need not be a mesh;here M is a1-dimensional non-manifold obtained by extracting edges from an image.6RELATED WORKThere are numerous schemes for representing and simplifying tri-angulations in computer graphics.A common special case is that of subdivided2-manifolds(meshes).Garland and Heckbert[12] provide a recent survey of mesh simplification techniques.Several methods simplify a given model through a sequence of edge col-lapse transformations[10,13,14,20].With the exception of[20], these methods constrain edge collapses to preserve the topological type of the model(e.g.disallow the collapse of a tetrahedron into a triangle).Our work is closely related to several schemes that generalize the notion of edge collapse to that of vertex unification,whereby separate connected components of the model are allowed to merge and triangles may be collapsed into lower dimensional simplices. Rossignac and Borrel[21]overlay a uniform cubical lattice on the object,and merge together vertices that lie in the same cubes. Schaufler and St¨u rzlinger[22]develop a similar scheme in which vertices are merged using a hierarchical clustering algorithm.Lue-bke[18]introduces a scheme for locally adapting the complexity of a scene at runtime using a clustering octree.In these schemes, the approximating models correspond to simplicial complexes that would result from a set of vunify transformations(Section3.3).Our approach differs in that we order the vunify in a carefully optimized sequence.More importantly,we define not only a simplification process,but also a new representation for the model using an en-coding of gvspl=vunify1transformations.Recent,independent work by Schmalstieg and Schaufler[23]de-velops a similar strategy of encoding a model using a sequence of vertex split transformations.Their scheme differs in that it tracks only triangles,and therefore requires regular,2-dimensional trian-gulations.Hence,it does not allow lower-dimensional simplices in the model approximations,and does not generalize to higher dimensions.Some simplification schemes make use of an intermediate vol-umetric representation to allow topological changes to the model. He et al.[11]convert a mesh into a binary inside/outside function discretized on a three-dimensional grid,low-passfilter this function,。
河南省平顶山市叶县高级中学2024-2025学年高二上学期9月月考英语试卷

河南省平顶山市叶县高级中学2024-2025学年高二上学期9月月考英语试卷一、听力选择题1.What did the woman buy for her mum?A.A hat.B.A coat.C.A T- shirt.2.What does the man like doing?A.Travelling alone.B.Joining a guided tour.C.Backpacking with friends. 3.Why is the woman broke at the end of the month?A.She likes shopping.B.She doesn't work hard.C.She earns little money. 4.What time will the man’s party probably start?A.At 7: 30 p.m.B.At 8: 00 p.m.C.At 11: 00 p.m.5.Where are the speakers probably?A.In a hospital.B.In the police office.C.On the street.听下面一段较长对话,回答以下小题。
6.What should the woman do to order checks?A.Wait in a line.B.Fill in a form.C.Check the mail.7.When will the woman probably get the check?A.In two days.B.In four days.C.In a week.听下面一段较长对话,回答以下小题。
8.What is the man’s attitude towards art class?A.Favourable.B.Unconcerned.C.Worried.9.What does the woman mean about talent?A.She wants to be a painter too.B.She knows how to draw and paint.C.She hopes she could have some kind of talent.10.What are the speakers mainly talking about?A.The man’s hobby.B.The talent of the woman.C.The woman’s favourite class.听下面一段较长对话,回答以下小题。
研究NLP100篇必读的论文---已整理可直接下载

研究NLP100篇必读的论⽂---已整理可直接下载100篇必读的NLP论⽂⾃⼰汇总的论⽂集,已更新链接:提取码:x7tnThis is a list of 100 important natural language processing (NLP) papers that serious students and researchers working in the field should probably know about and read.这是100篇重要的⾃然语⾔处理(NLP)论⽂的列表,认真的学⽣和研究⼈员在这个领域应该知道和阅读。
This list is compiled by .本榜单由编制。
I welcome any feedback on this list. 我欢迎对这个列表的任何反馈。
This list is originally based on the answers for a Quora question I posted years ago: .这个列表最初是基于我多年前在Quora上发布的⼀个问题的答案:[所有NLP学⽣都应该阅读的最重要的研究论⽂是什么?]( -are-the-most-important-research-paper -which-all-NLP-students-should- definitread)。
I thank all the people who contributed to the original post. 我感谢所有为原创⽂章做出贡献的⼈。
This list is far from complete or objective, and is evolving, as important papers are being published year after year.由于重要的论⽂年复⼀年地发表,这份清单还远远不够完整和客观,⽽且还在不断发展。
美国留学盘点有机化学领域的大牛教授

在整个美国留学申请过程中,一定要了解该领域的大学教授。
作者申请了美国的有机化学专业,所以将自己了解的一些有机化学方向的美国大学教授进行简单介绍。
其实这些资料大家仔细去读各位教授的主页和paper都会了解的很清楚,这里相当于起到一个汇总以及抛砖引玉的作用。
再次声称本文观点为个人观点,各种信息来自于互联网以及文献。
学校的排序参考了US News 2011版本的Organic Chemistry 的排名,不代表本人观点。
所有查阅的教授的研究方向均为有机化学方向,chemical biology 以及biochemistry 不在范围之内。
大体来看,有机化学方向的申请主要集中在全合成以及方法学这两个小方向上面,而很多group 的研究方向都是同时涵盖了这两个方向,所以选择的余地相对较大。
一些常见的缩写列举如下。
JACS = the Journal of American Chemical SocietyACIE = Angewandte Chemie International EditionCNS paper = Cell paper、Nature paper and Science paperAP = Associate Professor or Assistant Professor,这两个词本身有很大区别。
Harvard UniversityHarvard is Harvard。
即便我已然非常满意自己今年的申请结果,每当说这句话的时候,我还是会多少有那么一点点的不甘心。
当然,这句话第一次从别人那里听到是在当时UPenn 的某个宣讲会上。
Prof. Kozlowski (postdoc 跟的是Evans)说到这句话的语气似乎透着无限的羡慕与怨念。
所以,就算当年看着Harvard CCB 的faculty list 里面那为数不多的几个做有机的教授的时候,你还是很难抵挡得住Harvard 这个词本身所带来的太多诱惑。
3维任意域内点集的Delaunay四面体化研究
第12卷第1I期中国图象图形学报V01.12.No.112007年I1月JournalofImageandGraphicsNov..20073维任意域内点集的Delaunay四面体化研究吴江斌”(华东建筑设计研究院.上海200002)朱合华2’2’(同济大学地下建筑与工程系.上毒200092)摘要Delaunay空球准则广瑟应用于3维四面体剖分算法,但标准的Delaunay四面体化只适用于点集的凸包区域,且要求不存在多点共球。
为了将Delaunay四面体化更广泛地应用于网络剖分,通过引人局部优化三角形面代替Deluany严格的空球准则,提出了3维任意域内点集Dduanay四面体化(DTETAD)的概念.并首先通过若干关键定理的证明,研究了一个四面体划分是DETEAD的充要条件,然后建立了DTETAD的空球准则。
该研究成果为拓展Dalaunay算法在更广泛范围的应用提供了理论依据。
关麓词Delaunay四面体化3维任意域中圈法分类号:TP391文献标识码:A文章编号:1006·896l(2007)11-2109-05DelaunayTetrahedralizationinanArbitraryDomainWUJiang.bin”。
ZHUHe.hua2’11(EastChinaArchitecture%"Institute.Shaaglmi200002)2’(Depanmemof&dmnhlEngineeringof%∞iⅡ‘{钟,咖r。
%4n一“200092)AbstractTheDalaunaycriterionoftheemptysphereiswidelyusedfor3dimensionalt眦ralle击mntessellation.ButoriginalDelaunaytetrahedralizationnotbeusedforthepointssetwithconstrainedboundaryandthedegeneratepointssetinwhichfourmort!pointsa托coplanarinwhichtivemo佬pointseo叩hcdc且1.TheconceptofDelaunaytctrahedralizationinarbitrarydomain(DTETAD)ispresentedbasedthedefinitionoflocaloptimizedtriangulationwhichisbrou出OUttosubstitutethestrictemptyspherecriterionofDelaunay.ThesufficientandnecessaryconditionfortetrahedralizationtobeDTETADa∞proved.andtheconditionalemptyspherecriterionofDTETADispresented.Theresearchestablisbe*thetheoreticfoundationfortheapplication0fDelaunayinanarbitrarydomain,KeywordsDelaunay.tetra]led叫ization.3-dimensional.arbitrarydomain引言Delaunay划分(Delaunaytessellation,DT)在2维三角化与3维四面体化中占有举足轻重的地位,其已广泛应用于网格剖分、几何实体造型、GIS领域”。
3D Alpha-Shape实现包文档说明书
Package‘alphashape3d’January24,2023Version1.3.2Date2023-01-24Title Implementation of the3D Alpha-Shape for the Reconstruction of3D Sets from a Point CloudAuthor Thomas Lafarge,Beatriz Pateiro-LopezMaintainer Beatriz Pateiro-Lopez<**********************>Depends geometry,rglImports RANNSuggests alphahullDescription Implementation in R of the alpha-shape of afinite set of points in the three-dimensional space.The alpha-shape generalizes the convex hull and allows to re-cover the shape of non-convex and even non-connected sets in3D,given a random sam-ple of points taken into it.Besides the computation of the alpha-shape,this package pro-vides users with functions to compute the volume of the alpha-shape,identify the connected com-ponents and facilitate the three-dimensional graphical visualization of the estimated set. License GPL-2NeedsCompilation yesRepository CRANDate/Publication2023-01-2414:10:02UTCR topics documented:ashape3d (2)components_ashape3d (4)inashape3d (5)plot.ashape3d (6)rtorus (7)surfaceNormals (8)volume_ashape3d (9)Index111ashape3d3Dα-shape computationDescriptionThis function calculates the3Dα-shape of a given sample of points in the three-dimensional space forα>0.Usageashape3d(x,alpha,pert=FALSE,eps=1e-09)Argumentsx A3-column matrix with the coordinates of the input points.Alternatively,an object of class"ashape3d"can be provided,see Details.alpha A single value or vector of values forα.pert Logical.If the input points are not in general position and pert it set to TRUE the observations are perturbed by adding random noise,see Details.eps Scaling factor used for data perturbation when the input points are not in general position,see Details.DetailsIf x is an object of class"ashape3d",then ashape3d does not recompute the3D Delaunay triangu-lation(it reduces the computational cost).If the input points x are not in general position and pert is set to TRUE,the function adds random noise to the data.The noise is generated from a normal distribution with mean zero and standard deviation eps*sd(x).ValueAn object of class"ashape3d"with the following components(see Edelsbrunner and Mucke(1994) for notation):tetra For each tetrahedron of the3D Delaunay triangulation,the matrix tetra stores the indices of the sample points defining the tetrahedron(columns1to4),a valuethat defines the intervals for which the tetrahedron belongs to theα-complex(column5)and for eachαa value(1or0)indicating whether the tetrahedronbelongs to theα-shape(columns6to last).triang For each triangle of the3D Delaunay triangulation,the matrix triang stores the indices of the sample points defining the triangle(columns1to3),a value(1or0)indicating whether the triangle is on the convex hull(column4),a value(1or0)indicating whether the triangle is attached or unattached(column5),values that define the intervals for which the triangle belongs to theα-complex(columns6to8)and for eachαa value(0,1,2or3)indicating,respectively,thatthe triangle is not in theα-shape or it is interior,regular or singular(columns9to last).As defined in Edelsbrunner and Mucke(1994),a simplex in theα-complex is interior if it does not belong to the boundary of theα-shape.Asimplex in theα-complex is regular if it is part of the boundary of theα-shapeand bounds some higher-dimensional simplex in theα-complex.A simplex intheα-complex is singular if it is part of the boundary of theα-shape but doesnot bounds any higher-dimensional simplex in theα-complex.edge For each edge of the3D Delaunay triangulation,the matrix edge stores the indices of the sample points defining the edge(columns1and2),a value(1or0)indicating whether the edge is on the convex hull(column3),a value(1or0)indicating whether the edge is attached or unattached(column4),values thatdefine the intervals for which the edge belongs to theα-complex(columns5to7)and for eachαa value(0,1,2or3)indicating,respectively,that the edge isnot in theα-shape or it is interior,regular or singular(columns8to last).vertex For each sample point,the matrix vertex stores the index of the point(column1),a value(1or0)indicating whether the point is on the convex hull(column2),values that define the intervals for which the point belongs to theα-complex(columns3and4)and for eachαa value(1,2or3)indicating,respectively,ifthe point is interior,regular or singular(columns5to last).x A3-column matrix with the coordinates of the original sample points.alpha A single value or vector of values ofα.xpert A3-column matrix with the coordinates of the perturbated sample points(only when the input points are not in general position and pert is set to TRUE).ReferencesEdelsbrunner,H.,Mucke,E.P.(1994).Three-Dimensional Alpha Shapes.ACM Transactions on Graphics,13(1),pp.43-72.ExamplesT1<-rtorus(1000,0.5,2)T2<-rtorus(1000,0.5,2,ct=c(2,0,0),rotx=pi/2)x<-rbind(T1,T2)#Value of alphaalpha<-0.25#3D alpha-shapeashape3d.obj<-ashape3d(x,alpha=alpha)plot(ashape3d.obj)#For new values of alpha,we can use ashape3d.obj as input(faster)alpha<-c(0.15,1)ashape3d.obj<-ashape3d(ashape3d.obj,alpha=alpha)plot(ashape3d.obj,indexAlpha=2:3)4components_ashape3d components_ashape3d Connected subsets computationDescriptionThis function calculates and clusters the different connected components of theα-shape of a given sample of points in the three-dimensional space.Usagecomponents_ashape3d(as3d,indexAlpha=1)Argumentsas3d An object of class"ashape3d"that represents theα-shape of a given sample of points in the three-dimensional space,see ashape3d.indexAlpha A single value or vector with the indexes of as3d$alpha that should be used for the computation,see Details.DetailsThe function components_ashape3d computes the connected components of theα-shape for each value ofαin as3d$alpha[indexAlpha]when indexAlpha is numeric.If indexAlpha="all"or indexAlpha="ALL"then the function computes the connected compo-nents of theα-shape for all values ofαin as3d$alpha.ValueIf indexAlpha is a single value then the function returns a vector v of length equal to the sample size.For each sample point i,v[i]represents the label of the connected component to which the point belongs(for isolated points,v[i]=-1).The labels of the connected components are ordered by size where the largest one(in number of vertices)gets the smallest label which is one.Otherwise components_ashape3d returns a list of vectors describing the connected components of theα-shape for each selected value ofα.See Alsoashape3d,plot.ashape3dExamplesT1<-rtorus(1000,0.5,2)T2<-rtorus(1000,0.5,2,ct=c(2,0,0),rotx=pi/2)x<-rbind(T1,T2)alpha<-c(0.25,2)ashape3d.obj<-ashape3d(x,alpha=alpha)inashape3d5plot(ashape3d.obj,indexAlpha="all")#Connected components of the alpha-shape for both values of alphacomp<-components_ashape3d(ashape3d.obj,indexAlpha="all")class(comp)#Number of components and points in each component for alpha=0.25table(comp[[1]])#Number of components and points in each component for alpha=2table(comp[[2]])#Plot the connected components for alpha=0.25plot(ashape3d.obj,byComponents=TRUE,indexAlpha=1)inashape3d Test of the inside of anα-shapeDescriptionThis function checks whether points are inside anα-shape.Usageinashape3d(as3d,indexAlpha=1,points)Argumentsas3d An object of class"ashape3d"that represents theα-shape of a given sample of points in the three-dimensional space,see ashape3d.indexAlpha A single value or vector with the indexes of as3d$alpha that should be used for the computation,see Details.points A3-column matrix with the coordinates of the input points.DetailsThe function inashape3d checks whether each point in points is inside theα-shape for each value ofαin as3d$alpha[indexAlpha].If indexAlpha="all"or indexAlpha="ALL"then the function checks whether each point in points is inside theα-shape for all values ofαin as3d$alpha.ValueIf indexAlpha is a single value then the function returns a vector of boolean of length the number of input points.The element at position i is TRUE if the point in points[i,]is inside theα-shape.Otherwise inashape3d returns a list of vectors of boolean values(each object in the list as described above).6plot.ashape3dSee Alsoashape3dExamplesT1<-rtorus(2000,0.5,2)T2<-rtorus(2000,0.5,2,ct=c(2,0,0),rotx=pi/2)x<-rbind(T1,T2)ashape3d.obj<-ashape3d(x,alpha=0.4)#Random sample of points in a planepoints<-matrix(c(5*runif(10000)-2.5,rep(0.01,5000)),nc=3)in3d<-inashape3d(ashape3d.obj,points=points)plot(ashape3d.obj,transparency=0.2)colors<-ifelse(in3d,"blue","green")points3d(points,col=colors)plot.ashape3d Plot theα-shape in3DDescriptionThis function plots theα-shape in3D using the package rgl.Usage##S3method for class ashape3dplot(x,clear=TRUE,col=c(2,2,2),byComponents=FALSE,indexAlpha=1,transparency=1,walpha=FALSE,triangles=TRUE,edges=TRUE,vertices=TRUE,...) Argumentsx An object of class"ashape3d"that represents theα-shape of a given sample of points in the three-dimensional space,see ashape3d.clear Logical,specifying whether the current rgl device should be cleared.col A vector of length three specifying the colors of the triangles,edges and vertices composing theα-shape,respectively.byComponents Logical,if TRUE the connected components of theα-shape are represented in different colors,see components_ashape3d.indexAlpha A single value or vector with the indexes of x$alpha that should be used for the computation,see Details.transparency The coefficient of transparency,from0(transparent)to1(opaque),used to plot theα-shape.walpha Logical,if TRUE the value ofαis displayed in the rgl device.rtorus7triangles Logical,if TRUE triangles are plotted.edges Logical,if TRUE edges are plotted.vertices Logical,if TRUE vertices are plotted....Material properties.See material3d for details.DetailsThe function plot.ashape3d opens a rgl device for each value ofαin x$alpha[indexAlpha].Device information is displayed in the console.If indexAlpha="all"or indexAlpha="ALL"then the function represents theα-shape for all values ofαin as3d$alpha.See Alsoashape3d,components_ashape3dExamplesT1<-rtorus(1000,0.5,2)T2<-rtorus(1000,0.5,2,ct=c(2,0,0),rotx=pi/2)x<-rbind(T1,T2)alpha<-c(0.15,0.25,1)ashape3d.obj<-ashape3d(x,alpha=alpha)#Plot the alpha-shape for all values of alphaplot(ashape3d.obj,indexAlpha="all")#Plot the connected components of the alpha-shape for alpha=0.25plot(ashape3d.obj,byComponents=TRUE,indexAlpha=2)rtorus Generate points in the torusDescriptionThis function generates n random points within the torus whose minor radius is r,major radius is R and center is ct.Usagertorus(n,r,R,ct=c(0,0,0),rotx=NULL)8surfaceNormalsArgumentsn Number of observations.r Minor radius(radius of the tube).R Major radius(distance from the center of the tube to the center of the torus).ct A vector with the coordinates of the center of the torus.rotx If not NULL,a rotation through an angle rotx(in radians)about the x-axis is performed.ExamplesT1<-rtorus(2000,0.5,2.5)bbox3d(color=c("white","black"))points3d(T1,col=4)T2<-rtorus(2000,0.5,2.5,ct=c(2,0,0.5),rotx=pi/2)points3d(T2,col=2)surfaceNormals Normal vectors computationDescriptionThis function calculates the normal vectors of all the triangles which belong to the boundary of the α-shape.UsagesurfaceNormals(x,indexAlpha=1,display=FALSE,col=3,scale=1,...)Argumentsx An object of class"ashape3d"that represents theα-shape of a given sample of points in the three-dimensional space,see ashape3d.indexAlpha A single value or vector with the indexes of as3d$alpha that should be used for the computation,see Details.display Logical,if TRUE,surfaceNormals open a new rgl device and display the re-latedα-shape and its normals vectors.col Color of the normal vectors.scale Scale parameter to control the length of the surface normals,only affect display....Material properties.See material3d for details.DetailsThe function surfaceNormals computes the normal vectors of all the triangles which belong to the boundary of theα-shape for each value ofαin x$alpha[indexAlpha].The magnitude of each vector is equal to the area of its associated triangle.If indexAlpha="all"or indexAlpha="ALL"then the function computes the normal vectors for all values ofαin as3d$alpha.ValueIf indexAlpha is a single value then the function returns an object of class"normals"with the following components:normals Three-column matrix with the euclidean coordinates of the normal vectors.centers Three-column matrix with the euclidean coordinates of the centers of the trian-gles that form theα-shape.Otherwise surfaceNormals returns a list of class"normals-List"(each object in the list as de-scribed above).See Alsoashape3dExamplesx<-rtorus(1000,0.5,1)alpha<-0.25ashape3d.obj<-ashape3d(x,alpha=alpha)surfaceNormals(ashape3d.obj,display=TRUE)volume_ashape3d Volume computationDescriptionThis function calculates the volume of theα-shape of a given sample of points in the three-dimensional space.Usagevolume_ashape3d(as3d,byComponents=FALSE,indexAlpha=1)Argumentsas3d An object of class"ashape3d"that represents theα-shape of a given sample ofpoints in the three-dimensional space,see ashape3d.byComponents Logical,if FALSE(default)volume_ashape3d computes the volume of thewholeα-shape.If TRUE,volume_ashape3d computes the volume of each con-nected component of theα-shape separately.indexAlpha A single value or vector with the indexes of as3d$alpha that should be used forthe computation,see Details.DetailsThe function volume_ashape3d computes the volume of theα-shape for each value ofαin as3d$alpha[indexAlpha] when indexAlpha is numeric.If indexAlpha="all"or indexAlpha="ALL"then the function computes the volume of theα-shapefor all values ofαin as3d$alpha.ValueIf indexAlpha is a single value then the function returns the volume of theα-shape(if the argumentbyComponents is set to FALSE)or a vector with the volumes of each connected component of theα-shape(if the argument byComponents is set to TRUE).Otherwise volume_ashape3d returns a list(each object in the list as described above).See Alsoashape3d,components_ashape3dExamplesC1<-matrix(runif(6000),nc=3)C2<-matrix(runif(6000),nc=3)+2x<-rbind(C1,C2)ashape3d.obj<-ashape3d(x,alpha=0.75)plot(ashape3d.obj,byComponents=TRUE)#Compute the volume of the alpha-shapevolume_ashape3d(ashape3d.obj)#Compute the volumes of the connected components of the alpha-shapevolume_ashape3d(ashape3d.obj,byComponents=TRUE)Indexashape3d,2,4–10components_ashape3d,4,6,7,10inashape3d,5material3d,7,8plot.ashape3d,4,6rgl,6rtorus,7surfaceNormals,8volume_ashape3d,911。
河网汇水区域的层次化剖分与地图综合
第36卷 第2期测 绘 学 报V ol .36,N o .22007年5月ACT A GEODAETI CA et CART OGRAPH ICA SI N I CAM ay ,2007文章编号:1001-1595(2007)02-0231-07中图分类号:P283.1 文献标识码:A河网汇水区域的层次化剖分与地图综合艾廷华,刘耀林,黄亚锋(武汉大学资源与环境科学学院地理信息系统教育部重点实验室,湖北武汉430079)Th e H ie rarch ica l W a te rsh ed Part it ion ing and G en e ra liza t ion o f R iv e r Ne tw o rkA I T ing -hu a ,LI U Y ao -lin ,HUANG Y a -feng(K ey Lab oratory of G eograph ic Inf or m a tion Syste m,School of R esource and Environ m e n t S ciences ,W uhan Un i versit y ,W uhan 430079,C hina)Ab st ra c t :For the genera liza tion of riv er ne t work ,the decis ion o f role i m portance of riv er b ranches p lay ing in ca tch -m en t ha s to cons ider three a spects in d iffe rent leve ls :the spa tia l d istribu tion pa ttern a t m a cro leve l ,the d istribu tion den s ity a t m eso lev el and the ind iv idua l geo m etric p roperties a t m icro leve l .To extra ct such stru ctured in for m a tion ,th is study trie s to bu ild the m ode l o f w a tershed h iera rch ica l pa rtit ion ing ba sed on D e launay triangu la tion.The dete r-m ina t ion of w a tershed a rea is reg a rded a s a spa tia l co m p etit ion p roces s and th rough the p a rtition ing s i m ila r to V orono i d iagramto obta in th e ba s in po lygon of each river b ranch w ith in the cove rage of one bran ch sub -net w ork and its ba ckground the D e la unay triangu la t ion is con stru cted and the ske leton is iden tified a s the w a tershed bounda ry.Then a h iera rch ica l re la tion is con stru ctedto exp ress the inclu s ion bet w een d ifferen t lev e l wa tershed s .The h ie ra rch i-ca l pa rtition ing m od el can support to co m pu te the p a ram e ters o f d is tribu tion den s ity ,d istance between ne ighb or riv ers ,the h ie ra rch ica l w a tershed a rea ,wh ich a re the p re-cond itiona l pa ram eters to deter m ine the i m portance of riv -er b ranch in the s i m p lifica tion and se lection of riv er net w ork.The study presents a m ethod to se lect river network ba sed on the w a tershed pa rtition ing m ode l .K e y w o rd s :D e launay triangu la t ion ;m ap g enera liza t ion ;riv er net w ork ;spa tia l ana ly s is摘 要:对于具有网络状结构的河系数据的综合化简,判断河流分支在河网中的重要性需要考虑三个层次的结构信息:全局范围内的空间分布模式;局域环境下的分布密度;单条河流的几何特征。
ReliabilityEngineeringandSystemSafety91(2006)992–1007
Reliability Engineering and System Safety 91(2006)992–1007Multi-objective optimization using genetic algorithms:A tutorialAbdullah Konak a,Ã,David W.Coit b ,Alice E.Smith caInformation Sciences and Technology,Penn State Berks,USA bDepartment of Industrial and Systems Engineering,Rutgers University cDepartment of Industrial and Systems Engineering,Auburn UniversityAvailable online 9January 2006AbstractMulti-objective formulations are realistic models for many complex engineering optimization problems.In many real-life problems,objectives under consideration conflict with each other,and optimizing a particular solution with respect to a single objective can result in unacceptable results with respect to the other objectives.A reasonable solution to a multi-objective problem is to investigate a set of solutions,each of which satisfies the objectives at an acceptable level without being dominated by any other solution.In this paper,an overview and tutorial is presented describing genetic algorithms (GA)developed specifically for problems with multiple objectives.They differ primarily from traditional GA by using specialized fitness functions and introducing methods to promote solution diversity.r 2005Elsevier Ltd.All rights reserved.1.IntroductionThe objective of this paper is present an overview and tutorial of multiple-objective optimization methods using genetic algorithms (GA).For multiple-objective problems,the objectives are generally conflicting,preventing simulta-neous optimization of each objective.Many,or even most,real engineering problems actually do have multiple-objectives,i.e.,minimize cost,maximize performance,maximize reliability,etc.These are difficult but realistic problems.GA are a popular meta-heuristic that is particularly well-suited for this class of problems.Tradi-tional GA are customized to accommodate multi-objective problems by using specialized fitness functions and introducing methods to promote solution diversity.There are two general approaches to multiple-objective optimization.One is to combine the individual objective functions into a single composite function or move all but one objective to the constraint set.In the former case,determination of a single objective is possible with methods such as utility theory,weighted sum method,etc.,but theproblem lies in the proper selection of the weights or utility functions to characterize the decision-maker’s preferences.In practice,it can be very difficult to precisely and accurately select these weights,even for someone familiar with the problem pounding this drawback is that scaling amongst objectives is needed and small perturbations in the weights can sometimes lead to quite different solutions.In the latter case,the problem is that to move objectives to the constraint set,a constraining value must be established for each of these former objectives.This can be rather arbitrary.In both cases,an optimization method would return a single solution rather than a set of solutions that can be examined for trade-offs.For this reason,decision-makers often prefer a set of good solutions considering the multiple objectives.The second general approach is to determine an entire Pareto optimal solution set or a representative subset.A Pareto optimal set is a set of solutions that are nondominated with respect to each other.While moving from one Pareto solution to another,there is always a certain amount of sacrifice in one objective(s)to achieve a certain amount of gain in the other(s).Pareto optimal solution sets are often preferred to single solutions because they can be practical when considering real-life problems/locate/ress0951-8320/$-see front matter r 2005Elsevier Ltd.All rights reserved.doi:10.1016/j.ress.2005.11.018ÃCorresponding author.E-mail address:konak@ (A.Konak).since thefinal solution of the decision-maker is always a trade-off.Pareto optimal sets can be of varied sizes,but the size of the Pareto set usually increases with the increase in the number of objectives.2.Multi-objective optimization formulationConsider a decision-maker who wishes to optimize K objectives such that the objectives are non-commensurable and the decision-maker has no clear preference of the objectives relative to each other.Without loss of generality, all objectives are of the minimization type—a minimization type objective can be converted to a maximization type by multiplying negative one.A minimization multi-objective decision problem with K objectives is defined as follows: Given an n-dimensional decision variable vector x¼{x1,y,x n}in the solution space X,find a vector x* that minimizes a given set of K objective functions z(x*)¼{z1(x*),y,z K(x*)}.The solution space X is gen-erally restricted by a series of constraints,such as g j(x*)¼b j for j¼1,y,m,and bounds on the decision variables.In many real-life problems,objectives under considera-tion conflict with each other.Hence,optimizing x with respect to a single objective often results in unacceptable results with respect to the other objectives.Therefore,a perfect multi-objective solution that simultaneously opti-mizes each objective function is almost impossible.A reasonable solution to a multi-objective problem is to investigate a set of solutions,each of which satisfies the objectives at an acceptable level without being dominated by any other solution.If all objective functions are for minimization,a feasible solution x is said to dominate another feasible solution y (x1y),if and only if,z i(x)p z i(y)for i¼1,y,K and z j(x)o z j(y)for least one objective function j.A solution is said to be Pareto optimal if it is not dominated by any other solution in the solution space.A Pareto optimal solution cannot be improved with respect to any objective without worsening at least one other objective.The set of all feasible non-dominated solutions in X is referred to as the Pareto optimal set,and for a given Pareto optimal set,the corresponding objective function values in the objective space are called the Pareto front.For many problems,the number of Pareto optimal solutions is enormous(perhaps infinite).The ultimate goal of a multi-objective optimization algorithm is to identify solutions in the Pareto optimal set.However,identifying the entire Pareto optimal set, for many multi-objective problems,is practically impos-sible due to its size.In addition,for many problems, especially for combinatorial optimization problems,proof of solution optimality is computationally infeasible.There-fore,a practical approach to multi-objective optimization is to investigate a set of solutions(the best-known Pareto set)that represent the Pareto optimal set as well as possible.With these concerns in mind,a multi-objective optimization approach should achieve the following three conflicting goals[1]:1.The best-known Pareto front should be as close aspossible to the true Pareto front.Ideally,the best-known Pareto set should be a subset of the Pareto optimal set.2.Solutions in the best-known Pareto set should beuniformly distributed and diverse over of the Pareto front in order to provide the decision-maker a true picture of trade-offs.3.The best-known Pareto front should capture the wholespectrum of the Pareto front.This requires investigating solutions at the extreme ends of the objective function space.For a given computational time limit,thefirst goal is best served by focusing(intensifying)the search on a particular region of the Pareto front.On the contrary,the second goal demands the search effort to be uniformly distributed over the Pareto front.The third goal aims at extending the Pareto front at both ends,exploring new extreme solutions.This paper presents common approaches used in multi-objective GA to attain these three conflicting goals while solving a multi-objective optimization problem.3.Genetic algorithmsThe concept of GA was developed by Holland and his colleagues in the1960s and1970s[2].GA are inspired by the evolutionist theory explaining the origin of species.In nature,weak and unfit species within their environment are faced with extinction by natural selection.The strong ones have greater opportunity to pass their genes to future generations via reproduction.In the long run,species carrying the correct combination in their genes become dominant in their population.Sometimes,during the slow process of evolution,random changes may occur in genes. If these changes provide additional advantages in the challenge for survival,new species evolve from the old ones.Unsuccessful changes are eliminated by natural selection.In GA terminology,a solution vector x A X is called an individual or a chromosome.Chromosomes are made of discrete units called genes.Each gene controls one or more features of the chromosome.In the original implementa-tion of GA by Holland,genes are assumed to be binary digits.In later implementations,more varied gene types have been introduced.Normally,a chromosome corre-sponds to a unique solution x in the solution space.This requires a mapping mechanism between the solution space and the chromosomes.This mapping is called an encoding. In fact,GA work on the encoding of a problem,not on the problem itself.GA operate with a collection of chromosomes,called a population.The population is normally randomly initia-lized.As the search evolves,the population includesfitterA.Konak et al./Reliability Engineering and System Safety91(2006)992–1007993andfitter solutions,and eventually it converges,meaning that it is dominated by a single solution.Holland also presented a proof of convergence(the schema theorem)to the global optimum where chromosomes are binary vectors.GA use two operators to generate new solutions from existing ones:crossover and mutation.The crossover operator is the most important operator of GA.In crossover,generally two chromosomes,called parents,are combined together to form new chromosomes,called offspring.The parents are selected among existing chromo-somes in the population with preference towardsfitness so that offspring is expected to inherit good genes which make the parentsfitter.By iteratively applying the crossover operator,genes of good chromosomes are expected to appear more frequently in the population,eventually leading to convergence to an overall good solution.The mutation operator introduces random changes into characteristics of chromosomes.Mutation is generally applied at the gene level.In typical GA implementations, the mutation rate(probability of changing the properties of a gene)is very small and depends on the length of the chromosome.Therefore,the new chromosome produced by mutation will not be very different from the original one.Mutation plays a critical role in GA.As discussed earlier,crossover leads the population to converge by making the chromosomes in the population alike.Muta-tion reintroduces genetic diversity back into the population and assists the search escape from local optima. Reproduction involves selection of chromosomes for the next generation.In the most general case,thefitness of an individual determines the probability of its survival for the next generation.There are different selection procedures in GA depending on how thefitness values are used. Proportional selection,ranking,and tournament selection are the most popular selection procedures.The procedure of a generic GA[3]is given as follows:Step1:Set t¼1.Randomly generate N solutions to form thefirst population,P1.Evaluate thefitness of solutions in P1.Step2:Crossover:Generate an offspring population Q t as follows:2.1.Choose two solutions x and y from P t based onthefitness values.ing a crossover operator,generate offspringand add them to Q t.Step3:Mutation:Mutate each solution x A Q t with a predefined mutation rate.Step4:Fitness assignment:Evaluate and assign afitness value to each solution x A Q t based on its objective function value and infeasibility.Step5:Selection:Select N solutions from Q t based on theirfitness and copy them to P t+1.Step6:If the stopping criterion is satisfied,terminate the search and return to the current population,else,set t¼t+1go to Step2.4.Multi-objective GABeing a population-based approach,GA are well suited to solve multi-objective optimization problems.A generic single-objective GA can be modified tofind a set of multiple non-dominated solutions in a single run.The ability of GA to simultaneously search different regions of a solution space makes it possible tofind a diverse set of solutions for difficult problems with non-convex,discon-tinuous,and multi-modal solutions spaces.The crossover operator of GA may exploit structures of good solutions with respect to different objectives to create new non-dominated solutions in unexplored parts of the Pareto front.In addition,most multi-objective GA do not require the user to prioritize,scale,or weigh objectives.Therefore, GA have been the most popular heuristic approach to multi-objective design and optimization problems.Jones et al.[4]reported that90%of the approaches to multi-objective optimization aimed to approximate the true Pareto front for the underlying problem.A majority of these used a meta-heuristic technique,and70%of all meta-heuristics approaches were based on evolutionary ap-proaches.Thefirst multi-objective GA,called vector evaluated GA (or VEGA),was proposed by Schaffer[5].Afterwards, several multi-objective evolutionary algorithms were devel-oped including Multi-objective Genetic Algorithm (MOGA)[6],Niched Pareto Genetic Algorithm(NPGA) [7],Weight-based Genetic Algorithm(WBGA)[8],Ran-dom Weighted Genetic Algorithm(RWGA)[9],Nondomi-nated Sorting Genetic Algorithm(NSGA)[10],Strength Pareto Evolutionary Algorithm(SPEA)[11],improved SPEA(SPEA2)[12],Pareto-Archived Evolution Strategy (PAES)[13],Pareto Envelope-based Selection Algorithm (PESA)[14],Region-based Selection in Evolutionary Multiobjective Optimization(PESA-II)[15],Fast Non-dominated Sorting Genetic Algorithm(NSGA-II)[16], Multi-objective Evolutionary Algorithm(MEA)[17], Micro-GA[18],Rank-Density Based Genetic Algorithm (RDGA)[19],and Dynamic Multi-objective Evolutionary Algorithm(DMOEA)[20].Note that although there are many variations of multi-objective GA in the literature, these cited GA are well-known and credible algorithms that have been used in many applications and their performances were tested in several comparative studies. Several survey papers[1,11,21–27]have been published on evolutionary multi-objective optimization.Coello lists more than2000references in his website[28].Generally, multi-objective GA differ based on theirfitness assign-ment procedure,elitisim,or diversification approaches.In Table1,highlights of the well-known multi-objective with their advantages and disadvantages are given.Most survey papers on multi-objective evolutionary approaches intro-duce and compare different algorithms.This paper takes a different course and focuses on important issues while designing a multi-objective GA and describes common techniques used in multi-objective GA to attain the threeA.Konak et al./Reliability Engineering and System Safety91(2006)992–1007 994goals in multi-objective optimization.This approach is also taken in the survey paper by Zitzler et al.[1].However,the discussion in this paper is aimed at introducing the components of multi-objective GA to researchers and practitioners without a background on the multi-objective GA.It is also import to note that although several of the state-of-the-art algorithms exist as cited above,many researchers that applied multi-objective GA to their problems have preferred to design their own customized algorithms by adapting strategies from various multi-objective GA.This observation is another motivation for introducing the components of multi-objective GA rather than focusing on several algorithms.However,the pseudo-code for some of the well-known multi-objective GA are also provided in order to demonstrate how these proce-dures are incorporated within a multi-objective GA.Table1A list of well-known multi-objective GAAlgorithm Fitness assignment Diversity mechanism Elitism ExternalpopulationAdvantages DisadvantagesVEGA[5]Each subpopulation isevaluated with respectto a differentobjective No No No First MOGAStraightforwardimplementationTend converge to theextreme of each objectiveMOGA[6]Pareto ranking Fitness sharing byniching No No Simple extension of singleobjective GAUsually slowconvergenceProblems related to nichesize parameterWBGA[8]Weighted average ofnormalized objectives Niching No No Simple extension of singleobjective GADifficulties in nonconvexobjective function space Predefined weightsNPGA[7]Nofitnessassignment,tournament selection Niche count as tie-breaker in tournamentselectionNo No Very simple selectionprocess with tournamentselectionProblems related to nichesize parameterExtra parameter fortournament selectionRWGA[9]Weighted average ofnormalized objectives Randomly assignedweightsYes Yes Efficient and easyimplementDifficulties in nonconvexobjective function spacePESA[14]Nofitness assignment Cell-based density Pure elitist Yes Easy to implement Performance depends oncell sizesComputationally efficientPrior information neededabout objective spacePAES[29]Pareto dominance isused to replace aparent if offspringdominates Cell-based density astie breaker betweenoffspring and parentYes Yes Random mutation hill-climbing strategyNot a population basedapproachEasy to implement Performance depends oncell sizesComputationally efficientNSGA[10]Ranking based onnon-dominationsorting Fitness sharing bynichingNo No Fast convergence Problems related to nichesize parameterNSGA-II[30]Ranking based onnon-dominationsorting Crowding distance Yes No Single parameter(N)Crowding distance worksin objective space onlyWell testedEfficientSPEA[11]Raking based on theexternal archive ofnon-dominatedsolutions Clustering to truncateexternal populationYes Yes Well tested Complex clusteringalgorithmNo parameter forclusteringSPEA-2[12]Strength ofdominators Density based on thek-th nearest neighborYes Yes Improved SPEA Computationallyexpensivefitness anddensity calculationMake sure extreme pointsare preservedRDGA[19]The problem reducedto bi-objectiveproblem with solutionrank and density asobjectives Forbidden region cell-based densityYes Yes Dynamic cell update More difficult toimplement than othersRobust with respect to thenumber of objectivesDMOEA[20]Cell-based ranking Adaptive cell-baseddensity Yes(implicitly)No Includes efficienttechniques to update celldensitiesMore difficult toimplement than othersAdaptive approaches toset GA parametersA.Konak et al./Reliability Engineering and System Safety91(2006)992–10079955.Design issues and components of multi-objective GA 5.1.Fitness functions5.1.1.Weighted sum approachesThe classical approach to solve a multi-objective optimization problem is to assign a weight w i to each normalized objective function z 0i ðx Þso that the problem is converted to a single objective problem with a scalar objective function as follows:min z ¼w 1z 01ðx Þþw 2z 02ðx ÞþÁÁÁþw k z 0k ðx Þ,(1)where z 0i ðx Þis the normalized objective function z i (x )and P w i ¼1.This approach is called the priori approach since the user is expected to provide the weights.Solving a problem with the objective function (1)for a given weight vector w ¼{w 1,w 2,y ,w k }yields a single solution,and if multiple solutions are desired,the problem must be solved multiple times with different weight combinations.The main difficulty with this approach is selecting a weight vector for each run.To automate this process;Hajela and Lin [8]proposed the WBGA for multi-objective optimization (WBGA-MO)in the WBGA-MO,each solution x i in the population uses a different weight vector w i ¼{w 1,w 2,y ,w k }in the calculation of the summed objective function (1).The weight vector w i is embedded within the chromosome of solution x i .Therefore,multiple solutions can be simulta-neously searched in a single run.In addition,weight vectors can be adjusted to promote diversity of the population.Other researchers [9,31]have proposed a MOGA based on a weighted sum of multiple objective functions where a normalized weight vector w i is randomly generated for each solution x i during the selection phase at each generation.This approach aims to stipulate multiple search directions in a single run without using additional parameters.The general procedure of the RWGA using random weights is given as follows [31]:Procedure RWGA:E ¼external archive to store non-dominated solutions found during the search so far;n E ¼number of elitist solutions immigrating from E to P in each generation.Step 1:Generate a random population.Step 2:Assign a fitness value to each solution x A P t by performing the following steps:Step 2.1:Generate a random number u k in [0,1]for each objective k ,k ¼1,y ,K.Step 2.2:Calculate the random weight of each objective k as w k ¼ð1=u k ÞP K i ¼1u i .Step 2.3:Calculate the fitness of the solution as f ðx Þ¼P K k ¼1w k z k ðx Þ.Step 3:Calculate the selection probability of each solutionx A P t as follows:p ðx Þ¼ðf ðx ÞÀf min ÞÀ1P y 2P t ðf ðy ÞÀf minÞwhere f min ¼min f f ðx Þj x 2P t g .Step 4:Select parents using the selection probabilities calculated in Step 3.Apply crossover on the selected parent pairs to create N offspring.Mutate offspring with a predefined mutation rate.Copy all offspring to P t +1.Update E if necessary.Step 5:Randomly remove n E solutions from P t +1and add the same number of solutions from E to P t +1.Step 6:If the stopping condition is not satisfied,set t ¼t þ1and go to Step 2.Otherwise,return to E .The main advantage of the weighted sum approach is a straightforward implementation.Since a single objective is used in fitness assignment,a single objective GA can be used with minimum modifications.In addition,this approach is computationally efficient.The main disadvan-tage of this approach is that not all Pareto-optimal solutions can be investigated when the true Pareto front is non-convex.Therefore,multi-objective GA based on the weighed sum approach have difficulty in finding solutions uniformly distributed over a non-convex trade-off surface [1].5.1.2.Altering objective functionsAs mentioned earlier,VEGA [5]is the first GA used to approximate the Pareto-optimal set by a set of non-dominated solutions.In VEGA,population P t is randomly divided into K equal sized sub-populations;P 1,P 2,y ,P K .Then,each solution in subpopulation P i is assigned a fitness value based on objective function z i .Solutions are selected from these subpopulations using proportional selection for crossover and mutation.Crossover and mutation are performed on the new population in the same way as for a single objective GA.Procedure VEGA:N S ¼subpopulation size (N S ¼N =K )Step 1:Start with a random initial population P 0.Set t ¼0.Step 2:If the stopping criterion is satisfied,return P t .Step 3:Randomly sort population P t .Step 4:For each objective k ,k ¼1,y K ,perform the following steps:Step 4.1:For i ¼1þðk 21ÞN S ;...;kN S ,assign fit-ness value f ðx i Þ¼z k ðx i Þto the i th solution in the sorted population.Step 4.2:Based on the fitness values assigned in Step 4.1,select N S solutions between the (1+(k À1)N S )th and (kN S )th solutions of the sorted population to create subpopulation P k .Step 5:Combine all subpopulations P 1,y ,P k and apply crossover and mutation on the combined population to create P t +1of size N .Set t ¼t þ1,go to Step 2.A similar approach to VEGA is to use only a single objective function which is randomly determined each time in the selection phase [32].The main advantage of the alternating objectives approach is easy to implement andA.Konak et al./Reliability Engineering and System Safety 91(2006)992–1007996computationally as efficient as a single-objective GA.In fact,this approach is a straightforward extension of a single objective GA to solve multi-objective problems.The major drawback of objective switching is that the popula-tion tends to converge to solutions which are superior in one objective,but poor at others.5.1.3.Pareto-ranking approachesPareto-ranking approaches explicitly utilize the concept of Pareto dominance in evaluatingfitness or assigning selection probability to solutions.The population is ranked according to a dominance rule,and then each solution is assigned afitness value based on its rank in the population, not its actual objective function value.Note that herein all objectives are assumed to be minimized.Therefore,a lower rank corresponds to a better solution in the following discussions.Thefirst Pareto ranking technique was proposed by Goldberg[3]as follows:Step1:Set i¼1and TP¼P.Step2:Identify non-dominated solutions in TP and assigned them set to F i.Step3:Set TP¼TPF i.If TP¼+go to Step4,else set i¼iþ1and go to Step2.Step4:For every solution x A P at generation t,assign rank r1ðx;tÞ¼i if x A F i.In the procedure above,F1,F2,y are called non-dominated fronts,and F1is the Pareto front of population P.NSGA[10]also classifies the population into non-dominated fronts using an algorithm similar to that given above.Then a dummyfitness value is assigned to each front using afitness sharing function such that the worst fitness value assigned to F i is better than the bestfitness value assigned to F i+1.NSGA-II[16],a more efficient algorithm,named the fast non-dominated-sort algorithm, was developed to form non-dominated fronts.Fonseca and Fleming[6]used a slightly different rank assignment approach than the ranking based on non-dominated-fronts as follows:r2ðx;tÞ¼1þnqðx;tÞ;(2) where nq(x,t)is the number of solutions dominating solution x at generation t.This ranking method penalizes solutions located in the regions of the objective function space which are dominated(covered)by densely populated sections of the Pareto front.For example,in Fig.1b solution i is dominated by solutions c,d and e.Therefore,it is assigned a rank of4although it is in the same front with solutions f,g and h which are dominated by only a single solution.SPEA[11]uses a ranking procedure to assign better fitness values to non-dominated solutions at underrepre-sented regions of the objective space.In SPEA,an external list E of afixed size stores non-dominated solutions that have been investigated thus far during the search.For each solution y A E,a strength value is defined assðy;tÞ¼npðy;tÞN Pþ1,where npðy;tÞis the number solutions that y dominates in P.The rank r(y,t)of a solution y A E is assigned as r3ðy;tÞ¼sðy;tÞand the rank of a solution x A P is calculated asr3ðx;tÞ¼1þXy2E;y1xsðy;tÞ.Fig.1c illustrates an example of the SPEA ranking method.In the former two methods,all non-dominated solutions are assigned a rank of1.This method,however, favors solution a(in thefigure)over the other non-dominated solutions since it covers the least number of solutions in the objective function space.Therefore,a wide, uniformly distributed set of non-dominated solutions is encouraged.Accumulated ranking density strategy[19]also aims to penalize redundancy in the population due to overrepre-sentation.This ranking method is given asr4ðx;tÞ¼1þXy2P;y1xrðy;tÞ.To calculate the rank of a solution x,the rank of the solutions dominating this solution must be calculatedfirst. Fig.1d shows an example of this ranking method(based on r2).Using ranking method r4,solutions i,l and n are ranked higher than their counterparts at the same non-dominated front since the portion of the trade-off surface covering them is crowded by three nearby solutions c,d and e. Although some of the ranking approaches described in this section can be used directly to assignfitness values to individual solutions,they are usually combined with variousfitness sharing techniques to achieve the second goal in multi-objective optimization,finding a diverse and uniform Pareto front.5.2.Diversity:fitness assignment,fitness sharing,and nichingMaintaining a diverse population is an important consideration in multi-objective GA to obtain solutions uniformly distributed over the Pareto front.Without taking preventive measures,the population tends to form relatively few clusters in multi-objective GA.This phenom-enon is called genetic drift,and several approaches have been devised to prevent genetic drift as follows.5.2.1.Fitness sharingFitness sharing encourages the search in unexplored sections of a Pareto front by artificially reducingfitness of solutions in densely populated areas.To achieve this goal, densely populated areas are identified and a penaltyA.Konak et al./Reliability Engineering and System Safety91(2006)992–1007997。
- 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
- 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
- 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。
Efficient and Accurate Delaunay Triangulation Protocols under Churn∗Dong-Young Lee and Simon mDepartment of Computer SciencesThe University of Texas at Austin{dylee,lam}@November9,2007Technical Report#TR-07-59AbstractWe design a new suite of protocols for a set of nodes ind-dimension(d>1)to construct and maintain a distributedDelaunay triangulation(DT)in a dynamic environment.Thesuite includes join,leave,failure,and maintenance protocols.The join,leave,and failure protocols are proved to be correctfor a single join,leave,and failure,respectively.In practice,protocol processing of events may be concurrent.For a systemunder churn,it is impossible to maintain a correct distributedDT continually.We define an accuracy metric such that ac-curacy is100%if and only if the distributed DT is correct.The maintenance protocol is designed to recover from incor-rect system states and to improve accuracy.In designing theprotocols in this paper,we made use of two new observationsto substantially improve their efficiency.First,in the neigh-bor discovery process of a node,many replies to the node’squeries contain redundant information.Second,the use of anew failure protocol that employs a proactive approach to re-covery is better than the reactive approach used in[1,2].Ex-perimental results show that our new suite of protocols main-tains high accuracy for systems under churn and each systemconverges to100%accuracy after churning stopped.They aremuch more efficient than protocols in prior work[1,2].1.IntroductionDelaunay triangulation[3]and V oronoi diagram[4]havea long history and many applications in differentfields of sci-ence and engineering including networking applications,suchas greedy routing[5],finding the closest node to a given point,broadcast,multicast,etc.[1].A triangulation for a given set Sof nodes in a2D space is a subdivision of the convex hull ofnodes in S into non-overlapping triangles such that the ver-texes of each triangle are nodes in S.A Delaunay triangula-1Delaunay triangulation is defined in a Euclidean space.When we say ad-dimensional space in this paper,we mean a d-dimensional Euclidean space.2When a node fails,it becomes silent.We do not consider Byzantinefailures.tocol execution.A similar correctness property is proved for the leave and failure protocols.Note that these three protocols are adequate for a system whose churn rate is so low that joins,leaves,and failures occur serially,i.e., protocol executionfinishes for each event(join,leave,or failure)before another event occurs.In general,for sys-tems with a higher churn rate,we also provide a mainte-nance protocol,which is run periodically by each node.•Accuracy–It is impossible to maintain a correct dis-tributed DT continually for a system under churn.Note that correctness of a distributed DT is broken as soon as a join/leave/failure occurs and is recovered only after the join/leave/failure protocolfinishes execution.For-tunately,some applications,such as greedy routing,can work well on a reasonably“accurate”distributed DT.We previously presented an accuracy metric for a distributed DT[1];we will show that the accuracy of a distributed DT is1if and only if the distributed DT is correct.The maintenance protocol is designed to recover from incor-rect system states due to concurrent protocol process-ing and to improve accuracy.We found that in all of our experiments conducted to date with the maintenance protocol,each system that had been under churn would converge to100%accuracy some time after churning stopped.•Efficiency–We use the total number of messages sent during protocol execution as the measure of efficiency.Protocols are said to be more efficient when their execu-tion requires the use of fewer messages.In a previous paper[1],we presented examples of network-ing applications to run on top of a distributed DT,namely: greedy routing,finding the closest existing node to a given point,clustering of network nodes,as well as broadcast and multicast withing a given radius without session states.Three DT protocols were presented:join and leave protocols that were proved correct and a maintenance protocol that was shown to converge to100%accuracy after system churn. However,the join and maintenance protocols in this suite(the old protocols)were designed with correctness as the main goal and their execution requires the use of a large number of messages.To make the new join and maintenance protocols in this paper much more efficient,we make two novel observations. First,the objective of the join protocols is for a new node n to identify its neighbors(on the global DT),and for n’s neighbors to detect n’s join.n sends a request to an existing node u for n’s neighbors in u’s local information.When n receives a reply,it learns new neighbors and sends requests to those newly-learned neighbors.This process is recursively re-peated until u does notfind any more new neighbor.Whereas it is necessary to send messages to all neighbors,since the neighbors need to be notified that n has joined,we discoveredthat n only needs to hear back from just one neighbor in each simplex that includes n rather than from all neighbors.Fur-thermore,queries as well as replies for some simplexes can be combined so that just one query-reply between n and one neighbor is needed for multiple simplexes.Based on this ob-servation,we designed a new join protocol.We found that the new join protocol is much more efficient than our old join protocol.We have proved the new join protocol to be correct for a single join.We also apply the above observation to substantially re-duce the number of messages used by the new maintenance protocol.Furthermore,we make the following second obser-vation to greatly reduce the total number of all protocol mes-sages per unit time by reducing the frequency at which the new maintenance protocol runs.We keep the leave protocol in[1]unchanged in this paper because it can efficiently address graceful node leaves.How-ever,with the old suite of protocols,it is the old maintenance protocol’s job to detect node failures and repair the resulting distributed DT.To detect a node failure,the node was probed by all of its neighbors.Furthermore,the distributed DT was repaired in a reactive fashion.The process of reactively re-pairing a distributed DT after a failure is inevitably costly,be-cause the information needed for the repair was at the failed node and lost after failure.To improve overall efficiency,we designed a new failure protocol to handle node failures.The failure protocol em-ploys a proactive approach.Each node designates one of its neighbors as its monitor node.In the failure protocol,a node is probed only by its monitor node,eliminating dupli-cate probes.In addition,each node prepares a contingency plan and gives the contingency plan to its monitor node.The contingency plan includes all information to correctly update the distributed DT after its failure.Once the failure of a node is detected by its monitor node,the monitor node initiates fail-ure recovery.That is,each neighbor of the failed node is noti-fied of the failure as well as any new neighbor(s)that it should have after the failure.In this way,node failures are handled almost as efficiently as graceful node leaves in the leave pro-tocol.We have proved the new failure protocol to be correct for a single failure.Each node runs the maintenance protocol(new or old)pe-riodically.The communication cost of the maintenance proto-col increases as the period decreases(or frequency increases).Generally,as the churn rate increases,the maintenance proto-col needs to be run more frequently.In the old protocol suite, moreover,the old maintenance protocol needs to be run at the probing frequency because one of its functions is to re-cover from node failures.With the inclusion of an efficient failure protocol in the new protocol suite to handle failures separately,the new maintenance protocol can be run less of-ten.We found that the overall efficiency of the protocols as a whole is greatly improved as a result.2To the best of our knowledge,the only previous work for a dynamic distributed DT in a d-dimensional space is by Simon et al.[2].They proposed two sets of distributed algorithms: basic generalized algorithms and improved generalized algo-rithms.Each set consists of an entity insertion(node join)al-gorithm and an entity deletion(node failure)algorithm.Their basic entity insertion algorithm is similar to our old join pro-tocol.Their improved entity insertion algorithm is based on a centralizedflip algorithm[7]whereas our join protocols are based on a“candidate-set approach”and our correctness con-dition for a distributed DT.The two approaches are funda-mentally different.Their entity deletion algorithm and our failure protocols are also different.Our failure protocol is substantially more efficient than their improved entity dele-tion algorithm,which uses a reactive approach and allows duplicate probes.The centralizedflip algorithm is known to be correct[8].However,correctness of their distributed al-gorithms is not explicitly stly,they do not have any algorithm,like our maintenance protocols,for recovery from concurrent processing of joins and failures due to sys-tem churn.As a result,their algorithms failed to converge to 100%accuracy after system churn in our simulation experi-ments.A quick comparison of the four sets of proto-cols/algorithms is shown in Table1.More detailed experi-mental results are presented in Section7of this paper.convergence to100%accuracyafter system churnmediumSimon et al.’s im-proved algorithmsNolowOur new protocols YesTable1.A comparison of the old and new pro-tocols with Simon et al.’s basic and improved algorithms.The organization of this paper is as follows.In Section2, we introduce the concepts and definitions of a distributed DT, present our system model,and a correctness condition for a distributed DT.We present the new join protocol in Section 3,the new failure protocol in Section4,and the new mainte-nance protocol in Section5.The accuracy metric is defined in Section6and experimental results are presented in Section 7.We conclude in Section8.Figure 1.A Voronoi diagram(dashed lines) and the corresponding DT(solid lines)in a2-dimensional space.2.Distributed Delaunay triangulation2.1.Concepts and definitionsDefinition1.Consider a set of nodes S in a Euclidean space. The Voronoi diagram of S is a partitioning of the space into cells such that a node u∈S is the closest node to all points within its Voronoi cell V C S(u).That is,V C S(u)={p|D(p,u)≤D(p,w),for any w∈S}where D(x,y)denotes the distance between x and y.Note that a V oronoi cell in a d-dimensional space is a convex d-dimensional polytope enclosed by(d−1)-dimensional facets. Definition2.Consider a set of nodes S in a Euclidean space.V C S(u)and V C S(v)are neighboring Voronoi cells, or neighbors of each other,if and only if V C S(u)and V C S(v)share a facet,which is denoted by V F S(u,v).Definition3.Consider a set of nodes S in a Euclidean space. The Delaunay triangulation of S is a graph on S where two nodes u and v in S have an edge between them if and only if V C S(u)and V C S(v)are neighbors of each other.Figure1shows a V oronoi diagram(dashed lines)for a set of nodes in a2D space and a DT(solid lines)for the same set of nodes.V C S(u)and V C S(v)are neighbors of each other. We also say that u and v are neighbors of each other when V C S(u)and V C S(v)are neighbors of each other.Note that facets of a V oronoi cell perpendicularly bisect edges of a DT. Therefore,a DT is the dual of a V oronoi diagram.3Let us denote the DT of S as DT(S).Definition4.A distributed Delaunay triangulation of a set of nodes S is specified by{<u,N u>|u∈S},where N u represents the set of u’s neighbor nodes,which is locally de-termined by u.Definition5.A distributed Delaunay triangulation of a set of nodes S is correct if and only if both of the following condi-tions hold for every pair of nodes u,v∈S:i)if there exists an edge between u and v on the global DT of S,then v∈N u and u∈N v,and ii)if there does not exist an edge between u and v on the global DT of S,then v∈N u and u∈N v.That is,a distributed DT is correct when for every node u,N u is the same as the neighbors of u on DT(S).Since u does not have global knowledge,it is not straightforward to achieve correctness.2.2.System modelOur approach to construct a distributed DT is as follows. We assume that each node is associated with its coordinates in a d-dimensional Euclidean space.Each node has prior knowl-edge of its own coordinates,as is assumed in previous work [9,10,2,11,12].The mechanism to obtain coordinates is beyond the scope of this study.Coordinates may be given by an application,a GPS device[13],or topology-aware virtual coordinates[14].4Also when we say a node u knows another node v,we assume that u knows v’s coordinates as well.Let S be a set of nodes to construct a distributed DT from. We will present protocols to enable each node u∈S to get to know a set of its nearby nodes including u itself,denoted as C u,to be referred to as u’s candidate set.Then u determines the set of its neighbor nodes N u by calculating a local DT of C u,denoted by DT(C u).That is,v∈N u if and only if there exists an edge between u and v on DT(C u).To simply protocol descriptions,we assume that message delivery is reliable.In a real implementation,additional mechanisms such as ARQ may be used to ensure reliable mes-sage delivery.2.3.Correctness condition for a distributedDelaunay triangulationRecall that a distributed DT is correct when for every node u,N u is the same as the neighbors of u on DT(S). Since N u is the set of u’s neighbor nodes on DT(C u)in our model,to achieve a correct distributed DT,the neighbors of u on DT(C u)must be the same as the neighbors of u on DT(S).Note that C u is local information of u while S is global knowledge.Therefore in designing our protocols,we need to ensure that C u has enough information for u to cor-rectly identify its global neighbors.If C u is too limited,u cannot identify its global neighbors.For the extreme case of C u=S,u can identify its neighbors on the global DT since DT(C u)=DT(S);however,the communication over-head for each node to acquire global knowledge would be ex-tremely high.Figure2.An example offlipping in2D.3.2.Candidate-set approachIn a previous paper[1],we proposed a join protocol based on the distributed system model using candidate sets and the correctness condition for a distributed DT introduced in Sec-tion2.When a new node n joins a distributed DT,it isfirst led to the closest existing node z.5Then n sends a request to z for mutual neighbors of n and z on DT(C z).When n re-ceives the reply,n puts the mutual neighbors in its candidate set(C n)and re-calculates its neighbor set(N n).If nfinds any new neighbors,n sends requests to the new neighbors.This process is repeated recursively.We proved correctness of this protocol[15].Theflip algorithm and candidate-set approach are funda-mentally different.However,it is interesting to note that there is a correspondence between the two.Table2shows how steps of the two different approaches correspond to each other.Whereas the two approaches have corresponding steps,the steps are not exactly the same.For example,in step(b),n initially learns(d+1)neighbors in theflip algorithm.In step(b)of the candidate-set approach,n may be informed of any nodes that z knows.In step(c)of the candidate-set approach,multiple neighbors may send duplicate messages to n to inform n of the same new neighbor.In step(c)of theflip algorithm,only one node may reply that a simplex isflipped. This observation gave us an idea to substantially improve the efficiency of our join protocol.Candidate-set approachA joining node n isled to a closest existingnode z.(b)The simplex that en-closes n is divided into(d+1)simplexes.n contacts each of itsnew neighbors to seewhether there are otherpotential neighbors.(d)New(flipped)sim-plexes are recursivelychecked.Table2.Correspondence between join proto-col in the candidate-set approach andflip al-gorithm.3.3.New join protocolUsing the observation described above,we designed a new join protocol that is substantially more efficient than our old one.In addition to C n and N n,a joining node n keeps N queriedn,which includes the neighbors that are already queried during its join process.Instead of querying all new neighbors,n queries only one neighbor for each simplex thatdoes not include any node in N queriedn.Note that only one neighbor in each simplex needs to be queried.If a simplexincludes a node v∈N queriedn,it means that the simplex has already been checked by v.Furthermore,queries as well as replies for multiple simplexes may be combined.The new join protocol requires the general position assumption,which was not required for the old join protocol.Our new join protocol is still based on our candidate-set model and its correctness for a single join is proved using the correctness condition Theorem1.The new join protocol still has some duplicate computa-tion.Even though there is only one query-reply interaction for each simplex,DT is calculated independently at each node.Pseudocode of the new join protocol at a node is given in Figure3.The protocol execution loop at a joining node,say n,and the response actions at an existing node,say v,are presented below.6Protocol execution loop at a joining node nAt a joining node n,the join protocol runs as follows with a loop over steps3-6:SET1.A joining node n isfirst led to a closest existing node z.2.n sends a NEIGHBOR REQUEST message to z.C n is set to{n,z}and N queriednis set to{z}.Repeat steps3-6below until a reply has been received for every NEIGHBOR REQUEST message sent:3.n receives a NEIGHBOR REPLY message from anode,say v.The message includes mutual neighbors of n and v on DT(C v).4.n adds the newly learned neighbors(if any)to C n,andcalculates DT(C n).5.Among simplexes that include n on DT(C n),simplexesthat do not include any node in N queriedn are identifiedas unchecked simplexes.n selects some of its neighbors such that each unchecked simplex includes at least one selected neighbor.6.n sends NEIGHBOR REQUEST messages to theselected neighbors.N queriedn is updated to include theselected neighbors.For the non-selected new neighbors, NEIGHBORSETSETNOTIFICATION from n, v includes n into C v and re-calculates DT(C v).But v does not reply to n.3.4.Correctness of the new join protocol Lemma1.Let S be a set of nodes.For any subset C of S,let u∈C and v∈C,if v is a neighbor of u on DT(S),v is also a neighbor of u on DT(C).Lemma1is proved in[15](Lemma3in[15]).Lemma2.Let n denote a new joining node,S be a set of existing nodes,and S′=S {n}.Suppose that the exist-ing distributed DT of S is correct and no other node joins, leaves,or fails.Let T be a simplex that exists on DT(C n) at some time during protocol execution and does not exists on DT(S′).Let x=n be a node in T.Suppose that n sends a NEIGHBOR REQUEST to x.When n re-ceives a NEIGHBOR REPLY from x,T is removed from DT(C n).Proof.Since the existing distributed DT of S is correct,C x includes all the neighbors of x on DT(S).After x receives NEIGHBOR REQUEST,C x will include n,and thus C x will include all the neighbors of x on DT(S′).Consider the space that T occupies on DT(C n).Since T does not exist on DT(S′),the space is occupied by two or more different simplexes on DT(S′).Let T∗be one of thoseJoin(z)of node u;Input:u is the joining node,if u is the only node in thesystem,z=NULL;otherwise z is the closest existing nodeto u.if z=NULL thenSend(z,NEIGHBOR REQUEST)C u←{u,z},N u←∅,N queriedu←{z}elseC u←{u},N u←∅,N queriedu←∅end ifOn u’s receiving NEIGHBOR REQUEST from wif w∈C u thenC u←C u∪{w}N u←neighbor nodes of u on DT(C u)end ifN u w←{x|x is a neighbor of w on DT(C u)}Send(w,NEIGHBOR REPLY(N u w))On u’s receiving NEIGHBOR REPLY(N w u)from wC u←C u∪N w uUpdateNOTIFICATION from w if w∈C u thenC u←C u∪{w}N u←neighbor nodes of u on DT(C u)end ifUpdateNeighbors Check(T newu)for all v∈N checkudoSend(v,NEIGHBOR REQUEST)end forN queriedu←N queriedu∪N checkuN notif yu←N newu−N checkufor all v∈N notif yudoSend(v,NEIGHBORNeighbors Check(T newu)of node uN′u←∅while T newu=∅don←a node in T newuN′u←N′u∪nremove each simplex that contains n from T newuend whileReturn N′uFigure3.New join protocol at a node u.simplexes that includes both n and x.Note that there are d−1 other nodes in T∗,which are mutual neighbors of n and x on6DT(S′),and,by Lemma1,on DT(C x)as well.These d−1 nodes are included in the NEIGHBOR REPLY message from x to n.When n receives the NEIGHBOR REPLY message,the d−1nodes are included in C n and,by Lemma1, become neighbors of n on DT(C n).As a result,T∗is cre-ated on DT(C n).That means T,which overlaps with T∗,is removed from DT(C n).Lemma3.Let n denote a new joining node,S be a set of existing nodes,and S′=S {n}.Suppose that the exist-ing distributed DT of S is correct and no other node joins, leaves,or fails.Then when the new join protocolfinishes,C n includes all the neighbor nodes of n on DT(S′).Proof.Consider a neighbor v of n on DT(S′).We show that v will be included in C n when the join protocolfinishes.At step4of the protocol execution loop,n has some nodes in C n and calculates DT(C n).Suppose that v is not yet included in C n.Consider a straight line l from n to v.Let T be thefirst simplex on DT(C n)that l crosses.Such a simplex exists because v is not yet a neighbor of n on DT(C n).Note that T includes n. Let the other nodes of T be x1,x2,...,x d.Since l is an edge on DT(S′),T does not exist on DT(S′).We show by contradiction that n has not received a NEIGHBOR REPLY message from any node in T. Suppose that n has received a NEIGHBOR REPLY from a node in T.Then by Lemma2,T cannot exist on DT(C n),which contradicts the earlier assumption that T ex-ists because v is not included in C n yet.Next we show that n will receive a NEIGH-BOR REPLY message from a node in T.EitherT includes a node x a in N queriedn or T does not includeany node in N queriedn .In the former case,n has sent aNEIGHBOR REQUEST to x a and will receive a NEIGHBOR REPLY message from x a.In the latter case,by step5of the protocol execution loop,n will send a NEIGHBOR REQUEST to a node x b in T and then receive a NEIGHBOR REPLY message from x b.In each case,when n receives the NEIGHBOR REPLY message,T is removed from DT(C n)by Lemma2.Afterwards,if v is not a neighbor of n and l crosses an-other simplex on DT(C n),protocol execution continues and the above process repeats.This processfinishes in afinite number of iterations since the number of nodes in S isfinite and the number of possible simplexes in S is alsofinite.When there is no simplex that l crosses on DT(C n),l is an edge on DT(C n).Therefore v is included in C n.The following theorem shows that our new join protocol is correct for a single join.Theorem2.Let n denote a new joining node,S be a set of existing nodes,and S′=S {n}.Suppose that the existing distributed DT of S is correct and no other node joins,leaves,or fails.Then when the new join protocolfinishes,the updated distributed DT is correct.Proof.By Lemma3,when the join processfinishes,C n will include all of its neighbor nodes on DT(S′).In ad-dition,whenever n discovers a neighbor node v during the process,n sends either NEIGHBOR REQUEST or NEIGHBOROn change in N um u←the neighbor in N u with the least ID Calculate DT(N u);Note:u∈N ufor all v∈N u doN u v←{w|w is a neighbor of v on DT(N u)}end forSend(m u,CONTINGENCYPLAN(CP v)from v Set F AILURET IMER vSend(v,PING)Set P ING T IMER v to T+T O;T is current time,T O is the timeout value.On u’s receiving PING from vif v=m u thenSend(v,PONG(true))elseSend(v,PONG(false))end ifOn u’s receiving PONG(flag)from vif flag=true thenSet F AILURET IMER vend ifOn u’s expiration of P ING T IMER v for all w that CP v contains<w,N v w>doSend(w,FAILURET IMER vOn u’s receiving FAILUREInstead of querying all neighbors,a node u queries only one node for each simplex that includes u.Since a neighbor node may be included in multiple simplexes,the number of queried neighbors is much less than that of all neighbors.Another goal of the old maintenance protocol was fail-ure detection and recovery.In the old maintenance protocol, probing a node u was carried out by all neighbors of u.In our new set of protocols,the new failure protocol takes over the task of failure detection and recovery,where a node is probed by only one of its neighbor nodes.Thus the overall cost of our new maintenance and failure protocols is much less than the cost of the old maintenance protocol.Although failure recovery is not a major goal of the new maintenance protocol,if a failure is detected by a mes-sage timeout,this information is propagated via DELETE messages.This may be necessary in case of concur-rent failures.DELETE messages are propagated using the GRPB(greedy reverse-path broadcast)protocol in[1]. The maintenance protocol pseudocode(including GRPB) is given in Figure 5.Actions for receiving a NEIGH-BOR REQUEST message and the functions of Up-date Neighbors Check are the same as in Figure3.6.Accuracy metric for a system under churnWe define an accuracy metric as in[1],which we will use for experiments for a system of nodes under churn.We con-sider a node to be in-system from when itfinishes joining to when it starts leaving.Let DDT S be a distributed DT of a set of in-system nodes S.(Note that some nodes may be in the process of joining or leaving and not included.)Let N correct(DDT S)be the number of correct neighbor entries of all nodes and N wrong(DDT S)be the number of wrong neighbor entries of all nodes on DDT S.A neighbor entry v of a node u is correct when v is a neighbor of u on the global DT (namely,DT(S)),and wrong when u and v are not neighbors on the global DT.Let N(DT(S))be the number of edges on DT(S).Note that edges on a global DT are undirectional and thus are counted twice when compared with neighbor entries. The accuracy of DDT S is defined as follows:accuracy(DDT S)=N correct(DDT S)−N wrong(DDT S)T IMERN queriedu←∅T u←simplexes that contain u on DT(N u∪{u})N checku←Get T oSETT IMEOUTT IMER to T+P;T is current time,P is the period of failure probe.On u’s receiving NEIGHBOR REPLY(N v u)from vC u←C u∪N v uUpdateT IMEOUTT IMEOUTNeighbors(C u,N u)for all w∈N u doSend(w,DELETE(v,u))end forOn u’s receiving DELETE(v,w)from xif v∈C u thenC u←C u−{v}Update-0.6-0.4-0.2 0 0.2 0.4 0.6 0.8 1 0246810A c c u r a c yRound2D 3D 4D 5DFigure 6.Accuracy of the maintenance proto-col for a system with an initial ring configura-tion.10002000 3000 4000 5000 6000 7000 80009000 10000 11000 2345N u m b e r o f m e s s a g e sDimensionalityold join protocolSimon et al. insertion algorithmnew join protocol Figure 7.Costs of join protocols for 100serial joins.7.Experimental results7.1.Join protocolsFigure 7shows communication costs of join protocols.Each curve shows the number of messages for 100serial joins,increasing the system size from 200nodes to 300nodes,for different dimensionalities.Our new join protocol has much less cost than our old join protocol,and is slightly better than Simon et al.’s distributed algorithm.7.2.Failure protocolFigure 8compares communication costs of our failure pro-tocol and Simon et al.’s entity deletion algorithm.The num-ber of messages used to recover from 100serial failures from 300initial nodes is measured.Both our failure protocol and Simon et al.’s deletion algorithm use the same probing period of 10seconds.Our failure protocol is much more efficient50000100000150000200000 2500002 34 5N u m b e r o f m e s s a g e sDimensionalityour failure protocol (10)Simon et al. deletion algorithm (10)Figure 8.Costs of failure protocols for 100se-rial failures.0.950.960.970.980.9910 20 40 6080 100 120 140 160A c c u r a c yTimenew join and failure protocols (10)Simon et al. algorithms (10)Figure 9.Accuracy without a maintenance pro-tocol under system churn (join and fail).than Simon et al.’s entity deletion algorithm.7.3.Maintenance protocolsFigure 9shows accuracy of our protocols without a main-tenance protocol and Simon et al.’s algorithms under system churn.From a correct distributed DT of 400initial nodes in 3D,100concurrent joins and 100concurrent failures occur from time 10to 110second,with an average inter-arrival time of 1second,respectively.7In both our failure protocol and Si-mon et al.’s entity deletion algorithm,nodes are probed every 10seconds.The accuracy of the distributed DT is measured every 10seconds.Both our new join and failure protocols and Simon et al.’s entity insertion and deletion algorithms cannot fully recover after system churn,resulting in an incorrect dis-tributed DT.。