Smoothing Supernova Data to Reconstruct the Expansion History of the Universe and its Age


MNN Model Quantization, Pruning, and Distillation


Quantization, pruning, and distillation are three commonly used techniques in model compression to reduce the size and improve the efficiency of deep neural networks. This answer explains each of them in the context of MNN model compression.

Quantization is the process of reducing the precision of weights and activations in a neural network. By quantizing the model, we can represent the weights and activations using fewer bits, which leads to a smaller model size and faster inference. For example, instead of using 32-bit floating-point numbers, we can use 8-bit integers to represent the weights and activations. This reduces the memory footprint and allows for more efficient computation on hardware with limited resources.

Pruning, on the other hand, involves removing unnecessary connections or neurons from a neural network. The idea behind pruning is that not all connections or neurons contribute equally to the network's performance. By removing the less important connections or neurons, we can reduce the model size and improve inference speed without sacrificing much accuracy. Pruning can be done based on various criteria, such as weight magnitude or activation importance; for example, we can prune connections with small weights or neurons that have low activation values.

Distillation is a technique that trains a smaller "student" network to mimic the behavior of a larger "teacher" network. The teacher network is usually a larger and more accurate model, while the student network is smaller and less accurate. The student is trained to match the output probabilities of the teacher, using a combination of the teacher's soft targets and the ground-truth labels. The idea behind distillation is that the student can learn from the teacher's knowledge and generalize better than if it were trained from scratch, compressing the knowledge of the larger model into a smaller one without sacrificing much accuracy.

To illustrate the whole process, consider compressing a large image-classification model. First, quantize the weights and activations, for instance converting the 32-bit floating-point weights to 8-bit integers; this reduces model size and allows faster inference on hardware with limited resources. Next, apply pruning to remove unnecessary connections or neurons, for example those with small weights or low activation values, further reducing model size and improving inference speed. Finally, use distillation to train a smaller student network to mimic the larger teacher network, matching its output probabilities with a combination of the teacher's soft targets and the ground-truth labels, so that the knowledge of the large model is compressed into the small one without losing much accuracy.
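As a rough illustration of how these three techniques can look in code, here is a minimal PyTorch-style sketch (PyTorch is used only for illustration; MNN's own converter and compression tools expose analogous options). The tiny teacher/student networks, the pruning ratio, the temperature, and the loss weighting below are all arbitrary example choices, and the ordering of the steps is illustrative rather than prescribed:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.utils.prune as prune

# Hypothetical small teacher/student pair, for illustration only
teacher = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))

# 1. Pruning: zero out the 30% smallest-magnitude weights in each Linear layer of the student
for module in student.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)

# 2. Distillation loss: teacher soft targets (temperature T) combined with hard labels
def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

x = torch.randn(8, 784)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student(x), teacher(x).detach(), labels)  # one training-step loss

# Make the pruning permanent before exporting/quantizing
for module in student.modules():
    if isinstance(module, nn.Linear):
        prune.remove(module, "weight")

# 3. Post-training quantization of the student: fp32 weights -> int8
quantized_student = torch.quantization.quantize_dynamic(student, {nn.Linear}, dtype=torch.qint8)

In practice the student would be trained with the distillation loss before the pruning is made permanent and the model is quantized; the compressed model can then be exported and converted for the target inference engine.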

[ToG13]Poisson Surface Reconstruction


Screened Poisson Surface ReconstructionMICHAEL KAZHDANJohns Hopkins UniversityandHUGUES HOPPEMicrosoft ResearchPoisson surface reconstruction creates watertight surfaces from oriented point sets.In this work we extend the technique to explicitly incorporate the points as interpolation constraints.The extension can be interpreted as a generalization of the underlying mathematical framework to a screened Poisson equation.In contrast to other image and geometry processing techniques,the screening term is defined over a sparse set of points rather than over the full domain.We show that these sparse constraints can nonetheless be integrated efficiently.Because the modified linear system retains the samefinite-element discretization,the sparsity structure is unchanged,and the system can still be solved using a multigrid approach. Moreover we present several algorithmic improvements that together reduce the time complexity of the solver to linear in the number of points, thereby enabling faster,higher-quality surface reconstructions.Categories and Subject Descriptors:I.3.5[Computer Graphics]:Compu-tational Geometry and Object ModelingAdditional Key Words and Phrases:screened Poisson equation,adaptive octree,finite elements,surfacefittingACM Reference Format:Kazhdan,M.,and Hoppe,H.Screened Poisson surface reconstruction. ACM Trans.Graph.NN,N,Article NN(Month YYYY),PP pages.DOI=10.1145/XXXXXXX.YYYYYYY/10.1145/XXXXXXX.YYYYYYY1.INTRODUCTIONPoisson surface reconstruction[Kazhdan et al.2006]is a well known technique for creating watertight surfaces from oriented point samples acquired with3D range scanners.The technique is resilient to noisy data and misregistration artifacts.However, as noted by several researchers,it suffers from a tendency to over-smooth the data[Alliez et al.2007;Manson et al.2008; Calakli and Taubin2011;Berger et al.2011;Digne et al.2011].In this work,we explore modifying the Poisson reconstruc-tion algorithm to incorporate positional constraints.This mod-ification is inspired by the recent reconstruction technique of Calakli and Taubin[2011].It also relates to recent work in im-age and geometry processing[Nehab et al.2005;Bhat et al.2008; Chuang and Kazhdan2011],in which a datafidelity term is used to“screen”the associated Poisson equation.In our surface recon-struction context,this screening term corresponds to a soft con-straint that encourages the reconstructed isosurface to pass through the input points.The approach we propose differs from the traditional screened Poisson formulation in that the position and gradient constraints are defined over different domain types.Whereas gradients are constrained over the full3D space,positional constraints are introduced only over the input points,which lie near a2D manifold. 
We show how these two types of constraints can be efficiently integrated,so that we can leverage the original multigrid structure to solve the linear system without incurring a significant overhead in space or time.To demonstrate the benefits of screening,Figure1compares results of the traditional Poisson surface reconstruction and the screened Poisson formulation on a subset of11.4M points from the scan of Michelangelo’s David[Levoy et al.2000].Both reconstructions are computed over a spatial octree of depth10,corresponding to an effective voxel resolution of10243.Screening generates a model that better captures the input data(as visualized by the surface cross-sections overlaid with the projection of nearby samples), even though both reconstructions have similar complexity(6.8M and6.9M triangles respectively)and required similar processing time(230and272seconds respectively,without parallelization).1 Another contribution of our work is to modify both the octree structure and the multigrid implementation to reduce the time complexity of solving the Poisson system from log-linear to linear in the number of input points.Moreover we show that hierarchical point clustering enables screened Poisson reconstruction to attain this same linear complexity.2.RELA TED WORKReconstructing surfaces from scanned points is an important and extensively studied problem in computer graphics.The numerous approaches can be broadly categorized as follows. Combinatorial Algorithms.Many schemes form a triangula-tion using a subset of the input points[Cazals and Giesen2006]. Space is often discretized using a tetrahedralization or a voxel grid,and the resulting elements are partitioned into inside and outside regions using an analysis of cells[Amenta et al.2001; Boissonnat and Oudot2005;Podolak and Rusinkiewicz2005], eigenvector computation[Kolluri et al.2004],or graph cut [Labatut et al.2009;Hornung and Kobbelt2006].Implicit Functions.In the presence of sampling noise,a common approach is tofit the points using the zero set of an implicit func-tion,such as a sum of radial bases[Carr et al.2001]or piecewise polynomial functions[Ohtake et al.2005;Nagai et al.2009].Many techniques estimate a signed-distance function[Hoppe et al.1992; 1The performance of the unscreened solver is measured using our imple-mentation with screening weight set to zero.The implementation of the original Poisson reconstruction runs in412seconds.ACM Transactions on Graphics,V ol.VV,No.N,Article XXX,Publication date:Month YYYY.2•M.Kazhdan and H.HoppeFig.1:Reconstruction of the David head ‡,comparing traditional Poisson surface reconstruction (left)and screened Poisson surface reconstruction which incorporates point constraints (center).The rightmost diagram plots pixel depth (z )values along the colored segments together with the positions of nearby samples.The introduction of point constraints significantly improves fit accuracy,sharpening the reconstruction without amplifying noise.Bajaj et al.1995;Curless and Levoy 1996].If the input points are unoriented,an important step is to correctly infer the sign of the resulting distance field [Mullen et al.2010].Our work extends Poisson surface reconstruction [Kazhdan et al.2006],in which the implicit function corresponds to the model’s indicator function χ.The function χis often defined to have value 1inside and value 0outside the model.To simplify the derivations,inthis paper we define χto be 12inside and −12outside,so that its zero isosurface passes near the points.The function χis solved using 
a Laplacian system discretized over a multiresolution B-spline basis,as reviewed in Section 3.Alliez et al.[2007]form a Laplacian system over a tetrahedral-ization,and constrain the solution’s biharmonic energy;the de-sired function is obtained as the solution to an eigenvector prob-lem.Manson et al.[2008]represent the indicator function χusing a wavelet basis,and efficiently compute the basis coefficients using simple local sums over an adapted octree.Calakli and Taubin [2011]optimize a signed-distance function to have value zero at the points,have derivatives that agree with the point normals,and minimize a Hessian smoothness norm.The resulting optimization involves a bilaplacian operator,which requires estimating derivatives of higher order than in the Laplacian.The reconstructed surfaces are shown to have good accuracy,strongly suggesting the importance of explicitly fitting the points within the optimization.This motivated us to explore whether a Laplacian system could be extended in this respect,and also be compatible with a multigrid solver.Screened Poisson Surface Fitting.The method of Nehab et al.[2005],which simultaneously fits position and normal constraints,may also be viewed as the solution of a screened Poisson equation.The fitting algorithm assumes that a 2D parametric domain (i.e.,a plane or triangle mesh)is already established.The position and derivative constraints are both defined over this 2D domain.In contrast,in Poisson surface reconstruction the 2D domain manifold is initially unknown,and therefore the goal is to infer an indicator function χrather than a parametric function.This leads to a hybrid problem with derivative (Laplacian)constraints defined densely over 3D and position constraints defined sparsely on the set of points sampled near the unknown 2D manifold.3.REVIEW OF POISSON SURFACE RECONSTRUCTIONThe approach of Poisson surface reconstruction is based on the observation that the (inward pointing)normal field of the boundary of a solid can be interpreted as the gradient of the solid’s indicator function.Thus,given a set of oriented points sampling the boundary,a watertight mesh can be obtained by (1)transforming the oriented point samples into a continuous vector field in 3D,(2)finding a scalar function whose gradients best match the vector field,and (3)extracting the appropriate isosurface.Because our work focuses primarily on the second step,we review it here in more detail.Scalar Function Fitting.Given a vector field V :R 3→R 3,thegoal is to solve for the scalar function χ:R 3→R minimizing:E (χ)=∇χ(p )− V (p ) 2d p .(1)Using the Euler-Lagrange formulation,the minimum is obtainedby solving the Poisson equation:∆χ=∇· V .System Discretization.The Galerkin formulation is used totransform this into a finite-dimensional system [Fletcher 1984].First,a basis {B 1,...,B N }:R 3→R is chosen,namely a collection of trivariate (usually triquadratic)B-spline functions.With respect to this basis,the discretization becomes:∆χ,B i [0,1]3= ∇· V ,B i [0,1]31≤i ≤Nwhere ·,· [0,1]3is the standard inner-product on the space of(scalar-and vector-valued)functions defined on the unit cube:F ,G [0,1]3=[0,1]3F (p )·G (p )d p , U , V [0,1]3=[0,1]3U (p ), V (p ) d p .Since the solution is itself expressed in terms of the basis functions:χ(p )=N∑i =1x i B i (p ),ACM Transactions on Graphics,V ol.VV ,No.N,Article XXX,Publication date:Month YYYY .Screened Poisson Surface Reconstruction•3finding the coefficients{x i}of the solution reduces to solving the linear system Ax=b where:A i j= ∇B 
i,∇B j [0,1]3and b i= V,∇B i [0,1]3.(2) The basis functions{B1,...,B N}are chosen to be compactly supported,so most pairs of functions do not have overlapping support,and thus the matrix A is sparse.Because the solution is expected to be smooth away from the input samples,the linear system is discretized byfirst adapting an octree to the input samples and then associating an(appropriately scaled and translated)trivariate B-spline function to each octree node. This provides high-resolution detail in the vicinity of the surface while reducing the overall dimensionality of the system.System Solution.Given the hierarchy defined by an octree of depth D,a multigrid approach is used to solve the linear system. The basis functions are partitioned according to the depths of their associated nodes and,for each depth d,a linear system A d x d=b d is defined using the corresponding B-splines{B d1,...,B d Nd},such thatχ(p)=∑D d=0∑i x d i B d i(p).Because the octree-selected B-spline functions do not form a complete grid at each depth,it is generally not possible to prolong the solution x d at depth d into the solution x d+1at depth d+1. (The B-spline associated with a given node is a sum of B-spline functions associated not only with its own child nodes,but also with child nodes of its neighbors.)Instead,the constraints at depth d+1are adjusted to account for the part of the solution already realized at coarser depths.Pseudocode for a cascadic solver,where the solution is only relaxed on the up-stroke of the V-cycle,is given in Algorithm1.Algorithm1:Cascadic Poisson Solver1For d∈{0,...,D}Iterate from coarse tofine2For d ∈{0,...,d−1}Remove the constraints3b d=b d−A dd x d met at coarser depths4Relax A d x d=b d Adjust the system at depth dHere,A dd is the N d×N d matrix used to transform solution coefficients at depth d into constraints at depth d:A dd i j= ∇B d i,∇B d j [0,1]3.Note that,by definition,A d=A dd.Isosurface Extraction.Solving the Poisson equation,one obtains a functionχthat approximates the indicator function.Ideally,the function’s zero level-set should therefore correspond to the desired surface.In practice however,the functionχcan differ from the true indicator function due to several sources of error:—The point sampling may be noisy,possibly containing outliers.—The Galerkin discretization is only an approximation of the continuous problem.—The point sampling density is approximated during octree construction.To mitigate these errors,in[Kazhdan et al.2006]the implicit function is adjusted by globally subtracting the average value of the function at the input samples.4.INCORPORA TING POINT CONSTRAINTSThe original Poisson surface reconstruction algorithm adjusts the implicit function using a single global offset such that its average value at all points is zero.However,the presence of errors can cause the implicit function to drift so that no global offset is satisfactory. 
Instead,we seek to explicitly interpolate the points.Given the set of input points P with weights w:P→R≥0,we add to the energy of Equation1a term that penalizes the function’s deviation from zero at the samples:E(χ)=V(p)−∇χ(p) 2d p+α·Area(P)∑p∈P∑p∈Pw(p)χ2(p)(3)whereαis a weight that trades off the importance offitting the gradients andfitting the values,and Area(P)is the area of the reconstructed surface,estimated by computing the local sampling density as in[Kazhdan et al.2006].In our implementation,we set the per-sample weights w(p)=1,although one can also use confidence values if these are available.The energy can be expressed concisely asE(χ)= V−∇χ, V−∇χ [0,1]3+α χ,χ (w,P)(4)where ·,· (w,P)is the bilinear,symmetric,positive,semi-definite form on the space of functions in the unit-cube,obtained by taking the weighted sum of function values:F,G (w,P)=Area(P)∑p∈P w(p)∑p∈Pw(p)·F(p)·G(p).4.1Interpretation as a Screened Poisson EquationThe energy in Equation4combines a gradient constraint integrated over the spatial domain with a value constraint summed at discrete points.As shown in the appendix,its minimization can be interpreted as a screened Poisson equation(∆−α˜I)χ=∇· V with an appropriately defined operator˜I.4.2DiscretizationWe apply a discretization similar to that in Section3to the minimization of the energy in Equation4.The coefficients of the solutionχwith respect to the basis{B1,...,B N}are again obtained by solving a linear system of the form Ax=b.The right-hand-side b is unchanged because the constrained value at the sample points is zero.Matrix A now includes the point constraints:A i j= ∇B i,∇B j [0,1]3+α B i,B j (w,P).(5) Note that incorporating the point constraints does not change the sparsity of matrix A because B i(p)·B j(p)is nonzero only if the supports of the two functions overlap,in which case the Poisson equation has already introduced a nonzero entry in the matrix.As in Section3,we solve this linear system using a cascadic multigrid algorithm–iterating over the octree depths from coarsest tofinest,adjusting the constraints,and relaxing the system.Similar to Equation5,the matrix used to transform a solution at depth d to a constraint at depth d is expressed as:A dd i j= ∇B d i,∇B d j [0,1]3+α B d i,B d j (w,P).ACM Transactions on Graphics,V ol.VV,No.N,Article XXX,Publication date:Month YYYY.4•M.Kazhdan and H.HoppeFig.2:Visualizations of the reconstructed implicit function along a planar slice through the cow ‡(shown in blue on the left),for the original Poisson solver,and for the screened Poisson solver without and with scale-independent screening.This operator adjusts the constraint b d (line 3of Algorithm 1)not only by removing the Poisson constraints met at coarser resolutions,but also by modifying the constrained values at points where the coarser solution does not evaluate to zero.4.3Scale-Independent ScreeningTo balance the two energy terms in Equation 3,it is desirable to adjust the screening parameter αsuch that (1)the reconstructed surface shape is invariant under scaling of the input points with respect to the solver domain,and (2)the prolongation of a solution at a coarse depth is an accurate estimate of the solution at a finer depth in the cascadic multigrid approach.We achieve both these goals by adjusting the relative weighting of position and gradient constraints across the different octree depths.Noting that the magnitude of the gradient constraint scales with resolution,we double the weight of the interpolation constraint with each depth:A ddi j = 
∇B d i ,∇B dj [0,1]3+2d α B d i ,B dj (w ,P ).The adaptive weight of 2d is chosen to keep the Laplacian and screening constraints around the surface in balance.To see this,assume that the points are locally planar,and consider the row of the system matrix corresponding to an octree node overlapping the points.The coefficients of the system in that row are the sum of Laplacian and screening terms.If we consider the rows corresponding to the child nodes that overlap the surface,we find that the contribution from the Laplacian constraints scales by a factor of 1/2while the contribution from the screening term scales by a factor of 1/4.2Thus,scaling the screening weights by a factor of two with each resolution keeps the two terms in balance.Figure 2shows the benefit of scale-independent screening in reconstructing a cow model.The leftmost image shows a plane passing through the bounding cube of the cow,and the images to the right show the values of the computed indicator function along that plane,for different implementations of the solver.As the figure shows,the unscreened Poisson solver provides a good approximation of the indicator functions,with values inside (resp.outside)the surface approximately 1/2(resp.-1/2).However,applying the same solver to the screened Poisson equation (second from right)provides a solution that is only correct near the input samples and returns to zero near the faces of the bounding cube,2Forthe Laplacian term,the Laplacian scales by a factor of 4with refinement,and volumetric integrals scale by a factor of 1/8.For the screening term,area integrals scale by a factor of 1/4.potentially resulting in spurious surface sheets away from the surface.It is only with scale-independent screening (right)that we obtain a high-quality solution to the screened Poisson ing this resolution adaptive weighting,our system has the property that the reconstruction obtained by solving at depth D is identical to the reconstruction that would be obtained by scaling the point set by 1/2and solving at depth D +1.To see this,we consider the two energies that guide the reconstruc-tion,E V (χ)measuring the extent to which the gradients of the so-lution match the prescribed vector field,and E (w ,P )(χ)measuring the extent to which the solution meets the screening constraint:E V (χ)=V (p )−∇χ(p )2d p E (w ,P )(χ)=Area (P )∑p ∈P w (p )∑p ∈Pw (p )χ2(p ).Scaling by 1/2,we obtain a new point set (˜w ,˜P)with positions scaled by 1/2,unchanged weights,˜w (p )=w (2p ),and scaled area,Area (˜P )=Area (P )/4;a new scalar field,˜χ(p )=χ(2p );and a new vector field,˜ V (p )=2 V (2p ).Computing the correspondingenergies,we get:E ˜ V (˜χ)=1E V(χ)and E (˜w ,˜P )(˜χ)=1E (w ,P )(χ).Thus,scaling the screening weight by a factor of two with eachsuccessive depth ensures that the sum of energies is unchanged (up to multiplication by a constant)so the minimizer remains the same.4.4Boundary ConditionsIn order to define the linear system,it is necessary to define the behavior of the function space along the boundary of the integration domain.In the original Poisson reconstruction the authors imposed Dirichlet boundary conditions,forcing the implicit function to havea value of −12along the boundary.In the present work we extend the implementation to support Neumann boundary conditions as well,forcing the normal derivative to be zero along the boundary.In principle these two boundary conditions are equivalent for watertight surfaces,since the indicator function has a constant negative value outside the 
model.However,in the presence of missing data we find Neumann constraints to be less restrictive because they only require that the implicit function have zero derivative across the boundary of the integration domain,a property that is compatible with the gradient constraint since the guiding vector field V is set to zero away from the samples.(Note that when the surface does cross the boundary of the domain,the Neumann boundary constraints create a bias to crossing the domain boundary orthogonally.)Figure 3shows the practical implications of this choice when reconstructing the Angel model,which was only scanned from the front.The left image shows the original point set and the reconstructions using Dirichlet and Neumann boundary conditions are shown to the right.As the figure shows,imposing Dirichlet constraints creates a water-tight surface that closes off before reaching the boundary while using Neumann constraints allows the surface to extend out to the boundary of the domain.ACM Transactions on Graphics,V ol.VV ,No.N,Article XXX,Publication date:Month YYYY .Screened Poisson Surface Reconstruction•5Fig.3:Reconstructions of the Angel point set‡(left)using Dirichlet(center) and Neumann(right)boundary conditions.Similar results can be seen at the bases of the models in Figures1 and4a,with the original Poisson reconstructions obtained using Dirichlet constraints and the screened reconstructions obtained using Neumann constraints.5.IMPROVED ALGORITHMIC COMPLEXITYIn this section we discuss the efficiency of our reconstruction al-gorithm.We begin by analyzing the complexity of the algorithm described above.Then,we present two algorithmic improvements. Thefirst describes how hierarchical clustering can be used to re-duce the screening overhead at coarser resolutions.The second ap-plies to both the unscreened and screened solver implementations, showing that the asymptotic time complexity in both cases can be reduced to be linear in the number of input points.5.1Efficiency of basic solverLet us begin by analyzing the computational complexity of the unscreened and screened solvers.We assume that the points P are evenly distributed over a surface,so that the depth of the adapted octree is D=O(log|P|)and the number of octree nodes at depth d is O(4d).We also note that the number of nonzero entries in matrix A dd is O(4d),since the matrix has O(4d)rows and each row has at most53nonzero entries.(Since we use second-order B-splines, basis functions are supported within their one-ring neighborhoods and the support of two functions will overlap only if one is within the two-ring neighborhood of the other.)Assuming that the matrices A dd have already been computed,the computational complexity for the different steps in Algorithm1is: Step3:O(4d)–since A dd has O(4d)nonzero entries.Step4:O(4d)–since A d has O(4d)nonzero entries and the number of relaxation steps performed is constant.Steps2-3:∑d−1d =0O(4d)=O(4d·d).Steps2-4:O(4d·d+4d)=O(4d·d).Steps1-4:∑D d=0O(4d·d)=O(4D·D)=O(|P|·log|P|). 
There still remains the computation of matrices A dd .For the unscreened solver,the complexity of computing A dd is O(4d),since each entry can be computed in constant time.Thus, the overall time complexity remains O(|P|·log|P|).For the screened solver,the complexity of computing A dd is O(|P|)since defining the coefficients requires accumulating the screening contribution from each of the points,and each point contributes to a constant number of rows.Thus,the overall time complexity is dominated by the cost of evaluating the coefficients of A dd which is:D∑d=0d−1∑d =0O(|P|)=O(|P|·D2)=O(|P|·log2|P|).5.2Hierarchical Clustering of Point ConstraintsOurfirst modification is based on the observation that since the basis functions at coarser resolutions are smooth,it is unnecessary to constrain them at the precise sample locations.Instead,we cluster the weighted points as in[Rusinkiewicz and Levoy2000]. Specifically,for each depth d,we define(w d,P d)where p i∈P d is the weighted average position of the points falling into octree node i at depth d,and w d(p i)is the sum of the associated weights.3 If all input points have weight w(p)=1,then w d(p i)is simply the number of points falling into node i.This alters the computation of the system matrix coefficients:A dd i j= ∇B d i,∇B d j [0,1]3+2dα B d i,B d j (w d,P d).Note that since d>d ,the value B d i,B d j (w d,P d)is obtained by summing over points stored with thefiner resolution.In particular,the complexity of computing A dd for the screened solver becomes O(|P d|)=O(4d),which is the same as that of the unscreened solver,and both implementations now have an overall time complexity of O(|P|·log|P|).On typical examples,hierarchical clustering reduces execution time by a factor of almost two,and the reconstructed surface is visually indistinguishable.5.3Conforming OctreesTo account for the adaptivity of the octree,Algorithm1subtracts off the constraints met at all coarser resolutions before relaxing at a given depth(steps2-3),resulting in an algorithm with log-linear time complexity.We obtain an implementation with linear complexity by forcing the octree to be conforming.Specifically, we define two octree cells to be mutually visible if the supports of their associated B-splines overlap,and we require that if a cell at depth d is in the octree,then all visible cells at depth d−1must also be in the tree.Making the tree conforming requires the addition of new nodes at coarser depths,but this still results in O(4d)nodes at depth d.While the conforming octree does not satisfy the condition that a coarser solution can be prolonged into afiner one,it has the property that the solution obtained at depths{0,...,d−1}that is visible to a node at depth d can be expressed entirely in terms of the coefficients at depth d−ing an accumulation vector to store the visible part of the solution,we obtain the linear-time implementation in Algorithm2.3Note that the weight w d(p)is unrelated to the screening weight2d introduced in Section4.3for scale-independent screening.ACM Transactions on Graphics,V ol.VV,No.N,Article XXX,Publication date:Month YYYY.6•M.Kazhdan and H.HoppeHere,P d d−1is the B-spline prolongation operator,expressing a solution at depth d−1in terms of coefficients at depth d.The number of nonzero entries in P d d−1is O(4d),since each column has at most43nonzero entries,so steps2-5of Algorithm2all have complexity O(4d).Thus,the overall complexity of both the unscreened and screened solvers becomes O(|P|).Algorithm2:Conforming Cascadic Poisson 
Solver1For d∈{0,...,D}Iterate from coarse tofine.2ˆx d−1=P d−1d−2ˆx d−2Upsample coarseraccumulation vector.3ˆx d−1=ˆx d−1+x d−1Add in coarser solution.4b d=b d−A d d−1ˆx d−1Remove constraintsmet at coarser depths.5Relax A d x d=b d Adjust the system at depth d.5.4Implementation DetailsThe algorithm is implemented in C++,using OpenMP for multi-threaded parallelization.We use a conjugate-gradient solver to re-lax the system at each multigrid level.With the exception of the octree construction,most of the operations involved in the Poisson reconstruction can be categorized as operations that either“accu-mulate”or“distribute”information[Bolitho et al.2007,2009].The former do not introduce write-on-write conflicts and are trivial to parallelize.The latter only involve linear operations,and are par-allelized using a standard map-reduce approach:in the map phase we create a duplicate copy of the data for each thread to distribute values into,and in the reduce phase we merge the copies by taking their sum.6.RESULTSWe evaluate the algorithm(Screened)by comparing its accuracy and computational efficiency with several prior methods:the original Poisson reconstruction of Kazhdan et al.[2006](Poisson), the Wavelet reconstruction of Manson et al.[2008](Wavelet),and the Smooth Signed Distance reconstruction of Calakli and Taubin [2011](SSD).For the new algorithm,we set the screening weight toα=4and use Neumann boundary conditions in all experiments.(Numerical results obtained using Dirichlet boundaries were indistinguishable.) For the prior methods,we set algorithmic parameters to values recommended by the authors,using Haar Wavelets in the Wavelet reconstruction and setting the value/normal/Hessian weights to 1/1/0.25in the SSD reconstruction.For Poisson,SSD,and Screened we set the“samples-per-node”parameter to1and the “bounding-box-scale”parameter to1.1.(For Wavelet the bounding box scale is hard-coded at1and there is no parameter to adjust the sampling density.)6.1AccuracyWe run three different types of experiments.Real Scanner Data.To evaluate the accuracy of the different reconstruction algorithms on real-world data,we gathered several scanned datasets:the Awakening(10M points),the Stanford Bunny (0.2M points),the David(11M points),the Lucy(1.0M points), and the Neptune(2.4M points).For each dataset,we randomly partitioned the points into two equal-sized subsets:input points for the reconstruction algorithms,and validation points to measure point-to-reconstruction distances.Figure4a shows reconstructions results for the Neptune and David models at depth10.It also shows surface cross-sections overlaid with the validation points in their vicinity.These images reveal that the Poisson reconstruction(far left),and to a lesser extent the SSD reconstruction(center left),over-smooth the data,while the Wavelet reconstruction(center left)has apparent derivative discontinuities.In contrast,our screened Poisson approach(far right)provides a reconstruction that faithfullyfits the samples without introducing noise.Figure4b shows quantitative results across all datasets,in the form of RMS errors,measured using the distances from the validation points to the reconstructed surface.(We also computed the maximum error,but found that its sensitivity to individual outlier points made it an unreliable and unindicative statistic.)As thefigure indicates,the Screened Poisson reconstruction(blue)is always more accurate than both the original Poisson reconstruction algorithm(red)and the Wavelet reconstruction(purple),and 
generates reconstruction whose RMS errors are comparable to or smaller than those of the SSD reconstruction(green).Clean Uniformly Sampled Data.To evaluate reconstruction accuracy on clean data,we used the approach of Osada et al.[2001] to generate oriented point sets by uniformly sampling the surfaces of the Fandisk,Armadillo Man,Dragon,and Raptor models.For each model,we generated datasets of100K and1M points and reconstructed surfaces from each point set using the four different reconstruction algorithms.As an example,Figure5a shows the reconstructions of the fandisk and raptor models using1M point samples at depth10.Despite the lack of noise in the input data,the Wavelet reconstruction has spurious high-frequency detail.Focusing on the sharp edges in the model,we also observe that the screened Poisson reconstruction introduces less smoothing,providing a reconstruction that is truer to the original data than either the original Poisson or the SSD reconstructions.Figure5b plots RMS errors across all models,measured bidirec-tionally between the original surface and the reconstructed surface using the Metro tool[Cignoni and Scopigno1998].As in the case of real scanner data,screened Poisson reconstruction always out-performs the original Poisson and Wavelet reconstructions,and is comparable to or better than the SSD reconstruction. Reconstruction Benchmark.We use the benchmark of Berger et al.[2011]to evaluate the accuracy of the algorithms under different simulations of scanner error,including nonuniform sampling,noise,and misalignment.The dataset consists of mul-tiple virtual scans of implicit surfaces representing the Anchor, Dancing Children,Daratech,Gargoyle,and Quasimodo models. As an example,Figure6a visualizes the error in the reconstructions of the anchor model from a virtual scan consisting of210K points (demarked with a dashed rectangle in Figure6b)at depth9.The error is visualized using a red-green-blue scale,with red signifyingACM Transactions on Graphics,V ol.VV,No.N,Article XXX,Publication date:Month YYYY.。

Voronoi Diagrams in Python


Voronoi Diagrams in Python: From Data Set to Visual Interpretation. Introduction: The Voronoi diagram (Thiessen polygons) is a classic method for spatial data analysis. It partitions a set of points in the plane into regions such that every point within a region is closest to that region's particular generating point.

In this article, we explore how to construct Voronoi diagrams with the SciPy library in Python and analyze the data through visualization.

Step 1: Setup. First, we need to install Python and the SciPy library.

SciPy can be installed with the pip command; if Python is not yet installed, set up a Python environment first.

Once installation is complete, we can verify it by importing the SciPy library.

import scipy

if scipy.__version__:
    print("SciPy successfully installed!")
else:
    print("SciPy installation failed!")

Step 2: Generate a data set. Before using Voronoi diagrams, we need to prepare a data set.

Suppose we have a set of points in the plane; we can create the data set by random generation.

import numpy as np

np.random.seed(0)
n_points = 100
points = np.random.random((n_points, 2))

In this code, we use the NumPy library to generate a (100, 2) two-dimensional array in which every element is a random number between 0 and 1.

Step 3: Construct the Voronoi diagram. Before building it, we need to import the Voronoi class from the scipy.spatial module.

from scipy.spatial import Voronoi, voronoi_plot_2d

vor = Voronoi(points)

In this code, we pass the data set to the Voronoi class to create a Voronoi object.

Step 4: Visualize the Voronoi diagram. To better understand how the diagram partitions the plane, we can use the Matplotlib library for visualization, as sketched below.
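A minimal plotting sketch, continuing with the points and vor objects defined above; voronoi_plot_2d is SciPy's built-in helper, and the figure size, colors, and axis limits are arbitrary choices:

import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(6, 6))
voronoi_plot_2d(vor, ax=ax, show_vertices=False, line_colors="gray")  # Voronoi edges
ax.plot(points[:, 0], points[:, 1], "b.")                             # generating points
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_title("Voronoi diagram of 100 random points")
plt.show()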

Lightweight Image Super-Resolution Reconstruction Using a Multi-Scale Upsampling Method


Software Guide, Vol. 22 No. 4, April 2023
Lightweight Image Super-Resolution Reconstruction Using a Multi-Scale Upsampling Method
CAI Jing, ZENG Sheng-qiang (School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China)

Abstract: At present, most image super-resolution networks improve reconstruction ability by deepening the convolutional neural network and expanding its width, but this greatly increases model complexity. To this end, a lightweight image super-resolution algorithm is proposed. A two-branch feature-extraction module lets the network fuse and output feature information at different scales in a single pass, and pixel-attention branches add a weight to each pixel, enhancing the feature expression of pixel-level detail at the cost of only a few extra parameters. The upsampling stage combines sub-pixel convolution and neighborhood interpolation to extract feature-depth and spatial-scale information respectively and outputs the final image; the sub-pixel convolution branch, combined with the attention mechanism, further strengthens important information and gives the output image a better visual effect. Experiments show that with only 351K parameters the model achieves reconstruction performance similar to the CARN model with 1,592K parameters, and its SSIM on some test sets is higher than CARN's, which confirms the effectiveness of the proposed method and provides a new solution for lightweight image super-resolution reconstruction.

Keywords: image super-resolution reconstruction; lightweight; pixel attention; multi-scale upsampling; image processing
DOI: 10.11907/rjdk.221516    CLC number: TP391.41    Document code: A    Article ID: 1672-7800(2023)004-0168-07

0 Introduction
Image super-resolution reconstruction refers to recovering a high-resolution image from a corresponding low-resolution image, and it is an important topic in machine vision and image processing.
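The paper's exact network is not reproduced here; as a rough, hypothetical illustration of the kind of upsampling head the abstract describes (a sub-pixel-convolution branch gated by pixel attention, combined with a neighborhood-interpolation branch), here is a minimal PyTorch sketch. The channel counts, kernel sizes, and the way the two branches are merged are assumptions, not the authors' design:

import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelAttention(nn.Module):
    """1x1 convolution + sigmoid produces a per-pixel weight map."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        return x * torch.sigmoid(self.conv(x))

class MultiScaleUpsample(nn.Module):
    """Sub-pixel convolution branch (with pixel attention) plus an interpolation branch."""
    def __init__(self, channels, scale=2):
        super().__init__()
        self.expand = nn.Conv2d(channels, channels * scale * scale, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)
        self.attn = PixelAttention(channels)
        self.to_rgb = nn.Conv2d(channels, 3, kernel_size=3, padding=1)
        self.scale = scale

    def forward(self, x):
        # Branch 1: sub-pixel convolution (feature-depth information), gated by pixel attention
        deep = self.attn(self.shuffle(self.expand(x)))
        # Branch 2: plain nearest-neighbour interpolation of the features (spatial-scale information)
        spatial = F.interpolate(x, scale_factor=self.scale, mode="nearest")
        return self.to_rgb(deep + spatial)

# Example: upsample a 64-channel feature map of a 48x48 patch by a factor of 2
head = MultiScaleUpsample(channels=64, scale=2)
out = head(torch.randn(1, 64, 48, 48))   # -> shape (1, 3, 96, 96)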

Data Augmentation Techniques in the PyTorch DataLoader


Data augmentation applies a series of random transformations to the data during training, with the goal of increasing the diversity of training samples and improving the model's robustness and generalization.

In PyTorch, data augmentation can be implemented conveniently through the DataLoader and the transforms module.

Several commonly used augmentation techniques are introduced below.

1. Random horizontal flip: flips the image horizontally at random, implemented with transforms.RandomHorizontalFlip().

This increases sample diversity and works especially well for objects with left-right symmetry in the image, such as vehicles and faces.

2. Random vertical flip: flips the image vertically at random, implemented with transforms.RandomVerticalFlip().

Like the horizontal flip, this also increases sample diversity.

3. Random crop: crops a region of a specified size from a random position in the image, implemented with transforms.RandomCrop().

This simulates different shooting angles or the target object appearing at different positions in the image, adding variation to the samples.

4. Random rotation: rotates the image by a random angle, implemented with transforms.RandomRotation().

This increases sample diversity and works well in applications where the target is rotation-invariant, such as face recognition and object detection.

5. Random affine transform: applies a random affine transformation to the image, implemented with transforms.RandomAffine().

This simulates geometric transformations such as rotation, translation, scaling, and shearing, adding variation to the samples.

6. Random brightness and contrast adjustment: randomly adjusts the brightness and contrast of the image, implemented with transforms.ColorJitter().

This simulates images taken under different lighting conditions, increasing sample diversity.

7. Random color jitter: randomly perturbs the colors of the image, including brightness, contrast, saturation, and hue, implemented with transforms.ColorJitter().

This simulates the effects of different cameras or image-processing devices, increasing sample diversity. A sketch of how these transforms are typically combined in a DataLoader follows below.
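A minimal sketch of how these transforms are typically composed and handed to a DataLoader; the particular transform parameters, normalization statistics, and the FakeData stand-in dataset are illustrative choices, not part of the original text:

import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=15),
    transforms.RandomCrop(224, padding=8),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.1),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# FakeData stands in for a real image dataset; augmentation runs on the fly each epoch
train_set = datasets.FakeData(size=256, image_size=(3, 256, 256), num_classes=10,
                              transform=train_transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

for images, labels in train_loader:
    pass  # training step goes here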

Model Hyperparameters: Standard English Terms


模型超参数英文标准格式在机器学习和深度学习中,超参数(Hyperparameters)是模型训练过程中设置的参数,其值在训练之前需要手动进行选择或调整。

The standard English terms for common hyperparameters are listed below:
1. Learning rate
2. Batch size
3. Number of epochs
4. Hidden layer size
5. Dropout rate
6. Regularization strength
7. Number of layers
8. Activation function
9. Optimization algorithm
10. Weight initialization
11. Learning rate decay
12. Momentum
13. Loss function
These are some common hyperparameters whose standard English names are widely used in the machine learning and deep learning literature and in practice.

Note that the exact hyperparameter names and formats may vary across algorithms, libraries, and frameworks, but those listed above are fairly generic and apply to most machine learning and deep learning tasks.
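As a small illustration of how these names commonly appear in practice, here is a hypothetical training configuration written as a plain Python dictionary; every value is an arbitrary example:

config = {
    "learning_rate": 1e-3,
    "batch_size": 64,
    "num_epochs": 50,
    "hidden_layer_size": 256,
    "dropout_rate": 0.5,
    "regularization_strength": 1e-4,    # e.g. L2 weight decay
    "num_layers": 4,
    "activation_function": "relu",
    "optimization_algorithm": "adam",
    "weight_initialization": "xavier_uniform",
    "learning_rate_decay": 0.95,        # per-epoch multiplicative decay
    "momentum": 0.9,                    # used by SGD-style optimizers
    "loss_function": "cross_entropy",
}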


The Fine-to-Coarse Reconstruction Algorithm: Overview and Explanation


1. Introduction
1.1 Overview: In computer vision, image reconstruction is an important task whose goal is to generate a high-quality, high-resolution image from a low-resolution input.

The fine-to-coarse reconstruction algorithm is a commonly used image-reconstruction algorithm; by progressively increasing the resolution level of the image, it reconstructs the image from coarse to fine to obtain a clearer, more detailed result.

The algorithm is widely applied in image processing and computer vision and can effectively improve image quality and the degree to which detail is recovered.

This article describes the principle, applications, and advantages of the fine-to-coarse reconstruction algorithm in detail, in the hope of giving readers guidance for understanding and applying it in depth.

1.2 Structure of the article: The article is divided into three parts: introduction, main body, and conclusion.

The introduction gives an overview of the fine-to-coarse reconstruction algorithm and describes the structure and purpose of the article.

The main body presents the principle of the algorithm in detail, along with its performance in practical applications.

We focus on its applications in fields such as image processing and computer vision, and discuss its advantages and limitations.

Finally, the conclusion summarizes the whole article, looks ahead to future directions for the algorithm, and offers some closing thoughts.

The article is clearly structured and organized, which should help readers fully appreciate the importance and value of the fine-to-coarse reconstruction algorithm.

1.3 Purpose: The purpose of the fine-to-coarse reconstruction algorithm is to reconstruct an image or model efficiently through a step-by-step reconstruction process that moves from detail to the whole.

Through step-by-step iteration, the algorithm improves the speed and accuracy of reconstruction while preserving detail.

This article aims to explore the principle, applications, and advantages of the algorithm in depth, in the hope of offering inspiration and help for related research and applications.

An Improved Gaussian Frequency-Domain Compressive Sensing Sparse Inversion Method


Abstract
Compressive sensing and sparse inversion methods have gained a significant amount of attention in recent years due to their capability to accurately reconstruct signals from measurements with significantly less data than previously possible. In this paper, a modified Gaussian frequency domain compressive sensing and sparse inversion method is proposed, which leverages the proven strengths of the traditional method to enhance its accuracy and performance. Simulation results demonstrate that the proposed method can achieve a higher signal-to-noise ratio and a better reconstruction quality than its traditional counterpart, while also reducing the computational complexity of the inversion procedure.

Introduction
Compressive sensing (CS) is an emerging field that has garnered significant interest in recent years because it leverages the sparsity of signals to reduce the number of measurements required to accurately reconstruct the signal. This has many advantages over traditional signal processing methods, including faster data acquisition times, reduced power consumption, and lower data storage requirements. CS has been successfully applied to a wide range of fields, including medical imaging, wireless communications, and surveillance.

One of the most commonly used methods in compressive sensing is the Gaussian frequency domain compressive sensing and sparse inversion (GFD-CS) method. In this method, compressive measurements are acquired by multiplying the original signal with a randomly generated sensing matrix. The measurements are then transformed into the frequency domain using the Fourier transform, and the sparse signal is reconstructed using a sparsity-promoting algorithm.

In recent years, researchers have made numerous improvements to the GFD-CS method, with the goal of improving its reconstruction accuracy, reducing its computational complexity, and enhancing its robustness to noise. In this paper, we propose a modified GFD-CS method that combines several techniques to achieve these objectives.

Proposed Method
The proposed method builds upon the well-established GFD-CS method, with several key modifications. The first modification is the use of a hierarchical sparsity-promoting algorithm, which promotes sparsity at both the signal level and the transform level. This is achieved by applying the hierarchical thresholding technique to the coefficients corresponding to the higher frequency components of the transformed signal.

The second modification is the use of a novel error feedback mechanism, which reduces the impact of measurement noise on the reconstructed signal. Specifically, the proposed method utilizes an iterative algorithm that updates the measurement error based on the difference between the reconstructed signal and the measured signal. This feedback mechanism effectively increases the signal-to-noise ratio of the reconstructed signal, improving its accuracy and robustness to noise.

The third modification is the use of a low-rank approximation method, which reduces the computational complexity of the inversion algorithm while maintaining reconstruction accuracy. This is achieved by decomposing the sensing matrix into a product of two lower dimensional matrices, which can be subsequently inverted using a more efficient algorithm.

Simulation Results
To evaluate the effectiveness of the proposed method, we conducted simulations using synthetic data sets. Three different signal types were considered: a sinusoidal signal, a pulse signal, and an image signal.
The results of the simulations were compared to those obtained using the traditional GFD-CS method. The simulation results demonstrate that the proposed method outperforms the traditional GFD-CS method in terms of signal-to-noise ratio and reconstruction quality. Specifically, the proposed method achieves a higher signal-to-noise ratio and lower mean squared error for all three types of signals considered. Furthermore, the proposed method achieves these results with a reduced computational complexity compared to the traditional method.

Conclusion
The results of our simulations demonstrate the effectiveness of the proposed method in enhancing the accuracy and performance of the GFD-CS method. The combination of sparsity promotion, error feedback, and low-rank approximation techniques significantly improves the signal-to-noise ratio and reconstruction quality, while reducing the computational complexity of the inversion procedure. Our proposed method has potential applications in a wide range of fields, including medical imaging, wireless communications, and surveillance.
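The paper's specific GFD-CS variant is not spelled out in code above, so the following NumPy sketch only illustrates the generic setup the introduction describes: random Gaussian measurements of a signal that is sparse in the Fourier domain, recovered with a simple iterative gradient/soft-thresholding loop. The signal, matrix sizes, step size, and threshold are illustrative choices, not the authors' algorithm:

import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 96                          # signal length, number of measurements

# Test signal that is sparse in the Fourier domain (two sinusoids)
t = np.arange(n)
x_true = np.cos(2 * np.pi * 10 * t / n) + 0.5 * np.cos(2 * np.pi * 37 * t / n)

# Random Gaussian sensing matrix and noisy compressive measurements y = A x + noise
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.normal(size=m)

# Sparse recovery: gradient step on ||y - A x||^2, then soft-threshold the FFT coefficients of x
step = 1.0 / np.linalg.norm(A, 2) ** 2
lam = 2.0                               # illustrative threshold, not a tuned value
x = np.zeros(n)
for _ in range(300):
    x = x + step * (A.T @ (y - A @ x))
    coeffs = np.fft.fft(x)
    mags = np.abs(coeffs)
    coeffs *= np.maximum(1.0 - lam / np.maximum(mags, 1e-12), 0.0)
    x = np.real(np.fft.ifft(coeffs))

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))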


arXiv:astro-ph/0505329v4  28 Mar 2007
Mon. Not. R. Astron. Soc. 000, 000–000 (0000)    Printed 2 February 2008    (MN LaTeX style file v2.2)

Smoothing Supernova Data to Reconstruct the Expansion History of the Universe and its Age

Arman Shafieloo 1,3, Ujjaini Alam 1,4, Varun Sahni 1,5 and Alexei A. Starobinsky 2,6
1 Inter University Centre for Astronomy & Astrophysics, Pune, India
2 Landau Institute for Theoretical Physics, 119334 Moscow, Russia
3 arman@iucaa.ernet.in  4 ujjaini@iucaa.ernet.in  5 varun@iucaa.ernet.in  6 alstar@landau.ac.ru

2 February 2008

ABSTRACT
We propose a non-parametric method of smoothing supernova data over redshift using a Gaussian kernel in order to reconstruct important cosmological quantities including H(z) and w(z) in a model independent manner. This method is shown to be successful in discriminating between different models of dark energy when the quality of data is commensurate with that expected from the future SuperNova Acceleration Probe (SNAP). We find that the Hubble parameter is especially well-determined and useful for this purpose. The look back time of the universe may also be determined to a very high degree of accuracy (<~0.2%) in this method. By refining the method, it is also possible to obtain reasonable bounds on the equation of state of dark energy. We explore a new diagnostic of dark energy – the 'w-probe' – which can be calculated from the first derivative of the data. We find that this diagnostic is reconstructed extremely accurately for different reconstruction methods even if Ω_0m is marginalized over. The w-probe can be used to successfully distinguish between ΛCDM and other models of dark energy to a high degree of accuracy.

Key words: cosmology: theory – cosmological parameters – statistics

1 INTRODUCTION
The nature of dark energy has been the subject of much debate over the past decade (for reviews see Sahni & Starobinsky (2000); Carroll (2001); Peebles & Ratra (2003); Padmanabhan (2003); Sahni (2004)). The supernova (SNe) type Ia data, which gave the first indications of the accelerated expansion of the universe, are expected to throw further light on this intriguing question as their quality steadily improves. While the number of SNe available to us has increased two-fold over the past couple of years (at present there are about 150 SNe between redshifts of 0 and 1.75, with 10 SNe above a redshift of unity) (Riess et al. 1998; Perlmutter et al. 1999; Knop et al. 2003; Tonry et al. 2003; Riess et al. 2004), the SNe data are still not of a quality to firmly distinguish different models of dark energy. In this connection, an important role in our quest for a deeper understanding of the nature of dark energy has been played by the 'reconstruction program'. Commencing from the first theoretical exposition of the reconstruction idea – Starobinsky (1998); Huterer & Turner (1999); Nakamura & Chiba (1999), and Saini et al. (2000) which applied it to an early supernova data set – there have been many attempts to reconstruct the properties of dark energy directly from observational data without assuming any particular microscopic/phenomenological model for the former. When using SNe data for this purpose, the main obstacle is the necessity to: (i) differentiate the data once to pass from the luminosity distance d_L to the Hubble parameter H(t) ≡ ȧ(t)/a(t) and to the effective energy density of dark energy ε_DE, (ii) differentiate the data a second time in order to obtain the deceleration parameter q ≡ −ä a/ȧ², the dark energy effective pressure p_DE, and the equation of state parameter w(t) ≡ p_DE/ε_DE. Here, a(t) is the scale
factor of a Friedmann-Robertson-Walker (FRW)isotropic cosmological model which we further assume to be spatially flat,as predicted by the simplest variants of the inflationary scenario of the early Universe and confirmed by observational CMB data.To get around this obstacle,some kind of smoothing of d L data with respect to its argument –the redshift z (t )–is needed.One possible way is to parameterize the quantity which is of interest (H (z ),w (z ),etc.)by some2Arman Shafieloo,Ujjaini Alam,Varun Sahni,Alexei A.Starobinskyfunctional form containing a few free parameters and then determine the value of these parameters which produce the bestfit to the data.This implies an implicit smoothing of d L with a characteristic smoothing scale defined by the number of parameters,and with a weight depending on the form of parameterization.Different parameterizations have been used for:d L(Huterer&Turner1999;Saini et al. 2000;Chiba&Nakamura2000),H(z)(Sahni et al.2003; Alam et al.2004;Alam,Sahni&Starobinsky2004a), w(z)(Chevallier&Polarski2001;Weller&Albrecht 2002;Gerke&Efstathiou2002;Maor et al. 2002;Corasaniti&Copeland2003;Linder2003; Wang&Mukherjee2004;Saini,Weller&Bridle 2004;Nesseris&Perivelaroupolos2004;Gong2005a; Lazkoz,Nesseris&Perivelaroupolos2005)and V(z) (Simon,Verde&Jimenez2005;Guo,Ohta&Zhang2005). In Huterer&Turner(1999),a polynomial expansion of the luminosity distance was used to reconstruct the equation of state.However,Weller&Albrecht(2002)showed this ansatz to be inadequate since it needed an arbitrarily large number of parameters tofit even the simplestΛCDM equation of state.They proposed instead a polynomial ansatz for the equation of state which worked somewhat better.In Saini et al.(2000)a rational Pad`e-type ansatz for d L was proposed,which gave good results.In recent times there have been many more attempts at parameterizing dark energy.In Chevallier&Polarski(2001)and Linder (2003)an ansatz of the form w=w0+w a(1−a)was suggested for the equation of state.Corasaniti&Copeland (2003)suggested a four-parameter ansatz for the equation of state.Sahni et al.(2003)proposed a slightly different approach in which the dark energy density was expanded in a polynomial ansatz,the properties of which were examined in(Alam et al.2004;Alam,Sahni&Starobinsky 2004a;Alam et al.2004b).See Alam et al.(2003);Gong (2005b);Basset,Corasaniti&Kunz(2004)for a summary of different approaches to the reconstruction program and for a more extensive list of references.In spite of some ambiguity in the form of these different parameterizations, it is reassuring that they produce consistent results for the bestfit curve over the range0.1<∼z<∼1where we have sufficient amount of data(see,e.g.,Fig.10in Gong (2005b)).However it is necessary to point out that the current SNe data are not of a quality that could allow us to unambiguously differentiateΛCDM from evolving dark energy.That is why our focus in this paper will be on better quality data(from the SNAP experiment)which should be able to successfully address this important issue.A different,non-parametric smoothing procedure involves directly smoothing either d L,or any other quantity defined within redshifts bins,with some characteristic smoothing scale.Different forms of this approach have been elaborated in Wang&Lovelace (2001);Huterer&Starkman(2003);Saini(2003); Daly&Djorgovsky(2003,2004);Wang&Tegmark (2005);Espana-Bonet&Ruiz-Lapuente(2005).One of the advantages of this approach is that the dependence of the results on the size of the smoothing scale 
becomes explicit. We emphasize again that the present consensus seems to be that,while the cosmological constant remains a good fit to the data,more exotic models of dark energy are by no means ruled out(though their diversity has been significantly narrowed already).Thus,until the qualityof data improves dramatically,thefinal judgment on the nature of dark energy cannot yet be pronounced.In this paper,we develop a new reconstruction method which formally belongs to the second category,and whichis complementary to the approach offitting a parametric ansatz to the dark energy density or the equation of state. Most of the papers using the non-parametric approach cited above exploited a kind of top-hat smoothing in redshift space.Instead,we follow a procedure which is well knownand frequently used in the analysis of large-scale structure (Coles&Lucchin1995;Martinez&Saar2002);namely,we attempt to smooth noisy data directly using a Gaussian smoothing function.Then,from the smoothed data,we cal-culate different cosmological functions and,thus,extract information about dark energy.This method allows us to avoid additional noise due to sharp borders between bins. Furthermore,since our method does not assume any defi-nite parametric representation of dark energy,it does not bias results towards any particular model.We therefore ex-pect this method to give us model-independent estimates of cosmological functions,in particular,the Hubble parameterH(z)≡˙a(t)/a(t).On the basis of data expected from the SNAP satellite mission,we show that the Gaussian smooth-ing ansatz proposed in this paper can successfully distin-guish between rival cosmological models and help shed lighton the nature of dark energy.2METHODOLOGYIt is useful to recall that,in the context of structure for-mation,it is often advantageous to obtain a smoothed den-sityfieldδS(x)from afluctuating‘raw’densityfield,δ(x′), using a low passfilter F having a characteristic scale R f (Coles&Lucchin1995)δS(x,R f)= δ(x′)F(|x−x′|;R f)d x′.(1)Commonly usedfilters include:(i)the‘top-hat’filter,which has a sharp cutoffF TH∝Θ(1−|x−x′|/R TH),whereΘis the Heaviside step function(Θ(z)=0for z 0,Θ(z)=1for z>0)and(ii)the Gaussianfilter F G∝exp(−|x−x′|2/2R2G).For our purpose,we shallfind it use-ful to apply a variant of the Gaussianfilter to reconstructthe properties of dark energy from supernova data.In other words,we apply Gaussian smoothing to supernova data (which is of the form{ln d L(z i),z i})in order to extract in-formation about important cosmological parameters such asH(z)and w(z).The smoothing algorithm calculates the lu-minosity distance at any arbitrary redshift z to beln d L(z,∆)s=ln d L(z)g+N(z) i[ln d L(z i)−ln d L(z i)g]×exp −ln2 1+z i2∆2 ,(2) N(z)−1= i exp −ln2 1+z i2∆2 .Here,ln d L(z,∆)s is the smoothed luminosity distance atany redshift z which depends on luminosity distances of eachSmoothing Supernova Data to Reconstruct the Expansion History of the Universe and its Age3Table 1.Expected number of supernovae per redshift bin from the SNAP experiment∆z0.1–0.20.2–0.30.3–0.40.4–0.50.5–0.60.6–0.70.7–0.80.8–0.9N 1701551421301191079480dzd L (z )1−(H 0/H )2Ω0m (1+z )3.(4)Theresults will clearly depend upon the value of the scale ∆in (2).A large value of ∆produces a smooth result,but the accuracy of reconstruction worsens,while a small ∆gives a more accurate,but noisy result.Note that,for |z −z i |≪1,the exponent in Eq.(2)reduces to the form −(z −z i )2/2∆2(1+z )2.Thus,the effective Gaussian smooth-ing scale for this algorithm is ∆(1+z ).We expect 
to obtain an optimum value of ∆for which both smoothness and ac-curacy are reasonable.The Hubble parameter can also be used to obtained the weighted average of w 1+¯w =11+z=1δln(1+z ).(5)˜ρDE is the dark energy density ˜ρDE =ρDE /ρ0c (whereρ0c =3H 20/8πG ).We shall show in the section 5that ¯w ,which we call the w -probe,acts as an excellent diagnostic of dark energy,and can differentiate between different models of dark energy with greater accuracy than the equation of state.To check our method,we use data simulated accord-ing to the SuperNova Acceleration Probe (SNAP)experi-ment.This space-based mission is expected to observe close to 6000supernovae,of which about 2000supernovae can beused for cosmological purposes (Aldering et al.2004).We propose to use a distribution of 1998supernovae between redshifts of 0.1and 1.7obtained from Aldering et al.(2004).This distribution of 1998supernovae is shown in Table 1.Although SNAP will not be measuring supernovae at red-shifts below z =0.1,it is not unreasonable to assume that,by the time SNAP comes up,we can expect high quality data at low redshifts from other supernova surveys such as the Nearby SN Factory 1.Hence,in the low redshift region z <0.1,we add 25more supernovae of equivalent errors to the SNAP distribution,so that our data sample now con-sists of 2023supernovae .Using this distribution of data,we check whether the method is successful in reconstruct-ing different cosmological parameters,and also if it can help discriminate different models of dark energy.We simulate 1000realizations of data using the SNAP distribution with the error in the luminosity distance given by σln d L =0.07–the expected error for SNAP.We also consider the possible effect of weak-lensing on high redshift supernovae by adding an uncertainty of σlens (z )≈0.46(0.00311+0.08687z −0.00950z 2)(as in Wang &Tegmark (2005)).Initially,we use a simple model of dark energy when simulating data –an evolving model of dark energy with w =−a/a 0=−1/(1+z )and Ω0m =0.3.It will clearly be of interest to see whether this model can be reconstructed accurately and discriminated from ΛCDM using this method.From the SNAP distribution,we obtain smoothed data at 2000points taken uniformly between the minimum and maximum of the distributions used.Once we are assured of the efficacy of our method,we shall also at-tempt to reconstruct other models of dark energy.Among these,one is the standard cosmological constant (ΛCDM)model with w =−1.The other is a model with a constant equation of state,w =−0.5.Such models with constant equation of state are known as quiessence models of dark energy (Alam et al.2003)and we shall refer to this model as the “quiessence model”throughout the paper.These three models are complementary to each other.For the ΛCDM model,the equation of state is constant at w =−1,w re-mains constant at −0.5for the quiessence model and for the evolving model,w (z )varies rapidly,increasing in value from w 0=−1at the present epoch to w ≃0at high redshifts.14Arman Shafieloo,Ujjaini Alam,Varun Sahni,Alexei A.Starobinsky∆=0.24Fiducial Model:w =−1/(1+z)Figure 1.The smoothing scheme of equation (2)is used to determine H (z )and w (z )from 1000realizations of the SNAP dataset.The smoothing scale is ∆=0.24.The dashed line in each panel represents the fiducial w =−1/(1+z )‘metamorphosis’model while the solid lines show the mean Hubble parameter (left),the mean equation of state (right),and 1σlimits around these quantities.The dotted line in both panels is ΛCDM.Note that the mean Hubble 
Figure 1. The smoothing scheme of equation (2) is used to determine H(z) and w(z) from 1000 realizations of the SNAP dataset. The smoothing scale is Δ = 0.24. The dashed line in each panel represents the fiducial w = −1/(1+z) 'metamorphosis' model, while the solid lines show the mean Hubble parameter (left), the mean equation of state (right), and the 1σ limits around these quantities. The dotted line in both panels is ΛCDM. Note that the mean Hubble parameter is reconstructed so accurately that the fiducial model (dashed line) is not visible in the left panel.

3 RESULTS

In this section we show the results obtained when our smoothing scheme is applied to data expected from the SNAP experiment. The first issue we need to consider is that of the guess model. As mentioned earlier, the guess model in equation (2) is arbitrary. Using a guess model will naturally cause the results to be somewhat biased towards it at low and high redshifts, where there is a paucity of data. Therefore we use an iterative method to estimate the guess model from an initial guess.

Iterative process to obtain the guess model

To estimate the guess model for our smoothing scheme, we use the following iterative method. We start with a simple cosmological model, such as ΛCDM, as our initial guess model: ln d_L^{g0} = ln d_L^{ΛCDM}. The result obtained from this analysis, ln d_L^{1}, is expected to be closer to the real model than the initial guess. We then use this result as our next guess model, ln d_L^{g1} = ln d_L^{1}, and obtain the next result, ln d_L^{2}. With each iteration, we expect the guess model to become more accurate, thus giving a result that is less and less biased towards the initial guess model used.

A few points about the iterative method should be noted here; a short sketch of the procedure is given after this list.
• Using different models for the initial guess does not affect the final result, provided the process is iterated several times. For example, if we use a w = −1/(1+z) 'metamorphosis' model to simulate the data and use either ΛCDM or the w = −0.5 quiessence model as our initial guess, the results for the two cases converge after ≳ 5 iterations.
• Using a very small value of Δ will result in an accurate but noisy guess model, so after a few iterations the result becomes too noisy to be of any use. We should therefore use a large Δ for this process in order to obtain smoother results.
• The bias of the final result decreases with each iteration, since with each iteration we get closer to the true model. The bias decreases non-linearly with the number of iterations M. Generally, after about 10 iterations, for moderate values of Δ, the bias is acceptably small. Beyond this, the bias still decreases with the number of iterations, but the decrease is negligible, while the process takes more time and results in larger errors on the parameters.
• It is important to choose a value of Δ which gives a small bias and also reasonably small errors on the derived cosmological parameters.
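The boot-strapping can be summarised in a few lines. The sketch below assumes the Gaussian kernel in ln(1+z) of equation (2), the same kernel that reappears explicitly in equation (9); the function names and the interpolation of each result onto a callable guess model are illustrative choices, not a prescription from the text.

```python
import numpy as np

def smooth_ln_dl(z_eval, z_data, ln_dl_data, guess, delta=0.24):
    """Kernel smoothing of ln d_L about a guess model (cf. equation 2).

    `guess` is a callable returning the guess ln d_L at a single redshift."""
    g_data = np.array([guess(zi) for zi in z_data])   # guess at the data redshifts
    smoothed = np.zeros(len(z_eval))
    for j, z in enumerate(z_eval):
        kernel = np.exp(-np.log((1.0 + z_data) / (1.0 + z)) ** 2 / (2.0 * delta ** 2))
        smoothed[j] = guess(z) + np.sum(kernel * (ln_dl_data - g_data)) / np.sum(kernel)
    return smoothed

def iterate_guess(z_data, ln_dl_data, initial_guess, delta=0.24, n_iter=10):
    """Boot-strap the guess model: each smoothed result becomes the next guess."""
    guess = initial_guess                             # e.g. ln d_L for LCDM
    for _ in range(n_iter):
        result = smooth_ln_dl(z_data, z_data, ln_dl_data, guess, delta)
        # interpolate the current result so it can act as the next guess model
        guess = lambda z, zk=z_data, rk=result: np.interp(z, zk, rk)
    return guess
```

After the final iteration, the smoothed ln d_L is differentiated (equation 3) to obtain the Hubble parameter.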
To estimate the value of Δ in (2), we consider the following relation between the reconstructed results, the quality and quantity of the data, and the smoothing parameters. One can show that the relative error bars on H(z) scale as (Tegmark 2002)

\[
\frac{\delta H}{H} \;\simeq\; \frac{\sigma}{N^{1/2}\,\Delta^{3/2}}\,,
\qquad (6)
\]

where N is the total number of supernovae (for an approximately uniform distribution of supernovae over the redshift range) and σ is the noise of the data. From the above equation we see that a larger number of supernovae, or a larger smoothing width Δ, will decrease the error bars on the reconstructed H; but, as we shall show in Appendix A, the bias of the method is approximately proportional to Δ². This implies that by increasing Δ we also increase the bias of the results. We attempt to estimate Δ such that the error bars on H are of the same order as σ, which is a reasonable expectation. If we consider a single iteration of our method, then for N ≃ 2000 we get Δ0 ≃ N^{−1/3} ≃ 0.08. However, with each iteration, the errors on the parameters increase. Therefore, using this value of Δ in an iterative process to find the guess model would result in such large errors on the cosmological parameters as to render the reconstruction exercise meaningless. It will be shown in Appendix A that at the M-th iteration the error on ln d_L grows approximately as √M, so that the smoothing scale must be enlarged correspondingly, Δ_M ≃ √M Δ0. Therefore, if we wish to stop the boot-strapping after 10 iterations, then Δ_optimal ≃ √10 Δ0 ≃ 3Δ0 ≃ 0.24. This is the value of Δ we shall use for best results in our smoothing procedure.

Considering all these factors, we use a smoothing scale Δ = 0.24 for the smoothing procedure of equation (2), with an iterative method for finding the guess model (with ΛCDM as the initial guess). The boot-strapping is stopped after 10 iterations. We will see that the results reconstructed using these parameters do not contain noticeable bias, and the errors on the parameters are also satisfactory.

Figure 1 shows the reconstructed H(z) and w(z) with 1σ errors for the w = −1/(1+z) evolving model of dark energy. From this figure we can see that the Hubble parameter is reconstructed quite accurately and can successfully be used to differentiate the model from ΛCDM. The equation of state, however, is somewhat noisier. There is also a slight bias in the equation of state at low and high redshifts. Since the w = −1/(1+z) model has an equation of state which is very close to w = −1 at low redshifts, we see that w(z) cannot discriminate ΛCDM from the fiducial model at z ≲ 0.2 at the 1σ confidence level.

Age of the Universe

We may also use this smoothing scheme to calculate other cosmological parameters of interest, such as the age of the universe at a redshift z:

\[
t(z) \;=\; H_0^{-1}\int_z^{\infty}\frac{dz'}{(1+z')\,h(z')}\,,
\qquad (8)
\]

where h(z) = H(z)/H0.

Figure 2. The smoothing scheme of equation (2) is used to determine the look-back time of the universe, T(z) = t(0) − t(z), from 1000 realizations of the SNAP dataset for a w = −1/(1+z) 'metamorphosis' model. The smoothing scale is Δ = 0.24. The solid lines show the mean look-back time (plotted as H0 T against z) and the 1σ limits around it. The look-back time for the fiducial model matches exactly with the mean for the smoothing scheme. The dotted line shows the ΛCDM model.

Figure 2 shows the reconstructed T(z) with 1σ errors for the w = −1/(1+z) 'metamorphosis' model using the SNAP distribution. For this model the current age of the universe is about 13 Gyrs and the look-back time at z ≃ 1.7 is about 9 Gyrs for a Hubble parameter of H0 = 70 km/s/Mpc. We see that the look-back time is reconstructed extremely accurately. Using this method we may predict this parameter with a high degree of success and distinguish between the fiducial look-back time and that for ΛCDM even at the 10σ confidence level. Indeed, any cosmological parameter which can be obtained by integrating the Hubble parameter will be reconstructed without problem, since integration involves a further smoothing of the results.

Looking at these results, we draw the conclusion that the method of smoothing supernova data can be expected to work quite well for future SNAP data as far as the Hubble parameter is concerned. Using this method, we may reconstruct the Hubble parameter, and therefore the expansion history of the universe, accurately. We find that the method is very efficient in reproducing H(z) to an accuracy of ≲ 2% within the redshift interval 0 < z < 1, and to ≲ 4% at z ≃ 1.7, as demonstrated in figure 1. Furthermore, using the Hubble parameter, one may expect to discriminate between different families of models, such as the metamorphosis model w = −1/(1+z) and ΛCDM. This method also reproduces very accurately the look-back time for a given model, as seen in figure 2. It reconstructs the look-back time to an accuracy of ≲ 0.2% at z ≃ 1.7.
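The age and look-back time follow from the reconstructed Hubble parameter by a single further integration, which is why these quantities come out so smooth. A short sketch of equation (8) is given below; `h_of_z` is assumed to be the reconstructed H(z)/H0 (for example an interpolating function), the upper limit of the age integral is truncated at a large redshift, and the constant simply converts 1/H0 into Gyr.

```python
from scipy import integrate

H0 = 70.0                      # km/s/Mpc, as assumed in the text
HUBBLE_TIME_GYR = 977.8 / H0   # 1/H0 expressed in Gyr

def age(z, h_of_z, z_max=1000.0):
    """t(z) = H0^{-1} * integral_z^inf dz' / [(1+z') h(z')], truncated at z_max."""
    integrand = lambda zp: 1.0 / ((1.0 + zp) * h_of_z(zp))
    return HUBBLE_TIME_GYR * integrate.quad(integrand, z, z_max, limit=200)[0]

def lookback_time(z, h_of_z):
    """T(z) = t(0) - t(z); only redshifts below z enter the integral."""
    integrand = lambda zp: 1.0 / ((1.0 + zp) * h_of_z(zp))
    return HUBBLE_TIME_GYR * integrate.quad(integrand, 0.0, z)[0]
```

Note that T(z) involves only redshifts below z, so it can be evaluated directly from the reconstructed h(z) without extrapolating beyond the data, whereas the absolute age requires an assumption about the high-redshift behaviour.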
4 REDUCING NOISE THROUGH DOUBLE SMOOTHING

As we saw in the preceding section, the method of smoothing supernova data to extract information on cosmological parameters works very well if we employ the first derivative of the data to reconstruct the Hubble parameter. It also works reasonably well for the second derivative, which is used to determine w(z), but the errors on w(z) are somewhat large. In this section, we examine a possible way in which the equation of state may be extracted from the data to give slightly better results.

The noise in each parameter translates into larger noise levels on its successive derivatives. We have seen earlier that, using the smoothing scheme (2), one can obtain H(z) from the smoothed d_L(z) fairly successfully. However, small noise in H(z) propagates into larger noise in w(z). Therefore, it is logical to assume that if H(z) were smoother, the resultant w(z) might also have smaller errors. So, we attempt to smooth H(z) a second time after obtaining it from d_L(z).

Figure 3. The double smoothing scheme of equations (2) and (9) has been used to obtain H(z) and w(z) from 1000 realizations of the SNAP dataset. The smoothing scale is Δ = 0.24. The dashed line in each panel represents the fiducial w = −1/(1+z) 'metamorphosis' model, while the solid lines represent the mean and the 1σ limits around it. The dotted line in both panels is ΛCDM. In the left panel, H(z) for the fiducial model matches exactly with the mean for the smoothing scheme.

Figure 4. The double smoothing scheme of equations (2) and (9) has been used to obtain H(z) and w(z) from 1000 realizations of the SNAP dataset. The smoothing scale is Δ = 0.24. The dashed line in each panel represents the fiducial ΛCDM model with w = −1, while the solid lines represent the mean and the 1σ limits around it. In the left panel, H(z) for the fiducial model matches exactly with the mean for the smoothing scheme.

The procedure in this method is as follows: first, we smooth the noisy data ln d_L(z) to obtain ln d_L(z)^s using equation (2). We differentiate this to find H(z)^s using equation (3). We then further smooth this Hubble parameter by applying the same smoothing scheme at the new redshifts,

\[
H(z,\Delta)^{s2} \;=\; H(z)^{g} + N(z)\sum_i \left[H(z_i)^{s}-H(z_i)^{g}\right]
\exp\!\left(-\frac{\ln^2\!\left[\frac{1+z_i}{1+z}\right]}{2\Delta^2}\right),
\qquad (9)
\]

\[
N(z)^{-1} \;=\; \sum_i \exp\!\left(-\frac{\ln^2\!\left[\frac{1+z_i}{1+z}\right]}{2\Delta^2}\right).
\]

We then use this H(z,Δ)^{s2} to obtain w(z) using equation (4). This has the advantage of making w(z) less noisy than before, while using the same number of parameters. However, repeated smoothing can also result in the loss of information.
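A minimal sketch of the second smoothing pass of equation (9) is given below; the once-smoothed Hubble values H^s at the redshifts z_i and a callable guess model for H are assumed as inputs, and the names are illustrative.

```python
import numpy as np

def double_smooth_hubble(z_eval, z_i, h_s, h_guess, delta=0.24):
    """Second smoothing pass on the Hubble parameter (equation 9).

    z_i, h_s : redshifts and Hubble values from the first smoothing step
    h_guess  : callable guess model for H(z)"""
    g_i = np.array([h_guess(zi) for zi in z_i])      # guess at the data redshifts
    h_s2 = np.zeros(len(z_eval))
    for j, z in enumerate(z_eval):
        kernel = np.exp(-np.log((1.0 + z_i) / (1.0 + z)) ** 2 / (2.0 * delta ** 2))
        norm = 1.0 / np.sum(kernel)                  # normalisation N(z)
        h_s2[j] = h_guess(z) + norm * np.sum(kernel * (h_s - g_i))
    return h_s2
```

The doubly smoothed H(z,Δ)^{s2} is then inserted into equation (4) to obtain the less noisy w(z).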
(z ).Thus,errors on the Hubble parameter decrease slightly and errors on w (z )also become somewhat smaller.We now explore this scheme further for other models of dark energy.We first consider a w =−1ΛCDM model.In figure 4,we show the results for this model.We find that the Hubble parameter accurately reconstructed and even w is well reconstructed,with a little bias at high redshift.The next model we reconstruct is a w =−0.5quiessence model.The results for double smoothing are shown in fig 5.There is a little bias for this model at the low redshifts,although it is still well within the error bars.We note that in all three cases,a slight bias is notice-able at low or high redshifts.This is primarily due to edge effects–since at low (high)redshift,any particular point will have less (more)number of supernovae to the left than to the right.Even by estimating the guess model through an iterative process,it is difficult to completely get rid of this effect.In order to get rid of this effect,we would require to use much larger number of iterations for the guess model,but this would result in very large errors on the parameters.However,this bias is so small as to be negligible and cannot affect the results in any way.Looking at these three figures,we can draw the follow-ing conclusions.The Hubble parameter is quite well recon-structed by the method of double smoothing in all three cases while the errors on the equation of state also decrease.At low and high redshifts,a very slight bias persists.Despite this,the equation of state is reconstructed quite accurately.Also,since the average error in w (z )is somewhat less than that in the single smoothing scheme (figure 1),the equation of state may be used with better success in discriminating different models of dark energy using the double smoothing procedure.5THE w -PROBEIn this section we explore the possibility of extracting infor-mation about the equation of state from the reconstructed Hubble parameter by considering a weighted average of the equation of state,which we call the w -probe .An important advantage of this approach is that there is no need to go to the second derivative of the luminosity distance for in-formation on the equation of state.Instead,we consider the weighted average of the equation of state (Alam et al.2004)1+¯w =11+z,(10)which can be directly expressed in terms of the differ-ence in dark energy density ˜ρDE =ρDE /ρ0c (where ρ0c =3H 20/8πG )over a range of redshift as 1+¯w (z 1,z 2)=1δln(1+z )=1H 2(z 2)−Ω0m (1+z 2)3ln1+z 1。
