Mesh Partitioning Approach to Energy Efficient Data Layout
A Fast Hierarchical Multi-Objective Mapping Approach for Mesh-Based Networks-on-Chip

Acta Scientiarum Naturalium Universitatis Pekinensis, Vol. 44, No. 5 (Sept. 2008). Supported by the National High-Tech Research and Development Program of China (2005AA111010). Received 2007-08-23; revised 2008-04-18.

A Fast Hierarchical Multi-Objective Mapping Approach for Mesh-Based Networks-on-Chip

LIN Hua, ZHANG Liang, TONG Dong, LI Xianfeng, CHENG Xu
Microprocessor Research and Development Center, School of Electronics Engineering and Computer Science, Peking University, Beijing 100871; corresponding author, e-mail: linhua86@…

Abstract: To obtain near-optimal solutions for NoC topological mapping, the authors propose a fast hierarchical multi-objective mapping approach (HMMap) for mesh-based NoC. Based on partitioning and multi-objective heuristic techniques, HMMap automatically maps large numbers of IP cores onto the NoC architecture and makes good tradeoffs between communication energy and latency. Experimental results show that the proposed approach achieves shorter execution time and lower energy and latency than other approaches, and its optimization effect becomes more pronounced as NoC size grows.

Key words: Systems-on-Chip; Networks-on-Chip; topological mapping; multi-objective optimization
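As a toy illustration of the cost such a mapper trades off, the sketch below scores a placement of cores on a mesh with a standard hop-count communication-energy model (this is not the paper's exact cost function; all names and numbers are illustrative):

```python
# Toy mesh-NoC mapping cost: energy ~ traffic volume x Manhattan hop count,
# the usual first-order model for XY routing on a 2D mesh.
def mapping_cost(traffic, core_xy):
    """traffic: dict {(src, dst): volume}; core_xy: dict {core: (x, y)}."""
    energy = 0.0
    for (s, d), vol in traffic.items():
        xs, ys = core_xy[s]
        xd, yd = core_xy[d]
        hops = abs(xs - xd) + abs(ys - yd)  # XY routing on a mesh
        energy += vol * hops
    return energy

traffic = {("A", "B"): 10.0, ("B", "C"): 4.0, ("A", "C"): 1.0}
placement = {"A": (0, 0), "B": (0, 1), "C": (1, 1)}
print(mapping_cost(traffic, placement))  # 10*1 + 4*1 + 1*2 = 16.0
```

A mapper like the one the abstract describes searches over such placements, here with hierarchical partitioning to keep heavily communicating cores in the same mesh region.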
Cluster Analysis

Cluster analysis refers to grouping physical or abstract objects into classes made up of similar objects; it is an important human activity. The goal of cluster analysis is to classify data on the basis of similarity. Clustering draws on many fields, including mathematics, computer science, statistics, biology and economics, and many clustering techniques have been developed for different applications. These techniques are used to describe data, measure the similarity between data sources, and classify data sources into clusters.

Concept

The difference between clustering and classification is that the classes sought by clustering are not known in advance. Clustering partitions data into classes or clusters such that objects within the same cluster are highly similar while objects in different clusters are highly dissimilar. From a statistical point of view, cluster analysis is a way of simplifying data through data modeling. Traditional statistical clustering methods include hierarchical (systematic) clustering, decomposition methods, incremental methods, dynamic clustering, ordered-sample clustering, overlapping clustering and fuzzy clustering. Clustering tools such as k-means and k-medoids have long been included in well-known statistical packages such as SPSS and SAS. From a machine-learning point of view, clusters correspond to hidden patterns, and clustering is an unsupervised learning process that searches for them. Unlike classification, unsupervised learning does not rely on predefined classes or class-labeled training instances: clustering algorithms must assign the labels automatically, whereas in classification learning the training instances carry class tags. Clustering is learning by observation rather than learning from examples. From a practical point of view, cluster analysis is one of the main tasks of data mining. It can be used as a stand-alone tool to examine the distribution of data, to observe the characteristics of each cluster, and to focus further analysis on particular clusters; it can also serve as a preprocessing step for other algorithms (such as classification and qualitative induction).

Main applications

In business, cluster analysis is used to identify distinct customer groups and to characterize them through their purchasing patterns; it is an effective tool for market segmentation.
It can also be used to study consumer behavior, to find new potential markets, to select test markets, and as a preprocessing step for multivariate analysis.

In biology, cluster analysis is used to classify plants, animals and genes, giving insight into the inherent structure of populations.

In geography, clustering helps identify groups of similar regions in databases of Earth observations.

In insurance, cluster analysis identifies groups of motor-insurance policyholders with high average claim costs, and identifies property groups in a city by house type, value and location.

In Internet applications, cluster analysis is used to categorize documents on the Web and organize information.

In e-commerce, clustering is an important part of Web data mining: by clustering customers with similar browsing behavior and analyzing their common characteristics, it helps e-commerce operators understand their customers better and provide them with more suitable services.

Main steps

1. Data preprocessing.
2. Defining a distance function to measure similarity between data points.
3. Clustering or grouping.
4. Evaluating the output.

Data preprocessing includes selecting the number, type and scale of features. It relies on feature selection and feature extraction: feature selection picks out the important features, while feature extraction transforms the inputs into new features. Both are often used to obtain an appropriate feature set and avoid the "curse of dimensionality." Preprocessing also includes outlier removal: outliers are data that do not follow the general model of the data, and they often bias clustering results, so they must be eliminated to obtain correct clusters.

Since classes are defined on the basis of similarity, the measure of similarity between data points in the same feature space is essential to the clustering step. Because data types and feature scales vary, the distance measure must be chosen carefully, and it often depends on the application. Differences between objects are usually evaluated by a distance metric defined on the feature space, and many distances are used in different fields: a simple measure such as the Euclidean distance is often used to reflect dissimilarity, while similarity measures such as PMC and SMC are used to characterize the conceptual similarity of data; in image clustering, sub-image error correction can be used to measure the similarity of two patterns (a short sketch of two such measures follows below).

Dividing the data objects into classes is the central step, and different methods divide the data differently; partitioning methods and hierarchical methods are the two main families in cluster analysis. Partitioning methods start from an initial partition and optimize a clustering criterion. They come in two main flavors: crisp clustering, where each data point belongs to exactly one class, and fuzzy clustering, where each data point may belong to any class with some degree of membership. Hierarchical clustering produces a series of nested partitions based on a similarity measure, merging or splitting clusters according to their similarity or separability. Other clustering methods include density-based, model-based and grid-based clustering.
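As a concrete illustration of step 2, here is a minimal sketch of two of the measures named above, the Euclidean distance and the simple matching coefficient (SMC); the binary-vector encoding in the second example is an assumption made for illustration:

```python
import numpy as np

def euclidean(a, b):
    """Straight-line distance, the most common dissimilarity measure."""
    return np.linalg.norm(np.asarray(a) - np.asarray(b))

def simple_matching(a, b):
    """Simple matching coefficient (SMC) for binary feature vectors:
    the fraction of attributes on which the two objects agree."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return np.mean(a == b)

print(euclidean([1.0, 2.0], [4.0, 6.0]))            # 5.0
print(simple_matching([1, 0, 1, 1], [1, 1, 1, 0]))  # 0.5
```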
Evaluating the quality of clustering results is another important stage. Clustering is an unsupervised procedure, and there is no fully objective criterion for judging its results, so validity indices are used instead. These indices are generally based on geometric properties of the clusters, namely within-cluster cohesion and between-cluster separation. They are used both to assess the quality of a clustering and, often, to determine the number of classes: the optimal value of a validity index is expected to occur at the true number of classes, so a common way to decide the number of classes is to select the value that optimizes a given index. Many existing indices give very good results on well-separated data sets but usually fail on complex ones, for example on sets with overlapping classes.

Cluster analysis algorithms

Cluster analysis is an active research field in data mining, and many clustering algorithms have been proposed. Traditional algorithms fall into five categories: partitioning methods, hierarchical methods, density-based methods, grid-based methods and model-based methods.

1. Partitioning methods (PAM: PArtitioning Method) first create K partitions, where K is the number of partitions to create, and then use an iterative relocation technique that moves objects from one partition to another to improve the classification. Typical partitioning methods include k-means, k-medoids, CLARA (Clustering LARge Applications), CLARANS (Clustering Large Applications based upon RANdomized Search) and FCM.

2. Hierarchical methods create a hierarchical decomposition of the given data set, working either top-down (divisive) or bottom-up (agglomerative). To make up for the irrevocability of splits and merges, hierarchical clustering is often combined with other clustering methods such as iterative relocation. Typical methods include BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies), which first builds a tree structure to partition the objects and then uses other methods to refine the clusters; CURE (Clustering Using REpresentatives), which represents each cluster by a fixed number of representative objects and then shrinks them toward the cluster center by a specified fraction; ROCK, which merges clusters based on the links between them; and CHAMELEON, which builds a dynamic model during hierarchical clustering.

3. Density-based methods cluster objects according to density, growing clusters from the density of each object's surroundings (as in DBSCAN). Typical methods include DBSCAN (Density-Based Spatial Clustering of Applications with Noise), which clusters by growing regions of sufficiently high density, can find clusters of arbitrary shape in spatial databases with noise, and defines a cluster as a set of density-connected points; and OPTICS (Ordering Points To Identify the Clustering Structure), which does not explicitly generate clusters but computes an augmented cluster ordering for automatic, interactive cluster analysis.
4. Grid-based methods first quantize the object space into a finite number of cells, forming a grid structure, and then perform clustering on the grid. STING (STatistical INformation Grid) is a grid-based method that uses statistical information stored in the grid cells; CLIQUE (Clustering In QUEst) and WaveCluster combine grid-based and density-based ideas.

5. Model-based methods hypothesize a model for each cluster and find the data that best fit it. Typical model-based methods are statistical: COBWEB is a simple and widely used incremental conceptual clustering method whose input objects are described by symbolic attribute-value pairs and which builds a hierarchical clustering in the form of a classification tree. CLASSIT is a version of COBWEB that handles continuous attributes incrementally: each node stores, for each attribute, a continuous normal distribution (mean and variance), and instead of COBWEB's counts over discrete attribute values it uses an improved category-utility measure that integrates over the continuous attributes. CLASSIT, however, shares COBWEB's limitations, so neither is suitable for clustering large databases.

Traditional clustering algorithms have successfully solved clustering for low-dimensional data, but because real-world data are complex, existing algorithms often fail, especially on high-dimensional data and large data sets. Traditional methods face two main problems when clustering high-dimensional data sets. First, the presence of many irrelevant attributes makes the probability that clusters exist in all dimensions almost zero. Second, data that are sparse in high-dimensional space project onto low-dimensional subspaces unevenly, and near-equal distances between points become a common phenomenon; yet traditional methods build clusters from distances, so distance-based clusters cannot be constructed in high-dimensional space. High-dimensional clustering has therefore become an important, and difficult, research direction within cluster analysis. As technology has made data collection ever easier, databases have grown larger and more complex: trade transaction data, Web documents of all kinds, and gene-expression data routinely reach hundreds or thousands of dimensions or more. Because of this "dimensionality effect," many clustering methods that perform well in low-dimensional spaces cannot obtain good results in high-dimensional ones. Clustering of high-dimensional data is a very active and challenging field, with current applications in market analysis, information security, finance, entertainment, counter-terrorism and beyond.
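As a minimal illustration of a partitioning method and a density-based method side by side, here is a sketch using scikit-learn on synthetic data (the parameter values are arbitrary):

```python
import numpy as np
from sklearn.cluster import KMeans, DBSCAN
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# Partitioning method: k-means requires the number of clusters up front.
km_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Density-based method: DBSCAN grows clusters from dense regions, can find
# arbitrarily shaped clusters, and marks outliers with the label -1.
db_labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(X)

print(np.unique(km_labels), np.unique(db_labels))
```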
Geant4 Simulation of Displacement Damage Induced by Proton Irradiation in InP

Journal of Terahertz Science and Electronic Information Technology, Vol. 19, No. 1, Feb. 2021. Article ID: 2095-4980(2021)01-0176-05. doi: 10.11805/TKYDA2019383

Geant4 simulation of displacement damage induced by proton irradiation in InP

BAI Yurong, HE Chaohui, XIE Fei, LI Yonghong, ZANG Hang (School of Nuclear Science and Technology, Xi'an Jiaotong University, Xi'an Shaanxi 710049, China)

Abstract: As an important second-generation semiconductor material, indium phosphide (InP) has a wide band gap, fast electron drift, and better radiation resistance than Si and GaAs, making it a candidate material for electronic devices on spacecraft. As semiconductor devices shrink to the nanometer scale, displacement damage caused by low-energy proton irradiation in the space environment has become one of the main factors affecting device electrical performance. In this paper, Geant4 simulations are used to obtain the species and proportions of primary knock-on atoms (PKA) produced by low-energy protons incident on InP, together with the depth distribution of the non-ionizing energy loss (NIEL) for protons of different energies. The results show that the probabilities of proton capture and nuclear reactions increase with proton energy, which reduces the proportion of In and P recoil atoms produced by elastic collisions and increases the proportion of other recoil atoms. The NIEL peak decreases and moves forward in depth as proton energy increases, meaning that the region of severe displacement damage gradually shifts from the end of the material toward its surface.

Keywords: Non-Ionizing Energy Loss; Geant4; space proton irradiation damage; InP

Research on displacement damage in semiconductor devices began in the 1970s, based mainly on ground irradiation experiments. M. Yamaguchi, R. J. Walters and others [1-4] performed a series of particle-beam irradiation experiments on III-V compound semiconductor materials such as InP, GaAs and GaN, establishing how displacement damage affects the electrical performance of III-V semiconductor devices.
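The NIEL depth profile described in the abstract is essentially a binned tally over simulation steps. Below is a minimal sketch of such a tally, assuming per-step (depth, non-ionizing energy deposit) records have already been extracted from a Geant4 run; the record format and the numbers are hypothetical:

```python
import numpy as np

def niel_depth_profile(steps, thickness_um, nbins=100):
    """Bin per-step non-ionizing energy deposits into a depth profile.

    steps: iterable of (depth_um, niel_keV) pairs, e.g. parsed from a
           Geant4 stepping-action output file (hypothetical format).
    Returns bin centers (um) and summed NIEL per bin (keV).
    """
    depths, niel = np.asarray(list(steps)).T
    hist, edges = np.histogram(depths, bins=nbins,
                               range=(0.0, thickness_um), weights=niel)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist

# Toy usage with fabricated numbers (illustration only):
rng = np.random.default_rng(0)
fake_steps = zip(rng.uniform(0, 50, 10_000), rng.exponential(0.1, 10_000))
z, profile = niel_depth_profile(fake_steps, thickness_um=50.0)
print(z[profile.argmax()], "um: depth of peak NIEL")
```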
Summary of Abaqus Meshing Tips

Partitioning 3D geometry for structured meshing in Abaqus

Figure 1: 3D geometries on which a structured mesh can be generated directly. Figure 2: 3D geometries on which it cannot: the geometry contains a hole; an arc spans 90° or more; some face cannot support a 2D structured mesh; or more than three edges share a vertex. Figure 3: partitioning examples.

Partitioning fixes for geometry that cannot be structured-meshed:
1. The geometry contains a hole: partition the hole into half or quarter circles to remove it.
2. An arc spans 90° or more: split it, e.g. divide a 180° arc into two 90° arcs.
3. Some face cannot support a 2D structured mesh: a semicircular face, for instance, has only two edges, while a face needs at least three bounding edges to take a 2D structured mesh, so split the semicircle in half so that each face has three edges.
4. A vertex may be shared by no more than three edges.
5. A region must have at least four faces, as in a tetrahedron.
6. If the region is to carry the topology of a hexahedral block, it may have only six sides. If it has more than six faces, virtual topology can be used, as shown in Figure 4, to merge faces until only six remain. (Figure 4: merging faces with virtual topology until only six sides remain.)
7. The angle between adjacent faces should be close to 90°; if it is 150° or more, partition the region.
8. Each side of the region must satisfy the following requirements:
8.1 If the region is not a cube, each side must be a single whole face; a side cannot consist of several face patches.
8.2 If the region is a cube, a side can be a connected set of faces that lie on the same geometric surface. In addition, the pattern of the faces must allow rows and columns of hexahedral elements to be created in a regular grid pattern along that entire side when the cube is meshed. Figure 5 shows two acceptable face patterns and the resulting regular grids of elements created by meshing the cubes with the structured meshing technique. Figure 6 shows unacceptable face patterns: the pattern on the left is unacceptable for structured meshing because each face has only three sides; each face in the pattern on the right has four sides, but the pattern does not allow a regular grid of elements on the partitioned side of the cube. (Figure 7: a regular grid of elements cannot be created.)

Partitioning 3D geometry for swept meshing in Abaqus

Fixes for geometry that cannot be swept-meshed:
1. The source and target faces must each be a single whole face, or a combination of quadrilateral face patches whose pattern can form a regular grid.
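In practice these partitions and mesh controls can be scripted. The sketch below assumes the standard Abaqus/CAE Python scripting interface; the model and part names, and the choice of partition plane, are placeholders for an actual model database:

```python
# Sketch: partitioning a cell so it satisfies the structured-meshing rules
# above, then requesting a structured hex mesh. Names are placeholders.
from abaqus import mdb
from abaqusConstants import STRUCTURED, STANDARD, C3D8R
import mesh

p = mdb.models['Model-1'].parts['Part-1']

# Partition all cells with a plane through a chosen vertex, normal to a
# chosen edge, so each resulting region has no hole and no arc >= 90 deg.
p.PartitionCellByPlanePointNormal(cells=p.cells[:],
                                  point=p.vertices[0],
                                  normal=p.edges[0])

# Ask for structured hex meshing on all resulting cells and generate it.
p.setMeshControls(regions=p.cells, technique=STRUCTURED)
elemType = mesh.ElemType(elemCode=C3D8R, elemLibrary=STANDARD)
p.setElementType(regions=(p.cells,), elemTypes=(elemType,))
p.seedPart(size=2.0)
p.generateMesh()
```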
Calculating Adsorption Energies on Surfaces with the CASTEP Module

The z coordinate should be 1.39 Å, which is the interlayer spacing. Note: for an fcc(110) system, d0 can be obtained from the formula below.
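The formula itself did not survive extraction; for an fcc(110) surface the interlayer spacing is the standard relation

```latex
d_0 = \frac{a}{2\sqrt{2}}
```

so that with the optimized Pd lattice constant from the preceding bulk step (roughly a ≈ 3.93 Å, an assumed value here) one obtains d0 ≈ 1.39 Å.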
Before relaxing the surface, if only the surface is to be relaxed, the interior Pd atoms must be constrained.

Excluding the topmost layer of Pd atoms, hold SHIFT and select all the remaining Pd atoms. From the menu bar choose Modify | Constraints and check Fix fractional position. Close the dialog.

Note that the surface is now perpendicular to the z axis, following the usual convention.

Right-click in the 3D Viewer, choose Display Style, and select Line; the O-ABC arrangement can then be seen clearly in the structure. Note the relative orientation, then restore the display to Ball and Stick.

Rotate the lattice so that the z axis is perpendicular to the screen. Open the Display Style dialog, select the Lattice tab, and change Display style from Default to Original. Close the dialog.

Select the Properties tab and check Density of states. Change the k-point set to Gamma and check the Calculate PDOS option. Press the Run button.

When the following dialog appears, choose No.

The following message indicates that the CO optimization succeeded. Inspect the atomic coordinates of CO; they differ slightly from the experimental values.

From the menu bar choose File | Save Project, then Window | Close All. We can now proceed to the next step. 4. Building the Pd(110) surface. Below we will use the optimized Pd structure obtained from the Pd bulk calculation. Open Pd.xsd in the Pd bulk/Pd CASTEP GeomOpt folder.

An empty 3D model document is now displayed. We can use the Build Crystal tool to create an empty unit cell and then add the CO molecule to it.
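Once the Pd(110) slab, the isolated CO molecule, and the combined CO/Pd(110) system have each been optimized, the adsorption energy this tutorial is after follows from the standard three-energy difference; a minimal sketch (the numerical energies are hypothetical placeholders, not CASTEP output):

```python
def adsorption_energy(e_slab_adsorbate, e_slab, e_molecule):
    """E_ads = E(slab+CO) - E(slab) - E(CO), all in eV.
    Negative values mean adsorption is energetically favorable."""
    return e_slab_adsorbate - e_slab - e_molecule

# Hypothetical total energies (eV), for illustration only:
print(adsorption_energy(-12745.3, -12158.6, -584.9))  # ~ -1.8 eV
```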
ANSYS Meshing Techniques

As is well known, meshing is one of the most critical steps in finite element analysis: the quality of the mesh directly affects both the accuracy and the speed of the computation. Mesh generation in ANSYS has three steps: defining element attributes (including real constants), setting mesh attributes on the geometric model, and generating the mesh. Here we address only a few of the problems that arise in this procedure, especially those associated with complex models.

1. Free meshing

Free meshing is the most automated meshing technique. It automatically generates triangular or quadrilateral meshes on surfaces (planar or curved) and tetrahedral meshes on volumes. Usually the smart sizing facility (the SMARTSIZE command) is used to control mesh size and density distribution automatically; mesh sizes can also be set manually (the AESIZE, LESIZE, KESIZE and ESIZE family of commands) and the meshing algorithm selected (the MOPT command). For complex geometric models this method saves time and effort, but its drawback is that the element count is usually large, which reduces computational efficiency. Because this method can generate only tetrahedral elements on complex 3D models, quadratic tetrahedral elements (such as element 92) are recommended for better accuracy. If a hexahedral element type is selected instead, free meshing automatically degenerates the hexahedra into tetrahedra. It is therefore best not to use linear hexahedral elements (those without midside nodes, such as element 45), because the linear tetrahedra they degenerate into are overly stiff and give poor accuracy. Quadratic hexahedral elements (such as element 95) may be used, because their degenerate form keeps the same number of nodes as the hexahedral prototype, with several nodes merely coincident; the TCHG command can then convert such degenerate tetrahedra into non-degenerate tetrahedral elements, reducing the node count per element and improving solution efficiency. In some cases the degenerate form of hexahedral elements must be used for free meshing, for example in hybrid meshes (described below), where only hexahedral element types can form pyramid transition elements. For computational fluid dynamics and for electromagnetic analyses that must resolve skin effects, the layered-mesh facility (controlled by the LAYER1 and LAYER2 fields of the LESIZE command) is very useful in free meshing.

2. Mapped meshing

Mapped meshing is a structured meshing method for structured models. In its original form the rules are: a surface must be a quadrilateral, opposite edges must carry the same number of divisions, and all generated elements are quadrilaterals; a volume must be a hexahedron with matching divisions on corresponding lines and faces, and all generated elements are hexahedra. In ANSYS these conditions have been relaxed considerably:

1) A surface may be a triangle, a quadrilateral, or any other polygon. For polygons with more than four edges, the LCCAT command must be used to concatenate some lines into one edge so that the mesh still sees a triangle or quadrilateral; alternatively, the AMAP command defines 3 or 4 vertices (the program automatically concatenates all the lines between adjacent chosen vertices) to map the surface.
2) The numbers of divisions on opposite edges may differ, subject to certain restrictions.
3) A triangular mapped mesh can be formed on a surface.
4) A volume may be a tetrahedron, a five-sided body, a hexahedron, or another polyhedron. For polyhedra with more than six faces, the ACCAT command must be used to concatenate some faces into one so that the volume is effectively four-, five- or six-sided.
5) The numbers of divisions on corresponding lines and faces of a volume may differ, again with some restrictions.

For complex 3D geometric models, the usual approach is to use ANSYS Boolean operations to cut them into a series of four-, five- and six-sided bodies and then map-mesh the pieces. Pure mapped meshing of this kind is of course laborious, requiring considerable time and effort. Triangular mapped meshes can often serve free meshing, so that a freely meshed volume still satisfies specific requirements, for example that a narrow body must carry a certain number of elements across its short direction, or that nodes at certain positions must lie on a straight line. Meshing the surfaces of a volume before meshing the volume itself is a good way to control many complex models, but do not forget to clear the surface mesh at the end (the auxiliary element type MESH200 can also be used for such scaffold meshes and need not be cleared).

3. Drag and sweep meshing

For complex 3D solids generated from surfaces by dragging, rotating, offsetting or extruding (the VDRAG, VROTAT, VOFFST, VEXT and related commands), first mesh the source surface with shell (or MESH200) elements; the 3D solid mesh is then formed automatically as the solid is generated. For an existing complex 3D solid whose topology is consistent along one direction, the sweep facility (the VSWEEP command, manual or automatic) can be used to mesh it. Both approaches yield almost entirely hexahedral elements. Sweeping is usually an excellent option for complex geometry: after some simple partitioning, a regular hexahedral mesh forms automatically, with more advantages and flexibility than mapped meshing.

4. Hybrid meshing

Hybrid meshing applies free, mapped and swept meshing to different parts of a geometric model according to the character of each part, to obtain the finite element model with the best overall trade-off. The mix should be chosen by weighing computational accuracy, computation time and modeling effort. Usually, to improve accuracy and reduce computation time, first mesh the regions suitable for sweeping or mapping with hexahedra, which may be linear (no midside nodes) or quadratic (with midside nodes); if no suitable region exists, try to create one by Boolean partitioning (especially around the regions or sites of interest). Then free-mesh the remaining regions with tetrahedra, using a hexahedral element type with midside nodes (which automatically degenerates into the appropriate tetrahedral form); on the interfaces with the swept or mapped regions, pyramid transition elements form automatically (the hexahedral element degenerates into a pyramid without losing its midside nodes). The pyramid transition elements in ANSYS are quite flexible: if an adjacent hexahedral element contributes a quadrilateral face whose edges lack midside nodes, the pyramid's midside nodes are cancelled automatically to keep the mesh compatible. The degenerate tetrahedral elements should then be converted into non-degenerate tetrahedra with the TCHG command described earlier, improving solution efficiency. If the accuracy requirements of the model, or of the freely meshed region, are modest, the free region may instead be meshed with a hexahedral element type without midside nodes (degenerating automatically into linear tetrahedra); in that case no pyramid transition elements appear between the hexahedral and tetrahedral partitions, but as long as the hexahedral region also uses elements without midside nodes, all elements are linear and compatibility is still guaranteed.

5. Degree-of-freedom coupling and constraint equations

For some complex geometric models, the constraint-equation and degree-of-freedom coupling facilities of ANSYS make it easier to generate fine meshes and reduce the size of the computation. For example, adjacent bodies can be meshed independently (usually by mapping or sweeping) and then "glued" with the CEINTF command. Because the bodies then have no geometric relationship to one another, their meshes need not match at the interface, so each body can be meshed freely by whatever means is convenient; the "gluing" couples the degrees of freedom through the element shape functions, so displacement continuity at the connection is guaranteed exactly. If the stress at the joint matters, a submodel can be built at that location as described below. Similarly, for cyclically symmetric models (rotating machinery and the like), only one sector need be modeled; the CPCYC command automatically establishes the degree-of-freedom coupling between corresponding nodes on the two cut faces of the sector (and the MSHCOPY command conveniently generates matching meshes on those two faces).

6. Submodeling and related means

The submodel (cut-boundary condition) technique analyzes the whole first and the local region afterwards; for complex geometric models where accurate results are needed only locally, it minimizes the effort required to obtain them. The process is: first build an overall analysis model that ignores a series of small features such as pilot holes, fillets and grooves (by Saint-Venant's principle, such small local features do not significantly change the overall results of the analysis model).
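The command families named above map one-to-one onto the PyMAPDL scripting layer. A minimal free-meshing sketch, assuming the ansys-mapdl-core package (each method corresponds to the APDL command of the same name; parameter values are arbitrary):

```python
from ansys.mapdl.core import launch_mapdl

mapdl = launch_mapdl()
mapdl.prep7()

mapdl.et(1, "SOLID187")           # quadratic tetrahedron for free meshing
mapdl.block(0, 10, 0, 10, 0, 10)  # a simple volume to mesh

# Free meshing with smart sizing (coarseness level 1-10).
mapdl.smrtsize(4)
mapdl.mshape(1, "3D")             # 1 = tetrahedral element shape
mapdl.mshkey(0)                   # 0 = free meshing
mapdl.vmesh("ALL")

# A sweepable volume would instead use hex elements and VSWEEP, e.g.:
# mapdl.et(2, "SOLID186"); mapdl.vsweep(1)

mapdl.exit()
```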
First-Principles Study of Oxygen Diffusion in α-Titanium Crystals

First-principles study of oxygen diffusion in α-titanium crystals
YANG Liang; WANG Caizhuang; LIN Shiwei; CAO Yang
Acta Physica Sinica, 2017, 66(11): 251-260 (10 pages).
Affiliations: School of Chemical Engineering, Nanjing University of Science and Technology, Nanjing 210094; School of Materials Science and Engineering, Hainan University, Haikou 570228; Division of Materials Sciences and Engineering, Ames Laboratory, Ames 50011, USA.

Abstract: How impurity atoms move through a crystal is a fundamental and recurring issue in condensed matter physics and materials science. Diffusion of oxygen (O) in titanium (Ti) affects the formation of titanium oxides and the design of Ti-based alloys; the kinetics of the initial growth of titania nanotubes by anodization of a titanium substrate also involves oxygen diffusion. Understanding the migration mechanism of oxygen atoms in α-Ti is therefore extremely important for controlling oxygen diffusion in Ti alloys. In this work we show how the diffusion coefficient can be predicted directly from first-principles calculations without any empirical fitting parameters. Performing density functional theory (DFT) calculations with the Vienna ab initio Simulation Package (VASP), we obtain three locally stable interstitial oxygen sites in the hexagonal close-packed (hcp) lattice of titanium: the octahedral center (OC) site, the hexahedral center (HE) site, and the Ti-Ti bond-center crowdion (CR) site, with interstitial energies of -2.83, -1.61 and -1.48 eV, respectively. The interstitial energies show that oxygen prefers to occupy the octahedral site. Electronic-structure analysis shows that the Ti-O bonds have some covalent character and are strong. Using the three stable O sites from our calculations, we propose seven migration pathways for oxygen diffusion in hcp Ti and quantitatively determine the transition state and the diffusion barrier at the saddle point along the minimum-energy path by the climbing-image nudged elastic band (CI-NEB) method. The microscopic diffusion barriers \Delta E from first principles quantitatively describe the temperature-dependent diffusion coefficient through the Arrhenius formula

D = L^2 \, \nu^* \exp\!\left(-\frac{\Delta E}{k_B T}\right),

where \nu^* is the jump frequency and L is the atomic displacement of each jump. The jump frequency \nu^* is determined from

\nu^* = \frac{\prod_{i=1}^{3N} \nu_i}{\prod_{j=1}^{3N-1} \nu_j'},

where \nu_i and \nu_j' are the vibrational frequencies of the oxygen atom at the initial state and at the transition state, respectively. This leads to a formula for the temperature-dependent diffusion coefficient,

D = L^2 \, \frac{\prod_{i=1}^{3N} \nu_i}{\prod_{j=1}^{3N-1} \nu_j'} \exp\!\left(-\frac{\Delta E}{k_B T}\right),

built entirely from microscopic parameters (\nu_i, \nu_j' and \Delta E) obtained from first-principles calculations, with no fitting parameters. Using this formula together with the calculated vibrational frequencies and diffusion barriers, we compute the diffusion coefficients among the different interstitial sites. The diffusion coefficient from the octahedral center site to the available site nearby is in good agreement with experiment, i.e., the diffusion rate D is 1.0465 × 10⁻⁶ m²·s⁻¹ with \Delta E of 0.5310 eV. The jump from the crowdion site to the octahedral interstitial site prevails over all the other jumps because of its low energy barrier, leading to markedly higher diffusivity values. The diffusion of oxygen atoms is mainly controlled by the jump occurring between the OC and CR sites, resulting in high diffusion anisotropy. This picture of oxygen diffusion behavior in Ti provides useful insight into the kinetics of the initial stage of oxidation of Ti, which is very relevant to many technological applications of Ti-based materials.
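A minimal sketch of the Arrhenius/Vineyard evaluation described in the abstract, assuming the barrier, jump length and mode frequencies have already been extracted from the DFT/CI-NEB output (the paper's 0.5310 eV barrier is used, but the jump length, attempt frequency and temperature below are assumed values for illustration):

```python
import numpy as np

KB_EV = 8.617333e-5  # Boltzmann constant, eV/K

def jump_rate_prefactor(nu_initial, nu_saddle):
    """Vineyard attempt frequency: product of the 3N initial-state modes
    over the 3N-1 real modes at the saddle point (frequencies in Hz)."""
    return np.prod(nu_initial) / np.prod(nu_saddle)

def diffusion_coefficient(L, nu_star, dE, T):
    """D = L^2 * nu* * exp(-dE / kB T); L in meters, dE in eV, T in K."""
    return L**2 * nu_star * np.exp(-dE / (KB_EV * T))

# Illustration with assumed values: a ~2.9 Angstrom jump, a 10 THz attempt
# frequency, and the paper's 0.5310 eV barrier, evaluated at 1000 K.
D = diffusion_coefficient(L=2.9e-10, nu_star=1.0e13, dE=0.5310, T=1000.0)
print(f"D(1000 K) ~ {D:.3e} m^2/s")
```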
A Label Field Fusion Bayesian Model and Its Penalized Maximum Rand Estimator for Image Segmentation

A Label Field Fusion Bayesian Model and Its Penalized Maximum Rand Estimator for Image Segmentation

Max Mignotte

IEEE Transactions on Image Processing, Vol. 19, No. 6, June 2010. Digital Object Identifier 10.1109/TIP.2010.2044965. Manuscript received February 20, 2009; revised February 06, 2010; first published March 11, 2010; current version published May 14, 2010. This work was supported by an NSERC individual research grant. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Peter C. Doerschuk. The author is with the Département d'Informatique et de Recherche Opérationnelle (DIRO), Université de Montréal, Faculté des Arts et des Sciences, Montréal H3C 3J7 QC, Canada (e-mail: mignotte@iro.umontreal.ca).

Abstract—This paper presents a novel segmentation approach based on a Markov random field (MRF) fusion model which aims at combining several segmentation results associated with simpler clustering models in order to achieve a more reliable and accurate segmentation result. The proposed fusion model is derived from the recently introduced probabilistic Rand measure for comparing one segmentation result to one or more manual segmentations of the same image. This nonparametric measure allows us to easily derive an appealing fusion model of label fields, easily expressed as a Gibbs distribution, or as a nonstationary MRF model defined on a complete graph. Concretely, this Gibbs energy model encodes the set of binary constraints, in terms of pairs of pixel labels, provided by each segmentation result to be fused. Combined with a prior distribution, this energy-based Gibbs model also allows for the definition of an interesting penalized maximum probabilistic Rand estimator with which the fusion of simple, quickly estimated segmentation results appears as an interesting alternative to the complex segmentation models existing in the literature. This fusion framework has been successfully applied on the Berkeley image database. The experiments reported in this paper demonstrate that the proposed method is efficient in terms of visual evaluation and quantitative performance measures, and performs well compared to the best existing state-of-the-art segmentation methods recently proposed in the literature.

Index Terms—Bayesian model, Berkeley image database, color textured image segmentation, energy-based model, label field fusion, Markovian (MRF) model, probabilistic Rand index.

I. INTRODUCTION

Image segmentation is a frequent preprocessing step which consists of achieving a compact region-based description of the image scene by decomposing it into spatially coherent regions with similar attributes. This low-level vision task is often the preliminary and also crucial step for many image understanding algorithms and computer vision applications. A number of methods have been proposed and studied in the last decades to solve the difficult problem of textured image segmentation. Among them, we can cite clustering algorithms [1], spatial-based segmentation methods which exploit the connectivity information between neighboring pixels and have led to Markov random field (MRF)-based statistical models [2], mean-shift-based techniques [3], [4], graph-based [5], [6] and variational methods [7], [8], or region-based split-and-merge procedures, sometimes directly expressed by a global energy function to be optimized [9].
Years of research in segmentation have demonstrated that significant improvements in the final segmentation results may be achieved either by using notably more sophisticated feature selection procedures, or more elaborate clustering techniques (sometimes involving a mixture of different or non-Gaussian distributions for the multidimensional texture features [10], [11]), or by taking into account prior distributions on the labels, the region process, or the number of classes [9], [12], [13]. In all cases, these improvements lead to computationally expensive segmentation algorithms and, in the case of energy-based segmentation models, to costly optimization techniques.

The segmentation approach proposed in this paper is conceptually different and explores another strategy initially introduced in [14]. Instead of considering an elaborate and better designed segmentation model of textured natural images, our technique explores the possible alternative of fusing (i.e., efficiently combining) several quickly estimated segmentation maps associated with simpler segmentation models, for a final reliable and accurate segmentation result. These initial segmentations to be fused can be given either by different algorithms or by the same algorithm with different values of its internal parameters, such as several k-means clustering results with different values of k, or several k-means results using different distance metrics, applied on an input image possibly expressed in different color spaces, or by other means.

The fusion model presented in this paper is derived from the recently introduced probabilistic Rand index (PRI) [15], [16], which measures the agreement of one segmentation result with multiple (manually generated) ground-truth segmentations. This measure efficiently takes into account the inherent variation existing across hand-labeled possible segmentations. We will show that this nonparametric measure allows us to derive an appealing fusion model of label fields, easily expressed as a Gibbs distribution, or as a nonstationary MRF model defined on a complete graph. Finally, this fusion model emerges as a classical optimization problem in which the Gibbs energy function related to this model has to be minimized. In other words, analytically expressed in the regularization framework, each quickly estimated segmentation (to be fused) provides a set of constraints in terms of pairs of pixel labels (i.e., binary cliques) that should be equal or not. Our fusion result is found by searching for a segmentation map that minimizes an energy function encoding this precomputed set of binary constraints (thus optimizing the so-called PRI criterion). In our application, this final optimization task is performed by a robust multiresolution coarse-to-fine minimization strategy. This fusion of simple, quickly estimated segmentation results appears as an interesting alternative to the complex, computationally demanding segmentation models existing in the literature. This new strategy of segmentation is validated on the Berkeley natural image database (which also contains, for quantitative evaluations, ground-truth segmentations obtained from human subjects).

Conceptually, our fusion strategy is in the framework of the so-called decision fusion approaches recently proposed in clustering or imagery [17]–[21].
With these methods, a series of energy functions are first minimized before their outputs (i.e., their decisions) are merged. Following this strategy, Fred et al. [17] have explored the idea of evidence accumulation for combining the results of multiple clusterings. Reed et al. have proposed a Gibbs energy-based fusion model that differs from ours in the likelihood and prior energy design, as well as in the final merging procedure (for the fusion of large-scale classified sonar imagery [21]). More precisely, Reed et al. employed a voting-scheme-based likelihood regularized by an isotropic Markov random field prior, used to inpaint regions where the likelihood decision is not available. More generally, the concept of combining classifiers to improve the performance of individual classifiers is known, in the machine learning field, as a committee machine or mixture of experts [22], [23]. In this context, Dietterich [23] has provided an accessible and informal reasoning, from the statistical, computational and representational viewpoints, of why ensembles can improve results. In this recent field of research, two major categories of committee machines are generally found in the literature. Our decision fusion approach is in the category of committee machines that utilize an ensemble of classifiers with a static structure: the responses of several classifiers are combined by means of a mechanism that does not involve the input data (contrary to the dynamic-structure-based mixture of experts). In order to create an efficient ensemble of classifiers, three major categories of methods have been suggested, whose common goal is to promote diversity in order to increase the quality of the final classification result: using different subsets of the input data, using classifiers with very diverse behavior on the input data, or exploiting the diversity of behavior of the input data itself. Conceptually, our ensemble of classifiers is in this third category, since we express the input data in different color spaces, thus encouraging diversity through different properties such as data decorrelation, decoupling effects, perceptually uniform metrics, compaction, and invariance to various features. In this framework, the combination itself can be performed according to several strategies or criteria (e.g., weighted majority vote; probability rules: sum, product, mean, median; a classifier as combiner; etc.), but none, to our knowledge, uses the PRI fusion (PRIF) criterion.

Our segmentation strategy, based on the fusion of quickly estimated segmentation maps, is similar to the one proposed in [14], but the criterion used in this new fusion model is different. In [14], the fusion strategy can be viewed as a two-step hierarchical segmentation procedure in which, in a first step, a set of initial input texton segmentation maps (one per color space) is estimated. In a second step, a final clustering, taking this mixture of textons (expressed in the set of different color spaces) as a discriminant feature descriptor, is performed by a final k-means clustering whose output is the final fused segmentation map. Contrary to the fusion model presented in this paper, this second step (fusion of texton segmentation maps) is thus achieved in the intra-class inertia sense, which is also the squared-error criterion of the k-means algorithm.
Let us add that a conceptually different label field fusion model has also recently been introduced in [24], with the goal of blending a spatial segmentation (region map) and a quickly estimated, to-be-refined application field (e.g., a motion estimation/segmentation field, an occlusion map, etc.). The goal of the fusion procedure explained in [24] is to locally fuse label fields involving labels of two different natures at different levels of abstraction (i.e., pixel-wise and region-wise). More precisely, its goal is to iteratively modify the application field to make its regions fit the color regions of the spatial segmentation, under the assumption that the color segmentation is more detailed than the regions of the application field. In this way, misclassified pixels in the application field (false positives and false negatives) are filtered out and blobby shapes are sharpened, resulting in a more accurate final application label field.

The remainder of this paper is organized as follows. Section II describes the proposed Bayesian fusion model. Section III describes the optimization strategy used to minimize the Gibbs energy field related to this model, and Section IV describes the segmentation model whose outputs will be fused by our model. Finally, Section V presents a set of experimental results and comparisons with existing segmentation techniques.

II. PROPOSED FUSION MODEL

A. Rand Index

The Rand index [25] is a clustering quality metric that measures the agreement of the clustering result with a given ground truth. This nonparametric statistical measure was recently used in image segmentation [16] as a quantitative and perceptually interesting measure to compare an automatic segmentation of an image to a ground-truth segmentation (e.g., a manually hand-segmented image given by an expert) and/or to objectively evaluate the efficiency of several unsupervised segmentation methods.

Let N_s be the number of pixel pairs assigned to the same region (i.e., matched pairs) in both the segmentation to be evaluated S^{test} and the ground-truth segmentation S^{gt}, and let N_d be the number of pixel pairs assigned to different regions (i.e., mismatched pairs) in both S^{test} and S^{gt}. The Rand index is defined as the ratio of N_s + N_d to the total number of pixel pairs, i.e., N(N-1)/2 for an image of N pixels. More formally [16], if l_i^{test} and l_i^{gt} designate the region labels respectively associated with the segmentation maps S^{test} and S^{gt} at pixel location i, and I[\,\cdot\,] is an indicator function, the Rand index is given by the following relation:

R(S^{test}, S^{gt}) = \frac{2}{N(N-1)} \sum_{i<j} \Big( I[l_i^{test} = l_j^{test} \wedge l_i^{gt} = l_j^{gt}] + I[l_i^{test} \neq l_j^{test} \wedge l_i^{gt} \neq l_j^{gt}] \Big)    (1)

which simply computes the proportion (a value ranging from 0 to 1) of pixel pairs with compatible region label relationships between the two segmentations to be compared. A value of 1 indicates that the two segmentations are identical, and a value of 0 indicates that the two segmentations do not agree on any pair of points (e.g., when all the pixels are gathered in a single region in one segmentation whereas the other segmentation assigns each pixel to an individual region). When the numbers of labels in S^{test} and S^{gt} are much smaller than the number of data points N, a computationally inexpensive estimator of the Rand index can be found in [16].

B. Probabilistic Rand Index (PRI)

The PRI was recently introduced by Unnikrishnan [16] to take into account the inherent variability of possible interpretations between human observers of an image, i.e., the multiple acceptable ground-truth segmentations associated with each natural image. This variability between observers, recently highlighted by the Berkeley segmentation dataset [26], is due to the fact that each human chooses to segment an image at a different level of detail. It is also due to image segmentation being an ill-posed problem, which exhibits multiple solutions for the different possible values of the number of classes, not known a priori. Hence, in the absence of a unique ground-truth segmentation, the clustering quality measure has to quantify the agreement of an automatic segmentation (i.e., given by an algorithm) with the variation in a set of available manual segmentations representing, in fact, a very small sample of the set of all possible perceptually consistent interpretations of an image [15]. The authors of [16] address this concern by a soft nonuniform weighting of pixel pairs as a means of accounting for this variability in the ground-truth set.

More formally, let {S_1, ..., S_L} be a set of L manually segmented (ground-truth) images corresponding to an image of N pixels. Let S^{test} be the segmentation to be compared with the manually labeled set, and l_i^k the region label at pixel location i of the segmentation map S_k. The probabilistic Rand index is defined by

PRI(S^{test}, \{S_k\}) = \frac{2}{N(N-1)} \sum_{i<j} \big( c_{ij}\, p_{ij} + (1 - c_{ij})(1 - p_{ij}) \big)    (2)

where c_{ij} = I[l_i^{test} = l_j^{test}], and a good choice for the estimator of p_{ij} (the probability of the pixels i and j having the same label across \{S_k\}) is simply given by the empirical proportion

p_{ij} = \frac{1}{L} \sum_{k=1}^{L} I[l_i^k = l_j^k]    (3)

with I[\,\cdot\,] the (Kronecker delta) indicator function. In this way, the PRI measure is simply the mean of the Rand index computed between each pair (S^{test}, S_k) [16]. As a consequence, the PRI measure will favor (i.e., give a high score to) a resulting acceptable segmentation map which is consistent with most of the segmentation results given by human experts. More precisely, the resulting segmentation could be a compromise or a consensus, in terms of level of detail and contour accuracy, of the ground-truth segmentations. Fig. 8 gives a fusion map example, using a set of manually generated segmentations exhibiting a high variation in level of detail. Let us add that this probabilistic metric is not degenerate; all the bad segmentations will give a low score without exception [16].

C. Generative Gibbs Distribution Model of Correct Segmentations

As indicated in [15], the set \{p_{ij}\} (i.e., the pairwise empirical probabilities for each pixel pair computed over \{S_k\}) defines an appealing generative model of correct segmentation for the image, easily expressed as a Gibbs distribution. In this way, the Gibbs distribution, generative model of correct segmentation, which can also be considered as a likelihood of S^{test} in the PRI sense, may be expressed as

P(S^{test} \mid \{S_k\}) \propto \exp\Big\{ \frac{1}{T} \sum_{\langle i,j \rangle \in C} \big( c_{ij}\, p_{ij} + (1 - c_{ij})(1 - p_{ij}) \big) \Big\}

where C is the set of second-order (binary) cliques of a Markov random field (MRF) model defined on a complete graph (each node or pixel is connected to all other pixels of the image), and T is the temperature factor of this Boltzmann-Gibbs distribution, which is half the normalization factor of the Rand index in (1) or (2), since there are twice as many binary cliques as pairs of pixels. After simplification, this yields

P(S^{test} \mid \{S_k\}) = Z^{-1} \exp\big( -U_1(S^{test}) / T \big)    (4)

with the likelihood energy U_1(S^{test}) = -\sum_{\langle i,j \rangle \in C} \big( c_{ij}\, p_{ij} + (1 - c_{ij})(1 - p_{ij}) \big), where Z is a constant partition function (with a factor that depends only on the data), namely

Z = \sum_{S \in \Omega} \exp\big( -U_1(S)/T \big)

where \Omega is the set of all possible (configurations for the) segmentations into regions of the N-pixel image. Let us add that, since the number of classes (and thus the number of regions) of this final segmentation is not known a priori, there may be between one and as many regions as there are pixels in the image (assigning each pixel to an individual region is a possible configuration). In this setting, the p_{ij} can be viewed as the potentials of spatially variant binary cliques (or pairwise interaction potentials) of an equivalent nonstationary MRF generative model of correct segmentations, in the case where \{S_k\} is assumed to be a set of representative ground-truth segmentations. Besides, S^{test}, the segmentation result to be compared to \{S_k\}, can be considered as a realization of this generative model, with PRand a statistical measure proportional to its negative likelihood energy. In other words, an estimate of S, in the maximum likelihood sense of this generative model, will give a resulting segmented map (i.e., a fusion result) with high fidelity to the set of segmentations to be fused.

D. Label Field Fusion Model for Image Segmentation

Let us consider that we have at our disposal a set \{S_k\} of L segmentations associated with an image of N pixels, to be fused (i.e., efficiently combined) in order to obtain a final reliable and accurate segmentation result. The generative Gibbs distribution model of correct segmentations expressed in (4) gives us an interesting fusion model of segmentation maps, in the maximum PRI sense, or equivalently in the maximum likelihood (ML) sense for the underlying Gibbs model expressed in (4). In this framework, the set of p_{ij} is computed with the empirical proportion estimator [see (3)] on the data \{S_k\}. Once \{p_{ij}\} has been estimated, the resulting ML fusion segmentation map is defined by maximizing the likelihood distribution:

\hat{S}_{ML} = \arg\max_S P(S \mid \{S_k\}) = \arg\min_S U_1(S)    (5)

where U_1 is the likelihood energy term of our generative fusion model, which has to be minimized in order to find \hat{S}. Concretely, U_1 encodes the set of constraints, in terms of pairs of pixel labels (identical or not), provided by each of the segmentations to be fused. The minimization of U_1 finds the resulting segmentation which also optimizes the PRI criterion.

E. Bayesian Fusion Model for Image Segmentation

As previously described in Section II-B, the image segmentation problem is an ill-posed problem exhibiting multiple solutions for the different possible values of the number of classes, which is not known a priori. To render this problem well-posed with a unique solution, some constraints on the segmentation process are necessary, favoring over-segmentation or, on the contrary, the merging of regions. From the probabilistic viewpoint, these regularization constraints can be expressed by a prior distribution of the unknown segmentation S, treated as a realization of a random field, for example within an MRF framework [2], [27], or analytically, encoded via a local or global [13], [28] prior energy term added to the likelihood term.

In this framework, we consider an energy function that sets a particular global constraint on the fusion process. This term restricts the number of regions (and indirectly also penalizes small regions) in the resulting segmentation map. So we consider the energy function

U_2(S) = \big( R(S) - K \big) \, H\big( R(S) - K \big)    (6)

where R(S) designates the number of regions (sets of connected pixels belonging to the same class) in the segmented image S, H is the Heaviside (unit step) function, and K is an internal parameter of our fusion model which physically represents the number of classes above which this prior constraint, limiting the number of regions, is taken into account. From the probabilistic viewpoint, this regularization constraint corresponds to a simple shifted (from K) exponential distribution decreasing with the number of regions displayed by the final segmentation. In this framework, a regularized solution corresponds to the maximum a posteriori (MAP) solution of our fusion model, i.e., the solution \hat{S} that maximizes the posterior distribution, and thus

\hat{S}_{MAP} = \arg\min_S \big[ U_1(S) + \beta\, U_2(S) \big]    (7)

where \beta is the regularization parameter controlling the contribution of the two terms: U_1 expressing fidelity to the set of segmentations to be fused, and U_2 encoding our prior knowledge or beliefs concerning the types of acceptable final segmentations as estimates (segmentations with a limited number of regions). In this way, the resulting criterion used in this fusion model can be viewed as a penalized maximum Rand estimator.

III. COARSE-TO-FINE OPTIMIZATION STRATEGY

A. Multiresolution Minimization Strategy

Our fusion procedure of several label fields emerges as an optimization problem of a complex non-convex cost function with several local extrema over the label parameter space. In order to find a particular configuration of S that efficiently minimizes this complex energy function, we can use a global optimization procedure such as a simulated annealing algorithm [27], whose advantages are twofold: first, it has the capability of avoiding local minima, and second, it does not require a good initial guess in order to estimate the solution.

An alternative approach to this stochastic and computationally expensive procedure is the iterated conditional modes (ICM) algorithm introduced by Besag [2]. This method is deterministic and simple, but has the disadvantage of requiring a proper initialization of the segmentation map, close to the optimal solution; otherwise it converges towards a bad local minimum of our complex energy function. In order to solve this problem, we could take as initialization (first iteration) the segmentation map

S^{[0]} = \arg\min_{S \in \{S_k\}} U_1(S)    (8)

i.e., choosing, for the first iteration of the ICM procedure, amongst the segmentations to be fused, the one closest to the optimal solution of the Gibbs energy function of our fusion model [see (5)].

A more robust optimization method consists of a multiresolution approach combined with the classical ICM optimization procedure. In this strategy, rather than considering the minimization problem on the full and original configuration space, the original inverse problem is decomposed into a sequence of approximated optimization problems of reduced complexity. This drastically reduces the computational effort and provides an accelerated convergence toward an improved estimate. Experimentally, estimation results are nearly comparable to those obtained by stochastic optimization procedures, as noticed, for example, in [10] and [29]. To this end, a multiresolution pyramid of segmentation maps is preliminarily derived, in order to estimate a set of p_{ij} for each pixel pair at the different resolution levels, and a set of similar spatial models is considered for each resolution level of the pyramidal data structure.

(Fig. 1. Duplication and "coarse-to-fine" minimization strategy.)
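A toy sketch of the quantities defined above — the empirical pairwise probabilities p_ij of (3), the likelihood energy U_1 of (5), and a single ICM sweep — on a tiny image with a subsampled set of pixel pairs (names and sizes are illustrative, not the paper's implementation):

```python
import numpy as np

def pairwise_agreement(segmentations, pairs):
    """p_ij: empirical probability that pixels i and j share a label
    across the L input segmentations (eq. (3))."""
    S = np.asarray(segmentations)            # shape (L, N), flattened labels
    return np.array([(S[:, i] == S[:, j]).mean() for i, j in pairs])

def fusion_energy(labels, pairs, p):
    """Negative PRI-like likelihood energy U_1: low when the candidate
    agrees with confident pairwise constraints."""
    c = np.array([labels[i] == labels[j] for i, j in pairs], float)
    return -np.sum(p * c + (1.0 - p) * (1.0 - c))

def icm_sweep(labels, pairs, p, n_labels):
    """One ICM pass: greedily set each pixel to its energy-minimizing label."""
    for i in range(labels.size):
        best = min(range(n_labels),
                   key=lambda k: fusion_energy(
                       np.concatenate([labels[:i], [k], labels[i + 1:]]),
                       pairs, p))
        labels[i] = best
    return labels

# Toy run: 3 noisy segmentations of 16 pixels, 60 random pixel pairs.
rng = np.random.default_rng(1)
truth = np.repeat([0, 1], 8)
segs = [np.where(rng.random(16) < 0.15, 1 - truth, truth) for _ in range(3)]
pairs = [tuple(sorted(rng.choice(16, 2, replace=False))) for _ in range(60)]
p = pairwise_agreement(segs, pairs)
fused = icm_sweep(segs[0].copy(), pairs, p, n_labels=2)
print(fused)
```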
At the upper level of the pyramidal structure (the lowest resolution level), the ICM optimization procedure is initialized with the segmentation map given by the procedure defined in (8). It may also be initialized with a random solution and, starting from this initial segmentation, it iterates until convergence. After convergence, the result obtained at this resolution level is interpolated (see Fig. 1) and then used as the initialization for the next finer level, and so on, until the full resolution level is reached.

B. Optimization of the Full Energy Function

Experiments have shown that the full energy function of our model (with the region-based global regularization constraint) is complex for some images. Consequently, it is preferable to perform the minimization in two steps. In a first step, the minimization is performed without considering the global constraint (considering only the likelihood term U_L), with the previously mentioned multiresolution minimization strategy and the ICM optimization procedure until its convergence at the full resolution level. At this finest resolution level, the minimization is then refined in a second step by identifying each region of the resulting segmentation map. This creates a region adjacency graph (a RAG is an undirected graph whose nodes represent the connected regions of the image domain), on which a region merging procedure is performed by simply applying the ICM relaxation scheme at the region level, i.e., by merging the couples of adjacent regions leading to a reduction of the cost function of the full model [see (7)], until convergence. In this second step, the minimization is thus performed according to the full model.

Fig. 2. From top to bottom and left to right: a natural image from the Berkeley database (no. 134052) and the formation of its region process (algorithm PRIF) at the (l = 3) upper level of the pyramidal structure at iterations 0-6 and 8 (the last iteration) of the ICM optimization algorithm; duplication and result of the ICM relaxation scheme at the finest level of the pyramid at iterations 0, 1 and 18 (the last iteration); and the segmentation result (region level) after the merging of regions and the taking into account of the prior. Bottom: evolution of the Gibbs energy for the different steps of the multiresolution scheme.
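The duplication step of Fig. 1 and the resulting coarse-to-fine schedule can be sketched as follows; this is a sketch under stated assumptions: `relax` stands for any single-level minimizer such as the ICM above, and the factor-2 duplication is illustrative (the paper's coarse level is eight times smaller per side, i.e., three successive factor-2 duplications):

    import numpy as np

    def duplicate(coarse, factor=2):
        # "Duplication" of Fig. 1: each coarse-level label is replicated
        # into a factor x factor block of the next finer grid.
        return np.kron(coarse, np.ones((factor, factor), dtype=coarse.dtype))

    def coarse_to_fine(init_coarse, relax, energies):
        # energies[0] is the coarsest-level model and energies[-1] the
        # full-resolution one; each level is relaxed until convergence and
        # its result, duplicated, initializes the next finer level.
        s = init_coarse
        for level, U in enumerate(energies):
            s = relax(s, U)
            if level < len(energies) - 1:
                s = duplicate(s)
        return s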
C. Algorithm

In order to decrease the computational load of our multiresolution fusion procedure, we use only two levels of resolution in our pyramidal structure (see Fig. 2): the full resolution and an image eight times smaller (i.e., the third upper level of a classical data pyramidal structure). We do not consider a complete graph: we consider that each node (or pixel) is connected with its four nearest neighbors and with a fixed number of connections (85 in our application), regularly spaced between all the other pixels located within a square search window of fixed size 30 pixels centered around it. Fig. 3 shows a comparison of segmentation results with those of a fully connected graph computed on a search window two times larger. We decided to initialize the lower (or third upper) level of the pyramid with a sequence of 20 different random segmentations with K classes. The full resolution level is then initialized with the duplication (see Fig. 1) of the best segmentation result (i.e., the one associated with the lowest Gibbs energy) obtained after convergence of the ICM at this lower resolution level (see Fig. 2). We provide the details of our optimization strategy in Algorithm 1.

Algorithm 1. Multiresolution minimization procedure (see also Fig. 2).
Two-Step Multiresolution Minimization
Input: the set of segmentations to be fused, and the pairwise probabilities for each retained pixel pair, computed over this set at each resolution level.
Initialization Step
• Build the multiresolution pyramids from the segmentations to be fused
• Compute the pairwise probabilities at resolution level 3
• Compute the pairwise probabilities at full resolution
PIXEL LEVEL
Initialization: random initialization of the upper level of the pyramidal structure with K classes
• ICM optimization at the upper level
• Duplication (cf. Fig. 1) to the full resolution
• ICM optimization at the full resolution
REGION LEVEL
for each region at the finest level do
• ICM optimization under the full model

Fig. 3. Comparison of two segmentation results of our multiresolution fusion procedure (algorithm PRIF) using, respectively: left] a subsampled graph with a fixed number of connections (85), regularly spaced and located within a square search window of size 30 pixels; right] a fully connected graph computed on a search window two times larger (and requiring a computational load increased by a factor of about 100).

D. Comparison With a Monoresolution Stochastic Relaxation

In order to test the efficiency of our two-step multiresolution relaxation (MR) strategy, we have compared it to a standard monoresolution stochastic relaxation algorithm, i.e., a so-called simulated annealing (SA) algorithm based on the Gibbs sampler [27]. In order to restrict the number of iterations to be finite, we have implemented a geometric temperature cooling schedule [30] of the form

T_k = T_0 (T_f / T_0)^(k / k_max)

where T_0 is the starting temperature, T_f is the final temperature, and k_max is the maximal number of iterations. In this stochastic procedure, the choice of the initial temperature T_0 is crucial: the temperature must be sufficiently high in the first stages of the simulated annealing.

Fig. 4. Segmentation (image no. 385028 from the Berkeley database). From top to bottom and left to right: segmentation maps respectively obtained by 1] our multiresolution optimization procedure (algorithm PRIF): U = 3402965; 2] SA: U = 3206127; 3] SA: U = 3312794; 4] SA: U = 3395572; 5] SA: U = 3402162.
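The geometric cooling schedule above can be sketched directly, together with one Gibbs-sampler site update; the numeric example and all names are our assumptions:

    import math
    import random

    def temperature(k, T0, Tf, k_max):
        # Geometric cooling schedule of [30]: T_k = T0 * (Tf / T0) ** (k / k_max),
        # starting at T0 for k = 0 and reaching Tf at k = k_max.
        return T0 * (Tf / T0) ** (k / k_max)

    def gibbs_draw(local_energies, T, rng=random):
        # One site update of the Gibbs sampler: draw a label from the local
        # conditional distribution proportional to exp(-U_local / T).
        labels = list(local_energies)
        weights = [math.exp(-local_energies[lab] / T) for lab in labels]
        r, acc = rng.random() * sum(weights), 0.0
        for lab, w in zip(labels, weights):
            acc += w
            if r <= acc:
                return lab
        return labels[-1]

    print([round(temperature(k, 10.0, 0.1, 4), 3) for k in range(5)])
    # [10.0, 3.162, 1.0, 0.316, 0.1]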
Mesh Partitioning Approach to Energy Efficient Data Layout

Sambuddhi Hettiaratchi and Peter Y.K. Cheung
Department of Electrical and Electronic Engineering, Imperial College of Science, Technology and Medicine, London

Abstract
Memory access consumes a significant amount of energy in data transfer intensive applications. The selection of a memory location from a CMOS memory cell array involves driving row and column select lines. A switching event on a row select line often consumes significantly more energy than a switching event on a column select line. In order to exploit this difference in the energy consumption of row and column select lines, we propose a novel data layout method that aims to minimize row switching activity by assigning spatially and temporally local data items to the same row. The problem of minimum row switching data layout is formulated as a multi-way mesh partitioning problem. The constraints imposed on the problem formulation ensure that the complexity of the address generator required to implement the optimized data layout is bounded and that the data layout optimization can be applied to all address generator synthesis methods. Our experiments demonstrate that our method can significantly reduce row transition counts compared with a row-major data layout.
1. Introduction

Memory access consumes a significant amount of energy in data transfer intensive applications [11], such as video and image processing. Dynamic power dissipation is significant in CMOS circuits, and therefore behavioural-level energy minimization efforts often attempt to minimize signal transition counts, particularly on high-capacitance nodes [8].
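As a toy illustration of the quantity being minimized, the following Python counter tallies bit toggles between consecutive words on a bus; the function name and the 16-bit width are illustrative assumptions:

    def bit_transitions(words, width=16):
        # Switching activity: number of bit toggles between consecutive
        # words on a bus, e.g. an address bus driving select logic.
        mask = (1 << width) - 1
        return sum(bin((a ^ b) & mask).count("1")
                   for a, b in zip(words, words[1:]))

    print(bit_transitions([0x00FF, 0x0100, 0x0101]))  # 9 + 1 = 10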
In order to minimize the switching activity caused by memory access, it is necessary to have some knowledge about the access sequences. For many application-specific integrated circuits (ASICs), the access sequences are known a priori. ASICs may also contain data-dependent access sequences; however, because the application is known, statistical information can be collected about the pattern of access. Information about the access sequences enables the application of energy optimizations to ASICs that would not be applicable to general-purpose systems.

CMOS memory cell arrays are usually organized into rectangular blocks of memory cells. The selection of a memory location involves driving row, column and, in large memories, block select signals. Signal transitions on high-capacitance select lines, such as the row and block select lines, consume more energy than those on column select lines [4].
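This asymmetry can be captured by a toy energy model such as the one below; the 10:1 cost ratio and the function name are our assumptions, not measured values:

    def access_energy(accesses, e_row=1.0, e_col=0.1):
        # Charge e_row for every row select switch and e_col for every
        # column select switch; `accesses` is the sequence of (row, column)
        # locations produced by a given data layout.
        energy = 0.0
        prev_r, prev_c = accesses[0]
        for r, c in accesses[1:]:
            energy += e_row * (r != prev_r) + e_col * (c != prev_c)
            prev_r, prev_c = r, c
        return energy

    print(access_energy([(0, 0), (0, 1), (1, 1), (0, 1)]))  # 0.1 + 1.0 + 1.0 = 2.1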
In this paper, we present a mesh partitioning approach to minimizing the energy consumption of memory access through a data layout that minimizes row switching. A multi-way graph partitioning approach to this problem has recently been proposed [5]. The graph partitioning problem formulation does not impose a structure on the optimized data layout; it is therefore general and achieves good results over a broad spectrum of access patterns. However, due to this lack of structure, the multi-way graph partitioning approach suffers from two drawbacks. One drawback is that it does not limit the complexity of the resulting address generator. The other is that the graph partitioning approach cannot easily be used with address generator synthesis methods that require address equations expressed in terms of application-specific units (ASUs) [7], such as adders, multipliers and multiplexors. The mesh partitioning problem formulation reported here imposes additional constraints on the graph partitioning approach so that the complexity of the address generator is limited and the row minimization optimization can more easily be applied to ASU-type address generators.
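The layout objective itself can be illustrated with a deliberately simple greedy sketch; this is not the paper's mesh partitioning algorithm, which additionally constrains the layout structure to bound address generator complexity, and all names are ours:

    from collections import Counter

    def layout_by_affinity(trace, row_capacity):
        # Items that are adjacent in the access trace have high affinity
        # and are greedily packed into the same row.
        affinity = Counter(zip(trace, trace[1:]))
        order = []
        for (a, b), _ in affinity.most_common():
            for item in (a, b):
                if item not in order:
                    order.append(item)
        for item in trace:                 # items never seen in any pair
            if item not in order:
                order.append(item)
        return {item: slot // row_capacity for slot, item in enumerate(order)}

    def row_transitions(trace, row_of):
        rows = [row_of[x] for x in trace]
        return sum(r1 != r2 for r1, r2 in zip(rows, rows[1:]))

    trace = ["a", "b", "a", "b", "c", "d", "c", "d"]
    rows = layout_by_affinity(trace, row_capacity=2)
    print(rows, row_transitions(trace, rows))  # one row transition in total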
Memory hierarchies that exploit data reuse can be used to minimize memory access energy [12]. Data layout optimizations are a complementary memory access energy reduction technique that can be applied together with other optimizations such as a memory hierarchy. In this paper, we also examine the interaction of data layout methods with simple memory hierarchies.
The novel contributions of this paper are: (1) the formulation of the minimum row switching data layout problem as a mesh partitioning problem, and (2) an evaluation of the quality of the resulting data layouts.
2. Previous Work
The relevant previous work addresses the problem of reducing memory access energy in ASICs at the behavioural level. Research on data layout is also briefly discussed.