Isothermal Adsorption-Desorption Characteristics of Lead in Calcareous and Purple Soils

Isothermal Adsorption-Desorption Characteristics of Lead in Purple and Calcareous Soils

Abstract: To probe the environmental capacity of lead in purple and calcareous soils and provide a theoretical basis for managing lead pollution in different soil types, the adsorption-desorption behavior of lead in the two soils was studied through field sampling and laboratory simulation (constant-temperature shaking) experiments. The results showed that the adsorption equilibrium of lead in both soils could be fitted with the Langmuir, Freundlich, and Temkin isotherm equations; the Langmuir equation fitted best for the calcareous soil and the Freundlich equation for the purple soil. The maximum lead adsorption capacity derived from the Langmuir equation was larger for the calcareous soil (19 728.77 mg/kg) than for the purple soil (12 194.68 mg/kg). In both soils the amount of adsorbed lead that desorbed increased with the amount adsorbed, but the desorption capacity of both soils was weak.

Key words: calcareous soil; purple soil; lead; adsorption-desorption

Soil is one of the environmental media on which human life depends and an important link in the biogeochemical cycling of heavy metals. Adsorption-desorption is a common reaction process of heavy metals in soil ecosystems. Lead is one of the major heavy-metal pollutants in soil; accumulating step by step along the food chain, soil lead ultimately endangers human health. After exogenous lead enters the soil, it undergoes a series of physicochemical reactions with the soil body and gradually reaches a dynamic equilibrium; its adsorption and desorption behavior differs greatly among soils depending on soil properties and environmental factors [1,2]. Studies have shown that the chemical behavior of lead in soil, particularly its adsorption and desorption, controls its migration and transformation and its uptake by plants [3,4]. In recent years, heavy use of pesticides and fertilizers in agriculture, massive vehicle exhaust emissions, and improper disposal of urban sewage and refuse have sharply increased soil lead contents, making soil lead pollution increasingly common [5,6]. Lead pollution directly reduces crop yield and quality and directly or indirectly harms human health, so studying the adsorption-desorption characteristics of lead in soil is important for finding effective ways to control the environmental behavior of heavy metals in soil. Although lead adsorption-desorption has been reported for many soil types, reports on purple and calcareous soils are rare. This study therefore examines the adsorption-desorption characteristics of lead in these two soils, providing basic data for agricultural development and a theoretical basis for setting heavy-metal environmental capacities of different soil types and for remediating lead-polluted soil.

1 Materials and Methods

1.1 Test soils
The test soils were two agricultural soils from Beibei District, Chongqing: a purple soil and a calcareous soil. The purple soil in Beibei developed from slope and residual parent materials weathered from the dark purple mudstones of the Triassic Feixianguan Formation, the dark purple and variegated mudstones of the Jurassic Ziliujing Formation, and the gray-brown purple mudstones of the Shaximiao Formation; the calcareous soil in the district is mainly yellow calcareous soil, developed from residual material left by the dissolution of thick Permian and Triassic limestones [7]. Samples were collected from a depth of 0-20 cm, air-dried, cleared of debris, ground, passed through a 1 mm sieve, and bagged for use. The purple soil had pH 6.09 (soil:water mass ratio 1:3), organic matter content 18.86 g/kg, total lead 31.81 mg/kg, and clay content 197.23 g/kg; the calcareous soil had pH 6.56 (soil:water 1:3), organic matter 22.44 g/kg, total lead 39.50 mg/kg, and clay content 397.45 g/kg.

1.2 Experimental methods
1.2.1 Solution preparation. 2.549 7 g of NaNO3 was weighed out to prepare 3 L of 0.01 mol/L NaNO3 solution. 3.181 2 g of Pb(NO3)2 was weighed out to prepare 2 L of solution with a lead (Pb2+) concentration of 1 000 mg/L in the 0.01 mol/L NaNO3 background solution; this 1 000 mg/L stock was then diluted with 0.01 mol/L NaNO3 to obtain 100 mL each of gradient solutions with lead concentrations of 25, 50, 100, 150, 200, 250, 300, 350, 400, 450, 500, 600, 700, 800, and 900 mg/L.
1.2.2 Adsorption experiment. Each tube was first weighed, then 1.000 g of air-dried purple or calcareous soil was weighed into a 50 mL plastic centrifuge tube (dry soil plus tube weight recorded as G1). Solutions were added at a soil:liquid ratio of 1:20 (W/V) with lead (Pb2+) contents of 0, 25, 50, 100, 150, 200, 250, 300, 350, 400, 450, 500, 600, and 700 mg/L.
1.2.3 Desorption experiment. After adsorption equilibrium, the supernatant was decanted and the wet soil plus tube was weighed (G2); the difference between G2 and G1 gives the moisture correction. Then 20 mL of 0.01 mol/L NaNO3 solution was added to each tube; the tubes were shaken on a reciprocating shaker for 2 h, left in a 25 °C incubator for 22 h, and centrifuged at 4 500 r/min for 10 min. The lead content of the supernatant was measured, and the lead desorbed from each soil was calculated from the measured concentration together with the moisture correction.
1.2.4 Analytical methods. Soil physicochemical properties and total lead were determined following reference [8]: pH by potentiometry, organic matter by the potassium dichromate external-heating method, and soil texture by the hydrometer method. Total lead in the soil after adsorption (HF-HClO4-HNO3 digestion) and lead concentrations in the equilibrium solutions were determined by atomic absorption spectrophotometry. Data analysis, fitting, and plotting used SPSS 17.0, Origin 8.0, and Excel 2003.

2 Results and Analysis

2.1 Lead adsorption by the two soils
The measured adsorption results are given in Table 1. Judging from the adsorption percentages in Table 1, both soils adsorbed lead strongly, which is related to the very low solubility of lead compounds in soil and the ease with which lead is adsorbed by soil colloids [9]. Studies show that lead adsorption is strongly influenced by soil pH: within the usual soil pH range (4-9), the net surface charge of soil particles is negative (negatively charged colloids generally outnumber positively charged ones), so lead ions are adsorbed and retained in the soil and do not move readily [10]. The pH of both test soils lies in this range, so pH also strongly affects lead adsorption in both soils. Within the treatment concentration range, the adsorption percentage of the calcareous soil stayed above 96% and varied little; the adsorption percentage of the purple soil fell relatively quickly once the added amount reached a certain level, indicating that the purple soil was approaching its maximum lead adsorption, while the trend for the calcareous soil was not yet obvious. Soil clay content has been shown to correlate positively with heavy-metal adsorption [11]; the higher clay content of the calcareous soil is one reason its adsorption capacity exceeds that of the purple soil.

2.2 Fitting the adsorption process with mathematical models
Figure 1 shows the lead adsorption isotherms of the two soils. The adsorption capacities of the calcareous and purple soils for Pb2+ differ at different equilibrium concentrations: the curve for the calcareous soil levels off when the equilibrium Pb2+ concentration exceeds 15 mg/L, while that for the purple soil levels off above 40 mg/L.
The lead adsorption isotherms of the two soils resemble those obtained for Pb2+ on other soil types [12]. The most basic way to describe the adsorption mechanism of heavy-metal ions on soil is to build an equilibrium model, so the Langmuir, Freundlich, and Temkin equations were used for fitting [13]:

Langmuir equation: C/Y = 1/(K1·M) + C/M;
Freundlich equation: Y = K2·C^(1/n);
Temkin equation: Y = a + K3·ln C.

Here Y (mg/kg) is the amount of lead adsorbed at equilibrium, C is the Pb2+ concentration in solution at equilibrium (mg/L), M is the maximum adsorption capacity, and K1, K2, K3, n, and a are model parameters. Fitting the experimental results (Table 1) with the three equations (Table 2) gives correlation coefficients that are all highly significant (P < 0.01), indicating that lead adsorption by both soils is mainly chemisorption or very strong physisorption, since the Langmuir and Temkin equations apply only to chemisorption or very strong physisorption [14]. However, neither the Freundlich nor the Temkin equation can characterize the adsorption capacity of a soil, whereas the Langmuir equation reportedly characterizes both adsorption capacity and adsorption strength, and its capacity term is commonly used as the actual maximum adsorption [15]. The Langmuir equation is therefore preferable to the Freundlich and Temkin equations for describing lead adsorption by these soils, although in terms of goodness of fit the Langmuir equation fitted the calcareous soil best while the Freundlich equation fitted the purple soil best. The maximum adsorption capacities obtained from the Langmuir equation were 19 728.77 mg/kg for the calcareous soil and 12 194.68 mg/kg for the purple soil. The larger maximum adsorption of the calcareous soil probably reflects differences in basic physicochemical properties such as mineral composition [14], cation exchange capacity [16], organic matter content [17], and clay content. (A fitting sketch follows the reference list below.)

2.3 Desorption of adsorbed lead and its relation to adsorption
The desorption results in Table 1 show that, overall, the purple soil desorbed more readily than the calcareous soil, indicating that the calcareous soil not only adsorbs more lead but also holds it more firmly, and that the potential hazard of lead pollution is greater in calcareous soil than in purple soil. For the purple soil, desorption was low at lead treatment concentrations below 600 mg/L and increased markedly at 600 mg/L and above, showing that at relatively low concentrations the purple soil adsorbs lead strongly and firmly. Overall, the desorbed amount of both soils increased with the adsorbed amount. Linear regressions of desorbed amount (y) on adsorbed amount (x) gave:

y(calcareous) = (2.491 x 10^-4)x - 0.164, r = 0.975 (P < 0.01)
y(purple) = 0.014x - 44.658, r = 0.877 (P < 0.01)

Both correlation coefficients are highly significant, so the desorbed and adsorbed amounts of the two soils are linearly related at a highly significant level.

3 Conclusions
1) Both soils adsorb lead strongly, and most lead entering the soil is adsorbed and fixed, showing that the migration and transformation capacity of lead in purple and calcareous soils is very weak; land in purple-soil and calcareous-soil areas receiving exogenous lead should therefore be used judiciously.
2) Because lead adsorption by both soils is dominated by chemisorption or very strong physisorption, the adsorbed lead is difficult to desorb; the calcareous soil desorbs least, and this weak desorption increases the potential hazard. In both soils the desorbed amount increases with the adsorbed amount, with a highly significant linear correlation between the two.

References:
[1] Tan C Y, Yue Z H, Luo H L. Adsorption and desorption characteristics of lead in vegetable garden soils[J]. Journal of Soil and Water Conservation, 2000, 14(2): 88-91. (in Chinese)
[2] KALBITZ K, WENNRICH R. Mobilization of heavy metal in polluted wetland soil and its dependence on dissolved organic matter[J]. The Science of the Total Environment, 1998, 209: 27-39.
[3] TILLER K G, Dai L L (trans.). Major and toxic heavy metals in soil and their ecological relations[J]. Progress in Soil Science, 1987, 15(2): 37-41. (in Chinese)
[4] Xia R J. The problem of heavy-metal pollution of soil[J]. Agro-Environmental Protection, 1985, 4(1): 5-9. (in Chinese)
[5] Zhang H, Ma D S. Speciation characteristics and desorption-retention capacity of heavy-metal pollution along highways[J]. Environmental Chemistry, 1998, 17(6): 564-568. (in Chinese)
[6] Li Z L, Xue C Z. Speciation of Pb and Cd in sewage-irrigated soils[J]. Agro-Environmental Protection, 1994, 13(4): 152-157. (in Chinese)
[7] Local Chronicles Compilation Committee of Beibei District, Chongqing. Physical Geography of Beibei[M]. Chongqing: Southwest China Normal University Press, 1986: 145-184. (in Chinese)
[8] Lu R K. Analytical Methods of Soil and Agricultural Chemistry[M]. Beijing: China Agricultural Science and Technology Press, 1999: 477-490. (in Chinese)
[9] Yu G Y, Wu Y Y. Interactions of heavy-metal elements in the soil environment and their effects on retention characteristics[J]. Environmental Chemistry, 1997, 16(1): 30-36. (in Chinese)
[10] BASTA N T, TABATABAI M A. Effect of cropping systems on adsorption of metals by soils: II. Effect of pH[J]. Soil Science, 1992, 153(3): 195-204.
[11] Wang Y, Wei F S. Elemental Chemistry of the Soil Environment[M]. Beijing: China Environmental Science Press, 1995: 199-200. (in Chinese)
[12] Qiao X L, Luo Y M. Chemical composition of sewage sludges from some cities of China and preliminary standards for agricultural use[J]. Soils, 2001, 33(4): 205-209. (in Chinese)
[13] Sui H J, Rao J L. Mechanistic models of ion retention in soil and their applications[J]. Progress in Soil Science, 1995, 23(1): 27-31. (in Chinese)
[14] Yang J Y, Yang X E, He Z L, et al. Research advances in the adsorption-desorption behavior of lead in soil[J]. Ecology and Environment, 2005, 14(1): 102-107. (in Chinese)
[15] Chen M. Effects of organic matter and free Fe on the surface charge properties of red soils in southern Hunan[J]. Tropical and Subtropical Soil Science, 1997, 6(1): 20-25. (in Chinese)
[16] BASTA N T, TABATABAI M A. Path analysis of heavy metal adsorption by soil[J]. Agronomy Journal, 1993, 85(5): 1054-1057.
[17] Li J, Zhang Y L, Chen W X. Effect of organic matter on the lead adsorption characteristics of soil[J]. Journal of Shenyang Agricultural University, 1992, 23(Suppl.): 38-42. (in Chinese)
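As a worked illustration of the isotherm fitting in Section 2.2, the sketch below fits the Langmuir, Freundlich, and Temkin equations with SciPy's curve_fit. All data values and initial parameter guesses are synthetic placeholders, not the paper's measurements.

```python
# Sketch: fitting the three isotherms of Section 2.2.
# The data arrays are placeholders, not the paper's measurements.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, M, K1):     # Y = M*K1*C / (1 + K1*C)
    return M * K1 * C / (1 + K1 * C)

def freundlich(C, K2, n):   # Y = K2 * C^(1/n)
    return K2 * C ** (1.0 / n)

def temkin(C, a, K3):       # Y = a + K3*ln(C)
    return a + K3 * np.log(C)

C = np.array([1.0, 5, 10, 20, 40, 80])                  # equilibrium Pb2+ (mg/L), placeholder
Y = np.array([800.0, 3000, 5200, 8100, 10500, 11900])   # adsorbed Pb (mg/kg), placeholder

for name, f, p0 in [("Langmuir", langmuir, (12000, 0.05)),
                    ("Freundlich", freundlich, (500, 2)),
                    ("Temkin", temkin, (-1000, 3000))]:
    popt, _ = curve_fit(f, C, Y, p0=p0, maxfev=10000)
    resid = Y - f(C, *popt)
    r2 = 1 - resid @ resid / ((Y - Y.mean()) @ (Y - Y.mean()))
    print(f"{name}: params={popt}, R^2={r2:.4f}")
```

Comparing the R^2 (or correlation) values of the three fits, as in Table 2, is what identifies the best-fitting equation for each soil.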
Common Expressions Used in SCI Paper Abstracts

To write a good abstract, you need to build a bank of sentence patterns suited to your own needs (the vocabulary below is drawn from highly cited SCI papers).

Introduction
(1) Reviewing the research background; common words: review, summarize, present, outline, describe.
(2) Stating the purpose of the paper; common words: purpose, attempt, aim; an infinitive phrase can also serve as an adverbial of purpose.
(3) Introducing the paper's key content or scope; common words: study, present, include, focus, emphasize, emphasis, attention.

Methods
(1) Describing the research or experimental process; common words: test, study, investigate, examine, experiment, discuss, consider, analyze, analysis.
(2) Stating the research or experimental methods; common words: measure, estimate, calculate.
(3) Introducing applications and uses; common words: use, apply, application.

Results
(1) Presenting the findings; common words: show, result, present.
(2) Stating the conclusions; common words: summary, introduce, conclude.

Discussion
(1) Stating the paper's arguments and the authors' views; common words: suggest, report, present, expect, describe.
(2) Supporting the argument; common words: support, provide, indicate, identify, find, demonstrate, confirm, clarify.
(3) Making recommendations; common words: suggest, suggestion, recommend, recommendation, propose, necessity, necessary, expect.
Introduction example (reviewing the background): review
•Author(s): ROBINSON, TE; BERRIDGE, KC
•Title: THE NEURAL BASIS OF DRUG CRAVING - AN INCENTIVE-SENSITIZATION THEORY OF ADDICTION
•Source: BRAIN RESEARCH REVIEWS, 18 (3): 247-291 SEP-DEC 1993 (Netherlands; SCI citations: 1774)
We review evidence for this view of addiction and discuss its implications for understanding the psychology and neurobiology of addiction.

Introduction example (reviewing the background): summarize
•Author(s): Barnett, RM; Carone, CD (SCI citations: 1571)
•Title: Particles and fields. 1. Review of particle physics
•Source: PHYSICAL REVIEW D, 54 (1): 1-+ Part 1 JUL 1 1996 (USA)
•Abstract: This biennial review summarizes much of Particle Physics. Using data from previous editions, plus 1900 new measurements from 700 papers, we list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons. We also summarize searches for hypothetical particles such as Higgs bosons, heavy neutrinos, and supersymmetric particles. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as the Standard Model, particle detectors, probability, and statistics. A booklet is available containing the Summary Tables and abbreviated versions of some of the other sections of this full Review.

Introduction example (reviewing the background): outline
•Author(s): TIERNEY, L (SCI citations: 728)
•Title: MARKOV-CHAINS FOR EXPLORING POSTERIOR DISTRIBUTIONS
•Source: ANNALS OF STATISTICS, 22 (4): 1701-1728 DEC 1994 (USA)
•Abstract: Several Markov chain methods are available for sampling from a posterior distribution. Two important examples are the Gibbs sampler and the Metropolis algorithm. In addition, several strategies are available for constructing hybrid algorithms. This paper outlines some of the basic methods and strategies and discusses some related theoretical and practical issues. On the theoretical side, results from the theory of general state space Markov chains can be used to obtain convergence rates, laws of large numbers and central limit theorems for estimates obtained from Markov chain methods. These theoretical results can be used to guide the construction of more efficient algorithms. For the practical use of Markov chain methods, standard simulation methodology provides several variance reduction techniques and also gives guidance on the choice of sample size and allocation.

Introduction example (reviewing the background): present
•Author(s): LYNCH, M; MILLIGAN, BG (SCI citations: 661)
•Title: ANALYSIS OF POPULATION GENETIC-STRUCTURE WITH RAPD MARKERS
•Source: MOLECULAR ECOLOGY, 3 (2): 91-99 APR 1994 (UK)
•Abstract: Recent advances in the application of the polymerase chain reaction make it possible to score individuals at a large number of loci. The RAPD (random amplified polymorphic DNA) method is one such technique that has attracted widespread interest. The analysis of population structure with RAPD data is hampered by the lack of complete genotypic information resulting from dominance, since this enhances the sampling variance associated with single loci as well as induces bias in parameter estimation. We present estimators for several population-genetic parameters (gene and genotype frequencies, within- and between-population heterozygosities, degree of inbreeding and population subdivision, and degree of individual relatedness) along with expressions for their sampling variances.
Although completely unbiased estimators do not appear to be possible with RAPDs, several steps are suggested that will insure that the bias in parameter estimates is negligible. To achieve the same degree of statistical power, on the order of 2 to 10 times more individuals need to be sampled per locus when dominant markers are relied upon, as compared to codominant (RFLP, isozyme) markers. Moreover, to avoid bias in parameter estimation, the marker alleles for most of these loci should be in relatively low frequency. Due to the need for pruning loci with low-frequency null alleles, more loci also need to be sampled with RAPDs than with more conventional markers, and some problems of bias cannot be completely eliminated.

Introduction example (reviewing the background): describe
•Author(s): CLONINGER, CR; SVRAKIC, DM; PRZYBECK, TR (SCI citations: 926)
•Title: A PSYCHOBIOLOGICAL MODEL OF TEMPERAMENT AND CHARACTER
•Source: ARCHIVES OF GENERAL PSYCHIATRY, 50 (12): 975-990 DEC 1993 (USA)
•Abstract: In this study, we describe a psychobiological model of the structure and development of personality that accounts for dimensions of both temperament and character. Previous research has confirmed four dimensions of temperament: novelty seeking, harm avoidance, reward dependence, and persistence, which are independently heritable, manifest early in life, and involve preconceptual biases in perceptual memory and habit formation. For the first time, we describe three dimensions of character that mature in adulthood and influence personal and social effectiveness by insight learning about self-concepts. Self-concepts vary according to the extent to which a person identifies the self as (1) an autonomous individual, (2) an integral part of humanity, and (3) an integral part of the universe as a whole. Each aspect of self-concept corresponds to one of three character dimensions called self-directedness, cooperativeness, and self-transcendence, respectively. We also describe the conceptual background and development of a self-report measure of these dimensions, the Temperament and Character Inventory. Data on 300 individuals from the general population support the reliability and structure of these seven personality dimensions. We discuss the implications for studies of information processing, inheritance, development, diagnosis, and treatment.

Introduction examples, part (2): stating the purpose (purpose, attempt, aim)

Introduction example (stating the purpose): attempt
•Author(s): Donoho, DL; Johnstone, IM
•Title: Adapting to unknown smoothness via wavelet shrinkage
•Source: JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, 90 (432): 1200-1224 DEC 1995 (SCI citations: 429)
•Abstract: We attempt to recover a function of unknown smoothness from noisy sampled data. We introduce a procedure, SureShrink, that suppresses noise by thresholding the empirical wavelet coefficients. The thresholding is adaptive: A threshold level is assigned to each dyadic resolution level by the principle of minimizing the Stein unbiased estimate of risk (Sure) for threshold estimates. The computational effort of the overall procedure is order N.log(N) as a function of the sample size N. SureShrink is smoothness adaptive: If the unknown function contains jumps, then the reconstruction (essentially) does also; if the unknown function has a smooth piece, then the reconstruction is (essentially) as smooth as the mother wavelet will allow.
The procedure is in a sense optimally smoothness adaptive: It is near minimax simultaneously over a whole interval of the Besov scale; the size of this interval depends on the choice of mother wavelet. We know from a previous paper by the authors that traditional smoothing methods (kernels, splines, and orthogonal series estimates), even with optimal choices of the smoothing parameter, would be unable to perform in a near-minimax way over many spaces in the Besov scale. Examples of SureShrink are given. The advantages of the method are particularly evident when the underlying function has jump discontinuities on a smooth background.

Introduction example (stating the purpose): To investigate
•Author(s): OLTVAI, ZN; MILLIMAN, CL; KORSMEYER, SJ (SCI citations: 3233)
•Title: BCL-2 HETERODIMERIZES IN-VIVO WITH A CONSERVED HOMOLOG, BAX, THAT ACCELERATES PROGRAMMED CELL-DEATH
•Source: CELL, 74 (4): 609-619 AUG 27 1993
•Abstract: Bcl-2 protein is able to repress a number of apoptotic death programs. To investigate the mechanism of Bcl-2's effect, we examined whether Bcl-2 interacted with other proteins. We identified an associated 21 kd protein partner, Bax, that has extensive amino acid homology with Bcl-2, focused within highly conserved domains I and II. Bax is encoded by six exons and demonstrates a complex pattern of alternative RNA splicing that predicts a 21 kd membrane (alpha) and two forms of cytosolic protein (beta and gamma). Bax homodimerizes and forms heterodimers with Bcl-2 in vivo. Overexpressed Bax accelerates apoptotic death induced by cytokine deprivation in an IL-3-dependent cell line. Overexpressed Bax also counters the death repressor activity of Bcl-2. These data suggest a model in which the ratio of Bcl-2 to Bax determines survival or death following an apoptotic stimulus.

Introduction example (stating the purpose): purposes
•Author(s): ROGERS, FJ; IGLESIAS, CA (SCI citations: 512)
•Title: RADIATIVE ATOMIC ROSSELAND MEAN OPACITY TABLES
•Source: ASTROPHYSICAL JOURNAL SUPPLEMENT SERIES, 79 (2): 507-568 APR 1992 (USA)
•Abstract: For more than two decades the astrophysics community has depended on opacity tables produced at Los Alamos. In the present work we offer new radiative Rosseland mean opacity tables calculated with the OPAL code developed independently at LLNL. We give extensive results for the recent Anders-Grevesse mixture which allow accurate interpolation in temperature, density, hydrogen mass fraction, as well as metal mass fraction. The tables are organized differently from previous work. Instead of rows and columns of constant temperature and density, we use temperature and follow tracks of constant R, where R = density/(temperature)^3. The range of R and temperature are such as to cover typical stellar conditions from the interior through the envelope and the hotter atmospheres. Cool atmospheres are not considered since photoabsorption by molecules is neglected. Only radiative processes are taken into account so that electron conduction is not included. For comparison purposes we present some opacity tables for the Ross-Aller and Cox-Tabor metal abundances. Although in many regions the OPAL opacities are similar to previous work, large differences are reported. For example, factors of 2-3 opacity enhancements are found in stellar envelope conditions.

Introduction example (stating the purpose): aim
•Author(s): EDVARDSSON, B; ANDERSEN, J; GUSTAFSSON, B; LAMBERT, DL; NISSEN, PE; TOMKIN, J (SCI citations: 934)
•Title: THE CHEMICAL EVOLUTION OF THE GALACTIC DISK. 1. ANALYSIS AND RESULTS
•Source: ASTRONOMY AND ASTROPHYSICS, 275 (1): 101-152 AUG 1993
•Abstract: With the aim to provide observational constraints on the evolution of the galactic disk, we have derived abundances of O, Na, Mg, Al, Si, Ca, Ti, Fe, Ni, Y, Zr, Ba and Nd, as well as individual photometric ages, for 189 nearby field F and G disk dwarfs. The galactic orbital properties of all stars have been derived from accurate kinematic data, enabling estimates to be made of the distances from the galactic center of the stars' birthplaces. [Note: a structured abstract.] Our extensive high resolution, high S/N, spectroscopic observations of carefully selected northern and southern stars provide accurate equivalent widths of up to 86 unblended absorption lines per star between 5000 and 9000 angstrom. The abundance analysis was made with greatly improved theoretical LTE model atmospheres. Through the inclusion of a great number of iron-peak element absorption lines the model fluxes reproduce the observed UV and visual fluxes with good accuracy. A new theoretical calibration of T(eff) as a function of Stromgren b - y for solar-type dwarfs has been established. The new models and T(eff) scale are shown to yield good agreement between photometric and spectroscopic measurements of effective temperatures and surface gravities, but the photometrically derived very high overall metallicities for the most metal rich stars are not supported by the spectroscopic analysis of weak spectral lines.

Introduction example (stating the purpose): aims
•Author(s): PAYNE, MC; TETER, MP; ALLAN, DC; ARIAS, TA; JOANNOPOULOS, JD (SCI citations: 2654)
•Title: ITERATIVE MINIMIZATION TECHNIQUES FOR AB INITIO TOTAL-ENERGY CALCULATIONS - MOLECULAR-DYNAMICS AND CONJUGATE GRADIENTS
•Source: REVIEWS OF MODERN PHYSICS, 64 (4): 1045-1097 OCT 1992 (USA, American Physical Society)
•Abstract: This article describes recent technical developments that have made the total-energy pseudopotential the most powerful ab initio quantum-mechanical modeling method presently available. In addition to presenting technical details of the pseudopotential method, the article aims to heighten awareness of the capabilities of the method in order to stimulate its application to as wide a range of problems in as many scientific disciplines as possible.

Introduction example (introducing the focus or scope): includes
•Author(s): MARCHESINI, G; WEBBER, BR; ABBIENDI, G; KNOWLES, IG; SEYMOUR, MH; STANCO, L (SCI citations: 955)
•Title: HERWIG 5.1 - A MONTE-CARLO EVENT GENERATOR FOR SIMULATING HADRON EMISSION REACTIONS WITH INTERFERING GLUONS
•Source: COMPUTER PHYSICS COMMUNICATIONS, 67 (3): 465-508 JAN 1992 (Netherlands, Elsevier)
•Abstract: HERWIG is a general-purpose particle-physics event generator, which includes the simulation of hard lepton-lepton, lepton-hadron and hadron-hadron scattering and soft hadron-hadron collisions in one package. It uses the parton-shower approach for initial-state and final-state QCD radiation, including colour coherence effects and azimuthal correlations both within and between jets. This article includes a brief review of the physics underlying HERWIG, followed by a description of the program itself. This includes details of the input and control parameters used by the program, and the output data provided by it.
Sample output from a typical simulation is given and annotated.

Introduction example (introducing the focus or scope): presents
•Author(s): IDSO, KE; IDSO, SB (SCI citations: 225)
•Title: PLANT-RESPONSES TO ATMOSPHERIC CO2 ENRICHMENT IN THE FACE OF ENVIRONMENTAL CONSTRAINTS - A REVIEW OF THE PAST 10 YEARS RESEARCH
•Source: AGRICULTURAL AND FOREST METEOROLOGY, 69 (3-4): 153-203 JUL 1994 (Netherlands, Elsevier)
•Abstract: This paper presents a detailed analysis of several hundred plant carbon exchange rate (CER) and dry weight (DW) responses to atmospheric CO2 enrichment determined over the past 10 years. It demonstrates that the percentage increase in plant growth produced by raising the air's CO2 content is generally not reduced by less than optimal levels of light, water or soil nutrients, nor by high temperatures, salinity or gaseous air pollution. More often than not, in fact, the data show the relative growth-enhancing effects of atmospheric CO2 enrichment to be greatest when resource limitations and environmental stresses are most severe.

Introduction example (introducing the focus or scope): emphasizing
•Author(s): BESAG, J; GREEN, P; HIGDON, D; MENGERSEN, K (SCI citations: 296)
•Title: BAYESIAN COMPUTATION AND STOCHASTIC-SYSTEMS
•Source: STATISTICAL SCIENCE, 10 (1): 3-41 FEB 1995 (USA)
•Abstract: Markov chain Monte Carlo (MCMC) methods have been used extensively in statistical physics over the last 40 years, in spatial statistics for the past 20 and in Bayesian image analysis over the last decade. In the last five years, MCMC has been introduced into significance testing, general Bayesian inference and maximum likelihood estimation. This paper presents basic methodology of MCMC, emphasizing the Bayesian paradigm, conditional probability and the intimate relationship with Markov random fields in spatial statistics. Hastings algorithms are discussed, including Gibbs, Metropolis and some other variations. Pairwise difference priors are described and are used subsequently in three Bayesian applications, in each of which there is a pronounced spatial or temporal aspect to the modeling. The examples involve logistic regression in the presence of unobserved covariates and ordinal factors; the analysis of agricultural field experiments, with adjustment for fertility gradients; and processing of low-resolution medical images obtained by a gamma camera. Additional methodological issues arise in each of these applications and in the Appendices. The paper lays particular emphasis on the calculation of posterior probabilities and concurs with others in its view that MCMC facilitates a fundamental breakthrough in applied Bayesian modeling.

Introduction example (introducing the focus or scope): focuses
•Author(s): HUNT, KJ; SBARBARO, D; ZBIKOWSKI, R; GAWTHROP, PJ (SCI citations: 427)
•Title: NEURAL NETWORKS FOR CONTROL-SYSTEMS - A SURVEY
•Source: AUTOMATICA, 28 (6): 1083-1112 NOV 1992 (Netherlands, Elsevier)
•Abstract: This paper focuses on the promise of artificial neural networks in the realm of modelling, identification and control of nonlinear systems. The basic ideas and techniques of artificial neural networks are presented in language and notation familiar to control engineers. Applications of a variety of neural network architectures in control are surveyed.
We explore the links between the fields of control science and neural networks in a unified presentation and identify key areas for future research.

Introduction example (introducing the focus or scope): focus
•Author(s): Stuiver, M; Reimer, PJ; Bard, E; Beck, JW (SCI citations: 2131)
•Title: INTCAL98 radiocarbon age calibration, 24,000-0 cal BP
•Source: RADIOCARBON, 40 (3): 1041-1083 1998 (USA)
•Abstract: The focus of this paper is the conversion of radiocarbon ages to calibrated (cal) ages for the interval 24,000-0 cal BP (Before Present, 0 cal BP = AD 1950), based upon a sample set of dendrochronologically dated tree rings, uranium-thorium dated corals, and varve-counted marine sediment. The C-14 age-cal age information, produced by many laboratories, is converted to Delta(14)C profiles and calibration curves, for the atmosphere as well as the oceans. We discuss offsets in measured C-14 ages and the errors therein, regional C-14 age differences, tree-coral C-14 age comparisons and the time dependence of marine reservoir ages, and evaluate decadal vs. single-year C-14 results. Changes in oceanic deepwater circulation, especially for the 16,000-11,000 cal BP interval, are reflected in the Delta(14)C values of INTCAL98.

Introduction example (introducing the focus or scope): emphasis
•Author(s): LEBRETON, JD; BURNHAM, KP; CLOBERT, J; ANDERSON, DR
•Title: MODELING SURVIVAL AND TESTING BIOLOGICAL HYPOTHESES USING MARKED ANIMALS - A UNIFIED APPROACH WITH CASE-STUDIES
•Source: ECOLOGICAL MONOGRAPHS, 62 (1): 67-118 MAR 1992 (USA)
•Abstract: The understanding of the dynamics of animal populations and of related ecological and evolutionary issues frequently depends on a direct analysis of life history parameters. For instance, examination of trade-offs between reproduction and survival usually rely on individually marked animals, for which the exact time of death is most often unknown, because marked individuals cannot be followed closely through time. Thus, the quantitative analysis of survival studies and experiments must be based on capture-recapture (or resighting) models which consider, besides the parameters of primary interest, recapture or resighting rates that are nuisance parameters. [Note: a structured abstract.] This paper synthesizes, using a common framework, these recent developments together with new ones, with an emphasis on flexibility in modeling, model selection, and the analysis of multiple data sets. The effects on survival and capture rates of time, age, and categorical variables characterizing the individuals (e.g., sex) can be considered, as well as interactions between such effects. This "analysis of variance" philosophy emphasizes the structure of the survival and capture process rather than the technical characteristics of any particular model. The flexible array of models encompassed in this synthesis uses a common notation.
As a result of the great level of flexibility and relevance achieved, the focus is changed from fitting a particular model to model building and model selection.

Methods-section vocabulary
(1) Describing the research or experimental process: test, study, investigate, examine, experiment, discuss, consider, analyze, analysis.
(2) Stating the research or experimental methods: measure, estimate, calculate.
(3) Introducing applications and uses: use, apply, application.

Methods example (describing the process): discusses
•Author(s): LIANG, KY; ZEGER, SL; QAQISH, B (SCI citations: 298)
•Title: MULTIVARIATE REGRESSION-ANALYSES FOR CATEGORICAL-DATA
•Source: JOURNAL OF THE ROYAL STATISTICAL SOCIETY SERIES B-METHODOLOGICAL, 54 (1): 3-40 1992
•Abstract: It is common to observe a vector of discrete and/or continuous responses in scientific problems where the objective is to characterize the dependence of each response on explanatory variables and to account for the association between the outcomes. The response vector can comprise repeated observations on one variable, as in longitudinal studies or genetic studies of families, or can include observations for different variables. This paper discusses a class of models for the marginal expectations of each response and for pairwise associations. The marginal models are contrasted with log-linear models. Two generalized estimating equation approaches are compared for parameter estimation. The first focuses on the regression parameters; the second simultaneously estimates the regression and association parameters. The robustness and efficiency of each is discussed. The methods are illustrated with analyses of two data sets from public health research.

Methods example (describing the process): examines
•Author(s): Huo, QS; Margolese, DI; Stucky, GD (SCI citations: 643)
•Title: Surfactant control of phases in the synthesis of mesoporous silica-based materials
•Source: CHEMISTRY OF MATERIALS, 8 (5): 1147-1160 MAY 1996 (USA)
•Abstract: The low-temperature formation of liquid-crystal-like arrays made up of molecular complexes formed between molecular inorganic species and amphiphilic organic molecules is a convenient approach for the synthesis of mesostructure materials. This paper examines how the molecular shapes of covalent organosilanes, quaternary ammonium surfactants, and mixed surfactants in various reaction conditions can be used to synthesize silica-based mesophase configurations, MCM-41 (2d hexagonal, p6m), MCM-48 (cubic Ia3d), MCM-50 (lamellar), SBA-1 (cubic Pm3n), SBA-2 (3d hexagonal P6(3)/mmc), and SBA-3 (hexagonal p6m from acidic synthesis media). The structural function of surfactants in mesophase formation can to a first approximation be related to that of classical surfactants in water or other solvents with parallel roles for organic additives. The effective surfactant ion pair packing parameter, g = V/alpha(0)l, remains a useful molecular structure-directing index to characterize the geometry of the mesophase products, and phase transitions may be viewed as a variation of g in the liquid-crystal-like solid phase. Solvent and cosolvent structure direction can be effectively used by varying polarity, hydrophobic/hydrophilic properties and functionalizing the surfactant molecule, for example with hydroxy group or variable charge. Surfactants and synthesis conditions can be chosen and controlled to obtain predicted silica-based mesophase products. A room-temperature synthesis of the bicontinuous cubic phase, MCM-48, is presented.
A low-temperature (100 degrees C) and low-pH (7-10) treatment approach that can be used to give MCM-41 with high-quality, large pores (up to 60 Angstrom), and pore volumes as large as 1.6 cm(3)/g is described.

Methods example (describing the process): estimates
•Author(s): KESSLER, RC; MCGONAGLE, KA; ZHAO, SY; NELSON, CB; HUGHES, M; ESHLEMAN, S; WITTCHEN, HU; KENDLER, KS (SCI citations: 4350)
•Title: LIFETIME AND 12-MONTH PREVALENCE OF DSM-III-R PSYCHIATRIC-DISORDERS IN THE UNITED-STATES - RESULTS FROM THE NATIONAL-COMORBIDITY-SURVEY
•Source: ARCHIVES OF GENERAL PSYCHIATRY, 51 (1): 8-19 JAN 1994 (USA)
•Abstract: Background: This study presents estimates of lifetime and 12-month prevalence of 14 DSM-III-R psychiatric disorders from the National Comorbidity Survey, the first survey to administer a structured psychiatric interview to a national probability sample in the United States. Methods: The DSM-III-R psychiatric disorders among persons aged 15 to 54 years in the noninstitutionalized civilian population of the United States were assessed with data collected by lay interviewers using a revised version of the Composite International Diagnostic Interview. Results: Nearly 50% of respondents reported at least one lifetime disorder, and close to 30% reported at least one 12-month disorder. The most common disorders were major depressive episode, alcohol dependence, social phobia, and simple phobia. More than half of all lifetime disorders occurred in the 14% of the population who had a history of three or more comorbid disorders. These highly comorbid people also included the vast majority of people with severe disorders. Less than 40% of those with a lifetime disorder had ever received professional treatment, and less than 20% of those with a recent disorder had been in treatment during the past 12 months. Consistent with previous risk factor research, it was found that women had elevated rates of affective disorders and anxiety disorders, that men had elevated rates of substance use disorders and antisocial personality disorder, and that most disorders declined with age and with higher socioeconomic status. Conclusions: The prevalence of psychiatric disorders is greater than previously thought to be the case. Furthermore, this morbidity is more highly concentrated than previously recognized in roughly one sixth of the population who have a history of three or more comorbid disorders. This suggests that the causes and consequences of high comorbidity should be the focus of research attention. The majority of people with psychiatric disorders fail to obtain professional treatment. Even among people with a lifetime history of three or more comorbid disorders, the proportion who ever obtain specialty sector mental health treatment is less than 50%. These results argue for the importance of more outreach and more research on barriers to professional help-seeking.

Methods example (stating the methods): measure
•Author(s): Schlegel, DJ; Finkbeiner, DP; Davis, M (SCI citations: 2972)
•Title: Maps of dust infrared emission for use in estimation of reddening and cosmic microwave background radiation foregrounds
•Source: ASTROPHYSICAL JOURNAL, 500 (2): 525-553 Part 1 JUN 20 1998 (USA)
•The primary use of these maps is likely to be as a new estimator of Galactic extinction. To calibrate our maps, we assume a standard reddening law and use the colors of elliptical galaxies to measure the reddening per unit flux density of 100 mu m emission.
We find consistent calibration using the B-R color distribution of a sample of the 106 brightest cluster ellipticals, as well as a sample of 384 ellipticals with B-V and Mg line strength measurements. For the latter sample, we use the correlation of intrinsic B-V versus Mg index to tighten the power of the test greatly. We demonstrate that the new maps are twice as accurate as the older Burstein-Heiles reddening estimates in regions of low and moderate reddening. The maps are expected to be significantly more accurate in regions of high reddening. These dust maps will also be useful for estimating millimeter emission that contaminates cosmic microwave background radiation experiments and for estimating soft X-ray absorption. We describe how to access our maps readily for general use.

Results example (introducing applications and uses): application
•Author(s): MALLAT, S; ZHONG, S (SCI citations: 508)
•Title: CHARACTERIZATION OF SIGNALS FROM MULTISCALE EDGES
•Source: IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 14 (7): 710-732 JUL 1992 (USA)
•Abstract: A multiscale Canny edge detection is equivalent to finding the local maxima of a wavelet transform. We study the properties of multiscale edges through the wavelet
Simulation

City and urban simulation
A city simulator can be a city-building game, but it can also be a tool used by urban planners to understand how cities are likely to evolve in response to various policy decisions.
[Figure: a wooden mechanical horse simulator.]
Application Fields
Computer simulation
A computer simulation (also called a computer model or computational model) is a computer program, or a network of computers, that attempts to simulate an abstract model of a particular system.
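As a minimal illustration of this definition, the toy program below steps an abstract model (Newton's law of cooling) forward in time; the model, parameter values, and step size are arbitrary choices for the example, not drawn from this text.

```python
# Toy computer simulation: forward-Euler integration of Newton's law of
# cooling, dT/dt = -k * (T - T_env). All parameter values are arbitrary.
def simulate_cooling(T0=90.0, T_env=20.0, k=0.1, dt=0.5, steps=120):
    T, t, history = T0, 0.0, []
    for _ in range(steps):
        history.append((t, T))
        T += -k * (T - T_env) * dt   # advance the model state by one time step
        t += dt
    return history

if __name__ == "__main__":
    for t, T in simulate_cooling()[::20]:
        print(f"t={t:5.1f} min  T={T:6.2f} degC")
```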
Simulation is widely used for the training of civilian and military personnel; training simulations are commonly categorized as "live", "virtual", or "constructive" simulation.
Clinical healthcare simulators
Medical simulators are increasingly being developed and deployed to teach therapeutic and diagnostic procedures, as well as medical concepts and decision making, to personnel in the health professions. Types of models:
▲ Active models
▲ Interactive models
▲ Computer simulators
[Figure: a 3DiTeams learner percussing a patient's chest in a virtual field hospital.]
A Job-Shop Scheduling Model and Algorithm with Machine Deterioration

Vol. 41, No. 3, ACTA AUTOMATICA SINICA, March 2015

Job-shop Scheduling Model and Algorithm with Machine Deterioration

HUANG Min(1,2)  FU Ya-Ping(1,2)  WANG Hong-Feng(1,2)  ZHU Bing-Hu(1,2)  WANG Xing-Wei(1,2)

Abstract: For the job-shop scheduling problem in which machines deteriorate, so that a job's processing time is a linearly increasing function of its start time, a scheduling model minimizing the makespan is built, and a nested partition method is designed to solve it. In the sampling step, a partheno-genetic algorithm is embedded into the nested partition method to improve the diversity and quality of the samples. Experiments on real instances show that the proposed algorithm obtains high-quality solutions to the job-shop scheduling problem with machine deterioration and is robust.

Key words: nested partition method, partheno-genetic algorithm, job shop, machine deterioration, scheduling problem

Citation: Huang Min, Fu Ya-Ping, Wang Hong-Feng, Zhu Bing-Hu, Wang Xing-Wei. Job-shop scheduling model and algorithm with machine deterioration. Acta Automatica Sinica, 2015, 41(3): 551-558. DOI: 10.16383/j.aas.2015.c131067

Manuscript received November 19, 2013; accepted October 27, 2014. Supported by the National Science Foundation for Distinguished Young Scholars of China (71325002, 61225012), the National Natural Science Foundation of China (71071028, 71001018), the Fundamental Research Funds for the State Key Laboratory of Synthetical Automation for Process Industries (2013ZCX11), and the Fundamental Research Funds for the Central Universities (N130404017). Recommended by Associate Editor LI Le-Fei.
1. College of Information Science and Engineering, Northeastern University, Shenyang 110819
2. State Key Laboratory of Synthetical Automation for Process Industries, Northeastern University, Shenyang 110819

Scheduling is a key factor in the production efficiency of manufacturing enterprises; building reasonable scheduling models and finding effective scheduling methods and optimization techniques are important ways to raise production efficiency and reduce production cost. The job-shop scheduling problem (JSSP) is a production scheduling problem common to many manufacturing enterprises. It embodies complex production constraints: both the precedence relations among operations and the coordination of processing must be considered, making it a complex combinatorial optimization problem that many researchers have studied deeply and systematically [1].

Traditional JSSP research assumes that processing times are known constants [2]. In real processing, however, processing times are affected by many uncertain factors: references [3-4] consider an uncertain JSSP whose processing times follow triangular fuzzy numbers, and references [5-6] study JSSPs whose processing times follow uniform and normal distributions, respectively. These studies treat processing times as predetermined, fuzzy, or random, but none considers the lengthening of processing times caused by declining machine efficiency or changing physical properties of the workpieces. In the 1990s, Gupta et al. [7], accounting for workpiece characteristics, machine wear, and operator fatigue, assumed that a job's processing time is a linearly increasing function of its start time and first proposed a scheduling model with job deterioration. Such deterioration models remedy the lack of practicality of traditional scheduling models and have broad applications, e.g., in steel production, cleaning and maintenance, and services. So far, research on scheduling with deterioration has mostly addressed single machines [8-10] and flow shops [11-12]; job-shop settings have received little attention [13]. Mosheiov [14] first analyzed the complexity of job-shop scheduling with deteriorating times, showing that when actual processing times are linear in start times, minimizing the makespan in a job shop is NP-complete. References [13, 15] study lot-streaming job-shop scheduling with deterioration, assuming processing times exponential in start times. Reference [16] considers a flexible JSSP whose processing times deteriorate linearly.

Existing work on JSSPs with deterioration assumes that a job has the same deterioration rate on all machines. In some real production environments, such as job shops with both automatic and manually operated machines, the deterioration rate is affected by machine wear, operator fatigue, and the complexity of the operation, so different operations of a job often have different deterioration rates. Given the broad applicability of job-shop scheduling with deterioration, this paper studies the job-shop scheduling problem with machine deterioration (JSSP-MD) for job shops with automatic and manual machines, reflecting the fact that a manual machine operator's efficiency declines as working time increases. Allowing different deterioration rates for different operations, and assuming that each operation's actual processing time is a linear function of its start time on the corresponding machine, we build a scheduling model that minimizes the makespan, thereby maximizing machine utilization. Garey et al. [17] proved that the JSSP is NP-hard; JSSP-MD extends the JSSP and is more complex, so JSSP-MD is also NP-hard. We therefore solve it with a hybrid algorithm, NP-PGA, which embeds a partheno-genetic algorithm (PGA) into the sampling step of the nested partition method (NPM). Experiments on real instances verify and analyze the algorithm's performance on JSSP-MD.

1 Problem Description and Model

Effective use of resources is crucial to enterprise management decisions; raising machine utilization and minimizing the makespan are important criteria of scheduling optimization in practice. The JSSP with deterioration and makespan minimization can be described as follows. There are n jobs to be processed on m machines. Let J be the set of jobs and M the set of machines. Each job j in J (j = 1, 2, ..., n, with n the number of jobs) must be processed in a predetermined technological order A_j = {O_j,m1, O_j,m2, ..., O_j,mKj}, where A_j is the route of job j, O_j,mk means that the k-th operation of job j is processed on machine m_k (k = 1, 2, ..., K_j), and K_j is the number of operations of job j. S_ji and C_ji denote the start and completion times of job j on machine i, and p_ji its normal processing time on machine i. During processing, actual processing times grow because of machine wear and growing operator fatigue, i.e., processing times deteriorate. Let a_ji be the deterioration rate of job j on machine i, and assume the deterioration is linear in the operation's start time. The actual processing time p'_ji of job j on machine i is then given by Eq. (1), where M1 and M2 denote the sets of machines without and with deterioration:

  p'_ji = p_ji,                 for all j in J, i in M1
  p'_ji = p_ji + a_ji * S_ji,   for all j in J, i in M2        (1)

The scheduling objective is to determine the processing order of the jobs on each machine so as to minimize the makespan, subject to the constraints: a machine processes at most one job at a time; processing of a job may not be interrupted; operations of different jobs have no precedence constraints; all machines are available at time 0. The JSSP-MD model is:

  min max_{1<=i<=m} max_{1<=j<=n} C_ji                          (2)
  C_ji = S_ji + p'_ji,  for all j in J, i in M                  (3)
  C_ji <= S_li or C_li <= S_ji,  for j, l in J, i in M          (4)
  C_ji <= S_jk,  for consecutive operations (o_ji, o_jk) of A_j, j in J, i, k in M    (5)
  S_ji, C_ji >= 0,  for all j in J, i in M                      (6)
  p'_ji given by Eq. (1)                                        (7)

Here Eq. (2) is the objective, minimizing the makespan; Eq. (3) states that each operation's completion time is its start time plus its actual processing time; Eqs. (4) and (5) enforce the processing order of each job's operations imposed by the technological constraints and the order of the machines processing each job; Eq. (6) bounds the variables; Eq. (7) relates actual to normal processing times.

2 The NP-PGA Algorithm

The NP method is an optimization method for complex deterministic and stochastic problems that has been proven to converge to a global optimum with probability 1. Shi et al. have applied it to the TSP, supply chain network optimization, product design, and resource allocation [18-19] with good results.

2.1 Basic idea of the NP method

Let X be the feasible space of optimization problem P. The regions produced by the partitioning strategy are called feasible regions; they are pairwise disjoint and their union is the whole feasible space X. For a discrete problem, a region containing a single solution is called a singleton region. If region sigma in X is obtained by partitioning region eta in X, then sigma is a subregion of eta and eta is the parent region of sigma. The number of partitions from the initial region to a given region is that region's depth; the initial region has depth 0, and singleton regions have maximum depth.

The NP method has four basic operators [18]: partitioning, sampling, selection, and backtracking. Its idea is as follows. 1) At iteration k, if sigma(k) in X is considered the most promising region (the one most likely to contain the optimum x*), partition sigma(k) into M subregions and call X \ sigma(k) the surrounding region, giving M+1 pairwise disjoint regions. 2) For each region sigma_j(k), j = 1, 2, ..., M+1, use the sampling operator to draw N_j random points x_j1, x_j2, ..., x_jNj, compute the corresponding objective values f(x_j1), ..., f(x_jNj), and take the best value as the region's promising index I(sigma_j). 3) Based on the samples, use the selection operator to determine the most promising region of iteration k+1; continue partitioning in this way until an indivisible singleton region is reached. 4) If the most promising region is the surrounding region X \ sigma(k), use the backtracking operator to return to the most promising region of a previous iteration and repeat steps 1)-3).

2.2 Solving JSSP-MD with NP-PGA

Too much backtracking hurts the efficiency of the NP method. To reduce backtracking, the quality of the samples drawn by the sampling operator must be improved so that the search proceeds in the right direction. We therefore propose NP-PGA to improve the search efficiency of the NP method. Figure 1 is the flowchart of NP-PGA. To solve JSSP-MD effectively, the basic operators of NP-PGA are designed as follows.

Partitioning. Suppose n jobs are processed on m machines; the known routes of the jobs determine the set of jobs processed on each machine. A complete solution is divided into m segments, as in Figure 2, where pi_j is the sequence of the corresponding operations on machine j. The partitioning operator fixes the position of one job on one machine at a time, determining the job sequences machine by machine from machine 1 to machine m. Take 2 machines and 3 jobs with routes A1 = {O11, O12}, A2 = {O22, O21}, A3 = {O31, O32} as an example (Figure 3): "*" marks positions still to be assigned; the first level means both machines are unassigned; at the second level, the first and third nodes assign the first operations of jobs 1 and 3 to position 1 on machine 1, while the second node assigns the second operation of job 2 to position 1 on machine 1; the two nodes derived at the third level assign the second operations of jobs 1 and 3 to position 2 on machine 1; and so on until the assignment on both machines is complete.

Sampling. The PGA-embedded sampling works as follows: 1) draw N solutions at random from the region as the initial PGA population; 2) for each individual, randomly choose some machines whose job order is not yet fixed, and randomly apply gene swap, gene-segment shift, or gene-segment inversion to change the operation order on those machines; 3) compute each individual's objective value and take its reciprocal as the fitness; 4) select the next generation by elitism plus roulette wheel; 5) repeat steps 2)-4) until the stopping condition is met, output the best individual, and use its objective value as the region's promising index. Repeating this for all regions completes the sampling. This sampling obtains good solutions representative of each region's quality, keeping the search on the right course.

[Figure 1: Flowchart of the NP-PGA algorithm]
[Figure 2: Representation of a solution]
[Figure 3: Partitioning of the feasible region]

Selection. If the region with the best promising index in the current iteration is a subregion of the current most promising region, it becomes the most promising region of the next iteration.

Backtracking. If the region with the best promising index in the current iteration is the surrounding region, backtracking is needed. Two strategies are used: 1) backtrack to the parent region of the current most promising region; 2) backtrack to the parent region of the region containing the best solution found so far.

With the operators designed above, the pseudocode of NP-PGA is shown in Figure 4. After initializing the algorithm parameters and the most promising region, the algorithm iterates: partition the most promising region into a number of subregions and form the current surrounding region; sample each region with the PGA and compute each region's promising index; pick the region with the best promising index and decide, from its relation to the current most promising region, whether to backtrack; finally fix the most promising region and enter the next iteration.

[Figure 4: Pseudocode of the NP-PGA algorithm]
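To make the model concrete before the experiments, the sketch below decodes a job-repetition operation sequence (a standard JSSP encoding, assumed here for illustration; the paper does not publish code) into a schedule and computes the makespan under the linear deterioration of Eq. (1). This is the evaluation that the compared algorithms must perform for every sampled solution.

```python
# Sketch: makespan of a schedule under linear machine deterioration, Eq. (1).
# The operation-list encoding and all data below are illustrative assumptions.
def makespan(seq, routes, p, alpha):
    """seq    : list of job ids; the k-th occurrence of job j schedules the
                k-th operation on job j's route (standard JSSP decoding).
       routes : routes[j] = list of machine ids in processing order (A_j).
       p      : p[j][k] = normal processing time of job j's k-th operation.
       alpha  : alpha[j][k] = deterioration rate (0 for machines in M1)."""
    next_op = {j: 0 for j in routes}       # next operation index per job
    job_ready = {j: 0.0 for j in routes}   # completion time of job's last op
    mach_ready = {}                        # completion time per machine
    for j in seq:
        k = next_op[j]; next_op[j] += 1
        i = routes[j][k]
        S = max(job_ready[j], mach_ready.get(i, 0.0))  # start time S_ji
        dur = p[j][k] + alpha[j][k] * S                # actual time p'_ji, Eq. (1)
        job_ready[j] = mach_ready[i] = S + dur         # completion C_ji, Eq. (3)
    return max(job_ready.values())                     # objective, Eq. (2)

# Tiny 2-job, 2-machine example with made-up numbers:
routes = {1: [0, 1], 2: [1, 0]}
p      = {1: [4, 3], 2: [2, 5]}
alpha  = {1: [0.0, 0.2], 2: [0.1, 0.0]}
print(makespan([1, 2, 2, 1], routes, p, alpha))   # prints 9.0 for this toy data
```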
3 Experimental Analysis

To verify the algorithm, three instances of different sizes were taken from a manufacturing enterprise; the computer had a Core 2 Duo 2.4 GHz CPU and 2 GB RAM. Instance 1 has 6 jobs and 6 machines, instance 2 has 7 jobs and 7 machines, and instance 3 has 8 jobs and 8 machines; the data are given in Tables 1-3, where p_ji and a_ji are the normal processing time and deterioration rate of each operation of job j on the corresponding machine i along route A_j. Each instance was run 30 times, and the best value (Best), worst value (Worst), mean (Mean), standard deviation (S), and average CPU time (Time, in seconds) were taken as evaluation criteria. The NP method used two stopping conditions: 1) a singleton region is reached; 2) a maximum number of partitions, here mn^2. The experiments first use instance 1 to examine the effect of the backtracking strategy, the number of samples, and the number of PGA iterations on algorithm performance. The results and analysis follow.

Table 1  Experimental data of the first instance
J | A_j | p_ji | a_ji
1 | {O11,O14,O12,O13,O16} | {8,2,9,4,17} | {0.4,0.6,0,0.1,0.4}
2 | {O23,O26,O25,O21,O22} | {11,1,9,7,8} | {0.1,0.4,0.2,0.4,0}
3 | {O33,O31,O32,O34,O35} | {17,10,7,20,9} | {0.1,0.4,0,0.6,0.2}
4 | {O44,O42,O45,O43,O41} | {1,9,16,18,2} | {0.6,0,0.2,0.1,0.4}
5 | {O51,O56,O52,O55,O54} | {2,3,3,10,11} | {0.4,0.4,0,0.2,0.6}
6 | {O62,O66,O65,O64,O63} | {3,17,13,4,14} | {0,0.4,0.2,0.6,0.1}

Table 2  Experimental data of the second instance
J | A_j | p_ji | a_ji
1 | {O15,O13,O14,O16,O17,O12,O11} | {19,18,7,18,3,10,15} | {0.2,0.1,0.6,0.4,0.3,0,0.4}
2 | {O21,O24,O23,O22,O27,O25,O26} | {14,11,12,5,2,20,14} | {0.4,0.1,0.6,0.4,0.3,0,0.4}
3 | {O35,O36,O37,O33,O34,O31,O32} | {13,2,4,10,16,9,8} | {0.2,0.4,0.3,0.1,0.6,0.4,0}
4 | {O41,O44,O47,O46,O45,O43,O42} | {7,16,4,1,9,4,9} | {0.4,0.6,0.3,0.4,0.2,0.1,0}
5 | {O52,O56,O54,O51,O53,O57,O55} | {6,3,7,8,9,13,6} | {0,0.4,0.6,0.4,0.2,0.3,0.2}
6 | {O66,O62,O65,O64,O61,O63,O67} | {1,4,5,10,11,4,12} | {0.4,0,0.2,0.6,0.4,0.1,0.3}
7 | {O74,O76,O77,O73,O72,O75,O71} | {11,12,15,8,16,7,13} | {0.6,0.4,0.3,0.1,0,0.2,0.4}

Table 3  Experimental data of the third instance
J | A_j | p_ji | a_ji
1 | {O18,O17,O16,O11,O13} | {7,20,17,4,8} | {0.3,0.3,0.4,0.4,0.1}
2 | {O23,O27,O24,O26,O22,O28} | {3,19,13,1,9,15} | {0.1,0.3,0.6,0.4,0,0.4}
3 | {O37,O33,O34,O35} | {14,5,12,18} | {0.3,0.1,0.6,0.2}
4 | {O44,O45,O42,O46,O47,O43,O41,O48} | {17,9,6,9,16,5,8,20} | {0.6,0.2,0,0.4,0.3,0.1,0.4,0.3}
5 | {O51,O56,O55,O53,O58} | {10,7,20,17,14} | {0.4,0.4,0.2,0.1,0.3}
6 | {O66,O63,O64,O67,O65,O62,O68,O61} | {13,14,8,19,16,18,14,14} | {0.4,0,0.6,0.3,0.2,0,0.3,0.4}
7 | {O73,O72,O71,O75,O74,O78,O77,O76} | {10,18,10,7,9,7,9,7} | {0.1,0,0.4,0.2,0.6,0.3,0.3,0.4}
8 | {O81,O85,O88,O86,O82} | {13,1,15,9,4} | {0.4,0.2,0.3,0.4,0}

3.1 Effect of the backtracking strategy on the NP method

Using the standard NP method, the effect of backtracking strategies 1) and 2) and of the number of samples was examined. Table 4 (SN is the number of samples) shows that both strategies obtain the same best solution within acceptable time. In mean and standard deviation, strategy 2) searches better than strategy 1) and is more stable. Increasing the number of samples improves solution quality and stability, but also increases the solution time. For stability, strategy 2) is used as the backtracking strategy in the remaining experiments.

Table 4  Experimental results on different backtracking strategies and sampling numbers
Strategy | SN | Best | Worst | Mean | S | Time (s)
1) | 20 | 178.83 | 219.87 | 192.41 | 16.03 | 19.23
1) | 30 | 178.83 | 205.91 | 185.85 | 9.49 | 25.51
1) | 40 | 178.83 | 204.40 | 185.01 | 10.78 | 29.26
1) | 50 | 178.83 | 192.24 | 183.83 | 10.12 | 39.33
1) | 60 | 178.83 | 184.96 | 181.77 | 8.99 | 53.80
2) | 20 | 178.83 | 221.46 | 190.07 | 9.06 | 14.37
2) | 30 | 178.83 | 198.10 | 185.31 | 9.85 | 24.43
2) | 40 | 178.83 | 215.10 | 184.73 | 9.32 | 38.00
2) | 50 | 178.83 | 195.34 | 182.55 | 8.96 | 57.42
2) | 60 | 178.83 | 183.51 | 180.22 | 8.37 | 88.57

3.2 Effect of the number of samples on NP-PGA

To examine the effect of the number of samples on NP-PGA, the PGA iteration count was fixed at 50 and backtracking strategy 2) was used. Table 5 (NG is the iteration count) gives the results. The trend resembles the conclusion of Section 3.1: all five sample counts reach the same best solution, and judging by the mean and standard deviation the algorithm is most stable with 30 samples. The number of samples is therefore set to 30 in the subsequent experiments.

Table 5  Experimental results on different sampling numbers
SN | NG | Best | Worst | Mean | S | Time (s)
10 | 50 | 178.83 | 210.69 | 186.57 | 12.94 | 17.74
20 | 50 | 178.83 | 198.09 | 186.03 | 12.32 | 36.50
30 | 50 | 178.83 | 192.10 | 181.69 | 5.49 | 71.13
40 | 50 | 178.83 | 192.10 | 182.80 | 7.31 | 85.09
50 | 50 | 178.83 | 192.10 | 181.43 | 5.74 | 99.76

3.3 Effect of the number of PGA iterations

To examine the effect of the PGA iteration count, the number of samples was fixed at 30 with backtracking strategy 2). Table 6 shows that as the iteration count grows the solutions improve, with both the mean and standard deviation falling, but the computational cost also grows, increasing the solution time. With 30 iterations NP-PGA is most stable, so the PGA iteration count is set to 30 in the subsequent experiments.

Table 6  Experimental results on different iteration counts of the PGA
SN | NG | Best | Worst | Mean | S | Time (s)
30 | 10 | 178.83 | 210.69 | 189.43 | 13.25 | 22.51
30 | 20 | 178.83 | 210.69 | 186.99 | 10.13 | 30.64
30 | 30 | 178.83 | 188.84 | 183.21 | 4.97 | 45.19
30 | 40 | 178.83 | 192.10 | 183.83 | 5.81 | 65.59
30 | 50 | 178.83 | 198.09 | 184.99 | 7.71 | 82.45

3.4 Algorithm comparison

To verify the effectiveness of NP-PGA, instances 1-3 were solved with an enumeration algorithm (EA), a genetic algorithm (GA), the NP method, and NP-PGA. EA yields the problem's optimum and checks NP-PGA's ability to reach it; GA has been shown to solve job-shop scheduling effectively and serves as the comparison algorithm in many studies [15-16, 20]; NP verifies the effectiveness of the improvement proposed here. GA parameters: population size 100, crossover rate 0.7, mutation rate 0.3, 500 iterations. NP-PGA used backtracking strategy 2), 30 samples, and 30 PGA iterations; NP used the same parameters as NP-PGA.

Table 7 gives the results for the three instances ("-" means EA could not obtain the optimum within acceptable time). For the J6xM6 problem, EA took 3 237.8 s to compute the optimum, and its result agrees with those obtained by GA, NP, and NP-PGA. For the J7xM7 problem, EA could not find the optimum in acceptable time, while the other three algorithms all obtained approximate solutions; clearly, NP-PGA's solutions are better than GA's and NP's, though its solution time is somewhat longer. For the J8xM8 problem, NP-PGA's advantage is more pronounced: in best value, mean, standard deviation, and solution time, NP-PGA beats both GA and NP. Because the NP method samples randomly, the samples are not guaranteed to be the best solutions of their regions, which causes backtracking, and repeated backtracking hurts execution efficiency. Embedding the PGA in the sampling step addresses this problem: as Table 7 shows, NP-PGA improves on NP in best value, mean, standard deviation, and solution time, and its advantage grows with the problem size. Embedding the PGA in NP raises sampling diversity and sample quality, yielding solutions that better represent each feasible region, which reduces backtracking and improves the algorithm's performance.

Table 7  Comparison of results obtained by EA, GA, NP, and NP-PGA
Problem | Algorithm | Best | Mean | S | Time (s)
J6xM6 | EA | 178.83 | 178.83 | 0 | 3237.8
J6xM6 | GA | 178.83 | 181.38 | 1.92 | 33.39
J6xM6 | NP | 178.83 | 192.10 | 9.06 | 38.03
J6xM6 | NP-PGA | 178.83 | 190.07 | 5.49 | 45.19
J7xM7 | EA | - | - | - | -
J7xM7 | GA | 894.80 | 917.08 | 19.50 | 51.68
J7xM7 | NP | 901.51 | 921.75 | 11.82 | 65.77
J7xM7 | NP-PGA | 891.30 | 910.08 | 11.20 | 74.52
J8xM8 | EA | - | - | - | -
J8xM8 | GA | 838.60 | 842.76 | 10.22 | 95.88
J8xM8 | NP | 833.43 | 873.42 | 17.26 | 119.42
J8xM8 | NP-PGA | 805.28 | 820.86 | 10.04 | 88.93

To show the effectiveness of NP-PGA intuitively, convergence curves were drawn for the best of the 30 runs of each algorithm. Figures 5-7 give the convergence curves of the three algorithms on the J6xM6, J7xM7, and J8xM8 problems, with runtime on the horizontal axis and the best solution found on the vertical axis. NP-PGA converges faster and better than GA and NP, and its advantage is most pronounced on J8xM8.

[Figure 5: Convergence curves of the three algorithms on J6xM6]
[Figure 6: Convergence curves of the three algorithms on J7xM7]
[Figure 7: Convergence curves of the three algorithms on J8xM8]

4 Conclusion

Job-shop scheduling with machine deterioration is a key problem that manufacturing enterprises urgently need to solve, and this paper has studied it. First, the scheduling problem with machine deterioration in a job-shop environment was described, and a model minimizing the makespan was built; a hybrid nested partition algorithm embedding a partheno-genetic algorithm was then designed to solve it; finally, comparisons with an enumeration algorithm, a genetic algorithm, and the standard nested partition method showed that the proposed algorithm performs well in solution quality, solution time, and stability, with a clearer advantage as the numbers of jobs and machines grow. The study provides an effective modeling and solution tool for job-shop scheduling with machine deterioration and has wide application in job shops where automatic and manual machines coexist. Future work may analyze the effect of the machine deterioration rates on algorithm performance.

References
1 Brucker P. Scheduling Algorithms. Berlin: Springer-Verlag, 2007. 69-83
2 Blazewicz J, Domschke W, Pesch E. The job shop scheduling problem: conventional and new solution techniques. European Journal of Operational Research, 1996, 93(1): 1-33
3 Wang L, Tang D B. An improved adaptive genetic algorithm based on hormone modulation mechanism for job-shop scheduling problem. Expert Systems with Applications, 2011, 38(6): 7243-7250
4 Qiao Wei, Wang Bing, Sun Jie. Uncertain job shop scheduling problems solved by genetic algorithm. Computer Integrated Manufacturing Systems, 2007, 13(12): 2452-2455 (in Chinese)
5 Li Fu-Ming, Zhu Yun-Long, Yin Chao-Wan, Song Xiao-Yu. Research on fuzzy job shop scheduling with alternative machines. Computer Integrated Manufacturing Systems, 2006, 12(2): 169-173 (in Chinese)
6 Yan Li-Jun, Li Zong-Bin, Wei Jun-Hu, Du Xuan. A new hybrid optimization algorithm and its application in job shop scheduling. Acta Automatica Sinica, 2008, 34(5): 604-608 (in Chinese)
7 Gupta J N D, Gupta S K. Single facility scheduling with nonlinear processing times. Computers and Industrial Engineering, 1988, 14(4): 387-393
8 Wu H P, Huang M. Improved estimation of distribution algorithm for the problem of single-machine scheduling with deteriorating jobs and different due dates. Computational and Applied Mathematics, 2014, 33(3): 557-573
9 Mosheiov G. Scheduling jobs under simple linear deterioration. Computers and Operations Research, 1994, 21(6): 653-659
10 Wu C C, Wu W H, Wu W H, Hsu P H, Yin Y Q, Xu J Y. A single-machine scheduling with a truncated linear deterioration and ready times. Information Sciences, 2014, 256: 109-125
11 Cheng M B, Tadikamalla P R, Shang J, Zhang S Q. Bi-criteria hierarchical optimization of two-machine flow shop scheduling problem with time-dependent deteriorating jobs. European Journal of Operational Research, 2014, 234(3): 650-657
12 Wang J B, Wang M Z. Solution algorithms for the total weighted completion time minimization flow shop scheduling with decreasing linear deterioration. The International Journal of Advanced Manufacturing Technology, 2013, 67(1-4): 243-253
13 Liu C H, Chen L S, Lin P S. Lot streaming multiple jobs with values exponentially deteriorating over time in a job-shop environment. International Journal of Production Research, 2013, 51(1): 202-214
14 Mosheiov G. Complexity analysis of job-shop scheduling with deteriorating jobs. Discrete Applied Mathematics, 2002, 117(1-3): 195-209
15 Liu C H. Scheduling jobs with values exponentially deteriorating over time in a job shop environment. In: Proceedings of the 2011 International MultiConference of Engineers and Computer Scientists. Hong Kong, China: Newswood Limited, 2011. 1113-1118
16 Araghi M E T, Jolai F, Rabiee M. Incorporating learning effect and deterioration for solving a SDST flexible job-shop scheduling problem with a hybrid meta-heuristic approach. International Journal of Computer Integrated Manufacturing, 2013, 27(8): 733-746
17 Garey M R, Johnson D S, Sethi R. The complexity of flow shop and job shop scheduling. Mathematics of Operations Research, 1976, 1(2): 117-129
18 Shi L, Olafsson S. Nested Partitions Method, Theory and Applications. New York: Springer-Verlag, 2008. 131-226
19 Shi L, Olafsson S. Nested partitions method for global optimization. Operations Research, 2000, 48(3): 390-407
20 Wang Y M, Yin H L, Qin K D. A novel genetic algorithm for flexible job shop scheduling problems with machine disruptions. The International Journal of Advanced Manufacturing Technology, 2013, 68(5-8): 1317-1326

HUANG Min  Professor at the College of Information Science and Engineering, Northeastern University. Her research interest covers production planning, scheduling and inventory control, behavioral operations, management of logistics and supply chains, risk management, and soft computing.
FU Ya-Ping  Ph.D. candidate at the College of Information Science and Engineering, Northeastern University. His research interest covers production planning and scheduling and intelligent optimization algorithms. Corresponding author of this paper.
WANG Hong-Feng  Associate professor at the College of Information Science and Engineering, Northeastern University. His research interest covers evolutionary computation, production planning and scheduling, and management of logistics and supply chains.
ZHU Bing-Hu  Master student at the College of Information Science and Engineering, Northeastern University. His research interest covers production planning and scheduling and intelligent optimization algorithms.
WANG Xing-Wei  Professor at the College of Information Science and Engineering, Northeastern University. His research interest covers the next generation Internet (NGI), optical Internet, and mobile Internet.
Computing 8PSK with the Monte Carlo Method

1. Introduction
Wuhan Textile University, Class of 2011 Graduation Design (Thesis)
The Monte Carlo method is a computational method based on probability and statistics. It associates the problem to be solved with a suitable probability model and uses a computer to carry out statistical simulation or sampling, obtaining an approximate solution to the problem. The method is named after the casino city of Monte Carlo to symbolize its probabilistic and statistical character; it is also called the statistical simulation method or the random sampling technique.
8PSK (8-ary Phase Shift Keying) is a phase modulation algorithm. Phase modulation (PM) developed from frequency modulation (FM). The "PSK" in "8PSK" stands for phase shift keying, a form of phase modulation used to express a sequence of discrete states; 8PSK is the PSK with eight states. With half as many states (four) it becomes QPSK, and with twice as many (sixteen) it becomes 16PSK. Because 8PSK has eight states, each 8PSK symbol encodes three bits. The anti-noise capability (noise immunity) of 8PSK is worse than that of 4PSK, but it provides a higher data throughput.
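A minimal sketch of such a Monte Carlo computation for 8PSK over an AWGN channel follows. It assumes unit-energy symbols and minimum-distance detection; the symbol count, seed, and Eb/N0 values are illustrative choices, not taken from the thesis.

```python
# Monte Carlo estimate of the 8PSK symbol-error rate over AWGN.
import numpy as np

rng = np.random.default_rng(0)
M = 8
const = np.exp(2j * np.pi * np.arange(M) / M)    # unit-energy 8PSK constellation

def ser_8psk(ebn0_db, n_sym=200_000):
    ebn0 = 10 ** (ebn0_db / 10)
    esn0 = ebn0 * np.log2(M)                     # Es/N0 = (log2 M) * Eb/N0
    sigma = np.sqrt(1 / (2 * esn0))              # noise std per real dimension
    tx = rng.integers(0, M, n_sym)               # random transmitted symbols
    noise = sigma * (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))
    rx = const[tx] + noise
    det = np.argmin(np.abs(rx[:, None] - const[None, :]), axis=1)  # nearest point
    return np.mean(det != tx)                    # fraction of symbol errors

for snr in (4, 8, 12):
    print(f"Eb/N0 = {snr:2d} dB  ->  SER ~ {ser_8psk(snr):.4g}")
```

Increasing n_sym reduces the statistical error of the estimate, which is the essence of the Monte Carlo approach described above.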
ABSTRACT Progressive Simplicial Complexes

Progressive Simplicial Complexes

Jovan Popović (Carnegie Mellon University)    Hugues Hoppe (Microsoft Research)
(Work performed while at Microsoft Research.)

ABSTRACT

In this paper, we introduce the progressive simplicial complex (PSC) representation, a new format for storing and transmitting triangulated geometric models. Like the earlier progressive mesh (PM) representation, it captures a given model as a coarse base model together with a sequence of refinement transformations that progressively recover detail. The PSC representation makes use of a more general refinement transformation, allowing the given model to be an arbitrary triangulation (e.g. any dimension, non-orientable, non-manifold, non-regular), and the base model to always consist of a single vertex. Indeed, the sequence of refinement transformations encodes both the geometry and the topology of the model in a unified multiresolution framework. The PSC representation retains the advantages of PM's. It defines a continuous sequence of approximating models for runtime level-of-detail control, allows smooth transitions between any pair of models in the sequence, supports progressive transmission, and offers a space-efficient representation. Moreover, by allowing changes to topology, the PSC sequence of approximations achieves better fidelity than the corresponding PM sequence. We develop an optimization algorithm for constructing PSC representations for graphics surface models, and demonstrate the framework on models that are both geometrically and topologically complex.

CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling - surfaces and object representations.

Additional Keywords: model simplification, level-of-detail representations, multiresolution, progressive transmission, geometry compression.

1 INTRODUCTION

Modeling and 3D scanning systems commonly give rise to triangle meshes of high complexity. Such meshes are notoriously difficult to render, store, and transmit. One approach to speed up rendering is to replace a complex mesh by a set of level-of-detail (LOD) approximations; a detailed mesh is used when the object is close to the viewer, and coarser approximations are substituted as the object recedes [6,8]. These LOD approximations can be precomputed automatically using mesh simplification methods (e.g. [2,10,14,20,21,22,24,27]). For efficient storage and transmission, mesh compression schemes [7,26] have also been developed.

The recently introduced progressive mesh (PM) representation [13] provides a unified solution to these problems. In PM form, an arbitrary mesh M is stored as a coarse base mesh M0 together with a sequence of n detail records that indicate how to incrementally refine M0 into Mn = M (see Figure 7). Each detail record encodes the information associated with a vertex split, an elementary transformation that adds one vertex to the mesh. In addition to defining a continuous sequence of approximations M0 ... Mn, the PM representation supports smooth visual transitions (geomorphs), allows progressive transmission, and makes an effective mesh compression scheme.

The PM representation has two restrictions, however. First, it can only represent meshes: triangulations that correspond to orientable(1) 2-dimensional manifolds. Triangulated(2) models that cannot be represented include 1-d manifolds (open and closed curves), higher-dimensional polyhedra (e.g. triangulated volumes), non-orientable surfaces (e.g. Möbius strips), non-manifolds (e.g. two cubes joined along an edge), and non-regular models (i.e. models of mixed dimensionality). Second, the expressiveness of the PM vertex split transformations constrains all meshes M0 ... Mn to have the same topological type. Therefore, when M is topologically complex, the simplified base mesh M0 may still have numerous triangles (Figure 7).

In contrast, a number of existing simplification methods allow topological changes as the model is simplified (Section 6). Our work is inspired by vertex unification schemes [21,22], which merge vertices of the model based on geometric proximity, thereby allowing genus modification and component merging.

In this paper, we introduce the progressive simplicial complex (PSC) representation, a generalization of the PM representation that permits topological changes. The key element of our approach is the introduction of a more general refinement transformation, the generalized vertex split, that encodes changes to both the geometry and topology of the model. The PSC representation expresses an arbitrary triangulated model M (e.g. any dimension, non-orientable, non-manifold, non-regular) as the result of successive refinements applied to a base model M1 that always consists of a single vertex (Figure 8). Thus both geometric and topological complexity are recovered progressively. Moreover, the PSC representation retains the advantages of PM's, including continuous LOD, geomorphs, progressive transmission, and model compression. In addition, we develop an optimization algorithm for constructing a PSC representation from a given model, as described in Section 4.

(1) The particular parametrization of vertex splits in [13] assumes that mesh triangles are consistently oriented.
(2) Throughout this paper, we use the words "triangulated" and "triangulation" in the general dimension-independent sense.

[Figure 1: Illustration of a simplicial complex K and some of its subsets.]

2 BACKGROUND

2.1 Concepts from algebraic topology

To precisely define both triangulated models and their PSC representations, we find it useful to introduce some elegant abstractions from algebraic topology (e.g. [15,25]).

The geometry of a triangulated model is denoted as a tuple (K, V) where the abstract simplicial complex K is a combinatorial structure specifying the adjacency of vertices, edges, triangles, etc., and V is a set of vertex positions specifying the shape of the model in R^3.

More precisely, an abstract simplicial complex K consists of a set of vertices {1, ..., m} together with a set of non-empty subsets of the vertices, called the simplices of K, such that any set consisting of exactly one vertex is a simplex in K, and every non-empty subset of a simplex in K is also a simplex in K. A simplex containing exactly d+1 vertices has dimension d and is called a d-simplex. As illustrated pictorially in Figure 1, the faces of a simplex s, denoted s~, is the set of non-empty subsets of s. The star of s, denoted star(s), is the set of simplices of which s is a face. The children of a d-simplex s are the (d-1)-simplices of s~, and its parents are the (d+1)-simplices of star(s). A simplex with exactly one parent is said to be a boundary simplex, and one with no parents a principal simplex. The dimension of K is the maximum dimension of its simplices; K is said to be regular if all its principal simplices have the same dimension.

To form a triangulation from K, identify its vertices {1, ..., m} with the standard basis vectors {e_1, ..., e_m} of R^m. For each simplex s, let the open simplex <s> in R^m denote the interior of the convex hull of its vertices:

  <s> = { sum_{j in s} b_j e_j : sum_{j in s} b_j = 1 and b_j > 0 for all j in s }.

The topological realization |K| is defined as |K| = union over s in K of <s>. The geometric realization of K is the image phi_V(|K|), where phi_V: R^m -> R^3 is the linear map that sends the j-th standard basis vector e_j in R^m to the vertex position v_j in V.
standard basis vector jm to j 3.Only a restricted set of vertex positions V =1m lead to an embedding of V (K )3,that is,prevent self-intersections.The geometric realization V (K )is often called a simplicial complex or polyhedron ;it is formed by an arbitrary union of points,segments,triangles,tetrahedra,etc.Note that there generally exist many triangulations (K V )for a given polyhedron.(Some of the vertices V may lie in the polyhedron’s interior.)Two sets are said to be homeomorphic (denoted =)if there ex-ists a continuous one-to-one mapping between them.Equivalently,they are said to have the same topological type .The topological realization K is a d-dimensional manifold without boundary if for each vertex j ,star(j )=d .It is a d-dimensional manifold if each star(v )is homeomorphic to either d or d +,where d +=d:10.Two simplices s 1and s 2are d-adjacent if they have a common d -dimensional face.Two d -adjacent (d +1)-simplices s 1and s 2are manifold-adjacent if star(s 1s 2)=d +1.Figure 2:Illustration of the edge collapse transformation and its inverse,the vertex split.Transitive closure of 0-adjacency partitions K into connected com-ponents .Similarly,transitive closure of manifold-adjacency parti-tions K into manifold components .2.2Review of progressive meshesIn the PM representation [13],a mesh with appearance attributes is represented as a tuple M =(K V D S ),where the abstract simpli-cial complex K is restricted to define an orientable 2-dimensional manifold,the vertex positions V =1m determine its ge-ometric realization V (K )in3,D is the set of discrete material attributes d f associated with 2-simplices f K ,and S is the set of scalar attributes s (v f )(e.g.normals,texture coordinates)associated with corners (vertex-face tuples)of K .An initial mesh M =M n is simplified into a coarser base mesh M 0by applying a sequence of n successive edge collapse transforma-tions:(M =M n )ecol n 1ecol 1M 1ecol 0M 0As shown in Figure 2,each ecol unifies the two vertices of an edgea b ,thereby removing one or two triangles.The position of the resulting unified vertex can be arbitrary.Because the edge collapse transformation has an inverse,called the vertex split transformation (Figure 2),the process can be reversed,so that an arbitrary mesh M may be represented as a simple mesh M 0together with a sequence of n vsplit records:M 0vsplit 0M 1vsplit 1vsplit n 1(M n =M )The tuple (M 0vsplit 0vsplit n 1)forms a progressive mesh (PM)representation of M .The PM representation thus captures a continuous sequence of approximations M 0M n that can be quickly traversed for interac-tive level-of-detail control.Moreover,there exists a correspondence between the vertices of any two meshes M c and M f (0c f n )within this sequence,allowing for the construction of smooth vi-sual transitions (geomorphs)between them.A sequence of such geomorphs can be precomputed for smooth runtime LOD.In addi-tion,PM’s support progressive transmission,since the base mesh M 0can be quickly transmitted first,followed the vsplit sequence.Finally,the vsplit records can be encoded concisely,making the PM representation an effective scheme for mesh compression.Topological constraints Because the definitions of ecol and vsplit are such that they preserve the topological type of the mesh (i.e.all K i are homeomorphic),there is a constraint on the min-imum complexity that K 0may achieve.For instance,it is known that the minimal number of vertices for a closed genus g mesh (ori-entable 2-manifold)is (7+(48g +1)12)2if g =2(10if g 
=2)[16].Also,the presence of boundary components may further constrain the complexity of K 0.Most importantly,K may consist of a number of components,and each is required to appear in the base mesh.For example,the meshes in Figure 7each have 117components.As evident from the figure,the geometry of PM meshes may deteriorate severely as they approach topological lower bound.M 1;100;(1)M 10;511;(7)M 50;4656;(12)M 200;1552277;(28)M 500;3968690;(58)M 2000;14253219;(108)M 5000;029010;(176)M n =34794;0068776;(207)Figure 3:Example of a PSC representation.The image captions indicate the number of principal 012-simplices respectively and the number of connected components (in parenthesis).3PSC REPRESENTATION 3.1Triangulated modelsThe first step towards generalizing PM’s is to let the PSC repre-sentation encode more general triangulated models,instead of just meshes.We denote a triangulated model as a tuple M =(K V D A ).The abstract simplicial complex K is not restricted to 2-manifolds,but may in fact be arbitrary.To represent K in memory,we encode the incidence graph of the simplices using the following linked structures (in C++notation):struct Simplex int dim;//0=vertex,1=edge,2=triangle,...int id;Simplex*children[MAXDIM+1];//[0..dim]List<Simplex*>parents;;To render the model,we draw only the principal simplices ofK ,denoted (K )(i.e.vertices not adjacent to edges,edges not adjacent to triangles,etc.).The discrete attributes D associate amaterial identifier d s with each simplex s(K ).For the sake of simplicity,we avoid explicitly storing surface normals at “corners”(using a set S )as done in [13].Instead we let the material identifier d s contain a smoothing group field [28],and let a normal discontinuity (crease )form between any pair of adjacent triangles with different smoothing groups.Previous vertex unification schemes [21,22]render principal simplices of dimension 0and 1(denoted 01(K ))as points and lines respectively with fixed,device-dependent screen widths.To better approximate the model,we instead define a set A that associates an area a s A with each simplex s 01(K ).We think of a 0-simplex s 00(K )as approximating a sphere with area a s 0,and a 1-simplex s 1=j k 1(K )as approximating a cylinder (with axis (j k ))of area a s 1.To render a simplex s 01(K ),we determine the radius r model of the corresponding sphere or cylinder in modeling space,and project the length r model to obtain the radius r screen in screen pixels.Depending on r screen ,we render the simplex as a polygonal sphere or cylinder with radius r model ,a 2D point or line with thickness 2r screen ,or do not render it at all.This choice based on r screen can be adjusted to mitigate the overhead of introducing polygonal representations of spheres and cylinders.As an example,Figure 3shows an initial model M of 68,776triangles.One of its approximations M 500is a triangulated model with 3968690principal 012-simplices respectively.3.2Level-of-detail sequenceAs in progressive meshes,from a given triangulated model M =M n ,we define a sequence of approximations M i :M 1op 1M 2op 2M n1op n 1M nHere each model M i has exactly i vertices.The simplification op-erator M ivunify iM i +1is the vertex unification transformation,whichmerges two vertices (Section 3.3),and its inverse M igvspl iM i +1is the generalized vertex split transformation (Section 3.4).Thetuple (M 1gvspl 1gvspl n 1)forms a progressive simplicial complex (PSC)representation of M .To construct a PSC representation,we first determine a sequence of vunify 
transformations simplifying M down to a single vertex,as described in Section 4.After reversing these transformations,we renumber the simplices in the order that they are created,so thateach gvspl i (a i)splits the vertex a i K i into two vertices a i i +1K i +1.As vertices may have different positions in the different models,we denote the position of j in M i as i j .To better approximate a surface model M at lower complexity levels,we initially associate with each (principal)2-simplex s an area a s equal to its triangle area in M .Then,as the model is simplified,wekeep constant the sum of areas a s associated with principal simplices within each manifold component.When2-simplices are eventually reduced to principal1-simplices and0-simplices,their associated areas will provide good estimates of the original component areas.3.3Vertex unification transformationThe transformation vunify(a i b i midp i):M i M i+1takes an arbitrary pair of vertices a i b i K i+1(simplex a i b i need not be present in K i+1)and merges them into a single vertex a i K i. Model M i is created from M i+1by updating each member of the tuple(K V D A)as follows:K:References to b i in all simplices of K are replaced by refer-ences to a i.More precisely,each simplex s in star(b i)K i+1is replaced by simplex(s b i)a i,which we call the ancestor simplex of s.If this ancestor simplex already exists,s is deleted.V:Vertex b is deleted.For simplicity,the position of the re-maining(unified)vertex is set to either the midpoint or is left unchanged.That is,i a=(i+1a+i+1b)2if the boolean parameter midp i is true,or i a=i+1a otherwise.D:Materials are carried through as expected.So,if after the vertex unification an ancestor simplex(s b i)a i K i is a new principal simplex,it receives its material from s K i+1if s is a principal simplex,or else from the single parent s a i K i+1 of s.A:To maintain the initial areas of manifold components,the areasa s of deleted principal simplices are redistributed to manifold-adjacent neighbors.More concretely,the area of each princi-pal d-simplex s deleted during the K update is distributed toa manifold-adjacent d-simplex not in star(a ib i).If no suchneighbor exists and the ancestor of s is a principal simplex,the area a s is distributed to that ancestor simplex.Otherwise,the manifold component(star(a i b i))of s is being squashed be-tween two other manifold components,and a s is discarded. 3.4Generalized vertex split transformation Constructing the PSC representation involves recording the infor-mation necessary to perform the inverse of each vunify i.This inverse is the generalized vertex split gvspl i,which splits a0-simplex a i to introduce an additional0-simplex b i.(As mentioned previously, renumbering of simplices implies b i i+1,so index b i need not be stored explicitly.)Each gvspl i record has the formgvspl i(a i C K i midp i()i C D i C A i)and constructs model M i+1from M i by updating the tuple (K V D A)as follows:K:As illustrated in Figure4,any simplex adjacent to a i in K i can be the vunify result of one of four configurations in K i+1.To construct K i+1,we therefore replace each ancestor simplex s star(a i)in K i by either(1)s,(2)(s a i)i+1,(3)s and(s a i)i+1,or(4)s,(s a i)i+1and s i+1.The choice is determined by a split code associated with s.Thesesplit codes are stored as a code string C Ki ,in which the simplicesstar(a i)are sortedfirst in order of increasing dimension,and then in order of increasing simplex id,as shown in Figure5. 
V:The new vertex is assigned position i+1i+1=i ai+()i.Theother vertex is given position i+1ai =i ai()i if the boolean pa-rameter midp i is true;otherwise its position remains unchanged.D:The string C Di is used to assign materials d s for each newprincipal simplex.Simplices in C Di ,as well as in C Aibelow,are sorted by simplex dimension and simplex id as in C Ki. A:During reconstruction,we are only interested in the areas a s fors01(K).The string C Ai tracks changes in these areas.Figure4:Effects of split codes on simplices of various dimensions.code string:41422312{}Figure5:Example of split code encoding.3.5PropertiesLevels of detail A graphics application can efficiently transitionbetween models M1M n at runtime by performing a sequence ofvunify or gvspl transformations.Our current research prototype wasnot designed for efficiency;it attains simplification rates of about6000vunify/sec and refinement rates of about5000gvspl/sec.Weexpect that a careful redesign using more efficient data structureswould significantly improve these rates.Geomorphs As in the PM representation,there exists a corre-spondence between the vertices of the models M1M n.Given acoarser model M c and afiner model M f,1c f n,each vertexj K f corresponds to a unique ancestor vertex f c(j)K cfound by recursively traversing the ancestor simplex relations:f c(j)=j j cf c(a j1)j cThis correspondence allows the creation of a smooth visual transi-tion(geomorph)M G()such that M G(1)equals M f and M G(0)looksidentical to M c.The geomorph is defined as the modelM G()=(K f V G()D f A G())in which each vertex position is interpolated between its originalposition in V f and the position of its ancestor in V c:Gj()=()fj+(1)c f c(j)However,we must account for the special rendering of principalsimplices of dimension0and1(Section3.1).For each simplexs01(K f),we interpolate its area usinga G s()=()a f s+(1)a c swhere a c s=0if s01(K c).In addition,we render each simplexs01(K c)01(K f)using area a G s()=(1)a c s.The resultinggeomorph is visually smooth even as principal simplices are intro-duced,removed,or change dimension.The accompanying video demonstrates a sequence of such geomorphs.Progressive transmission As with PM’s,the PSC representa-tion can be progressively transmitted by first sending M 1,followed by the gvspl records.Unlike the base mesh of the PM,M 1always consists of a single vertex,and can therefore be sent in a fixed-size record.The rendering of lower-dimensional simplices as spheres and cylinders helps to quickly convey the overall shape of the model in the early stages of transmission.Model compression Although PSC gvspl are more general than PM vsplit transformations,they offer a surprisingly concise representation of M .Table 1lists the average number of bits re-quired to encode each field of the gvspl records.Using arithmetic coding [30],the vertex id field a i requires log 2i bits,and the boolean parameter midp i requires 0.6–0.9bits for our models.The ()i delta vector is quantized to 16bitsper coordinate (48bits per),and stored as a variable-length field [7,13],requiring about 31bits on average.At first glance,each split code in the code string C K i seems to have 4possible outcomes (except for the split code for 0-simplex a i which has only 2possible outcomes).However,there exist constraints between these split codes.For example,in Figure 5,the code 1for 1-simplex id 1implies that 2-simplex id 1also has code 1.This in turn implies that 1-simplex id 2cannot have code 2.Similarly,code 2for 1-simplex id 3implies a 
code 2for 2-simplex id 2,which in turn implies that 1-simplex id 4cannot have code 1.These constraints,illustrated in the “scoreboard”of Figure 6,can be summarized using the following two rules:(1)If a simplex has split code c12,all of its parents havesplit code c .(2)If a simplex has split code 3,none of its parents have splitcode 4.As we encode split codes in C K i left to right,we apply these two rules (and their contrapositives)transitively to constrain the possible outcomes for split codes yet to be ing arithmetic coding with uniform outcome probabilities,these constraints reduce the code string length in Figure 6from 15bits to 102bits.In our models,the constraints reduce the code string from 30bits to 14bits on average.The code string is further reduced using a non-uniform probability model.We create an array T [0dim ][015]of encoding tables,indexed by simplex dimension (0..dim)and by the set of possible (constrained)split codes (a 4-bit mask).For each simplex s ,we encode its split code c using the probability distribution found in T [s dim ][s codes mask ].For 2-dimensional models,only 10of the 48tables are non-trivial,and each table contains at most 4probabilities,so the total size of the probability model is small.These encoding tables reduce the code strings to approximately 8bits as shown in Table 1.By comparison,the PM representation requires approximately 5bits for the same information,but of course it disallows topological changes.To provide more intuition for the efficiency of the PSC repre-sentation,we note that capturing the connectivity of an average 2-manifold simplicial complex (n vertices,3n edges,and 2n trian-gles)requires ni =1(log 2i +8)n (log 2n +7)bits with PSC encoding,versus n (12log 2n +95)bits with a traditional one-way incidence graph representation.For improved compression,it would be best to use a hybrid PM +PSC representation,in which the more concise PM vertex split encoding is used when the local neighborhood is an orientableFigure 6:Constraints on the split codes for the simplices in the example of Figure 5.Table 1:Compression results and construction times.Object#verts Space required (bits/n )Trad.Con.n K V D Arepr.time a i C K i midp i (v )i C D i C Ai bits/n hrs.drumset 34,79412.28.20.928.1 4.10.453.9146.1 4.3destroyer 83,79913.38.30.723.1 2.10.347.8154.114.1chandelier 36,62712.47.60.828.6 3.40.853.6143.6 3.6schooner 119,73413.48.60.727.2 2.5 1.353.7148.722.2sandal 4,6289.28.00.733.4 1.50.052.8123.20.4castle 15,08211.0 1.20.630.70.0-43.5-0.5cessna 6,7959.67.60.632.2 2.50.152.6132.10.5harley 28,84711.97.90.930.5 1.40.453.0135.7 3.52-dimensional manifold (this occurs on average 93%of the time in our examples).To compress C D i ,we predict the material for each new principalsimplex sstar(a i )star(b i )K i +1by constructing an ordered set D s of materials found in star(a i )K i .To improve the coding model,the first materials in D s are those of principal simplices in star(s )K i where s is the ancestor of s ;the remainingmaterials in star(a i )K i are appended to D s .The entry in C D i associated with s is the index of its material in D s ,encoded arithmetically.If the material of s is not present in D s ,it is specified explicitly as a global index in D .We encode C A i by specifying the area a s for each new principalsimplex s 01(star(a i )star(b i ))K i +1.To account for this redistribution of area,we identify the principal simplex from which s receives its area by specifying its index in 01(star(a i ))K i .The column labeled in Table 1sums the 
bits of each field of the gvspl records.Multiplying by the number n of vertices in M gives the total number of bits for the PSC representation of the model (e.g.500KB for the destroyer).By way of compari-son,the next column shows the number of bits per vertex required in a traditional “IndexedFaceSet”representation,with quantization of 16bits per coordinate and arithmetic coding of face materials (3n 16+2n 3log 2n +materials).4PSC CONSTRUCTIONIn this section,we describe a scheme for iteratively choosing pairs of vertices to unify,in order to construct a PSC representation.Our algorithm,a generalization of [13],is time-intensive,seeking high quality approximations.It should be emphasized that many quality metrics are possible.For instance,the quadric error metric recently introduced by Garland and Heckbert [9]provides a different trade-off of execution speed and visual quality.As in [13,20],we first compute a cost E for each candidate vunify transformation,and enter the candidates into a priority queueordered by ascending cost.Then,in each iteration i =n 11,we perform the vunify at the front of the queue and update the costs of affected candidates.4.1Forming set of candidate vertex pairs In principle,we could enter all possible pairs of vertices from M into the priority queue,but this would be prohibitively expensive since simplification would then require at least O(n2log n)time.Instead, we would like to consider only a smaller set of candidate vertex pairs.Naturally,should include the1-simplices of K.Additional pairs should also be included in to allow distinct connected com-ponents of M to merge and to facilitate topological changes.We considered several schemes for forming these additional pairs,in-cluding binning,octrees,and k-closest neighbor graphs,but opted for the Delaunay triangulation because of its adaptability on models containing components at different scales.We compute the Delaunay triangulation of the vertices of M, represented as a3-dimensional simplicial complex K DT.We define the initial set to contain both the1-simplices of K and the subset of1-simplices of K DT that connect vertices in different connected components of K.During the simplification process,we apply each vertex unification performed on M to as well in order to keep consistent the set of candidate pairs.For models in3,star(a i)has constant size in the average case,and the overall simplification algorithm requires O(n log n) time.(In the worst case,it could require O(n2log n)time.)4.2Selecting vertex unifications fromFor each candidate vertex pair(a b),the associated vunify(a b):M i M i+1is assigned the costE=E dist+E disc+E area+E foldAs in[13],thefirst term is E dist=E dist(M i)E dist(M i+1),where E dist(M)measures the geometric accuracy of the approximate model M.Conceptually,E dist(M)approximates the continuous integralMd2(M)where d(M)is the Euclidean distance of the point to the closest point on M.We discretize this integral by defining E dist(M)as the sum of squared distances to M from a dense set of points X sampled from the original model M.We sample X from the set of principal simplices in K—a strategy that generalizes to arbitrary triangulated models.In[13],E disc(M)measures the geometric accuracy of disconti-nuity curves formed by a set of sharp edges in the mesh.For the PSC representation,we generalize the concept of sharp edges to that of sharp simplices in K—a simplex is sharp either if it is a boundary simplex or if two of its parents are principal simplices with different material 
identifiers. The energy E_disc is defined as the sum of squared distances from a set X_disc of points sampled from sharp simplices to the discontinuity components from which they were sampled. Minimization of E_disc therefore preserves the geometry of material boundaries, normal discontinuities (creases), and triangulation boundaries (including boundary curves of a surface and endpoints of a curve).

We have found it useful to introduce a term E_area that penalizes surface stretching (a more sophisticated version of the regularizing E_spring term of [13]). Let A_N^{i+1} be the sum of triangle areas in the neighborhood star(a_i) ∪ star(b_i) ⊆ K^{i+1}, and A_N^i the sum of triangle areas in star(a_i) ⊆ K^i. The mean squared displacement over the neighborhood N due to the change in area can be approximated as disp^2 = (1/2)(sqrt(A_N^{i+1}) - sqrt(A_N^i))^2. We let E_area = X_N * disp^2, where X_N is the number of points of X projecting in the neighborhood.

To prevent model self-intersections, the last term E_fold penalizes surface folding. We compute the rotation of each oriented triangle in the neighborhood due to the vertex unification (as in [10,20]). If any rotation exceeds a threshold angle value, we set E_fold to a large constant.

Unlike [13], we do not optimize over the vertex position v_a^i, but simply evaluate E for v_a^i in {v_a^{i+1}, v_b^{i+1}, (v_a^{i+1} + v_b^{i+1})/2} and choose the best one. This speeds up the optimization, improves model compression, and allows us to introduce non-quadratic energy terms like E_area.

5 RESULTS
Table 1 gives quantitative results for the examples in the figures and in the video. Simplification times for our prototype are measured on an SGI Indigo2 Extreme (150 MHz R4400). Although these times may appear prohibitive, PSC construction is an off-line task that only needs to be performed once per model.

Figure 9 highlights some of the benefits of the PSC representation. The pearls in the chandelier model are initially disconnected tetrahedra; these tetrahedra merge and collapse into 1-d curves in lower-complexity approximations. Similarly, the numerous polygonal ropes in the schooner model are simplified into curves which can be rendered as line segments. The straps of the sandal model initially have some thickness; the top and bottom sides of these straps merge in the simplification. Also note the disappearance of the holes on the sandal straps. The castle example demonstrates that the original model need not be a mesh; here M is a 1-dimensional non-manifold obtained by extracting edges from an image.

6 RELATED WORK
There are numerous schemes for representing and simplifying triangulations in computer graphics. A common special case is that of subdivided 2-manifolds (meshes). Garland and Heckbert [12] provide a recent survey of mesh simplification techniques. Several methods simplify a given model through a sequence of edge collapse transformations [10,13,14,20]. With the exception of [20], these methods constrain edge collapses to preserve the topological type of the model (e.g. disallow the collapse of a tetrahedron into a triangle).

Our work is closely related to several schemes that generalize the notion of edge collapse to that of vertex unification, whereby separate connected components of the model are allowed to merge and triangles may be collapsed into lower dimensional simplices. Rossignac and Borrel [21] overlay a uniform cubical lattice on the object, and merge together vertices that lie in the same cubes.
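On the connectivity side, the vunify transformation these schemes share (Section 3.3) reduces to one combinatorial step: every simplex containing the removed vertex is remapped to its ancestor (s - {b}) ∪ {a}, and deleted if that ancestor already exists. The following is a minimal, self-contained sketch of that K-update only, modeling simplices as vertex-id sets rather than the incidence-graph structs of Section 3.1; the position, material, and area updates are omitted.

    // A sketch of the K-update in vunify(a,b), not the paper's implementation:
    // simplices are sets of vertex ids, and the whole complex is a set of simplices.
    #include <set>
    #include <cstdio>

    using Simplex = std::set<int>;
    using Complex = std::set<Simplex>;

    void vunify(Complex& K, int a, int b) {
        Complex updated;
        for (const Simplex& s : K) {
            if (s.count(b)) {
                Simplex ancestor = s;        // ancestor simplex (s - {b}) U {a}
                ancestor.erase(b);
                ancestor.insert(a);
                updated.insert(ancestor);    // set semantics drop duplicates
            } else {
                updated.insert(s);
            }
        }
        K = updated;
    }

    int main() {
        // Two triangles {1,2,3} and {2,3,4} sharing edge {2,3}, with all faces.
        Complex K = {{1},{2},{3},{4},{1,2},{1,3},{2,3},{2,4},{3,4},{1,2,3},{2,3,4}};
        vunify(K, 2, 4);                     // unify vertex 4 into vertex 2
        std::printf("%zu simplices remain\n", K.size());  // prints 7
        return 0;
    }

Set semantics make the rule "if the ancestor simplex already exists, s is deleted" automatic; here the triangle {2,3,4} degenerates to the existing edge {2,3}, the kind of dimension-lowering collapse that the PM representation disallows.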
Schaufler and Stürzlinger [22] develop a similar scheme in which vertices are merged using a hierarchical clustering algorithm. Luebke [18] introduces a scheme for locally adapting the complexity of a scene at runtime using a clustering octree. In these schemes, the approximating models correspond to simplicial complexes that would result from a set of vunify transformations (Section 3.3). Our approach differs in that we order the vunify in a carefully optimized sequence. More importantly, we define not only a simplification process, but also a new representation for the model using an encoding of gvspl = vunify^-1 transformations.

Recent, independent work by Schmalstieg and Schaufler [23] develops a similar strategy of encoding a model using a sequence of vertex split transformations. Their scheme differs in that it tracks only triangles, and therefore requires regular, 2-dimensional triangulations. Hence, it does not allow lower-dimensional simplices in the model approximations, and does not generalize to higher dimensions.

Some simplification schemes make use of an intermediate volumetric representation to allow topological changes to the model. He et al. [11] convert a mesh into a binary inside/outside function discretized on a three-dimensional grid, low-pass filter this function,
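To make the greedy construction loop of Section 4 concrete: candidate pairs enter a min-priority queue keyed by cost and are processed cheapest-first. The sketch below is self-contained but simplified; squared pair distance stands in for the full cost E = E_dist + E_disc + E_area + E_fold, and the vunify itself (plus re-costing of affected candidates) is elided.

    // A minimal sketch of the cost-ordered unification loop, with a
    // placeholder cost; not the paper's code.
    #include <cstdio>
    #include <queue>
    #include <utility>
    #include <vector>

    struct P3 { double x, y, z; };
    struct Cand { double cost; int a, b; };
    struct ByCost {
        bool operator()(const Cand& l, const Cand& r) const {
            return l.cost > r.cost;          // min-heap on cost
        }
    };

    int main() {
        std::vector<P3> v = {{0,0,0}, {0.1,0,0}, {5,0,0}, {5,0.2,0}};
        std::vector<std::pair<int,int>> pairs = {{0,1}, {2,3}, {1,2}};
        auto cost = [&](int a, int b) {      // stand-in for E(a,b)
            double dx = v[a].x - v[b].x, dy = v[a].y - v[b].y, dz = v[a].z - v[b].z;
            return dx*dx + dy*dy + dz*dz;
        };
        std::priority_queue<Cand, std::vector<Cand>, ByCost> pq;
        for (auto [a, b] : pairs) pq.push({cost(a, b), a, b});
        while (!pq.empty()) {                // cheapest unification first
            Cand c = pq.top(); pq.pop();
            std::printf("vunify(%d,%d) cost %.3f\n", c.a, c.b, c.cost);
            // a real implementation performs vunify(c.a, c.b) here and
            // updates the costs of candidates in the affected neighborhood
        }
        return 0;
    }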
Unlocking the promise of mobile value-added services by applying new collaborative business models (Original Research Article). Technological Forecasting and Social Change, Volume 77, Issue 4, May 2010, Pages 678-693. Peng-Ting Chen, Joe Z. Cheng.

602. Software performance simulation strategies for high-level embedded system design (Original Research Article). Performance Evaluation, Volume 67, Issue 8, August 2010, Pages 717-739. Zhonglei Wang, Andreas Herkersdorf.
Abstract: As most embedded applications are realized in software, software performance estimation is a very important issue in embedded system design. In the last decades, instruction set simulators (ISSs) have become an essential part of an embedded software design process. However, ISSs are either slow or very difficult to develop. With the advent of multiprocessor systems and their ever-increasing complexity, the software simulation strategy based on ISSs is no longer efficient enough for exploring the large design space of multiprocessor systems in early design phases. Motivated by the limitations of ISSs, a lot of recent research activities focused on software simulation strategies based on native execution. In this article, we first introduce some existing software performance simulation strategies as well as our own approach for source level simulation, called SciSim, and provide a discussion about their benefits and limitations. The main contribution of this article is to introduce a new software performance simulation approach, called iSciSim (intermediate Source code instrumentation based Simulation), which achieves high estimation accuracy, high simulation speed and low implementation complexity. All these advantages make iSciSim well-suited for system level design. To show the benefits of the proposed approach, we present a quantitative comparison between iSciSim and the other discussed techniques, using a set of benchmarks.

603. Computer anxiety and ICT integration in English classes among Iranian EFL teachers (Original Research Article). Procedia Computer Science, Volume 3, 2011, Pages 203-209. Mehrak Rahimi, Samaneh Yadollahi.
Abstract: The purpose of this study was to determine Iranian EFL teachers' level of computer anxiety and its relationship with ICT integration into English classes and teachers' personal characteristics. Data were collected from 254 Iranian EFL teachers by Computer Anxiety Rating Scale, ICT integration rating scale, and a personal information questionnaire. The results indicated a positive relationship between computer anxiety and age; however, computer anxiety, gender, and experience of teaching were not found to be related. An inverse correlation was found between computer anxiety and ICT integration. While ICT integration correlated negatively with age and years of teaching experience, it was not found to be related to gender.

604. An environmental decision support system for spatial assessment and selective remediation (Original Research Article). Environmental Modelling & Software, Volume 26, Issue 6, June 2011, Pages 751-760. Robert N. Stewart, S. Thomas Purucker.
Abstract: Spatial Analysis and Decision Assistance (SADA) is a Windows freeware program that incorporates spatial assessment tools for effective environmental remediation. The software integrates modules for GIS, visualization, geospatial analysis, statistical analysis, human health and ecological risk assessment, cost/benefit analysis, sampling design, and decision support. SADA began as a simple tool for integrating risk assessment with spatial modeling tools. It has since evolved into a freeware product primarily targeted for spatial site investigation and soil remediation design, though its applications have extended into many diverse environmental disciplines that emphasize the spatial distribution of data. Because of the variety of algorithms incorporated, the user interface is engineered in a consistent and scalable manner to expose additional functionality without a burdensome increase in complexity. The scalable environment permits it to be used for both application and research goals, especially investigating spatial aspects important for estimating environmental exposures and designing efficient remedial designs. The result is a mature infrastructure with considerable environmental decision support capabilities. We provide an overview of SADA's central functions and discuss how the problem of integrating diverse models in a tractable manner was addressed.
Research highlights: SADA is mature software for data visualization, processing, analysis, and modeling; the user interface balances functional scalability and decision support; it is widely used due to free availability and a shallow learning curve; integration of spatial estimation and risk tools allows for rich decision support.

605. CoDBT: A multi-source dynamic binary translator using hardware-software collaborative techniques (Original Research Article). Journal of Systems Architecture, Volume 56, Issue 10, October 2010, Pages 500-508. Haibing Guan, Bo Liu, Zhengwei Qi, Yindong Yang, Hongbo Yang, Alei Liang.

606. An analysis of third-party logistics performance and service provision (Original Research Article). Transportation Research Part E: Logistics and Transportation Review, Volume 47, Issue 4, July 2011, Pages 547-570. Chiung-Lin Liu, Andrew C. Lyons.

607. Intelligent QoS management for multimedia services support in wireless mobile ad hoc networks (Original Research Article). Computer Networks, Volume 54, Issue 10, 1 July 2010, Pages 1692-1706. Lyes Khoukhi, Soumaya Cherkaoui.

608. Limit to improvement: Myth or reality? Empirical analysis of historical improvement on three technologies influential in the evolution of civilization (Original Research Article). Technological Forecasting and Social Change, Volume 77, Issue 5, June 2010, Pages 712-729. Yu Sang Chang, Seung Jin Baek.

609. An enhanced concept map approach to improving children's storytelling ability (Original Research Article). Computers & Education, Volume 56, Issue 3, April 2011, Pages 873-884. Chen-Chung Liu, Holly S.L. Chen, Ju-Ling Shih, Guo-Ting Huang, Baw-Jhiune Liu.

610. Human–computer interaction: A stable discipline, a nascent science, and the growth of the long tail (Original Research Article). Interacting with Computers, Volume 22, Issue 1, January 2010, Pages 13-27. Alan Dix.

611. Post-agility: What follows a decade of agility? (Original Research Article). Information and Software Technology, Volume 53, Issue 5, May 2011, Pages 543-555. Richard Baskerville, Jan Pries-Heje, Sabine Madsen.

612. Confidentiality checking an object-oriented class hierarchy (Original Research Article). Network Security, Volume 2010, Issue 3, March 2010, Pages 16-20. S. Chandra, R.A. Khan.

613. European national news. Computer Law & Security Review, Volume 26, Issue 5, September 2010, Pages 558-563. Mark Turner.

614. System engineering approach in the EU Test Blanket Systems Design Integration (Original Research Article). Fusion Engineering and Design, In Press, Corrected Proof, Available online 23 February 2011. D. Panayotov, P. Sardain, L.V. Boccaccini, J.-F. Salavy, F. Cismondi, L. Jourd'Heuil.

615. A knowledge engineering approach to developing mindtools for context-aware ubiquitous learning (Original Research Article). Computers & Education, Volume 54, Issue 1, January 2010, Pages 289-297. Hui-Chun Chu, Gwo-Jen Hwang, Chin-Chung Tsai.

616. "Hi Father", "Hi Mother": A multimodal analysis of a significant, identity changing phone call mediated on TV (Original Research Article). Journal of Pragmatics, Volume 42, Issue 2, February 2010, Pages 426-442. Pirkko Raudaskoski.

617. Iterative Bayesian fuzzy clustering toward flexible icon-based assistive software for the disabled (Original Research Article). Information Sciences, Volume 180, Issue 3, 1 February 2010, Pages 325-340. Sang Wan Lee, Yong Soo Kim, Kwang-Hyun Park, Zeungnam Bien.

618. A framework of composable access control features: Preserving separation of access control concerns from models to code (Original Research Article). Computers & Security, Volume 29, Issue 3, May 2010, Pages 350-379. Jaime A. Pavlich-Mariscal, Steven A. Demurjian, Laurent D. Michel.

619. Needs, affect, and interactive products – Facets of user experience (Original Research Article). Interacting with Computers, Volume 22, Issue 5, September 2010, Pages 353-362. Marc Hassenzahl, Sarah Diefenbach, Anja Göritz.

620. An IT perspective on integrated environmental modelling: The SIAT case (Original Research Article). Ecological Modelling, Volume 221, Issue 18, 10 September 2010, Pages 2167-2176. P.J.F.M. Verweij, M.J.R. Knapen, W.P. de Winter, J.J.F. Wien, J.A. te Roller, S. Sieber, J.M.L. Jansen.
Research on Defect Detection Methods for Mobile Phone Screen Images
Keywords: defect detection; ROI recognition; feature extraction; mobile phone LCD screen
Master's Thesis, Anhui University
Abstract
With the rapid progress of technology and industry, electronic products with LCD screens have become an integral part of daily life and production. Expectations for the quality of these screens keep rising, yet the current state of manufacturing technology cannot eliminate every kind of defect. The first problem LCD manufacturers must face is therefore how to identify and resolve LCD screen defects rapidly and accurately. Image-defect inspection of LCD products currently relies mainly on manual inspection, which can neither achieve the accuracy required for detecting defects in LCD screens nor guarantee the stability of test results. This thesis takes that problem as its starting point: using the mobile phone LCD screen as the object of study, it combines digital image processing, pattern recognition, and computer technology, targets the common image defects, and follows the efficiency and accuracy requirements that industrial production places on detection algorithms to research and design an efficient detection algorithm. Simulation results demonstrate that the algorithm detects image defects efficiently and accurately. First, the design of the phone transport system is standardized, and acquisition equipment of appropriate models, including a high-speed image acquisition card, a monitoring camera, and computer equipment, is selected to complete the hardware preparations. In the detection process, the monitoring camera first captures images of the phone, and the image data are read quickly from the buffer of the acquisition card using DirectShow. A weighted average over bad frames then reduces the adverse effects of frames degraded by the harsh image-acquisition environment. In the image preprocessing stage, noise is removed by Gaussian pyramid sampling. A dynamic threshold, obtained for each of the three RGB channels by recursive iteration, helps detect the screen rectangle by identifying and extracting shape features of the image, after which a two-dimensional geometric transformation auto-corrects the captured image of the phone screen.
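As one concrete reading of the preprocessing steps above, the sketch below (OpenCV, with hypothetical file I/O) performs Gaussian-pyramid smoothing and then a per-channel dynamic threshold. The "recursive iteration" is interpreted here as the classic mean-iteration (isodata) scheme; that choice is an assumption, not a detail taken from the thesis.

    // A minimal sketch of pyramid denoising + per-channel iterative
    // thresholding; an assumed reading of the abstract, not the thesis code.
    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    static double iterativeThreshold(const cv::Mat& ch) {
        double t = cv::mean(ch)[0], prev;
        do {                                   // split at t, average the class means
            prev = t;
            cv::Mat lo = ch < t, hi = ch >= t;
            t = 0.5 * (cv::mean(ch, lo)[0] + cv::mean(ch, hi)[0]);
        } while (std::abs(t - prev) > 0.5);
        return t;
    }

    int main(int argc, char** argv) {
        if (argc < 2) return 1;
        cv::Mat frame = cv::imread(argv[1]);   // hypothetical input frame
        cv::Mat down, smooth;
        cv::pyrDown(frame, down);              // Gaussian pyramid: blur + downsample
        cv::pyrUp(down, smooth, frame.size()); // back up; fine noise is removed
        std::vector<cv::Mat> bgr;
        cv::split(smooth, bgr);
        for (int c = 0; c < 3; ++c) {          // dynamic threshold per channel
            double t = iterativeThreshold(bgr[c]);
            cv::threshold(bgr[c], bgr[c], t, 255, cv::THRESH_BINARY);
        }
        cv::Mat mask;
        cv::merge(bgr, mask);                  // candidate screen-region mask
        cv::imwrite("mask.png", mask);
        return 0;
    }

The binary mask would then feed the shape-feature step the abstract describes (finding the screen rectangle before geometric correction).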
Manipulator Path Planning Based on an Improved RRT*-connect Algorithm
Liu Jianyu, Fan Pingqing
School of Mechanical and Automotive Engineering, Shanghai University of Engineering Science, Shanghai 201620
Abstract: Building on the bidirectional, asymptotically optimal RRT*-connect algorithm, motion planning for a high-dimensional manipulator is analyzed so that the search paths found during planning are shorter and planning is more efficient.
With the rapid development of the times, highly autonomous robots play an ever greater role in human society. As the principal manipulation component of a robot, the manipulator poses motion planning problems, such as grasping objects accurately and avoiding obstacles while moving, that are a focus of current research, and continued in-depth study of its motion planning is essential. Motion planning for manipulators takes place mainly in high-dimensional spaces. The RRT (Rapidly-exploring Random Tree) algorithm [1] plans by random sampling: it requires neither an exact description of the obstacles in the configuration space nor any preprocessing, and is therefore widely used in high-dimensional spaces. RRT has been studied extensively in recent years. In 2000, Kuffner et al. proposed the RRT-connect algorithm [2], which grows two random trees simultaneously, from the start and from the goal, speeding up convergence, though the resulting search paths can have long step lengths (a minimal sketch of this bidirectional scheme follows this review). In 2002, Bruce et al. proposed the ERRT (Extend RRT) algorithm [3]. In 2006, Ferguson et al. proposed the DRRT (Dynamic RRT) algorithm [4]. In 2011, Karaman and Frazzoli proposed the improved RRT* algorithm [5], which retains the probabilistic completeness of the traditional RRT while adding asymptotic optimality, guaranteeing better paths at the cost of longer search times. In 2012, Islam et al. proposed the fast-converging RRT*-smart algorithm [6], which approaches the optimal solution through intelligent sampling and path optimization, but its paths contain few sampling points, leaving sharp corners that hinder practical use. In 2013, Jordan et al. proposed the B-RRT* algorithm [7], which accelerates the search by running RRT* bidirectionally. In the same year, Salzman et al. proposed an asymptotically optimal algorithm based on continuous interpolation in the lower-bound tree LBT-RRT [8]. In 2015, Qureshi et al. proposed the IB-RRT* algorithm [9], which inserts an intelligent function into B-RRT* to raise search speed. In the same year, Klemm et al. combined the asymptotic optimality of RRT* with the bidirectional search of RRT-connect.
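As referenced in the review above, here is a minimal 2-D sketch of the bidirectional idea behind RRT-connect [2]. There are no obstacles, and the step size, workspace bounds, and random seed are made-up illustrative values; a full implementation would also run collision checks and the greedy "connect" extension.

    // A minimal 2-D RRT-connect-style sketch: two trees grow alternately,
    // each extending toward a random sample and then toward the other tree.
    #include <cmath>
    #include <cstdio>
    #include <random>
    #include <utility>
    #include <vector>

    struct Pt { double x, y; };
    static double dist(Pt a, Pt b) { return std::hypot(a.x - b.x, a.y - b.y); }

    static Pt steer(Pt from, Pt to, double step) {   // move at most `step` toward `to`
        double d = dist(from, to);
        if (d <= step) return to;
        return {from.x + step*(to.x-from.x)/d, from.y + step*(to.y-from.y)/d};
    }

    static int nearest(const std::vector<Pt>& tree, Pt q) {
        int best = 0;
        for (size_t i = 1; i < tree.size(); ++i)
            if (dist(tree[i], q) < dist(tree[best], q)) best = (int)i;
        return best;
    }

    int main() {
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> u(0.0, 10.0);
        const double step = 0.5, eps = 0.5;
        std::vector<Pt> ta = {{0, 0}}, tb = {{10, 10}};  // start tree, goal tree
        for (int it = 0; it < 10000; ++it) {
            Pt q  = {u(rng), u(rng)};                    // random sample
            Pt na = steer(ta[nearest(ta, q)], q, step);  // extend A toward sample
            ta.push_back(na);
            Pt nb = steer(tb[nearest(tb, na)], na, step);// extend B toward A's node
            tb.push_back(nb);
            if (dist(na, nb) < eps) {                    // trees have met
                std::printf("connected after %d iterations\n", it + 1);
                return 0;
            }
            std::swap(ta, tb);                           // alternate tree roles
        }
        std::printf("no connection found\n");
        return 0;
    }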
Dynamic Characteristics Analysis of the Hydraulic Arm ofMobile Coal Sampling RobotYuanfang Li1, Haibo Xu1, Jun Wang2, Rong Deng1 and Yufeng Lin11Xi'an Jiaotong University, Xi’an 710049, Shanxi, China2Xi'an Hongyu mining special mobile equipment Co., Xi’an 710075, Shaanxi, China Abstract—Dynamic characteristics of the hydraulic armaffects the mobile coal sampling robot’s accuracy and efficiency.The complex and varied working conditions put many highrequirements on the stability of the hydraulic arm. This papertook the hydraulic arm of the MCYY2000 mobile coal samplingrobot as the research object, and established a simplified modelof the hydraulic arm with SolidWorks. It carried out the analysisunder both the condition of no-sampling resistance and thecondition of variable sampling resistance. The analysis was donewith the module of multi-body dynamics simulation in Simulink.This paper helps to obtain the joint torques and hydraulicdriving forces of the hydraulic arm under different conditions. The results provide a basis for further work including accurate motion control, chatter reduction and safety improvement of the coal sampling robot.Keywords—coal sampling robot; hydraulic arm; complex working conditions; dynamic characteristicsI.I NTRODUCTIONThe mobile coal sampling robot is suitable for the sampling of carts, trains and coal heaps in places such as coal yards, steel mills, power plants, and harbors[1]. With its advantages of small size, high mobility, and wide adaptability, it has demonstrated an important position in the industry of mechanized coal sampling in recent years. Compared to manual sampling, the mobile coal sampling robot can reduce labor intensity and increase sampling efficiency[2].The MCYY2000 mobile coal sampling robot developed by Xi'an Hongyu Mining Special Mobile Equipment Co., Ltd. has the advantages of convenient movement, simple operation, and various control modes (manual, semi-automatic, and automatic), and can realize the integration of full-section sampling, crushing, shrinking, and collection. With high sampling efficiency, the sampling robot overcomes the disadvantages of low accuracy, low efficiency, and poor flexibility in the current manual sampling and mechanical sampling processes. As respectively shown by No.1-7 in Figure I, the whole structure of the sampling robot is mainly composed of the car chassis, the disposal storage device, the sample preparation device, the hydraulic arm, the hydraulic system, the driving room and the electrical system.FIGURE I. STRUCTURE OF THE MOBILE COAL SAMPLING ROBOT The hydraulic arm is the most important part of the mobile coal sampling robot. Its dynamic characteristics affects the sampling accuracy and sampling efficiency. Therefore, the dynamic characteristics of the hydraulic arm are important targets for the analysis and research of the coal sampling robot[3][4][5]. This paper takes the hydraulic arm of the MCYY2000 mobile coal sampling robot as the research object, establishes a simplified model of the hydraulic arm of the coal sampling robot in the SolidWorks, and carries out the analysis of no-sampling resistance and variable sampling resistance of the hydraulic arm through the multi-body dynamics simulation module of Simulink. The dynamic simulation analysis under the two working conditions helps to obtain joint torques and hydraulic driving forces. 
The analysis is used to provide the basis for follow-up accurate motion control, reducing flutter, and improved work accuracy and safety.II.I NTRODUCTION OF THE H YDRAULIC A RM ANDW ORKING C ONDITION A NALYSISAs respectively shown by No.1-11 in Figure II, the hydraulic arm of the mobile coal sampling robot is composed the base, the upper arm, the second arm, the telescopic arm, the mast, the sampling head, the upper arm cylinder, the second arm cylinder, the telescopic arm cylinder, the guide cylinder and the swing hydraulic motor. The base is connected with the slewing bearing, and the hydraulic motor provides power. The base drives the entire hydraulic arm to realize a 300° rotation. The upper arm, second arm and telescopic arm are driven by their respective hydraulic cylinders to achieve the motion of pitching and telescoping. The sampling cylinder is fixed in the mast, and the directly reciprocating motion of the sampling head guide rail is driven by moving the pulley block and the chain. This motion controls the vertical down sampling and the oblique down sampling at different angles. The mast makes it possible to keep the upper arm and the second arm stationary3rd International Conference on Electrical, Automation and Mechanical Engineering (EAME 2018)during sampling, so the sampling accuracy can be higher. The sampling head is a spiral structure[6] and can complete the deep sampling into coal heaps with a depth of 2 meters.FIGURE II. STRUCTURE OF THE HYDRAULIC ARM The related size parameters of the hydraulic arm of the coal sampling robot are shown in Table I. The parameters in the table are all from the actual design parameters of the MCYY2000 mobile coal sampling robot.TABLE I. RELATED SIZES OF THE HYDRAULIC ARMComponent Size / mmupper arm 2900second arm 2700telescopic arm 1000mast 3400Sampling head 2100Complex and varied working conditions[7] of coal sampling projects put many high requirements on the stability of the dynamic characteristics of the hydraulic arm:(1) When the sampling head of the hydraulic arm is moving at a low speed and operating the pitching movement with no-sampling resistance, the torque of each joint and the driving force of the hydraulic cylinder should be changed smoothly with small amplitude, so that the hydraulic arm can maintain safety and stability during its adjustment of the sampling angle.(2) When the hydraulic arm is sampling at a fixed sampling angle, the sampling head is subject to a varying sampling resistance. At this time, the joint torques and the hydraulic driving forces must avoid sharp changes or exceeding its safety range[8] so that the coal sampling robot can stay safe. The key research of this paper focuses on the dynamic characteristics of the hydraulic arm of the coal sampling robot under the two working conditions.III.A NALYSIS OF D YNAMIC C HARACTERISTICS OF THEH YDRAULIC A RMTo build a virtual prototype, simplified models should be used as much as possible. In order to reduce the simulation time[9], the number of parts should be reduced as much as possible while satisfying the integrity of the virtual prototyping simulation movement. According to the actual size of the hydraulic arm and the types of hydraulic cylinders, the components including the base, the upper arm, the second arm, the telescopic arm, the mast, the sampling head and hydraulic cylinders are modeled and assembled in SolidWorks. The virtual prototype of the hydraulic arm of the coal sampling robot is shown in Figure III.FIGURE III. 
VIRTUAL PROTOTYPE MODEL OF THE HYDRAULICARMAs shown by No.1-8 in Figure IV. the simplified schematic diagram of the movement mechanism includes three joints - join1, joint2 and joint3 - and five hydraulic cylinders - cylinder1, cylinder2, cylinder3, cylinder4 and cylinder5. The range of the motion of each joint variable and cylinder driving variable is shown in Table II.FIGURE IV. MOTION MECHANISM OF THE HYDRAULIC ARM OFTHE COAL SAMPLING ROBOTTABLE II. RANGE OF JOINT ANGLES AND CYLINDER LENGTHS Joint angle Range /(°) Cylinder length Range / mmjoint θ1 66-130 cylinders1 1750-2750 joint θ2 90-160cylinder s2 1450-2300cylinder s3 2700-3700 joint θ3 60-135cylinder s4 1650-2650cylinder s5 3400-5500Import the assembled model into Simulink and generate a block diagram of the model. Set the appropriate material properties and apply the necessary constraints[10] for each component in the model, and add torque sensors and force sensors for the rotating joints and hydraulic cylinders. The signal window modules are also added. The general Simulink dynamic analysis block diagram after settings is shown in Figure V. The multibody structure diagram of the hydraulic arm is shown in Figure VI.FIGURE V. GENERAL SIMULINK DYNAMIC ANALYSIS BLOCKDIAGRAMFIGURE VI. SIMSCAPE MULTIBODY STRUCTURE DIAGRAM A.Analysis of the Dynamic Characteristics of the HydraulicArm of Coal Sampling Robot under the Condition of No-sampling ResistanceWhen the hydraulic arm is under the no-sampling resistance condition, the sampling head only performs low-speed pitching movements. At this time, each joint torque and the hydraulic cylinder driving force should be stable and be of small-scale changes, so that the coal sampling robot can remain safe and stable during the adjustment of its sampling angle. When analyzing the dynamic characteristics of the hydraulic arm under this condition, the sampling resistance is set to zero. The curve of the length of the hydraulic cylinder s4 is shown in Figure VII. The lengths of cylinders s1, s2, s3 and s5 are respectively set to 2250mm, 1950mm, 2700mm and 3400mm. According to the relationship between the joint variables and the cylinder driving variables, the curve of the joint angle θ3 is shown in Figure VIII.FIGURE VII. CURVE OF THE LENGTH OF CYLINDER 4FIGURE VIII. CURVE OF JOINT ANGLE θ3The curves of the joint torques and the hydraulic cylinder driving forces are respectively shown in Figure IX and Figure X. With the extension and retraction of the mast cylinder s4, the torques of the joint1-joint3 firstly increase and then decrease within a smaller range, and the change trend is relatively stable. The torque of joint1 is the largest. The torque of joint2 is the next, and the torque of joint3 is the smallest. The driving forces of the hydraulic cylinders also change smoothly. The driving force of the hydraulic cylinder1 is the largest, and the driving force of the hydraulic cylinder3 remains basically unchanged.The results show that when the hydraulic arm of the coal sampling robot performs low-speed swing movement of its sampling head under the condition of no-sampling resistance, the joint torques and the driving forces of the hydraulic cylinders change smoothly and slightly. The driving forces of the hydraulic cylinders mainly overcome the effect of gravity. The simulation results are in accordance with the actual situation.FIGURE IX. CURVES OF JOINT TORQUESFIGURE X. 
CURVES OF CYLINDER DRIVING FORCESB.Analysis of the Dynamic Characteristics of the HydraulicArm of Coal Sampling Robot under the Condition ofVariable ResistanceIn the sampling process, the sampling head of the coal sampling robot is mainly subjected to three external loads including the insertion resistance, the gravity of coal and the lifting resistance. The insertion resistance and the ascending resistance are uncertain under different working conditions. According to formulas and relevant experiences, the insertion resistance and the ascending resistance are respectively set to 6000 N and 5000 N. The designing parameters of the coal sampling robot show that the coal sampling weight is about 200N, which is much smaller compared with the other two resistances. Therefore, the curve of the sampling resistance during vertical sampling process is shown in Figure XI. According to this, the dynamic characteristics of the hydraulic arm of the coal sampling robot under the variable resistance condition can be verified.FIGURE XI. CURVE OF THE SAMPLING RESISTANCEFIGURE XII. CURVE OF THE LENGTH OF CYLINDER 5 When analyzing the dynamic characteristics of the hydraulic arm under the variable resistance condition, the joint angles θ1 and θ2 respectively maintain 70° and 110°. The joint angle θ3 is set to 90°, which means the sampling head performs vertical sampling at a sampling angle of 90°. Curve of the length of Hydraulic cylinder 5 is shown in Figure XII.As shown in Figure XIII and Figure XIV when the sampling resistance is given, the curves of the joint torques and the driving forces of the hydraulic cylinders are no longer smooth. Instead, they show sharp turning changes with the changes of the sampling resistance. The joint 1 and the joint 2 show large torques and relatively large variation. The joint 3 shows relatively small torque. The driving forces of the hydraulic cylinder 1 and the hydraulic cylinder 2 are relatively large and the amplitude of their changes is also large. The driving forces of the hydraulic cylinder 4 and the hydraulic cylinder 5 change within a little range and are relatively stable. The hydraulic cylinder 3 basically has no change of driving force under this condition.The results show that the joint torques and the driving forces of the hydraulic cylinders have turning changes under the condition of variable resistance. Due to the low moving speed of the sampling head, the influence of inertial force and inertia torque is relatively small[11]. The driving forces of the hydraulic cylinders mainly overcome the gravity of the arm itself and the external sampling resistance. The simulation results are in accordance with the actual situation.FIGURE XIII. CURVES OF JOINT TORQUESFIGURE XIV. CURVES OF CYLINDER DRIVING FORCESIV.C ONCLUSIONSThis paper took the hydraulic arm of the MCYY2000 mobile coal sampling robot as the researching object. It established a simplified model of the hydraulic arm with SolidWorks, and carried out the dynamic simulation analysis of the hydraulic arm under both the condition of no-sampling resistance and the condition of variable resistance with the Simulink. The simulation results are basically in accordance with the actual situation.(1) Under the condition of no-sampling resistance, the hydraulic arm of the coal sampling robot performs low-speed swing movement of the sampling head. The joint torques and the driving forces of the hydraulic cylinders change smoothly and slightly. 
IV. CONCLUSIONS

This paper took the hydraulic arm of the MCYY2000 mobile coal sampling robot as its research object. It established a simplified model of the hydraulic arm in SolidWorks and carried out dynamic simulation analysis of the arm in Simulink under both the no-sampling-resistance condition and the variable-resistance condition. The simulation results are basically in accordance with the actual situation.

(1) Under the no-sampling-resistance condition, the hydraulic arm of the coal sampling robot performs a low-speed swing movement of the sampling head. The joint torques and the cylinder driving forces change smoothly and only slightly, and the driving forces of the hydraulic cylinders mainly overcome the effect of gravity.

(2) Under the variable-resistance condition, the joint torques and the cylinder driving forces show turning changes. Because the sampling head moves slowly, the influence of inertial force and inertial torque is relatively small. The driving forces of the hydraulic cylinders mainly overcome the gravity of the arm itself and the external sampling resistance.

In this paper, the joint torques and hydraulic driving forces of the hydraulic arm were obtained through dynamic simulation analysis. The results provide a basis for further work on accurate motion control, chatter reduction and safety improvement of the coal sampling robot.

ACKNOWLEDGMENT

Thanks to the support of Xi'an Hongyu Mining Special Mobile Equipment Co., Ltd., and to the help of the Shaanxi Science & Technology Co-ordination & Innovation Project.

REFERENCES

[1] Yang Jinhe and Liu Enqing. Discussion on mechanized sampling of commercial coal [J]. Coal Processing & Comprehensive Utilization, 2007(04): 29-30.
[2] Sun Gang. Research on Performance Index of Coal Sampling Machine [J]. Journal of China Coal Society, 2009, 34(06): 836-839.
[3] Qu Can. Virtual Design of Sampling Arm for Vehicle Coal Sampling Robot [D]. Xi'an: Chang'an University, 2014.
[4] Lu Na. Dynamic Analysis of Sampling Arm of Coal Sampling Machine Based on ANSYS [D]. Xi'an: Chang'an University, 2014.
[5] Li Longlong. Inverse Kinematics Analysis and Sampling Trajectory Control Simulation of Coal Sampling Arm [D]. Xi'an: Xi'an University of Architecture and Technology, 2014.
[6] Li Xuta, He Lile, Zhang Youzhen and Leng Mingyou. Analysis of Spiral Drill Pipe Fatigue Strength of Spiral Coal Sampling Device [J]. Coal Engineering, 2012(11): 93-94+98.
[7] Zhu Xiaoyong and Zhang Yuangen. Common problems in coal sampling and its solution [J]. Modern Industrial Economy and Informationization, 2017, 7(16): 72-74.
[8] Chen Chuanxiong and Kong Jian. Optimization Design and Analysis of Coal Sampling Robot Transmission System [J]. Coal Technology, 2016(02): 259-262.
[9] Geng Chunxia and Ye Feng. Research on the Optimized Design of Sampling Arm of Coal Sampling Machine [J]. Coal Technology, 2013(12): 14-16.
[10] Sun Xuguo, Huang Sunzhuo, Lin Shuwen, et al. Modeling and simulation of excavator mechanism dynamics based on Matlab [J]. Mechanical Engineer, 2007(9): 91-93.
[11] Zheng Deshuai, Gu Lichen, Zhang Ping and Jia Yongfeng. AMESim modeling and feasibility analysis of a new coal sampling arm [J]. Machine Tool & Hydraulics, 2013, 41(13): 155-157.
Simulation Sampling with Live-points

Thomas F. Wenisch, Roland E. Wunderlich, Babak Falsafi, James C. Hoe
Computer Architecture Laboratory (CALCM)
Carnegie Mellon University, Pittsburgh, PA 15213-3890
{rolandw, twenisch, babak, jhoe}@/~simflex

ABSTRACT

Current simulation-sampling techniques construct accurate model state for each measurement by continuously warming large microarchitectural structures (e.g., caches and the branch predictor) while functionally simulating the billions of instructions between measurements. This approach, called functional warming, is the main performance bottleneck of simulation sampling and requires hours of runtime while the detailed simulation of the sample requires only minutes. Existing simulators can avoid functional simulation by jumping directly to particular instruction stream locations with architectural state checkpoints. To replace functional warming, these checkpoints must additionally provide microarchitectural model state that is accurate and reusable across experiments while meeting tight storage constraints.

In this paper, we present a simulation-sampling framework that replaces functional warming with live-points without sacrificing accuracy. A live-point stores the bare minimum of functionally-warmed state for accurate simulation of a limited execution window while placing minimal restrictions on microarchitectural configuration. Live-points can be processed in random rather than program order, allowing simulation results and their statistical confidence to be reported while simulations are in progress. Our framework matches the accuracy of prior simulation-sampling techniques (i.e., ±3% error with 99.7% confidence), while estimating the performance of an 8-way out-of-order superscalar processor running SPEC CPU2000 in 91 seconds per benchmark, on average, using a 12 GB live-point library.

1. INTRODUCTION

Computer architecture research routinely employs detailed cycle-accurate simulation to explore and validate microarchitectural innovations. Ideally, simulation studies should use the same benchmarks used to assess real hardware. Unfortunately, benchmark applications that are tuned to run for minutes on real hardware can require over a month to execute on today's high-performance microarchitecture simulators [2,11,26,30].

Past research advocates sampling [5,9,19,20,32,37,38]—that is, measuring only a subset of benchmark execution—as a technique to accelerate microarchitecture simulation. Many such studies advocate uniform sampling using rigorous statistical theory [5,9,20,37] to provide explicit validation that the measured portions accurately represent the behavior of a benchmark.

A recent study of prevailing simulation-sampling approaches by Yi et al. [38] concluded that the SMARTS simulation-sampling approach [37] provides the highest estimation accuracy. The SMARTS design minimizes instructions simulated by measuring a large number (e.g., 10,000) of brief (e.g., 1000-instruction) simulation windows. SMARTS avoids measurement error from cold state by continuously warming large microarchitectural structures (e.g., caches and the branch predictor) while functionally simulating the billions of instructions between measurements, a warming strategy referred to as functional warming.

Although functional warming enables accurate performance estimation, it limits SMARTS's speed, occupying more than 99% of simulation runtime.
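As a rough illustration of how such uniform-sampling studies size their samples, the textbook confidence-interval formula below relates target error and confidence to the number of measurement windows; this is a back-of-the-envelope sketch under a simple-random-sampling assumption, not the rigorous treatment (which, with per-benchmark variability data, appears in [37]).

```python
import math

def required_sample_size(cv, eps=0.03, z=3.0):
    """cv: coefficient of variation (stddev/mean) of per-window CPI.
    eps: target relative error (±3%).  z: 3 sigma ~ 99.7% confidence."""
    return math.ceil((z * cv / eps) ** 2)

# A benchmark whose 1000-instruction windows have CPI variability cv = 1.0
# needs 10,000 measurements; cv = 0.5 needs only 2,500.
print(required_sample_size(1.0), required_sample_size(0.5))
```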
Functional warming dominates simulation time because the entire benchmark's execution must be functionally simulated, even though only a tiny fraction of the execution is simulated using detailed microarchitecture timing models.

The second shortcoming of SMARTS is that functional warming requires simulation time proportional to benchmark length rather than sample size. As a result, the overall runtime of a SMARTS experiment remains constant even when the measured sample size is reduced by relaxing an experiment's statistical confidence requirements or through recently-proposed sampling optimizations such as matched-pair comparison [9] and stratified sampling [36]. Moreover, functional warming time will increase with the advent of new benchmark suites, such as SPEC CPU2006, that lengthen benchmarks to scale with hardware performance improvement [33]. Optimizations that accelerate functional warming, such as direct execution [4], do not improve SMARTS's scaling behavior.

In this paper, we propose live-points as a replacement for functional warming to provide reduced simulation turnaround time, proportional to sample size, without sacrificing accuracy. A live-point stores the necessary data to reconstruct warm state for a simulation-sampling execution window. Although modern computer architecture simulators frequently provide checkpoint creation and loading capabilities [2,23], current checkpoint implementations: (1) do not provide complete microarchitectural model state, and (2) cannot scale to the required checkpoint library size (~10,000 checkpoints per benchmark) because of multi-terabyte storage requirements.

We address the first limitation of conventional checkpoints by storing selected microarchitectural state in live-points, an approach we call checkpointed warming. The key challenge of checkpointed warming lies in storing microarchitectural state such that live-points can still simulate the range of microarchitectural configurations of interest. Fortunately, previous studies have shown that, with the exception of the branch predictor and memory hierarchy, the vast majority of microarchitectural state can be reconstructed dynamically with minimal simulation (a few thousand instructions), and thus need not be stored [37]. For the exceptional structures, researchers can often place limits on the configurations of interest (e.g., through trace-based studies). We design checkpointed warming to reproduce these structures under user-specified limits.

We reduce the size of conventional checkpoints by three orders of magnitude by storing in live-points only the subset of state necessary for limited execution windows, an approach we call live-state. Live-state exploits the brevity of simulation-sampling execution windows (thousands of instructions) to omit the vast majority of state. The minimal state subset can be known a priori only for the commit instruction stream, and is not known for wrong-path (speculative) instructions. However, whereas wrong-path instruction latency affects scheduling through pipeline resource contention, wrong-path operand values rarely affect instruction throughput. We exploit this observation by storing only the state required for correct-path execution and approximate wrong-path scheduling.
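Putting the preceding requirements together, a live-point must carry just enough architectural state for a short window plus warmed long-history state. The layout below is an illustrative sketch assembled from those requirements, not the authors' actual on-disk format; all field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class LivePoint:
    start_pc: int            # first instruction of the execution window
    window_len: int          # detailed warming + 1000 measured instructions
    registers: dict          # architectural register file
    touched_memory: dict     # only the words the window will actually read
    cache_set_records: list = field(default_factory=list)        # CSRs (Section 4.3)
    branch_predictor_images: list = field(default_factory=list)  # one per configuration
```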
We present results from a live-point-enabled simulator derived from SimpleScalar 3.0 sim-outorder [2] simulating the execution of the SPEC CPU2000 (SPEC2K) benchmarks on two microarchitectural configurations to show:

• Accelerated simulation with practical storage. Live-point simulation sampling is over 250 times faster than existing simulation-sampling approaches (on average 91 seconds per benchmark) for an 8-way out-of-order superscalar while maintaining the estimated CPI error at ±3% with 99.7% confidence. Although functional warming produces an aggregate of 36 TB of state while sampling SPEC2K, a gzip-compressed SPEC2K live-point library supporting 1 MB caches requires just 12 GB of storage.

• Parallel simulation and online results. We construct independent live-points that can be processed in parallel and in an arbitrary order. By randomizing the processing order, we can report unbiased results and their statistical confidence continuously during simulation (see the sketch below). As more live-points are processed, results converge toward their final values and confidence improves. In contrast, simulators that use functional warming cannot report results until simulation is complete and require strict program-order simulation to allow for unbiased sampling, preventing parallel simulation.

• Reusable live-point libraries. We ensure reusability of a fixed-size live-point library across comparative performance studies that have unpredictable sample size requirements using matched-pair sample comparison. Individual live-points can simulate a wide range of microarchitectural configurations using our checkpointed warming approach. Our live-points constrain only the configuration of the branch predictor (to a user-selected set of alternatives) and the cache/TLB hierarchy (through user-selected upper bounds on size and associativity). Our results demonstrate that checkpointed warming is more accurate (1.6% worst-case CPI bias) than currently-known checkpoint-based alternatives that do not constrain microarchitectural configuration (5.4% worst-case CPI bias).

This paper is organized as follows. Section 2 presents background on functional warming. We present our methodology in Section 3. In Section 4, we compare checkpointed warming to alternative warming methods in terms of accuracy, flexibility, and speed. We describe live-state, our storage approach for live-points, in Section 5, and present the live-point experiment framework in Section 6. Section 7 presents performance results and analysis. Related work is described in Section 8. We conclude in Section 9.
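The random-order processing and online confidence reporting described in the second bullet can be sketched as follows; `simulate(lp)` stands in for detailed simulation of one live-point (assumed to return that window's CPI), and the running confidence interval uses standard normal-approximation arithmetic.

```python
import random
import statistics

def run_sample(live_points, simulate, eps=0.03, z=3.0, min_n=30):
    """Process live-points in random order, reporting a running CPI
    estimate and its z-sigma confidence half-width; stop early once the
    target precision is reached. Independent live-points could equally
    be farmed out to parallel workers."""
    random.shuffle(live_points)          # any prefix is an unbiased sample
    cpis = []
    for lp in live_points:
        cpis.append(simulate(lp))
        n = len(cpis)
        if n < min_n:
            continue
        mean = statistics.fmean(cpis)
        half_width = z * statistics.stdev(cpis) / (n ** 0.5)
        print(f"n={n}  CPI={mean:.3f} ± {half_width:.3f}")
        if half_width <= eps * mean:     # e.g., ±3% at 99.7% confidence
            break
    return statistics.fmean(cpis)
```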
2. BACKGROUND

Simulation sampling derives estimates of performance (CPI, power, etc.) of benchmark applications on a simulated microarchitecture from measurements of a sample of the benchmark's dynamic instruction stream. By choosing the measured sample according to established statistical sampling methods [16], simulation sampling can rely on statistical measures of confidence to validate that estimated results represent the behavior of the full benchmark.

Although statistics provides us with probabilistic guarantees that estimated results are representative, these guarantees do not assure us that estimated results are error-free. Errors introduced into the individual measurements that make up a sample (e.g., by the measurement methodology) are referred to as bias, and are not accounted for by statistical confidence calculations. In simulation sampling, the most common cause of bias is the cold-start effect of unwarmed microarchitectural structures. For example, assuming empty caches may result in incorrectly low performance estimates.

The primary challenge in simulation sampling is to devise a strategy to construct accurate initial state rapidly. For each measurement, the simulator must construct both architectural state (e.g., register and memory values) and microarchitectural state (e.g., pipeline components and the cache hierarchy) to avoid cold-start bias. A recent survey of simulation-sampling approaches [38] concluded that the SMARTS simulation-sampling approach [37] provides the highest estimation accuracy.

SMARTS uses a two-tiered strategy to construct every measurement's initial state, as depicted in Figure 1. Prior to each measurement, microarchitectural structures for which current state reflects the history of a small, bounded set of recent instructions—such as the reorder buffer or issue queue—are warmed through detailed warming: brief simulation (e.g., a few thousand instructions) of the complete detailed performance model, sufficient to warm such small structures. We refer to adjacent detailed warming and measurement intervals as a detailed window.

Figure 1. SMARTS two-tier warming strategy. Functional warming dominates runtime because it must cover billions of instructions.

The second component of the SMARTS warming strategy, functional warming, addresses state updates between two detailed windows. Like other simulation-sampling frameworks [19,22,32,35], SMARTS functionally simulates each instruction to update architectural state. To minimize and bound detailed warming requirements, SMARTS continuously updates structures with microarchitectural state that persists across detailed windows—caches, TLBs, and branch predictors. These structures cannot be warmed sufficiently by a brief detailed warming period.

Unfortunately, as proposed, functional warming is a performance bottleneck in simulation sampling [12,37]. Given typical cycle-accurate simulation models (e.g., SimpleScalar sim-outorder [2]), the performance measurement of a wide-issue out-of-order superscalar processor using the SMARTS strategy requires little detailed simulation: typically about a minute on a modern host machine. A SMARTS-based simulator's total runtime, however, is orders of magnitude longer because the functional warming between detailed windows dominates runtime.

Unlike functional warming, live-point simulation time is directly proportional to sample size. Sample size depends only on a processor's performance variability across a benchmark's execution and the desired statistical confidence [22,37].

3. METHODOLOGY

We evaluate live-points in a sampling simulator based on the SimpleScalar 3.0 sim-outorder simulator [2] for the Alpha ISA. We modify sim-outorder's memory subsystem to include a store buffer and miss status holding registers (MSHRs), and model interconnect bottlenecks in the memory hierarchy. We encode live-points using the ASN.1 DER format [15] and gzip compression, which incur minimal storage and processing-time overhead. We use all 26 SPEC2K benchmarks [13] and evaluate all reference inputs except vpr-place and three perlbmk inputs, as these inputs fail to simulate correctly in sim-outorder. Overall, we include 41 benchmark/input set combinations in this study.

Without loss of generality, we use CPI (cycles per instruction) as our target metric for estimation. Simulation sampling, however, has been shown to be applicable to other performance metrics of choice [37,38]. We measure CPI bias by averaging actual error (relative to full sim-outorder simulations) over five different samples, according to the methodology described in [37].
We evaluate live-points with two microarchitectural configurations. Our baseline 8-way out-of-order superscalar model represents a processor in the current technology generation. The 16-way out-of-order superscalar configuration is included to reflect an aggressive future design point. This configuration has a wider datapath, a larger out-of-order window, and larger caches, to exercise the effects of enlarged microarchitectural state. The details of the 8-way and 16-way configurations are summarized in Table 1.

Table 1. Microarchitectural configurations.

Parameter            8-way (baseline)                         16-way
RUU/LSQ size         128/64                                   256/128
Memory system        32 KB 2-way L1I/D, 2 ports, 8 MSHRs,     64 KB 2-way L1I/D, 4 ports, 16 MSHRs,
                     1 MB 4-way L2, 16-entry store buffer     4 MB 8-way L2, 32-entry store buffer
L1/L2 line size      32/128 bytes                             32/128 bytes
L1/L2/mem latency    1/12/100 cycles                          2/16/100 cycles
ITLB/DTLB            4-way 128 entries / 4-way 256 entries,   4-way 128 entries / 4-way 256 entries,
                     200 cycle miss                           200 cycle miss
Functional units     4 I-ALU, 2 I-MUL/DIV, 2 FP-ALU,          16 I-ALU, 8 I-MUL/DIV, 8 FP-ALU,
                     1 FP-MUL/DIV                             4 FP-MUL/DIV
Branch predictor     Combined, 2K tables, 7 cycle mispred.,   Combined, 8K tables, 10 cycle mispred.,
                     1 prediction/cycle                       2 predictions/cycle
Detailed warming     2000 instructions                        4000 instructions

We use the sampling approach from [37], periodic 1000-instruction measurement intervals, to identify measurement locations for all experiments. This sample design has been demonstrated to minimize the total number of instructions in detailed windows, and thus detailed simulation time. However, live-points can also be applied to other sample designs (e.g., random sampling). We choose sample size to achieve precisely 99.7% confidence of ±3% error for each result. We report simulation runtimes for systems with 2.80 GHz Intel Xeon (512 KB L2) processors.
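A sketch of how detailed-window locations might be laid out under this periodic design appears below: n measurement intervals of 1000 instructions spread evenly over the benchmark, each preceded by detailed warming (2000 instructions for the 8-way model, per Table 1). The centering of each window within its period and the helper name are illustrative assumptions, not the SMARTS implementation.

```python
def detailed_windows(benchmark_len, n, measure=1000, warm=2000):
    """Yield (warm_start, measure_start, measure_end) instruction counts
    for n periodic detailed windows over a benchmark."""
    period = benchmark_len // n
    for i in range(n):
        m_start = i * period + (period - measure) // 2  # placed mid-period
        yield max(0, m_start - warm), m_start, m_start + measure

# e.g., 10,000 windows over a 100-billion-instruction benchmark
wins = list(detailed_windows(100_000_000_000, 10_000))
```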
4. WHY CHECKPOINTED WARMING?

Functional warming repeats architectural state updates across different simulations of the same benchmark. (Simulating workloads for which architectural state varies across repeated runs—i.e., because of interrupt timing or different interleaving of multiprocessor instruction streams—is beyond the scope of this work.) Frequently, microarchitectural state updates are also identical across runs. Checkpoints can memoize the redundant calculation across runs, amortizing the one-time cost of computing warmed state. We are interested in finding the best way to take advantage of checkpoints to accelerate warming.

Although some microarchitecture studies have suggested or used checkpoints to accelerate simulation [1,9,10,28], none have explored the space of microarchitecture warming solutions in the context of checkpointing. For each portion of model state generated by functional warming, we may choose either to construct the state dynamically or to store it in checkpoints. This choice impacts simulation sampling along three dimensions: the accuracy of the warmed state, the reusability of checkpoints across microarchitectural configurations, and the speed of simulation. In this section, we explore the warming-method design space with respect to these three dimensions and justify our choice of checkpointed warming to implement live-points.

4.1 Simulation sampling warming methods

There is a rich design space of possible warming strategies that combine checkpoints and dynamic warming for various portions of architectural and microarchitectural model state. We restrict our exploration to strategies that use detailed warming to initialize queue and pipeline state. Detailed warming can reconstruct state for the vast majority of microarchitectural structures rapidly, and the amount of required warming can be determined via worst-case analysis [37]. By warming most structures dynamically, we avoid storing any state for these structures and do not constrain model parameters that affect this state.

Evaluation criteria. We focus our design exploration on warming alternatives for long-history structures, such as caches and branch predictors, for which detailed warming is prohibitively slow. We evaluate alternatives based on their accuracy, checkpoint reusability, and speed.

With respect to accuracy, we consider only the bias introduced by the warming strategy. SMARTS demonstrated low bias—0.6% on average, 1.6% worst case [37]—using functional warming. It is essential to maintain this high accuracy when accelerating warming because we cannot detect bias through statistical confidence calculations.

We evaluate the reusability of a warming methodology in terms of the restrictions it places on simulator configuration. When we store the warmed state of microarchitectural structures in a checkpoint, we may be forced to limit some of the configuration parameters for that structure.

Finally, we evaluate the speed of warming alternatives in two ways. First, we consider how fast measurements can be processed. For all alternatives, time to simulate the detailed window is the same, while functional warming and checkpoint decompression/loading time varies. Second, we consider whether detailed windows are independent or must be simulated in program order. Independent windows can be simulated in parallel and enable online reporting of measurement results.

Warming methods. Figure 2 depicts alternatives in the warming strategy design space. At one extreme, functional warming is used for the entire duration between measurements, without checkpoints (as in SMARTS). We refer to this method as full warming. The opposite extreme, checkpointed warming, eliminates all functional warming and stores long-history state in checkpoints. This approach requires limiting some design parameters of the checkpointed structures.

Figure 2. Simulation sampling warming methods. All methods use the same sample design and confidence intervals; only bias differs.

Functionally warming microarchitectural state for the entire duration between measurements is usually not necessary. In adaptive warming, we store architectural state in checkpoints and reconstruct long-history state with a reduced functional warming period. Adaptive warming requires a mechanism to determine precisely how little functional warming each detailed window requires.

Trade-offs. Figure 3 illustrates the relationship between each warming alternative and our three evaluation criteria. Each alternative optimizes for two of the design criteria (the two depicted nearest it), at the expense of the third.

Figure 3. Relative merits of warming methods.

Full warming maximizes accuracy and flexibility, but its need for long periods of functional warming makes it slow, and its turnaround time scales with benchmark length. As full warming requires no checkpoints, no configuration parameters are fixed.

Adaptive warming maintains the reusability of full warming and improves speed, but we show that it sacrifices accuracy. The accuracy and speed of adaptive warming depend on a rigorous determination of the minimal functional warming period for each detailed window. Unfortunately, determining the correct amount of warming remains a difficult and unsolved problem [18].
Checkpointed warming matches the accuracy of full warming and maximizes speed, at the expense of checkpoint reusability. Checkpointed warming achieves this accuracy because it uses a full-warming simulation to generate the checkpointed state.

Because checkpointed warming spends no time performing functional warming, it is the fastest alternative. The drawback of checkpointed warming is that it imposes limits on some aspects of the simulated microarchitectural parameters (e.g., the maximum size or associativity of a cache), which constrains checkpoint reusability. Reusability is important because we must amortize the one-time cost of checkpoint creation (roughly the cost of a full-warming simulation) over a series of experiments.

Each of the three warming approaches suffers from a different key weakness. The speed of full warming has been quantified in [37]. We evaluate the accuracy of adaptive warming in Section 4.2. We then explore the reusability of checkpointed warming in Section 4.3.

4.2 Adaptive warming

The key challenge of achieving accuracy with adaptive warming lies in determining the functional warming period length. If the warming period is underestimated, simulation results will be biased. If the warming period is overestimated, we sacrifice simulation speed.

A recently-proposed technique for determining cache warming requirements is Memory Reference Reuse Latency (MRRL) [12]. MRRL collects a histogram of memory access reuse distances between each pair of detailed windows during a functional simulation of a benchmark. The warming length reported by MRRL is the length sufficient to cover 99.9% of the observed reuse distances. This probabilistic bound on cache warming requirements is configuration independent, because reuse latency is measured by instruction count in a functional simulator. The MRRL analysis outputs specific warming lengths (in instructions) for each detailed window, and must be run once per benchmark and sample design. The offline analysis pass takes roughly the same time as a full-warming simulation.

MRRL has demonstrated low bias on large detailed windows (worst-case error of 2% for 50-million-instruction windows). This paper evaluates MRRL on the small detailed windows required by the optimal sample design. Small windows are more susceptible to bias because warming errors are not amortized over a large measurement interval.

We evaluate MRRL with a reuse probability of 99.9%, as recommended in [12]. This reuse probability results in an average of 4.1 million instructions of warming prior to each detailed window, which is 20% of the average full warming interval (20.5 million instructions). Thus, an approximation for the runtime of the adaptive warming strategy is 20% of the functional warming time of SMARTS, plus detailed simulation time, or about 1.5 hours on average per benchmark (8-way).

We present the results of our accuracy evaluation of adaptive warming with MRRL for small windows in Figure 4. Both average (1.1%) and worst-case (5.4%) error are considerably worse than full warming (0.6% on average; 1.6% worst case). Error is high because short detailed windows are sensitive to accurate cache state.

Figure 4. CPI bias introduced by adaptive warming using MRRL vs. full warming.
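The MRRL analysis just described reduces to a percentile over observed reuse latencies. The sketch below condenses it by collapsing MRRL's per-window histograms into a single per-benchmark warming length, and assumes a trace of (instruction count, block address) pairs; it is an illustration of the idea, not the MRRL tool.

```python
def mrrl_warming_length(trace, coverage=0.999):
    """Warming length (in instructions) covering `coverage` of the
    observed block reuse latencies in an (icount, block) trace."""
    last_access = {}
    reuse = []
    for icount, block in trace:
        if block in last_access:
            reuse.append(icount - last_access[block])
        last_access[block] = icount
    reuse.sort()
    return reuse[int(coverage * (len(reuse) - 1))]  # 99.9th-percentile latency

# Toy trace: 32 blocks touched round-robin, so every reuse latency is 64.
trace = [(i, i % 64) for i in range(0, 100_000, 2)]
print(mrrl_warming_length(trace))
```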
MRRL does not allow detailed windows to be simulated independently, because cache state must be stitched [18] between consecutive windows. To obtain low bias, detailed windows must be simulated in program order, precluding parallelization and online result reporting (see Section 6). If MRRL is used without stitched state (thereby assuming an empty cache at the start of each functional warming period), we observe a considerably higher CPI bias of 1.9% on average, with a worst case of 11%.

Because of the high worst-case error and relatively modest speedup of adaptive warming, we do not choose adaptive warming to implement live-points. Increasing warming over MRRL (or increasing the MRRL reuse probability threshold) will improve accuracy, but further reduces the speed of adaptive warming.

4.3 Checkpointed warming

The key concern in evaluating checkpointed warming is the reusability of a set of checkpoints across a series of experiments. Because checkpointed warming uses a full-warming simulation to generate microarchitectural state for large structures, its accuracy is identical to full warming. When the generated live-points can be used for at least two experiments, checkpointed warming provides a net speed gain over full warming.

To maximize the reusability of live-points, we wish to place as few constraints as possible on microarchitectural configuration. Checkpointed warming dynamically reconstructs the vast majority of microarchitectural structures (e.g., queues, ROB, etc.) through detailed warming. As such, the configurations of these dynamically-warmed structures are not constrained. For the remaining few structures, for which detailed warming requirements are large or cannot be determined (e.g., caches and branch predictors), we store a representation of the structure in each live-point. The reusability of a live-point library is limited by the flexibility of these representations.

There are two basic approaches to increasing live-point reusability. First, we can collect state snapshots for multiple component configurations in a single creation pass. The second, preferable approach is to modify the saved representation such that a range of organizations can be reconstructed when a live-point is loaded. However, we cannot easily apply this adaptable approach to some structures, such as modern branch predictors, and so we must store multiple warmed configurations. Cache-like structures, including the TLB, can typically be stored using adaptable data structures.

Storing multiple configurations. The first approach is straightforward and effective if the number of configurations of interest is relatively small. The major cost of live-point creation is the traversal of the entire benchmark instruction stream. Warming additional copies of a microarchitectural structure incurs a relatively small overhead. If the slowdown is less than a factor of two, it is a net win to collect state for both configurations in a single pass. We recommend this approach for storing branch predictor state.

Storing adaptable warmed state. With cache-like structures, it is possible to exploit the properties of cache replacement algorithms to create a representation of cache state from which one can accurately reconstruct a range of configurations [14]. Barr et al. propose a data structure, called the Memory Timestamp Record (MTR), that records the timestamp of the last access to each cache block during functional warming [1]. The MTR allows a simulator to reconstruct a cache hierarchy of arbitrary sizes and associativities, assuming least-recently-used replacement and a lower bound on cache block size.
Storing an MTR in each live-point enables reusability across nearly arbitrary cache hierarchy organizations, but incurs a storage cost proportional to the application's memory footprint. However, researchers can often place an upper bound on the maximum cache size of interest. For a given maximum size and associativity, we can instead store a timestamp-sorted list of the most recent accesses mapping to each set, referred to as a Cache Set Record (CSR) by Barr et al. [1]. A CSR requires the same storage as the tag array for the selected maximum cache size, and allows reconstruction of all smaller and/or less associative caches.

Our analysis of simulation sampling warming methods demonstrates that checkpointed warming is both fast and accurate. The reusability weakness of checkpointed warming can be mitigated through careful planning of microarchitectural state representation. Thus, we choose to use checkpointed warming to implement live-points.

5. LIVE-POINTS WITH LIVE-STATE

Current publicly-available computer architecture simulators already provide a checkpoint creation and loading capability that allows the simulator to move to a particular program trace location in constant time [2,23]. These checkpoint implementations store only architecturally-visible system state (i.e., memory, architectural register and peripheral device state). A straightforward approach to implement checkpointed warming is to extend these existing checkpoints with functionally-warmed microarchitectural state as described in Section 4.3.
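A minimal sketch of the CSR idea from Section 4.3 follows: during live-point creation, each set of the user-selected maximum configuration keeps its most recent distinct block addresses in access order; at load time, any smaller and/or less-associative LRU cache is rebuilt by replaying the records in global access order. This is an illustration of the data structure's behavior, not Barr et al.'s implementation.

```python
from collections import OrderedDict

class CacheSetRecord:
    """Per-set record of recent distinct block addresses, sized for a
    maximum configuration of max_sets x max_assoc (LRU assumed)."""

    def __init__(self, max_sets, max_assoc):
        self.max_sets, self.max_assoc = max_sets, max_assoc
        self.sets = [OrderedDict() for _ in range(max_sets)]  # addr -> timestamp
        self.now = 0

    def access(self, addr):                    # called during live-point creation
        self.now += 1
        s = self.sets[addr % self.max_sets]
        s.pop(addr, None)
        s[addr] = self.now                     # most recent entry at the end
        if len(s) > self.max_assoc:
            s.popitem(last=False)              # forget blocks beyond max ways

    def reconstruct(self, n_sets, assoc):      # called at live-point load time
        assert n_sets <= self.max_sets and assoc <= self.max_assoc
        merged = sorted((t, a) for s in self.sets for a, t in s.items())
        cache = [OrderedDict() for _ in range(n_sets)]
        for _, a in merged:                    # replay in global access order
            c = cache[a % n_sets]
            c.pop(a, None)
            c[a] = None
            if len(c) > assoc:
                c.popitem(last=False)          # LRU eviction
        return [list(c) for c in cache]        # per-set contents, LRU first

# Example: record for an 8-way, 1024-set maximum; rebuild a 4-way, 512-set cache.
csr = CacheSetRecord(max_sets=1024, max_assoc=8)
for addr in (0, 1024, 0, 2048, 3):
    csr.access(addr)
print(csr.reconstruct(n_sets=512, assoc=4)[:2])
```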