The coordination generalized particle model—An evolutionary approach to multi-sensor fusion


Some templates for English peer-review comments


I am currently refereeing an English-language manuscript for the first time and am not quite sure how to phrase my comments. Since this is my very first review, I do not intend to reject the paper; I will recommend acceptance after a major revision. Below are some fragments of reference material collected from the web.

1. The goals and results are unclear:
It is noted that your manuscript needs careful editing by someone with expertise in technical English editing, paying particular attention to English grammar, spelling, and sentence structure, so that the goals and results of the study are clear to the reader.

2. The research methods are not explained, or are explained insufficiently:
In general, there is a lack of explanation of the replicates and statistical methods used in the study. Furthermore, an explanation of why the authors did these various experiments should be provided.

3. On the rationale for the study design:
Also, there are few explanations of the rationale for the study design.

4. Overstated conclusions / exaggerated results / lack of rigor:
The conclusions are overstated. For example, the study did not show if the side effects from the initial copper burst can be avoided with the polymer formulation.

5. On clearly stating the hypothesis:
A hypothesis needs to be presented.

Measurement of the Cross-Section for the Process Gamma-Gamma to Proton-Antiproton at sqrt(s_ee) = 183 GeV and 189 GeV


arXiv:hep-ex/0307066v1 25 Jul 2003

Measurement of the cross-section for the process γγ → pp̄ at √s_ee = 183 GeV and 189 GeV with the OPAL detector at LEP. Results are presented for pp̄ invariant masses, W, in the range 2.15 < W < 3.95 GeV. The cross-section measurements are compared with previous data and with recent analytic calculations based on the quark-diquark model.

1. INTRODUCTION

The exclusive production of proton-antiproton (pp̄) pairs in the collision of two quasi-real photons can be used to test predictions of QCD. At LEP the photons are emitted by the beam electrons and the pp̄ pairs are produced in the process e+e− → e+e−γγ → e+e−pp̄.

The application of QCD to exclusive photon-photon reactions is based on the work of Brodsky and Lepage [1]. Calculations based on this ansatz [2,3] use a specific model of the proton's three-quark wave function by Chernyak and Zhitnitsky [4]. This calculation yields cross-sections about one order of magnitude smaller than the existing experimental results [5,6,7,8,9,10,11] for pp̄ centre-of-mass energies W greater than 2.5 GeV. To model non-perturbative effects, the introduction of quark-diquark systems has been proposed [12].

Recent studies [13] have extended the systematic investigation of hard exclusive reactions within the quark-diquark model to photon-photon processes [14,15,16,17]. The calculations of the integrated cross-section for the process γγ → pp̄ in the angular range |cos θ*| < 0.6 (where θ* is the angle between the proton's momentum and the electron beam direction in the pp̄ centre-of-mass system) and for W > 2.5 GeV are in good agreement with existing experimental data. The measurements presented here use data taken at √s_ee = 183 GeV and 189 GeV at LEP. The integrated luminosities for the two energies are 62.8 pb−1 and 186.2 pb−1.

2. EVENT SELECTION

The e+e− → e+e−pp̄ events are selected by the following set of cuts:

1. The sum of the energies measured in the barrel and endcap sections of the electromagnetic calorimeter must be less than half the beam energy.

2. Exactly two oppositely charged tracks are required, with each track having at least 20 hits in the central jet chamber to ensure a reliable determination of the specific energy loss dE/dx. The point of closest approach to the interaction point must be less than 1 cm in the rφ plane and less than 50 cm in the z direction.

3. For each track the polar angle must be in the range |cos θ| < 0.75 and the transverse momentum p⊥ must be larger than 0.4 GeV. These cuts ensure a high trigger efficiency and good particle identification.

4. The invariant mass W of the pp̄ final state must be in the range 2.15 < W < 3.95 GeV. The invariant mass is determined from the measured momenta of the two tracks using the proton mass.

5. The events are boosted into the rest system of the measured pp̄ final state. The scattering angle of the tracks in this system has to satisfy |cos θ*| < 0.6.

6. All events must fulfil the trigger conditions described in [11].

7. The large background from other exclusive processes, mainly the production of e+e−, µ+µ−, and π+π− pairs, is reduced by particle identification using the specific energy loss dE/dx in the jet chamber and the energy in the electromagnetic calorimeter. The dE/dx probabilities of the tracks must be consistent with the p and p̄ hypotheses.

The selection yields event samples at √s_ee = 183 GeV and 189 GeV (128 events at 189 GeV).

The differential cross-section is determined from the number of selected events N_ev(W, |cos θ*|) in each bin, corrected for the selection efficiency ε and normalised to the integrated luminosity L and the bin widths ΔW and Δ|cos θ*|:

dσ(e+e− → e+e−pp̄) / (dW d|cos θ*|) = N_ev(W, |cos θ*|) / (ε L ΔW Δ|cos θ*|).   (1)

The cross-section σ(γγ → pp̄) at √s_ee is obtained from the differential cross-section dσ(e+e− → e+e−pp̄)/dW using the luminosity function dL_γγ/dW [20]:

σ(γγ → pp̄) = [dσ(e+e− → e+e−pp̄)/dW] / [dL_γγ/dW].   (2)

The luminosity function dL_γγ/dW is calculated by the Galuga program [21]. The resulting differential cross-sections for the process γγ → pp̄ in bins of W and |cos θ*| are then summed over |cos θ*| to obtain the total cross-section as a function of W for |cos θ*| < 0.6.

4. RESULTS AND DISCUSSION

The measured cross-sections [11] as a function of W are shown in Fig. 1. The average W in each bin has been determined by applying the procedure described in [22]. The measured cross-sections σ(γγ → pp̄) for 2.15 < W < 3.95 GeV and for |cos θ*| < 0.6 are compared with the results obtained by ARGUS [8], CLEO [9] and VENUS [10] in Fig. 1a, and to the results obtained by TASSO [5], JADE [6] and TPC/2γ [7] in Fig. 1b. The quark-diquark model predictions [13] are also shown. Reasonable agreement is found between this measurement and the results obtained by other experiments for W > 2.3 GeV. At lower W our measurements agree with the measurements by JADE [6] and ARGUS [8], but lie below the results obtained by CLEO [9] and VENUS [10]. The cross-section measurements reported here extend towards higher values of W than previous results.

Fig. 1c shows the measured γγ → pp̄ cross-section as a function of W together with some predictions based on the quark-diquark model [12,13]. There is good agreement between our results and the older quark-diquark model predictions [12]. The most recent calculations [13] lie above the data, but within the estimated theoretical uncertainties the predictions are in agreement with the measurement.

An important consequence of the pure quark hard scattering picture is the power law which follows from the dimensional counting rules [23,24]. The dimensional counting rules state that an exclusive cross-section at fixed angle has an energy dependence connected with the number of hadronic constituents participating in the process under investigation. We expect that for asymptotically large W and fixed |cos θ*|

dσ(γγ → pp̄)/dt ∼ W^(2(2−n)),   (3)

where n = 8 is the number of elementary fields and t = −(W²/2)(1 − |cos θ*|). The introduction of diquarks modifies the power law by decreasing n to n = 6. This power law is compared to the data in Fig. 1c with σ(γγ → pp̄) ∼ W^(−2(n−3)), using three values of the exponent n: fixed values n = 8 and n = 6, and the fitted value n = 7.5 ± 0.8 obtained by taking into account statistical uncertainties only. More data covering a wider range of W would be required to determine the exponent n more precisely.

The measured differential cross-sections dσ(γγ → pp̄)/d|cos θ*| in different W ranges and for |cos θ*| < 0.6 are shown in Fig. 2. The differential cross-section in the range 2.15 < W < 2.55 GeV lies below the results reported by VENUS [10] and CLEO [9] (Fig. 2a). Since the CLEO measurements are given for the lower W range 2.0 < W < 2.5 GeV, we rescale their results by a factor 0.635, which is the ratio of the two CLEO total cross-section measurements integrated over the W ranges 2.0 < W < 2.5 GeV and 2.15 < W < 2.55 GeV. This leads to a better agreement between the two measurements, but the OPAL results are still consistently lower. The shapes of the |cos θ*| dependence of all measurements are consistent, apart from the highest |cos θ*| bin, where the OPAL measurement is significantly lower than the measurements of the other two experiments.

In Fig. 2b-c the differential cross-sections dσ(γγ → pp̄)/d|cos θ*| in the W ranges 2.35 < W < 2.85 GeV and 2.55 < W < 2.95 GeV are compared to the measurements by TASSO, VENUS and CLEO in similar W ranges. The measurements are consistent within the uncertainties.

The comparison of the differential cross-section as a function of |cos θ*| for 2.55 < W < 2.95 GeV with the calculation of [13] at W = 2.8 GeV for different distribution amplitudes (DA) is shown in Fig. 3a. The shapes of the curves of the pure quark model [2,3] and the quark-diquark model predictions [13] are consistent with those of the data.

In Fig. 3b the differential cross-section dσ(γγ → pp̄)/d|cos θ*| is shown versus |cos θ*| for 2.15 < W < 2.55 GeV. The cross-section decreases at large |cos θ*|; the shape of the angular distribution is different from that at higher W values. This indicates that for low W the perturbative calculations of [2,3] are not valid.

Figure 2. Differential cross-sections for γγ → pp̄ as a function of |cos θ*| in different ranges of W; a, c) compared with CLEO [9] and VENUS [10] data with statistical (inner error bars) and systematic errors (outer bars), and b) compared with TASSO [5]. The TASSO error bars are statistical only. The data points are slightly displaced for clarity.

Figure 3. Measured differential cross-section, dσ(γγ → pp̄)/d|cos θ*|, with statistical (inner bars) and total uncertainties (outer bars) for a) 2.55 < W < 2.95 GeV and b) 2.15 < W < 2.55 GeV. The data are compared with the point-like approximation for the proton (4), scaled to fit the data. The other curves show the pure quark model [2], the diquark model of [12] with the Dziembowski distribution amplitudes (DZ-DA), and the diquark model of [13] using standard and asymptotic distribution amplitudes.

Another important consequence of the hard scattering picture is the hadron helicity conservation rule. For each exclusive reaction like γγ → pp̄, the sum of the two initial helicities equals the sum of the two final ones [25]. According to the simplification used in [12], only scalar diquarks are considered, and the (anti)proton carries the helicity of the single (anti)quark. Neglecting quark masses, quark and antiquark, and hence proton and antiproton, have to be in opposite helicity states. If the (anti)proton is considered as a point-like particle, simple QED rules determine the angular dependence of the unpolarized γγ → pp̄ differential cross-section [26]:

dσ(γγ → pp̄)/d|cos θ*| ∝ (1 + cos²θ*)/(1 − cos²θ*).   (4)

This expression is compared to the data in two W ranges, 2.55 < W < 2.95 GeV (Fig. 3a) and 2.15 < W < 2.55 GeV (Fig. 3b). The normalisation in each case is determined by the best fit to the data. In the higher W range, the prediction (4) is in agreement with the data within the experimental uncertainties. In the lower W range this simple model does not describe the data. At low W, soft processes such as meson exchange are expected to introduce other partial waves, so that the approximations leading to (4) become invalid [27].

5. CONCLUSIONS

The cross-section for the process e+e− → e+e−pp̄ has been measured in the pp̄ centre-of-mass energy range 2.15 < W < 3.95 GeV using data taken with the OPAL detector at √s_ee = 183 GeV and 189 GeV. The measurements lie below the results obtained by CLEO [9] and VENUS [10], but agree with the JADE [6] and ARGUS [8] measurements. The cross-section as a function of W is in agreement with the quark-diquark model predictions of [12,13].

The power-law fit yields an exponent n = 7.5 ± 0.8, where the uncertainty is statistical only. Within this uncertainty, the measurement is not able to distinguish between predictions for the proton to interact as a state of three quasi-free quarks or as a quark-diquark system. These predictions are based on dimensional counting rules [23,24].

The shape of the differential cross-section dσ(γγ → pp̄)/d|cos θ*| agrees with the results of previous experiments in comparable W ranges, apart from the highest |cos θ*| bin measured in the range 2.15 < W < 2.55 GeV. In this low W region, contributions from soft processes such as meson exchange are expected to complicate the picture by introducing extra partial waves, and the shape of the measured differential cross-section dσ(γγ → pp̄)/d|cos θ*| does not agree with the simple model that leads to the helicity conservation rule. In the high W region, 2.55 < W < 2.95 GeV, the experimental and theoretical differential cross-sections dσ(γγ → pp̄)/d|cos θ*| agree, indicating that the data are consistent with the helicity conservation rule.

REFERENCES

1. G.P. Lepage and S.J. Brodsky, Phys. Rev. D22 (1980) 2157.
2. G.R. Farrar, E. Maina and F. Neri, Nucl. Phys. B259 (1985) 702.
3. Millers and J.F. Gunion, Phys. Rev. D34 (1986) 2657.
4. V.L. Chernyak and I.R. Zhitnitsky, Nucl. Phys. B246 (1984) 52.
5. TASSO Collaboration, M. Althoff et al., Phys. Lett. B130 (1983) 449.
6. JADE Collaboration, W. Bartel et al., Phys. Lett. B174 (1986) 350.
7. TPC/Two Gamma Collaboration, H. Aihara et al., Phys. Rev. D36 (1987) 3506.
8. ARGUS Collaboration, H. Albrecht et al., Z. Phys. C42 (1989) 543.
9. CLEO Collaboration, M. Artuso et al., Phys. Rev. D50 (1994) 5484.
10. VENUS Collaboration, H. Hamasaki et al., Phys. Lett. B407 (1997) 185.
11. OPAL Collaboration, G. Abbiendi et al., Eur. Phys. J. C28 (2003) 45.
12. M. Anselmino, P. Kroll and B. Pire, Z. Phys. C36 (1987) 89.
13. C.F. Berger, B. Lechner and W. Schweiger, Fizika B8 (1999) 371.
14. M. Anselmino, F. Caruso, P. Kroll and W. Schweiger, Int. J. Mod. Phys. A4 (1989) 5213.
15. P. Kroll, M. Schürmann and W. Schweiger, Int. J. Mod. Phys. A6 (1991) 4107.
16. P. Kroll, Th. Pilsner, M. Schürmann and W. Schweiger, Phys. Lett. B316 (1993) 546.
17. P. Kroll, M. Schürmann and P.A.M. Guichon, Nucl. Phys. A598 (1996) 435.
18. OPAL Collaboration, K. Ahmet et al., Nucl. Instr. Meth. A305 (1991) 275.
19. R. Akers et al., Z. Phys. C65 (1995) 47.
20. F.E. Low, Phys. Rev. 120 (1960) 582.
21. G.A. Schuler, Comput. Phys. Commun. 108 (1998) 279.
22. G.D. Lafferty and T.R. Wyatt, Nucl. Instr. Meth. A355 (1995) 541.
23. S.J. Brodsky and G.R. Farrar, Phys. Rev. Lett. 31 (1973) 1153.
24. V.A. Matveev, R.M. Muradian and A.N. Tavkhelidze, Nuovo Cim. Lett. 7 (1973) 719.
25. S.J. Brodsky and G.P. Lepage, Phys. Rev. D24 (1981) 2848.
26. V.M. Budnev, I.F. Ginzburg, G.V. Meledin and V.G. Serbo, Phys. Rep. 15 (1974) 181.
27. S.J. Brodsky, F.C. Erné, P.H. Damgaard and P.M. Zerwas, contribution to ECFA Workshop LEP200, Aachen, Germany, 29 Sep - 1 Oct 1986.
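The power-law comparison above, σ(γγ → pp̄) ∼ W^(−2(n−3)) with a fitted exponent n = 7.5 ± 0.8, amounts to a straight-line fit in log-log space: the slope s of log σ versus log W gives n = 3 − s/2. A minimal sketch of such an exponent fit (the function name and the data points are illustrative inventions, not the OPAL measurements):

```python
import math

def fit_power_law_exponent(W, sigma):
    """Least-squares fit of sigma ~ C * W**(-2*(n-3)).
    In log space: log(sigma) = log(C) - 2*(n-3)*log(W),
    a straight line whose slope s gives n = 3 - s/2."""
    x = [math.log(w) for w in W]
    y = [math.log(s) for s in sigma]
    xbar = sum(x) / len(x)
    ybar = sum(y) / len(y)
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return 3.0 - slope / 2.0

# Illustrative (not measured) points generated with n = 7.5:
W = [2.2, 2.5, 2.8, 3.1, 3.5, 3.9]
sigma = [w ** (-2 * (7.5 - 3)) for w in W]
n_fit = fit_power_law_exponent(W, sigma)  # recovers 7.5 on exact data
```

On real data with statistical errors, one would weight each point by its uncertainty, which is how the quoted ±0.8 arises.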

Primary School Volume 1, Complete English Practice Test No. 12 (with answers)


English Test — Part One: Comprehensive Questions (50 items, 1 point each, 100 points in total; items left blank or answered incorrectly receive no credit.)

1. I want to ___ a new game. (try)
2. 古代的 ________ (societies) 通过贸易与战争相互联系。
3. A windmill converts wind energy into ______.
4. The first computer was created in the _______ century. (20)
5. What is the name of the holiday celebrated on December 25th? A. Thanksgiving B. Easter C. Christmas D. Halloween — Answer: C
6. A ____ is a clever creature that can solve puzzles.
7. A _______ is a type of chemical bond formed by sharing electrons.
8. The __________ (航线) connects different countries.
9. I have a _____ collection of stamps. (large)
10. A non-polar solvent is used to dissolve ______ substances.
11. The Sun's energy drives the Earth's ______.
12. A mixture that contains particles that can be seen is called a _______ mixture.
13. The turtle swims slowly in the _________. (水)
14. I love making ______ (手工艺品) during art class. It's fun to create something unique with my own hands.
15. The dolphin is a very ______ (聪明的) animal.
16. What do we call the middle of the day? A. Morning B. Noon C. Evening D. Night
17. What is the primary color of a tiger's fur? A. Black B. White C. Orange D. Brown
18. I think it's fun to go ________ (参加聚会).
19. The cat is ______ on the couch. (sleeping)
20. Bees help plants by ______ (授粉) their flowers.
21. The __________ is shining brightly.
22. In chemistry, a _______ is a shorthand way to represent a chemical substance. (化学符号)
23. What is the capital of Suriname? A. Paramaribo B. Albina C. Nieuw Nickerie D. Moengo — Answer: A
24. What do we call a group of wolves? A. Pack B. Flock C. Herd D. Swarm — Answer: A
25. I can ______ (count) to fifty.
26. I like to spend time in the ______ (图书馆) because it's quiet and filled with amazing books.
27. Fossil fuels are a major source of ________ energy.
28. My sister is my best _______ who loves to share secrets.
29. A rabbit's diet consists mainly of ______ (胡萝卜).
30. A _______ (骆驼) can go without water for days.
31. The sun rises in the ________.
32. My cousin has a __________ dog. (可爱的)
33. What is the name of the first man on the moon? A. Neil Armstrong B. Buzz Aldrin C. Yuri Gagarin D. John Glenn
34. The ______ is known for her amazing voice.
35. What is the term for the area of space where tiny particles collide and create cosmic rays? A. Cosmic Ray Zone B. Particle Accelerator C. High-Energy Zone D. Collision Zone
36. What do you call the layer of the Earth where we live? A. Crust B. Mantle C. Core D. Atmosphere — Answer: A
37. My favorite animal is a ________ that can fly.
38. Which of these is a cold season? A. Winter B. Summer C. Spring D. Fall
39. What is the capital city of the Maldives? A. Malé B. Addu City C. Fuvahmulah D. Kulhudhuffushi
40. Planting trees helps combat _____ (气候变化).
41. Understanding ______ (植物生态) can help address climate issues.
42. Which tool do we use to measure length? A. Scale B. Ruler C. Thermometer D. Clock — Answer: B
43. The _____ (city/country) is big.
44. A chemical change results in the formation of ______ substances.
45. What is the name of the famous toy in "Toy Story"? A. Buzz Lightyear B. Woody C. Rex D. Jessie — Answer: B
46. The sun is ___ in the east. (rising)
47. A _____ (植物艺术) can inspire creativity and beauty.
48. The _____ (松树) stays green all year round. It is a symbol of strength. (松树四季常绿,是力量的象征。)

Binder cumulants of an urn model and Ising model above critical dimension


arXiv:cond-mat/0201472v1 [cond-mat.stat-mech] 25 Jan 2002

Binder cumulants of an urn model and Ising model above critical dimension

Adam Lipowski 1),2) and Michel Droz 1)
1) Department of Physics, University of Genève, CH 1211 Genève 4, Switzerland
2) Department of Physics, A. Mickiewicz University, 61-614 Poznan, Poland
(February 1, 2008)

Solving numerically the master equation for a recently introduced urn model, we show that the fourth- and sixth-order cumulants remain constant along an exactly located line of critical points. The obtained values are in very good agreement with the values predicted by Brézin and Zinn-Justin for the Ising model above the critical dimension. At the tricritical point the cumulants acquire values which also agree with a suitably extended Brézin and Zinn-Justin approach.

The concept of universality and scale invariance plays a fundamental role in the theory of critical phenomena [1]. It is well known that at criticality the system is characterized by critical exponents. Calculation of these exponents for dimension of the system d lower than the so-called critical dimension d_c is a highly nontrivial task [2]. On the other hand, for d > d_c the behaviour of a given system is much simpler and critical exponents take mean-field values, which are usually simple fractional numbers. However, not everything is clearly understood above the critical dimension. One example is the Ising model (d_c = 4), where despite intensive research serious discrepancies between analytical [3] and numerical [4] calculations still persist. Of particular interest is the value of the Binder cumulant at the critical point. Several years ago Brézin and Zinn-Justin (BJ) calculated this quantity using field theory methods [5], and only recently have numerical simulations for the d = 5 model been able to confirm it [6]. Some other properties of the Ising model above the critical dimension are still poorly explained by existing theories. For example, the theoretically predicted leading corrections to the susceptibility disagree even up to the sign with numerical simulations [4].

In addition to direct simulations of the nearest-neighbour Ising model, there are also other ways to study the critical point of the Ising model above the critical dimension. For example, Luijten and Blöte used the model with d ≤ 3 but with long-range interactions [7]. Using such an approach they confirmed with good accuracy the BJ predictions for the Binder cumulant.

In the present paper we propose yet another approach to the problem of cumulants above the critical dimension. Namely, we calculate fourth- and sixth-order cumulants at the critical point of a recently introduced urn model [8]. Albeit structureless, this model exhibits a mean-field Ising-type symmetry breaking. Along an exactly located critical line, the obtained values are in very good agreement with the values predicted by BJ. Let us notice that our calculations (i) are not affected by the inaccuracy of the location of the critical point, which is a serious problem in the case of the Ising model, and (ii) are based on the numerical solution of the master equation, which offers a much better accuracy than Monte Carlo simulations. Moreover, we calculate these cumulants at the tricritical point and show that the obtained values are also in agreement with suitably extended calculations of BJ. That both the Ising model and the (structureless) urn model have the same cumulants is a manifestation of strong universality above the upper critical dimension: at the critical point not only the lattice structure but also the lattice itself becomes irrelevant. What really matters is the type of symmetry which is broken, and since in both cases it is the same Z_2 symmetry, the equality of cumulants follows.

Our urn model was motivated by recent experiments on the spatial separation of shaken sand [9]. In the present paper we are not concerned with the relation with granular matter, and a more detailed justification of the rules of the urn model is omitted [8]. The model is defined as follows: N particles are distributed between two urns A and B, and the numbers of particles in the urns are denoted M and N − M, respectively. Particles in a given urn (say A) are subject to thermal fluctuations, and the temperature T of the urn depends on the number of particles in it as

T(x) = T0 + Δ(1 − x),   (1)

where x is the fraction of the total number of particles in the given urn and T0 and Δ are positive constants. (For urns A and B, x = M/N and (N − M)/N, respectively.) Next, we define the dynamics of the model [8]: (i) one of the N particles is selected randomly; (ii) with probability exp[−1/T(x)], where x is the fraction of particles in the urn containing the selected particle, the selected particle moves to the other urn.

To characterize the partition of particles we use ε = (2M − N)/(2N) = M/N − 1/2. In the steady state the flux of particles between the urns is balanced,

⟨M⟩ exp[−1/T(⟨M/N⟩)] = ⟨N − M⟩ exp[−1/T(⟨(N − M)/N⟩)],

which in terms of ε reads

(1/2 + ⟨ε⟩) exp[−1/T(1/2 + ⟨ε⟩)] = (1/2 − ⟨ε⟩) exp[−1/T(1/2 − ⟨ε⟩)].

Expanding around the symmetric solution ⟨ε⟩ = 0 locates the critical line exactly,

T0 = √(Δ/2) − Δ/2,  for 0 < Δ < 2/3,

which ends at the tricritical point Δ = 2/3, T0 = √(1/3) − 1/3. Let us notice that the random selection of particles implies the basically mean-field nature of this model. Consequently, at the critical point β = 1/2 and γ ≈ 1 (measured from the divergence of the variance of the order parameter), which are ordinary mean-field exponents. However, the calculation of the dynamical exponent z gives z = 0.50(1) [8], while the mean-field value is 2. We do not have convincing arguments which would explain such a small value of z. Presumably, this fact might be related to the structureless nature of our model.

Defining p(M, t) as the probability that in a given urn (say A) at time t there are M particles, the evolution of the model is described by the following master equation:

p(M, t+1) = p(M+1, t) [(M+1)/N] ω(M+1) + p(M−1, t) [(N−M+1)/N] ω(N−M+1)
            + p(M, t) {(M/N)[1 − ω(M)] + [(N−M)/N][1 − ω(N−M)]}   for M = 1, 2, ..., N−1,
p(0, t+1) = (1/N) p(1, t) ω(1) + p(0, t) [1 − ω(N)],
p(N, t+1) = (1/N) p(N−1, t) ω(1) + p(N, t) [1 − ω(N)],   (6)

where ω(M) = exp[−1/T(M/N)]. From the steady-state solution of eq. (6) we compute the reduced cumulants

x4 = ⟨ε⁴⟩/⟨ε²⟩²,  x6 = ⟨ε⁶⟩/⟨ε²⟩³.

For the Ising model above the critical dimension, BJ showed that the critical probability distribution of the suitably rescaled order parameter x has the form p(x) ∼ e^(−x⁴), which yields

x4 = Γ(1/4)Γ(5/4)/Γ²(3/4) ≈ 2.188440...,  x6 = Γ²(1/4)Γ(7/4)/Γ³(3/4) ≈ 6.565319....   (9)

The fact that one can restrict the expansion of the free energy to the lowest-order term is by no means obvious [3]. Such a restriction leads to the correct results only above the critical dimension, where the model behaves according to the mean-field scenario with fluctuations playing a negligible role. For d < d_c additional terms in the expansion are also important and the cumulants take different values. Numerical confirmation of the above results requires extensive Monte Carlo simulations, and a satisfactory confirmation was obtained only for x4 [6,10].

Omitting detailed field-theory analysis, we can extend the BJ approach to the tricritical point. At such a point the quartic term also vanishes, which makes the sixth-order term the leading one, and the probability distribution takes the form p(x) ∼ e^(−x⁶). Simple calculations for such a distribution yield

x4 = Γ(5/6)Γ(1/6)/Γ²(1/2) = 2,  x6 = Γ(7/6)Γ²(1/6)/Γ³(1/2)   (10)

(tricritical point).

FIG. 1. The fourth-order cumulant x4(N) as a function of 1/N for the critical and the tricritical point. Arrows indicate the BJ results for the critical and the tricritical point.

FIG. 2. The same as in Fig. 1 but for the sixth-order cumulant x6(N).

The BJ results (9)-(10) are indicated by small arrows in Figs. 1-2. Even without any extrapolation one can see, especially for the critical points, a good agreement with our results. The data in Figs. 1-2 show strong finite-size corrections. To have a better estimate of the asymptotic values in the limit N → ∞ we assume finite-size corrections of the form

x4,6(N) = x4,6(∞) + A N^(−ω).   (11)

The least-squares fitting of our finite-N data to eq. (11) gives x4,6(∞) which agree with the BJ values (9)-(10) to an accuracy better than 0.1%. A better estimate of the correction exponent ω is obtained by assuming that x4,6(∞) are given by the BJ values. The exponent ω then equals the slope of the data on the logarithmic scale, as presented in Figs. 3-4. Our data show that for the critical (tricritical) point ω = 1/2 (1/3). Let us notice that the leading finite-size corrections to the Binder cumulant in the d = 5 Ising model at the critical point are also of the form N^(−0.5) (with N being the linear system size) [7]. Moreover, for the tricritical point with d < d_c the probability distribution is known to exhibit a three-peak structure [11], which is different from the single-peak form p(x) ∼ e^(−x⁶).

FIG. 3. Logarithmic plot of x4(BJ) − x4(N) (+) and x6(BJ) − x6(N) (×) as a function of N for Δ = 0.5. Dotted straight lines have slope 0.5.

FIG. 4. Logarithmic plot of x4(BJ) − x4(N) (+) and x6(BJ) − x6(N) (×) as a function of N for Δ = 2/3.

In summary, we calculated the fourth- and sixth-order cumulants at the critical and tricritical points of an urn model which undergoes a symmetry-breaking transition. Our results confirm that, as predicted by Brézin and Zinn-Justin, the critical probability distribution of the rescaled order parameter has the form p(x) ∼ e^(−x⁴). Similarly, for the tricritical point our results suggest that p(x) ∼ e^(−x⁶). Although in our opinion convincing, the results are obtained using numerical methods. It would be desirable to have analytical arguments for the generation of such probability distributions. It seems that for the presented urn model this might be easier than for Ising-type models. Let us notice that for the simplest urn model, which was introduced by Ehrenfest [12], the steady-state probability distribution can be calculated exactly in the continuum limit of the master equation, and the result has the form p(x) ∼ e^(−x²), where x is now proportional to the difference of occupancy ε. In the Ehrenfest model there is no critical point, and we expect that a distribution of the type e^(−x²) might characterize our model off the critical line (in the symmetric phase). We hope that, when suitably extended, an analytic approach to our model might extract the critical and tricritical distributions as well. Such an approach is left as a future problem.

ACKNOWLEDGMENTS

This work was partially supported by the Swiss National Science Foundation and the project OFES00-0578 "COSYC OF SENS".
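The paper's method, iterating the master equation (6) to its steady state and forming the cumulant ratios, can be sketched compactly. The code below is an illustrative reimplementation, not the authors' program: the function names, the system size N, and the iteration count are my choices, while the critical point T0 = √(Δ/2) − Δ/2 and the Γ-function expressions for the BJ values follow the text. For finite N the cumulant only approaches the BJ value 2.188440... as N → ∞.

```python
import math

def bj_cumulants():
    """BJ cumulant ratios for p(x) ~ exp(-x^4):
    x4 = <x^4>/<x^2>^2 and x6 = <x^6>/<x^2>^3 via Gamma-function moments."""
    g = math.gamma
    x4 = g(0.25) * g(1.25) / g(0.75) ** 2
    x6 = g(0.25) ** 2 * g(1.75) / g(0.75) ** 3
    return x4, x6

def urn_x4(N=40, delta=0.5, steps=150000):
    """Iterate the urn-model master equation at the exactly known critical
    point T0 = sqrt(delta/2) - delta/2, then return x4 = <eps^4>/<eps^2>^2
    with eps = M/N - 1/2. Converges to the BJ value only as N -> infinity."""
    T0 = math.sqrt(delta / 2) - delta / 2
    T = lambda x: T0 + delta * (1 - x)
    # Precompute jump probabilities w(M) = exp(-1/T(M/N)); w(0) is unused.
    w = [0.0] + [math.exp(-1.0 / T(M / N)) for M in range(1, N + 1)]
    p = [1.0 / (N + 1)] * (N + 1)  # arbitrary normalized initial distribution
    for _ in range(steps):
        q = [0.0] * (N + 1)
        for M in range(N + 1):
            stay = 0.0
            if M > 0:   # selected particle is in urn A and jumps out
                q[M - 1] += p[M] * (M / N) * w[M]
                stay += (M / N) * (1 - w[M])
            if M < N:   # selected particle is in urn B and jumps in
                q[M + 1] += p[M] * ((N - M) / N) * w[N - M]
                stay += ((N - M) / N) * (1 - w[N - M])
            q[M] += p[M] * stay
        p = q
    m2 = sum(p[M] * (M / N - 0.5) ** 2 for M in range(N + 1))
    m4 = sum(p[M] * (M / N - 0.5) ** 4 for M in range(N + 1))
    return m4 / m2 ** 2
```

The probability-conserving update mirrors eq. (6) term by term: a particle leaves urn A with probability (M/N)ω(M) and enters it with probability ((N−M)/N)ω(N−M).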

Introduction to Materials: bilingual lecture script (58)


Module 7, video 12: What are particle-reinforced composites?

Hello! Welcome to Introduction to Materials. Today, we are going to talk about particle-reinforced composites, also called particle or particulate composites.

Particle composites contain reinforcing particles of one or more materials suspended in a matrix of a different material. As with nearly all materials, structure determines properties, and so it is with particle composites. This figure illustrates the geometrical and spatial characteristics of the particles, such as their concentration, size, shape, distribution and orientation. They all contribute to the properties of these materials.
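The script notes that particle concentration influences properties but gives no formula. A common first-order estimate, added here as an illustration rather than taken from the lecture, is the pair of rule-of-mixtures bounds on the elastic modulus of a particulate composite; the particle and matrix moduli below are assumed example values, not data from the course.

```python
def modulus_bounds(E_m, E_p, V_p):
    """Upper (Voigt, isostrain) and lower (Reuss, isostress) rule-of-mixtures
    bounds for the elastic modulus of a particle composite.
    E_m, E_p: matrix and particle moduli (same units); V_p: particle volume fraction."""
    V_m = 1.0 - V_p
    e_upper = V_m * E_m + V_p * E_p                  # Ec(u) = Vm*Em + Vp*Ep
    e_lower = E_m * E_p / (V_m * E_p + V_p * E_m)    # Ec(l) = Em*Ep / (Vm*Ep + Vp*Em)
    return e_upper, e_lower

# Example with assumed values: stiff ceramic particles (700 GPa) in a
# metal matrix (210 GPa) at 60% particle volume fraction.
e_upper, e_lower = modulus_bounds(210.0, 700.0, 0.60)
```

The true composite modulus lies between the two bounds, and where it falls depends on exactly the structural factors the lecture lists: particle size, shape, distribution and orientation.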

From Little Bangs to the Big Bang


a r X i v :a s t r o -p h /0504501v 1 22 A p r 2005From Little Bangs to the Big BangJohn EllisTheory Division,Physics Department,CERN,CH-1211Geneva 23,Switzerland E-mail:john.ellis@cern.ch CERN-PH-TH/2005-070astro-ph/0504501Abstract.The ‘Little Bangs’made in particle collider experiments reproduce the conditions in the Big Bang when the age of the Universe was a fraction of a second.It is thought that matter was generated,the structures in the Universe were formed and cold dark matter froze out during this very early epoch when the equation of state of the Universe was dominated by the quark-gluon plasma (QGP).Future Little Bangs may reveal the mechanism of matter generation and the nature of cold dark matter.Knowledge of the QGP will be an essential ingredient in quantitative understanding of the very early Universe..1.The Universe is Expanding The expansion of the Universe was first established by Hubble’s discovery that distant galaxies are receding from us,with redshifts proportional to their relative distances from us.Extrapolating the present expansion backwards,there is good evidence that the Universe was once 3000times smaller and hotter than today,provided by the cosmic microwave background (CMB)radiation.This has a thermal distribution and is very isotropic,and is thought to have been released when electrons combined with ions from the primordial electromagnetic plasma to form atoms.The observed small dipole anisotropy is due to the Earth’s motion relative to this cosmic microwave background,and the very small anisotropies found by the COBE satellite are thought to have led to the formation of structures in the Universe,as discussed later [1].Extrapolating further back in time,there is good evidence that the Universe was once a billion times smaller and hotter than today,provided by the abundances of light elements cooked in the Big Bang [2].The Universe contains about 24%by mass of 4He,and somewhat less Deuterium,3He and 7Li.These could only have been 
cooked by nuclear reactions in the very early Universe, when it was a billion times smaller and hotter than today. The detailed light-element abundances depend on the amount of matter in the Universe, and comparison between observations and calculations suggests that there is not enough matter to stop the present expansion, or even to explain the amount of matter in the galaxies and their clusters. The calculations of the light-element abundances also depend on the number of particle types, and in particular on the number of different neutrino types. This is now known from particle collider experiments to be three [3], with a corresponding number of charged leptons and quark pairs.

2. The Very Early Universe and the Quark-Gluon Plasma

When the Universe was very young, t → 0, the scale factor a characterizing its size would also have been very small, a → 0, and the temperature T would have been very large, with characteristic relativistic particle energies E ∼ T. In normal adiabatic expansion, T ∼ 1/a, and, while the energy density of the Universe was dominated by relativistic matter, t ∼ 1/T². The following are some rough orders of magnitude: when the Universe had an age t ∼ 1 second, the temperature was T ∼ 10,000,000,000 degrees, and characteristic thermal energies were E ∼ 1 MeV, comparable with the mass of the electron. It is clear that one needs particle physics to describe the earlier history of the Universe [1].

The very early Universe was presumably filled with primordial quark-gluon plasma (QGP). When the Universe was a few microseconds old, it is thought to have exited from this QGP phase, with the available quarks and gluons combining to make mesons and baryons. The primordial QGP would have had a very low baryon chemical potential µ. Experiments with RHIC reproduce cosmological conditions more closely than did previous SPS experiments, as seen in Fig. 1, and the LHC will provide [4] an even closer approximation to the primordial QGP. I shall not discuss here the prospects for discovering quark matter inside dense astrophysical objects such as neutron stars, which would have a much larger baryon chemical potential.

Figure 1. The phase diagram of hot and dense QCD for different values of the baryon chemical potential µ and temperature T [5], illustrating the physics reaches of SPS, RHIC and the ALICE experiment at the LHC [4].

To what extent can information about the early Universe cast light on the quark-hadron phase transition? The latest lattice simulations of QCD with two light flavours u, d and one moderately heavy flavour s suggest that there was no strong first-order transition. Instead, there was probably a cross-over between the quark and hadron phases, see, for example, Fig. 2 [5], during which the smooth expansion of the Universe is unlikely to have been modified substantially. Specifically, it is not thought that this transition would have induced inhomogeneities large enough to have detectable consequences today.

Figure 2. The growth of the QCD pressure with temperature, for different values of the baryon chemical potential µ [5]. The rise is quite smooth, indicating that there is not a strong first-order phase transition, and probably no dramatic consequences in the early Universe.

3. Open Cosmological Questions

The Standard Model of cosmology leaves many important questions unanswered. Why is the Universe so big and old? Measurements by the WMAP satellite, in particular, indicate that its age is about 14,000,000,000 years [6]. Why is its geometry nearly Euclidean? Recent data indicate that it is almost flat, close to the borderline for eternal expansion. Where did the matter come from? The cosmological nucleosynthesis scenario indicates that there is approximately one proton in the Universe today for every 1,000,000,000 photons, and no detectable amount of antimatter. How did cosmological structures form? If they did indeed form from the ripples observed in the
CMB, how did these originate? What is the nature of the invisible dark matter thought to fill the Universe? Its presence is thought to have been essential for the amplification of the primordial perturbations in the CMB. It is clear that one needs particle physics to answer these questions, and that their solutions would have operated in a Universe filled with QGP.

4. A Strange Recipe for a Universe

According to the 'Concordance Model' suggested by a multitude of astrophysical and cosmological observations, the total density of the Universe is very close to the critical value: Ω_Tot = 1.02 ± 0.02, as illustrated in Fig. 3 [6]. The theory of cosmological inflation suggests that the density should be indistinguishable from the critical value, and this is supported by measurements of the CMB. On the other hand, the baryon density is small, as inferred not only from Big-Bang nucleosynthesis but also and independently from the CMB: Ω_Baryons ∼ few %. The CMB information on these two quantities comes from observations of peaks in the fluctuation spectrum in specific partial waves corresponding to certain angular scales: the position of the first peak is sensitive to Ω_Tot, and the relative heights of subsequent peaks are sensitive to Ω_Baryons. The fraction Ω_m of the critical density provided by all forms of matter is not very well constrained by the CMB data alone, but is quite tightly constrained by combining them with observations of high-redshift supernovae [7] and/or large-scale structures [8], each of which favours Ω_Matter ∼ 0.3, as also seen in Fig. 3.

As seen in Fig. 4, there is good agreement between BBN calculations and astrophysical observations for the Deuterium and 4He abundances [2]. The agreement for 7Li is less striking, though not disastrously bad.¹ The good agreement between the corresponding determinations of Ω_Baryons obtained from CMB and Big-Bang nucleosynthesis calculations in conventional homogeneous cosmology imposes important constraints on inhomogeneous models of nucleosynthesis. In particular, they exclude the possibility that Ω_Baryons might constitute a large fraction of Ω_Tot. Significant inhomogeneities might have been generated at the quark-hadron phase transition, if it was strongly first-order [10]. Although, as already discussed, lattice calculations suggest that this is rather unlikely, heavy-ion collision experiments must be the final arbiter on the nature of the quark-hadron phase transition.

¹ It seems unlikely that the low abundance of 7Li observed could have been modified significantly by the decays of heavy particles [9]: it would be valuable to refine the astrophysical determinations.

Figure 3. The density of matter Ω_m and dark energy Ω_Λ inferred from WMAP and other CMB data (WMAPext), and from combining them with supernova and Hubble Space Telescope data [6].

Figure 4. Primordial light element abundances as predicted by BBN (light) and WMAP (dark shaded regions) [2], for (a) D/H, (b) the 4He abundance Y_p and (c) 7Li/H [2].

5. Generating the Matter in the Universe

As was pointed out by Sakharov [11], there are three essential requirements for generating the matter in the Universe via microphysics. First, one needs a difference between matter and antimatter interactions, as has been observed in the laboratory in the forms of violations of C and CP in the weak interactions. Secondly, one needs interactions that violate the baryon and lepton numbers, which are present as non-perturbative electroweak interactions and in grand unified theories, but have not yet been seen. Finally, one needs a breakdown of thermal equilibrium, which is possible during a cosmological phase transition, for example at the GUT or electroweak scale, or in the decays of heavy particles, such as a heavy singlet neutrino ν_R [12]. The issue then is whether we will be able to calculate the resulting matter density in terms of laboratory measurements. Unfortunately, the Standard Model C and CP violation measured in the quark sector seem unsuitable for baryogenesis, and the electroweak phase transition in the Standard
Model would have been second order. However, additional CP violation and a first-order phase transition in an extended electroweak Higgs sector might have been able to generate the matter density [13], and could be testable at the LHC and/or ILC. An alternative is CP violation in the lepton sector, which could be probed in neutrino oscillation experiments, albeit indirectly, or possibly in the charged-lepton sector, which might be related more directly to the matter density [14]. In any case, detailed knowledge of the QGP equation of state would be necessary if one were ever to hope to be able to calculate the baryon-to-entropy ratio with an accuracy of a few per cent.

6. The Formation of Structures in the Universe

The structures seen in the Universe (clusters, galaxies, stars and eventually ourselves) are all thought to have developed from primordial fluctuations in the CMB. This idea is supported visually by observations of galaxies, which look smooth at the largest scales at high redshifts, but cluster at smaller scales at low redshifts [15]. This scenario requires amplification of the small fluctuations observed in the CMB, which is possible with massive non-relativistic weakly-interacting particles. On the other hand, relativistic light neutrinos would have escaped from smaller structures, and so are disfavoured as amplifiers. Non-relativistic 'cold dark matter' is preferred, as seen in a comparison of the available data on structures in the Universe with the cosmological Concordance Model [8].

The hot news in the observational tests of this scenario has been the recent detection of baryonic ripples from the Big Bang [16], as seen in Fig. 5. These are caused by sound waves spreading out from irregularities in the CMB, which show up in the correlation function between structures in the (near-)contemporary Universe as features with a characteristic size. In addition to supporting the scenario of structure formation by amplification of CMB fluctuations, these observations provide measurements of the expansion history and equation of state of the Universe.

Figure 5. The baryonic 'ripple' in the large-scale correlation function of luminous red galaxies observed in the Sloan Digital Sky Survey of galactic redshifts [16].

7. Do Neutrinos Matter?

Oscillation experiments tell us that neutrinos have very small but non-zero masses [17,18], and so must make up at least some of the dark matter. As already mentioned, since such light neutrinos move relativistically during the epoch of structure formation, they would have escaped from galaxies and not contributed to their formation, whereas they could have contributed to the formation of clusters. Conversely, the success of the cosmological Concordance Model enables one to set a cosmological upper limit on the sum of light neutrino masses, as seen in Fig. 6: Σ_ν m_ν < 0.7 eV [6], which is considerably more sensitive than direct laboratory searches. In the future, this cosmological sensitivity might attain the range indicated by atmospheric neutrino data [17]. However, even if no dark matter effect of non-zero light neutrino masses is observed, this does not mean that neutrinos have no cosmological role, since unstable heavier neutrinos might have generated matter via the Sakharov mechanism [11].

Figure 6. The likelihood function for the total neutrino density Ω_ν h² derived by WMAP [6]. The upper limit m_ν < 0.23 eV applies if there are three degenerate neutrinos.

8. Particle Dark Matter Candidates

Candidates for the non-relativistic cold dark matter required to amplify CMB fluctuations include the axion [19], TeV-scale weakly-interacting massive particles (WIMPs) produced thermally in the early Universe, such as the lightest supersymmetric partner of a Standard Model particle (probably the lightest neutralino χ), the gravitino (which is likely mainly to have been produced in the very early Universe, possibly thermally), and superheavy relic particles that might have been produced non-thermally in the very early Universe [20] (such as the 'cryptons' predicted in some string models [21]).

9. Supersymmetric Dark Matter

Supersymmetry is a very powerful symmetry relating fermionic 'matter' particles to bosonic 'force' particles [22]. Historically, the original motivations for supersymmetry were purely theoretical: its intrinsic beauty, its ability to tame infinities in perturbation theory, etc. The first phenomenological motivation for supersymmetry at some accessible energy was that it might also help explain the electroweak mass scale, by stabilizing the hierarchy of mass scales in physics [23]. It was later realized also that the lightest supersymmetric particle (LSP) would be stable in many models [24]. Moreover, it should weigh below about 1000 GeV, in order to stabilize the mass hierarchy, in which case its relic density would be similar to that required for cold dark matter [25]. As described below, considerable effort is now put into direct laboratory searches for supersymmetry, as well as both direct and indirect astrophysical searches.

Here I concentrate on the minimal supersymmetric extension of the Standard Model (MSSM), in which the Standard Model particles acquire superpartners and there are two doublets of Higgs fields. The interactions in the MSSM are completely determined by supersymmetry, but one must postulate a number of soft supersymmetry-breaking parameters, in order to accommodate the mass differences between conventional particles and their superpartners. These parameters include scalar masses m_0, gaugino masses m_1/2, and trilinear soft couplings A_0. It is often assumed that these parameters are universal, so that there is a single m_0, a single m_1/2, and a single A_0 parameter at the input GUT scale, a scenario called the constrained MSSM (CMSSM).
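The coincidence invoked above, that a stable particle weighing below about 1000 GeV naturally has a relic density in the cold-dark-matter range, can be illustrated with the textbook freeze-out rule of thumb Ω h² ≈ 3×10⁻²⁷ cm³ s⁻¹ / ⟨σv⟩ (a standard estimate, e.g. from Kolb and Turner, not a formula from this paper). The dimensional guess ⟨σv⟩ ∼ α²/m² and the coupling value α = 10⁻² used below are illustrative assumptions:

```python
# Order-of-magnitude thermal-relic estimate (textbook rule of thumb),
# NOT the full relic-density calculation described in Section 11:
#     Omega h^2 ~ 3e-27 cm^3 s^-1 / <sigma v>
# with the dimensional guess <sigma v> ~ alpha^2 / m^2 for a weak-scale WIMP.

GEV2_TO_CM3_S = 1.17e-17  # hbar^2 c^3: converts GeV^-2 to cm^3/s

def omega_h2(m_chi_gev, alpha=1e-2):
    """Rough relic abundance for a WIMP of mass m_chi_gev (in GeV)."""
    sigma_v = (alpha**2 / m_chi_gev**2) * GEV2_TO_CM3_S  # annihilation <sigma v> in cm^3/s
    return 3e-27 / sigma_v

for m in (100.0, 300.0, 1000.0):
    print(f"m_chi = {m:6.0f} GeV  ->  Omega h^2 ~ {omega_h2(m):.3f}")
```

For masses of a few hundred GeV this lands within an order of magnitude of the observed Ω_χ h² ≈ 0.1, which is the coincidence referred to in the text; the calculation sketched in Section 11 replaces these dimensional estimates with the actual annihilation cross section and the effective numbers of relativistic degrees of freedom.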
However, there is no deep theoretical justification for this universality assumption, except in minimal supergravity models. These models also make a prediction for the gravitino mass, m_3/2 = m_0, which is not necessarily the case in the general CMSSM.

As already mentioned, the lightest supersymmetric particle is stable in many models because of the multiplicative conservation of R parity, which is a combination of spin S, lepton number L and baryon number B: R = (−1)^(2S−L+3B). It is easy to check that conventional particles have R = +1 and sparticles have R = −1. As a result, sparticles are always produced in pairs, heavier sparticles decay into lighter ones, and the lightest supersymmetric particle (LSP) is stable. The LSP cannot have strong or electromagnetic interactions, because these would bind it to conventional matter, creating bound states that would be detectable as anomalous heavy nuclei. Among the possible weakly-interacting candidates for the LSP, one finds the sneutrino, which has been excluded by a combination of LEP data and direct searches for astrophysical dark matter, the lightest neutralino χ, and the gravitino. There are good prospects for detecting the neutralino or gravitino in collider experiments, and neutralino dark matter may also be detectable either directly or indirectly, but gravitino dark matter would be a nightmare for detection.

10. Constraints on Supersymmetry

Important constraints on supersymmetry are imposed by the absences of sparticles at LEP and the Tevatron collider, implying that sleptons and charginos should weigh > 100 GeV [26], and that squarks and gluinos should weigh > 250 GeV, respectively. Important indirect constraints are imposed by the LEP lower limit on the mass of the lightest Higgs boson, 114 GeV [27], and the experimental measurement of b → sγ decay [28], which agrees with the Standard Model. The measurement of the anomalous magnetic moment of the muon, g_µ − 2, also has the potential to constrain supersymmetry, but the significance of this constraint is uncertain, in the absence of agreement between the e⁺e⁻ annihilation and τ decay data used to estimate the Standard Model contribution to g_µ − 2 [29].

Finally, one of the strongest constraints on the supersymmetric parameter space is that imposed by the density of dark matter inferred from astrophysical and cosmological observations. If this is composed of the lightest neutralino χ, one has 0.094 < Ω_χ h² < 0.129 [6], and it cannot in any case be higher than this. For generic domains of the supersymmetric parameter space, this range constrains m_0 with an accuracy of a few per cent as a function of m_1/2, as seen in Fig. 7 [30].

Figure 7. The (m_1/2, m_0) planes for (a) tan β = 10 and (b) tan β = 50, with µ > 0 and A_0 = 0, assuming m_t = 175 GeV and m_b(m_b)

Figure 8. The factor h_eff(T) calculated using different equations of state [31].

11. The Relic Density and the Quark-Gluon Plasma

The accurate calculation of the relic density depends not only on the supersymmetric model parameters, but also on the effective Hubble expansion rate as the relic particles annihilate and freeze out of thermal equilibrium [25]:

ṅ + 3Hn = −⟨σ_ann v⟩ (n² − n_eq²).

This is, in turn, sensitive to the effective number of particle species:

Y_0 ≃ [ (π/45)^(1/2) g_*^(1/2) m_χ M_P ∫₀^(x_f) ⟨σ_ann v⟩ dx ]^(−1), with g_*^(1/2) ≡ (h_eff/g_eff^(1/2)) [1 + (1/3) d ln h_eff/d ln T].

Figure 9. Scatter plot of the masses of the lightest visible supersymmetric particle (LVSP) and the next-to-lightest visible supersymmetric particle (NLVSP) in the CMSSM. The darker (blue) triangles satisfy all the laboratory, astrophysical and cosmological constraints. For comparison, the dark (red) squares and medium-shaded (green) crosses respect the laboratory constraints, but not those imposed by astrophysics and cosmology. In addition, the (green) crosses represent models which are expected to be visible at the LHC. The very light (yellow) points are those for which direct detection of supersymmetric dark matter might be possible [33].

13. Strategies for Detecting Supersymmetric Dark Matter

These include searches for the annihilations of
relic particles in the galactic halo: χχ → antiprotons or positrons; annihilations in the galactic centre: χχ → γ + ...; annihilations in the core of the Sun or the Earth: χχ → ν + ··· → µ + ···; and scattering on nuclei in the laboratory: χA → χA. After some initial excitement, recent observations of cosmic-ray antiprotons are consistent with production by primary matter cosmic rays. Moreover, the spectra of annihilation positrons calculated in a number of CMSSM benchmark models [32] seem to fall considerably below the cosmic-ray background [37]. Some of the spectra of photons from annihilations in the Galactic Centre, as calculated in the same set of CMSSM benchmark scenarios, may rise above the expected cosmic-ray background, albeit with considerable uncertainties due to the unknown enhancement of the cold dark matter density. In particular, the GLAST experiment may have the best chance of detecting energetic annihilation photons [37], as seen in the left panel of Fig. 10. Annihilations in the Solar System also offer detection prospects in some of the benchmark scenarios, particularly annihilations inside the Sun, which might be detectable in experiments such as AMANDA, NESTOR, ANTARES and particularly IceCUBE, as seen in the right panel of Fig. 10 [37].

The elastic dark matter scattering cross sections calculated in the CMSSM are typically considerably below the present upper limit imposed by the CDMS II experiment, in both the benchmark scenarios and the global fit to CMSSM parameters based on present data [38]. However, if the next generation of direct searches for elastic scattering can reach a sensitivity of 10^−10 pb, they should be able to detect supersymmetric dark matter in many supersymmetric scenarios. Fig. 11 compares the cross sections calculated under a relatively optimistic assumption for the relevant hadronic matrix element, σ_πN = 64 MeV, for choices of CMSSM parameters favoured at the 68% (90%) confidence level in a recent analysis using the observables m_W, sin²θ_W, b → sγ and g_µ − 2 [39].

Figure 10. Left panel: Spectra of photons from the annihilations of dark matter particles in the core of our galaxy, in different benchmark supersymmetric models [37]. Right panel: Signals for muons produced by energetic neutrinos originating from annihilations of dark matter particles in the core of the Sun, in the same benchmark supersymmetric models [37].

Figure 11. Scatter plots of the spin-independent elastic-scattering cross section predicted in the CMSSM for (a) tan β = 10, µ > 0 and (b) tan β = 50, µ > 0, each with σ_πN = 64 MeV [38]. The predictions for models allowed at the 68% (90%) confidence levels [39] are shown by blue × signs (green + signs).

14. Connections between the Big Bang and Little Bangs

Astrophysical and cosmological observations during the past few years have established a Concordance Model of cosmology, whose matter content is quite accurately determined. Most of the present energy density of the Universe is in the form of dark vacuum energy, with about 25% in the form of dark matter, and only a few % in the form of conventional baryonic matter. Two of the most basic questions raised by this Concordance Model are the nature of the dark matter and the origin of matter. Only experiments at particle colliders are likely to be able to answer these and other fundamental questions about the early Universe. In particular, experiments at the LHC will recreate quark-gluon plasma conditions similar to those when the Universe was less than a microsecond old [4], and will offer the best prospects for discovering whether the dark matter is composed of supersymmetric particles [40,41]. LHC experiments will also cast new light on the cosmological matter-antimatter asymmetry [42]. Moreover, discovery of the Higgs boson will take us closer to the possibilities for
inflation and dark energy. There are many connections between the Big Bang and the little bangs we create with particle colliders. These connections enable us both to learn particle physics from the Universe, and to use particle physics to understand the Universe.

References

[1] J. R. Ellis, Lectures given at the 16th Canberra International Physics Summer School on The New Cosmology, Canberra, Australia, 3-14 Feb 2003, arXiv:astro-ph/0305038; K. A. Olive, TASI lectures on Astroparticle Physics, arXiv:astro-ph/0503065.
[2] R. H. Cyburt, B. D. Fields, K. A. Olive and E. Skillman, Astropart. Phys. 23 (2005) 313 [arXiv:astro-ph/0408033].
[3] LEP Electroweak Working Group, http://lepewwg.web.cern.ch/LEPEWWG/Welcome.html.
[4] ALICE Collaboration, http://pcaliweb02.cern.ch/NewAlicePortal/en/Collaboration/index.html.
[5] C. R. Allton, S. Ejiri, S. J. Hands, O. Kaczmarek, F. Karsch, E. Laermann and C. Schmidt, Nucl. Phys. Proc. Suppl. 141 (2005) 186 [arXiv:hep-lat/0504011].
[6] D. N. Spergel et al. [WMAP Collaboration], Astrophys. J. Suppl. 148 (2003) 175 [arXiv:astro-ph/0302209].
[7] A. G. Riess et al. [Supernova Search Team Collaboration], Astron. J. 116 (1998) 1009 [arXiv:astro-ph/9805201]; S. Perlmutter et al. [Supernova Cosmology Project Collaboration], Astrophys. J. 517 (1999) 565 [arXiv:astro-ph/9812133].
[8] N. A. Bahcall, J. P. Ostriker, S. Perlmutter and P. J. Steinhardt, Science 284 (1999) 1481 [arXiv:astro-ph/9906463].
[9] J. R. Ellis, K. A. Olive and E. Vangioni, arXiv:astro-ph/0503023.
[10] K. Jedamzik and J. B. Rehm, Phys. Rev. D 64 (2001) 023510 [arXiv:astro-ph/0101292].
[11] A. D. Sakharov, Pisma Zh. Eksp. Teor. Fiz. 5 (1967) 32 [JETP Lett. 5 (1967) 24].
[12] M. Fukugita and T. Yanagida, Phys. Lett. B 174 (1986) 45.
[13] See, for example: M. Carena, M. Quiros, M. Seco and C. E. M. Wagner, Nucl. Phys. B 650 (2003) 24 [arXiv:hep-ph/0208043].
[14] See, for example: J. R. Ellis and M. Raidal, Nucl. Phys. B 643 (2002) 229 [arXiv:hep-ph/0206174].
[15] The 2dF Galaxy Redshift Survey, .au/2dFGRS/.
[16] D. J. Eisenstein et al., arXiv:astro-ph/0501171; S. Cole et al. [The 2dFGRS Collaboration], arXiv:astro-ph/0501174.
[17] Y. Fukuda et al. [Super-Kamiokande Collaboration], Phys. Rev. Lett. 81 (1998) 1562 [arXiv:hep-ex/9807003].
[18] See, for example: Q. R. Ahmad et al. [SNO Collaboration], Phys. Rev. Lett. 87 (2001) 071301 [arXiv:nucl-ex/0106015].
[19] See, for example: S. Andriamonje et al. [CAST Collaboration], Phys. Rev. Lett. 94 (2005) 121301 [arXiv:hep-ex/0411033].
[20] D. J. H. Chung, E. W. Kolb and A. Riotto, Phys. Rev. D 59 (1999) 023501 [arXiv:hep-ph/9802238].
[21] K. Benakli, J. R. Ellis and D. V. Nanopoulos, Phys. Rev. D 59 (1999) 047301 [arXiv:hep-ph/9803333].
[22] J. Wess and B. Zumino, Nucl. Phys. B 70 (1974) 39.
[23] L. Maiani, Proceedings of the 1979 Gif-sur-Yvette Summer School on Particle Physics, 1; G. 't Hooft, in Recent Developments in Gauge Theories, Proceedings of the NATO Advanced Study Institute, Cargese, 1979, eds. G. 't Hooft et al. (Plenum Press, NY, 1980); E. Witten, Phys. Lett. B 105 (1981) 267.
[24] P. Fayet, Unification of the Fundamental Particle Interactions, eds. S. Ferrara, J. Ellis and P. van Nieuwenhuizen (Plenum, New York, 1980), p. 587.
[25] J. Ellis, J. S. Hagelin, D. V. Nanopoulos, K. A. Olive and M. Srednicki, Nucl. Phys. B 238 (1984) 453; see also H. Goldberg, Phys. Rev. Lett. 50 (1983) 1419.
[26] The Joint LEP2 Supersymmetry Working Group, http://lepsusy.web.cern.ch/lepsusy/.
[27] LEP Higgs Working Group for Higgs boson searches, OPAL Collaboration, ALEPH Collaboration, DELPHI Collaboration and L3 Collaboration, Phys. Lett. B 565 (2003) 61 [arXiv:hep-ex/0306033]; Search for neutral Higgs bosons at LEP, paper submitted to ICHEP04, Beijing, LHWG-NOTE-2004-01, ALEPH-2004-008, DELPHI-2004-042, L3-NOTE-2820, OPAL-TN-744, http://lephiggs.web.cern.ch/LEPHIGGS/papers/August2004.

Materials Science and Engineering Technical English, Unit 2: Classification of Materials (with translation)

Solid materials have been conveniently grouped into three basic classifications: metals, ceramics, and polymers. This scheme is based primarily on chemical makeup and atomic structure, and most materials fall into one distinct grouping or another, although there are some intermediates. In addition, there are three other groups of important engineering materials — composites, semiconductors, and biomaterials.

Translation: Solid materials are conveniently divided into three basic categories: metals, ceramics, and polymers. This classification is based primarily on chemical composition and atomic structure; most materials fall clearly into one category or another, although there are some intermediate cases. In addition, there are three other classes of important engineering materials: composites, semiconductors, and biomaterials.

Composites consist of combinations of two or more different materials, whereas semiconductors are utilized because of their unusual electrical characteristics; biomaterials are implanted into the human body. A brief explanation of the material types and representative characteristics is offered next.

Translation: Composites are combinations of two or more different materials, while semiconductors are used because of their unusual electrical properties; biomaterials are implanted into the human body. A brief explanation of these material types and their representative characteristics follows.

Signature of a Pairing Transition in the Heat Capacity of Finite Nuclei


Signature of a Pairing Transition in the Heat Capacity of Finite Nuclei

S. Liu and Y. Alhassid
Center for Theoretical Physics, Sloane Physics Laboratory, Yale University, New Haven, Connecticut 06520, U.S.A.
(February 8, 2008)

Abstract

The heat capacity of iron isotopes is calculated within the interacting shell model using the complete (pf + 0g9/2)-shell. We identify a signature of the pairing transition in the heat capacity that is correlated with the suppression of the number of spin-zero neutron pairs as the temperature increases. Our results are obtained by a novel method that significantly reduces the statistical errors in the heat capacity calculated by the shell model Monte Carlo approach. The Monte Carlo results are compared with finite-temperature Fermi gas and BCS calculations.

Typeset using REVTeX

Pairing effects in finite nuclei are well known; examples include the energy gap in the spectra of even-even nuclei and an odd-even effect observed in nuclear masses. However, less is known about the thermal signatures of the pairing interaction in nuclei. In a macroscopic conductor, pairing leads to a phase transition from a normal metal to a superconductor below a certain critical temperature, and in the BCS theory [1] the heat capacity is characterized by a finite discontinuity at the transition temperature. As the linear dimension of the system decreases below the pair coherence length, fluctuations in the order parameter become important and lead to a smooth transition. The effects of both static fluctuations [2,3] and small quantal fluctuations [4] have been explored in studies of small metallic grains. A pronounced peak in the heat capacity is observed for a large number of electrons, but for less than ∼100 electrons the peak in the heat capacity all but disappears. In the nucleus, the pair coherence length is always much larger than the nuclear radius, and large fluctuations are expected to suppress any singularity in the heat capacity. An interesting question is
whether any signature of the pairing transition still exists in the heat capacity of the nucleus despite the large fluctuations. When only static and small-amplitude quantal fluctuations are taken into account, a shallow 'kink' could still be seen in the heat capacity of an even-even nucleus [5]. This calculation, however, was limited to a schematic pairing model. Canonical heat capacities were recently extracted from level density measurements in rare-earth nuclei [6] and were found to have an S-shape that is interpreted to represent the suppression of pairing correlations with increasing temperature.

The calculation of the heat capacity of the finite interacting nuclear system beyond the mean-field and static-path approximations is a difficult problem. Correlation effects due to residual interactions can be accounted for in the framework of the interacting nuclear shell model. However, at finite temperature a large number of excited states contribute to the heat capacity, and very large model spaces are necessary to obtain reliable results. The shell model Monte Carlo (SMMC) method [7,8] enables zero- and finite-temperature calculations in large spaces. In particular, the thermal energy E(T) can be computed versus temperature T, and the heat capacity can be obtained by taking a numerical derivative C = dE/dT. However, the finite statistical errors in E(T) lead to large statistical errors in the heat capacity at low temperatures (even for good-sign interactions). Such large errors occur already around the pairing transition temperature, and thus no definite signatures of the pairing transition could be identified. Furthermore, the large errors often lead to spurious structure in the calculated heat capacity. Presumably, a more accurate heat capacity can be obtained by a direct calculation of the variance of the Hamiltonian, but in SMMC such a calculation is impractical since it involves a four-body operator. The variance of the Hamiltonian has been calculated using a different Monte Carlo algorithm [9], but
that method is presently limited to a schematic pairing interaction.Here we report a novel method for calculating the heat capacity within SMMC that takes into account correlated errors and leads to much smaller statistical ing this method we are able to identify a signature of the pairing transition in realistic calculations of the heat capacity offinite nuclei.The signature is well correlated with the suppression in the number of spin-zero pairs across the transition temperature.The Monte Carlo approach is based on the Hubbard-Stratonovich(HS)representation of the many-body imaginary-time propagator,e−βH= D[σ]GσUσ,whereβis the in-verse temperature,Gσis a Gaussian weight and Uσis a one-body propagator that de-scribes non-interacting nucleons moving influctuating time-dependent auxiliaryfieldsσ. The canonical thermal expectation value of an observable O can be written as O = D[σ]GσTr(OUσ)/ D[σ]GσTr Uσ,where Tr denotes a canonical trace for N neutrons and Z protons.We can rewriteO = [Tr(OUσ)/Tr Uσ]ΦσWsamples.In particular the thermal energy can be calculated as a thermal average of the Hamilto-nian H .The heat capacity C =−β2∂E/∂βis then calculated by estimating the derivative as a finite differenceC =−β2E (β+δβ)−E (β−δβ)D [σ±]G σ±(β±δβ)Tr U σ±(β±δβ),(3)where the corresponding σfields are denoted by σ±.To have the same number of time slices N t in the discretized version of (3)as in the original HS representation of E (β),we define modified time slices ∆β±by N t ∆β±=β±δβ.We next change integration variables in (3)from σ±to σaccording to σ±=(∆β/∆β±)1/2σ,so that the Gaussian weight is left unchanged G σ±(β±δβ)≡exp −αn12|v α|(σα(τn ))2∆β =G σ(β)(v αare the interaction ‘eigenvalues’,obtained by writing the interaction in a quadratic form αv αˆρ2α/2,where ˆραare one-body densities).Rewriting (3)using the measure D [σ](the Jacobian resulting from the change in integration variables is constant and canceled between the numerator and denominator),we findE (β±δβ)= Tr HU 
σ±(β±δβ)Tr U σ(β)Φσ W Tr U σ(β)Φσ W ≡H ±among the quantities H±and Z±,which would lead to a smaller error for C.The covariances among H±and Z±as well as their variances can be calculated in the Monte Carlo and used to estimate the correlated error of the heat capacity.We have calculated the heat capacity for the iron isotopes52−62Fe using the complete (pf+0g9/2)-shell and the good-sign interaction of Ref.[10].Fig.1demonstrates the sig-nificant improvement in the statistical Monte Carlo errors.On the left panel of thisfigure we show the heat capacity of54Fe calculated in the conventional method,while the right panel shows the results from the new method.The statistical errors for T∼0.5−1MeV are reduced by almost an order of magnitude.The results obtained in the conventional calculation seem to indicate a shallow peak in the heat capacity around T∼1.25MeV,but the calculation using the improved method shows no such structure.The heat capacities of four iron isotopes55−58Fe,calculated with the new method,are shown in the top panel of Fig. 2.The heat capacities of the two even-mass iron isotopes (56Fe and58Fe)show a different behavior around T∼0.7−0.8MeV as compared with the two odd-mass isotopes(55Fe and57Fe).While the heat capacity of the odd-mass isotopes increases smoothly as a function of temperature,the heat capacity of the even-mass isotopes is enhanced for T∼0.6−1MeV,increasing sharply and then leveling off,displaying a ‘shoulder.’This‘shoulder’is more pronounced for the isotope with more neutrons(58Fe). 
To correlate this behavior of the heat capacity with a pairing transition, we calculated the number of J = 0 nucleon pairs in these nuclei. A J = 0 pair operator is defined as usual by

Δ† = Σ_{a, m_a>0} (−1)^{j_a − m_a} (j_a + 1/2)^{1/2} a†_{j_a m_a} a†_{j_a, −m_a} ,   (5)

where j_a is the spin and m_a is the spin projection of a single-particle orbit a. Pair-creation operators of the form (5) can be defined for protons (Δ†_pp), neutrons (Δ†_nn), and proton-neutrons (Δ†_pn). The average number ⟨Δ†Δ⟩ of J = 0 pairs (of each type) can be calculated exactly in SMMC as a function of temperature. The bottom panel of Fig. 2 shows the number of neutron pairs ⟨Δ†_nn Δ_nn⟩ for 55−58Fe. At low temperature the number of neutron pairs for isotopes with an even number of neutrons is significantly larger than that for isotopes with an odd number of neutrons. Furthermore, for the even-mass isotopes we observe a rapid suppression of the number of neutron pairs that correlates with the 'shoulder' observed in the heat capacity.

The different qualitative behavior of the number of neutron pairs versus temperature between odd- and even-mass iron isotopes provides a clue to the difference in their heat capacities. A transition from a pair-correlated ground state to a normal state at higher temperatures requires additional energy for the breaking of neutron pairs, hence the steeper increase observed in the heat capacity of the even-mass iron isotopes. Once the pairs are broken, less energy is required to increase the temperature, and the heat capacity shows only a moderate increase.

It is instructive to compare the SMMC heat capacity with Fermi gas and BCS calculations. The heat capacity can be calculated from the entropy using the relation C = T ∂S/∂T.
The entropy S of uncorrelated fermions is given by

S(T) = −Σ_a [f_a ln f_a + (1 − f_a) ln(1 − f_a)] ,   (6)

with f_a being the finite-temperature occupation numbers of the single-particle orbits a. Above the pairing transition temperature T_c, the f_a are just the Fermi-Dirac occupancies f_a = [1 + e^{β(ε_a − µ)}]^{−1}, where µ is the chemical potential determined from the total number of particles and ε_a are the single-particle energies. Below T_c, it is necessary to take into account the BCS solution, which has a lower free energy. Since condensed pairs do not contribute to the entropy, the latter is still given by (6), but the f_a are now the quasi-particle occupancies [1],

f_a = [1 + e^{βE_a}]^{−1} ,   (7)

where E_a = √[(ε_a − µ)² + Δ²] are the quasi-particle energies, and the gap Δ(T) and the chemical potential µ(T) are determined from the finite-temperature BCS equations. In practice, we treat protons and neutrons separately.

We applied the Fermi gas and BCS approximations to estimate the heat capacities of the iron isotopes. To take into account effects of a quadrupole-quadrupole interaction, we used an axially deformed Woods-Saxon potential to extract the single-particle spectrum ε_a [11]. A deformation parameter δ for the even iron isotopes can be extracted from experimental B(E2) values. However, since B(E2) values are not available for all of these isotopes, we used an alternate procedure. The excitation energy E_x(2+_1) of the first excited 2+ state in even-even nuclei can be extracted in SMMC by calculating ⟨J²⟩_β at low temperatures and using a two-state model (the 0+ ground state and the first excited 2+ state), in which ⟨J²⟩_β ≈ 6/(1 + e^{βE_x(2+_1)}/5) [12]. The excitation energy of the 2+_1 state is then used in the empirical formula of Bohr and Mottelson [13], τ_γ = (5.94 ± 2.43) × 10^14 E_x^{−4}(2+_1) Z^{−2} A^{1/3}, to estimate the mean γ-ray lifetime τ_γ and the corresponding B(E2). The deformation parameter δ is then estimated from B(E2) = [(3/4π) Z e r_0² A^{2/3} δ]²/5. We find (using r_0 = 1.27 fm) δ = 0.225, 0.215, 0.244, 0.222, 0.230, and 0.220 for the even iron isotopes 52Fe–62Fe, respectively. For the odd-mass iron isotopes we adopt the deformations of
Ref. [14]. The zero-temperature pairing gap Δ is extracted from experimental odd-even mass differences and used to determine the pairing strength G (needed for the finite-temperature BCS solution).

The top panels of Fig. 3 show the Fermi-gas heat capacity (dotted-dashed lines) for 59Fe (right) and 60Fe (left) in comparison with the SMMC results (symbols). The SMMC heat capacity in the even-mass 60Fe is below the Fermi-gas estimate for T ≲ 0.5 MeV, but is enhanced above the Fermi-gas heat capacity in the region 0.5 ≲ T ≲ 0.9 MeV. The line shape of the heat capacity is similar to the S-shape found experimentally in the heat capacity of rare-earth nuclei [6]. We remark that the saturation of the SMMC heat capacity above ∼1.5 MeV (and eventually its decrease with T) is an artifact of the finite model space. The solid line shown for 60Fe is the result of the BCS calculation. There are two 'peaks' in the heat capacity, corresponding to separate pairing transitions for neutrons (T_c^n ≈ 0.9 MeV) and protons (T_c^p ≈ 1.2 MeV). The finite discontinuities in the BCS heat capacity are shown by the dotted lines. The pairing solution describes well the SMMC results for T ≲ 0.6 MeV. However, the BCS peak in the heat capacity is strongly suppressed around the transition temperature. This is expected in the finite nuclear system because of the strong fluctuations in the vicinity of the pairing transition (not accounted for in the mean-field approach). Despite the large fluctuations, a 'shoulder' still remains around the neutron-pairing transition temperature.

The bottom panels of Fig. 3 show the number of spin-zero pairs versus temperature in SMMC. The numbers of p-p and n-p pairs are similar in the even- and odd-mass iron isotopes. However, the number of n-n pairs at low T differs significantly between the two isotopes.
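The Fermi-gas baseline described above, Eq. (6) together with C = T ∂S/∂T, can be sketched in a few lines. The single-particle spectrum and particle number below are placeholders (the paper uses a deformed Woods-Saxon spectrum), so only the structure of the calculation is meaningful, not the numbers.

```python
import numpy as np

# Placeholder single-particle spectrum (MeV) and particle number;
# the paper extracts eps_a from a deformed Woods-Saxon potential instead.
eps_a = np.linspace(-20.0, 20.0, 40)
n_part = 12

def occupancies(T, mu):
    # Fermi-Dirac occupation numbers f_a = [1 + exp((eps_a - mu)/T)]^-1
    return 1.0 / (1.0 + np.exp((eps_a - mu) / T))

def chem_potential(T):
    # Bisection on the particle-number condition sum(f_a) = n_part
    lo, hi = eps_a.min() - 50.0, eps_a.max() + 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if occupancies(T, mid).sum() < n_part:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def entropy(T):
    # Eq. (6): S = -sum[f ln f + (1 - f) ln(1 - f)]
    f = np.clip(occupancies(T, chem_potential(T)), 1e-12, 1 - 1e-12)
    return -np.sum(f * np.log(f) + (1.0 - f) * np.log(1.0 - f))

def heat_capacity(T, dT=1e-3):
    # C = T dS/dT, estimated by a symmetric finite difference
    return T * (entropy(T + dT) - entropy(T - dT)) / (2.0 * dT)
```

The BCS branch below T_c would replace the Fermi-Dirac occupancies with the quasi-particle occupancies of Eq. (7), with Δ(T) and µ(T) solved from the finite-temperature gap equations.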
The n-n pair number of 60Fe decreases rapidly as a function of T, while that of 59Fe decreases slowly. The S-shape or shoulder seen in the SMMC heat capacity of 60Fe correlates well with the suppression of neutron pairs.

Fig. 4 shows the complete systematics of the heat capacity for the iron isotopes in the mass range A = 52−62, for both even-mass (left panel) and odd-mass (right panel) nuclei. At low temperatures the heat capacity approaches zero, as expected. When T is high, the heat capacity for all isotopes converges to approximately the same value. In the intermediate temperature region (T ∼ 0.7 MeV), the heat capacity increases with mass due to the increase of the density of states with mass. Pairing leads to an odd-even staggering effect in the mass dependence (see also Fig. 2), where the heat capacity of an odd-mass nucleus is significantly lower than that of the adjacent even-mass nuclei. For example, the heat capacity of 57Fe is below that of both 56Fe and 58Fe. The heat capacities of the even-mass 58Fe, 60Fe, and 62Fe all display a peak around T ∼ 0.7 MeV, which becomes more pronounced with an increasing number of neutrons.

In conclusion, we have introduced a new method for calculating the heat capacity in which the statistical errors are strongly reduced. A systematic study of several iron isotopes reveals signatures of the pairing transition in the heat capacity of finite nuclei despite the large fluctuations.

This work was supported in part by Department of Energy grants. Computational cycles were provided by the San Diego Supercomputer Center (using NPACI resources) and by the NERSC high performance computing facility at LBL.

REFERENCES
[1] J. Bardeen, L.N. Cooper and J.R. Schrieffer, Phys. Rev. 108, 1175 (1957).
[2] B. Muhlschlegel, D.J. Scalapino and R. Denton, Phys. Rev. B 6, 1767 (1972).
[3] B. Lauritzen, P. Arve and G.F. Bertsch, Phys. Rev. Lett. 61, 2835 (1988).
[4] B. Lauritzen, A. Anselmino, P.F. Bortignon and R.A. Broglia, Ann. Phys. (N.Y.) 223, 216 (1993).
[5] R. Rossignoli, N. Canosa and
P. Ring, Phys. Rev. Lett. 80, 1853 (1998).
[6] A. Schiller, A. Bjerve, M. Guttormsen, M. Hjorth-Jensen, F. Ingebretsen, E. Melby, S. Messelt, J. Rekstad, S. Siem and S.W. Odegard, arXiv:nucl-ex/9909011.
[7] G.H. Lang, C.W. Johnson, S.E. Koonin and W.E. Ormand, Phys. Rev. C 48, 1518 (1993).
[8] Y. Alhassid, D.J. Dean, S.E. Koonin, G.H. Lang and W.E. Ormand, Phys. Rev. Lett. 72, 613 (1994).
[9] S. Rombouts, K. Heyde and N. Jachowicz, Phys. Rev. C 58, 3295 (1998).
[10] H. Nakada and Y. Alhassid, Phys. Rev. Lett. 79, 2939 (1997).
[11] Y. Alhassid, G.F. Bertsch, S. Liu and H. Nakada, Phys. Rev. Lett. 84, 4313 (2000).
[12] H. Nakada and Y. Alhassid, Phys. Lett. B 436, 231 (1998).
[13] S. Raman et al., Atomic Data and Nuclear Data Tables 42, 1 (1989).
[14] P. Möller et al., Atomic Data Nucl. Data Tables 59, 185 (1995); P. Möller, J.R. Nix and K.-L. Kratz, Atomic Data Nucl. Data Tables 66, 131 (1997); G. Audi et al., Nucl. Phys. A 624, 1 (1997).

FIGURES

FIG. 1. The SMMC heat capacity of 54Fe. The left panel is the result of conventional SMMC calculations. The right panel is calculated using the improved method (based on the representation (4), in which a correlated error can be accounted for).

FIG. 2. Top panel: the SMMC heat capacity vs. temperature T for 55Fe (open circles), 56Fe (solid diamonds), 57Fe (open squares), and 58Fe (solid triangles). Bottom panel: the number of J = 0 neutron pairs versus temperature for the same nuclei.

FIG. 3. Top: heat capacity versus T for 60Fe (left) and 59Fe (right). The Monte Carlo results are shown by symbols. The dotted-dashed lines are the Fermi gas calculations, and the solid line (left panel only) is the BCS result. The discontinuities (dashed lines) correspond to a neutron (T_c ∼ 0.9 MeV) and a proton (T_c ∼ 1.2 MeV) pairing transition. Above the pairing-transition temperature, the BCS results coincide with the Fermi gas results. Bottom panels: the number of J = 0 n-n (circles), p-p (squares), and n-p (diamonds) pairs vs. T for 60Fe (left) and 59Fe (right).

FIG. 4. The heat capacity of even-even (left panel) and odd-even (right panel) iron isotopes.


The coordination generalized particle model—An evolutionary approach to multi-sensor fusion

Xiang Feng^a,*, Francis C.M. Lau^a, Dianxun Shuai^b

^a Department of Computer Science, The University of Hong Kong, Hong Kong
^b Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai 200237, PR China

Received 9 November 2006; received in revised form 6 January 2007; accepted 6 January 2007. Available online 14 January 2007.

Abstract

The rising popularity of multi-source, multi-sensor networks supporting real-life applications calls for an efficient and intelligent approach to information fusion. Traditional optimization techniques often fail to meet the demands. The evolutionary approach provides a valuable alternative due to its inherent parallel nature and its ability to deal with difficult problems. We present a new evolutionary approach based on the coordination generalized particle model (C-GPM), which is founded on the laws of physics. C-GPM treats sensors in the network as distributed intelligent agents with various degrees of autonomy. Existing approaches based on intelligent agents cannot completely answer the question of how their agents could coordinate their decisions in a complex environment. The proposed C-GPM approach can model the autonomy of, as well as the social coordinations and interactive behaviors among, sensors in a decentralized paradigm. Although the other existing evolutionary algorithms have their respective advantages, they may not be able to capture the entire dynamics inherent in the problem, especially those that are high-dimensional, highly nonlinear, and random. The C-GPM approach can overcome such limitations. We develop the C-GPM approach as a physics-based evolutionary approach that can describe such complex behaviors and dynamics of multiple sensors.
© 2007 Elsevier B.V. All rights reserved.

Keywords: Multi-sensor fusion; Sensor behavior; Sensor coordination; Evolutionary algorithm; Dynamic sensor resource allocation problem; Coordination generalized particle model (C-GPM)

1. Introduction

Sensor fusion is a method of integrating signals from multiple sources into a single signal or piece of information. These sources are sensors or devices that allow for perception or measurement of the changing environment. The method uses 'sensor fusion' or 'data fusion' algorithms, which can be classified into different groups, including (1) fusion based on probabilistic models, (2) fusion based on least-squares techniques, and (3) intelligent fusion. This paper presents an evolutionary approach to intelligent information fusion.

Many applications in multi-sensor information fusion can be stated as optimization problems. Among the many different optimization techniques, evolutionary algorithms (EA) are heuristic-based global search and optimization methods that have found their way into almost every area of real-world optimization problems. EA provide a valuable alternative to traditional methods because of their inherent parallelism and their ability to deal with difficult problems that feature non-homogeneous, noisy, incomplete and/or obscured information, constrained resources, and massive processing of large amounts of data. Traditional methods based on correlation, mutual information, local optimization, and sequential processing may perform poorly. EA are inspired by the principles of natural evolution and genetics. Popular EA include the genetic algorithm (GA) [1], the simulated annealing algorithm (SA) [2], ant colony optimization (ACO) [3], particle swarm optimization (PSO) [4], etc., which have all been featured in either Nature or Science.

In this paper, we propose the C-GPM approach as a new branch of EA, which is based on the laws of physics.

Information Fusion 9 (2008) 450–464. doi:10.1016/j.inffus.2007.01.001. 1566-2535/$ - see front matter © 2007 Elsevier B.V. All rights reserved.
* Corresponding author. Tel.: +852********; fax: +852********. E-mail address: xfeng@cs.hku.hk (X. Feng).
Just like other EA drawing from observations of physical processes that occur in nature, the C-GPM approach is inspired by physical models of particle dynamics. Although the other existing EA have their respective advantages, they may not be able to capture the entire dynamics inherent in the problem, especially those that are high-dimensional, highly nonlinear, and random. The C-GPM approach can overcome such limitations. We develop the C-GPM approach as a physics-based evolutionary approach that can describe the complex behaviors and dynamics arising from interactions among multiple sensors.

Our C-GPM algorithm, just like the other popular EA mentioned above, belongs to the class of meta-heuristics in artificial intelligence, which are approximate algorithms for obtaining good-enough solutions to hard combinatorial optimization problems in a reasonable amount of computation time.

Similar to the other EA, the C-GPM algorithm is inherently parallel and can perform well in providing approximate solutions to all types of problems. EA applied to the modeling of biological evolution are generally limited to explorations of micro-evolutionary processes. Some computer simulations, such as Tierra and Avida, however, attempt to model macro-evolutionary dynamics. The C-GPM algorithm is an exploration of micro-evolutionary processes.

In the physical world, mutual attraction between particles causes motion. The reaction of a particle to the field of potential would change the particle's coordinates and energies. The change in the state of the particle is a result of the influence of the potential. In C-GPM, each particle is described by some differential dynamic equations, and the results of their calculations govern the movement (to a new state in the field) of the particle. Specifically, each particle computes the combined effect of its own autonomous self-driving force, the field potential and the interaction potential. If the particles cannot eventually reach an equilibrium, they will proceed to
execute a goal-satisfaction process.

In summary, the relative differences between our C-GPM algorithm and other popular EA can be seen in Table 1. The common features of these different approaches are listed as follows:

• They draw from observations of physical processes that occur in nature.
• They belong to the class of meta-heuristics, which are approximate algorithms used to obtain good-enough solutions to hard combinatorial optimization problems in a reasonable amount of computation time.
• They have inherent parallelism and the ability to deal with difficult problems.
• They consistently perform well in finding approximate solutions to all types of problems.
• They are mainly used in the fields of artificial intelligence.

In this paper, we study some theoretical foundations of the C-GPM approach, including the convergence of C-GPM. The structure of the paper is as follows. In Section 2, we discuss and formalize the problem model for the typical multi-sensor system. In Section 3, we present the evolutionary C-GPM approach to intelligent multi-sensor information fusion. In Section 4, we describe an experiment to verify the claimed properties of the approach. We draw conclusions in Section 5.

Table 1
The C-GPM algorithm vs. other popular EA

  Inspired by:
    C-GPM — physical models of particle dynamics; GA — natural evolution; SA — thermodynamics; ACO — behaviors of real ants; PSO — biological swarm (e.g., swarm of bees).
  Key components:
    C-GPM — energy function and differential dynamic equations; GA — chromosomes; SA — energy function; ACO — pheromone laid; PSO — velocity-coordinate model.
  Exploration:
    C-GPM — both macro-evolutionary and micro-evolutionary processes; GA — macro-evolutionary processes; SA — micro-evolutionary processes; ACO — macro-evolutionary processes; PSO — macro-evolutionary.
  Dynamics:
    C-GPM — can capture the entire dynamics inherent in the problem; GA — cannot capture; SA — can capture partly; ACO — cannot capture; PSO — cannot capture.
  High-dimensional, highly nonlinear, random behaviors and dynamics:
    C-GPM — can describe; GA — cannot describe; SA — can describe partly; ACO — cannot describe; PSO — cannot describe.

2. Dynamic sensor resource allocation

In a sensor-based application with command and control, a major prerequisite to the success of the command and control process is the effective use of the scarce and costly sensing resources. These resources represent an important source of information on which the command and control process bases most of its reasoning. Whenever there are insufficient resources to perform all the desired tasks, the sensor management must allocate the available sensors to those tasks that could maximize the effectiveness of the sensing process.

The dynamic sensor allocation problem consists of selecting sensors of a multi-sensor system to be applied to various objects of interest using feedback strategies. Consider the problem of n sensors, A = {A_1, ..., A_n}, and m objects, T = {T_1, ..., T_m}. In order to obtain useful information about the state of each object, appropriate sensors should be assigned to various objects at the time intervals t ∈ {0, 1, ..., T−1}. The collection of sensors applied to object k during interval t is represented by a vector X_k(t) = {x_1k, ..., x_ik, ..., x_nk}, where

x_ik(t) = 1 if sensor i is used on object k at interval t, and 0 otherwise.

Because of the limited resources sustaining the whole system, the planned sensor distributions must satisfy the following constraint for every t ∈ {0, 1, ..., T−1}:

Σ_{k=1}^{m} r_ik(t) x_ik(t) = 1 ,   (1)

where r_ik denotes the quantity of resources consumed by sensor i on object k and 0 ≤ r_ik ≤ 1.

The goal of sensor allocation is to try to achieve an optimal allocation of all sensors to all the objects after T stages. Let C = (c_ik)_{n×m} be a two-dimensional weight vector. Sensor allocation can be defined as a problem to find a two-dimensional allocation vector R = (r_ik)_{n×m} which maximizes the objective in (2), subject to the constraint (1):

z(R) = (C)^T R X = Σ_{i=1}^{n} Σ_{k=1}^{m} c_ik r_ik x_ik .   (2)

Let f_ik(t) represent the intention strength of social coordination. Thus we obtain an
allocation-related matrix S(t) = [s_ik(t)]_{n×m}, as shown in Table 2, where s_ik(t) = ⟨r_ik(t), c_ik(t), x_ik(t), f_ik(t)⟩. For convenience, both r_ik(t) and c_ik(t) are normalized such that 0 ≤ r_ik(t) ≤ 1 and 0 ≤ c_ik(t) ≤ 1.

3. The C-GPM approach to sensor fusion

3.1. Physical model of C-GPM

This subsection discusses the physical meanings of the coordination generalized particle model (C-GPM) for sensor fusion in multi-sensor systems which involve social coordinations among the sensors. C-GPM treats every entry of
an upward component of a particle’s motion,which represents the tendency that the particle pursues the collective benefit of the whole multi-sensor system.The other upward or downward components of the parti-cle’s motion,which are related to the social coordinations among the sensors,depend on the strengths and kinds of these coordinations.The pulling or pushing forces among particles make particles move to satisfy resource restric-tions,as well as reflect the social coordinations and behav-iors among the sensors.A particle’s own autonomous driving force is directly proportional to the degree the par-ticle tries to maximize its own profit (utility).This autono-mous driving force of a particle actually sets the C-GPM approach apart from the classical physical model.All the generalized particles simultaneously move in the force field,and once they have all reached their respective equilibrium positions,we have a feasible solution to the optimization problem in question.Table 2The matrix S ðt ÞSensors Objects T 1...T k...T mA 1r 11;c 11;x 11;f 11...r 1k ;c 1k ;x 1k ;f 1k ...r 1m ;c 1m ;x 1m ;f 1m ............A i r i 1;c i 1;x i 1;f i 1...r ik ;c ik ;x ik ;f ik ...r im ;c im ;x im ;f im ............A nr n 1;c n 1;x n 1;f n 1...r nk ;c nk ;x nk ;f nk...r nm ;c nm ;x nm ;fnm452X.Feng et al./Information Fusion 9(2008)450–464Because the problem in this paper is a one objective problem,we limit the particles movements to one dimen-sion.The design of C-GPM in fact allows forces of all directions to exist.These forces can be decomposed into their horizontal and vertical components.In this present work,only the vertical component may affect a particle’s motion.In a forthcoming paper,we will introduce the mul-tiple objectives problem where we will handle particles’movements along multiple dimensions.3.2.Mathematical model of C-GPMWe define in this subsection the mathematical model of C-GPM for the sensor allocation problem that involves n sensors and m objects.Let u ik ðt Þbe 
the distance from the current position of particle s_ik to the bottom boundary of force field F at time t, and let J(t) be the utility sum of all particles, which we define as follows:

u_ik(t) = a[1 − exp(−c_ik(t) r_ik(t) x_ik(t))] ,
J(t) = Σ_{i=1}^{n} Σ_{k=1}^{m} u_ik(t) ,   (3)

where 0 < a < 1. 1 − e^{−x} is chosen as the definition of u_ik because 1 − e^{−x} is a monotone increasing function with values between 0 and 1 (Fig. 2).

At time t, the potential energy function P(t), which is caused by the upward gravitational force of force field F, is defined by

P(t) = ε² ln{ Σ_{i=1}^{n} Σ_{k=1}^{m} exp[−u_ik²(t)/2ε²] } − ε² ln(mn) ,   (4)

where 0 < ε < 1. The smaller P(t) is, the better. With Eq. (4), we attempt to construct a potential energy function P(t) such that the decrease of its value would imply the increase of the minimal utility of all the sensors. We prove this in Proposition 3. This way we can optimize the multi-sensor fusion problem in the sense that we consider not only the aggregate utility, but also the individual personal utilities, especially the minimum one. In addition, ε represents the strength of the upward gravitational force of the force field. The bigger ε is, the better. If we did not get a sufficiently satisfactory result by C-GPM, we can make ε smaller.

The gravitational force of the force field causes the particles to move so as to increase the corresponding sensors' minimal personal utility, and hence to realize max-min fair allocation and increase the whole utility of the multi-sensor system.

Following the literature [5–8], we divide typical social coordinations between sensor A_i and sensor A_j into 12 possible types, as in Table 3.

A_ijk: To avoid the harmful consequence possibly caused by A_j, A_i wants to change its own current intention. B_ijk: To exploit the beneficial consequence possibly caused by A_j, A_i wants to change its own current intention. C_ijk: To benefit A_j, A_i wants to change its own current intention regardless of self-benefit. D_ijk: A_i tries to allure A_j to modify A_j's current intention so that A_i could avoid the harmful consequence possibly caused by A_j. E_ijk: A_i tries to entice A_j to modify A_j's current intention so that A_i could exploit the beneficial consequence possibly caused by A_j. F_ijk: A_i tries to tempt A_j to modify A_j's current intention so that A_i might benefit from this, while A_j's interests might be infringed. G_ijk: To compete with each other, neither A_i nor A_j will modify their own intention, but both A_i and A_j might enhance their intention strengths with respect to the Kth goal (or object). H_ijk: Neither A_i nor A_j will modify their own current intention, but both A_i and A_j might decrease their intention strengths with respect to the Kth goal. I_ijk: Due to disregard of the other side, neither A_i nor A_j will modify their own current intention. J_ijk: To harm the other side, both A_i and A_j try to modify their own current intentions. K_ijk: Both A_i and A_j try to modify their own current intentions so that they could implement the intention of the other side. L_ijk: Both A_i and A_j try to modify their current intentions so that they can do something else.

Of the 12 types of social coordination, types A, B, C, D, E and F are via unilateral communication, and types G, H, I, J, K and L are by bilateral communication.

Fig. 2. Graphical presentation of u_ik(t).

Table 3
Social coordinations among sensors b_ijk

Category  Type   Name                                  f_ijk
I         A_ijk  Adaptive avoidance coordination        −1
          B_ijk  Adaptive exploitation coordination
          C_ijk  Collaboration coordination
II        D_ijk  Tempting avoidance coordination         1
          E_ijk  Tempting exploitation coordination
          F_ijk  Deception coordination
III       G_ijk  Competition coordination                1
          H_ijk  Coalition coordination
          I_ijk  Habituation/preference coordination
IV        J_ijk  Antagonism coordination                −1
          K_ijk  Reciprocation coordination
          L_ijk  Compromise coordination

Based on which sensor(s) will modify their current intention, the 12 types can be conveniently grouped into four categories. For A_ijk, B_ijk, C_ijk, it is A_i; for D_ijk, E_ijk, F_ijk, it is A_j; for G_ijk, H_ijk, I_ijk, none will; and for J_ijk, K_ijk, L_ijk, both A_i and A_j will, and so we have

L(I) = L(10) = {A_ijk, B_ijk, C_ijk | ∀i, j, k} ,
L(II) = L(01) = {D_ijk, E_ijk, F_ijk | ∀i, j, k} ,
L(III) = L(00) = {G_ijk, H_ijk, I_ijk | ∀i, j, k} ,
L(IV) = L(11) = {J_ijk, K_ijk, L_ijk | ∀i, j, k} ,
L = L(01) ∪ L(10) ∪ L(00) ∪ L(11) .

The intention strength f_ik(t) of sensor A_i with respect to object T_k is defined by

f_ik(t) = Σ_{j=1}^{n} f_ijk(t) + Σ_{j=1}^{n} f_jik(t) ,   (5)

f_ijk(t) = 1 if b_ijk ∈ L(II) ∪ L(III), and −1 if b_ijk ∈ L(I) ∪ L(IV) ;
f_jik(t) = 1 if b_jik ∈ L(I) ∪ L(III), and −1 if b_jik ∈ L(II) ∪ L(IV) ,   (6)

where b_ijk is the social coordination of sensor A_i with respect to sensor A_j for object T_k, which gives rise to the change f_ijk(t) of intention strength f_ik(t). f_ik(t) of s_ik(t) represents the aggregate intention strength when more than one social coordination happens simultaneously at time t. The greater f_ik(t) is, the more necessary it would be for sensor A_i to modify its r_ik(t) for object T_k.

At time t, the potential energy function Q(t) is defined by

Q(t) = [Σ_{i=1}^{n} Σ_{k=1}^{m} r_ik(t) x_ik(t) − 1]² − Σ_{i,k} ∫^{u_ik} {[1 + exp(−f_ik x)]^{−1} − 0.5} dx .   (7)

The first term of Q(t) is related to the constraints on the sensors' capability; the second term involves the social coordinations among the sensors, with f_ik coming from Eqs. (5) and (6). The first term of Q(t) corresponds to a penalty function with respect to the constraint on the utilization of resources. Therefore, the sensors' resource utilization can be explicitly included as an optimization objective in the multi-sensor fusion problem. The second term of Q(t) is chosen as shown because we want ∂Q/∂u_ik to be a monotone decreasing sigmoid function, as shown in Fig. 3. −{[1 + exp(−f_ik u_ik)]^{−1} − 0.5} is such a function. Therefore we let ∂Q/∂u_ik equal −{[1 + exp(−f_ik u_ik)]^{−1} − 0.5}, which is then integrated to give Q.

A particle in the force field can move upward along a vertical line under a composite force made up of
• the upward gravitational force of the force field,
• the upward or downward component of
particle motion that is related to social coordinations among the sensors,
• the pulling or pushing forces among the particles in order to satisfy resource restrictions, and
• the particle's own autonomous driving force.

The four kinds of forces can all contribute to the particles' upward movements. What is more, these forces produce a hybrid potential energy of the force field. The general hybrid potential energy function for particle s_ik, E_ik(t), can be defined by

E_ik(t) = k_1 u_ik(t) + k_2 J(t) − k_3 P(t) − k_4 Q(t) ,   (8)

where 0 < k_1, k_2, k_3, k_4 < 1. The dynamic equations for particle s_ik are defined by

du_ik(t)/dt = W_1(t) + W_2(t) ,
W_1(t) = −u_ik(t) + c v_ik(t) ,
W_2(t) = [k_1 + k_2 ∂J(t)/∂u_ik(t) − k_3 ∂P(t)/∂u_ik(t) − k_4 ∂Q(t)/∂u_ik(t)] · {[∂u_ik(t)/∂r_ik(t)]² + [∂u_ik(t)/∂c_ik(t)]²} ,   (9)

where c > 1, and v_ik(t) is a piecewise linear function of u_ik(t) defined by

v_ik(t) = 0 if u_ik(t) < 0; u_ik(t) if 0 ≤ u_ik(t) ≤ 1; 1 if u_ik(t) > 1 .   (10)

In order to dynamically optimize the sensor allocation, the particle s_ik may alternately modify r_ik and c_ik, respectively, as follows:

dc_ik(t)/dt = k_1 ∂u_ik(t)/∂c_ik(t) + k_2 ∂J(t)/∂c_ik(t) − k_3 ∂P(t)/∂c_ik(t) − k_4 ∂Q(t)/∂c_ik(t) ,   (11)

dr_ik(t)/dt = k_1 ∂u_ik(t)/∂r_ik(t) + k_2 ∂J(t)/∂r_ik(t) − k_3 ∂P(t)/∂r_ik(t) − k_4 ∂Q(t)/∂r_ik(t) ,   (12)

where ∂Q(t)/∂u_ik = −{[1 + exp(−f_ik(t) u_ik(t))]^{−1} − 0.5} is a sigmoid function of the aggregate intention strength f_ik(t). The graphical presentation of ∂Q/∂u_ik is shown in Fig. 3. Note that f_ik(t) is related to the social coordinations among the sensors at time t. ∂Q/∂u_ik is a monotone decreasing function.
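To make the dynamics concrete, here is a toy Euler-integration sketch of the particle motion. It is not the paper's algorithm: it evolves the utilities u_ik directly (r_ik and c_ik are held fixed rather than updated by Eqs. (11) and (12)), uses the soft-min potential of Eq. (4) for ∂P/∂u, the sigmoid of Eq. (7) for ∂Q/∂u, and ∂J/∂u = 1 since J = Σ u_ik. The problem size, the coefficients k_1..k_4, ε, the intention strengths f_ik, and gamma (playing the role of the constant c > 1 of Eq. (9)) are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_s, m_o = 3, 4                      # invented problem size
a, eps, gamma = 0.9, 0.3, 1.5        # gamma stands in for c > 1 in Eq. (9)
k1, k2, k3, k4 = 0.3, 0.3, 0.2, 0.2
x = np.ones((n_s, m_o))              # all sensor-object assignments active
f = rng.choice([-1.0, 1.0], size=(n_s, m_o))   # toy intention strengths
r = rng.uniform(0.1, 0.9, size=(n_s, m_o))     # held fixed in this sketch
c = rng.uniform(0.1, 0.9, size=(n_s, m_o))     # held fixed in this sketch

u = a * (1.0 - np.exp(-c * r * x))   # initial utilities, Eq. (3)
for _ in range(200):
    # dP/du from the soft-min potential of Eq. (4), via a stable softmax
    w = np.exp(-(u**2 - (u**2).min()) / (2.0 * eps**2))
    dP = -u * w / w.sum()
    # dQ/du is the sigmoid of Eq. (7); dJ/du = 1 because J = sum(u)
    dQ = -(1.0 / (1.0 + np.exp(-f * u)) - 0.5)
    du_r = a * c * x * np.exp(-c * r * x)      # du/dr
    du_c = a * r * x * np.exp(-c * r * x)      # du/dc
    w1 = -u + gamma * np.clip(u, 0.0, 1.0)     # Eq. (10) clipping in W_1
    w2 = (k1 + k2 * 1.0 - k3 * dP - k4 * dQ) * (du_r**2 + du_c**2)
    u = u + 0.05 * (w1 + w2)                   # Euler step
```

With these (arbitrary) coefficients the utilities rise and settle near a finite equilibrium, illustrating the equilibrium-seeking behavior described in Section 3.1.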
Fig. 3. Graphical presentation of ∂Q/∂u_ik.

By Eq. (9), the greater the value of f_ik(t), the greater the values of W_2(t), and hence u_ik(t) will increase, which implies that the social coordinations will strengthen the current allocation r_ik(t). Because ∂Q/∂u_ik is a monotone decreasing function, the greater the value of f_ik(t), the smaller ∂Q/∂u_ik, the greater −∂Q/∂u_ik, the greater W_2(t) by Eq. (9), and the greater Δr_ik(t+1) by Eq. (12). Since r_ik(t+1) = r_ik(t) + Δr_ik(t+1), the greater the value of f_ik(t), the greater r_ik(t+1). Since u_ik is a monotone increasing function, the greater r_ik(t+1), the greater u_ik. In summary:

f_ik(t)↑ ⇒ ∂Q/∂u_ik↓ ⇒ −∂Q/∂u_ik↑ ⇒(9) W_2↑ ⇒(12) Δr_ik(t+1)↑ ⇒(a) r_ik(t+1)↑ ⇒(b) u_ik↑ ,

where (a) r_ik(t+1) = r_ik(t) + Δr_ik(t+1), and (b) u_ik is a monotone increasing function.

In the following, we derive some formal properties of the mathematical model presented above.

Proposition 1. Updating the weights c_ik and the allotted resource r_ik by Eqs. (11) and (12), respectively, amounts to changing the speed of particle s_ik by W_2(t) of Eq. (9).

Denote the jth terms of Eqs. (11) and (12) by ⟨dc_ik(t)/dt⟩_j and ⟨dr_ik(t)/dt⟩_j, respectively. When the allotted resource r_ik is updated according to (12), the first and second terms of (12) will cause the following speed increments of the particle s_ik, respectively:

⟨du_ik(t)/dt⟩_{r1} = [∂u_ik(t)/∂r_ik(t)] ⟨dr_ik(t)/dt⟩_1 = k_1 [∂u_ik(t)/∂r_ik(t)]² ,   (13)

⟨du_ik(t)/dt⟩_{r2} = [∂u_ik(t)/∂r_ik(t)] ⟨dr_ik(t)/dt⟩_2 = k_2 [∂u_ik(t)/∂r_ik(t)] [∂J(t)/∂r_ik(t)] = k_2 [∂J(t)/∂u_ik(t)] [∂u_ik(t)/∂r_ik(t)]² .   (14)

Similarly, the third and the fourth terms of Eq. (12) will cause the following speed increments of the particle s_ik:

⟨du_ik(t)/dt⟩_{r3} = −k_3 [∂P(t)/∂u_ik(t)] [∂u_ik(t)/∂r_ik(t)]² ,
⟨du_ik(t)/dt⟩_{r4} = −k_4 [∂Q(t)/∂u_ik(t)] [∂u_ik(t)/∂r_ik(t)]² .

Similarly, for Eq. (11), we have ⟨du_ik(t)/dt⟩_{cj}, j
¼1;2;3;4.We thus obtainX 4j ¼1½h d u ik ðt Þ=d t i c j þh d u ik ðt Þ=d t i r j¼k 1þk 2o J ðt Þo u ik ðt ÞÀk 3o P ðt Þo u ik ðt ÞÀk 4o Q ðt Þo u ik ðt Þ!o u ik ðt Þo r ik ðt Þ!2(þo u ik ðt Þo c ik ðt Þ!2)¼W 2ðt Þ:Therefore,updating c ðj Þik and r ðj Þik by (11)and (12),respec-tively,gives rise to the speed increment of particle s ik that is exactly equal to W 2ðt Þof Eq.(9).Proposition 2.The first and second terms of Eqs.(11)and (12)will enable the particle s ik to move upwards,that is,the personal utility of sensor A i from object T k increases,in direct proportion to the value of ðk 1þk 2Þ.According to Eqs.(13)and (14),the sum of the first and second terms of Eqs.(11)and (12)will beh d u ik ðt Þ=d t i r 1þh d u ik ðt Þ=d t i r 2þh d u ik ðt Þ=d t i c 1þh d u ik ðt Þ=d t i c2¼k 1þk 2o J ðt Þo u ik ðt Þ !o u ik ðt Þo r ik ðt Þ !2þo u ik ðt Þo c ik ðt Þ !2()¼ðk 1þk 2Þx 2ik ðt Þ½r 2ik ðt Þþc 2ik ðt Þ ½Àu ik ðt Þ 2P 0:Therefore,the first and second terms of (11)and (12)willcause u ik ðt Þto monotonically increase.Proposition 3.For C-GPM,if is very small,then decreas-ing the potential energy P ðt Þof Eq.(4)amounts to increasing the minimal utility of an sensor with respect to an object,min-imized over S ðt Þ.Supposing that H ðt Þ¼max i ;k fÀu 2ik ðt Þg ,we have½exp ðH ðt Þ=2 2Þ 2 26X n i ¼1X m k ¼1exp ðÀu 2ik ðt Þ=2 2Þ"#2 26½mn exp ðH ðt Þ=2 2Þ 2 2:Taking the logarithm of both sides of the above inequalities givesH ðt Þ62 2ln X n i ¼1X m k ¼1exp ðÀu 2ik ðt Þ=2 2Þ6H ðt Þþ2 2ln mn :Since mn is constant and is very small,we haveH ðt Þ%2 2ln X n i ¼1X m k ¼1exp ðÀu 2ik ðt Þ=2 2ÞÀ2 2ln mn ¼2P ðt Þ:It turns out that the potential energy P ðt Þat the time t rep-resents the maximum of Àu 2ik ðt Þamong all the particles s ik ,which is the minimal personal utility of a sensor with respect to an object at time t .Hence the decrease of potential energy P ðt Þwill result in the increase of the minimum of u ik ðt Þ.Proposition 4.Updating c ik and r ik according to Eqs.(11)and 
(12)amounts to increasing the minimal utility of a sensor with respect to an object in direct proportion to the value of k 3.The speed increment of particle s ik ,which is related to potential energy P ðt Þ,is given by d u ik ðt Þd t ()3¼h d u ik ðt Þ=d t i r 3þh d u ik ðt Þ=d t i c3¼Àk 3o P ðt Þo u ik ðt Þo u ik ðt Þo r ik ðt Þ !2þo u ik ðt Þo c ik ðt Þ!2():X.Feng et al./Information Fusion 9(2008)450–464455。
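The sandwich bound in the proof of Proposition 3 is easy to check numerically: the smoothed quantity $2\epsilon^2\ln\sum\exp(-u_{ik}^2/2\epsilon^2)$ always lies between $H(t)$ and $H(t) + 2\epsilon^2\ln mn$, and converges to $H(t) = \max\{-u_{ik}^2\}$ as $\epsilon$ shrinks. The utility values below are toy numbers chosen for illustration, not data from the paper.

```python
import math

def smoothed_H(u, eps):
    # 2*eps^2 * ln( sum_ik exp(-u_ik^2 / (2 eps^2)) ): the soft
    # approximation of H(t) used in the proof of Proposition 3
    s = sum(math.exp(-x * x / (2 * eps ** 2)) for row in u for x in row)
    return 2 * eps ** 2 * math.log(s)

u = [[0.9, 0.4], [0.7, 0.25]]                 # toy utilities u_ik, n = m = 2
H = max(-x * x for row in u for x in row)     # H(t) = -min(u_ik)^2 = -0.0625

for eps in (0.5, 0.2, 0.05):
    approx = smoothed_H(u, eps)
    # Sandwich bound: H <= approx <= H + 2 eps^2 ln(mn), here mn = 4
    assert H <= approx + 1e-12
    assert approx <= H + 2 * eps ** 2 * math.log(4) + 1e-12
```

As $\epsilon$ decreases, the slack term $2\epsilon^2\ln mn$ vanishes, so the smoothed value pins down the single worst (minimal-utility) particle, which is why lowering $P(t)$ raises the minimum of $u_{ik}(t)$.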
