Renormalization constants of local operators within the Schrödinger functional scheme


The Rydberg constant formula

The Rydberg constant formula is a very important formula in quantum mechanics, used to describe the energy-level structure of the hydrogen atom and similar systems.

The formula was first proposed by Rydberg in 1888 and is still widely used in physics research today.

In mathematical notation the Rydberg formula reads 1/λ = R(1/n₁² − 1/n₂²), where 1/λ is the reciprocal of the wavelength (the wavenumber), R is the Rydberg constant, and n₁ and n₂ are integers labeling two energy levels of the hydrogen atom (with n₂ > n₁).
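As a quick numerical illustration (a minimal sketch; the CODATA value of the Rydberg constant R∞ is assumed, and the function name is our own), the formula can be evaluated directly:

```python
# Rydberg formula: 1/lambda = R * (1/n1^2 - 1/n2^2).
# R_INF is the CODATA value of the Rydberg constant in m^-1 (an assumed constant;
# for real hydrogen a small reduced-mass correction applies).
R_INF = 1.0973731568e7  # m^-1

def rydberg_wavelength(n1, n2, R=R_INF):
    """Wavelength (in meters) of the photon emitted in the n2 -> n1 transition."""
    if not n2 > n1 >= 1:
        raise ValueError("need n2 > n1 >= 1")
    return 1.0 / (R * (1.0 / n1**2 - 1.0 / n2**2))

# Balmer series (n1 = 2): the first lines come out near 656, 486, and 434 nm.
for n2 in (3, 4, 5):
    print(f"n = {n2} -> 2: {rydberg_wavelength(2, n2) * 1e9:.1f} nm")
```

The first Balmer line is the familiar red H-alpha line, and the wavelengths shrink toward the series limit as n₂ grows, as the formula predicts.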

This formula explains the spectral lines of atomic hydrogen and can also be used to compute energy-level differences in the hydrogen atom.

Several important conclusions follow from it.

The Rydberg constant fixes the positions of the lines in the hydrogen spectrum (their intensities are governed by transition probabilities, not by R).

As n₁ and n₂ take different integer values, the wavenumber 1/λ changes accordingly.

This explains why the hydrogen spectrum consists of a series of discrete lines.

The positions of these lines can be computed from the Rydberg formula.

The Rydberg constant also reflects the structure of the hydrogen energy levels.

According to the formula, the larger n₁ and n₂ are, the smaller the energy-level difference and the longer the wavelength of the corresponding line.

This shows that the energy levels of hydrogen are discrete and that the spacing between levels shrinks as the level index increases.

The Rydberg constant can also be used to compute the energy levels themselves.

By measuring the wavelengths of lines in the hydrogen spectrum, one can infer the corresponding energy-level differences.

Combining these with the Rydberg formula, the hydrogen energy levels can be calculated.

Beyond hydrogen, the Rydberg formula also applies to other hydrogen-like systems, such as He⁺ and other single-electron ions (for an ion of nuclear charge Z the right-hand side acquires a factor of Z², and the constant receives a small reduced-mass correction).

Because the structure of these systems resembles that of hydrogen, essentially the same constant governs their spectra.

In practice, the Rydberg formula is widely used in the study and analysis of atomic spectra.

By measuring line wavelengths, one can infer the energy-level structure and properties of an atom.

This is of great significance for atomic physics, chemistry, astrophysics, and related fields.

The Rydberg formula is thus a key tool for describing the energy-level structure of hydrogen and hydrogen-like systems.

Through it we can understand the distribution of lines in the hydrogen spectrum, infer energy-level differences, and compute the positions of the levels.

The formula plays an important role in quantum-mechanical research and applications, revealing part of the structure of the microscopic world.

Peking University - Physical Chemistry - Chapter 2: The Second Law of Thermodynamics

Chapter 2: The Second Law of Thermodynamics

2.1 The directionality of change: irreversibility
Apart from reversible processes, every change has a definite direction and limit and never reverses itself spontaneously; these are the irreversible processes of thermodynamics.

The irreversibilities of the various kinds of processes are not isolated but mutually related: each can be reduced to the consequence that restoring the system by external means leaves a definite amount of work converted into heat in the surroundings.

It is therefore possible to establish a single, universally applicable criterion across the different thermodynamic processes, and to use it to judge the direction and limit of complex processes.
Thermal efficiency (efficiency of the engine)

A heat engine absorbs heat Q_h from a high-temperature reservoir (T_h), converts part of it into work W, and passes the remainder, Q_c, to a low-temperature reservoir (T_c). The ratio of the work done by the engine to the heat it absorbs is called the thermal efficiency (or conversion coefficient) of the engine, denoted η; it is always less than 1:

η = W/Q_h = (Q_h + Q_c)/Q_h = 1 + Q_c/Q_h   (Q_c < 0)

For a Carnot cycle with isothermal expansion from V₁ to V₂ at T_h, the net work per cycle is W = nR(T_h − T_c) ln(V₂/V₁) while Q_h = nRT_h ln(V₂/V₁), so that η = (T_h − T_c)/T_h.

The significance of Carnot's theorem: (1) it introduces an inequality, η_irreversible < η_reversible, which in principle settles the question of the direction of chemical reactions; (2) it settles the limiting value of the thermal efficiency of heat engines.
Carnot's theorem: no heat engine working between the same hot reservoir and the same cold reservoir can be more efficient than a reversible engine. The Carnot cycle is an important milestone in the development of the second law, and it points out the special significance of reversible processes.

In principle, the direction of a process can be judged from the Clausius or Kelvin statement, but in practice this is inconvenient and too abstract, and it cannot indicate the limit of a process. Starting from an analysis of the heat-work conversion in the Carnot process, Clausius eventually discovered the most fundamental state function of the second law: entropy.
That is, the area enclosed by the curve ABCD is the work done by the engine.

Carnot cycle

From the equation for a reversible adiabatic process:

Process 2 (adiabatic expansion):  T_h V₂^(γ−1) = T_c V₃^(γ−1)

Process 4 (adiabatic compression):  T_h V₁^(γ−1) = T_c V₄^(γ−1)

Dividing the two relations gives V₂/V₁ = V₃/V₄.
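A minimal numerical sketch of these relations (the function names and the choice of reservoir temperatures are our own): the ratio W/Q_h computed from the ideal-gas Carnot expressions must equal the Carnot efficiency 1 − T_c/T_h.

```python
import math

def carnot_efficiency(T_h, T_c):
    """Efficiency of a reversible engine between T_h and T_c (in kelvin)."""
    return 1.0 - T_c / T_h

def carnot_work_and_heat(n, T_h, T_c, V1, V2, R=8.314):
    """Net work and absorbed heat per cycle for an ideal-gas Carnot cycle
    whose isothermal expansion runs from V1 to V2 at T_h."""
    Q_h = n * R * T_h * math.log(V2 / V1)        # heat absorbed at T_h
    W = n * R * (T_h - T_c) * math.log(V2 / V1)  # net work per cycle
    return W, Q_h

W, Q_h = carnot_work_and_heat(1.0, 500.0, 300.0, 1.0, 2.0)
print(W / Q_h, carnot_efficiency(500.0, 300.0))  # both equal 0.4
```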

Thermodynamics and Statistical Physics lecture notes: Chapter 7, Boltzmann Statistics

…the non-degeneracy condition e^α ≫ 1 becomes easier to satisfy.

For ordinary gases at room temperature and pressure, e^α ~ 10⁴, so the non-degeneracy condition is satisfied and Boltzmann statistics can be used.
This condition can also be rewritten as

e^α = (V/N) · (2πmkT/h²)^(3/2)   (*)

The de Broglie wavelength of a molecule is λ = h/p ≈ h/√(3mkT), where the mean energy of thermal motion of a molecule is taken to be ε̄ ≈ (3/2)kT (as follows from the energy equipartition theorem discussed later).
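A small numerical check of (*) (a sketch; nitrogen at 300 K and 1 atm is our own illustrative choice, and the constants are CODATA values). For this case the number actually comes out around 10⁶, even larger than the 10⁴ quoted above, so the non-degeneracy condition holds comfortably:

```python
import math

k = 1.380649e-23      # Boltzmann constant, J/K
h = 6.62607015e-34    # Planck constant, J*s
amu = 1.66053907e-27  # atomic mass unit, kg

T = 300.0             # room temperature, K
P = 101325.0          # 1 atm, Pa
m = 28.0 * amu        # mass of an N2 molecule

V_over_N = k * T / P  # ideal gas: V/N = kT/P
e_alpha = V_over_N * (2 * math.pi * m * k * T / h**2) ** 1.5

print(f"e^alpha = {e_alpha:.3g}")  # >> 1, so Boltzmann statistics applies
```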
a_l/N = (1/Z₁) · (ω_l/h₀^r) · e^(−βε_l)

The factor h₀^r here cancels against the h₀^r contained in the partition function Z₁, so the result is independent of h₀.

The probability that a single particle is in state l is

P_l = a_l/N = (1/Z₁) · (ω_l/h₀^r) · e^(−βε_l)

and the mean value of a quantity A is

Ā = Σ_l P_l A_l = (1/Z₁) Σ_l (ω_l/h₀^r) A_l e^(−βε_l) = (1/Z₁) ∫ A e^(−βε) dω/h₀^r

Likewise, U = −N ∂(ln Z₁)/∂β and Y_i = −(N/β) ∂(ln Z₁)/∂y_i involve only ln Z₁ and are therefore independent of h₀.
Chapter 7: Boltzmann Statistics

§7.1 Statistical expressions for the thermodynamic quantities

1. The partition function

The partition function is the most important thermodynamic characteristic function in statistical physics: once it is known, all thermodynamic quantities of an equilibrium system can be obtained.
The total number of particles in the system is

N = Σ_l a_l = Σ_l ω_l e^(−α−βε_l) = e^(−α) Σ_l ω_l e^(−βε_l)

Let

Z₁ = Σ_l ω_l e^(−βε_l)   [sum over single-particle energy levels]
   = Σ_s e^(−βε_s)       [sum over single-particle quantum states]

which is called the (single-particle) partition function; then

N = e^(−α) Z₁
Because F is related to S, and hence to the number of microstates, the two kinds of systems give different results (for indistinguishable particles an extra factor of N! enters).
Classical approximation

Compare the quantum Boltzmann distribution

a_l = ω_l e^(−α−βε_l)

with the classical Boltzmann distribution

a_l = (Δω_l/h₀^r) e^(−α−βε_l)
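The statistical expressions above can be checked numerically on a toy spectrum (a sketch; the two-level system and all names are our own choice): U = −N ∂(ln Z₁)/∂β, evaluated by a finite difference, reproduces the textbook two-level result.

```python
import math

def Z1(levels, beta):
    """Single-particle partition function Z1 = sum_l omega_l * exp(-beta*eps_l)."""
    return sum(w * math.exp(-beta * e) for e, w in levels)

def U(levels, beta, N=1.0, h=1e-6):
    """Internal energy U = -N d(ln Z1)/d(beta), via a central difference."""
    return -N * (math.log(Z1(levels, beta + h))
                 - math.log(Z1(levels, beta - h))) / (2 * h)

# Two-level system: eps = 0 and eps = delta, both non-degenerate.
delta, beta = 1.0, 0.7
levels = [(0.0, 1), (delta, 1)]

u_numeric = U(levels, beta)
u_exact = delta * math.exp(-beta * delta) / (1.0 + math.exp(-beta * delta))
print(u_numeric, u_exact)  # agree to high accuracy
```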

Quantum mechanics formulas

Some common formulas in quantum mechanics include:

1. The Schrödinger equation: the fundamental equation of quantum physics, describing how the state of a microscopic particle evolves with time.

Its general form is iℏ ∂Ψ/∂t = HΨ, where i is the imaginary unit, ℏ is the reduced Planck constant, Ψ is the wave function, and H is the Hamiltonian operator.

2. Wave-particle duality: matter exhibits both wave-like and particle-like behavior.

Its expression is λ = h/p, where λ is the de Broglie wavelength, h is the Planck constant, and p is the particle's momentum.

3. Measurement theory: the measurement of physical quantities and the observed results carry an intrinsic probability and uncertainty.

Measurement theory describes this uncertainty using probabilistic and statistical methods.

The best-known formula is the Heisenberg uncertainty principle: ΔxΔp ≥ h/4π (i.e. ℏ/2), where Δx and Δp denote the uncertainties in position and momentum and h is the Planck constant.

4. Fermi-Dirac statistics and Bose-Einstein statistics: describe the statistical behavior of particles.

Fermi-Dirac statistics applies to fermions (such as electrons and protons); Bose-Einstein statistics applies to bosons (such as photons and phonons).

5. Complex conjugate of the wave function: Ψ*(r, t).

6. Normalization condition: ∫ |Ψ(r, t)|² d³r = 1.

7. Position operator: x.

8. Momentum operator: −iℏ∇.

9. Energy operator: iℏ ∂/∂t.

10. Orthonormality condition: ∫ ψₙ*(r) ψₘ(r) d³r = δₙₘ.

The formulas above are for reference only; for more precise statements, consult a quantum mechanics textbook or a specialist.
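Formulas 6 and 10 can be illustrated numerically (a sketch under an assumed example: the real particle-in-a-box eigenfunctions ψₙ(x) = √(2/L) sin(nπx/L) on [0, L], for which ψ* = ψ):

```python
import math

def psi(n, x, L=1.0):
    """n-th particle-in-a-box eigenfunction on [0, L] (real, so psi* = psi)."""
    return math.sqrt(2.0 / L) * math.sin(n * math.pi * x / L)

def overlap(n, m, L=1.0, steps=20000):
    """Numerical integral of psi_n(x) * psi_m(x) over [0, L] (trapezoid rule)."""
    dx = L / steps
    total = 0.0
    for i in range(steps + 1):
        w = 0.5 if i in (0, steps) else 1.0
        x = i * dx
        total += w * psi(n, x, L) * psi(m, x, L) * dx
    return total

# Diagonal overlaps are 1 (normalization), off-diagonal are 0 (orthogonality).
print(round(overlap(1, 1), 6), round(overlap(1, 2), 6))
```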

Kernels and regularization on graphs


Kernels and Regularization on Graphs

Alexander J. Smola¹ and Risi Kondor²
¹ Machine Learning Group, RSISE, Australian National University, Canberra, ACT 0200, Australia. Alex.Smola@.au
² Department of Computer Science, Columbia University, 1214 Amsterdam Avenue, M.C. 0401, New York, NY 10027, USA. risi@

Abstract. We introduce a family of kernels on graphs based on the notion of regularization operators. This generalizes in a natural way the notion of regularization and Greens functions, as commonly used for real valued functions, to graphs. It turns out that diffusion kernels can be found as a special case of our reasoning. We show that the class of positive, monotonically decreasing functions on the unit interval leads to kernels and corresponding regularization operators.

1 Introduction

There has recently been a surge of interest in learning algorithms that operate on input spaces X other than Rⁿ, specifically, discrete input spaces, such as strings, graphs, trees, automata etc. Since kernel-based algorithms, such as Support Vector Machines, Gaussian Processes, Kernel PCA, etc. capture the structure of X via the kernel K: X × X → R, as long as we can define an appropriate kernel on our discrete input space, these algorithms can be imported wholesale, together with their error analysis, theoretical guarantees and empirical success. One of the most general representations of discrete metric spaces are graphs.
Even if all we know about our input space are local pairwise similarities between points xᵢ, xⱼ ∈ X, distances (e.g. shortest path length) on the graph induced by these similarities can give a useful, more global, sense of similarity between objects. In their work on Diffusion Kernels, Kondor and Lafferty [2002] gave a specific construction for a kernel capturing this structure. Belkin and Niyogi [2002] proposed an essentially equivalent construction in the context of approximating data lying on surfaces in a high dimensional embedding space, and in the context of leveraging information from unlabeled data.

In this paper we put these earlier results into the more principled framework of Regularization Theory. We propose a family of regularization operators (equivalently, kernels) on graphs that include Diffusion Kernels as a special case, and show that this family encompasses all possible regularization operators invariant under permutations of the vertices in a particular sense.

Outline of the Paper: Section 2 introduces the concept of the graph Laplacian and relates it to the Laplace operator on real valued functions. Next we define an extended class of regularization operators and show why they have to be essentially a function of the Laplacian. An analogy to real valued Greens functions is established in Section 3.3, and efficient methods for computing such functions are presented in Section 4. We conclude with a discussion.

2 Laplace Operators

An undirected unweighted graph G consists of a set of vertices V numbered 1 to n, and a set of edges E (i.e., pairs (i, j) where i, j ∈ V and (i, j) ∈ E ⇔ (j, i) ∈ E). We will sometimes write i ∼ j to denote that i and j are neighbors, i.e. (i, j) ∈ E. The adjacency matrix of G is an n × n real matrix W, with W_ij = 1 if i ∼ j, and 0 otherwise (by construction, W is symmetric and its diagonal entries are zero).
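The graph Laplacian L := D − W defined below satisfies ⟨f, Lf⟩ = (1/2) Σ_{i∼j} (fᵢ − fⱼ)², which is easy to verify numerically (a sketch on an assumed toy graph, a 3-vertex path; the sum runs over ordered neighbor pairs, so each edge is counted twice):

```python
# Verify f^T L f = (1/2) * sum_{i~j} (f_i - f_j)^2 on a small graph.

W = [[0, 1, 0],      # adjacency matrix of the path 0 - 1 - 2
     [1, 0, 1],
     [0, 1, 0]]
n = len(W)
D = [sum(row) for row in W]                      # vertex degrees
L = [[(D[i] if i == j else 0) - W[i][j] for j in range(n)] for i in range(n)]

f = [0.3, -1.2, 2.0]                             # an arbitrary test vector

quad = sum(f[i] * L[i][j] * f[j] for i in range(n) for j in range(n))
edge_sum = 0.5 * sum(W[i][j] * (f[i] - f[j]) ** 2
                     for i in range(n) for j in range(n))

print(quad, edge_sum)  # equal
```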
These definitions and most of the following theory can trivially be extended to weighted graphs by allowing W_ij ∈ [0, ∞). Let D be an n × n diagonal matrix with D_ii = Σⱼ W_ij. The Laplacian of G is defined as L := D − W, and the Normalized Laplacian is L̃ := D^(−1/2) L D^(−1/2) = I − D^(−1/2) W D^(−1/2). The following two theorems are well known results from spectral graph theory [Chung-Graham, 1997]:

Theorem 1 (Spectrum of L̃). L̃ is a symmetric, positive semidefinite matrix, and its eigenvalues λ₁, λ₂, …, λₙ satisfy 0 ≤ λᵢ ≤ 2. Furthermore, the number of eigenvalues equal to zero equals the number of disjoint components in G.

The bound on the spectrum follows directly from Gerschgorin's Theorem.

Theorem 2 (L and L̃ for Regular Graphs). Now let G be a regular graph of degree d, that is, a graph in which every vertex has exactly d neighbors. Then L = dI − W and L̃ = I − (1/d)W = (1/d)L. Finally, W, L, and L̃ share the same eigenvectors {vᵢ}, where vᵢ = λᵢ⁻¹ W vᵢ = (d − λᵢ)⁻¹ L vᵢ = (1 − d⁻¹λᵢ)⁻¹ L̃ vᵢ for all i.

L and L̃ can be regarded as linear operators on functions f: V → R, or, equivalently, on vectors f = (f₁, f₂, …, fₙ)ᵀ. We could equally well have defined L by

⟨f, Lf⟩ = fᵀLf = (1/2) Σ_{i∼j} (fᵢ − fⱼ)²  for all f ∈ Rⁿ,  (1)

which readily generalizes to graphs with a countably infinite number of vertices.

The Laplacian derives its name from its analogy with the familiar Laplacian operator Δ = ∂²/∂x₁² + ∂²/∂x₂² + … + ∂²/∂xₘ² on continuous spaces. Regarding (1) as inducing a semi-norm ‖f‖_L = ⟨f, Lf⟩ on Rⁿ, the analogous expression for Δ defined on a compact space Ω is

‖f‖_Δ = ⟨f, Δf⟩ = ∫_Ω f (Δf) dω = ∫_Ω (∇f)·(∇f) dω.  (2)

Both (1) and (2) quantify how much f varies locally, or how "smooth" it is over its respective domain.

More explicitly, when Ω = Rᵐ, up to a constant, −L is exactly the finite difference discretization of Δ on a regular lattice:

Δf(x) = Σᵢ₌₁ᵐ ∂²f/∂xᵢ²
      ≈ Σᵢ₌₁ᵐ [∂f/∂xᵢ(x + ½eᵢ) − ∂f/∂xᵢ(x − ½eᵢ)] / δ
      ≈ Σᵢ₌₁ᵐ [f(x + eᵢ) + f(x − eᵢ) − 2f(x)] / δ²
      = (1/δ²) Σᵢ₌₁ᵐ (f_{x₁,…,xᵢ+1,…,xₘ} + f_{x₁,…,xᵢ−1,…,xₘ} − 2f_{x₁,…,xₘ})
      = −(1/δ²) [Lf]_{x₁,…,xₘ},

where e₁, e₂, …, eₘ is an orthogonal basis for Rᵐ normalized to ‖eᵢ‖ = δ, the vertices
of the lattice are at x = x₁e₁ + … + xₘeₘ with integer valued coordinates xᵢ ∈ N, and f_{x₁,x₂,…,xₘ} = f(x). Moreover, both the continuous and the discrete Laplacians are canonical operators on their respective domains, in the sense that they are invariant under certain natural transformations of the underlying space, and in this they are essentially unique.

[Figure: a regular grid in two dimensions.]

The Laplace operator Δ is the unique self-adjoint linear second order differential operator invariant under transformations of the coordinate system under the action of the special orthogonal group SOₘ, i.e. invariant under rotations. This well known result can be seen by using Schur's lemma and the fact that SOₘ is irreducible on Rᵐ.

We now show a similar result for L. Here the permutation group plays a similar role to SOₘ. We need some additional definitions: denote by Sₙ the group of permutations on {1, 2, …, n}, with π ∈ Sₙ being a specific permutation taking i ∈ {1, 2, …, n} to π(i). The so-called defining representation of Sₙ consists of n × n matrices Π_π, such that [Π_π]_{i,π(i)} = 1 and all other entries of Π_π are zero.

Theorem 3 (Permutation Invariant Linear Functions on Graphs). Let L be an n × n symmetric real matrix, linearly related to the n × n adjacency matrix W, i.e. L = T[W] for some linear operator T, in a way invariant to permutations of vertices in the sense that

Π_πᵀ T[W] Π_π = T[Π_πᵀ W Π_π]  (3)

for any π ∈ Sₙ. Then L is related to W by a linear combination of the following operations: identity; row/column sums; overall sum; row/column sum restricted to the diagonal of L; overall sum restricted to the diagonal of W.
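The invariance (3) can be sanity-checked numerically for the Laplacian map T[W] = D − W (a sketch; the test graph and permutation are our own arbitrary choices): relabeling the vertices commutes with forming the Laplacian.

```python
# Check invariance (3) for T[W] = D - W: relabeling vertices and then building
# the Laplacian equals building the Laplacian and then relabeling.

def laplacian(W):
    n = len(W)
    return [[(sum(W[i]) if i == j else 0) - W[i][j] for j in range(n)]
            for i in range(n)]

def relabel(M, pi):
    """Conjugate M by the permutation i -> pi[i] (vertex relabeling)."""
    n = len(M)
    return [[M[pi[i]][pi[j]] for j in range(n)] for i in range(n)]

W = [[0, 1, 1, 0],     # adjacency matrix of a small test graph
     [1, 0, 1, 0],
     [1, 1, 0, 1],
     [0, 0, 1, 0]]
pi = [2, 0, 3, 1]      # an arbitrary permutation of the four vertices

assert laplacian(relabel(W, pi)) == relabel(laplacian(W), pi)
print("T[W] = D - W satisfies the invariance (3)")
```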
Proof. Let

L_{i₁i₂} = T[W]_{i₁i₂} := Σ_{i₃=1}ⁿ Σ_{i₄=1}ⁿ T_{i₁i₂i₃i₄} W_{i₃i₄}  (4)

with T ∈ R^(n⁴). Eq. (3) then implies T_{π(i₁)π(i₂)π(i₃)π(i₄)} = T_{i₁i₂i₃i₄} for any π ∈ Sₙ.

The indices of T can be partitioned by the equality relation on their values, e.g. (2, 5, 2, 7) is of the partition type [13|2|4], since i₁ = i₃, but i₂ ≠ i₁, i₄ ≠ i₁ and i₂ ≠ i₄. The key observation is that under the action of the permutation group, elements of T with a given index partition structure are taken to elements with the same index partition structure, e.g. if i₁ = i₃ then π(i₁) = π(i₃) and if i₁ ≠ i₃, then π(i₁) ≠ π(i₃). Furthermore, an element with a given index partition structure can be mapped to any other element of T with the same index partition structure by a suitable choice of π. Hence, a necessary and sufficient condition for (4) is that all elements of T of a given index partition structure be equal. Therefore, T must be a linear combination of the following tensors (i.e. multilinear forms):

A_{i₁i₂i₃i₄} = 1

B^[1,2]_{i₁i₂i₃i₄} = δ_{i₁i₂},  B^[1,3] = δ_{i₁i₃},  B^[1,4] = δ_{i₁i₄},  B^[2,3] = δ_{i₂i₃},  B^[2,4] = δ_{i₂i₄},  B^[3,4] = δ_{i₃i₄}

C^[1,2,3]_{i₁i₂i₃i₄} = δ_{i₁i₂}δ_{i₂i₃},  C^[2,3,4] = δ_{i₂i₃}δ_{i₃i₄},  C^[3,4,1] = δ_{i₃i₄}δ_{i₄i₁},  C^[4,1,2] = δ_{i₄i₁}δ_{i₁i₂}

D^[1,2][3,4]_{i₁i₂i₃i₄} = δ_{i₁i₂}δ_{i₃i₄},  D^[1,3][2,4] = δ_{i₁i₃}δ_{i₂i₄},  D^[1,4][2,3] = δ_{i₁i₄}δ_{i₂i₃}

E^[1,2,3,4]_{i₁i₂i₃i₄} = δ_{i₁i₂}δ_{i₁i₃}δ_{i₁i₄}.

The tensor A puts the overall sum in each element of L, while B^[1,2] returns the same restricted to the diagonal of L. Since W has vanishing diagonal, B^[3,4], C^[2,3,4], C^[3,4,1], D^[1,2][3,4] and E^[1,2,3,4] produce zero. Without loss of generality we can therefore ignore them. By symmetry of W, the pairs (B^[1,3], B^[1,4]), (B^[2,3], B^[2,4]), (C^[1,2,3], C^[4,1,2]) have the same effect on W, hence we can set the coefficient of the second member of each to zero. Furthermore, to enforce symmetry on L, the coefficient of B^[1,3] and B^[2,3] must be
the same (without loss of generality 1), and this will give the row/column sum matrix (Σₖ W_ik) + (Σₖ W_jk). Similarly, C^[1,2,3] and C^[4,1,2] must have the same coefficient, and this will give the row/column sum restricted to the diagonal: δ_ij[(Σₖ W_ik) + (Σₖ W_jk)]. Finally, by symmetry of W, D^[1,3][2,4] and D^[1,4][2,3] are both equivalent to the identity map.

The various row/column sum and overall sum operations are uninteresting from a graph theory point of view, since they do not heed the topology of the graph. Imposing the condition that each row and column in L must sum to zero, we recover the graph Laplacian. Hence, up to a constant factor and trivial additive components, the graph Laplacian (or the normalized graph Laplacian if we wish to rescale by the number of edges per vertex) is the only "invariant" differential operator for given W (or its normalized counterpart W̃).

Unless stated otherwise, all results below hold for both L and L̃ (albeit with a different spectrum), and we will, in the following, focus on L̃ due to the fact that its spectrum is contained in [0, 2].

3 Regularization

The fact that L induces a semi-norm on f which penalizes the changes between adjacent vertices, as described in (1), indicates that it may serve as a tool to design regularization operators.

3.1 Regularization via the Laplace Operator

We begin with a brief overview of translation invariant regularization operators on continuous spaces and show how they can be interpreted as powers of Δ. This will allow us to repeat the development almost verbatim with L̃ (or L) instead.

Some of the most successful regularization functionals on Rⁿ, leading to kernels such as the Gaussian RBF, can be written as [Smola et al., 1998]

⟨f, Pf⟩ := ∫ |f̃(ω)|² r(‖ω‖²) dω = ⟨f, r(Δ)f⟩.  (5)

Here f ∈ L₂(Rⁿ), f̃(ω) denotes the Fourier transform of f, r(‖ω‖²) is a function penalizing frequency components |f̃(ω)| of f, typically increasing in ‖ω‖², and finally, r(Δ) is the extension of r to operators simply by applying r to the
spectrum of Δ [Dunford and Schwartz, 1958]:

⟨f, r(Δ)f⟩ = Σᵢ ⟨f, ψᵢ⟩ r(λᵢ) ⟨ψᵢ, f⟩,

where {(ψᵢ, λᵢ)} is the eigensystem of Δ. The last equality in (5) holds because applications of Δ become multiplications by ‖ω‖² in Fourier space. Kernels are obtained by solving the self-consistency condition [Smola et al., 1998]

⟨k(x,·), P k(x′,·)⟩ = k(x, x′).  (6)

One can show that k(x, x′) = κ(x − x′), where κ is equal to the inverse Fourier transform of r⁻¹(‖ω‖²). Several r functions have been known to yield good results. The two most popular are given below:

                r(‖ω‖²)             k(x, x′)                   r(Δ)
Gaussian RBF    exp(σ²‖ω‖²/2)       exp(−‖x − x′‖²/(2σ²))      Σᵢ₌₀^∞ (σ²ⁱ/i!) Δⁱ
Laplacian RBF   1 + σ²‖ω‖²          exp(−‖x − x′‖/σ)           1 + σ²Δ

In summary, regularization according to (5) is carried out by penalizing f̃(ω) by a function of the Laplace operator. For many results in regularization theory one requires r(‖ω‖²) → ∞ for ‖ω‖² → ∞.

3.2 Regularization via the Graph Laplacian

In complete analogy to (5), we define a class of regularization functionals on graphs as

⟨f, Pf⟩ := ⟨f, r(L̃)f⟩.  (7)

[Fig. 1. Regularization function r(λ). From left to right: regularized Laplacian (σ² = 1), diffusion process (σ² = 1), one-step random walk (a = 2), 4-step random walk (a = 2), inverse cosine.]

Here r(L̃) is understood as applying the scalar valued function r(λ) to the eigenvalues of L̃, that is,

r(L̃) := Σᵢ₌₁ᵐ r(λᵢ) vᵢvᵢᵀ,  (8)

where {(λᵢ, vᵢ)} constitute the eigensystem of L̃. The normalized graph Laplacian L̃ is preferable to L, since L̃'s spectrum is contained in [0, 2]. The obvious goal is to gain insight into what functions are appropriate choices for r.

- From (1) we infer that vᵢ with large λᵢ correspond to rather uneven functions on the graph G. Consequently, they should be penalized more strongly than vᵢ with small λᵢ. Hence r(λ) should be monotonically increasing in λ.
- Requiring that r(L̃) ⪰ 0 imposes the constraint r(λ) ≥ 0 for all λ ∈ [0, 2].
- Finally, we can limit ourselves to r(λ) expressible as power series, since the latter are dense in the space of C⁰ functions on bounded domains.

In Section 3.5 we will present additional
motivation for the choice of r(λ) in the context of spectral graph theory and segmentation. As we shall see, the following functions are of particular interest:

r(λ) = 1 + σ²λ  (Regularized Laplacian)  (9)
r(λ) = exp(σ²λ/2)  (Diffusion Process)  (10)
r(λ) = (aI − λ)⁻¹ with a ≥ 2  (One-Step Random Walk)  (11)
r(λ) = (aI − λ)⁻ᵖ with a ≥ 2  (p-Step Random Walk)  (12)
r(λ) = (cos λπ/4)⁻¹  (Inverse Cosine)  (13)

Figure 1 shows the regularization behavior for the functions (9)-(13).

3.3 Kernels

The introduction of a regularization matrix P = r(L̃) allows us to define a Hilbert space H on Rᵐ via ⟨f, f⟩_H := ⟨f, Pf⟩. We now show that H is a reproducing kernel Hilbert space.

Theorem 4. Denote by P ∈ R^(m×m) a (positive semidefinite) regularization matrix and denote by H the image of Rᵐ under P. Then H with dot product ⟨f, f′⟩_H := ⟨f, Pf′⟩ is a Reproducing Kernel Hilbert Space and its kernel is k(i, j) = [P⁻¹]_ij, where P⁻¹ denotes the pseudo-inverse if P is not invertible.

Proof. Since P is a positive semidefinite matrix, we clearly have a Hilbert space on PRᵐ. To show the reproducing property we need to prove that

f(i) = ⟨f, k(i,·)⟩_H.  (14)

Note that k(i, j) can take on at most m² different values (since i, j ∈ [1 : m]). In matrix notation (14) means that for all f ∈ H

f(i) = fᵀP K_{i,:} for all i  ⟺  f = fᵀPK.  (15)

The latter holds if K = P⁻¹ and f ∈ PRᵐ, which proves the claim.

In other words, K is the Greens function of P, just as in the continuous case. The notion of Greens functions on graphs was only recently introduced by Chung-Graham and Yau [2000] for L. The above theorem extends this idea to arbitrary regularization operators r̂(L̃).

Corollary 1. Denote by P = r(L̃) a regularization matrix, then the corresponding kernel is given by K = r⁻¹(L̃), where we take the pseudo-inverse wherever necessary. More specifically, if {(vᵢ, λᵢ)} constitute the eigensystem of L̃, we have

K = Σᵢ₌₁ᵐ r⁻¹(λᵢ) vᵢvᵢᵀ  where we define 0⁻¹ ≡ 0.  (16)

3.4 Examples of Kernels

By virtue of Corollary 1 we only need to take (9)-(13) and plug the
definition of r(λ) into (16) to obtain formulae for computing K. This yields the following kernel matrices:

K = (I + σ²L̃)⁻¹  (Regularized Laplacian)  (17)
K = exp(−σ²L̃/2)  (Diffusion Process)  (18)
K = (aI − L̃)ᵖ with a ≥ 2  (p-Step Random Walk)  (19)
K = cos(L̃π/4)  (Inverse Cosine)  (20)

Equation (18) corresponds to the diffusion kernel proposed by Kondor and Lafferty [2002], for which K(x, x′) can be visualized as the quantity of some substance that would accumulate at vertex x′ after a given amount of time if we injected the substance at vertex x and let it diffuse through the graph along the edges. Note that this involves matrix exponentiation defined via the limit K = exp(B) = lim_{n→∞} (I + B/n)ⁿ, as opposed to component-wise exponentiation K_{i,j} = exp(B_{i,j}).

[Fig. 2. The first 8 eigenvectors of the normalized graph Laplacian corresponding to the graph drawn above. Each line attached to a vertex is proportional to the value of the corresponding eigenvector at the vertex. Positive values (red) point up and negative values (blue) point down. Note that the assignment of values becomes less and less uniform with increasing eigenvalue (i.e. from left to right).]

For (17) it is typically more efficient to deal with the inverse of K, as it avoids the costly inversion of the sparse matrix L̃. Such situations arise, e.g., in Gaussian Process estimation, where K is the covariance matrix of a stochastic process [Williams, 1999].

Regarding (19), recall that (aI − L̃)ᵖ = ((a − 1)I + W̃)ᵖ is up to scaling terms equivalent to a p-step random walk on the graph with random restarts (see Section A for details). In this sense it is similar to the diffusion kernel. However, the fact that K involves only a finite number of products of matrices makes it much more attractive for practical purposes. In particular, entries in K_ij can be computed cheaply using the fact that L̃ is a sparse matrix.

[Figure: a nearest neighbor graph.]

Finally, the inverse cosine kernel treats lower complexity functions almost equally, with a significant reduction in
the upper end of the spectrum. Figure 2 shows the leading eigenvectors of the graph drawn above, and Figure 3 provides examples of some of the kernels discussed above.

3.5 Clustering and Spectral Graph Theory

We could also have derived r(L̃) directly from spectral graph theory: the eigenvectors of the graph Laplacian correspond to functions partitioning the graph into clusters, see e.g. [Chung-Graham, 1997, Shi and Malik, 1997] and the references therein. In general, small eigenvalues have associated eigenvectors which vary little between adjacent vertices. Finding the smallest eigenvectors of L̃ can be seen as a real-valued relaxation of the min-cut problem.³

For instance, the smallest eigenvalue of L̃ is 0, and its corresponding eigenvector is D^(1/2)1ₙ with 1ₙ := (1, …, 1) ∈ Rⁿ. The second smallest eigenvalue/eigenvector pair, also often referred to as the Fiedler-vector, can be used to split the graph

³ Only recently, algorithms based on the celebrated semidefinite relaxation of the min-cut problem by Goemans and Williamson [1995] have seen wider use [Torr, 2003] in segmentation and clustering by use of spectral bundle methods.

[Fig. 3. Top: regularized graph Laplacian; middle: diffusion kernel with σ = 5; bottom: 4-step random walk kernel. Each figure displays K_ij for fixed i. The value K_ij at vertex i is denoted by a bold line. Note that only adjacent vertices to i bear significant value.]

into two distinct parts [Weiss, 1999, Shi and Malik, 1997], and further eigenvectors with larger eigenvalues have been used for more finely-grained partitions of the graph. See Figure 2 for an example.

Such a decomposition into functions of increasing complexity has very desirable properties: if we want to perform estimation on the graph, we will wish to bias the estimate towards functions which vary little over large homogeneous portions.⁴ Consequently, we have the following interpretation of ⟨f, f⟩_H. Assume that f = Σᵢ βᵢvᵢ, where {(vᵢ, λᵢ)} is the eigensystem of L̃. Then we can rewrite ⟨f, f⟩_H to
yield

⟨f, r(L̃)f⟩ = ⟨Σᵢ βᵢvᵢ, Σⱼ r(λⱼ) vⱼvⱼᵀ Σₗ βₗvₗ⟩ = Σᵢ βᵢ² r(λᵢ).  (21)

This means that the components of f which vary a lot over coherent clusters in the graph are penalized more strongly, whereas the portions of f which are essentially constant over clusters are preferred. This is exactly what we want.

3.6 Approximate Computation

Often it is not necessary to know all values of the kernel (e.g., if we only observe instances from a subset of all positions on the graph). There it would be wasteful to compute the full matrix r(L)⁻¹ explicitly, since such operations typically scale with O(n³). Furthermore, for large n it is not desirable to compute K via (16), that is, by computing the eigensystem of L̃ and assembling K directly.

⁴ If we cannot assume a connection between the structure of the graph and the values of the function to be estimated on it, the entire concept of designing kernels on graphs obviously becomes meaningless.

Instead, we would like to take advantage of the fact that L̃ is sparse, and consequently any operation L̃α has cost at most linear in the number of nonzero elements of L̃, hence the cost is bounded by O(|E| + n). Moreover, if d is the largest degree of the graph, then computing Lᵖeᵢ costs at most |E| Σᵢ₌₁^(p−1) (min(d + 1, n))ⁱ operations: at each step the number of non-zeros in the rhs increases by at most a factor of d + 1. This means that as long as we can approximate K = r⁻¹(L̃) by a low order polynomial, say ρ(L̃) := Σᵢ₌₀^N βᵢL̃ⁱ, significant savings are possible. Note that we need not necessarily require a uniformly good approximation and put the main emphasis on the approximation for small λ. However, we need to ensure that ρ(L̃) is positive semidefinite.

Diffusion Kernel: The fact that the series r⁻¹(x) = exp(−βx) = Σ_{m=0}^∞ (−β)ᵐxᵐ/m! has alternating signs shows that the approximation error at r⁻¹(x) is bounded by (2β)^(N+1)/(N + 1)!, if we use N terms in the expansion (from Theorem 1 we know that ‖L̃‖ ≤ 2). For instance, for
β = 1, 10 terms are sufficient to obtain an error of the order of 10⁻⁴.

Variational Approximation: In general, if we want to approximate r⁻¹(λ) on [0, 2], we need to solve the L∞([0, 2]) approximation problem

minimize_{β,ε} ε  subject to |Σᵢ₌₀^N βᵢλⁱ − r⁻¹(λ)| ≤ ε  for all λ ∈ [0, 2].  (22)

Clearly, (22) is equivalent to minimizing sup_{L̃} ‖ρ(L̃) − r⁻¹(L̃)‖, since the matrix norm is determined by the largest eigenvalues, and we can find L̃ such that the discrepancy between ρ(λ) and r⁻¹(λ) is attained. Variational problems of this form have been studied in the literature, and their solution may provide much better approximations to r⁻¹(λ) than a truncated power series expansion.

4 Products of Graphs

As we have already pointed out, it is very expensive to compute K for arbitrary r̂ and L̃. For special types of graphs and regularization, however, significant computational savings can be made.

4.1 Factor Graphs

The work of this section is a direct extension of results by Ellis [2002] and Chung-Graham and Yau [2000], who study factor graphs to compute inverses of the graph Laplacian.

Definition 1 (Factor Graphs). Denote by (V, E) and (V′, E′) the vertices V and edges E of two graphs, then the factor graph (V_f, E_f) := (V, E) ⊗ (V′, E′) is defined as the graph where (i, i′) ∈ V_f if i ∈ V and i′ ∈ V′; and ((i, i′), (j, j′)) ∈ E_f if and only if either (i, j) ∈ E and i′ = j′ or (i′, j′) ∈ E′ and i = j.

For instance, the factor graph of two rings is a torus. The nice property of factor graphs is that we can compute the eigenvalues of the Laplacian on products very easily (see e.g., Chung-Graham and Yau [2000]):

Theorem 5 (Eigenvalues of Factor Graphs). The eigenvalues and eigenvectors of the normalized Laplacian for the factor graph between a regular graph of degree d with eigenvalues {λⱼ} and a regular graph of degree d′ with eigenvalues {λ′ₗ} are of the form

λ^fact_{j,l} = (d/(d + d′)) λⱼ + (d′/(d + d′)) λ′ₗ  (23)

and the eigenvectors satisfy e^{j,l}_{(i,i′)} = eʲᵢ e′ˡᵢ′, where eʲ is an eigenvector of L̃ and e′ˡ is an eigenvector of L̃′.

This allows us to
apply Corollary 1to obtain an expansion of K asK =(r (L ))−1=j,l r −1(λjl )e j,l e j,l .(24)While providing an explicit recipe for the computation of K ij without the need to compute the full matrix K ,this still requires O (n 2)operations per entry,which may be more costly than what we want (here n is the number of vertices of the factor graph).Two methods for computing (24)become evident at this point:if r has a special structure,we may exploit this to decompose K into the products and sums of terms depending on one of the two graphs alone and pre-compute these expressions beforehand.Secondly,if one of the two terms in the expansion can be computed for a rather general class of values of r (x ),we can pre-compute this expansion and only carry out the remainder corresponding to (24)explicitly.4.2Product Decomposition of r (x )Central to our reasoning is the observation that for certain r (x ),the term 1r (a +b )can be expressed in terms of a product and sum of terms depending on a and b only.We assume that 1r (a +b )=M m =1ρn (a )˜ρn (b ).(25)In the following we will show that in such situations the kernels on factor graphs can be computed as an analogous combination of products and sums of kernel functions on the terms constituting the ingredients of the factor graph.Before we do so,we briefly check that many r (x )indeed satisfy this property.exp(−β(a +b ))=exp(−βa )exp(−βb )(26)(A −(a +b ))= A 2−a + A 2−b (27)(A −(a +b ))p =p n =0p n A 2−a n A 2−b p −n (28)cos (a +b )π4=cos aπ4cos bπ4−sin aπ4sin bπ4(29)12Alexander Smola and Risi KondorIn a nutshell,we will exploit the fact that for products of graphs the eigenvalues of the joint graph Laplacian can be written as the sum of the eigenvalues of the Laplacians of the constituent graphs.This way we can perform computations on ρn and˜ρn separately without the need to take the other part of the the product of graphs into account.Definek m(i,j):=l ρldλld+de l i e l j and˜k m(i ,j ):=l˜ρldλld+d˜e l i ˜e l j .(30)Then we 
have the following composition theorem:

Theorem 6. Denote by (V, E) and (V′, E′) connected regular graphs of degrees d with m vertices (and d′, m′ respectively) and normalized graph Laplacians L̃, L̃′. Furthermore denote by r(x) a rational function with matrix-valued extension r̂(X). In this case the kernel K corresponding to the regularization operator r̂(L) on the product graph of (V, E) and (V′, E′) is given by

k((i, i′), (j, j′)) = Σ_{m=1}^M kₘ(i, j) k̃ₘ(i′, j′).  (31)

Proof. Plug the expansion of 1/r(a + b) as given by (25) into (24) and collect terms.

From (26) we immediately obtain the corollary (see Kondor and Lafferty [2002]) that for diffusion processes on factor graphs the kernel on the factor graph is given by the product of kernels on the constituents, that is k((i, i′), (j, j′)) = k(i, j) k′(i′, j′).

The kernels kₘ and k̃ₘ can be computed either by using an analytic solution of the underlying factors of the graph, or alternatively they can be computed numerically. If the total number of kernels kₘ is small in comparison to the number of possible coordinates this is still computationally beneficial.

4.3 Composition Theorems

If no expansion as in (31) can be found, we may still be able to compute kernels by extending a reasoning from [Ellis, 2002]. More specifically, the following composition theorem allows us to accelerate the computation in many cases, whenever we can parameterize (r̂(L + αI))⁻¹ in an efficient way. For this purpose we introduce two auxiliary functions

K_α(i, j) := [r̂((d/(d + d′)) L + (αd/(d + d′)) I)]⁻¹ = Σₗ [r((dλₗ + αd)/(d + d′))]⁻¹ eˡ(i) eˡ(j)

G_α(i′, j′) := [(L′ + αI)⁻¹] = Σₗ (1/(λ′ₗ + α)) e′ˡ(i′) e′ˡ(j′).  (32)

In some cases K_α(i, j) may be computed in closed form, thus obviating the need to perform expensive matrix inversion, e.g., in the case where the underlying graph is a chain [Ellis, 2002] and K_α = G_α.

Theorem 7. Under the assumptions of Theorem 6 we have

K((j, j′), (l, l′)) = (1/2πi) ∮_C K_α(j, l) G_{−α}(j′, l′) dα = Σᵥ K_{λ′ᵥ}(j, l) e′ᵛⱼ′ e′ᵛₗ′  (33)

where C ⊂ ℂ is a contour of the complex plane containing the poles of (V′, E′) including 0. For practical purposes, the third term of
(33) is more amenable to computation.

Proof. From (24) we have

K((j, j′), (l, l′)) = Σ_{u,v} [r((dλᵤ + d′λ′ᵥ)/(d + d′))]⁻¹ eᵘⱼ eᵘₗ e′ᵛⱼ′ e′ᵛₗ′  (34)
  = (1/2πi) ∮_C Σᵤ [r((dλᵤ + dα)/(d + d′))]⁻¹ eᵘⱼ eᵘₗ Σᵥ (1/(λ′ᵥ − α)) e′ᵛⱼ′ e′ᵛₗ′ dα

Here the second equality follows from the fact that the contour integral over a pole p yields ∮_C f(α)/(p − α) dα = 2πi f(p), and the claim is verified by checking the definitions of K_α and G_α. The last equality can be seen from (34) by splitting up the summation over u and v.

5 Conclusions

We have shown that the canonical family of kernels on graphs are of the form of power series in the graph Laplacian. Equivalently, such kernels can be characterized by a real valued function of the eigenvalues of the Laplacian. Special cases include diffusion kernels, the regularized Laplacian kernel and p-step random walk kernels. We have developed the regularization theory of learning on graphs using such kernels and explored methods for efficiently computing and approximating the kernel matrix.

Acknowledgments. This work was supported by a grant of the ARC. The authors thank Eleazar Eskin, Patrick Haffner, Andrew Ng, Bob Williamson and S.V.N. Vishwanathan for helpful comments and suggestions.

A Link Analysis

Rather surprisingly, our approach to regularizing functions on graphs bears resemblance to algorithms for scoring web pages such as PageRank [Page et al., 1998], HITS [Kleinberg, 1999], and randomized HITS [Zheng et al., 2001]. More specifically, the random walks on graphs used in all three algorithms and the stationary distributions arising from them are closely connected with the eigensystem of L and L̃ respectively.

We begin with an analysis of PageRank. Given a set of web pages and links between them we construct a directed graph in such a way that pages correspond

Quantum mechanics: English vocabulary

probability density; probability wave; normalizing condition; Schrödinger equation; stationary state; stationary Schrödinger equation

势阱 - potential well
对应原理 - correspondence principle
隧道效应 - tunneling effect
能量量子化 - energy quantization
泡利不相容原理 - Pauli exclusion principle
激光 - laser
自发辐射 - spontaneous radiation
受激辐射 - stimulated radiation
氦氖激光器 - He-Ne laser
红宝石激光器 - ruby laser

Pfund series; Bohr quantization condition; Bohr hydrogen atom; Bohr frequency condition; Bohr radius; energy level

energy quantum; photoelectric effect; photoelectron; photocurrent; cutoff potential difference; red limit; wave-particle dualism

康普顿效应 - Compton effect
康普顿散射 - Compton scattering
康普顿波长 - Compton wavelength
反冲电子 - recoil electron
莱曼系 - Lyman series
帕邢系 - Paschen series
布拉开系 - Brackett series

主量子数 - principal quantum number
角动量量子化 - angular momentum quantization
角量子数 - angular (azimuthal) quantum number
空间量子化 - space quantization
磁量子数 - magnetic quantum number
电子自旋 - electron spin
自旋量子数 - spin quantum number
自旋磁量子数 - spin magnetic quantum number

Stefan-Boltzmann law; Stefan constant; Wien displacement law; Rayleigh-Jeans formula; Planck radiation formula; Planck constant

量子力学笔记(冷轩)

x) 函数表达式及其傅里叶变换 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 33 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.10 以两能级系统为例 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62 3.11 幺正算符 3.12 幺正变换 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
约化密度矩阵 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 刘维尔方程——密度算符的演化 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 统计物理中的多粒子状态 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

Peskin量子场论译文1

Peskin 量子场论: Chapter1湮灭中的对乘积:QED 是关于电子和光子的量子理论,它可能是我们现有的最好的基本物理理论,由Dirac 方程和Maxwell 方程组成,它们主要由相对论不变性决定,这些方程的量子力学解给出了宏观的和微观(比质子小几百倍)的电磁现象细致预测。

Feynman 图提供了一种优美的计算程序,通常通过Feynman 图来写出相应过程的量子力学振幅数学表达式。

考虑在质心系中,多数粒子物理实验涉及散射,QFT 中最一般计算的量就散射截面,反粒子的存在实际上是QFT 的预言,实验中为了测量湮灭概率,将一束电子射向正电子束,散射截面作为可测量量是质心能量,入射与出射夹角的函数,质心系中有,假定射束能量(动能?)远大于电子或者μ子的质能(黑体表示3动量,斜体表示4动量),因为,自旋都是1/2,我们必须具体表示出它们的自旋取向,将自旋量子化轴的方向定义为每个粒子运动的方向,粒子的自旋极化可以平行或反平行于这个轴,实际中电子束或者正电子束通常都是非极化的,μ子探测器一般也不能分辨μ的极化(螺旋度?),因此最后得到的散射截面将对正电子和电子的自旋取向取平均,对μ子的自旋求和,对于任何给定的自旋取向,可以方便地写出微分散射截面,例如对于在立体角d Ω中的μ,.(应用简化公式,因此对于2个有限态的质心微分散射截面是,在4个粒子都具有相同质量的特例下(取极限m->0),有近似()。

),因子为散射截面提供了正确的量纲,因为在自然单位制中,跃迁振幅M 是无量纲的,它是量子力学过程发生的振幅(类似于非相对论量子力学中的散射振幅f ),表达式的另一部分因子是纯粹的约定问题,实际上是一个特例,仅对终态包含2个无质量的粒子的质心系散射是合理的,更一般的定理的形式并不能从量纲分析得到。

一个坏消息是即使对于最简单的QED 过程,M 矩阵的恰当表达式也是未知的,实际上这个事实并不令人惊讶,因为即使在非相对论量子力学,散射问题的恰当解也是很少的,最好是我们能得到M 的正规表达式,它作为电磁作用强度的微扰级数,我们将会估计级数的前几项,Feynman 发明了一种奇妙的方法组织并形象化了微扰级数:Feynman 图,简要地说这些图显示了散射过程中电子和光子的流动,对于特定的计算(?),微扰级数的零头阶可以用单个Feynman 图表示(这个图中的唯一可能中间态是γ光子),Feynman 图由3部分组成:1.外线(代表2个入射粒子和出射粒子)。

半导体物理学 刘恩科 第七版


半导体器件
原子的能级的分裂
原子能级分裂为能带
半导体器件
Si的能带 (价带、导带和带隙〕
半导体器件
半导体的能带结构
导带 Eg
价带
价带:0K条件下被电子填充的能量的能带
导带:0K条件下未被电子填充的能量的能带
带隙:导带底与价带顶之间的能量差
半导体器件
自由电子的运动
微观粒子具有波粒二象性
p m0u
p E 2m0
i ( K r t )
2
p K E hv
(r, t ) Ae
半导体器件
半导体中电子的运动
薛定谔方程及其解的形式
V ( x) V ( x sa) d ( x) V ( x) ( x) E ( x) 2 2m0 dx ikx k ( x ) uk ( x ) e
EC
B
EA
EA EV
P型半导体
受主能级
半导体器件
半导体的掺杂
Ⅲ、Ⅴ族杂质在Si、Ge晶体中分别为受 主和施主杂质,它们在禁带中引入了能 级;受主能级比价带顶高 EA,施主能级 比导带底低 ED,均为浅能级,这两种 杂质称为浅能级杂质。 杂质处于两种状态:中性态和离化态。 当处于离化态时,施主杂质向导带提供 电子成为正电中心;受主杂质向价带提 供空穴成为负电中心。
考试90%
半导体器件
半导体物理学
一.半导体中的电子状态
二.半导体中杂质和缺陷能级
三.半导体中载流子的统计分布
四.半导体的导电性
五.非平衡载流子
六.pn结
七.金属和半导体的接触 八.半导体表面与MIS结构 九.半导体异质结构
半导体器件
半导体概要

质子和原子核部分子分布函数的全局分析

摘要原子核是目前高能物理实验的一个重要研究对象。

在高能核物理实验中,原子核部分子分布函数是模拟计算各种高能反应的重要输入信息,在检验标准模型和探寻新物理的过程中起到关键作用。

本论文的目的就是通过对世界上各合作组的带电轻子-原子核(包括质子)的深度非弹性散射实验数据的QCD理论分析,来获取质子和原子核的部分子分布函数。

我们发布了质子的部分子分布函数数据库IMParton16,以及原子核的部分子分布函数数据库nIMParton16(nuclear IMParton)。

本研究包含三个主要内容。

(1)研究了核子内部部分子分布的起源问题。

我们成功地建立了夸克模型和高Q2下测量到的部分子分布之间的直接联系。

(2)研究了各种核介质效应对核子内部部分子分布的影响,以及各种核介质效应的核依赖关系。

(3)我们应用部分子重组效应修正的DGLAP方程对不同实验组的数据进行了全局χ2分析,并得到了质子和原子核的部分子分布函数。

关于部分子分布的起源问题,我们发展了动力学部分子模型,并把部分∼0.1GeV2。

在该初始标度下,我们实现子分布演化的初始标度降低到了Q2了最自然最简单的仅包含价夸克分布的非微扰输入,并且参数化非微扰输入用到的自由参数个数最少,仅有三个。

我们还发现核子内部还应存在一些超越夸克模型的少量的非微扰海夸克成分。

在核介质效应研究方面,我们计算了核子费米运动引起的原子核中核子结构函数的弥散效应,束缚核子变胖效果给出的EMC效应,以及原子核中部分子重组过程增强导致的核遮蔽效应。

我们首次发现了EMC效应的强弱与核子之间剩余强相互作用能量之间的显著的线性关联。

我们在部分子层次上系统地描述了核遮蔽效应、反遮蔽效应和EMC效应。

鉴于考虑了较为全面的核物理效应,我们全局拟合确定的原子核部分子分布函数更加准确,并且参数化核效应修正因子的核依赖关系和x依赖关系时使用的自由参数最少,仅有两个。

(与其他合作组的全局拟合相比,自由参数的个数几乎小一个数量级)。

  1. 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
  2. 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
  3. 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。

arXiv:hep-lat/9909163v1  29 Sep 1999

Renormalization constants of local operators within the Schrödinger functional scheme

A. Shindler (Dipartimento di Fisica, Università di Roma Tor Vergata)

We define, within the Schrödinger functional (SF) scheme, the matrix elements of the twist-2 operators corresponding to the first two moments of the non-singlet parton density, and the first moment of the singlet parton densities. We perform a lattice one-loop calculation that fixes the relation between the SF scheme and other common schemes and shows the main source of lattice artefacts. A few remarks on the improvement case are added.

1. Introduction

The accurate knowledge of hadron parton densities is an essential ingredient for the experimental test of QCD at accelerator energies. Their normalization is usually obtained from a fit to a set of reference experiments and used for predicting the behaviour of hard hadron processes in different energy regimes. The calculation of the normalization needs non-perturbative methods. These computations, especially for the higher moments [1], can reduce, for example, the uncertainties on the gluon parton densities at values of the Bjorken x larger than 0.5. These calculations are made mainly by two groups (see for example [2], [3]).

It is well known that, in order to match the scheme of the non-perturbative simulation with the scheme in which the experiment is compared with theory, a lattice perturbative computation of the renormalization constants of the operator is necessary, carried out in the same scheme in which the operator is numerically computed. The perturbative [4] and non-perturbative [2] calculations are made in the Schrödinger functional (SF) scheme. The Schrödinger functional has been discussed extensively in the literature (see [5] for reviews). Among the advantages of the method, we only quote the possibility of performing the computations at zero physical quark mass and of using non-local gauge-invariant sources.

The correlation functions of the twist-2 operators between the classical boundary fields are of the form

$$
f_2(x_0;\mathbf{p}) = -a^6 \sum_{\mathbf{y},\mathbf{z}} e^{i\mathbf{p}\cdot(\mathbf{y}-\mathbf{z})}
\left\langle \frac{1}{4}\,\bar\psi(x)\,\gamma_{\{1}\overleftrightarrow{D}_{2\}}\,\frac{\tau^3}{2}\,\psi(x)\;
\bar\zeta(\mathbf{y})\,\Gamma\,\frac{\tau^3}{2}\,\zeta(\mathbf{z}) \right\rangle \tag{1}
$$

with an analogous definition for the second-moment operator. The boundary fields are subject to the projectors (1/2)(1 ± γ_0), and p is the momentum of the classical field sitting on the boundary. The quantities ζ are the response to a variation of the classical Fermi field configurations on the boundaries.

We take the limit of massless quarks, but some care should be taken so as to ensure this limit at order g^2: the breaking of chiral symmetry by the Wilson action entails a non-zero shift of the quark mass from its naive value at order g^2. The matrix element of the operator for the first moment involves two directions, and that for the second moment three. These directions must be provided by external vectors: we have chosen to obtain one of them from the contraction matrix Γ, i.e. from the polarization of the vector classical state, Γ = γ_2, and the remaining ones from the momentum p of the classical Fermi field at the boundary.

To compute the renormalization constants of the operators it is necessary to remove the renormalization constant of the classical boundary sources ζ. Following ref. [6], this is represented by the quantity called f_1. Both f_2 and f_1 are normalized by their tree-level expressions. We define the renormalization constants such that the operator matrix element (here briefly indicated with O) is equal to its tree-level value at μ = 1/L:

$$O_R(\mu) = Z(a\mu)^{-1}\, O_{\mathrm{bare}}(a/L) \tag{2}$$

with Z(a/L) defined by

$$O_{\mathrm{bare}}(a/L) = Z(a/L)\, O_{\mathrm{tree}} \tag{3}$$

At one loop the observable can be parametrized as

$$Z(pL,\, x_0/L,\, a/L) = 1 + g^2 Z^{(1)}(a/L) \tag{4}$$

with

$$Z^{(1)}(a/L) = b_0 + c_0 \ln(a/L) + \sum_{k=1}^{\infty} \frac{b_k + c_k \ln(a/L)}{N^k}\,, \qquad N = L/a \tag{6}$$
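The coefficients b_0 (the finite part) and c_0 (the logarithm multiplying the anomalous dimension) are what the one-loop calculation extracts: Z^(1) is evaluated on a sequence of lattice sizes N = L/a and the truncated series is fitted. A minimal sketch of such a fit, using hypothetical Z^(1)(N) values synthesized from known coefficients (in a real calculation they come from evaluating the lattice Feynman diagrams):

```python
import numpy as np

# Hypothetical coefficients used to synthesize Z^(1)(N); in a real
# computation the Z1 values come from the lattice Feynman diagrams.
b0_true, c0_true, b1_true, c1_true = 0.27, -0.02, 0.5, -0.1

Ns = np.arange(6, 33, 2).astype(float)   # sequence of lattice sizes N = L/a
ln_aL = np.log(1.0 / Ns)                 # ln(a/L) = -ln(N)
Z1 = b0_true + c0_true * ln_aL + (b1_true + c1_true * ln_aL) / Ns

# Fit the truncated ansatz  Z1 = b0 + c0 ln(a/L) + (b1 + c1 ln(a/L))/N
# by linear least squares in the four coefficients.
basis = np.column_stack([np.ones_like(ln_aL), ln_aL, 1.0 / Ns, ln_aL / Ns])
(b0, c0, b1, c1), *_ = np.linalg.lstsq(basis, Z1, rcond=None)

print(f"b0 = {b0:.6f}, c0 = {c0:.6f}")   # reproduces the input coefficients
```

In practice the truncation systematics are estimated by varying the order of the truncation and the window of N values entering the fit.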

Table 1
Values of the constants of the operators renormalized in the two definitions, with the real momentum p or the "finite-size momentum" θ different from zero.

    First moment,  x_0 = L/4 (p = 0):  B_0 = 0.2635(10)
    First moment,  x_0 = L/2 (p = 0):  B_0 = 0.2762(5)
    Second moment, x_0 = L/4 (p = 0):  B_0 = 0.1875(20)
    Second moment, x_0 = L/2 (p = 0):  B_0 = 0.1895(50)

The fit of the expansion coefficients is performed on the truncated series, excluding higher-order terms. Table 1 contains a summary of the constants B_0 for the operators, after removing the external-legs renormalization, in the various cases that we have discussed. The lattice results converge much faster to their continuum limit when a "finite-size" momentum is used, confirming momentum quantization as a major source of lattice artifacts (see ref. [4]).

We made the same computation with the clover action [7]. Also in this case lattice artefacts start at order a for both coefficients, because the O(a) improvement of the action should be complemented with the improvement of the operators and of the boundary counterterms in order to achieve a full cancellation of the effects appearing linearly in a. The Feynman rules for this computation can be easily derived using ref. [8]. The calculation is done to order c_sw^2. It is important to stress that, to extract the finite constant to this order, we must carry out the computation of f_1 and of the mass shift to the same order. The fitting procedure is the same as in the Wilson case. The finite part of the renormalization constant will now be defined as

$$B_O = B_O^{(0)} + B_O^{(1)}\, c_{\mathrm{sw}} + B_O^{(2)}\, c_{\mathrm{sw}}^2 \tag{7}$$

Doing the subtractions of f_1 and of the mass shift we obtain the following results:

$$B_O^{(1)} = -0.0327(6)\,, \qquad \tilde{B}_O^{(1)} = -0.0327012\,,$$
$$B_O^{(2)} = -0.005725(9)\,, \qquad \tilde{B}_O^{(2)} = -0.005726859 \tag{8}$$
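Eq. (7) lets one assemble the constant for any value of the clover coefficient. As a small illustration, the sketch below evaluates it at the tree-level improved value c_sw = 1. Taking B^(0) to be the first-moment Wilson (c_sw = 0) value B_0 = 0.2762(5) from Table 1 is our assumption here, not a statement of the source, and the errors are simply added in quadrature:

```python
import math

# Coefficients of B_O = B0 + B1*csw + B2*csw**2, eq. (7).
# B0 is the Wilson-action (csw = 0) constant; as an illustration we use the
# first-moment value at x0 = L/2 from Table 1 (an assumption, not a
# statement of the source).
B0, dB0 = 0.2762, 0.0005
B1, dB1 = -0.0327, 0.0006
B2, dB2 = -0.005725, 0.000009

csw = 1.0   # tree-level value of the clover coefficient
B = B0 + B1 * csw + B2 * csw**2
dB = math.sqrt(dB0**2 + (dB1 * csw)**2 + (dB2 * csw**2)**2)

print(f"B_O(csw=1) = {B:.6f} +/- {dB:.6f}")
```

Adding the three errors in quadrature treats them as independent, which is only a rough estimate; a proper analysis would propagate the correlations of the fit.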
