

A Study on the Application of Parametric and Non-parametric Methods for Estimating the Area Under the ROC Curve (Song Hualing)


Acad J Sec Mil Med Univ 2006 Jul; 27(7): 726-728    Special Report

Application of parametric and non-parametric methods in estimating the area under the ROC curve

SONG Hua-ling 1, HE Jia 2*, HUANG Pin-xian 1, LI Su-yun 3 (1. Department of Preventive Medicine, Shanghai University of Traditional Chinese Medicine, Shanghai 201203, China; 2. Department of Health Statistics, Faculty of Health Services, Second Military Medical University, Shanghai 200433; 3. Department of Pathology, Shanghai University of Traditional Chinese Medicine, Shanghai 201203)

[ABSTRACT] Objective: To elucidate and compare the parametric and non-parametric methods for estimating the area under the ROC curve, so as to provide a basis for their application in the evaluation of diagnostic tests. Methods: The area under the ROC curve was estimated by a parametric method based on the binormal model and by a non-parametric method based on the Mann-Whitney statistic; both methods were illustrated with data from diagnostic tests for lung cancer. Results: By the non-parametric method, the areas under the ROC curves for the two lung cancer markers Cyfra21-1 and CEA were 0.77 and 0.87, respectively; by the parametric method they were 0.78 and 0.87. This indicates that when the sample size is large, the two methods give approximately equal estimates of the area under the ROC curve. Conclusion: The non-parametric method should be used when the sample size is small; when the sample size is large, either the parametric or the non-parametric method may be chosen according to the actual situation.

[KEY WORDS] area under ROC curve; parametric method; non-parametric method

[CLC number] R195.1    [Document code] A    [Article ID] 0258-879X(2006)07-0726-03

[About the first author] SONG Hua-ling, lecturer, M.S. *Corresponding author. E-mail: hejia@

How well a new diagnostic test performs, and whether it can replace an existing test, depends to a large extent on the accuracy of the new test.
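The non-parametric (Mann-Whitney) estimator described in the abstract can be sketched as follows. This is a minimal illustration with made-up marker values, not the paper's lung-cancer data; the function name is mine.

```python
# Non-parametric (Mann-Whitney) estimate of the area under the ROC curve.
# The AUC equals the probability that a randomly chosen diseased subject
# scores higher than a randomly chosen non-diseased subject, ties counting 1/2.

def auc_mann_whitney(diseased, healthy):
    """AUC from all pairwise comparisons: Mann-Whitney U / (n1 * n2)."""
    wins = 0.0
    for d in diseased:
        for h in healthy:
            if d > h:
                wins += 1.0
            elif d == h:
                wins += 0.5
    return wins / (len(diseased) * len(healthy))

# Toy marker values (hypothetical):
cases = [3.1, 4.5, 2.8, 5.0]
controls = [1.2, 2.9, 0.7, 3.0]
print(auc_mann_whitney(cases, controls))  # 14 of 16 pairs favour cases -> 0.875
```

With large samples this pairwise count agrees closely with the binormal (parametric) estimate, which is the paper's main point.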

Methods in Nonlinear Analysis


Nonlinear analysis is a branch of mathematics that deals with the study of equations involving nonlinear functions. It plays a crucial role in various fields, including physics, engineering, economics, and biology. In this article, we will explore some of the methods used in nonlinear analysis and their applications.

1. Fixed Point Theory

Fixed point theory is a fundamental tool in nonlinear analysis. It provides a powerful framework for studying the existence and uniqueness of solutions to equations. The basic idea behind fixed point theory is to find a point that remains unchanged under the action of a given function.

One of the most famous fixed point theorems is the Banach fixed point theorem, also known as the contraction mapping principle. It states that if a complete metric space is mapped into itself by a contraction mapping, then there exists a unique fixed point.

The Banach fixed point theorem has numerous applications in various areas of mathematics, and its extensions have been developed to handle more general situations. It is widely used in proving the existence of solutions to differential equations, optimization problems, and functional equations.

2. Nonlinear Optimization

Nonlinear optimization is concerned with finding the optimal solution to a problem involving nonlinear functions. It is a powerful tool in many practical applications, such as engineering design, portfolio optimization, and machine learning.

There are various methods used in nonlinear optimization, including gradient-based methods, Newton's method, and evolutionary algorithms. Gradient-based methods, such as steepest descent and Newton's method, use the gradient or Hessian matrix of the objective function to iteratively update the solution.
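The contraction mapping principle lends itself to a short numerical illustration. This is a sketch; the example equation x = cos(x) is my choice, not from the article.

```python
import math

# Banach fixed-point iteration: for a contraction g on a complete metric
# space, the iterates x_{n+1} = g(x_n) converge to the unique fixed point.
# g(x) = cos(x) is a contraction near [0, 1] (|g'| <= sin(1) < 1), so the
# iteration converges to the solution of x = cos(x) (about 0.739085).

def fixed_point(g, x0, tol=1e-12, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

root = fixed_point(math.cos, 0.5)
print(round(root, 6))
```

The geometric error decay guaranteed by the theorem is what makes the stopping criterion on successive iterates reliable here.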
Evolutionary algorithms, such as genetic algorithms and particle swarm optimization, mimic natural evolution to search for the optimal solution.

Nonlinear optimization problems are often challenging due to the presence of multiple local optima. Therefore, finding the global optimum is a major concern in nonlinear optimization. Several strategies, such as initialization techniques and global optimization algorithms, have been developed to overcome this issue.

3. Nonlinear Partial Differential Equations

Nonlinear partial differential equations (PDEs) are mathematical models that describe various physical phenomena, such as fluid flow, heat transfer, and reaction-diffusion processes. They involve nonlinear functions of the unknown variables and their derivatives.

Solving nonlinear PDEs is a challenging task due to their complexity and the lack of analytical solutions. Numerical methods, such as finite difference, finite element, and spectral methods, are commonly used to approximate the solutions. These methods discretize the PDEs into a system of algebraic equations, which can be solved iteratively.

Nonlinear PDEs arise in many areas of science and engineering. For example, the Navier-Stokes equations describe the motion of fluids and are essential in understanding turbulence and predicting weather patterns. Reaction-diffusion equations are used to model chemical reactions and pattern formation in biology.

4. Chaos Theory

Chaos theory is a branch of mathematics that studies the behavior of nonlinear dynamical systems that are highly sensitive to initial conditions. It deals with the concept of deterministic chaos, where small changes in the initial conditions can lead to drastically different outcomes.

Chaotic systems exhibit complex and unpredictable behavior, even though they are governed by deterministic laws. They have applications in various fields, including physics, biology, finance, and cryptography.
For example, chaotic systems are used in secure communication systems to generate random numbers.

The study of chaos theory involves the analysis of nonlinear equations and the use of numerical simulations. Various techniques, such as Lyapunov exponents and bifurcation diagrams, are used to characterize the behavior of chaotic systems.

Conclusion

Methods in nonlinear analysis provide powerful tools for studying equations involving nonlinear functions. Fixed point theory, nonlinear optimization, nonlinear PDEs, and chaos theory are some of the key areas in this field. These methods have wide-ranging applications in science, engineering, and other disciplines. As researchers continue to advance these methods, they will contribute to our understanding of complex phenomena and help solve real-world problems.
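The sensitivity to initial conditions described in the chaos section can be seen in a one-line dynamical system. The logistic map at r = 4 is a standard chaotic example of my choosing, not one mentioned in the article.

```python
# Sensitivity to initial conditions in the logistic map x_{n+1} = r*x*(1-x).
# At r = 4 the map is chaotic: two trajectories starting 1e-10 apart diverge
# to order-one separation after a few dozen iterations (Lyapunov exponent ln 2).

def logistic_trajectory(x0, r=4.0, n=50):
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)
print(abs(a[-1] - b[-1]))  # no longer tiny
```

Plotting abs(a[n] - b[n]) against n on a log scale would show the roughly exponential growth that a Lyapunov exponent quantifies.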

Foreign-language literature: Genetic Algorithms


Appendix I: English Translation

Part 1: The original English text. Source: Nature-Inspired Metaheuristic Algorithms, Chapters 2 and 3, Luniver Press (UK), 2008.

Chapter 2: Genetic Algorithms

2.1 Introduction

The genetic algorithm (GA), developed by John Holland and his collaborators in the 1960s and 1970s, is a model or abstraction of biological evolution based on Charles Darwin's theory of natural selection. Holland was the first to use crossover and recombination, mutation, and selection in the study of adaptive and artificial systems. These genetic operators form the essential part of the genetic algorithm as a problem-solving strategy. Since then, many variants of genetic algorithms have been developed and applied to a wide range of optimization problems, from graph colouring to pattern recognition, from discrete systems (such as the travelling salesman problem) to continuous systems (e.g., the efficient design of airfoils in aerospace engineering), and from financial markets to multiobjective engineering optimization.

There are many advantages of genetic algorithms over traditional optimization algorithms; the two most noticeable are the ability to deal with complex problems and parallelism. Genetic algorithms can deal with various types of optimization, whether the objective (fitness) function is stationary or non-stationary (changes with time), linear or nonlinear, continuous or discontinuous, or subject to random noise. As multiple offspring in a population act like independent agents, the population (or any subgroup) can explore the search space in many directions simultaneously. This feature makes it ideal to parallelize the algorithms for implementation. Different parameters and even different groups of strings can be manipulated at the same time.

However, genetic algorithms also have some disadvantages. The formulation of the fitness function, the choice of population size, the choice of important parameters such as the rates of mutation and crossover, and the selection criterion for the new population should all be carried out carefully.
Any inappropriate choice will make it difficult for the algorithm to converge, or it will simply produce meaningless results.

2.2 Genetic Algorithms

2.2.1 Basic Procedure

The essence of genetic algorithms involves the encoding of an optimization function as arrays of bits or character strings to represent the chromosomes, the manipulation of these strings by genetic operators, and a selection according to fitness, with the aim of finding a solution to the problem concerned. This is often done by the following procedure: 1) encoding of the objectives or optimization functions; 2) defining a fitness function or selection criterion; 3) creating a population of individuals; 4) an evolution cycle, or iteration, consisting of evaluating the fitness of all the individuals in the population, creating a new population by performing crossover, mutation, fitness-proportionate reproduction, etc., replacing the old population, and iterating again with the new population; 5) decoding the results to obtain the solution to the problem. These steps can schematically be represented as the pseudo code of genetic algorithms shown in Fig. 2.1.

One iteration of creating a new population is called a generation. Fixed-length character strings are used in most genetic algorithms during each generation, although there is substantial research on variable-length strings and coding structures. The coding of the objective function is usually in the form of binary arrays or real-valued arrays in adaptive genetic algorithms. For simplicity, we use binary strings for encoding and decoding. The genetic operators include crossover, mutation, and selection from the population.

The crossover of two parent strings is the main operator with a higher probability and is carried out by swapping one segment of one chromosome with the corresponding segment on another chromosome at a random position (see Fig. 2.2). The crossover carried out in this way is a single-point crossover.
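These three genetic operators can be sketched compactly on fixed-length bit strings: single-point crossover as just described, together with bit-flip mutation and fitness-proportionate (roulette-wheel) selection. All names and parameter values here are illustrative, not from the book's own Matlab code.

```python
import random

# Minimal sketch of the basic genetic operators on fixed-length bit strings.

def crossover(a, b):
    point = random.randint(1, len(a) - 1)        # single random cut point
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(bits, pm=0.05):
    # Flip each bit independently with small probability pm.
    return [bit ^ 1 if random.random() < pm else bit for bit in bits]

def select(population, fitness):
    # Roulette wheel: probability of selection proportional to fitness.
    total = sum(fitness)
    r = random.uniform(0.0, total)
    acc = 0.0
    for individual, f in zip(population, fitness):
        acc += f
        if acc >= r:
            return individual
    return population[-1]

random.seed(0)
a, b = [0] * 8, [1] * 8
c, d = crossover(a, b)
print(c, d)  # complementary children sharing one cut point
```

Crossing an all-zero with an all-one parent makes the single cut point directly visible in the two children.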
Crossover at multiple points is also used in many genetic algorithms to increase the efficiency of the algorithms. The mutation operation is achieved by flipping randomly selected bits (see Fig. 2.3), and the mutation probability is usually small. The selection of an individual in a population is carried out by the evaluation of its fitness: an individual can remain in the new generation if a certain threshold of fitness is reached, or the reproduction of a population can be fitness-proportionate. That is to say, individuals with higher fitness are more likely to reproduce.

2.2.2 Choice of Parameters

An important issue is the formulation or choice of an appropriate fitness function that determines the selection criterion for a particular problem. For the minimization of a function using genetic algorithms, one simple way of constructing a fitness function is to use the simplest form F = A - y, with A being a large constant (though A = 0 will do) and y = f(x); the objective is then to maximize the fitness function and thereby minimize the objective function f(x). However, there are many different ways of defining a fitness function. For example, we can use an individual's fitness relative to the whole population,

    F_i = f(x_i) / sum_{j=1}^{N} f(x_j),

where x_i is the phenotypic value of individual i and N is the population size. An appropriate form of the fitness function will make sure that solutions with higher fitness are selected efficiently; a poor fitness function may result in incorrect or meaningless solutions.

Another important issue is the choice of various parameters. The crossover probability is usually very high, typically in the range 0.7-1.0. On the other hand, the mutation probability is usually small (typically 0.001-0.05). If the crossover probability is too small, then crossover occurs sparsely, which is not efficient for evolution.
If the mutation probability is too high, the solutions could still 'jump around' even when the optimal solution is near. The selection criterion is also important: the current population should be selected so that the best individuals with higher fitness are preserved and passed on to the next generation. This is often carried out in association with some form of elitism, the basic version of which is to select the fittest individual in each generation and carry it over to the new generation without modification by the genetic operators. This ensures that a best solution is reached more quickly.

Other issues include the use of multiple sites for mutation and the choice of population size. Mutation at a single site is not very efficient, so mutation at multiple sites will increase the evolution efficiency. However, too many mutants will make it difficult for the system to converge, or even make the system go astray toward wrong solutions. In reality, if the mutation rate is too high under high selection pressure, the whole population might go extinct.

In addition, the choice of the right population size is also very important. If the population size is too small, there is not enough evolution going on and there is a risk of the whole population going extinct; ecological theory suggests that in the real world a species with a small population faces a real danger of extinction. Even if the system carries on, there is still a danger of premature convergence: in a small population, if a significantly fitter individual appears too early, it may reproduce enough offspring to overwhelm the whole (small) population. This will eventually drive the system to a local optimum (not the global optimum).
On the other hand, if the population is too large, more evaluations of the objective function are needed, which will require extensive computing time. Furthermore, more complex and adaptive genetic algorithms are under active research, and the literature on these topics is vast.

2.3 Implementation

Using the basic procedure described in the above section, we can implement the genetic algorithms in any programming language. For simplicity of demonstrating how it works, we have implemented a function optimization using a simple GA in both Matlab and Octave.

Consider the generalized De Jong test function f(x) = sum_{i=1}^{n} x_i^{2k} for |x_i| <= r, where k is a positive integer and r > 0 is the half-length of the domain. This function has a minimum of f_min = 0 at x = 0. For r = 100 and n = 5, with a population of 40 16-bit strings, the variations of the objective function during a typical run are shown in Fig. 2.4. Any two runs will give slightly different results due to the stochastic nature of genetic algorithms, but better estimates are obtained as the number of generations increases.

The well-known Easom function, f(x) = -cos(x) exp(-(x - pi)^2), has a global maximum f_max = 1 at x = pi (see Fig. 2.5). We can use the following Matlab/Octave code to find its global maximum. In our implementation, we have used fixed-length 16-bit strings. The probabilities of crossover and mutation are 0.95 and 0.05, respectively. As this is a maximization problem, we can use the simplest fitness function F = f(x). The outputs from a typical run are shown in Fig.
2.6, where the top figure shows the variations of the best estimates as they approach x = pi, while the lower figure shows the variations of the fitness function.

% Genetic Algorithm (Simple Demo) Matlab/Octave Program
% Written by X S Yang (Cambridge University)
% Usage: gasimple or gasimple('x*exp(-x)');
function [bestsol, bestfun, count] = gasimple(funstr)
global solnew sol pop popnew fitness fitold f range;
if nargin < 1,
    % Easom Function with fmax=1 at x=pi
    funstr = '-cos(x)*exp(-(x-3.1415926)^2)';
end
range = [-10 10];   % Range/Domain
% Converting to an inline function
f = vectorize(inline(funstr));
% Generating the initial population
rand('state', 0);   % Reset the random generator
popsize = 20;       % Population size
MaxGen = 100;       % Max number of generations
count = 0;          % counter
nsite = 2;          % number of mutation sites
pc = 0.95;          % Crossover probability
pm = 0.05;          % Mutation probability
nsbit = 16;         % String length (bits)
% Generating initial population
popnew = init_gen(popsize, nsbit);
fitness = zeros(1, popsize);  % fitness array
% Display the shape of the function
x = range(1):0.1:range(2); plot(x, f(x));
% Initialize solution <- initial population
for i = 1:popsize,
    solnew(i) = bintodec(popnew(i,:));
end
% Start the evolution loop
for i = 1:MaxGen,
    % Record as the history
    fitold = fitness; pop = popnew; sol = solnew;
    for j = 1:popsize,
        % Crossover pair
        ii = floor(popsize*rand) + 1; jj = floor(popsize*rand) + 1;
        % Crossover
        if pc > rand,
            [popnew(ii,:), popnew(jj,:)] = ...
                crossover(pop(ii,:), pop(jj,:));
            % Evaluate the new pairs
            count = count + 2;
            evolve(ii); evolve(jj);
        end
        % Mutation at n sites
        if pm > rand,
            kk = floor(popsize*rand) + 1; count = count + 1;
            popnew(kk,:) = mutate(pop(kk,:), nsite);
            evolve(kk);
        end
    end % end for j
    % Record the current best
    bestfun(i) = max(fitness);
    bestsol(i) = mean(sol(bestfun(i) == fitness));
end
% Display results
subplot(2,1,1); plot(bestsol); title('Best estimates');
subplot(2,1,2); plot(bestfun); title('Fitness');

% ------------- All sub functions ----------
% generation of initial population
function pop = init_gen(np, nsbit)
% String length = nsbit+1 with pop(:,1) for the sign
pop = rand(np, nsbit+1) > 0.5;

% Evolving the new generation
function evolve(j)
global solnew popnew fitness fitold pop sol f;
solnew(j) = bintodec(popnew(j,:));
fitness(j) = f(solnew(j));
if fitness(j) > fitold(j),
    pop(j,:) = popnew(j,:);
    sol(j) = solnew(j);
end

% Convert a binary string into a decimal number
function [dec] = bintodec(bin)
global range;
% Length of the string without sign
nn = length(bin) - 1;
num = bin(2:end);   % get the binary
% Sign = +1 if bin(1)=0; Sign = -1 if bin(1)=1.
Sign = 1 - 2*bin(1);
dec = 0;
% decimal place of the floating point in the binary
dp = floor(log2(max(abs(range))));
for i = 1:nn,
    dec = dec + num(i)*2^(dp-i);
end
dec = dec*Sign;

% Crossover operator
function [c, d] = crossover(a, b)
nn = length(a) - 1;
% generating a random crossover point
cpoint = floor(nn*rand) + 1;
c = [a(1:cpoint) b(cpoint+1:end)];
d = [b(1:cpoint) a(cpoint+1:end)];

% Mutation operator
function anew = mutate(a, nsite)
nn = length(a); anew = a;
for i = 1:nsite,
    j = floor(rand*nn) + 1;
    anew(j) = mod(a(j)+1, 2);
end

The above Matlab program can easily be extended to higher dimensions. In fact, there is no need to do any programming (if you prefer), because there are many software packages (either freeware or commercial) for genetic algorithms; for example, Matlab itself has an optimization toolbox. Biology-inspired algorithms have many advantages over traditional optimization methods such as steepest descent, hill climbing, and calculus-based techniques, due to their parallelism and their ability to locate very good approximate solutions in extremely large search spaces. Furthermore, more powerful new-generation algorithms can be formulated by combining existing and new evolutionary algorithms with classical optimization methods.

Chapter 3: Ant Algorithms

From the discussion of genetic algorithms, we know that we can improve the search efficiency by using randomness, which also increases the diversity of the solutions and helps avoid being trapped in local optima. The selection of the best individuals is also equivalent to the use of memory.
In fact, there are other forms of selection, such as the use of a chemical messenger (pheromone), which is common among ants, honey bees, and many other insects. In this chapter, we will discuss the nature-inspired ant colony optimization (ACO), which is a metaheuristic method.

3.1 Behaviour of Ants

Ants are social insects that live together in organized colonies whose population size can range from about 2 to 25 million. When foraging, a swarm of ants or mobile agents interact or communicate in their local environment. Each ant can lay scent chemicals or pheromone so as to communicate with others, and each ant is also able to follow a route marked with pheromone laid by other ants. When ants find a food source, they will mark it with pheromone and also mark the trails to and from it. From the initially random foraging routes, the pheromone concentration varies, and the ants follow routes with higher pheromone concentration, while the pheromone is enhanced by the increasing number of ants. As more and more ants follow the same route, it becomes the favoured path. Thus, some favourite routes (often the shortest or most efficient) emerge. This is a positive feedback mechanism.

Emergent behaviour exists in an ant colony, and such emergence arises from simple interactions among individual ants. Individual ants act according to simple, local information (such as pheromone concentration) to carry out their activities. Although there is no master ant overseeing the entire colony and broadcasting instructions to the individual ants, organized behaviour still emerges automatically. Such emergent behaviour is therefore similar to other self-organized phenomena that occur in many natural processes, such as pattern formation in animal skins (tiger and zebra skins).

The foraging pattern of some ant species (such as army ants) can show extraordinary regularity. Army ants search for food along regular routes separated by an angle of about 123 degrees.
We do not know how they manage to follow such regularity, but studies show that they move into an area, build a bivouac, and start foraging. On the first day, they forage in a random direction, say the north, travel a few hundred metres, and then branch out to cover a large area. The next day, they choose a different direction, about 123 degrees from the previous day's direction, and again cover a large area. On the following day, they again choose a direction about 123 degrees from the second day's direction. In this way, they cover the whole area over about two weeks, and then they move to a different location to build a bivouac and forage again.

The interesting thing is that they do not use an angle of 120 degrees (that would mean that on the fourth day they would search the empty area already foraged on the first day). The beauty of this angle is that it leaves an offset of about 10 degrees from the direction on the first day, so they cover the whole circle in 14 days without repeating (or covering a previously foraged area). This is an amazing phenomenon.

3.2 Ant Colony Optimization

Based on these characteristics of ant behaviour, scientists have developed a number of powerful ant colony algorithms, with important progress made in recent years. Marco Dorigo pioneered the research in this area in 1992. In fact, by using only some of the features of ant behaviour and adding some new characteristics, we can devise a class of new algorithms.

The basic steps of ant colony optimization (ACO) can be summarized as the pseudo code shown in Fig. 3.1. Two important issues here are the probability of choosing a route and the evaporation rate of pheromone. There are a few ways of solving these problems, although it is still an area of active research. Here we introduce the current best method.
For a network routing problem, the probability that an ant at node i chooses the route from node i to node j is given by

    p_ij = (phi_ij^alpha * d_ij^beta) / sum_{i,j} (phi_ij^alpha * d_ij^beta),

where alpha > 0 and beta > 0 are influence parameters (typical values are alpha ~ beta ~ 2), phi_ij is the pheromone concentration on the route between i and j, and d_ij is the desirability of the same route. Some knowledge about the route, such as its distance s_ij, is often used so that d_ij is proportional to 1/s_ij, which implies that shorter routes will be selected because of their shorter travelling time, and thus the pheromone concentrations on these routes become higher.

This probability formula reflects the fact that ants would normally follow the paths with higher pheromone concentrations. In the simpler case alpha = beta = 1, the probability of choosing a path is simply proportional to the pheromone concentration on the path. The denominator normalizes the probability so that it lies between 0 and 1.

The pheromone concentration can change with time due to evaporation. Moreover, the advantage of pheromone evaporation is that it helps the system avoid being trapped in local optima: if there were no evaporation, the path randomly chosen by the first ants would become the preferred path, owing to the attraction of other ants by its pheromone. For a constant rate gamma of pheromone decay or evaporation, the pheromone concentration usually varies with time exponentially,

    phi(t) = phi_0 * exp(-gamma * t),

where phi_0 is the initial concentration and t is time. If gamma*t << 1, then phi(t) ~ (1 - gamma*t) * phi_0. For a unit time increment, the evaporation can be approximated by phi(t+1) ~ (1 - gamma) * phi(t). Therefore, we have the simplified pheromone update formula

    phi_ij(t+1) = (1 - gamma) * phi_ij(t) + delta_phi_ij(t),

where gamma in (0, 1] is the rate of pheromone evaporation. The increment delta_phi_ij(t) is the amount of pheromone deposited at time t along the route from i to j when an ant travels a distance L; usually delta_phi_ij(t) is proportional to 1/L. If there are no ants on a route, the pheromone deposit is zero.

There are other variations on these basic procedures.
A possible acceleration scheme is to use bounds on the pheromone concentration, and to allow only the ants with the current global best solution(s) to deposit pheromone. In addition, a ranking of solution fitness can also be used. These are hot topics of current research.

3.3 Double Bridge Problem

A standard test problem for ant colony optimization is the simplest double bridge problem with two branches (see Fig. 3.2), where route (2) is shorter than route (1). The angles of the two routes are equal at both point A and point B, so that at the initial stage the ants have an equal (50-50) chance of choosing each route randomly at point A.

Initially, fifty percent of the ants go along the longer route (1). The pheromone evaporates at a constant rate on both routes, but the pheromone concentration on route (1) becomes smaller because it is longer and thus takes more time to travel through; conversely, the pheromone concentration on the shorter route increases steadily. After some iterations, almost all the ants move along the shorter route. Figure 3.3 shows the initial snapshot of 10 ants (5 on each route initially) and the snapshot after 5 iterations (equivalent to 50 ants having moved along this section). Well, there are 11 ants in the snapshot: one has not yet decided which route to follow, as it has just arrived at the entrance. Almost all the ants (about 90% in this case) move along the shorter route.

Here we use only two routes at the node, but it is straightforward to extend the method to multiple routes at a node; it is expected that only the shortest route will ultimately be chosen. As any complex network system is always made of individual nodes, this algorithm can be extended to solve complex routing problems reasonably efficiently.
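The double-bridge experiment can be reproduced in a few lines. This is a toy simulation under assumed parameters (pheromone-proportional route choice, deposit 1/L per ant, evaporation rate 0.5); it is not the book's actual experiment or code.

```python
import random

# Toy double-bridge simulation: two routes of length 1 (short) and 2 (long).
# Each ant picks a route with probability proportional to its pheromone,
# deposits 1/L on the route taken; pheromone evaporates at rate gamma.

def double_bridge(n_ants=10, n_iters=100, gamma=0.5, seed=1):
    random.seed(seed)
    pher = [1.0, 1.0]          # index 0 = short route, 1 = long route
    lengths = [1.0, 2.0]
    for _ in range(n_iters):
        deposits = [0.0, 0.0]
        for _ in range(n_ants):
            p_short = pher[0] / (pher[0] + pher[1])
            k = 0 if random.random() < p_short else 1
            deposits[k] += 1.0 / lengths[k]
        pher = [(1 - gamma) * p + d for p, d in zip(pher, deposits)]
    return pher

pher = double_bridge()
print(pher)  # short route ends with far more pheromone
```

The positive feedback is visible in the update: the short route earns twice the deposit per ant, so its choice probability, and hence its pheromone, grows until nearly all ants take it.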
In fact, ant colony algorithms have been successfully applied to the Internet routing problem, the travelling salesman problem, combinatorial optimization problems, and other NP-hard problems.

3.4 Virtual Ant Algorithm

Since ant colony optimization has successfully solved NP-hard problems such as the travelling salesman problem, it can also be extended to the standard optimization of multimodal functions. The only problem is to figure out how the ants should move on an n-dimensional hyper-surface. For simplicity, we will discuss the 2-D case, which can easily be extended to higher dimensions. On a 2-D landscape, ants can move in any direction, but this causes a problem: how do we update the pheromone at a particular point when there are infinitely many points? One solution is to track the history of each ant's moves and record the locations consecutively; the other approach is to use a moving neighbourhood or window, in which the ants 'smell' the pheromone concentration of their neighbourhood at any particular location.

In addition, we can limit the number of directions the ants can move in by quantizing the directions. For example, ants may be allowed to move only left, right, up, and down (only 4 directions). We will use this quantized approach here, which makes the implementation much simpler. Furthermore, the objective function or landscape can be encoded into virtual food, so that ants move to the locations where the best food sources are. This makes the search process even simpler. This simplified algorithm is called the Virtual Ant Algorithm (VAA), developed by Xin-She Yang and his colleagues in 2006, which has been successfully applied to topological optimization problems in engineering.

The following Keane function with multiple peaks is a standard test function:

    f(x, y) = sin^2(x - y) * sin^2(x + y) / sqrt(x^2 + y^2).

Without any constraint, this function is symmetric and has two highest peaks, at (0, 1.39325) and (1.39325, 0).
To make the problem harder, it is usually optimized under two constraints, typically x*y >= 3/4 and x + y <= 15. This makes the optimization difficult, because the function is now nearly symmetric about x = y and the peaks occur in pairs, one higher than the other; in addition, the true maximum lies on a constraint boundary.

Figure 3.4 shows the surface variations of the multi-peaked function. If we use 50 roaming ants and let them move around for 25 iterations, the pheromone concentrations (equivalent to the paths of the ants) are as displayed in Fig. 3.4. We can see that the highest pheromone concentration within the constraint boundary corresponds to the optimal solution.

It is worth pointing out that ant colony algorithms are the right tool for combinatorial and discrete optimization, and they have advantages over other stochastic algorithms such as genetic algorithms and simulated annealing in dealing with dynamic network routing problems. For continuous decision variables, their performance is still under active research. For the present example, it took about 1500 evaluations of the objective function to find the global optimum. This is not as efficient as other metaheuristic methods, especially compared with particle swarm optimization, partly because the handling of the pheromone takes time. Is it possible to eliminate the pheromone and just use the roaming ants? The answer is yes: particle swarm optimization is just the right kind of algorithm for such further modifications, which will be discussed later in detail.
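As a quick sanity check on the Keane function, the stated peak locations can be verified numerically. The function definition below is my reconstruction of the garbled source formula, inferred from the stated axis peaks at y = 1.39325 (which satisfy tan(y) = 4y); treat it as an assumption.

```python
import math

# Keane test function (reconstructed): f(x, y) = sin^2(x-y) sin^2(x+y) / sqrt(x^2+y^2).
# On the y-axis it reduces to sin^4(y)/y, whose maximum satisfies tan(y) = 4y,
# i.e. y is approximately 1.39325, matching the peak locations quoted in the text.

def keane(x, y):
    return (math.sin(x - y) ** 2) * (math.sin(x + y) ** 2) / math.sqrt(x * x + y * y)

peak = keane(0.0, 1.39325)
print(round(peak, 4))  # unconstrained peak height, about 0.67
```

The symmetry of the formula in x and y also reproduces the paired peaks the text describes.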

Documentation for the r2glmm 0.1.2 package


Package 'r2glmm' — October 14, 2022

Type: Package
Title: Computes R Squared for Mixed (Multilevel) Models
Date: 2017-08-04
Version: 0.1.2
Description: The model R squared and semi-partial R squared for the linear and generalized linear mixed model (LMM and GLMM) are computed with confidence limits. The R squared measure from Edwards et al. (2008) <DOI:10.1002/sim.3429> is extended to the GLMM using penalized quasi-likelihood (PQL) estimation (see Jaeger et al. 2016 <DOI:10.1080/02664763.2016.1193725>). Three methods of computation are provided, described as follows. First, the Kenward-Roger approach. Due to some inconsistency between the 'pbkrtest' package and the 'glmmPQL' function, the Kenward-Roger approach in the 'r2glmm' package is limited to the LMM. Second, the method introduced by Nakagawa and Schielzeth (2013) <DOI:10.1111/j.2041-210x.2012.00261.x> and later extended by Johnson (2014) <DOI:10.1111/2041-210X.12225>. The 'r2glmm' package only computes marginal R squared for the LMM and does not generalize the statistic to the GLMM; however, confidence limits and semi-partial R squared for fixed effects are useful. Lastly, an approach using standardized generalized variance (SGV) can be used for covariance model selection. Package installation instructions can be found in the readme file.
Imports: mgcv, lmerTest, Matrix, pbkrtest, ggplot2, afex, stats, MASS, gridExtra, grid, data.table, dplyr
Suggests: lme4, nlme, testthat
License: GPL-2
LazyData: TRUE
RoxygenNote: 6.0.1
URL: https://github.com/bcjaeger/r2glmm
BugReports: https://github.com/bcjaeger/r2glmm/issues
NeedsCompilation: no
Author: Byron Jaeger [aut, cre]
Maintainer: Byron Jaeger <**********************>
Repository: CRAN
Date/Publication: 2017-08-05 10:26:17 UTC

R topics documented: calc_sgv, cmp_R2, glmPQL, pSym, make.partial.C, plot.R2, pqlmer, print.R2, r2beta, r2dt

calc_sgv — Compute the standardized generalized variance (SGV) of a blocked diagonal matrix.

Usage:
    calc_sgv(nblocks = NULL, blksizes = NULL, vmat)

Arguments:
    nblocks   Number of blocks in the matrix.
    blksizes  Vector of block sizes.
    vmat      The blocked covariance matrix.

Value: The SGV of the covariance matrix vmat.

Examples:
    library(Matrix)
    v1 = matrix(c(1, 0.5, 0.5, 1), nrow = 2)
    v2 = matrix(c(1, 0.2, 0.1, 0.2, 1, 0.3, 0.1, 0.3, 1), nrow = 3)
    v3 = matrix(c(1, 0.1, 0.1, 0.1, 1, 0.2, 0.1, 0.2, 1), nrow = 3)
    calc_sgv(nblocks = 3, blksizes = c(2, 3, 3), vmat = Matrix::bdiag(v1, v2, v3))

cmp_R2 — Compute R2 with a specified C matrix.

Usage:
    cmp_R2(c, x, SigHat, beta, method, obsperclust = NULL, nclusts = NULL)

Arguments:
    c            Contrast matrix for fixed effects.
    x            Fixed effects design matrix.
    SigHat       Estimated model covariance (matrix or scalar).
    beta         Fixed effects estimates.
    method       The method for computing r2beta.
    obsperclust  Number of observations per cluster (i.e. subject).
    nclusts      Number of clusters (i.e. subjects).

Value: A vector with the Wald statistic (ncp), approximate Wald F statistic (F), numerator degrees of freedom (v1), denominator degrees of freedom (v2), and the specified r squared value (Rsq).

Examples:
    library(nlme)
    library(lme4)
    library(mgcv)
    lmemod = lme(distance ~ age*Sex, random = ~1|Subject, data = Orthodont)
    X = model.matrix(lmemod, data = Orthodont)
    SigHat = extract.lme.cov(lmemod, data = Orthodont)
    beta = fixef(lmemod)
    p = length(beta)
    obsperclust = as.numeric(table(lmemod$data[, "Subject"]))
    nclusts = length(obsperclust)
    C = cbind(rep(0, p - 1), diag(p - 1))
    partial.c = make.partial.C(p - 1, p, 2)
    cmp_R2(c = C, x = X, SigHat = SigHat, beta = beta,
           obsperclust = obsperclust, nclusts = nclusts, method = "sgv")
    cmp_R2(c = partial.c, x = X, SigHat = SigHat, beta = beta,
           obsperclust = obsperclust, nclusts = nclusts, method = "sgv")

glmPQL — Compute PQL estimates for fixed effects from a generalized linear model.

Usage:
    glmPQL(glm.mod, niter = 20, data = NULL)

Arguments:
    glm.mod  A generalized linear model fitted with the glm function.
    niter    Maximum number of iterations allowed in the PQL algorithm.
    data     The data used by the fitted model. This
argument is required for models with special expressions in their formula,such as offset,log,cbind(sucesses,trials),etc.ValueA glmPQL object(i.e.a linear model using pseudo outcomes).Examples#Load the datasets package for example codelibrary(datasets)library(dplyr)#We ll model the number of world changing discoveries per year for the#last100years as a poisson outcome.First,we set up the datadat=data.frame(discoveries)%>%mutate(year=1:length(discoveries))#Fit the GLM with a poisson link functionmod<-glm(discoveries~year+I(year^2),family= poisson ,data=dat)#Find PQL estimates using the original GLMmod.pql=glmPQL(mod)#Note that the PQL model yields a higher R Squared statistic#than the fit of a strictly linear model.This is attributed#to correctly modelling the distribution of outcomes and then#linearizing the model to measure goodness of fit,rather than#simply fitting a linear modelsummary(mod.pql)pSym5summary(linfit<-lm(discoveries~year+I(year^2),data=dat))r2beta(mod.pql)r2beta(linfit)pSym Checks if a matrix is Compound Symmetric.DescriptionChecks if a matrix is Compound Symmetric.UsagepSym(mat,tol=1e-05)Argumentsmat The matrix to be tested.tol a number indicating the smallest acceptable difference between off diagonal val-ues.ValueTrue if the matrix is compound symmetric.Examplesgcmat<-matrix(c(1,0.2,0.1,0.2,1,0.3,0.1,0.3,1),nrow=3)csmat<-matrix(c(1,0.2,0.2,0.2,1,0.2,0.2,0.2,1),nrow=3)pSym(csmat)make.partial.C Generate partial contrast matricesDescriptionGenerate partial contrast matricesUsagemake.partial.C(rows,cols,index)6plot.R2Argumentsrows Number of rows in the contrast matrixcols Number of columns in the contrast matrixindex A number corresponding to the position of thefixed effect in the vector offixed effect parameter estimates.ValueA contrast matrix designed to test thefixed effect corresponding to index in the vector offixedeffects.Examplesmake.partial.C(4,5,2)make.partial.C(4,5,3)make.partial.C(4,5,2:4)plot.R2Visualize standardized effect sizes and model 
R squaredDescriptionVisualize standardized effect sizes and model R squaredUsage##S3method for class R2plot(x,y=NULL,txtsize=10,maxcov=3,r2labs=NULL,r2mthd="sgv",cor=TRUE,...)Argumentsx An R2object from the r2beta function.y An R2object from the r2beta function.txtsize The text size of the axis labels.maxcov Maximum number of covariates to include in the semi-partial plots.r2labs a character vector containing labels for the models.The labels are printed as subscripts on a covariance model matrix.r2mthd The method used to compute R2cor An argument to be passed to the r2dt function.Only relevant if comparing two R2objects....Arguments to be passed to plotpqlmer7ValueA visual representation of the model and semi-partial R squared from the r2object provided.Exampleslibrary(nlme)library(r2glmm)data(Orthodont)#Linear mixed modellmemod=lme(distance~age*Sex,random=~1|Subject,data=Orthodont)r2=r2beta(model=lmemod,partial=TRUE,method= sgv )plot(x=r2)pqlmer pqlmerDescriptionFit a GLMM model with multivariate normal random effects using Penalized Quasi-Likelihood for mermod objects.Usagepqlmer(formula,family,data,niter=40,verbose=T)Argumentsformula The lme4model formula.family a family function of the error distribution and link function to be used in the model.data the dataframe containing the variables in the model.niter Maximum number of iterations to perform.verbose if TRUE,iterations are printed to console.ValueA pseudo linear mixed model of class"lme".See AlsoglmmPQLExamples#Compare lmer PQL with lme PQLlibrary(MASS)lmePQL=glmmPQL(y~trt+week+I(week>2),random=~1|ID,family=binomial,data=bacteria,verbose=FALSE)merPQL=pqlmer(y~trt+week+I(week>2)+(1|ID),family=binomial,data=bacteria,verbose=FALSE)summary(lmePQL)summary(merPQL)print.R2Print the contents of an R2objectDescriptionPrint the contents of an R2objectUsage##S3method for class R2print(x,...)Argumentsx an object of class R2...other arguments passed to the print function.r2beta r2beta Compute R Squared for Mixed 
ModelsDescriptionComputes coefficient of determination(R squared)from edwards et al.,2008and the generalized R squared from Jaeger et al.,2016.Currently implemented for linear mixed models with lmer and lme objects.For generalized linear mixed models,only glmmPQL are supported.Usager2beta(model,partial=TRUE,method="sgv",data=NULL)Argumentsmodel afitted mermod,lme,or glmmPQL model.partial if TRUE,semi-partial R squared are calculated for eachfixed effect in the mixed model.method Specifies the method of computation for R squared beta:if method=’sgv’then the standardized generalized variance approach is applied.This method is rec-ommended for covariance model selection.if method=’kr’,then the KenwardRoger approach is applied.This option is only available for lme models.ifmethod=’nsj’,then the Nakagawa and Schielzeth approach is applied.This op-tion is available for lmer and lme objects.if method=’lm’,the classical Rsquared from the linear model is computed.This method should only be usedon glm and lm object.data The data used by thefitted model.This argument is required for models with special expressions in their formula,such as offset,log,cbind(sucesses,trials),etc.ValueA dataframe containing the model F statistic,numerator and denominator degrees of freedom,non-centrality parameter,and R squared statistic with95If partial=TRUE,then the dataframe also contains partial R squared statistics for allfixed effects in the model.ReferencesEdwards,Lloyd J.,et al."An R2statistic forfixed effects in the linear mixed model."Statistics in medicine27.29(2008):6137-6157.Nakagawa,Shinichi,and Holger Schielzeth."A general and simple method for obtaining R2from generalized linear mixed effects models."Methods in Ecology and Evolution4.2(2013):133-142.Jaeger,Byron C.,et al.,"An R Squared Statistic for Fixed Effects in the Generalized Linear Mixed Model."Journal of Applied Statistics(2016).Exampleslibrary(nlme)library(lme4)data(Orthodont)#Linear mixed 
modelsmermod=lmer(distance~age*Sex+(1|Subject),data=Orthodont)lmemod=lme(distance~age*Sex,random=~1|Subject,data=Orthodont)#The Kenward-Roger approachr2beta(mermod,method= kr )#Standardized Generalized Variancer2beta(mermod,method= sgv )r2beta(lmemod,method= sgv )10r2dt#The marginal R squared by Nakagawa and Schielzeth(extended by Johnson)r2beta(mermod,method= nsj )#linear and generalized linear modelslibrary(datasets)dis=data.frame(discoveries)dis$year=1:nrow(dis)lmod=lm(discoveries~year+I(year^2),data=dis)glmod=glm(discoveries~year+I(year^2),family= poisson ,data=dis)#Using an inappropriate link function(normal)leads to#a poor fit relative to the poisson link function.r2beta(lmod)r2beta(glmod)#PQL models#Currently only SGV method is supportedlibrary(MASS)PQL_bac=glmmPQL(y~trt+I(week>2),random=~1|ID,family=binomial,data=bacteria,verbose=FALSE)r2beta(PQL_bac,method= sgv )r2dt R Squared Difference Test(R2DT).Test for a statistically significantdifference in generalized explained variance between two candidatemodels.DescriptionR Squared Difference Test(R2DT).Test for a statistically significant difference in generalized ex-plained variance between two candidate models.Usager2dt(x,y=NULL,cor=TRUE,fancy=FALSE,onesided=TRUE,clim=95,nsims=2000,mu=NULL)Argumentsx An R2object from the r2beta function.y An R2object from the r2beta function.If y is not specified,Ho:E[x]=mu is tested(mu is specified by the user).r2dt11 cor if TRUE,the R squared statistics are assumed to be positively correlated anda simulation based approach is used.If FALSE,the R squared are assumedindependent and the difference of independent beta distributions is used.Thisonly needs to be specified when two R squared measures are being considered.fancy if TRUE,the output values are rounded and changed to characters.onesided if TRUE,the alternative hypothesis is that one model explains a larger propor-tion of generalized variance.If false,the alternative is that the amount of gener-alized variance explained 
by the two candidate models is not equal.clim Desired confidence level for interval estimates regarding the difference in gen-eralized explained variance.nsims number of samples to draw when simulating correlated non-central beta random variables.This parameter is only relevant if cor=TRUE.mu Used to test Ho:E[x]=mu.ValueA confidence interval for the difference in R Squared statistics and a p-value corresponding to thenull hypothesis of no difference.Exampleslibrary(nlme)library(lme4)library(r2glmm)data(Orthodont)#Comparing two linear mixed modelsm1=lmer(distance~age*Sex+(1|Subject),Orthodont)m2=lmer(distance~age*Sex+(1+age|Subject),Orthodont)m1r2=r2beta(model=m1,partial=FALSE)m2r2=r2beta(model=m2,partial=FALSE)#Accounting for correlation can make a substantial difference.r2dt(x=m1r2,y=m2r2,cor=TRUE)r2dt(x=m1r2,y=m2r2,cor=FALSE)Indexcalc_sgv,2cmp_R2,3glmmPQL,7,8glmPQL,4pSym,5lme,8,9lmer,8,9make.partial.C,5plot.R2,6pqlmer,7print.R2,8r2beta,8r2dt,1012。

arXiv:hep-ph/9801388v4 17 Feb 1999

Methods in the LO Evolution of Nondiagonal Parton Distributions: The DGLAP Case

Andreas Freund, Vadim Guzey
Department of Physics, The Pennsylvania State University
University Park, PA 16802, U.S.A.

Abstract

In this paper, we discuss the algorithms used in the LO evolution program for nondiagonal parton distributions in the DGLAP region and discuss the stability of the code. Furthermore, we demonstrate that we can reproduce the case of the LO diagonal evolution within 0.5% of the original code as developed by the CTEQ collaboration.

PACS: 12.38.Bx, 13.85.Fb, 13.85.Ni
Keywords: Deeply Virtual Compton Scattering, Nondiagonal distributions, Evolution

I. INTRODUCTION

Due to the recent availability of exclusive hard diffraction data at HERA, there has been great interest in the study of generalized parton distributions, also known as nondiagonal, off-forward or non-forward parton distributions, occurring in these reactions (see Refs. [1-11]). These parton distributions are different from the usual, diagonal distributions found in, e.g., inclusive DIS, since one has a finite momentum transfer to the proton due to the exclusive nature of the reactions. In this paper we give an exposition of the algorithms used to numerically solve the generalized GLAP evolution equations. The main part of the evolution program was taken over from the CTEQ package for the diagonal parton distributions of inclusive reactions. At this point in time the evolution kernels for generalized parton distributions are known only to leading order in \alpha_s, and thus our analysis will be a leading-order one.

The paper is organized in the following way. In Sec. II we will quickly review the formal expressions for the parton distributions and the evolution equations, together with the explicit expressions for the kernels and a first comment on the arising numerical problems. In Sec.
III we will explain the difference of our algorithms to the ones used in the original CTEQ package and then give a detailed account of how we implemented our algorithms. In Sec. IV we demonstrate the stability of our code and show that we reproduce the case of the usual or diagonal parton distributions within 1% for a vanishing asymmetry factor. Sec. V contains concluding remarks.

II. REVIEW OF NONDIAGONAL PARTON DISTRIBUTIONS, EVOLUTION EQUATIONS AND KERNELS

A. Nondiagonal Parton Distributions

Generalized or, from now on, nondiagonal parton distributions occur, for example, in exclusive, hard diffractive J/\psi or \rho meson production and, alternatively, in deeply virtual Compton scattering (DVCS), where a real photon is produced. As mentioned in Sec. I, since one imposes the condition of exclusiveness on top of the diffraction condition, one has a kinematic situation in which there is a non-zero momentum transfer onto the target proton, as evidenced by the lowest-order "handbag" diagram of DVCS in Fig. 1. The picture serves only to introduce the kinematic notations used throughout the text and nothing more. For more on DVCS see, for example, Refs. [6,7,12-20].

FIG. 1. The lowest order handbag contribution to DVCS, with Q^2 = -q^2 and q'^2 = 0.

The nondiagonal quark and gluon distributions have the following formal definition as matrix elements of bilocal, path-ordered and renormalized quark and gluon operators sandwiched between different momentum states of the proton, as in the factorization theorems for exclusive vector meson production [4] and DVCS [7,19,20]:

f_{q/p} = \int_{-\infty}^{\infty} \frac{dy^-}{2\pi}\, e^{-i x_1 p^+ y^-} \langle p'|\, \bar{\psi}(0,y^-,0_\perp)\, \gamma^+ \mathcal{P}\, \psi(0)\, |p\rangle,   (1)

with an analogous expression, built from two gluon field-strength operators, for f_{g/p}. The distributions evolve in Q^2 according to the nondiagonal GLAP equations

dq^{NS}(x_1,\Delta,Q^2)/d\ln Q^2 = \int_{x_1}^{1} \frac{dy_1}{y_1}\, P^{NS}_{qq}\left(\frac{x_1}{y_1},\frac{\Delta}{y_1}\right) q^{NS}(y_1,\Delta,Q^2),

dq^{S}(x_1,\Delta,Q^2)/d\ln Q^2 = \int_{x_1}^{1} \frac{dy_1}{y_1} \left[ P^{S}_{qq}\, q^{S}(y_1,\Delta,Q^2) + P_{qg}\, g^{S}(y_1,\Delta,Q^2) \right],

dg^{S}(x_1,\Delta,Q^2)/d\ln Q^2 = \int_{x_1}^{1} \frac{dy_1}{y_1} \left[ P_{gq}\, q^{S}(y_1,\Delta,Q^2) + P_{gg}\, g^{S}(y_1,\Delta,Q^2) \right],   (2)

with all kernels evaluated at (x_1/y_1, \Delta/y_1). The LO nondiagonal kernels are of the form

P^{S,NS}_{qq}(x_1,\Delta) = \frac{\alpha_s C_F}{2\pi} \left[ \frac{\cdots}{(1-\Delta)(1-x_1)} - \delta(1-x_1) \left( \int_0^1 dz_1 \cdots + \int_0^1 dz_2 \cdots \right) \right],

P_{qg}(x_1,\Delta) = \frac{\alpha_s N_F}{2\pi}\, \frac{x_1^3 + x_1 (1-x_1)^2 - x_1^2 \Delta}{\cdots},

P_{gq}(x_1,\Delta) = \frac{\alpha_s C_F}{2\pi}\, \frac{1 + (1-x_1)^2 - \Delta}{\cdots},

P_{gg}(x_1,\Delta) = \frac{\alpha_s\, 2 N_C}{2\pi} \left[ \cdots + \frac{(x_1-\Delta)(\cdots)}{1-x_1} - \delta(1-x_1) \int_0^1 dz_1 \cdots \right].   (3)

With our definitions, we obtain for the diagonal limit, i.e., \Delta = 0, q^{S,NS} \to xQ(x,Q^2) and g^{S} \to xG(x,Q^2), where Q and G are the usual parton densities.

A word is in order concerning the regularization prescription employed above, which is the usual +-prescription in the first integral below and a generalized +-prescription in the second, since these prescriptions have direct implications for the numerical treatment of the integrals involved. In convoluting the above kernels with a nondiagonal parton density, after appropriate scaling of x_1 and \Delta with y_1, one has to replace z_1 and z_2 in the regularization integrals by z_1 \to (y_1-x_1)/y_1 and z_2 \to (y_1-x_1)/(y_1-\Delta). This leads to the following regularization prescription, as employed in our modified version of the CTEQ package and in agreement with Ref. [7]:
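Numerically, a +-prescription convolution of this type is evaluated by subtracting the density at the endpoint of the integration, which leaves a finite integrand for the quadrature, and adding the logarithmic boundary term back analytically. The following sketch illustrates that subtraction trick in isolation; the function name and the simple Simpson quadrature are our own illustration, not code taken from the CTEQ package:

```python
import math

def plus_convolution(f, x, n=2000):
    """Evaluate int_x^1 dy f(y) / (y (1 - x/y))_+ via the subtracted form:
    the integrand [y f(y) - x f(x)] / (y (y - x)) is finite at y = x,
    and the boundary term f(x) ln(1 - x) is restored analytically."""
    fx = f(x)

    def g(y):
        if abs(y - x) < 1e-12:
            # step slightly away from the removable 0/0 point at y = x
            y = x + 1e-9
        return (y * f(y) - x * fx) / (y * (y - x))

    # composite Simpson rule on [x, 1]; n must be even
    h = (1.0 - x) / n
    s = g(x) + g(1.0)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * g(x + i * h)
    return h * s / 3.0 + fx * math.log(1.0 - x)
```

The generalized +-prescription is handled in the same spirit, with one subtraction for each regulated denominator, so that every integrand handed to the quadrature routine is finite on the whole integration range.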
the first integral below and a generalized plus-prescription for the second integral, is in order, since these prescriptions have direct implications for the numerical treatment of the integrals involved. In convoluting the above kernels, after appropriate scaling of x_1 and \Delta with y_1, with a nondiagonal parton density, one has to replace z_1 and z_2 in the regularization integrals according to z_1 \to (y_1 - x_1)/y_1 and z_2 \to (y_1 - x_1)/(y_1 - \Delta). This leads to the following regularization prescriptions, as employed in our modified version of the CTEQ package and in agreement with Ref. [7]:

\int_{x_1}^{1} \frac{dy_1}{y_1}\, \frac{f(y_1)}{[1 - x_1/y_1]_+} = \int_{x_1}^{1} dy_1\, \frac{y_1 f(y_1) - x_1 f(x_1)}{y_1 (y_1 - x_1)} + f(x_1)\, \ln(1 - x_1),   (4)

\int_{x_1}^{1} dy_1\, \frac{(x_1 - \Delta)\, f(y_1)}{(y_1 - x_1)_+\, (y_1 - \Delta)} = \int_{x_1}^{1} dy_1\, \Big[ \frac{y_1 f(y_1) - x_1 f(x_1)}{y_1 (y_1 - x_1)} - \frac{y_1 f(y_1) - \Delta f(x_1)}{y_1 (y_1 - \Delta)} \Big] + f(x_1)\, \ln\frac{1 - x_1}{1 - \Delta}.   (5)

Eq. (5) and a closer inspection of Eq. (3) reveal that if one were to integrate each term by itself, one would encounter infinities in all the expressions, both at the lower bound of integration if \Delta = y_1 and in taking the limit \Delta = x_1. Although Eq. (3) is completely analytical, it will cause numerical problems, since the cancellations of the infinite terms can only be performed in the analytic expressions. This is in contrast to the diagonal case, where such problems are absent. The integration over Q^2 is identical to the diagonal case and hence has already been dealt with in the original CTEQ code.

III. DIFFERENCES BETWEEN THE CTEQ AND OUR ALGORITHMS

Let us point out at the beginning that our code is to 99% the original CTEQ code (for a detailed account of this code see Ref. [22]). We only modified the subroutines NSRHSM, NSRHSP and SNRHS within the subroutine EVOLVE and added the subroutines NEWARRAY and NINTEGR. These routines deal only with the convolution integrals and not with, for example, the Q^2-integration or any other part of the CTEQ code, which remains unchanged. This is due to the fact that the main difference between the diagonal and nondiagonal evolution stems from the different kernels, which influence only the convolution integration and nothing else. In order to make the simple changes in the existing routines more obvious, we will first deal with
the new subroutines.

A. NEWARRAY and NINTEGR

Due to the increased complexity of the convolution integrals as compared to the diagonal case, as pointed out in Sec. II B, we were forced to slightly change the very elegant and fast integration routines employed in the original CTEQ code. The basic idea, very close to the one in the CTEQ code, is the following. Within the CTEQ package, the parton distributions are given on a dynamical x- and Q-grid of variable size, and the convolution of the kernels with the initial distribution is performed on the x-grid. Due to the possibility of singular behavior of the integrands, we perform the convolution integrals by first splitting up the region of integration according to the number of grid points in x, analytically integrating between two neighboring grid points x_i and x_{i+1}, where i runs over the user-specified number of points in x, and then adding up the contributions from the small intervals, as exemplified in the following equation:

\int_{x_1}^{1} \frac{dy_1}{y_1}\, f(x_1/y_1, \Delta/y_1, y_1) = \sum_{i=0}^{N-1} \int_{x_i}^{x_{i+1}} \frac{dy_1}{y_1}\, f(x_1/y_1, \Delta/y_1, y_1),   (6)

where f(x_1/y_1, \Delta/y_1, y_1) is the product of the initial distribution for each evolution step and an evolution kernel, with x_0 = x_1 and x_N = 1. We can do the integration analytically between two neighboring grid points by approximating the distribution function f(y_1) through a second order polynomial a y_1^2 + b y_1 + c, using the fact that we know the function on the grid points x_{i-1}, x_i and x_{i+1} and can thus compute the coefficients a, b, c of the polynomial in the following way, given that the function is well behaved and the neighboring grid points are close together [23]:

f(x_{i+1}) = a x_{i+1}^2 + b x_{i+1} + c,
f(x_i) = a x_i^2 + b x_i + c,
f(x_{i-1}) = a x_{i-1}^2 + b x_{i-1} + c,   (7)

which yields a 3x3 matrix relating the coefficients of the polynomial to the values of the distribution function at x_{i-1}, x_i and x_{i+1}. Inverting this matrix in the usual way, one obtains a matrix relating the x values of the distribution function to the coefficients, making it possible to compute them just from the knowledge of the different x values and the value of the distribution
function at those x values. This calculation is implemented in NEWARRAY, where the initial distribution is handed to the subroutine and the coefficient array is then returned. The coefficient array, in which the values of the coefficients for the integration are stored, has 3 times the size of the user-specified number of points in x, since we have 3 coefficients for each bin in x. We treat the last integration, between the points x_0 and x_1, again by approximating the distribution in this last bin through a second order polynomial. However, for this last bin the coefficients are computed using the three lowest points in x and the values of the distribution at those points, since the point x_{-1}, which would be required according to the above prescription for calculating the coefficients, does not exist.

After having regrouped the terms appearing in the convolution integral in such a way that all the necessary cancellations of large terms occur within the analytic expression for the integral and not between different parts of the convolution integral, the integration of the different terms is performed in the new subroutine NINTEGR with the aid of the coefficient array from NEWARRAY [24]. As mentioned above, the convolution integral from x_1 to 1 is split up into several intervals in which the integration is carried out analytically. To give an example of this procedure, we consider the convolution integral of P_{qg}(x_1/y_1, \Delta/y_1) with the parton distribution g^{S}(y_1),

\int_{x_1}^{1} \frac{dy_1}{y_1}\, P_{qg}\Big(\frac{x_1}{y_1}, \frac{\Delta}{y_1}\Big)\, g^{S}(y_1),   (8)

which involves, among others, terms proportional to x_1^2 (x_1 - \Delta) and x_1 (y_1 - x_1)^2 that have to be regrouped before the bin-wise analytic integration. The cases x_1 = \Delta = x and \Delta << x_1 are implemented in NINTEGR in the same way as above, but separately from each other and from the more general case [25]. For x_1 = \Delta = x the form of the integrands simplifies in such a way that one can use the integration routines INTEGR and HINTEG from the original CTEQ code. In the case of \Delta << x_1, the analytic expressions obtained for the above general case are expanded to first order in \Delta, and then the same methods as above for evaluating the integrals are applied. The last case also allows us to go to the diagonal case by
setting \Delta = 0 without using the integration routines from the original CTEQ code, giving us a valuable tool for comparing our code to the original one.

B. Modifications in NSRHSM, NSRHSP and SNRHS

The modifications in the already existing routines NSRHSM, NSRHSP and SNRHS of the original CTEQ package are rather trivial. The most notable difference is that the subroutine NEWARRAY is called every time one of the three subroutines is called, since the distribution function handed down on an array changes with every call of NSRHSM, NSRHSP and SNRHS. In NSRHSM and NSRHSP, NEWARRAY is called only once, since one is dealing only with the non-singlet part containing no gluons, whereas in SNRHS, the subroutine for the singlet case, one needs a coefficient array for both the quark and the gluon. Besides this change, the calls to INTEGR are replaced by NINTEGR according to how the convolution integral has been regrouped, as explained in Sec. III A. The different regrouped expressions are then added, after integration for different x-values, to obtain the final answer in an output array [26], which is handed back to the subroutine EVOLVE. The method is the same as in the original CTEQ code, but the terms themselves have of course changed.

IV. CODE ANALYSIS

As a first step we tested the stability and speed of convergence of the code and found that increasing the number of points in the x-grid, which is only relevant for the convolution integral, from 50 to 300 changed the result of our calculation by less than 0.5%; hence we can assume that our code converges rather rapidly. We also found the code to be stable down to x_2 = 10^{-10}, beyond which we did not test. Furthermore, we can reproduce the result of the original CTEQ code, i.e., the diagonal case in LO, within 0.5%, giving us confidence that our code works well, since the analytic expressions for the diagonal case are the expansions of the general case of non-vanishing asymmetry up to, but not including, O(\Delta^2). In the following figures (Figs. 2-7) we compare, for illustrative purposes, the
diagonal and nondiagonal case by plotting the ratios

R_g(x_1,x_2,Q^2) = \frac{g(x_1,x_2,Q^2)}{x_1\, G(x_1,Q^2)}, \qquad R_q(x_1,x_2,Q^2) = \frac{q(x_1,x_2,Q^2)}{x_1\, Q(x_1,Q^2)},   (9)

for various values of x_1, Q^2 and \Delta = x_{Bj} [27], i.e., varying x_2, using the CTEQ4M and CTEQ4LQ [28] parameterizations [30]. We assume the same initial conditions for the diagonal and nondiagonal case (see Ref. [5] for a detailed physical motivation of this ansatz).

The reader might wonder why only CTEQ4M and CTEQ4LQ, and not GRV or MRS, were used. The answer is not a prejudice of the authors against GRV or MRS but rather the fact that a comparison of CTEQ4M and CTEQ4LQ shows the same characteristic as comparing, for example, CTEQ4M and GRV at LO. The observation is the following: CTEQ4LQ is given at a different, rather low, Q as compared to CTEQ4M, and hence one has significant corrections from NLO terms in the evolution at these scales. This leads to a large difference between CTEQ4LQ and CTEQ4M (see Fig. 8) if one evolves the CTEQ4LQ set from its very low Q scale to the scale at which the CTEQ4M distribution is given, making a sensitivity study of nondiagonal parton distributions for different initial distributions impossible at LO. Of course, the inclusion of the NLO terms corrects this difference in the diagonal case, but since no NLO calculation of the nondiagonal case is available yet, a study of the sensitivity of nondiagonal evolution to different initial distributions has to wait.

The figures themselves suggest the following. First, the lower the starting scale, the stronger the effect of the difference of the nondiagonal evolution as compared to the diagonal one; most of the difference between nondiagonal and diagonal evolution stems from the first few steps of the evolution at lower scales. Secondly, under the assumption that the NLO evolution in the nondiagonal case will yield the same results for the parton distributions at some scale Q, irrespective of the starting scale Q_0, in analogy to the diagonal case, one can say that the NLO corrections to the nondiagonal evolution will be in the same
direction and of the same order of magnitude as in the diagonal NLO evolution. If, in the nondiagonal case, the NLO corrections were in the opposite direction, which would lead to a marked deviation from the LO results compared to the diagonal case, the overall sign of the NLO nondiagonal kernels would have to change for some \Delta \neq 0, since in the limit \Delta \to 0 we have to recover the diagonal case. This occurrence is not likely, for the following reason. The Feynman diagrams involved in the calculation of the NLO nondiagonal kernels are the same as in the diagonal case, except for the different kinematics; therefore, we have a very good idea about the type of terms appearing in the kernels, namely polynomials, logs, and terms in need of regularization, such as

\ln(z)\, \ln(1-z)\, \frac{f(x_1/y_1, \Delta/y_1)}{y_1},   (10)

which will be numerically small unless y_1 \simeq \Delta in the convolution integral of the evolution equations. Moreover, we know that in this limit the regularized terms in the kernel give the largest contributions in the convolution integral, and therefore sign-changing contributions in the nondiagonal case would have to originate from regularized terms. This in turn disallows a term like Eq. (10), due to the fact that regularized terms are not allowed to vanish in the diagonal limit, since the regularized terms arise from the same Feynman diagrams in both the diagonal and the nondiagonal case. Therefore, the overall sign of the contribution of the NLO nondiagonal kernels will be the same as in the diagonal case.

A word should be said about how the results of Ref. [10] compare to ours. For the same \Delta = 10^{-3}, similar starting scales, and almost identical values of Q, we find good agreement with their numbers for R_g at x_1 \simeq \Delta [29] and are slightly higher at larger x_1.
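Comparisons of this kind reduce to pointwise ratios of gridded distributions. The bookkeeping of Eq. (9) can be sketched in a few lines; all array values below are hypothetical placeholders, not actual CTEQ4, GRV or Ref. [10] output:

```python
# Sketch: form R_g(x1) = g(x1, x2, Q^2) / (x1 * G(x1, Q^2)), Eq. (9), on an
# x1 grid and quantify the percent-level deviation from the diagonal case.
# All numbers are hypothetical placeholders, not CTEQ4, GRV or Ref. [10] output.

x1_grid = [1e-4, 1e-3, 1e-2, 1e-1]

# hypothetical nondiagonal gluon distribution g(x1, x2, Q^2) on the grid
g_nondiag = [2.10, 1.65, 1.20, 0.55]

# hypothetical diagonal combination x1 * G(x1, Q^2) on the same grid
xG_diag = [1.90, 1.55, 1.18, 0.54]

# pointwise ratio R_g of Eq. (9)
R_g = [gn / gd for gn, gd in zip(g_nondiag, xG_diag)]

# deviation from the diagonal expectation R_g = 1, in percent
percent_dev = [100.0 * (r - 1.0) for r in R_g]

for x, r, d in zip(x1_grid, R_g, percent_dev):
    print(f"x1 = {x:.0e}:  R_g = {r:.3f}  ({d:+.1f}%)")
```

In an actual comparison, each curve would come from an evolution run of the modified code; differences between two such R_g curves, e.g. ours versus that of Ref. [10], are then read off pointwise in the same way.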
The observed differences are due to the fact that the quark distributions are included in our evolution, in contrast to Ref. [10], and that their initial distributions are slightly different. We also find ratios very similar to those of Ref. [10] if one changes the starting scale to a lower one. The slight difference of a few percent in the ratios between us and Ref. [10] can again be attributed to the fact that they used the GRV distributions as compared to our use of the CTEQ4 distributions, hence a slight difference in the starting scales, and to their not incorporating quarks into the evolution.

V. CONCLUSIONS

We modified the original CTEQ code in such a way that we can now compute the evolution of nondiagonal parton distributions to LO. We gave a detailed account of the modifications and of the methods employed in the new or modified subroutines. As the reader can see, the modifications and methods themselves are nothing magical but rather a straightforward application of well-known numerical methods. We further demonstrated the rapid convergence and stability of our code. In the limit of vanishing asymmetry we reproduce the diagonal case in LO, as obtained from the original CTEQ code, within 1%. We also have good agreement with the results of Ref. [10]. In the future, after the NLO kernels for the nondiagonal case have been calculated, we will extend the code to the NLO level to be on par with the diagonal case.

ACKNOWLEDGMENTS

This work was supported in part by the U.S. Department of Energy under grant number DE-FG02-90ER-40577. We would like to thank John Collins and Mark Strikman for helpful conversations.

REFERENCES

[1] S. J. Brodsky, L. L. Frankfurt, J. F. Gunion, A. H. Mueller, and M. Strikman, Phys. Rev. D50 (1994) 3134; see also [2].
[2] L. L. Frankfurt, W. Koepf, and M. Strikman, Phys. Rev. D54 (1996) 3194.
[3] A. Radyushkin, Phys. Lett. B385 (1996) 333.
[4] J. C. Collins, L. Frankfurt, and M. Strikman, Phys. Rev. D56 (1997) 2982.
[5] L. L. Frankfurt, A. Freund, V. Guzey and M. Strikman, hep-ph/9703449, to appear
in Phys. Lett. B.
[6] X.-D. Ji, Phys. Rev. D55 (1997) 7114.
[7] A. Radyushkin, Phys. Lett. B380 (1996) 417; Phys. Rev. D56 (1997) 5524.
[8] I. I. Balitsky and V. M. Braun, Nucl. Phys. B311 (1989) 541.
[9] J. Bluemlein, B. Geyer and D. Robaschik, hep-ph/9705264.
[10] A. Martin and M. Ryskin, hep-ph/9711371.
[11] L. Mankiewicz, G. Piller and T. Weigel, hep-ph/9711227.
[12] D. Müller, hep-ph/9704406.
[13] X. Ji and J. Osborne, hep-ph/9707254.
[14] A. V. Belitsky and D. Müller, hep-ph/9709379.
[15] M. Diehl, T. Gousset, B. Pire, and J. P. Ralston, Phys. Lett. B411 (1997) 193.
[16] Z. Chen, hep-ph/9705279.
[17] L. Frankfurt, A. Freund and M. Strikman, hep-ph/9710356.
[18] L. Mankiewicz, G. Piller, E. Stein, M. Vänttinen and T. Weigl, hep-ph/9712251.
[19] J. C. Collins and A. Freund, hep-ph/9801262.
[20] X.-D. Ji and J. Osborne, hep-ph/9801260.
[21] For more details on the derivation of the kernels to leading order see, for example, [5-7].
[22] The CTEQ meta page at /~cteq/ and the documentation in the different parts of the package.
[23] The parton distribution functions are smooth and well behaved; thus one just has to use enough points in x.
[24] The general analytic expressions for the convolution integrals in an arbitrary x-bin were obtained with the help of MATHEMATICA.
[25] The value of \Delta is specified in NINTEGR.
[26] The value of the output at position N is always 0, since in this case the upper and the lower bound of the integral coincide.
[27] We also plot the same ratio for \Delta = 0 to demonstrate the deviation of our code in the diagonal limit from the CTEQ code.
[28] CTEQ4LQ gives the best fit at low Q^2, whereas CTEQ4M gives the best \chi^2-fit over a large range of Q and x.
[29] This was also the case in Ref. [5], where the authors mistakenly labeled the energies as Q^2 where in fact they should have been Q, which led to some confusion in the comparisons of this first study to Ref. [10].
[30] H. L. Lai et al., Phys. Rev. D55 (1997) 1280.

FIG. 2. R_g is plotted versus x_1 for fixed \Delta using the CTEQ4M parameterization
FIG. 3. R_q is plotted versus x_1 for fixed \Delta using the CTEQ4M parameterization.
FIG. 4. R_g and R_q are plotted versus x_1 for \Delta = 0 using the CTEQ4M parameterization.
FIG. 5. R_g is plotted versus x_1 for fixed \Delta using the CTEQ4LQ parameterization.
FIG. 6. R_q is plotted versus x_1 for fixed \Delta using the CTEQ4LQ parameterization.
FIG. 7. R_g and R_q are plotted versus x_1 for \Delta = 0 using the CTEQ4LQ parameterization.
FIG. 8. The ratios of CTEQ4M to CTEQ4LQ for gluons and quarks in the diagonal case (panels: Ratio G(x_1,Q) and Ratio q(x_1,Q) at Q = 10 GeV) are plotted to demonstrate the difference between the LO evolution for these parameterizations.
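To close, the bin-wise strategy of Sec. III A can be illustrated with a small self-contained sketch: fit a second order polynomial a y^2 + b y + c through three neighboring grid points as in Eq. (7), then integrate the polynomial analytically in each bin and sum the bins as in Eq. (6). The grid, the test function and the helper names below are hypothetical stand-ins; the actual implementation lives in the Fortran subroutines NEWARRAY and NINTEGR and in addition handles the regularized, singular integrands of Sec. II B, which this smooth toy example deliberately avoids.

```python
# Sketch of the Sec. III A strategy: approximate f(y) in each x-bin by a
# quadratic a*y^2 + b*y + c fixed by three neighboring grid points (Eq. 7),
# then integrate the quadratic analytically bin by bin and sum (Eq. 6, here
# with a smooth, nonsingular integrand for simplicity).  The grid and the
# test function are hypothetical stand-ins for a parton distribution.

def quad_coeffs(x0, x1, x2, f0, f1, f2):
    """Solve the 3x3 system f(xi) = a*xi^2 + b*xi + c (Eq. 7) by elimination."""
    d1 = (f1 - f0) / (x1 - x0)          # first divided differences
    d2 = (f2 - f1) / (x2 - x1)
    a = (d2 - d1) / (x2 - x0)           # leading coefficient of the parabola
    b = d1 - a * (x0 + x1)
    c = f0 - a * x0 * x0 - b * x0
    return a, b, c

def quad_integral(a, b, c, lo, hi):
    """Exact integral of a*y^2 + b*y + c over [lo, hi]."""
    F = lambda y: a * y**3 / 3.0 + b * y**2 / 2.0 + c * y
    return F(hi) - F(lo)

def binwise_integral(xgrid, fvals):
    """Integrate f over [xgrid[0], xgrid[-1]] by analytic integration per bin."""
    total = 0.0
    for i in range(len(xgrid) - 1):
        # use points j-1, j, j+1; the lowest bin falls back on the three
        # lowest points, mirroring the boundary-bin treatment in the text
        j = max(i, 1)
        a, b, c = quad_coeffs(xgrid[j-1], xgrid[j], xgrid[j+1],
                              fvals[j-1], fvals[j], fvals[j+1])
        total += quad_integral(a, b, c, xgrid[i], xgrid[i+1])
    return total

# toy check: a quadratic is reproduced exactly, so its integral is too
xgrid = [0.1 + 0.09 * k for k in range(11)]          # 0.1 ... 1.0
f = lambda y: 3.0 * y * y - 2.0 * y + 0.5
result = binwise_integral(xgrid, [f(y) for y in xgrid])
exact = (1.0**3 - 0.1**3) - (1.0**2 - 0.1**2) + 0.5 * (1.0 - 0.1)
print(result, exact)
```

Because a parabola is reproduced exactly by the three-point fit, the toy integral matches the exact value to rounding accuracy; for a genuine distribution the accuracy instead relies on neighboring grid points being close together, as noted in [23].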
