Evolutionary programming made faster


Evolutionary Algorithms

Evolutionary Computation: An Overview — Bäck and Schwefel
• We present an overview of the most important representatives of algorithms gleaned from natural evolution, so-called evolutionary algorithms. Evolution strategies, evolutionary programming, and genetic algorithms are summarized, with special emphasis on the principle of strategy parameter self-adaptation utilized by the first two algorithms to learn their own strategy parameters such as mutation variances and covariances. Some experimental results are presented which demonstrate the working principle and robustness of the self-adaptation methods used in evolution strategies and evolutionary programming. General principles of evolutionary algorithms are discussed, and we identify certain properties of natural evolution which might help to improve the problem solving capabilities of evolutionary algorithms even further.

A Survey of Evolutionary Computation


1. What Is Evolutionary Computation
In computer science, evolutionary computation is a subfield of artificial intelligence (more specifically, of computational intelligence) that deals with combinatorial optimization problems.

Its algorithms are inspired by the "survival of the fittest" mechanism of natural selection and by the rules of genetic inheritance in biological evolution. A program iteratively simulates this process: the problem to be solved is treated as the environment, and within a population of candidate solutions, an optimal solution is sought through natural evolution.

2. Origins of Evolutionary Computation
The idea of applying Darwinian theory to problem solving originated in the 1950s.

In the 1960s this idea was developed independently in three places.

In the United States, Lawrence J. Fogel proposed evolutionary programming, while John Henry Holland at the University of Michigan drew on the core ideas of Darwin's theory of biological evolution and Mendel's laws of inheritance, distilling, simplifying, and abstracting them into genetic algorithms.

In Germany, Ingo Rechenberg and Hans-Paul Schwefel proposed evolution strategies.

These lines of work developed independently for roughly 15 years.

Before the 1980s the field attracted little attention: it was not yet mature, and, constrained by the small memories and slow processors of the computers of the time, it had produced no practical applications.

In the early 1990s the branch of genetic programming was proposed, and evolutionary computation formally emerged as a discipline.

The four branches exchanged ideas freely, learned from one another, and hybridized into new evolutionary algorithms, driving enormous growth in evolutionary computation.

Nils Aall Barricelli began simulating evolution with evolutionary algorithms and artificial life in the 1960s.

This work was greatly extended by Alex Fraser, who published a series of papers on the simulation of artificial selection.

[1] Artificial evolution became a widely recognized optimization method as a result of Ingo Rechenberg's work in the 1960s and early 1970s, in which he used evolution strategies to solve complex engineering problems.

Evolution Strategies

When this algorithm was applied to function optimization, two drawbacks were found:
(1) using a constant standard deviation in each dimension makes convergence to the optimum very slow;
(2) the brittle nature of point-to-point search makes the procedure prone to stagnation near local optima (although the algorithm can be shown to converge asymptotically to the global optimum).
8.2 Evolution Strategies
(μ + 1)-ES:
The early (1+1)-ES did not embody the role of a population; only a single individual evolved, which is clearly limiting. Rechenberg subsequently proposed the (μ+1)-ES, in which there are μ parents (μ > 1) and a recombination operator is introduced so that parent individuals can combine to produce new ones. During recombination, two individuals are chosen at random from the μ parents:
In 1973, Rechenberg defined the expected rate of convergence of the algorithm as the ratio of the average improvement in distance to the optimum to the number of trials required to obtain that improvement.
In 1981, Schwefel used multiple parents and multiple offspring in evolution strategies, extending Rechenberg's earlier work (which used multiple parents but only a single offspring). Two schemes were subsequently examined, denoted (μ+λ)-ES and (μ,λ)-ES. In the former, μ parents produce λ offspring and all solutions compete for survival, with the best μ selected as parents of the next generation. In the latter, only the λ offspring (λ > μ) compete for survival, and the μ parents are completely replaced in each generation; that is, the lifespan of each solution is limited to a single generation. Increasing the population size increases the rate of optimization over a fixed number of generations.
Basic Techniques of Evolution Strategies
Their components are then exchanged at random to form the components of a new offspring individual, yielding the following new individual:
(2) Intermediate recombination. This scheme likewise first selects two parent individuals at random, then takes the mean of each pair of parental components as the corresponding component of the new offspring individual; the resulting new individual is:
In this case each component of the new individual blends information from both parents, whereas in discrete recombination each component carries the genes of only one parent.

Software Engineering: Exam Answers (English)


Chapter 1: An Introduction to Software Engineering
1. Why is software engineering important?
Software engineering arose in response to the software crisis, and its development has greatly improved our software.

Research in software engineering has given us a much deeper understanding of software development activities and has produced effective methods for software specification, design, and implementation.

New notations and tools in software engineering have greatly reduced the effort of building large, complex systems.
2. What is software? What is software engineering?
Software engineering is an engineering discipline that covers all aspects of software production, from early system specification through to system maintenance after deployment.

3. What is the difference between software engineering and computer science?
Computer science studies the theories and methods that underlie computers and software systems, whereas software engineering deals with the practical problems of producing software.

Computer science emphasizes theory and fundamentals; software engineering emphasizes the practical activities of developing and delivering software.

4. What are the attributes of good software?
Beyond providing its basic functionality, software should be maintainable, dependable, and acceptable to users.

Maintainability: software must be able to evolve continually to meet changing needs. Dependability: software must be trustworthy. Efficiency: software should not waste system resources. Usability: software should be reasonably easy to use.
5. What is CASE?
CASE tools are software systems designed to support routine activities in the software process, such as editing design diagrams, checking diagram consistency, and keeping track of the program tests that have been run.

6. What is the difference between software engineering and system engineering?
System engineering is concerned with all aspects of computer-based system development, including hardware, software, and process engineering.

Software engineering is part of this more general process; it is concerned with developing the infrastructure software, control software, application software, and databases in the system.

C code for the 23 test functions (the functions are from the paper "Evolutionary Programming Made Faster")


/* C code for the 23 test functions, cleaned up and made compilable (the garbled
   characters in the original dump have been removed). Assumed globals, not shown
   in the original: int dim (problem dimension; D stands for the fixed low
   dimension of F14-F23), long fit (evaluation counter), double pi.
   Each block is an alternative definition of calculation(); use one at a time. */

//F1: Sphere
double calculation(double *sol) {
    double result = 0.0;
    fit++;  // count evaluations
    for (int i = 0; i < dim; i++)
        result += sol[i] * sol[i];
    return result;
}
//***********************************
//F2: Schwefel's Problem 2.22
double calculation(double *sol) {
    double sum = 0.0, prod = 1.0;
    fit++;
    for (int i = 0; i < dim; i++) {
        double t = fabs(sol[i]);
        sum += t;
        prod *= t;
    }
    return sum + prod;
}
//***********************************
//F3: Quadric (Schwefel's Problem 1.2)
double calculation(double *sol) {
    double result = 0.0;
    fit++;
    for (int i = 0; i < dim; i++) {
        double partial = 0.0;
        for (int j = 0; j <= i; j++)
            partial += sol[j];
        result += partial * partial;
    }
    return result;
}
//***********************************
//F4: max |x_i| (Schwefel's Problem 2.21)
double calculation(double *sol) {
    double result = fabs(sol[0]);
    fit++;
    for (int i = 1; i < dim; i++)
        if (result < fabs(sol[i]))
            result = fabs(sol[i]);
    return result;
}
//***********************************
//F5: Rosenbrock
double calculation(double *sol) {
    double result = 0.0;
    fit++;
    for (int i = 0; i < dim - 1; i++) {
        double d = sol[i] * sol[i] - sol[i + 1];
        result += 100.0 * d * d + (sol[i] - 1.0) * (sol[i] - 1.0);
    }
    return result;
}
//***********************************
//F6: Step
double calculation(double *sol) {
    double result = 0.0;
    fit++;
    for (int i = 0; i < dim; i++) {
        double t = floor(sol[i] + 0.5);
        result += t * t;
    }
    return result;
}
//***********************************
//F7: Quartic with noise
double calculation(double *sol) {
    double result = 0.0;
    fit++;
    for (int i = 0; i < dim; i++)
        result += (i + 1) * sol[i] * sol[i] * sol[i] * sol[i];
    return result + (double)rand() / RAND_MAX;
}
//***********************************
//F8: Schwefel
double calculation(double *sol) {
    double result = 0.0;
    fit++;
    for (int i = 0; i < dim; i++)
        result += -sol[i] * sin(sqrt(fabs(sol[i])));
    return result;
}
//***********************************
//F9: Rastrigin
double calculation(double *sol) {
    double result = 0.0;
    fit++;
    for (int i = 0; i < dim; i++)
        result += sol[i] * sol[i] - 10.0 * cos(2.0 * pi * sol[i]) + 10.0;
    return result;
}
//***********************************
//F10: Ackley
double calculation(double *sol) {
    double tmp1 = 0.0, tmp2 = 0.0;
    fit++;
    for (int i = 0; i < dim; i++) {
        tmp1 += sol[i] * sol[i];
        tmp2 += cos(2.0 * pi * sol[i]);
    }
    return 20.0 + exp(1.0) - 20.0 * exp(-0.2 * sqrt(tmp1 / dim)) - exp(tmp2 / dim);
}
//***********************************
//F11: Griewank (the original was missing its return statement)
double calculation(double *sol) {
    double sum = 0.0, prod = 1.0;
    fit++;
    for (int i = 0; i < dim; i++) {
        sum += sol[i] * sol[i] / 4000.0;
        prod *= cos(sol[i] / sqrt(i + 1.0));
    }
    return sum - prod + 1.0;
}
//***********************************
//F12: Generalized Penalized #1
double calculation(double *sol) {
    double x[dim];  // C99 variable-length array
    double result = 0.0, penalty = 0.0;
    fit++;
    for (int i = 0; i < dim; i++) {
        x[i] = 1.0 + (sol[i] + 1.0) / 4.0;
        double u = 0.0;  // penalty term u(sol[i], 10, 100, 4)
        if (sol[i] > 10.0)
            u = 100.0 * pow(sol[i] - 10.0, 4.0);
        else if (sol[i] < -10.0)
            u = 100.0 * pow(-sol[i] - 10.0, 4.0);
        penalty += u;
    }
    for (int i = 0; i < dim - 1; i++)
        result += (x[i] - 1.0) * (x[i] - 1.0)
                * (1.0 + 10.0 * sin(pi * x[i + 1]) * sin(pi * x[i + 1]));
    return pi / dim * (10.0 * sin(pi * x[0]) * sin(pi * x[0]) + result
                       + (x[dim - 1] - 1.0) * (x[dim - 1] - 1.0)) + penalty;
}
//***********************************
//F13: Generalized Penalized #2 (the original never declared result)
double calculation(double *sol) {
    double result = 0.0, penalty = 0.0;
    fit++;
    for (int i = 0; i < dim; i++) {
        double u = 0.0;  // penalty term u(sol[i], 5, 100, 4)
        if (sol[i] > 5.0)
            u = 100.0 * pow(sol[i] - 5.0, 4.0);
        else if (sol[i] < -5.0)
            u = 100.0 * pow(-sol[i] - 5.0, 4.0);
        penalty += u;
    }
    for (int i = 0; i < dim - 1; i++)
        result += (sol[i] - 1.0) * (sol[i] - 1.0)
                * (1.0 + sin(3.0 * pi * sol[i + 1]) * sin(3.0 * pi * sol[i + 1]));
    // Note: the paper's f13 squares the final sine factor; this transcription
    // keeps the original poster's (unsquared) version.
    return 0.1 * (sin(3.0 * pi * sol[0]) * sin(3.0 * pi * sol[0]) + result
                  + (sol[dim - 1] - 1.0) * (1.0 + sin(2.0 * pi * sol[dim - 1])))
           + penalty;
}
//***********************************
//F14: Shekel's Foxholes (here D = 2)
double calculation(double sol[2]) {
    double a[2][25] = {
        {-32,-16,0,16,32,-32,-16,0,16,32,-32,-16,0,16,32,-32,-16,0,16,32,-32,-16,0,16,32},
        {-32,-32,-32,-32,-32,-16,-16,-16,-16,-16,0,0,0,0,0,16,16,16,16,16,32,32,32,32,32}
    };
    double sum = 0.0;
    fit++;
    for (int j = 0; j < 25; j++) {
        double top = j + 1.0;
        for (int i = 0; i < 2; i++)
            top += pow(sol[i] - a[i][j], 6.0);
        sum += 1.0 / top;
    }
    return 1.0 / (1.0 / 500.0 + sum);
}
//***********************************
//F15: Kowalik
double calculation(double sol[4]) {
    double a[11] = {0.1957, 0.1947, 0.1735, 0.1600, 0.0844, 0.0627,
                    0.0456, 0.0342, 0.0323, 0.0235, 0.0246};
    double b[11] = {4.0, 2.0, 1.0, 0.5, 0.25, 1.0/6, 1.0/8, 1.0/10,
                    1.0/12, 1.0/14, 1.0/16};
    double top = 0.0;
    fit++;
    for (int i = 0; i < 11; i++) {
        double num = sol[0] * (b[i] * b[i] + b[i] * sol[1]);
        double frac = num / (b[i] * b[i] + b[i] * sol[2] + sol[3]);
        top += (a[i] - frac) * (a[i] - frac);
    }
    return top;
}
//***********************************
//F16: Six-Hump Camel-Back
double calculation(double sol[2]) {
    fit++;
    return 4.0 * pow(sol[0], 2.0) - 2.1 * pow(sol[0], 4.0) + pow(sol[0], 6.0) / 3.0
         + sol[0] * sol[1] - 4.0 * pow(sol[1], 2.0) + 4.0 * pow(sol[1], 4.0);
}
//***********************************
//F17: Branin
double calculation(double sol[2]) {
    double top;
    fit++;
    top = sol[1] - (5.1 / (4.0 * pi * pi)) * pow(sol[0], 2.0)
        + (5.0 / pi) * sol[0] - 6.0;
    top = pow(top, 2.0);
    return top + 10.0 * (1.0 - 1.0 / (8.0 * pi)) * cos(sol[0]) + 10.0;
}
//***********************************
//F18: Goldstein-Price
double calculation(double sol[2]) {
    double top, tmp1, tmp2;
    fit++;
    top = 19.0 - 14.0 * sol[0] + 3.0 * sol[0] * sol[0] - 14.0 * sol[1]
        + 6.0 * sol[0] * sol[1] + 3.0 * sol[1] * sol[1];
    tmp1 = (sol[0] + sol[1] + 1.0) * (sol[0] + sol[1] + 1.0);
    tmp2 = 18.0 - 32.0 * sol[0] + 12.0 * sol[0] * sol[0] + 48.0 * sol[1]
         - 36.0 * sol[0] * sol[1] + 27.0 * sol[1] * sol[1];
    tmp2 = 30.0 + (2.0 * sol[0] - 3.0 * sol[1]) * (2.0 * sol[0] - 3.0 * sol[1]) * tmp2;
    return (1.0 + tmp1 * top) * tmp2;
}
//***********************************
//F19: Hartman-3. sol[D] with D = 3; search range [0,1]; optimum about -3.86.
//(The original poster warned that a[4][D] and p[4][D] "should have 4*4 entries
// but only have 4*3"; for Hartman-3, D = 3, so 4*3 is in fact the expected shape.)
double calculation(double sol[D]) {
    double c[4] = {1.0, 1.2, 3.0, 3.2};
    double a[4][D] = {{3, 10, 30}, {0.1, 10, 35}, {3, 10, 30}, {0.1, 10, 35}};
    double p[4][D] = {{0.3689, 0.1170, 0.2673}, {0.4699, 0.4387, 0.7470},
                      {0.1091, 0.8732, 0.5547}, {0.03815, 0.5743, 0.8828}};
    double sum = 0.0;
    fit++;
    for (int i = 0; i < 4; i++) {
        double top = 0.0;
        for (int j = 0; j < D; j++)
            top += a[i][j] * (sol[j] - p[i][j]) * (sol[j] - p[i][j]);
        sum += c[i] * exp(-top);
    }
    return -sum;  // the original returned an undeclared variable here
}
//***********************************
//F20: Hartman-6 (D = 6)
double calculation(double sol[D]) {
    double c[4] = {1.0, 1.2, 3.0, 3.2};
    double a[4][D] = {{10, 3, 17, 3.5, 1.7, 8}, {0.05, 10, 17, 0.1, 8, 14},
                      {3, 3.5, 1.7, 10, 17, 8}, {17, 8, 0.05, 10, 0.1, 14}};
    double p[4][D] = {{0.1312, 0.1696, 0.5569, 0.0124, 0.8283, 0.5886},
                      {0.2329, 0.4135, 0.8307, 0.3736, 0.1004, 0.9991},
                      {0.2348, 0.1415, 0.3522, 0.2883, 0.3047, 0.6650},
                      {0.4047, 0.8828, 0.8732, 0.5743, 0.1091, 0.0381}};
    double sum = 0.0;
    fit++;
    for (int i = 0; i < 4; i++) {
        double top = 0.0;
        for (int j = 0; j < D; j++)
            top += a[i][j] * (sol[j] - p[i][j]) * (sol[j] - p[i][j]);
        sum += c[i] * exp(-top);
    }
    return -sum;
}
//***********************************
//F21: Shekel-5 (D = 4)
double calculation(double sol[D]) {
    double c[5] = {0.1, 0.2, 0.2, 0.4, 0.4};
    double a[5][4] = {{4, 4, 4, 4}, {1, 1, 1, 1}, {8, 8, 8, 8},
                      {6, 6, 6, 6}, {3, 7, 3, 7}};
    double top = 0.0;
    fit++;
    for (int i = 0; i < 5; i++) {
        double t = c[i];
        for (int j = 0; j < 4; j++)
            t += (sol[j] - a[i][j]) * (sol[j] - a[i][j]);
        top += 1.0 / t;
    }
    return -top;
}
//***********************************
//F22: Shekel-7 (D = 4)
double calculation(double sol[D]) {
    double c[7] = {0.1, 0.2, 0.2, 0.4, 0.4, 0.6, 0.3};
    double a[7][4] = {{4, 4, 4, 4}, {1, 1, 1, 1}, {8, 8, 8, 8}, {6, 6, 6, 6},
                      {3, 7, 3, 7}, {2, 9, 2, 9}, {5, 5, 3, 3}};
    double top = 0.0;
    fit++;
    for (int i = 0; i < 7; i++) {
        double t = c[i];
        for (int j = 0; j < 4; j++)
            t += (sol[j] - a[i][j]) * (sol[j] - a[i][j]);
        top += 1.0 / t;
    }
    return -top;
}
//***********************************
//F23: Shekel-10 (D = 4)
double calculation(double sol[D]) {
    double c[10] = {0.1, 0.2, 0.2, 0.4, 0.4, 0.6, 0.3, 0.7, 0.5, 0.5};
    // The original dump had 8.1 in row 8; the standard Shekel-10 value is 8.0.
    double a[10][4] = {{4, 4, 4, 4}, {1, 1, 1, 1}, {8, 8, 8, 8}, {6, 6, 6, 6},
                       {3, 7, 3, 7}, {2, 9, 2, 9}, {5, 5, 3, 3}, {8, 1, 8, 1},
                       {6, 2, 6, 2}, {7, 3.6, 7, 3.6}};
    double top = 0.0;
    fit++;
    for (int i = 0; i < 10; i++) {
        double t = c[i];
        for (int j = 0; j < 4; j++)
            t += (sol[j] - a[i][j]) * (sol[j] - a[i][j]);
        top += 1.0 / t;
    }
    return -top;
}

Evolutionary programming made faster


Evolutionary Programming Made Faster
Xin Yao, Senior Member, IEEE, Yong Liu, Student Member, IEEE, and Guangming Lin

Abstract—Evolutionary programming (EP) has been applied with success to many numerical and combinatorial optimization problems in recent years. EP has rather slow convergence rates, however, on some function optimization problems. In this paper, a "fast EP" (FEP) is proposed which uses a Cauchy instead of Gaussian mutation as the primary search operator. The relationship between FEP and classical EP (CEP) is similar to that between fast simulated annealing and the classical version. Both analytical and empirical studies have been carried out to evaluate the performance of FEP and CEP for different function optimization problems. This paper shows that FEP is very good at search in a large neighborhood while CEP is better at search in a small local neighborhood. For a suite of 23 benchmark problems, FEP performs much better than CEP for multimodal functions with many local minima while being comparable to CEP in performance for unimodal and multimodal functions with only a few local minima. This paper also shows the relationship between the search step size and the probability of finding a global optimum and thus explains why FEP performs better than CEP on some functions but not on others. In addition, the importance of the neighborhood size and its relationship to the probability of finding a near-optimum is investigated. Based on these analyses, an improved FEP (IFEP) is proposed and tested empirically.
This technique mixes different search operators (mutations). The experimental results show that IFEP performs better than or as well as the better of FEP and CEP for most benchmark problems tested.

Index Terms—Cauchy mutations, evolutionary programming, mixing operators.

I. INTRODUCTION

ALTHOUGH evolutionary programming (EP) was first proposed as an approach to artificial intelligence [1], it has been recently applied with success to many numerical and combinatorial optimization problems [2]–[4]. Optimization by EP can be summarized into two major steps:

1) mutate the solutions in the current population;
2) select the next generation from the mutated and the current solutions.

These two steps can be regarded as a population-based version of the classical generate-and-test method [5], where mutation is used to generate new solutions (offspring) and selection is used to test which of the newly generated solutions should survive to the next generation. Formulating EP as a special case of the generate-and-test method establishes a bridge between EP and other search algorithms, such as evolution strategies, genetic algorithms, simulated annealing (SA), tabu search (TS), and others, and

[Footnote: Manuscript received October 30, 1996; revised February 3, 1998, August 14, 1998, and January 7, 1999. This work was supported in part by the Australian Research Council through its small grant scheme and by a special research grant from the University College, UNSW, ADFA. X. Yao was with the Computational Intelligence Group, School of Computer Science, University College, The University of New South Wales, Australian Defence Force Academy, Canberra, ACT, Australia 2600. He is now with the School of Computer Science, University of Birmingham, Birmingham B15 2TT, U.K. (e-mail: X.Yao@). Y. Liu and G. Lin are with the Computational Intelligence Group, School of Computer Science, University College, The University of New South Wales, Australian Defence Force Academy, Canberra, ACT, Australia 2600 (e-mail: liuy@.au; glin@.au). Publisher Item Identifier S 1089-778X(99).]
thus facilitates cross-fertilization among different research areas. One disadvantage of EP in solving some of the multimodal optimization problems is its slow convergence to a good near-optimum (e.g., […]). Section II describes the function optimization problem considered in this paper and the CEP used to solve it. The CEP algorithm given follows suggestions from Fogel [3], [6] and Bäck and Schwefel [7]. Section III describes the FEP and its implementation. Section IV gives the 23 functions used in our studies. Section V presents the experimental results and discussions on FEP and CEP. Section VI investigates FEP with different scale parameters for its Cauchy mutation. Section VII analyzes FEP and CEP and explains the performance difference between FEP and CEP in depth. Based on such analyses, an improved FEP (IFEP) is proposed and tested in Section VIII. Finally, Section IX concludes with some remarks and future research directions.

II. FUNCTION OPTIMIZATION BY CLASSICAL EVOLUTIONARY PROGRAMMING

A global minimization problem can be formalized as a pair (S, f), where S ⊆ R^n is a bounded set on R^n and f: S → R is an n-dimensional real-valued function. The problem is to find a point x_min ∈ S such that f(x_min) is a global minimum on S. More specifically, it is required to find x_min ∈ S such that

∀x ∈ S: f(x_min) ≤ f(x),

where f does not need to be continuous but it must be bounded. This paper only considers unconstrained function optimization.

Fogel [3], [8] and Bäck and Schwefel [7] have indicated that CEP with self-adaptive mutation usually performs better than CEP without self-adaptive mutation for the functions they tested. Hence the CEP with self-adaptive mutation will be investigated in this paper. According to the description by Bäck and Schwefel [7], the CEP is implemented as follows in this study.¹

1) Generate the initial population of μ individuals, and set k = 1. Each individual is taken as a pair of real-valued vectors (x_i, η_i), ∀i ∈ {1, …, μ}, where the x_i's are objective variables and the η_i's are standard deviations for Gaussian mutations.

2) Evaluate the fitness score of each individual (x_i, η_i), ∀i ∈ {1, …, μ}, of the population based on the objective function, f(x_i).

3) Each parent (x_i, η_i), i = 1, …, μ, creates a single offspring (x_i′, η_i′) by: for j = 1, …, n,

x_i′(j) = x_i(j) + η_i(j)·N_j(0, 1)    (1)
η_i′(j) = η_i(j)·exp(τ′·N(0, 1) + τ·N_j(0, 1))    (2)

where x_i(j), x_i′(j), η_i(j), and η_i′(j) denote the j-th component of the vectors x_i, x_i′, η_i, and η_i′, respectively. N(0, 1) denotes a normally distributed one-dimensional random number with mean zero and standard deviation one; N_j(0, 1) indicates that the random number is generated anew for each value of j. The factors τ and τ′ are commonly set to (√(2√n))⁻¹ and (√(2n))⁻¹ [7], [6].

¹A recent study by Gehlhaar and Fogel [9] showed that swapping the order of (1) and (2) may improve CEP's performance.

4) Calculate the fitness of each offspring (x_i′, η_i′), ∀i ∈ {1, …, μ}.

5) Conduct pairwise comparison over the union of
parents (x_i, η_i) and offspring (x_i′, η_i′), ∀i ∈ {1, …, μ}. For each individual, q opponents are chosen uniformly at random from all the parents and offspring. For each comparison, if the individual's fitness is no smaller than the opponent's, it receives a "win."

6) Select the μ individuals out of (x_i, η_i) and (x_i′, η_i′), ∀i ∈ {1, …, μ}, that have the most wins to be parents of the next generation.

7) Stop if the halting criterion is satisfied; otherwise, k = k + 1 and go to step 3.

III. FAST EVOLUTIONARY PROGRAMMING

The one-dimensional Cauchy density function centered at the origin is defined by

f_t(x) = (1/π)·t/(t² + x²),  −∞ < x < ∞,    (3)

where t > 0 is a scale parameter. Its shape resembles that of the Gaussian density function but approaches the axis so slowly that an expectation does not exist. As a result, the variance of the Cauchy distribution is infinite. Fig. 1 shows the difference between Cauchy and Gaussian functions by plotting them in the same scale.

The FEP studied in this paper is exactly the same as the CEP described in Section II except for (1), which is replaced by the following [11]:

x_i′(j) = x_i(j) + η_i(j)·δ_j    (4)

where δ_j is a Cauchy random variable with the scale parameter t = 1 and is generated anew for each value of j.

[Fig. 1: Comparison between Cauchy and Gaussian density functions.]

IV. BENCHMARK FUNCTIONS

Twenty-three benchmark functions [2], [7], [12], [13] were used in our experimental studies. This number is larger than that offered in many other empirical study papers. This is necessary, however, since the aim here is not to show FEP is better or worse than CEP, but to find out when FEP is better (or worse) than CEP and why. Wolpert and Macready [14], [15] have shown that under certain assumptions no single search algorithm is best on average for all problems. If the number of test problems is small, it would be very difficult to make a generalized conclusion. Using too small a test set also has the potential risk that the algorithm is biased (optimized) toward the chosen problems, while such bias might not be useful for other problems of interest.

The 23 benchmark functions are given in Table I. A more detailed description of each function is given in the Appendix. Functions f1–f13 are high-dimensional problems. Functions f8–f13 are multimodal functions where the number of local minima increases exponentially with the problem dimension [12], [13]. They appear to be the most difficult class of problems for many optimization algorithms (including EP). Functions f14–f23 are low-dimensional functions which have only a few local minima [12]. For unimodal functions, the convergence rates of FEP and CEP are more interesting than the final results
of optimization as there are other methods which are specifically designed to optimize unimodal functions. For multimodal functions, the final results are much more important since they reflect an algorithm's ability of escaping from poor local optima and locating a good near-global optimum.

V. EXPERIMENTAL STUDIES

A. Experimental Setup

In all experiments, the same self-adaptive method [i.e., (2)], the same population size, and the same initial population were used for both CEP and FEP. These parameters follow the suggestions from Bäck and Schwefel [7] and Fogel [2]. The initial population was generated uniformly at random in the range as specified in Table I.

B. Unimodal Functions

The first set of experiments was aimed to compare the convergence rate of CEP and FEP for functions f1–f7.

[Table I: The 23 benchmark functions used in our experimental study, where n is the dimension of the function, f_min is the minimum value of the function, and S ⊆ R^n. A detailed description of all functions is given in the Appendix.]

[Table II: Comparison between CEP and FEP on f1–f7. All results have been averaged over 50 runs, where "mean best" indicates the mean best function values found in the last generation, and "std dev" stands for the standard deviation.]

[…] of the global optimum. CEP is capable of maintaining its nearly constant convergence rate because its search is much more localized than FEP. The different behavior of CEP and FEP on […]

[Fig. 2: Comparison between CEP and FEP on f1–f4. The vertical axis is the function value, and the horizontal axis is the number of generations. The solid lines indicate the results of FEP; the dotted lines, the results of CEP. (a) shows the best results and (b) the average results, both averaged over 50 runs.]

[Fig. 3: Comparison between CEP and FEP on f5–f7. The vertical axis is the function value, and the horizontal axis is the number of generations. The solid lines indicate the results of FEP; the dotted lines, the results of CEP. (a) shows the best results and (b) the average results, both averaged over 50 runs.]

[…] generating long jumps than CEP. Such long jumps enable FEP to move from one plateau to a lower one with relative ease. The rapid convergence of FEP shown in Fig. 3 supports our explanations.

C. Multimodal Functions

1) Multimodal Functions with Many Local Minima: Multimodal functions having many local minima are often regarded as being difficult to optimize. f8–f13 are such functions, where the number of local minima increases exponentially as the dimension of the function increases. Fig. 4 shows the two-dimensional version of f8. The dimensions of these functions were all set to 30 in our experiments. Table III summarizes the final results of CEP and FEP. It is obvious that FEP performs significantly better than CEP consistently for these functions. CEP appeared to become trapped in a poor local optimum and unable to escape from it due to its smaller probability of making long jumps. According to the figures we plotted to observe the evolutionary process, CEP fell into a poor local optimum quite early in a run while

[Fig. 4: The two-dimensional version of f8.]

[Table III: Comparison between CEP and FEP on f8–f13. The results are averaged over 50 runs, where "mean best" indicates the mean best function values found in the last generation and "std dev" stands for the standard deviation.]

[Table IV: Comparison between CEP and FEP on f14–f23. The results are averaged over 50 runs, where "mean best" indicates the mean best function values found in the last generation and "std dev" stands for the standard deviation.]

FEP was able to
improve its solution steadily for a long time. FEP appeared to converge at least at a linear rate with respect to the number of generations. An exponential convergence rate was observed for some problems.

2) Multimodal Functions with Only a Few Local Minima: To evaluate FEP more fully, additional multimodal benchmark functions were also included in our experiments, i.e., f14–f23, where the number of local minima for each function and the dimension of the function are small. Table IV summarizes the results averaged over 50 runs. Interestingly, quite different results have been observed for functions f14–f23. For six (i.e., […]) out of ten functions, no statistically significant difference was found between FEP and CEP. In fact, FEP performed exactly the same as CEP for […] and […].

[Table V: Comparison between CEP and FEP on f8 to f13 with n = 5. The results are averaged over 50 runs, where "mean best" indicates the mean best function values found in the last generation and "std dev" stands for the standard deviation.]

For the four functions where there was a statistically significant difference between FEP and CEP, FEP performed better for […], but was outperformed by CEP for […]. The consistent superiority of FEP over CEP observed for functions f8–f13 was not observed here. The major difference between functions f8–f13 and f14–f23 is that functions f14–f23 appear to be simpler than f8–f13 due to their low dimensionalities and a smaller number of local minima.

To find out whether or not the dimensionality of functions plays a significant role in deciding FEP's and CEP's behavior, another set of experiments on the low-dimensional (n = 5) versions of f8–f13 was carried out. The results averaged over 50 runs are given in Table V. Very similar results to the previous ones on functions f8–f13 were obtained despite the large difference in the dimensionality of functions. FEP still outperforms CEP significantly even when the dimensionality of functions f8–f13 is low.

VI. FEP WITH DIFFERENT SCALE PARAMETERS

The FEP investigated so far used t = 1 in its Cauchy mutation. This value was used for its simplicity. To examine the impact of different t values for the Cauchy mutation, […]. Seven benchmark functions from the three different groups in Table I were used in these experiments. The setup of these experiments is exactly the same as before. Table VI shows the average results over 50 independent runs of FEP for different t parameters. These results show that the optimal t is problem dependent. As analyzed later in Section VII-A, the optimal t […] for a given problem. A good approach to deal with this issue is to use self-adaptation, so that […] values in a population and the whole population can search both globally and locally. The percentage of each type of Cauchy mutation will be self-adaptive, rather than fixed. Hence the population may emphasize either global or local search depending on the different stages in the evolutionary process.

[Table VI: The mean best solutions found by FEP using different scale parameter t in the Cauchy mutation for functions f1 (1500), f2 (2000), f10 (1500), f11 (2000), f21 (100), f22 (100), and f23 (100), where the values in "()" indicate the number of generations used in FEP. All results have been averaged over 50 runs.]

VII. ANALYSIS OF FAST AND CLASSICAL EVOLUTIONARY PROGRAMMING

It has been pointed out in Section III that Cauchy mutation has a higher probability of making long jumps than Gaussian mutation due to its long flat tails shown in Fig. 1. In fact, the likelihood of a Cauchy mutation generating a larger jump than a Gaussian mutation can be estimated by a simple heuristic argument.

[Fig. 5: Evolutionary search as neighborhood search, where x* is the global optimum and ε > 0 is the neighborhood size.]
ε is a small positive number (0 < ε < 2 […]). It is well known that if […], then […] (i.e., […]). It is obvious that Gaussian mutation is much more localized than Cauchy mutation. Similar results can be obtained for a Gaussian distribution with expectation […] and variance […], which is often regarded as the step size of the Gaussian mutation. Fig. 5 illustrates the situation. The derivative can be used to evaluate the impact of […]. According to the mean value theorem for definite integrals [17, p. 322], there exists a number […] such that

[…]    (7)

That is, the larger […] is, the smaller […] will be. A similar analysis can be carried out for Cauchy mutation in FEP. Denote the Cauchy distribution defined by (3) as […]. Then we have […], where […] may not be the same as that in (6) and (7). It is obvious that

[…]    (9)

That is, the larger […] is, the smaller […] will be. Since […] could be regarded as the search step sizes of Gaussian and Cauchy mutations, the above analyses show that a large step size is beneficial (i.e., increases the probability of finding a near-optimal solution) only when the distance between the neighborhood of x* and […] is large.

The analytical results explain why FEP achieved better results than CEP for most of the benchmark problems we tested: the initial population was generated uniformly at random in a relatively large space and was, on average, far away from the global optimum. Cauchy mutation is more likely to generate larger jumps than Gaussian mutation and is thus better in such cases. FEP would be less effective than CEP, however, in the small neighborhood of the global optimum, because Gaussian mutation's step size is smaller (smaller is better in this case). The experimental results on functions […] support this. […] An FEP with a smaller t value for its Cauchy mutation would perform better whenever CEP outperforms FEP with t = 1 on a problem; it implies that this FEP's search step size may be too large. In this case, a Cauchy mutation with a smaller […]. f21 (i.e., Shekel-5) was used here since it appears to pose some difficulties to FEP. First we made the search points closer to the global optimum by generating the initial population uniformly at random in the range of
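As a worked instance of the heuristic jump-size comparison above (the equations in the scanned text are damaged, so the figures below are reconstructed from the standard CDFs rather than taken verbatim from the paper), the probability of a mutation of magnitude greater than two is:

```latex
P\bigl(|N(0,1)| > 2\bigr) = 2\bigl(1 - \Phi(2)\bigr) \approx 0.0455,
\qquad
P\bigl(|\delta| > 2\bigr) = 1 - \tfrac{2}{\pi}\arctan 2 \approx 0.2952,
```

where Φ is the standard normal CDF and δ is a standard Cauchy (t = 1) variable. A Cauchy mutation is thus roughly six times as likely as a Gaussian one to jump beyond two step sizes, which is the sense in which Cauchy mutation favors long jumps.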
2.5 ≤ x_i ≤ 5.5 rather than 0 ≤ x_i ≤ 10, and repeated our previous experiments. (The global optimum of f21 is at (4, 4, 4, 4).) Such a minor variation to the experiment is expected to improve the performance of both CEP and FEP since the initial search points are closer to the global optimum. Note that both Gaussian and Cauchy distributions have higher probabilities of generating points around zero than of generating points far away from zero.

The final experimental results averaged over 50 runs are given in Table VII. Fig. 6 shows the results of CEP and FEP. It is quite clear that the performance of CEP improved much more than that of FEP, since the smaller average distance between search points and the global optimum favors a small step size. The mean best of CEP improved significantly from […] to 7.90, while that of FEP improved only from […] to 5.62.

Then three more sets of experiments were conducted where the search space was expanded ten times, 100 times, and 1000 times, i.e., the initial population was generated uniformly at random in the range of 0 ≤ x_i ≤ 100, 0 ≤ x_i ≤ 1000, and 0 ≤ x_i ≤ 10000, with the a_i's multiplied by 10, 100, and 1000, respectively.

[Table VII: Comparison of CEP's and FEP's final results on f21 when the initial population is generated uniformly at random in the ranges 0 ≤ x_i ≤ 10 and 2.5 ≤ x_i ≤ 5.5. The results were averaged over 50 runs, where "mean best" indicates the mean best function values found in the last generation, and "std dev" stands for the standard deviation. The number of generations for each run was 100.]

[Fig. 6: Comparison between CEP and FEP on f21 when the initial population is generated uniformly at random in the range 2.5 ≤ x_i ≤ 5.5. The solid lines indicate the results of FEP; the dotted lines, the results of CEP. (a) shows the best result and (b) the average result, both averaged over 50 runs. The horizontal axis indicates the number of generations; the vertical axis, the function value.]

Such an expanded search space is expected to make the problem more difficult and thus make CEP and FEP less efficient. The results of the same experiment averaged over 50 runs are shown in Table VIII and Figs. 7–9. It is interesting to note that the
performance of FEP was less affected by the larger search space than CEP. When the search space was increased, the difference disappeared: there was no statistically significant difference between CEP and FEP. When the search space was increased further, FEP even outperformed CEP significantly. It is worth pointing out that a population size of 100 and a maximum of 100 generations are very small numbers for such a huge search space. The population might not have converged by the end of generation 100. This, however, does not affect our conclusion. The experiments still show that Cauchy mutation performs much better than Gaussian mutation when the current search points are far away from the global optimum. Even if a_i's were not multiplied by 10, 100, and 1000, similar results can still be obtained as long as the initial population is generated uniformly at random in the enlarged range.

[Table VIII: Comparison of CEP's and FEP's final results on f21 when the initial population is generated uniformly at random in the ranges 0 <= x_i <= 10, 0 <= x_i <= 100, 0 <= x_i <= 1000, and 0 <= x_i <= 10000, and a_i's were multiplied by 10, 100, and 1000. The results were averaged over 50 runs, where "mean best" indicates the mean best function values found in the last generation and "std dev" stands for the standard deviation. The number of generations for each run was 100.]

[Figs. 7-9: Comparison between CEP and FEP on f21 when the initial population is generated uniformly at random in the ranges 0 <= x_i <= 100, 0 <= x_i <= 1000, and 0 <= x_i <= 10000, with a_i's multiplied by 10, 100, and 1000, respectively. In each figure, (a) shows the best result and (b) the average result, both averaged over 50 runs. The solid lines indicate the results of FEP and the dotted lines the results of CEP. The horizontal axis indicates the number of generations and the vertical axis the function value.]

[Table IX: Comparison of CEP's and FEP's final results on f21 for the same four initialization ranges with a_i's unchanged; the format is the same as in Table VIII.]

[...] and the time used to find the solution? This issue can be approached from the point of view of neighborhood size, i.e., the probability of generating a near-optimum in that neighborhood can be worked out. (The probability of finding a near-optimum is the same as that of generating it when elitism is used.) Although not an exact answer to the issue, the following analysis does provide some insight into such impact. Similar to the analysis in Section VII-A7, the following is true according to the mean value theorem for definite integrals [17, p. 322]: [...]. That is, the probability of generating a near-optimum is governed by the term [...], which grows exponentially faster as [...] increases.

[Table X: Comparison among IFEP, FEP, and CEP on functions f1, f2, f10, f11, f21, f22, and f23. All results have been averaged over 50 runs, where "mean best" indicates the mean best function values found in the last generation.]

A similar analysis can be carried out for Cauchy
mutation using its density function, i.e., (3). Let the density function be [...]. For [...], [...]. Hence the probability of generating a near-optimum in the neighborhood always increases as the neighborhood size increases. While this conclusion is quite straightforward, it is interesting to note that the rate of increase in the probability differs significantly between Gaussian and Cauchy mutation, since [...].

VIII. AN IMPROVED FAST EVOLUTIONARY PROGRAMMING

The previous analyses show the benefits of FEP and CEP in different situations. Generally, Cauchy mutation performs better when the current search point is far away from the global minimum, while Gaussian mutation is better at finding a local optimum in a good region. It would be ideal if Cauchy mutation were used when search points are far away from the global optimum and Gaussian mutation were adopted when search points are in the neighborhood of the global optimum. Unfortunately, the global optimum is usually unknown in practice, making the ideal switch from Cauchy to Gaussian mutation very difficult. Self-adaptive Gaussian mutation [7], [2], [8] is an excellent technique to partially address the problem: the evolutionary algorithm itself will learn when to "switch" from one step size to another. There is room for further improvement, however, to self-adaptive algorithms like CEP or even FEP.

This paper proposes an improved FEP (IFEP) based on mixing (rather than switching) different mutation operators. The idea is to mix the different search biases of Cauchy and Gaussian mutation. The importance of search biases has been pointed out by some earlier studies [18, pp. 375-376]. The implementation of IFEP is very simple. It differs from FEP and CEP only in Step 3 of the algorithm described in Section II. Instead of using (1) (for CEP) or (4) (for FEP) alone, IFEP generates two offspring from each parent, one by Cauchy mutation and the other by Gaussian mutation. The better one is then chosen as the offspring. The rest of the algorithm is exactly the same as FEP and CEP. Chellapilla [19] has recently presented some more results on comparing different mutation operators in EP.

A. Experimental Studies

To carry out a fair comparison among IFEP, FEP, and CEP, the population size of IFEP was reduced to half of that of FEP or CEP in all the following experiments, since each individual in IFEP generates two offspring. Reducing IFEP's population size by half, however, actually puts IFEP at a slight disadvantage because it does not double the time for any operators (such as selection) other than mutation. Nevertheless, such a comparison offers a good and simple compromise. IFEP was tested in the same experimental setup as before. For the sake of clarity and brevity, only some representative functions (out of 23) from each group were tested. Functions [...] and [...] are multimodal functions with many local minima. Functions [...]-[...] are multimodal functions with only a few local minima and are particularly challenging to FEP. Table X summarizes the final results of IFEP in comparison with FEP and CEP. Figs. 10 and 11 show the results of IFEP, FEP, and CEP.

B. Discussions

It is very clear from Table X that IFEP has improved FEP's performance significantly for all test functions except for [...]. Even in the case of [...], IFEP is better than FEP for 25 out of 50 runs. In other words, IFEP's performance is still rather close to FEP's and certainly better than CEP's (35 out of 50 runs) on [...]. These results show that IFEP continues to perform at least as well as FEP on multimodal functions with many minima and also performs very well on unimodal functions and on multimodal functions with only a few local minima, which FEP has difficulty handling. IFEP achieved performance similar to CEP's on these functions. For the two unimodal functions where FEP is significantly outperformed by CEP, IFEP performs better than CEP on [...]; on [...], the difference between IFEP and CEP is much smaller than that between FEP and CEP. IFEP has improved FEP's performance significantly.

[Fig. 10: Comparison among IFEP, FEP, and CEP on functions f1, f2, f10, and f11. The vertical axis is the function value and the horizontal axis is the number of generations. The solid lines indicate the results of IFEP, the dashed lines the results of FEP, and the dotted lines the results of CEP. (a) shows the best results and (b) the average results, all averaged over 50 runs.]
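The IFEP mutation step described above can be sketched as follows. This is a simplified single-parent illustration, not the paper's exact implementation: it assumes the usual lognormal self-adaptation of the step sizes, and the function names and the sphere objective are ours.

```python
import math
import random

def ifep_mutate(x, eta, f):
    """One IFEP mutation step: the parent (x, eta) produces one
    Gaussian offspring and one Cauchy offspring with the same
    self-adapted step sizes, and the better one (lower f) is kept."""
    n = len(x)
    tau = 1.0 / math.sqrt(2.0 * math.sqrt(n))
    tau_prime = 1.0 / math.sqrt(2.0 * n)
    g = random.gauss(0.0, 1.0)  # shared factor of the lognormal update
    new_eta = [e * math.exp(tau_prime * g + tau * random.gauss(0.0, 1.0))
               for e in eta]
    # Gaussian offspring (CEP-style mutation)
    x_gauss = [xi + ei * random.gauss(0.0, 1.0)
               for xi, ei in zip(x, new_eta)]
    # Cauchy offspring (FEP-style mutation); tan(pi*(U - 0.5)) is a
    # standard Cauchy(0, 1) sample
    x_cauchy = [xi + ei * math.tan(math.pi * (random.random() - 0.5))
                for xi, ei in zip(x, new_eta)]
    child = x_gauss if f(x_gauss) <= f(x_cauchy) else x_cauchy
    return child, new_eta

# stand-in objective: the sphere function
def sphere(x):
    return sum(v * v for v in x)

child, child_eta = ifep_mutate([1.0, -2.0, 0.5], [3.0, 3.0, 3.0], sphere)
```

Since the better of the two offspring is kept, the operator inherits the Cauchy bias toward long jumps far from the optimum and the Gaussian bias toward fine local search, which is exactly the mixing idea of IFEP.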

A Survey of Evolutionary Computation

1. What is evolutionary computation. In computer science, evolutionary computation is a subfield of artificial intelligence, and more specifically of computational intelligence, that deals with combinatorial optimization problems.

These algorithms are inspired by the "survival of the fittest" natural-selection mechanism of biological evolution and by the laws governing the transmission of genetic information. A program iteratively simulates this process: the problem to be solved is treated as the environment, and an optimal solution is sought through natural evolution within a population of candidate solutions.

2. The origins of evolutionary computation. The idea of using Darwinian theory to solve problems originated in the 1950s.

In the 1960s, this idea was developed independently in three places.

In the United States, Lawrence J. Fogel proposed evolutionary programming, while John Henry Holland of the University of Michigan drew on the basic ideas of Darwin's theory of biological evolution and Mendel's laws of inheritance and, by extracting, simplifying, and abstracting them, proposed genetic algorithms.

In Germany, Ingo Rechenberg and Hans-Paul Schwefel proposed evolution strategies.

These theories developed independently for about 15 years.

Before the 1980s they attracted little attention: the field itself was not yet mature, and it was limited by the small memory capacity and slow speed of the computers of the time, so no practical applications emerged.

In the early 1990s, genetic programming was proposed as a further branch, and evolutionary computation began to emerge formally as a discipline.

The four branches exchanged ideas frequently, complemented one another's strengths, and blended into new evolutionary algorithms, driving great progress in evolutionary computation.

Nils Aall Barricelli began work on simulating evolution with evolutionary algorithms and artificial life in the 1960s.

A series of papers by Alex Fraser on the simulation of artificial selection greatly advanced this work.

[1] Ingo Rechenberg's work in the 1960s and early 1970s, using evolution strategies to solve complex engineering problems, made artificial evolution a widely recognized optimization method.

Evolutionary Algorithms and Their Applications in Numerical Computation

Suppose the swarm contains s particles, and let P_g(t) denote the best position that the particles in the swarm have visited so far, called the global best position. Then

P_g(t) ∈ {P_0(t), P_1(t), …, P_s(t)},  f(P_g(t)) = min{ f(P_0(t)), f(P_1(t)), …, f(P_s(t)) }

With the definition above, the evolution equations of the basic particle swarm algorithm can be described as:
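A minimal sketch of one particle update in basic PSO, using the standard formulation with inertia weight w and acceleration coefficients c1 and c2 (these parameter values, the velocity clamping bound, and the helper name `pso_step` are illustrative assumptions, not taken from the original text):

```python
import random

def pso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5, v_max=4.0, rng=random):
    """One update of a single particle in basic PSO:
    v(t+1) = w*v(t) + c1*r1*(p_best - x) + c2*r2*(g_best - x)
    x(t+1) = x(t) + v(t+1),
    with each velocity component clipped to [-v_max, v_max]."""
    new_v, new_x = [], []
    for xi, vi, pi, gi in zip(x, v, p_best, g_best):
        r1, r2 = rng.random(), rng.random()
        vel = w * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
        vel = max(-v_max, min(v_max, vel))  # velocity clamping
        new_v.append(vel)
        new_x.append(xi + vel)
    return new_x, new_v

x1, v1 = pso_step([0.0, 0.0], [1.0, -1.0], [1.0, 1.0], [2.0, 2.0])
```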
Genetic algorithms are bio-inspired algorithms in a broad sense; the mechanism they imitate is the process by which all life and intelligence arise and evolve. By simulating Darwin's principle of "survival of the fittest," a genetic algorithm rewards good structures; by simulating Mendel's theory of heredity and variation, it preserves existing structures during the iteration while searching for better ones.

Fitness: genetic algorithms use the concept of fitness to measure how closely each individual in the population may reach or approach the optimal solution in the optimization. Individuals with higher fitness are passed on to the next generation with higher probability, while individuals with lower fitness are passed on with relatively lower probability. The function that measures an individual's fitness is called the fitness function.
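One common way to realize "higher fitness, higher survival probability" is fitness-proportionate (roulette-wheel) selection. The sketch below is a generic illustration under the assumption of positive fitness values; the function name is ours, not something specified in the text.

```python
import random

def roulette_select(population, fitness, rng=random):
    """Pick one individual with probability proportional to its
    fitness (assumes all fitness values are positive)."""
    total = sum(fitness)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for ind, fit in zip(population, fitness):
        acc += fit
        if acc >= r:
            return ind
    return population[-1]  # numerical safety fallback

winner = roulette_select(["a", "b", "c"], [1.0, 2.0, 7.0])
```

With fitness values 1, 2, and 7, individual "c" is selected about 70% of the time, which is exactly the proportional bias described above.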
Single-point crossover (crossover point after the eighth bit):

A: 1 0 1 1 0 1 1 0 | 0 0        A': 1 0 1 1 0 1 1 0 | 1 1
B: 0 1 1 0 1 0 0 1 | 1 1        B': 0 1 1 0 1 0 0 1 | 0 0
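The single-point crossover above can be sketched directly; the crossover point (after the eighth bit) matches the example strings.

```python
def single_point_crossover(a, b, point):
    """Swap the tails of two equal-length bit strings after the
    given crossover point."""
    return a[:point] + b[point:], b[:point] + a[point:]

# the example from the text, with the crossover point after bit 8
A = "1011011000"
B = "0110100111"
A2, B2 = single_point_crossover(A, B, 8)
# A2 == "1011011011", B2 == "0110100100"
```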
Arithmetic crossover:

X_A^{t+1} = α·X_B^t + (1 − α)·X_A^t
X_B^{t+1} = α·X_A^t + (1 − α)·X_B^t
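The arithmetic crossover equations translate directly into code; the mixing coefficient α and the value 0.5 in the usage line are just examples.

```python
def arithmetic_crossover(xa, xb, alpha=0.3):
    """Arithmetic crossover of two real-valued vectors:
    X_A' = alpha*X_B + (1 - alpha)*X_A
    X_B' = alpha*X_A + (1 - alpha)*X_B"""
    xa_new = [alpha * b + (1 - alpha) * a for a, b in zip(xa, xb)]
    xb_new = [alpha * a + (1 - alpha) * b for a, b in zip(xa, xb)]
    return xa_new, xb_new

ya, yb = arithmetic_crossover([1.0, 2.0], [3.0, 4.0], alpha=0.5)
# with alpha = 0.5 both children are the midpoint: [2.0, 3.0]
```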
The velocity is limited to a certain range, i.e., v_ij ∈ [−v_max, v_max]. A particle's maximum velocity v_max determines the resolution of the region between the current position and the best position. If v_max is too large, particles may fly past good solutions; if v_max is too small, particles move too slowly and search efficiency suffers; moreover, once the particles have gathered around some fairly good solution
In order to explain why Cauchy mutation performs better than Gaussian mutation for most benchmark problems used here, theoretical analysis has been carried out to show the importance of the neighborhood size and search step size in EP. It is shown that Cauchy mutation performs better because of its higher probability of making longer jumps. Although the idea behind FEP appears to be simple and straightforward (the larger the search step size, the faster the algorithm gets to the global optimum), no theoretical result has been provided to answer the question why this is the case and how fast it might be. In addition, a large step size may not be beneficial at all if the current search point is already very close to the global optimum. This paper shows for the first time the relationship between the distance to the global optimum and the search step size, and the relationship between the search step size and the probability of finding a near (global) optimum. Based on such analyses, an improved FEP has been proposed and tested empirically.
1. Mutate the solutions in the current population, and
2. Select the next generation from the mutated and the current solutions.
These two steps can be regarded as a population-based version of the classical generate-and-test method.
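The two steps above can be sketched as a minimal population-based generate-and-test loop. This is a deliberately simplified illustration: it uses a fixed mutation step and truncation selection instead of EP's self-adaptive mutation and pairwise tournament, and all names are ours.

```python
import random

def simple_ep(f, dim, pop_size=20, sigma=0.5, generations=100, seed=0):
    """Minimal generate-and-test loop for minimizing f:
    1) mutate every current solution (Gaussian perturbation),
    2) select the next generation from parents and offspring
       (truncation selection here, for simplicity)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [[x + rng.gauss(0.0, sigma) for x in ind] for ind in pop]
        union = pop + offspring          # generate
        union.sort(key=f)                # test
        pop = union[:pop_size]           # select survivors
    return pop[0]

best = simple_ep(lambda x: sum(v * v for v in x), dim=3)
```

Because survivors are drawn from the union of parents and offspring, the best solution found so far is never lost, so the best objective value is monotonically non-increasing over generations.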
Evolutionary Programming Made Faster
Xin Yao, Yong Liu, and Guangming Lin Computational Intelligence Group, School of Computer Science
University College, The University of New South Wales Australian Defence Force Academy, Canberra, ACT, Australia 2600 Email: {xin,liuy,gling}@.au, URL: .au/~xin
Abstract
Evolutionary programming (EP) has been applied with success to many numerical and combinatorial optimization problems in recent years. However, EP has rather slow convergence rates on some function optimization problems. In this paper, a "fast EP" (FEP) is proposed which uses a Cauchy instead of Gaussian mutation as the primary search operator. The relationship between FEP and classical EP (CEP) is similar to that between the fast simulated annealing and the classical version. Both analytical and empirical studies have been carried out to evaluate the performance of FEP and CEP for different function optimization problems. This paper shows that FEP is very good at search in a large neighborhood while CEP is better at search in a small local neighborhood. For a suite of 23 benchmark problems, FEP performs much better than CEP for multimodal functions with many local minima while being comparable to CEP in performance for unimodal and multimodal functions with only a few local minima. This paper also shows the relationship between the search step size and the probability of finding a global optimum, and thus explains why FEP performs better than CEP on some functions but not on others. In addition, the importance of the neighborhood size and its relationship to the probability of finding a near-optimum is investigated. Based on these analyses, an improved FEP (IFEP) is proposed and tested empirically. This technique mixes different search operators (mutations). The experimental results show that IFEP performs better than or as well as the better of FEP and CEP for most benchmark problems tested.
One disadvantage of EP in solving some of the multimodal optimization problems is its slow convergence to a good near-optimum (e.g., f8 to f13 studied in this paper). The generate-and-test formulation of EP indicates that mutation is a key search operator which generates new solutions from the current ones. A new mutation operator based on Cauchy random numbers is proposed and tested on a suite of 23 functions in this paper. The new EP with Cauchy mutation significantly outperforms the classical EP (CEP), which uses Gaussian mutation, on a number of multimodal functions with many local minima while being comparable to CEP for unimodal and multimodal functions with only a few local minima. The new EP is denoted as "fast EP" (FEP) in this paper.
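The claim that Cauchy mutation makes longer jumps more probable than Gaussian mutation can be checked numerically. The sketch below estimates P(|X| > 2) for N(0,1) and Cauchy(0,1); the exact values are roughly 0.0455 and 0.2952 respectively, so the Cauchy tail is over six times heavier at this threshold. The helper name is ours.

```python
import math
import random

def tail_prob(sampler, threshold=2.0, n=100000, seed=1):
    """Monte Carlo estimate of P(|X| > threshold) for a sampler
    that draws one value from the given random generator."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if abs(sampler(rng)) > threshold)
    return hits / n

gauss = lambda rng: rng.gauss(0.0, 1.0)
# standard Cauchy(0, 1) via the inverse CDF: tan(pi*(U - 1/2))
cauchy = lambda rng: math.tan(math.pi * (rng.random() - 0.5))

p_gauss = tail_prob(gauss)    # roughly 0.046
p_cauchy = tail_prob(cauchy)  # roughly 0.295
```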
Extensive empirical studies of both FEP and CEP have been carried out in order to evaluate the relative strength and weakness of FEP and CEP for different problems. The results show that Cauchy mutation is an efficient search operator for a large class of multimodal function optimization problems. FEP's performance can be expected to improve further since all the parameters used in the FEP were set equivalently to those used in CEP.