A Modified Particle Swarm Optimizer Algorithm


Inertia Weight Adjustment Strategies for Particle Swarm Optimization

Li Li 1, Xue Bing 2, Niu Ben 3
(1, 2, 3 Department of Information Management, College of Management, Shenzhen University, Shenzhen, Guangdong 518060, China)
Abstract: The inertia weight is an important starting point for improving the particle swarm algorithm; adjusting it can substantially improve the algorithm's performance.

Building on an introduction to the principle and flow of the particle swarm algorithm, this paper analyzes the important role the inertia weight plays in the algorithm's search process, surveys the improvements to the inertia weight obtained with different methods, offers a brief discussion, and outlines directions for future work.

Keywords: particle swarm algorithm; inertia weight; improvement strategies

1 Introduction

Particle swarm optimization (PSO) is a swarm-intelligence-based, adaptive search optimization method proposed by Eberhart and Kennedy in 1995 in reference [1].

Its basic idea stems from observing the social behavior of groups such as fish schools and bird flocks.

Since its introduction, PSO has been widely used in science and engineering — for example, power system optimization (references [31]–[33]), the traveling salesman problem ([34]), neural network training ([35]), and function optimization ([37], [38]) — thanks to its simple concept, few parameters to tune, ease of implementation, and fast convergence.

PSO has shown strong search capability in applications, but like other global optimization algorithms it suffers from premature convergence to local optima and from oscillation in later stages.

To address these problems, researchers at home and abroad have carried out extensive work and proposed many improvements, including parameter tuning, topology modifications, and hybrid algorithms.

Among the adjustable parameters, the inertia weight is the most important: because it is conceptually simple, easy to understand, admits many improvement methods with ample room for further improvement, and is easy to implement, it has become the focus of many researchers.

Adjusting the value of the inertia weight balances global and local search: a larger weight favors global search capability, while a smaller weight strengthens local search.

Many researchers have adjusted the inertia weight with linearly decreasing, nonlinearly decreasing, and other schedules, improving the algorithm in different respects and to different degrees; a sketch of two such schedules follows below.
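As an illustration, here is a minimal sketch of a linearly decreasing schedule and one possible nonlinear (polynomial) variant. The bounds w_start = 0.9 and w_end = 0.4 and the exponent k are common but illustrative choices, not values fixed by this survey.

```python
def linear_w(t, t_max, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight."""
    return w_start - (w_start - w_end) * t / t_max

def nonlinear_w(t, t_max, w_start=0.9, w_end=0.4, k=2.0):
    """A nonlinearly (polynomially) decreasing schedule."""
    return w_end + (w_start - w_end) * (1 - t / t_max) ** k

# Example: weight at the midpoint of a 3000-iteration run
print(linear_w(1500, 3000))     # 0.65
print(nonlinear_w(1500, 3000))  # 0.525
```

The nonlinear variant stays larger early in the run (favoring global search) and decays faster near the end (favoring local refinement).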

This paper summarizes the inertia weight adjustment strategies proposed by researchers at home and abroad, discusses the strengths and weaknesses of each, and on that basis proposes directions for future work and problems that remain to be solved.

A modified particle swarm optimizer

1. INTRODUCTION
Evolutionary computation techniques (evolutionary programming [4], genetic algorithms [5], evolutionary strategies [9] and genetic programming [8]) are motivated by the evolution of nature. A population of individuals, which encode the problem solutions, are manipulated according to the rule of survival of the fittest through "genetic" operations, such as mutation, crossover and reproduction. A best solution is evolved through the generations. These kinds of techniques have been successfully applied in many areas and a lot of new applications are expected to appear. In contrast to evolutionary computation techniques, Eberhart and Kennedy developed a different algorithm through simulating social behavior [2, 3, 6, 7]. As in other algorithms, a population of individuals exists. This algorithm is called particle swarm optimization (PSO) since it resembles a school of flying birds. In a particle swarm optimizer, instead of using genetic operators, these individuals are "evolved" by cooperation and competition among the individuals themselves through generations. Each particle adjusts its flying according to its own flying experience and its companions' flying experience. Each individual is named as a "particle" which, in fact, represents a potential solution to a problem. Each particle is treated as a point in a D-dimensional space. The ith particle is represented as X_i = (x_i1, x_i2, …, x_iD).

Particle Swarm Optimization Algorithm Explained
• Initialization: initialize the swarm, generating each particle's initial position and velocity at random.
• Evaluation: compute each particle's fitness according to the fitness function to judge its quality.
• Personal best: find the best solution each particle has encountered so far in its search; this best solution is called pbest.
• Global best: find the best solution found by the swarm as a whole; this best solution is called gbest.
A minimal runnable skeleton of these four steps is sketched below.
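The following is a self-contained sketch of the loop these steps describe, written for minimizing an objective; the sphere function, the swarm size, and the coefficient values are illustrative assumptions, not values prescribed by the slides.

```python
import numpy as np

def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=2.0, c2=2.0,
        lo=-5.12, hi=5.12, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))              # initialization
    v = rng.uniform(-(hi - lo), hi - lo, (n_particles, dim))
    pbest = x.copy()                                         # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)                 # evaluation
    g = pbest[pbest_val.argmin()].copy()                     # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val                            # update pbest
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()                 # update gbest
    return g, pbest_val.min()

best, val = pso(lambda z: np.sum(z**2))                      # sphere demo
print(best, val)
```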
Swarm Intelligence
Biologists have shown that although each individual in these social species has limited intelligence and simple behavior, and there is no centralized command, the group formed by these individuals nonetheless exhibits remarkably complex and orderly collective behavior, as if governed by some internal law.
A swarm can be described as a collection of interacting neighboring individuals; bee colonies, ant colonies, and bird flocks are typical examples.
The particles' initial positions and velocities are generated at random and then iterated according to equations (1) and (2) until a satisfactory solution is found. Commonly used PSO variants divide the whole swarm (global) into several neighboring subswarms with partially overlapping membership; each particle then adjusts its position according to the historical best P_l within its subswarm (local), i.e., P_gd in equation (2) is replaced by P_ld.
• Each candidate solution of the optimization problem is imagined as a bird, also called a "particle".
V_i = (V_i1, V_i2, …, V_id)
X_i = (X_i1, X_i2, …, X_id)
[Figure: a particle's move through the search space — from its current position x(t) ("Here I am!"), the particle is drawn toward its own best position p_i (its personal best) and toward the swarm's best position p_g; the motion vector combines the inertia vector with these two attractions, yielding the position update x_i(t+1) = x_i(t) + v_i(t+1).]
Position P_g is the best of all P_i (i = 1, …, n), i.e., gbest; the rate of change of position (velocity) of the ith particle is the vector V_i = (v_i1, v_i2, …, v_iD).

Swarm Intelligence (Lecture 2)
• Eberhart and Shi: a linearly decreasing schedule for w;
• Shi and Eberhart: a fuzzy control method for the inertia weight, taking the current inertia weight value and the algorithm's current best value as inputs to a fuzzy system.
Acceleration coefficients
The velocity and position of particle i in dimension j are updated as
v_ij(t+1) = w·v_ij(t) + c1·r1·(p_ij(t) − x_ij(t)) + c2·r2·(p_gj(t) − x_ij(t))
x_ij(t+1) = x_ij(t) + v_ij(t+1)
where c1 and c2 are the acceleration coefficients.
The particle velocity update formula comprises three terms:
Velocity momentum: the particle's previous velocity, which directs it to keep flying as before.
Cognitive term: the particle's tendency to return to the best position it has visited, i.e., the influence of the particle's own memory.
Social term: the particle's tendency to be attracted toward the current best position, i.e., the influence of the swarm's information.
Under the combined action of these three parts, the particle draws on its own history and exploits global information, continually adjusting its position in search of the optimum; the sketch below separates the three terms.
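A minimal sketch of the update for a single particle, with one term per line; the function name and the scalar r1, r2 draws are illustrative assumptions.

```python
import numpy as np

def velocity_update(v, x, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    r1, r2 = np.random.rand(2)
    momentum = w * v                    # velocity momentum: keep flying as before
    cognitive = c1 * r1 * (pbest - x)   # pull back toward the particle's own best
    social = c2 * r2 * (gbest - x)      # pull toward the swarm's best position
    return momentum + cognitive + social

v_new = velocity_update(np.array([0.1, -0.2]), np.array([1.0, 1.0]),
                        np.array([0.5, 0.5]), np.array([0.0, 0.0]))
print(v_new)
```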
The PSO Algorithm
1. Randomly generate n particles in the search space to form the swarm;
2. Repeat the following:
   for (i = 1 to n)
      compute the fitness f(x_i(t)) of particle i;
      if f(x_i(t)) is better than f(p_i(t−1)) then p_i(t) = x_i(t);  // update particle i's personal best
      set p_g(t) to the p_i(t) with the best fitness;  // update the swarm's global best
      for (j = 1 to d)  // d is the number of decision variables
         update the jth component of particle i by the update formulas;
      end
   end
3. If the termination condition is met, terminate the algorithm and output the result.
Inertia weight
The velocity momentum term causes a particle to keep moving in the direction of its previous velocity. Yuhui Shi [1] introduced an inertia weight w to control the influence of this previous velocity.

Simulation of Electromagnetic Pulse Coupling into a Computer Chassis
The penetration of strong electromagnetic pulses into electronic systems through apertures and slots is an important topic in electromagnetic compatibility, interference, and protection. Reference [1] surveys the formation of computer electromagnetic interference and its effects, and analyzes common interference suppression and isolation methods. Reference [2] studies the coupling resonance characteristics of electromagnetic pulses with narrow-slot cavities. Russell P. Jedlicka et al. [3] used the finite element method (FEM) combined with the method of moments (MoM) to analyze the coupling of electromagnetic waves into complex cavities through twisted narrow slots. This paper applies the finite-difference time-domain (FDTD) method [4] to simulate the coupling of an electromagnetic pulse into a computer chassis and analyzes how the electromagnetic field and energy in the chassis vary with time, in support of research on computer electromagnetic compatibility, interference, and protection.
4 Conclusions
From the simulation of electromagnetic pulse coupling into a computer chassis, the following can be concluded:
(1) High-frequency, high-power electromagnetic pulses couple easily into the computer chassis through apertures and slots; the instantaneous peak of the field penetrating the chassis is large, and the power flow density is strong.
Journal of System Simulation, Vol. 16, No. 12, Dec. 2004
Chen Xiuqiao 1, Hu Yihua 1, Zhang Jianhua 1, Huang Yourui 1,2, He Li 1
(1 Electronic Engineering Institute, Hefei 230037, China; 2 Anhui University of Science and Technology, Huainan 232001, China)
Abstract: Strong electromagnetic pulse energy that couples into a computer chassis through apertures and slots can disturb and damage the computer. This paper applies the finite-difference time-domain method to simulate the process by which an electromagnetic pulse couples into a computer chassis; by analyzing the curves of the electromagnetic field and energy inside the chassis over time, the coupling characteristics of the pulse inside the chassis are obtained. Such coupling simulations can guide research on the electromagnetic compatibility, interference, and protection of computer systems.
The power flow density is characterized by the Poynting vector
P = E × H    (4)
whose x-direction component is
P_x = E_y · H_z − E_z · H_y    (5)
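As a quick numerical illustration of equations (4) and (5) (the field values below are made up, not taken from the paper), the Poynting vector is simply a cross product:

```python
import numpy as np

E = np.array([0.0, 3.0, 4.0])   # illustrative E-field components (V/m)
H = np.array([2.0, 0.0, 1.0])   # illustrative H-field components (A/m)

P = np.cross(E, H)              # Poynting vector, P = E x H  (W/m^2)
Px = E[1] * H[2] - E[2] * H[1]  # x component per equation (5)

print(P)                        # [ 3.  8. -6.]
assert P[0] == Px
```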

Particle Swarm Optimization: Developments, Applications and Resources (Chinese translation)

Russell C. Eberhart, Purdue School of Engineering and Technology, 799 West Michigan Street, Indianapolis, IN 46202 USA, Eberhart@
Yuhui Shi, EDS Embedded Systems Group, 1401 E. Hoffer Street, Kokomo, IN 46982 USA, Yuhui.Shi@
Abstract — This paper focuses on the developments, applications, and resources related to particle swarm optimization, from the perspectives of engineering and computer science.

Developments in the particle swarm algorithm from its introduction in 1995 to the present are reviewed.

These include brief discussions of constriction factors, inertia weights, and tracking dynamic systems.

Applications, both those already developed and promising future application areas, are reviewed.

Finally, resources related to particle swarm optimization are listed, including books, websites, and software.

A particle swarm optimization bibliography is given at the end of the paper.

I. Introduction
Particle swarm optimization (PSO) is an evolutionary computation technique developed by Kennedy and Eberhart in 1995 (Kennedy and Eberhart 1995; Eberhart and Kennedy 1995; Eberhart, Simpson and Dobbins 1996).

Thus, at the time of writing, the particle swarm algorithm is just over five years old.

It is currently being researched and used in more than a dozen countries.

It is now an appropriate time to step back and look at where we are in particle swarm research, how we got here, and where the research may be headed.

This paper reviews the developments, applications, and resources related to the PSO algorithm since its origin in 1995.

The perspective and analysis are those of engineering and computer science; this is not meant to be a comprehensive survey of fields such as the social sciences.

The following sections review the main developments of the particle swarm algorithm since 1995. The original algorithm is presented first, followed by brief discussions of constriction factors, inertia weights, and tracking dynamic systems. Applications are then reviewed, both those already developed and promising future areas.

Applications already developed include human tremor analysis, power system load stabilization, and optimization of product structure.

Swarm Intelligence (Lecture 2)

[1] Y. Shi and R. Eberhart, "A modified particle swarm optimizer," in Proceedings of the IEEE World Congress on Computational Intelligence, Anchorage, AK, 1998, pp. 69-73.
Convergence of PSO
Results obtained by PSO can be further refined/exploited using algorithms with strong local search capability; a hybrid sketch follows the list below.
Some approaches:
• Differential Evolution (DE) (Zhang and Xie, 2003);
• Genetic Algorithm (GA) (Matthew et al., 2005);
• hill climbing;
• Simulated Annealing (SA) (Nasser Sadati et al., 2006);
• Simplex Method (SM) (Fan S K et al., 2007).
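A minimal sketch of this hybridization idea, using a greedy coordinate-wise hill climb as the local refiner; the coarse random-search stage below merely stands in for a PSO result, and the step sizes and budgets are illustrative assumptions.

```python
import numpy as np

def hill_climb(f, x, step=0.5, iters=200, shrink=0.5):
    """Greedy coordinate search used to polish a solution found by PSO."""
    x, fx = x.copy(), f(x)
    for _ in range(iters):
        improved = False
        for j in range(len(x)):
            for d in (step, -step):
                y = x.copy()
                y[j] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= shrink   # shrink the neighborhood and keep refining
    return x, fx

f = lambda z: np.sum(z**2)
rng = np.random.default_rng(1)
coarse = min((rng.uniform(-5, 5, 4) for _ in range(50)), key=f)  # stand-in for PSO output
x, fx = hill_climb(f, coarse)
print(x, fx)   # polished solution, much closer to the optimum at 0
```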
A ring topology has the slowest information transfer, so PSO converges more slowly under it, but the particles have more opportunities to discover the optimum.
Comparing different topologies, Mendes and Kennedy (2002) found that the Von Neumann topology outperforms the others.
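A minimal sketch of computing each particle's neighborhood best under a ring topology (replacing gbest with lbest in the velocity update); the neighborhood radius of 1 is an illustrative choice.

```python
import numpy as np

def ring_lbest(pbest, pbest_val, radius=1):
    """For each particle, the best pbest among its ring neighbors (and itself)."""
    n = len(pbest_val)
    lbest = np.empty_like(pbest)
    for i in range(n):
        neigh = [(i + k) % n for k in range(-radius, radius + 1)]  # wrap around
        j = min(neigh, key=lambda k: pbest_val[k])
        lbest[i] = pbest[j]
    return lbest

pbest = np.array([[1.0], [0.2], [3.0], [0.9]])
vals = np.array([1.0, 0.2, 3.0, 0.9])
print(ring_lbest(pbest, vals))   # each row: best of {left, self, right}
```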

Particle Swarm Optimization [Template]

What is particle swarm optimization? Particle Swarm Optimization (PSO) is variously rendered in Chinese as 粒子群算法, 微粒群算法, or 微粒群优化算法.

It is a population-based, cooperative stochastic search algorithm developed by simulating the foraging behavior of bird flocks.

It is usually considered a form of swarm intelligence (SI).

It can be incorporated into a multi-agent optimization system (MAOS). It was invented by Dr. Eberhart and Dr. Kennedy. PSO simulates the predation behavior of a bird flock.

A flock of birds searches for food at random in a region where there is only one piece of food.

None of the birds knows where the food is, but they all know how far their current position is from it.

What, then, is the best strategy for finding the food? The simplest and most effective one is to search the area around the bird currently nearest to the food.

PSO takes its inspiration from this model and applies it to optimization problems. In PSO, each solution to the optimization problem is a bird in the search space.

We call it a "particle".

All particles have a fitness value determined by the function being optimized, and each particle has a velocity that determines the direction and distance of its flight.

The particles then follow the current best particle in searching the solution space.

PSO is initialized as a group of random particles (random solutions) and then finds the optimum by iterating; in each iteration, a particle updates itself by tracking two "extremes".

The first is the best solution found by the particle itself, called the personal best pBest; the other is the best solution found so far by the whole swarm, the global best gBest.

Alternatively, instead of the whole swarm, only the neighbors of a subset of the best particles can be used; the best value among all neighbors is then the local best.



A Modified Particle Swarm Optimizer Algorithm
Yang Guangyou
(School of Mechanical Engineering, Hubei University of Technology, Wuhan 430068, China)

Abstract: This paper presents a modified particle swarm optimizer algorithm (MPSO). The aggregation degree of the particle swarm is introduced, and the particles' diversity is improved by periodically monitoring it. In the later stage of the run, a Gaussian mutation strategy is applied to the best particle's position, which enhances the particles' capacity to jump out of local minima. Several typical benchmark functions with different dimensions are used for testing. The simulation results show that the proposed method effectively improves the convergence precision and speed of the PSO algorithm.

Keywords: Particle Swarm, Aggregation Degree, Mutation, Optimization.

1 Introduction
Particle swarm optimization (PSO) is a swarm-intelligence optimization method first proposed by Kennedy and Eberhart in 1995 [1, 2]. Like other global optimization algorithms, the PSO algorithm tends to suffer from premature convergence. To overcome this problem, many solutions and improvements have been suggested, including: changing the descent direction of the best particle [3]; renewing the state of the whole swarm or of some particles according to certain criteria [4]; introducing the breeding and subpopulation concepts of GAs [5, 6, 7]; introducing a diversity measure to control the diversity of the swarm [8]; adopting new position and velocity update equations; and cooperative particle swarm optimizers [9, 10]. This paper presents a new model that deals with this issue, yet is conceptually simple and very easy to implement: the aggregation degree of the particle swarm is monitored periodically, and in the later stage of the run a Gaussian mutation is applied to the best particle's position, enhancing the particles' capacity to jump out of local minima.

2 Standard PSO Algorithm
PSO simulates the behavior of bird flocking and uses it to solve optimization problems. PSO is initialized with a group of random particles (solutions x_i) and then searches for optima by updating generations. In every generation, each particle is updated by following two "best" values. The first is the best solution (fitness) it has achieved so far; this value is called pbest (p_i). The other "best" value tracked by the particle swarm optimizer is the best value obtained so far by any particle in the population; this is a global best, called gbest (p_g). The velocity and position of each particle are updated according to its own best encountered position and the best position encountered by any particle, as follows:

v_id = w·v_id + c1·rand()·(p_id − x_id) + c2·rand()·(p_gd − x_id)    (1)
x_id = x_id + v_id    (2)

where v_id is the particle velocity in dimension d, x_id is the current particle position (solution) in dimension d, and w is the inertia weight. p_id and p_gd are defined as stated above, rand() is a random function in the range [0, 1], and c1 and c2 are learning factors, usually c1 = c2 = 2.
If the velocity exceeds a certain limit, called Vmax, this limit is used as the new velocity for this particle in that dimension, thus keeping the particles within the search space.

3 The Modified PSO Algorithm
Studies indicate that an excessively concentrated particle swarm easily runs into local minima owing to loss of population diversity. If the aggregation degree of the particle swarm can be controlled effectively, the ability to find the global minimum is improved.

3.1 Aggregation Degree of the Particle Swarm
The aggregation degree of the particle swarm describes the discrete degree, namely the diversity, of the swarm. It is expressed as a distance between particles. In this paper, the absolute difference of each dimensional coordinate denotes the distance, and its largest value is defined as the aggregation degree of the swarm. If m is the size of the swarm, N is the dimensionality of the problem, x_id is the dth coordinate of the ith particle, and x_jd is the dth coordinate of the jth particle, the aggregation degree is calculated as

d(t) = max{ |x_id − x_jd| : i, j = 1, 2, …, m; i ≠ j; d = 1, 2, …, N }    (3)

The distance between particles can also be measured by the mean Euclidean distance (see reference [6]), but its computational load is relatively large.

3.2 Strategy of Mutation
The mutation operator of the algorithm has two parts. First, the aggregation degree of the particle swarm is monitored periodically (for example, period = 50); if the aggregation degree falls below a given value (d(t) < e), all particles' positions and velocities are reinitialized, while the pbest and gbest values are preserved. Second, when the PSO algorithm cannot find the global optimal point, the gbest is mutated as follows:

gbest_new = gbest·(1 + K·V)    (4)

where V is a random number with a Gaussian distribution. This not only produces small disturbances with high probability to perform local search, but also occasionally produces larger disturbances to jump out of local optima. The initial value of K is set to 1.0 and updated as K = E·K at intervals of f generations, where E is a random number in the range [0.01, 0.9].

3.3 MPSO Algorithm
The modified PSO (MPSO) adds periodic monitoring of the swarm's aggregation degree to the standard PSO; furthermore, the mutation operation on the gbest is performed when the algorithm has not reached the global optimum. The steps of MPSO are as follows:
Step 1. Set the current iteration Iter = 1. Initialize a population of m particles; set each current position as the pbest position, and let gbest be the best particle position of the initial swarm.
Step 2. Evaluate the fitness of each particle.
Step 3. Compare each particle's fitness with its pbest; if the current value is better, set the current position as the pbest position. Furthermore, if the current value is better than gbest, reset gbest to the current index in the particle array.
Step 4. Change the velocity and position of the particle according to equations (1) and (2), respectively.
Step 5. If (Iter % Ie == 0) { calculate the aggregation degree d(t) according to equation (3); if d(t) is less than the given threshold e, reinitialize the particles' velocities and positions; }
Step 6. Iter = Iter + 1. If a stop criterion is met, end the algorithm; otherwise execute the mutation operation on the gbest according to equation (4) and go to Step 2.
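A minimal sketch of the two MPSO ingredients, the aggregation degree of equation (3) and the Gaussian mutation of equation (4); the parameter values echo the paper's settings, but the function names and the exact shapes are our own illustrative reading, not the authors' code.

```python
import numpy as np

def aggregation_degree(x):
    """Eq. (3): the largest per-dimension coordinate spread across particles."""
    return float(np.max(x.max(axis=0) - x.min(axis=0)))

def mutate_gbest(gbest, K, rng):
    """Eq. (4): gbest_new = gbest * (1 + K*V), with V drawn from a Gaussian."""
    return gbest * (1.0 + K * rng.standard_normal(gbest.shape))

rng = np.random.default_rng(0)
x = rng.uniform(-5.12, 5.12, (30, 10))    # swarm of 30 particles, 10 dimensions
K = 1.0
if aggregation_degree(x) < 1e-3:          # threshold e, checked every Ie = 50 iters
    x = rng.uniform(-5.12, 5.12, x.shape) # reinitialize, keeping pbest and gbest
gbest = mutate_gbest(x[0], K, rng)
K *= rng.uniform(0.01, 0.9)               # K = E*K, with E in [0.01, 0.9]
print(aggregation_degree(x), gbest[:3])
```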
4 Results and Discussion
4.1 Benchmark Functions
The comparison functions adopted here are benchmark functions used by many researchers, where x is a real-valued vector of dimension n and x_i is its ith element.
The function f1 is the generalized Rastrigin function:
f1(x) = Σ_{i=1..n} (x_i² − 10·cos(2π·x_i) + 10),  x_i ∈ [−5.12, 5.12]
The function f2 is the generalized Griewank function:
f2(x) = (1/4000)·Σ_{i=1..n} x_i² − Π_{i=1..n} cos(x_i/√i) + 1,  x_i ∈ [−600, 600]
The function f3 is the Rosenbrock function:
f3(x) = Σ_{i=1..n−1} (100·(x_{i+1} − x_i²)² + (x_i − 1)²),  x_i ∈ [−30, 30]
The function f4 is the Ackley function:
f4(x) = −20·exp(−0.2·√((1/n)·Σ_{i=1..n} x_i²)) − exp((1/n)·Σ_{i=1..n} cos(2π·x_i)) + 20 + e,  x_i ∈ [−30, 30]
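A sketch of the four benchmarks as code, vectorized with NumPy; the −0.2 exponent factor in Ackley is the standard value, assumed here because the scanned formula is unreadable.

```python
import numpy as np

def rastrigin(x):
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)

def griewank(x):
    i = np.arange(1, len(x) + 1)
    return np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1

def rosenbrock(x):
    return np.sum(100 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1)**2)

def ackley(x):
    n = len(x)
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

zero = np.zeros(10)
print(rastrigin(zero), griewank(zero), rosenbrock(np.ones(10)), ackley(zero))
# all four global minima evaluate to 0
```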
4.2 Results and Analysis
For comparison, 50 trial runs were performed for every instance. The swarm size is 30, the goal value for all functions is 1.0e-10, and the maximum number of iterations is 3000. Table 1 lists the Vmax values for the functions. Tables 2 to 4 list the results of both the standard PSO and the modified PSO on the four benchmark functions with 10, 20, and 30 dimensions. PSO denotes the standard PSO with w either linearly decreasing from 0.7 to 0.4 or fixed at 0.4; MPSO denotes the modified PSO, with inertia weight w = 0.375 and interval Ie = 50. Avg/Std is the average and standard deviation of the fitness over the 50 trial runs, fevals is the average number of function calls, and Ras is the ratio of runs reaching the goal to the total number of runs. A fitness value below 1.0e-10 is reported as zero. Comparing the results, it is easy to see that MPSO obtains better results than PSO in all cases.

Table 1. Vmax values for the benchmark functions
Function | Vmax
f1 | 10
f2 | 600
f3 | 100
f4 | 30

Table 2. Results of PSO and MPSO with 10 dimensions
Fun. | PSO Avg/Std | PSO fevals | PSO Ras | MPSO Avg/Std | MPSO fevals | MPSO Ras
f1 | 2.965/1.356 | 90009.00 | 1/50 | 0/0 | 17745.00 | 50/50
f2 | 0.091/0.038 | 90009.00 | 0/50 | 0.023/0.083 | 32814.60 | 44/50
f3 | 13.375/25.071 | 90030.00 | 0/50 | 7.102/0.333 | 90030.00 | 0/50
f4 | 0/0 | 66064.80 | 50/50 | 0/0 | 35025.60 | 50/50

Table 3. Results of PSO and MPSO with 20 dimensions
Fun. | PSO Avg/Std | PSO fevals | PSO Ras | MPSO Avg/Std | MPSO fevals | MPSO Ras
f1 | 15.342/4.9735 | 90030.00 | 0/50 | 0/0 | 20259.00 | 50/50
f2 | 0.029/0.022 | 88887.60 | 4/50 | 0.029/0.124 | 24218.40 | 46/50
f3 | 81.864/179.313 | 88887.60 | 4/50 | 17.433/0.235 | 90030.00 | 0/50
f4 | 0/0 | 88155.00 | 48/50 | 0/0 | 52959.00 | 49/50

Table 4. Results of PSO and MPSO with 30 dimensions
Fun. | PSO Avg/Std | PSO fevals | PSO Ras | MPSO Avg/Std | MPSO fevals | MPSO Ras
f1 | 40.020/7.860 | 90030.00 | 0/50 | 0/0 | 20380.20 | 50/50
f2 | 0.012/0.015 | 89724.00 | 9/50 | 0/0 | 17225.40 | 50/50
f3 | 130.629/280.349 | 90030.00 | 0/50 | 27.962/0.464 | 90030.00 | 0/50
f4 | 0.019/0.132 | 90030.00 | 0/50 | 0/4.80E-10 | 58921.80 | 46/50

Figures 1 to 4 show typical convergence results (log10 of fitness versus generations) of PSO and MPSO over 3000 generations on the four benchmark functions with 10 and 30 dimensions; in each case MPSO performs better than PSO.
Fig. 1. Performance comparison on Rastrigin: (a) dimension = 10, (b) dimension = 30
Fig. 2. Performance comparison on Griewank: (a) dimension = 10, (b) dimension = 30
Fig. 3. Performance comparison on Rosenbrock: (a) dimension = 10, (b) dimension = 30
Fig. 4. Performance comparison on Ackley: (a) dimension = 10, (b) dimension = 30

5 Conclusion
In this paper, we presented a modified PSO. Two new features are added to PSO: the aggregation degree and a Gaussian mutation strategy. The simulation results show that the proposed method effectively improves the convergence precision and speed of the PSO algorithm, performing better than the standard PSO in all test cases. Further study of applications is the subject of future work.

Acknowledgments
This work is supported by the Hubei Key Lab of Manufacturing Quality Engineering (LMQ2005A04).

References
[1] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proc. IEEE Int'l Conf. on Neural Networks, vol. IV, IEEE Service Center, 1995, pp. 1942-1948.
[2] R. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proc. 6th Int. Symposium on Micro Machine and Human Science, 1995, pp. 39-43.
[3] T. Krink, J. S. Vesterstrøm, and J. Riget, "Particle swarm optimization with spatial particle extension," in Proc. 2002 Congress on Evolutionary Computation, vol. 2, 2002, pp. 1474-1479.
[4] X. Xie, W. Zhang, and Z. Yang, "Dissipative particle swarm optimization," in Proc. 2002 Congress on Evolutionary Computation (CEC'02), vol. 2, 2002, pp. 1456-1461.
[5] P. J. Angeline, "Evolutionary optimization versus particle swarm optimization: philosophy and performance differences," in Evolutionary Programming VII, 1998, pp. 601-610.
[6] M. Lovbjerg, T. K. Rasmussen, and T. Krink, "Hybrid particle swarm optimization with breeding and subpopulations," in Proc. 3rd Genetic and Evolutionary Computation Conference, San Francisco, USA, 2001.
[7] P. J. Angeline, "Using selection to improve particle swarm optimization," in Proc. IEEE Conference on Evolutionary Computation (ICEC), 1998, pp. 84-89.
[8] J. Riget and J. S. Vesterstrøm, "A diversity-guided particle swarm optimizer - the ARPSO," University of Aarhus, EVALife, 2002.
[9] H. J. Yu, L. P. Zhang, and S. X. Hu, "Adaptive particle swarm optimization algorithm based on feedback mechanism," Journal of Zhejiang University (Engineering Science), vol. 39, no. 9, 2005, pp. 1286-1291.
[10] F. van den Bergh and A. P. Engelbrecht, "Training product unit networks using cooperative particle swarm optimizers," in Proc. 3rd Genetic and Evolutionary Computation Conference, San Francisco, USA, 2001, pp. 126-131.

Author Biography
Yang Guangyou, Ph.D., Professor. Research interests are in the area of intelligent computing and control.
