A New Diversity Guided Particle Swarm Optimization with Mutation
A New Form of Particle Swarm Optimization (一种新形式的微粒群算法)

Replacing $p_g$ with the center of all particles' current best positions, $p_{\text{mean}}$ (i.e., their mean position), the velocity update equation (1) becomes

$v_{ij}(t+1) = w(t)\,v_{ij}(t) + c_1 r_1 \left(p_{ij}(t) - x_{ij}(t)\right) + c_2 r_2 \left(p_{\text{mean},j}(t) - x_{ij}(t)\right)$    (5)

Equations (5) and (2) together constitute the new particle swarm algorithm, denoted MPSO.
YUAN Dai-lin, CHENG Shi-juan, CHEN Qiu. New formal Particle Swarm Optimization. Computer Engineering and Applications, 2008, 44(33): 57-59.
Abstract: The standard particle swarm optimization algorithm (PSO) shows poor performance when optimizing multimodal and higher-dimensional functions. A new form of particle swarm optimization (MPSO) is advanced, which replaces the global best position ($p_g$) with the center of all individual best positions ($p_{\text{mean}}$). The diversity of the colony is thus increased, and premature convergence is avoided to some degree, while the concise iterative formulation is kept. Simulations on 3 test functions show that MPSO has a better ability to find the global optimum than the standard particle swarm optimization algorithm.

Key words: particle swarm optimization; premature convergence; function optimization
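For illustration, the modified update of Eq. (5) amounts to a one-line change to standard PSO; the sketch below is a hypothetical rendering (the inertia value and function names are illustrative, not taken from the paper):

```python
import numpy as np

def mpso_velocity(v, x, pbest, w=0.7, c1=2.0, c2=2.0):
    """Eq. (5): steer toward the mean of all personal bests (p_mean)
    instead of the single global best p_g."""
    p_mean = pbest.mean(axis=0)                 # center of all personal best positions
    r1 = np.random.rand(*x.shape)
    r2 = np.random.rand(*x.shape)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (p_mean - x)
```

Because every particle is attracted to the swarm's average best rather than a single leader, the attraction target moves more slowly, which is what preserves diversity in MPSO.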
An improved krill herd algorithm: Krill herd with linear decreasing step

An improved krill herd algorithm: Krill herd with linear decreasing step
Junpeng Li a,b, Yinggan Tang a, Changchun Hua a, Xinping Guan a,b
a Institute of Electrical Engineering, Yanshan University, Qinhuangdao 066004, China
b Department of Automation, Shanghai Jiao Tong University, Shanghai 200240, China
Corresponding author at: Institute of Electrical Engineering, Yanshan University, Qinhuangdao 066004, China. E-mail address: zicheyue.324@ (J. Li).
doi: 10.1016/j.amc.2014.01.146. © 2014 Elsevier Inc. All rights reserved.

Keywords: Swarm intelligent algorithms; Krill herd; Optimization; Linear decreasing

Abstract: Krill herd (KH), inspired by the herding behavior of krill individuals, is a new swarm intelligent algorithm which has been shown to perform better than other swarm intelligent algorithms. However, it still has some weak points. In this paper, we analyze a deficiency of KH, namely that it cannot achieve an excellent balance between exploration and exploitation during optimization, and propose an improved KH: krill herd with linear decreasing step (KHLD). Twenty benchmark functions are used to verify the effectiveness of these improvements, and it is illustrated that, in most cases, the performance of KHLD is superior to that of the standard KH.

1. Introduction

The collective intelligent behavior of insect or animal groups in nature, such as flocks of birds, colonies of ants, schools of fish, and swarms of bees and termites, has attracted the attention of researchers. The aggregate behavior of insects or animals is called swarm behavior. Entomologists have studied this collective behavior to model biological swarms, and engineers have applied these models as a framework for solving complex real-world problems. This branch of artificial intelligence, which deals with the collective behavior of swarms through complex interactions of individuals without supervision, is referred to as swarm intelligence. Bonabeau defined swarm intelligence as "any attempt to design algorithms or distributed problem-solving devices inspired by the collective behavior of the social insect colonies and other animal societies" [1]. Swarm intelligence has advantages such as scalability, fault tolerance, adaptation, speed, modularity, autonomy and parallelism [2].

The key components of swarm intelligence are self-organization and division of labor. In a self-organizing system, each unit may respond to local stimuli individually and act together with the others to accomplish a global task via division of labor, without centralized supervision; the entire system can adapt to internal and external changes efficiently. Bonabeau et al. characterized four basic properties on which self-organization relies: positive feedback, negative feedback, fluctuations and multiple interactions [1]. Positive feedback means that an individual recruits other individuals by some directive, such as the dancing of bees that leads other bees to a specific food source site. Negative feedback avoids all individuals accumulating on the same task by counterbalancing the attraction, such as abandoning an exhausted food source. Fluctuations are random behaviors of individuals that explore new states, such as the random flights of scouts in a bee swarm. Multiple interactions are the basis on which these tasks are carried out by certain rules.

Nowadays, many swarm intelligent optimization algorithms have been proposed, such as the Artificial Immune Algorithm (AIA) [3,4], Ant Colony Optimization (ACO) [5,6], Particle Swarm Optimization (PSO) [7,8], and Artificial Bee Colony (ABC) [9,10]. In fact, a swarm can be considered as any collection of interacting agents or individuals. An immune system can be considered as a swarm of cells and molecules, just as a crowd is a swarm of people. In ACO, an ant colony is thought of as a swarm whose individual agents are ants. In PSO, a flock of birds is a swarm of birds. In ABC, a bee colony is a swarm of bees consisting of three kinds of bees with a division of labor for foraging. There are also a number of newer swarm intelligent optimization algorithms, such as the Bat Algorithm (BA) [11-13], the Cuckoo Search (CS) algorithm [14,15], and the Firefly Algorithm (FA) [16,17].

Krill herd (KH) is a new swarm intelligent optimization algorithm proposed by Gandomi and Alavi in 2012 [18], inspired by the herding behavior of krill individuals. KH is a novel swarm intelligence method for optimizing possibly nondifferentiable and nonlinear complex functions in continuous space. Several improved KH variants have been proposed and applied to engineering problems. Wang et al. proposed a number of variants of KH [19-24]. In [19], a novel hybrid metaheuristic optimization approach HS/KH was proposed, in which harmony search (HS) is applied to mutate between krill during the krill updating process instead of the physical diffusion used in KH. In [20], a hybrid differential evolution (HDE) operator is added into the krill updating, named hybrid differential evolution KH (DEKH). In [21], a stud selection and crossover (SSC) operator is introduced into KH during the krill updating process. In [22], a series of chaotic particle-swarm krill herd (CPKH) algorithms are proposed, in which a chaos sequence is introduced into the KH algorithm so as to further enhance its global search ability. In [23], a new krill selecting (KS) operator based on simulated annealing (SA) is used to refine krill behavior when updating the krill's position, so as to enhance reliability and robustness. In [24], a Lévy-flight krill herd (LKH) algorithm is proposed, which includes a new local Lévy-flight (LLF) operator. In [25], Gandomi et al. applied the krill herd (KH) algorithm to solve truss optimization problems.

Although KH demonstrates better performance than other swarm intelligent algorithms, it has some weak points, such as the lack of regulation of exploration and exploitation. In this paper we analyze how the parameters affect the balance of exploration and exploitation in KH, and then propose an improved KH whose exploration and exploitation change with the iteration.

The rest of this paper is organized as follows. In Section 2, we introduce the KH algorithm. Our improved KH algorithm is presented in Section 3. In Section 4, we offer a large number of corresponding simulations. Section 5 presents the conclusions resulting from the study.

2. Basic KH

Krill herd (KH) is a novel biologically-inspired algorithm proposed by Gandomi and Alavi in 2012 [18] for solving optimization tasks. The KH algorithm is based on the simulation of the herding of krill swarms in response to specific biological and environmental processes. Nearly all of the necessary coefficients for implementing the algorithm are obtained from real-world empirical studies. The fitness of each krill individual is defined by its distance from food and from the highest density of the swarm. The time-dependent position of an individual krill is determined by three essential actions:

i. Movement induced by other krill individuals.
ii. Foraging activity.
iii. Random diffusion.
The KH algorithm adopts the following Lagrangian model in a d-dimensional decision space:

$\frac{dX_i}{dt} = N_i + F_i + D_i$    (1)

where $N_i$, $F_i$ and $D_i$ are the motion induced by other krill individuals, the foraging motion, and the physical diffusion of the i-th krill individual, respectively.

For a krill individual, the induced movement is defined as

$N_i^{new} = N^{max}\alpha_i + \omega_n N_i^{old}$    (2)

where

$\alpha_i = \alpha_i^{local} + \alpha_i^{target}$    (3)

and $N^{max}$ is the maximum induced speed, $\omega_n$ is the inertia weight of the induced motion in the range [0, 1], $N_i^{old}$ is the last induced motion, $\alpha_i^{local}$ is the local effect provided by the neighbors, and $\alpha_i^{target}$ is the target direction effect provided by the best krill individual.

The foraging motion, formulated in terms of the food location and the previous experience of the food location, is expressed for the i-th krill individual as

$F_i = V_f\beta_i + \omega_f F_i^{old}$    (4)

where

$\beta_i = \beta_i^{food} + \beta_i^{best}$    (5)

and $V_f$ is the foraging speed, $\omega_f$ is the inertia weight of the foraging motion in the range [0, 1], $F_i^{old}$ is the last foraging motion, $\beta_i^{food}$ is the food attraction and $\beta_i^{best}$ is the effect of the best fitness of the i-th krill so far.

The physical diffusion, considered to be a random process, is formulated as

$D_i = D^{max}\delta$    (6)

where $D^{max}$ is the maximum diffusion speed and $\delta$ is a random directional vector whose entries are random values in [-1, 1].

In general, the defined motions frequently change the position of a krill individual toward the best fitness. Using the different effective parameters of the motion over time, the position vector of a krill individual during the interval t to t + Δt is given by

$X_i(t + \Delta t) = X_i(t) + \Delta t\,\frac{dX_i}{dt}$    (7)

where Δt is one of the most important constants. We define $LB_j$ and $UB_j$ as the lower and upper bounds of the j-th variable (j = 1, 2, ..., NV), respectively, where NV is the total number of variables. Since Δt should depend on the search space, it can be obtained from

$\Delta t = C_t \sum_{j=1}^{NV} (UB_j - LB_j)$    (8)

To improve the performance of the above algorithm, crossover and mutation, which are GA (genetic algorithm) mechanisms, are incorporated into it. More details about the KH algorithm can be found in [18]. The KH algorithm can be described briefly by the following steps:

Step 1 Initialization. Randomly initialize the population in the search space.
Step 2 Fitness evaluation. Evaluate the fitness of each krill individual according to its position.
Step 3 Motion calculation. Motion induced by the presence of other individuals; foraging motion; physical diffusion.
Step 4 Genetic operators. Crossover; mutation.
Step 5 Update the krill individual positions in the search space.
Step 6 Repeat steps 2-5 until a stopping criterion is satisfied or a predefined number of iterations is completed.

In the comparison with other swarm intelligent algorithms in [18], the KH with crossover operator had the best performance, which confirms its robustness. However, through analysis and simulation we find that the balance of exploration and exploitation in KH can hardly achieve the ideal state, due to the stationary parameter Ct. So we propose an improved KH with a varying parameter Ct. The details are described in Section 3.
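For concreteness, the following is a minimal Python sketch of one iteration of the basic KH motion model of Eqs. (1)-(8). The alpha and beta terms are simplified stand-ins for the local/target and food/best effects (their full definitions in [18] involve neighbor sensing distances omitted here), and all parameter values are illustrative assumptions:

```python
import numpy as np

def kh_step(X, N_old, F_old, fitness, lb, ub,
            N_max=0.01, V_f=0.02, D_max=0.005,
            w_n=0.5, w_f=0.5, C_t=0.5):
    """One krill herd update following Eqs. (1)-(8).
    X: (n, d) positions; N_old, F_old: (n, d) previous motions.
    Assumes positive fitness values (minimization)."""
    n, d = X.shape
    f = np.apply_along_axis(fitness, 1, X)
    best = X[np.argmin(f)]                                   # best krill so far
    food = (X / f[:, None]).sum(0) / (1.0 / f).sum()         # fitness-weighted food center
    alpha = (best - X) / (np.linalg.norm(best - X, axis=1, keepdims=True) + 1e-12)
    beta = (food - X) / (np.linalg.norm(food - X, axis=1, keepdims=True) + 1e-12)
    N_new = N_max * alpha + w_n * N_old                      # Eq. (2): induced motion
    F_new = V_f * beta + w_f * F_old                         # Eq. (4): foraging motion
    D = D_max * np.random.uniform(-1.0, 1.0, (n, d))         # Eq. (6): physical diffusion
    dt = C_t * np.sum(ub - lb)                               # Eq. (8): step scale
    X_new = np.clip(X + dt * (N_new + F_new + D), lb, ub)    # Eqs. (1) and (7)
    return X_new, N_new, F_new
```

Note how Ct enters only through the step scale dt, which is why the paper singles it out for tuning.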
3. Our improved KH

3.1. Analysis of Ct

It is apparent that the parameter Δt works as a scale factor on the speed vector, so the value of Δt has an absolutely critical effect on the performance of KH. From Eq. (8) we know that Δt depends entirely on the search space and Ct; the parameter Ct therefore plays as important a role as Δt for a specific optimization problem, and Ct should be carefully tuned according to the specific optimization problem. To further understand the effect of the parameter Ct on the search progress of the krill herd, we conducted the following experiment. First, we introduce the test function, the Ackley function, whose detailed description is as follows.

Ackley function: $f(X) = \frac{1}{n}\sum_{i=1}^{n}(-x_i)\sin\left(\sqrt{|x_i|}\right)$. Optimal $X^*$: $x_i^* = 0$, $i = 1, \ldots, n$. Optimal value: $f(X^*) = 0$. Fig. 1 shows the graph of this function for n = 2.

We use the standard KH with 50 krill to search for the optimal (minimal) value of this function. Fig. 2 shows the trajectories of three krill with different Ct over 50 steps. We can see that the larger Ct is, the larger each step is. When Ct is large enough, the step is large enough that the krill herd can search the whole space, but it cannot search a local area carefully. When Ct is small, the step is small, so the krill herd can exploit a local area but cannot explore the global area. Guided by everyday experience, we would like the krill herd to explore the global area at the beginning; as the number of iterations increases, the krill herd's step should gradually become smaller and smaller to encourage local search ability; and in the final stage the krill herd just needs to search a local area carefully, without any exploration. Based on this idea, we propose our improved krill herd algorithm with a linearly decreasing Ct (KHLD). The detailed description is given in Section 3.2.

3.2. Our algorithm

From the above analysis, it is clear that better performance can be achieved by applying a varying Ct. We give Ct the following formulation:

$C_t(t) = C_t^{max} - \frac{C_t^{max} - C_t^{min}}{t_{max}} \cdot t$    (9)

where $C_t^{max}$ and $C_t^{min}$ are the maximum and minimum values of Ct, $t_{max}$ is the maximum number of iterations, and t is the present iteration number; Ct(t) thus decreases linearly from $C_t^{max}$ to $C_t^{min}$ over the iterations. Below is the description of KHLD (see also the sketch after the steps).

Step 1 Initialization. Randomly initialize the population in the search space.
Step 2 Fitness evaluation. Evaluate the fitness of each krill individual according to its position.
Step 3 Motion calculation. Motion induced by the presence of other individuals; foraging motion; physical diffusion.
Step 4 Genetic operators. Crossover; mutation.
Step 5 Calculation of Ct. Calculate Ct according to Eq. (9).
Step 6 Update the krill individual positions in the search space.
Step 7 Repeat steps 2-6 until a stopping criterion is satisfied or a predefined number of iterations is completed.
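The only change KHLD makes to the basic loop is Step 5. A sketch of the schedule of Eq. (9), with the Ct_max = 2, Ct_min = 0 setting used in the experiments of Section 4 (kh_step refers to the sketch after Section 2):

```python
def ct_schedule(t, t_max, ct_max=2.0, ct_min=0.0):
    """Eq. (9): Ct decreases linearly from ct_max to ct_min."""
    return ct_max - (ct_max - ct_min) * t / t_max

# KHLD main loop (sketch):
# for t in range(t_max):
#     C_t = ct_schedule(t, t_max)
#     X, N_old, F_old = kh_step(X, N_old, F_old, fitness, lb, ub, C_t=C_t)
```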
4. Simulation

A swarm intelligent algorithm may perform well one time and a little worse the next time, because of the random nature of swarm intelligent algorithms, so the performance of a swarm intelligent algorithm cannot be judged by the result of a single run. Many simulations with independent population initializations should be made to obtain a useful conclusion about performance; therefore, in this study the results are obtained over 30 trials. In addition, it should be noted that, in [18], Gandomi and Alavi showed that, compared with the other swarm intelligent algorithms, the standard KH (especially the standard KH with crossover operator) performed best, so we only need to compare our improved KH with the standard KH. Depending on whether a genetic operator is included, eight kinds of algorithms are defined as follows.

1. KHLD without any operators (KHLD).
2. KHLD with crossover operator (KHLD-C).
3. KHLD with mutation operator (KHLD-M).
4. KHLD with crossover and mutation operators (KHLD-CM).
5. Standard KH without any operators (KH).
6. Standard KH with crossover operator (KH-C).
7. Standard KH with mutation operator (KH-M).
8. Standard KH with crossover and mutation operators (KH-CM).

Table 2. Mean and standard deviation of objective functions with fixed dimension, reached after 10,000 function evaluations (30 runs; each cell gives mean (standard deviation)).

Function (D) | KHLD | KHLD-C | KHLD-M | KHLD-CM | KH | KH-C | KH-M | KH-CM
f1 (2) | 4.1512E-5 (8.2332E-5) | 2.6416E-4 (1.3212E-4) | 2.8472E-4 (2.2260E-4) | 1.9373E-4 (7.3919E-5) | 5.2768E-4 (5.8817E-4) | 4.7661E-4 (3.1783E-4) | 0.0032 (0.0028) | 0.0020 (0.0035)
f2 (2) | 9.3123E-7 (9.1295E-7) | 6.1004E-8 (6.8996E-8) | 9.3476E-7 (4.6012E-7) | 1.1576E-7 (8.0299E-8) | 3.3937E-7 (2.0102E-7) | 1.1728E-7 (9.6154E-8) | 7.0031E-7 (1.1626E-6) | 4.4825E-7 (7.4985E-7)
f3 (2) | 2.9945E-5 (5.2523E-5) | 1.8903E-4 (1.6474E-4) | 3.0196E-4 (3.8683E-5) | 4.9179E-5 (3.6461E-5) | 1.6510E-4 (2.2208E-4) | 4.0937E-4 (4.3339E-4) | 4.8580E-5 (3.9493E-5) | 7.1720E-5 (4.5124E-5)
f4 (2) | -1.0000 (4.9620E-8) | -1.0000 (6.9216E-9) | -1.0000 (6.4165E-8) | -1.0000 (1.4486E-8) | -1.0000 (1.9691E-7) | -1.0000 (8.4098E-9) | -1.0000 (9.9838E-8) | -1.0000 (6.2099E-7)
f5 (2) | 3.0242 (0.0154) | 3.0160 (0.0138) | 3.0060 (0.0041) | 3.0387 (0.0397) | 3.0277 (0.0315) | 3.1041 (0.0807) | 3.0255 (0.0153) | 3.0385 (0.0647)
f6 (3) | -3.8628 (1.1648E-5) | -3.8616 (6.9570E-4) | -3.8628 (3.8457E-6) | -3.8623 (3.3301E-4) | -3.8628 (1.2952E-5) | -3.8620 (4.8597E-4) | -3.8628 (1.0473E-5) | -3.8623 (4.8792E-4)
f7 (6) | -3.2702 (0.0557) | -3.2476 (0.0571) | -3.2488 (0.0573) | -3.2940 (0.0458) | -3.2828 (0.0489) | -3.2472 (0.0575) | -3.2700 (0.0551) | -3.2720 (0.0588)
f8 (2) | -1.0000 (3.2970E-7) | -1.0000 (1.0701E-9) | -1.0000 (7.3466E-8) | -1.0000 (8.4851E-9) | -1.0000 (4.5145E-7) | -1.0000 (1.3677E-9) | -1.0000 (1.3166E-7) | -1.0000 (2.4152E-8)
f9 (4) | 0.0104 (0.0046) | 0.0069 (0.0026) | 0.0061 (0.0031) | 0.0056 (6.2819E-4) | 0.9190 (0.3498) | 0.1980 (0.2352) | 0.2735 (0.3738) | 0.0610 (0.0687)
f10 (2) | -186.7307 (9.5661E-5) | -186.6735 (0.0226) | -186.7306 (3.6640E-4) | -186.6833 (0.0302) | -186.7238 (0.0085) | -186.6798 (0.0694) | -186.7279 (0.0017) | -186.7206 (0.0069)
f11 (2) | 1.5326E-6 (1.1721E-6) | 1.1166E-6 (8.4358E-7) | 1.5567E-7 (2.7176E-7) | 4.1767E-7 (8.0794E-7) | 7.5757E-7 (5.9680E-7) | 7.9118E-7 (9.1892E-7) | 1.3988E-6 (1.1377E-6) | 1.8188E-6 (3.6180E-6)

Table 3. Mean and standard deviation of objective functions reached after 10,000 function evaluations in dimension D = 10 (30 runs; mean (standard deviation)).

Function | KHLD | KHLD-C | KHLD-M | KHLD-CM | KH | KH-C | KH-M | KH-CM
f12 | 0.0013 (3.8343E-4) | 0.0543 (0.0411) | 0.0068 (0.0032) | 0.0933 (0.0409) | 0.0031 (0.0015) | 0.0396 (0.0280) | 0.0070 (0.0063) | 0.0659 (0.0239)
f13 | 0.2462 (7.3259E-4) | 0.2460 (4.1947E-4) | 0.2429 (0.0063) | 0.2459 (2.8748E-4) | 0.2457 (4.5830E-5) | 0.2512 (0.0080) | 0.2458 (4.0779E-4) | 0.2476 (0.0021)
f14 | 1.3489E-5 (2.6548E-4) | 3.9878E-4 (6.9403E-4) | 2.6679E-4 (4.2975E-4) | 0.0046 (0.0036) | 4.6235E-5 (7.3152E-5) | 0.0149 (0.0248) | 3.9047E-4 (3.8297E-4) | 0.0068 (0.0035)
f15 | 1.0026E-6 (1.2248E-6) | 1.4410E-4 (1.0017E-4) | 0.9729E-5 (1.7461E-5) | 2.3421E-4 (2.2542E-4) | 1.0525E-6 (1.3746E-6) | 1.5257E-4 (1.1123E-4) | 1.3741E-5 (1.6100E-5) | 3.4787E-4 (2.5158E-4)
f16 | 1.3373E-6 (2.0980E-6) | 1.0073E-5 (1.1279E-5) | 6.0334E-5 (5.7531E-5) | 3.0822E-5 (3.0255E-5) | 1.0001E-5 (1.7241E-5) | 9.2792E-5 (8.7125E-5) | 2.9383E-5 (4.6152E-5) | 8.6230E-5 (1.2864E-4)
f17 | 1.5528E-4 (1.0729E-4) | 1.7178E-4 (2.4636E-4) | 1.8570E-4 (1.7434E-4) | 9.3171E-4 (0.0014) | 2.0033E-4 (3.5100E-4) | 3.4707E-4 (4.4446E-4) | 1.8922E-4 (2.5189E-4) | 0.0013 (0.0017)
f18 | 0.0103 (0.0133) | 0.01452 (0.1334) | 0.0091 (0.0061) | 0.0617 (0.0786) | 0.0192 (0.0211) | 0.4939 (0.5267) | 0.0155 (0.0122) | 0.5242 (0.7204)
f19 | 3.0651E-6 (2.1703E-6) | 1.4356E-4 (1.1979E-4) | 1.4543E-5 (2.4349E-5) | 0.0011 (0.0016) | 3.3271E-6 (2.5085E-6) | 0.0021 (0.0026) | 3.9480E-5 (4.2103E-5) | 0.0014 (0.0014)
f20 | 2.1231E-4 (2.0111E-4) | 0.0015 (0.0016) | 3.9550E-4 (0.0020) | 0.0038 (0.0647) | 3.8541E-4 (4.6208E-4) | 0.0035 (0.0036) | 0.0048 (0.0046) | 0.0201 (0.0203)

Table 4. Mean and standard deviation of objective functions reached after 10,000 function evaluations in dimension D = 30 (30 runs; mean (standard deviation)).

Function | KHLD | KHLD-C | KHLD-M | KHLD-CM | KH | KH-C | KH-M | KH-CM
f12 | 0.0032 (0.0017) | 0.3614 (0.1603) | 0.0317 (0.0172) | 0.6523 (0.2323) | 0.0037 (0.0042) | 0.3141 (0.1420) | 0.0489 (0.0393) | 0.2776 (0.0997)
f13 | 0.2504 (8.9959E-4) | 0.3253 (0.0450) | 0.2577 (0.0116) | 0.4766 (0.3599) | 0.2521 (0.0049) | 0.3564 (0.0628) | 0.2529 (0.0046) | 0.3090 (0.0652)
f14 | 0.0016 (0.0024) | 0.0264 (0.9676) | 0.0498 (0.0562) | 1.3719 (1.0707) | 0.0020 (9.9531E-4) | 0.4834 (0.6661) | 0.0918 (0.1338) | 1.9545 (2.6637)
f15 | 4.4323E-7 (5.9978E-7) | 0.0091 (0.0070) | 4.0376E-4 (3.4201E-4) | 0.0041 (0.0068) | 2.4482E-6 (2.8260E-6) | 0.0091 (0.0044) | 4.2548E-4 (2.4875E-4) | 0.0154 (0.0052)
f16 | 9.2877E-6 (1.3600E-5) | 0.0036 (0.0044) | 1.3492E-4 (9.9697E-5) | 0.0212 (0.0356) | 3.7971E-5 (7.1728E-5) | 0.0031 (0.0025) | 3.6827E-4 (5.0245E-4) | 0.0056 (0.0073)
f17 | 0.0010 (0.0011) | 0.0107 (0.1347) | 0.0102 (0.0111) | 0.0742 (0.0968) | 0.0013 (0.0018) | 0.0411 (0.0227) | 0.0211 (0.0152) | 0.1721 (0.3267)
f18 | 0.0101 (0.0081) | 18.0239 (7.2098) | 0.2163 (1.8788) | 9.6714 (6.5899) | 0.0131 (0.0187) | 20.9311 (11.6189) | 0.9821 (0.8890) | 28.1738 (8.9545)
f19 | 4.4937E-5 (5.9075E-5) | 0.2219 (0.2563) | 0.0015 (0.0012) | 0.2504 (0.2135) | 1.5750E-4 (2.8357E-4) | 0.0371 (0.0423) | 0.0030 (0.0047) | 0.0576 (0.0808)
f20 | 4.6016E-4 (0.0009) | 0.4862 (0.6542) | 0.0015 (0.0244) | 0.0048 (0.0108) | 8.6660E-4 (0.0011) | 0.5095 (0.7022) | 0.0360 (0.0360) | 0.2099 (0.2838)

Table 5. Mean and standard deviation of objective functions reached after 10,000 function evaluations in dimension D = 50 (30 runs; mean (standard deviation)).

Function | KHLD | KHLD-C | KHLD-M | KHLD-CM | KH | KH-C | KH-M | KH-CM
f12 | 0.0026 (9.9892E-4) | 0.6831 (0.2400) | 0.0616 (0.0468) | 0.6944 (0.1406) | 0.0040 (0.0027) | 0.5740 (0.2200) | 0.0949 (0.0540) | 0.5549 (0.2842)
f13 | 0.2516 (0.0020) | 0.6055 (0.2065) | 0.2773 (0.0679) | 0.6927 (0.2216) | 0.2527 (0.0024) | 0.7133 (0.4054) | 0.3152 (0.0479) | 0.9556 (0.5390)
f14 | 6.1985E-4 (4.4107E-4) | 5.1538 (3.5984) | 0.0702 (0.0826) | 11.4437 (17.6374) | 0.0022 (0.0023) | 7.0283 (10.9362) | 0.0846 (0.0763) | 16.5221 (17.9441)
f15 | 2.2116E-6 (4.0162E-6) | 0.0449 (0.0316) | 9.8576E-4 (2.9151E-4) | 0.0390 (0.0216) | 4.6578E-6 (6.6209E-6) | 0.0457 (0.0234) | 0.0017 (7.2778E-4) | 0.0430 (0.0219)
f16 | 1.9997E-5 (1.5299E-5) | 0.0351 (0.0315) | 0.0019 (0.0020) | 0.0947 (0.0726) | 4.7012E-5 (6.1983E-5) | 0.0498 (0.0336) | 0.0054 (0.0075) | 0.1090 (0.1155)
f17 | 0.0015 (0.0018) | 0.0471 (0.0546) | 0.0036 (0.0036) | 0.1700 (0.1526) | 0.0036 (0.0029) | 0.5283 (0.5199) | 0.0196 (0.0147) | 0.8793 (0.8701)
f18 | 0.0167 (0.0177) | 24.6720 (13.8754) | 2.1226 (1.3776) | 16.1307 (11.2063) | 0.0337 (0.0536) | 26.9499 (17.1364) | 4.6744 (3.4146) | 27.6915 (13.2876)
f19 | 7.0420E-5 (3.6880E-5) | 0.2138 (0.1102) | 0.0162 (0.0153) | 0.8018 (0.7725) | 7.4681E-5 (4.6180E-5) | 0.2242 (0.1515) | 0.0122 (0.0123) | 0.8755 (0.4856)
f20 | 0.0016 (0.0059) | 0.0243 (2.0006) | 0.0134 (0.0816) | 0.9873 (1.9242) | 0.0096 (0.0064) | 0.0391 (2.0021) | 0.2890 (0.4722) | 1.5450 (1.3273)

In our approach, according to Eq. (9), Ct_max = 2 and Ct_min = 0; Ct is thus equal to 2 at the beginning of the search to emphasize exploration and is linearly decreased to 0 at the end to encourage exploitation.
For the standard KH, the parameter Ct is set to 0.5. The maximum number of function evaluations is set to 10,000 for all the functions. The other parameters are the same as in [18] (see Table 1).

Tables 2-5 show the mean and standard deviation of the objective functions reached after 10,000 function evaluations: Table 2 concerns the fixed low-dimensional functions, and Tables 3-5 concern the high-dimensional functions at dimensions 10, 30 and 50. Most of the best results come from our proposed improved KH, especially on the high-dimensional functions. The results also indicate that KHLD performs much better in high dimensions, which is a result of the sound balance between exploration and exploitation produced by the decreasing parameter Ct. Figs. 2 and 3 present the convergence progress on the benchmark functions in terms of the global best fitness achieved by each algorithm over 10,000 function evaluations. The graphs show that our algorithm provides the best statistical quality of solution and the best convergence rate for all the test functions. The convergence progress is presented only for dimension 30 and functions f12-f20; the convergence at other dimensions is similar and shows no difference worth noting (see Figs. 4 and 5).

5. Conclusion

KH is a good swarm intelligent algorithm, but its balance between exploration and exploitation is not ideal. Through analysis of the parameter Ct of KH, we showed how it affects the search step. We therefore proposed an improved KH with a linearly decreasing Ct, which can adjust the balance between exploration and exploitation. Twenty benchmark functions were tested, indicating that the performance of KHLD is better than that of the standard KH. In this paper we used only a linearly decreasing Ct to adjust the balance between exploration and exploitation of KH; in the future, our work will focus on nonlinearly decreasing Ct, for example exponential or logarithmic reduction.

Acknowledgments

This paper is partially supported by the Science Fund for Hundred Excellent Innovation Talents Support Program of Hebei Province, the Doctoral Fund of the Ministry of Education of China (20121333110008), the Hebei Province Applied Basis Research Project (13961806D), and the National Natural Science Foundation of China (61273260, 61290322, 61273222, 61322303).
Appendix A. Supplementary data

Supplementary data associated with this article can be found, in the online version, at doi:10.1016/j.amc.2014.01.146.

References

[1] E. Bonabeau, M. Dorigo, G. Theraulaz, Swarm Intelligence: From Natural to Artificial Systems, Oxford University Press, USA, 1999.
[2] I. Kassabalidis, M.A. El-Sharkawi, R.J. Marks II, P. Arabshahi, A.A. Gray, Swarm intelligence for routing in communication networks, in: Global Telecommunications Conference, GLOBECOM 01, IEEE, 2001, pp. 3613-3617.
[3] K. Mori, M. Tsukiyama, T. Fukuda, Immune algorithm with searching diversity and its application to resource allocation problem, Trans. Inst. Electron. Inf. Commun. Eng. Jpn. 113-C (1993) 872-878.
[4] Y. Luo, M. Liao, J. Yan, C. Zhang, S. Shang, Development and demonstration of an artificial immune algorithm for mangrove mapping using Landsat TM, IEEE Geosci. Remote Sens. Lett. 10 (2013) 751-755.
[5] M. Dorigo, V. Maniezzo, A. Colorni, Ant system: optimization by a colony of cooperating agents, IEEE Trans. Syst. Man Cybern. Part B Cybern. 26 (1996) 29-41.
[6] M. Lopez-Ibanez, T. Stutzle, The automatic design of multiobjective ant colony optimization algorithms, IEEE Trans. Evol. Comput. 16 (2012) 861-875.
[7] R. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Proceedings of the Sixth International Symposium on Micro Machine and Human Science, 1995, pp. 39-43.
[8] Y. Pehlivanoglu, A new particle swarm optimization method enhanced with a periodic mutation strategy and neural networks, IEEE Trans. Evol. Comput. 17 (2013) 436-452.
[9] D. Karaboga, An idea based on honey bee swarm for numerical optimization, Technical Report TR06, Erciyes University, 2005.
[10] Q.K. Pan, L. Wang, K. Mao, J.H. Zhao, M. Zhang, An effective artificial bee colony algorithm for a real-world hybrid flowshop problem in steelmaking process, IEEE Trans. Autom. Sci. Eng. 10 (2013) 307-322.
[11] G. Wang, L. Guo, A novel hybrid bat algorithm with harmony search for global numerical optimization, J. Appl. Math. (2013), doi:10.1155/2013/696491.
[12] X.S. Yang, A.H. Gandomi, Bat algorithm: a novel approach for global engineering optimization, Eng. Comput. 29 (2012) 464-483.
[13] G. Wang, L. Guo, H. Duan, L. Liu, H. Wang, A Bat algorithm with mutation for UCAV path planning, Sci. World J. (2012), doi:10.1100/2012/418946.
[14] G. Wang, L. Guo, H. Duan, H. Wang, H. Liu, M. Shao, A hybrid metaheuristic DE/CS algorithm for UCAV three-dimension path planning, Sci. World J. (2012), doi:10.1100/2012/583973.
[15] G. Wang, L. Guo, H. Duan, L. Liu, H. Wang, A hybrid meta-heuristic DE/CS algorithm for UCAV path planning, Comput. Sci. 9 (2012) 1-8.
[16] G. Wang, L. Guo, H. Duan, H. Wang, A new improved firefly algorithm for global numerical optimization, J. Comput. Theor. Nanosci. 11 (2014) 477-485.
[17] G. Wang, L. Guo, H. Duan, L. Liu, H. Wang, A modified firefly algorithm for UCAV path planning, Int. J. Hybrid Inf. Technol. 5 (2012) 123-144.
[18] A.H. Gandomi, A.H. Alavi, Krill herd: a new bio-inspired optimization algorithm, Commun. Nonlinear Sci. Numer. Simul. 17 (2012) 4831-4845.
[19] G. Wang, L. Guo, H. Wang, H. Duan, L. Liu, J. Li, Incorporating mutation scheme into krill herd algorithm for global numerical optimization, Neural Comput. Appl. (2012), doi:10.1007/s00521-012-1304-8.
[20] G. Wang, A.H. Gandomi, A.H. Alavi, G. Hao, Hybrid krill herd algorithm with differential evolution for global numerical optimization, Neural Comput. Appl. (2013), doi:10.1007/s00521-013-1485-9.
[21] G. Wang, A.H. Gandomi, A.H. Alavi, Stud krill herd algorithm, Neurocomputing (2013), doi:10.1016/j.neucom.2013.08.031.
[22] G. Wang, A.H. Gandomi, A.H. Alavi, A chaotic particle-swarm krill herd algorithm for global numerical optimization, Kybernetes 42 (2013) 962-978.
[23] G. Wang, L. Guo, A.H. Gandomi, A.H. Alavi, H. Duan, Simulated annealing-based krill herd algorithm for global optimization, Abstr. Appl. Anal. (2013), doi:10.1155/2013/213853.
[24] G. Wang, L. Guo, A.H. Gandomi, L. Cao, A.H. Alavi, H. Duan, J. Li, Lévy-flight krill herd algorithm, Math. Prob. Eng. (2013), doi:10.1155/2013/682073.
[25] A.H. Gandomi, S. Talatahari, F. Tadbiri, A.H. Alavi, Krill herd algorithm for optimum design of truss structures, Int. J. Bio-Inspired Comput. 5 (2013) 281-288.
A Survey of Particle Swarm Optimization Algorithms (粒子群优化算法综述), Yang Wei

Engineering Science (中国工程科学), May 2004, Vol. 6, No. 5. Comprehensive review.
[Received] 2003-08-05; revised 2003-09-08. [Funding] National "863" High-Tech Program project (2001AA413420); Shandong Provincial Natural Science Foundation project (2003G01). [Author] Yang Wei (1978-), female (Manchu), from Jinan, Shandong; M.S. candidate, School of Control Science and Engineering, Shandong University.

A Survey of Particle Swarm Optimization Algorithms
Yang Wei, Li Qiqiang (School of Control Science and Engineering, Shandong University, Jinan 250061)

[Abstract] Particle swarm optimization (PSO) is an emerging optimization technique whose ideas originate from artificial life and evolutionary computation theory. PSO performs optimization by having each particle follow the best solution it has found itself and the best solution found by the whole swarm. The algorithm is simple to implement and has few tunable parameters, and it has been widely studied and applied. This paper introduces the basic principles of PSO, its various improvements, and its applications in detail, and offers some suggestions for future research.

[Key words] swarm intelligence; evolutionary algorithms; particle swarm optimization
[CLC number] TP301.6  [Document code] A  [Article ID] 1009-1742(2004)05-0087-08

1 Introduction

Optimization techniques that simulate the behavior of natural biological swarms have been developed since the early 1990s. Inspired by the mechanisms of biological evolution, Dorigo et al. proposed ant colony optimization by simulating the path-finding behavior of ants; the particle swarm optimization algorithm proposed by Eberhart and Kennedy in 1995 is based on the simulation of bird flocks and fish schools. This line of research is known as swarm intelligence. A single natural organism is usually not intelligent, yet the group as a whole exhibits the ability to handle complex problems; swarm intelligence is the application of such collective behavior to artificial intelligence problems. Particle swarm optimization (PSO) was originally designed for continuous optimization problems, and its applications have since been extended to combinatorial optimization [1]. Owing to its simplicity and effectiveness, PSO has attracted the attention and study of many researchers.

2 Basic principles of PSO

2.1 Basic PSO

Particle swarm optimization is a population-based evolutionary algorithm whose ideas originate from artificial life and evolutionary computation theory.
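As a concrete anchor for the basic principle described above, here is a minimal sketch of the canonical gbest update (the standard PSO equations; the parameter values shown are illustrative):

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """Standard PSO velocity and position update (gbest model):
    inertia term + cognitive pull toward pbest + social pull toward gbest."""
    r1 = np.random.rand(*x.shape)
    r2 = np.random.rand(*x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```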
Chaos Particle Swarm Optimization Algorithm (混沌粒子群优化算法)

Chaos Particle Swarm Optimization Algorithm. Computer Science (计算机科学), 2004, Vol. 31, No. 8.
GAO Ying 1,2, XIE Sheng-Li 1
(1 College of Electronic & Information Engineering, South China University of Technology, Guangzhou 510641)
(2 Dept. of Computer Science and Technology, Guangzhou University, Guangzhou 510405)

Abstract (translated): Particle swarm optimization is a new stochastic global optimization evolutionary algorithm. This paper introduces the idea of chaotic search into particle swarm optimization. Exploiting the stochasticity, ergodicity and regularity of chaotic motion, the method first performs a chaotic search around the best particle of the current swarm, and then randomly replaces one particle of the swarm with the result of the chaotic search. This treatment speeds up the evolution of the swarm, improving the ability of particle swarm optimization to escape local extrema and raising the convergence speed and accuracy of the algorithm. Simulation results show that the convergence performance of the chaos particle swarm optimization algorithm is clearly superior to that of particle swarm optimization.

Key words: particle swarm optimization; chaotic search; optimization

Abstract: Particle swarm optimization is a new stochastic global optimization evolutionary algorithm. In this paper, the chaotic search is embedded into original particle swarm optimizers. Based on the ergodicity, stochastic property and regularity of chaos, a new superior individual is reproduced by chaotic searching on the current global best individual.
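A minimal sketch of the chaotic local search described in the abstract, using the logistic map as the chaos generator (the excerpt does not specify the map, step count or search radius; those choices and all parameter values are illustrative assumptions):

```python
import numpy as np

def chaotic_search(gbest, f, lb, ub, steps=50, radius=0.1):
    """Perturb the global best with a logistic-map chaos sequence and
    keep the best candidate found (minimization assumed)."""
    d = gbest.size
    z = np.random.uniform(0.01, 0.99, d)       # chaos state; avoid the map's fixed points
    best_x, best_f = gbest.copy(), f(gbest)
    for _ in range(steps):
        z = 4.0 * z * (1.0 - z)                # logistic map, fully chaotic at r = 4
        cand = np.clip(gbest + radius * (ub - lb) * (2.0 * z - 1.0), lb, ub)
        fc = f(cand)
        if fc < best_f:
            best_x, best_f = cand, fc
    return best_x

# Per the abstract, best_x then randomly replaces one particle of the swarm.
```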
A Novel Dynamic Particle Swarm Optimization Algorithm (新型的动态粒子群优化算法)

A Novel Dynamic Particle Swarm Optimization Algorithm
Wang Runfang; Zhang Yaojun; Pei Zhisong
[Journal] Computer Engineering and Applications (计算机工程与应用)
[Year (Volume), Issue] 2011, 047(016)

[Abstract (translated)] To address the problem that the adaptive particle swarm algorithm with a dynamically changing inertia weight does not easily escape local optima, a dynamic particle swarm optimization algorithm with adaptive mutation is proposed. An adaptive learning factor and an adaptive mutation strategy are introduced into the algorithm, giving it dynamic adaptability so that it can jump out of local optima more easily. Tests on several typical functions show that the convergence speed of the algorithm is clearly superior to the algorithms in the literature, and the convergence accuracy is also improved.

[English abstract] To solve the problem that the adaptive particle swarm algorithm with dynamically changing inertia weight is apt to be trapped in a local optimum, a dynamic particle swarm optimization algorithm with adaptive mutation is proposed. The adaptive learning factor and adaptive mutation strategy are introduced in this new algorithm, so that the proposed algorithm can easily jump out of local optima with effective dynamic adaptability. Test experiments with three well-known benchmark functions show that the convergence speed of the proposed algorithm is significantly superior to existing algorithms, and the convergence accuracy of the algorithm is also increased.

[Pages] 3 (32-34)
[Authors] Wang Runfang; Zhang Yaojun; Pei Zhisong
[Affiliations] College of Humanities and Information, Changchun University of Technology, Changchun 130122; Department of Computer Science, Xinyang Agricultural College, Xinyang, Henan 464000
[Language] Chinese
[CLC] TP301

Related literature:
1. A new type of dynamic particle swarm optimization algorithm [J], Lin Nan
2. Hierarchical evolutionary particle swarm optimization with dynamic learning ability [J], Xu Chao; Shan Zhiyong; Xu Haohao
3. Particle swarm optimization based on dynamic adaptive variable parameters [J], Li Xuan; Wu Xiaobing; Tong Baili
4. Surrogate-model-assisted particle swarm optimization for computationally expensive dynamic optimization problems [J], Zhang Yong; Hu Jiangtao
5. Arterial coordination control method based on dynamic adaptive chaos particle swarm optimization [J], Guo Haifeng; Huang Xianheng; Xu Jia; Qiao Hongshuai
Adaptive Particle Swarm Optimization Algorithm (自适应粒子群优化算法)

Adaptive Particle Swarm Optimization (APSO) is an improved algorithm based on Particle Swarm Optimization (PSO). PSO is a swarm-intelligence optimization algorithm that solves optimization problems by simulating the foraging behavior of bird flocks. Compared with traditional PSO, APSO refines the updating of each particle's position and velocity, strengthening the robustness and global search capability of the algorithm.

One key improvement of APSO is the introduction of an adaptive strategy for adjusting the velocity and position updates of individuals. In traditional PSO, a particle's velocity depends on its current velocity and its historical best position. In APSO, the velocity is governed by an adaptive weight that adjusts itself automatically to suit different search spaces and optimization problems. The adjustment of the adaptive weight is based on the particle's historical best position and the global best position of the whole swarm. At each iteration the weight is tuned dynamically according to the global state of the swarm, making the velocity update more flexible and reliable.

Another key improvement is an adaptive inertia factor (inertia weight) for adjusting particle velocity. In traditional PSO the inertia factor is a constant that controls the velocity update. In APSO the inertia factor adapts to the performance and progress of the swarm: for wide search spaces and complex optimization problems a larger inertia factor promotes global search, while for narrow search spaces and simple problems a smaller inertia factor promotes local search. By adjusting the inertia factor, the velocity and position updates become more flexible and targeted and can better suit different optimization problems.

In addition, APSO introduces an adaptive local radius (search range) to control the region covered by the particles. In traditional PSO the particles' search range is fixed, which makes it easy to get stuck in local optima. In APSO the range is adjusted adaptively according to the distance between the global best position and the current best position: when the distance is large the range is enlarged, and when the distance is small the range is reduced. This adaptive adjustment improves the global search capability of the algorithm and reduces the risk of falling into local optima.

In summary, the adaptive particle swarm optimization algorithm (APSO) is an improved PSO that introduces adaptive strategies for adjusting the velocity and position updates of individuals, strengthening the robustness and global search capability of the algorithm.
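The passage gives no concrete update formulas, so the following sketch only illustrates the flavor of such adaptivity; the linear, distance-based rules for the inertia weight and search radius are illustrative assumptions, not the formulas of any specific APSO variant:

```python
import numpy as np

def adaptive_params(gbest, pbest_best, span, w_max=0.9, w_min=0.4):
    """Illustrative adaptive rules: use the distance between the global
    best and the best personal best as a convergence signal, shrinking
    the inertia weight and search radius as the swarm converges."""
    # normalized distance between the two best positions
    dist = np.linalg.norm(gbest - pbest_best) / (np.linalg.norm(span) + 1e-12)
    w = w_min + (w_max - w_min) * min(dist, 1.0)  # far apart -> explore more
    radius = max(dist, 0.05) * span               # search range scales with distance
    return w, radius
```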
IGAPSO-ELM: A Model for Network Security Situation Prediction (IGAPSO-ELM:一种网络安全态势预测模型)

Electronics Optics & Control (电光与控制), Vol. 29, No. 2, Feb. 2022.
Citation: TANG Y Q, LI C H, WANG J, et al. IGAPSO-ELM: a model for network security situation prediction [J]. Electronics Optics & Control, 2022, 29(2): 30-35.

IGAPSO-ELM: A Model for Network Security Situation Prediction
TANG Yanqiang a, LI Chenghai b, WANG Jian b, WANG Yanan b, CAO Bo a
(Air Force Engineering University, a. Graduate School; b. Air and Missile Defense College, Xi'an 710000, China)

Abstract (translated): For network security situation prediction, to improve prediction accuracy and the convergence speed of the prediction algorithm, a method that optimizes an extreme learning machine with an improved genetic-algorithm particle swarm optimization (IGAPSO-ELM) is proposed. First, the inertia weight and learning factors of GAPSO are improved: a dynamic exponential function is defined so that the two parameters adapt themselves at different stages of execution. Second, to replace the manually fixed crossover and mutation rates of GAPSO, an adaptive crossover and mutation strategy is proposed. Finally, IGAPSO is used to optimize the initial weights and biases of the ELM. IGAPSO both preserves population diversity and improves the convergence speed of the algorithm. Comparative simulation experiments show that the fitting degree of IGAPSO-ELM for network security situation prediction reaches 0.99, and its convergence speed is greatly improved over the compared algorithms.

Key words (translated): network security situation prediction; genetic-algorithm particle swarm optimization; extreme learning machine; adaptive adjustment
[CLC] TP393.8  [Document code] A  doi: 10.3969/j.issn.1671-637X.2022.02.007

Abstract: As for network security situation prediction, an Improved Genetic Algorithm and Particle Swarm Optimization (IGAPSO) is proposed to optimize the Extreme Learning Machine (ELM) neural network, so as to obtain higher prediction precision and faster convergence rate. Firstly, the inertia weight and the learning factor in GAPSO are improved to realize self-adaptation at different stages of execution by defining a dynamic exponential function. Secondly, as for the fixed crossover rate and mutation rate in GAPSO, an adaptive crossover and mutation strategy is proposed. Finally, the IGAPSO is used to optimize the initial weights and deviations of ELM. IGAPSO not only ensures the diversity of the population, but also improves the convergence rate of the algorithm. The simulation results show that the fitting degree of IGAPSO-ELM for network security situation prediction can reach 0.99, and the convergence rate is greatly improved compared with that of the contrast algorithms.

Key words: network security situation prediction; Genetic Algorithm and Particle Swarm Optimization (GAPSO); Extreme Learning Machine (ELM); adaptive adjustment

0 Introduction

In an increasingly complex network environment, network security threats are receiving more and more attention.
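To see where IGAPSO plugs in, recall that once an ELM's input weights and hidden biases are fixed, its output weights have a closed-form least-squares solution; the optimizer therefore only needs a fitness function like the sketch below. The sigmoid activation and the RMSE metric are illustrative assumptions; the abstract does not specify them:

```python
import numpy as np

def elm_fitness(candidate, X, y, n_hidden):
    """Fitness of one IGAPSO candidate: decode input weights W and
    biases b from the flat candidate vector, solve the ELM output
    weights by least squares, and return training RMSE (lower is better)."""
    n_in = X.shape[1]
    W = candidate[:n_in * n_hidden].reshape(n_in, n_hidden)
    b = candidate[n_in * n_hidden:n_in * n_hidden + n_hidden]
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # hidden-layer output matrix (sigmoid)
    beta = np.linalg.pinv(H) @ y             # closed-form output weights
    err = H @ beta - y
    return np.sqrt(np.mean(err ** 2))
```

IGAPSO then evolves candidate vectors to minimize this fitness, which is what "optimizing the initial weights and biases of the ELM" amounts to in practice.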
Average Ambiguity Function of EBPSK-Modulated Pulse Data Frame Train Signals (EBPSK调制脉冲数据帧串信号的平均模糊函数)

Average Ambiguity Function of EBPSK-Modulated Pulse Data Frame Train Signals
Yao Yu; Wu Lenan

[Abstract (translated)] A radar-communication transceiver based on EBPSK (extended binary phase shift keying) modulation can output radar range measurements of high precision over a large measuring range, while at the same time exploiting the special advantages of EBPSK-modulated pulse data frame train signals and impacting-filter-assisted demodulation to achieve digital communication with high spectral efficiency. Starting from the characteristics of EBPSK-modulated pulse data frame train signals and from Cooper's basic definition of the average ambiguity function of random signals, this study derives the average ambiguity function of such signals in detail and simulates the average ambiguity diagram experimentally. The conclusions show that the range ambiguity function of the random EBPSK-modulated pulse data frame train radar signal is similar to that of a random biphase-coded pulse radar signal, i.e., it is unambiguous along the range axis, while its velocity ambiguity function is determined jointly by the random EBPSK-modulated data and the periodic coherent pulse train, i.e., it exhibits a comb-shaped ambiguity diagram along the velocity axis whose ambiguity lobes are determined by the periodic pulse train.

[Journal] Journal of China Academy of Electronics and Information Technology (中国电子科学研究院学报)
[Year (Volume), Issue] 2015, 010(002)
[Pages] 6 (180-184, 194)
[Key words] radar-communication transceiver; EBPSK modulation; pulse data frame train; average ambiguity function
[Authors] Yao Yu; Wu Lenan (School of Information Science and Engineering, Southeast University, Nanjing, Jiangsu 210096)
[Language] Chinese
[CLC] TN911

As is well known, white-noise random signals, as a basic radar waveform, can achieve a thumbtack-type ambiguity function. To bring this to the level of practical application and improve the mainlobe-to-sidelobe ratio, Shinichi Nishimoto performed signal processing with banks of pulse compressors and FFT processors [1-3]. Research in this area, however, is still limited to continuous-wave radar and short-range detection systems, and there is as yet no effective means of detecting long-range targets [4-6]. A radar ranging system based on the EBPSK-modulated radar-communication transceiver [7-10] can flexibly change its modulation parameters according to the actual target environment, thereby obtaining different ranging precision and target detection performance. At the same time, compared with traditional radar systems, this system has no Doppler-spread problem or range-Doppler coupling effect, and its structure is simpler [11-14]. The EBPSK-modulated transceiver transmits and receives a new type of random signal waveform, the random EBPSK-modulated pulse data frame train signal, which is capable of detecting long-range targets.
A New Diversity Guided Particle Swarm Optimization with Mutation

Radha Thangaraj 1, Millie Pant 1 and Ajith Abraham 2
1 Indian Institute of Technology Roorkee, India
2 Norwegian University of Science and Technology, Norway
t.radha@, millifpt@iitr.ernet.in, ajith.abraham@

Abstract: This paper presents a new diversity guided Particle Swarm Optimization (PSO) algorithm named Beta Mutation PSO, or BMPSO, for solving global optimization problems. The BMPSO algorithm makes use of an evolutionary-programming-based mutation operator to maintain the level of diversity in the swarm population, thereby maintaining a good balance between the exploration and exploitation phenomena and preventing premature convergence. A Beta distribution is used to perform the mutation in the proposed BMPSO algorithm. The performance of the BMPSO algorithm is investigated on a set of ten standard benchmark problems and the results are compared with the original PSO algorithm. The numerical results show that the proposed algorithm outperforms the basic PSO algorithm in all the test cases taken in this study.

Keywords: Particle Swarm Optimization; Diversity; Mutation; global optimization.

I. INTRODUCTION

The concept of Particle Swarm Optimization (PSO) was first suggested by Kennedy and Eberhart [1]. Since its development in 1995, PSO has become one of the most promising optimization techniques for solving global optimization problems. It has been shown empirically in many studies to work well, outperforming other optimization techniques such as evolutionary algorithms. Although the rate of convergence of PSO is good, owing to the fast information flow among the solution vectors, its diversity decreases very quickly over successive iterations, resulting in a suboptimal solution. One of the simplest ways to overcome this loss of diversity is to combine the strengths of EAs and PSO in a single algorithm. A variety of methods combining aspects of EAs and PSO are available in the literature [2]-[7]. Among the EA operators, mutation is the most widely used EA tool applied in PSO [8], [9]. The concept of mutation is common to Evolutionary Programming (EP), Genetic Algorithms and Differential Evolution. Mutation has been introduced into PSO as a mechanism to increase its diversity and, in doing so, improve the exploration abilities of the algorithm. Mutation can be applied to different elements of a particle swarm, and its effect depends on which elements are mutated: if only the neighbourhood best position vectors are mutated, the effect is minimal compared with mutating the particle position vectors, and mutating the velocity vector is equivalent to mutating the particle's position vector, provided the same mutation operator is used [10].

Several mutation operators have been applied to the position vector in PSO, including Gaussian mutation [11]-[15], Cauchy mutation [15], [16], and chaos mutation [17]-[19]. However, to our knowledge none of these techniques uses diversity as a measure to guide the swarm. This paper presents a new PSO algorithm based on EP mutation with the help of a Beta distribution; it also uses a diversity enhancing mechanism to improve the performance of the swarm. The proposed Beta Mutation PSO algorithm is named BMPSO. The BMPSO algorithm uses diversity threshold values d_low and d_high to guide the movement of the swarm; the threshold values are predefined by the user.

The rest of the paper is organized as follows: Section II describes the original Particle Swarm Optimization algorithm. In Section III, the proposed BMPSO algorithm is discussed; Section IV deals with the results and discussion. Finally, the paper concludes with Section V.

II. PARTICLE SWARM OPTIMIZATION

PSO is a multi-agent parallel search technique developed by Eberhart and Kennedy in 1995, inspired by the social behavior of bird flocking and fish schooling. The particles, or members of the swarm, fly through a multidimensional search space looking for a potential solution. Each particle adjusts its position in the search space from time to time according to the flying experience of its own and of its neighbors (or colleagues).

For a D-dimensional search space, the position of the i-th particle is represented as $X_i = (x_{i1}, \ldots, x_{id}, \ldots, x_{iD})$. Each particle maintains a memory of its previous best position $P_i = (p_{i1}, \ldots, p_{id}, \ldots, p_{iD})$. The best one among all the particles in the population is represented as $P_g = (p_{g1}, \ldots, p_{gd}, \ldots, p_{gD})$. The velocity of each particle is represented as $V_i = (v_{i1}, \ldots, v_{id}, \ldots, v_{iD})$.

During each generation, each particle is accelerated toward its previous best position and the global best position. At each iteration a new velocity value for each particle is calculated based on its current velocity and its distance from the best positions; the new velocity value is then used to calculate the next position of the particle in the search space. This process is iterated a number of times or until a minimum error is achieved. The two basic equations which govern the working of PSO are those of the velocity vector and the position vector:

$v_{id} = \omega\,v_{id} + c_1 r_1 (p_{id} - x_{id}) + c_2 r_2 (p_{gd} - x_{id})$    (1)

$x_{id} = x_{id} + v_{id}$    (2)

Here ω is the inertia weight, predefined by the user. c1 and c2 are acceleration constants; they represent the weighting of the stochastic acceleration terms that pull each particle toward the personal best and global best positions, so adjusting these constants changes the amount of tension in the system. Low values allow particles to roam far from the target regions before being tugged back, while high values result in abrupt movement toward, or past, the target regions [20]. The constants r1, r2 are uniformly generated random numbers in the range [0, 1]. The first part of Equation (1) represents the inertia of the previous velocity; the second part reflects the personal thinking of the particle; and the third part represents the cooperation among particles and is therefore named the social component [21].

III. PROPOSED BMPSO ALGORITHM

Beta Mutation Particle Swarm Optimization is a simple, modified version of the Particle Swarm Optimization algorithm; it uses a Beta distribution to mutate the particles. The BMPSO algorithm has two phases, namely an attraction phase and a mutation phase. The attraction phase is the same as in classical PSO, while in the mutation phase the position vectors of the swarm particles are mutated using the Beta Distributed Mutation (BDM) operator. The BDM operator is an EP-based mutation operator defined as

$x_i^{t+1} = x_i^t + \sigma_i' \cdot \mathrm{Betarand}_j()$    (3)

where

$\sigma_i' = \sigma_i \exp\left(\tau N(0,1) + \tau' N_j(0,1)\right)$

N(0,1) denotes a normally distributed random number with mean zero and standard deviation one, and N_j(0,1) indicates that a different random number is generated for each value of j. τ and τ′ are set to $1/\sqrt{2\sqrt{n}}$ and $1/\sqrt{2n}$, respectively. Betarand_j() is a random number generated by a Beta distribution with parameters less than 1.

BMPSO starts like classical PSO, i.e., it uses the attraction phase (Eqn. (1)) for updating the velocity vector and Eqn. (2) for updating the position vector. In this phase the swarm contracts rapidly due to the fast information flow among the particles; as a result, the diversity decreases and, consequently, the chance that the swarm particles get trapped in some local region or suboptimal solution increases. In the BMPSO algorithm we keep a check on the decreasing diversity with the help of a user-defined parameter called d_low. When the diversity of the population drops below d_low, the algorithm switches over to the mutation phase, with the hope of increasing the diversity of the swarm population and thereby helping the swarm escape possible local regions. This process is repeated until a maximum number of iterations or the stopping criterion is reached.

The Beta distribution used in this study has the probability density function

$f(x) = \frac{1}{B(\alpha, \beta)}\,x^{\alpha-1}(1-x)^{\beta-1}, \quad \alpha > 0,\ \beta > 0$    (4)

with parameters less than 1.

The computational steps of BMPSO are given below:

Step 1 Initialize PSO parameters.
Step 2 Initialize the positions and velocities of all particles using uniformly distributed random numbers.
Step 3 Evaluate the objective function values of all particles in the swarm.
Step 4 Update each particle's personal best position and the global best position (i.e., P_i and P_g).
Step 5 Calculate the diversity of the swarm.
Step 6 If (diversity < d_low), update the particles' positions using Eqn. (3); else, update the particles' velocities and positions according to Equations (1) and (2), respectively.
Step 7 Evaluate the particles' fitness values.
Step 8 Update P_i and P_g.
Step 9 If the stopping criterion is reached, go to Step 10; else go to Step 5.
Step 10 Print the global best particle and the corresponding fitness function value.

IV. EXPERIMENTAL SETTINGS AND NUMERICAL RESULTS

For the comparison of the PSO and BMPSO algorithms, a collection of 10 standard benchmark problems with box constraints is considered. Mathematical models of the problems, along with the true optimum values, are given in Table I. The entire set of test problems taken for the present study is scalable, i.e., the problems can be tested for any number of variables; for the present study we tested the problems for dimensions 10, 30 and 50.

In order to make a fair comparison of the basic PSO and the proposed BMPSO algorithm, we fixed the same seed for random number generation so that the initial population is the same for both algorithms. The population size is taken as 20 for all test problems. A linearly decreasing inertia weight is used, starting at 0.9 and ending at 0.4, with the user-defined parameters c1 = 2.0 and c2 = 2.0. For each algorithm, the maximum number of generations is set to 1000, 2000 and 3000 for dimensions 10, 30 and 50, respectively. A total of 30 runs were conducted for each experimental setting.

The diversity measure of the swarm can be calculated as [10]:

$\mathrm{Diversity}(S(t)) = \frac{1}{n_s}\sum_{i=1}^{n_s}\sqrt{\sum_{j=1}^{n_x}\left(x_{ij}(t) - \bar{x}_j(t)\right)^2}$    (5)

where S is the swarm, $n_s = |S|$ is the swarm size, $n_x$ is the dimensionality of the problem, $x_{ij}$ is the j-th value of the i-th particle, and $\bar{x}_j(t)$ is the average of the j-th dimension over all particles, i.e., $\bar{x}_j(t) = \sum_{i=1}^{n_s} x_{ij}(t) / n_s$.

The results of the given benchmark problems for dimension 30 are shown in Table II in terms of mean best fitness, standard deviation, diversity, the improvement (%), and the t-values of the proposed BMPSO algorithm in comparison with the original PSO. In addition, we tested benchmark problems f1-f6 at two more dimensions, 10 and 50; the corresponding results are given in Table III. Figures 1-6 present the performance curves of the PSO and BMPSO algorithms.

From the numerical results of Table II, we can see that the proposed algorithm performs better than the original PSO algorithm by a significant margin. The first test function f1 is the Rastrigin function, which is a multimodal function; for this function, BMPSO gave a remarkable improvement of approximately 72% in comparison with PSO. For function f2, which is also multimodal, the proposed BMPSO algorithm performs much better than the original PSO algorithm. For functions f4 and f10, the proposed BMPSO algorithm performs slightly better than PSO, giving approximately 4% improvement in comparison to the original PSO. Likewise, in all the other test cases there is a noticeable percentage of improvement in the average mean value from the proposed diversity-based mutation algorithm. The numerical results of Table III also show that the proposed algorithm gives better results than PSO.

Figure 1. Performance curves of PSO and BMPSO for function f1
Figure 2. Performance curves of PSO and BMPSO for function f4
Figure 3. Performance curves of PSO and BMPSO for function f5
Figure 4. Performance curves of PSO and BMPSO for function f6
Figure 5. Performance curves of PSO and BMPSO for function f7
Figure 6. Performance curves of PSO and BMPSO for function f10

TABLE I. NUMERICAL BENCHMARK PROBLEMS (mathematical models not reproduced in this excerpt)

TABLE II. COMPARISON RESULTS OF PSO AND BMPSO IN TERMS OF AVERAGE FITNESS FUNCTION VALUE, STANDARD DEVIATION AND DIVERSITY (D = 30)

Function | Fitness (PSO) | Fitness (BMPSO) | Std. dev. (PSO) | Std. dev. (BMPSO) | Diversity (PSO) | Diversity (BMPSO) | Improvement (%) | t-value
f1 | 47.29223 | 13.34445 | 11.06489 | 4.690 | 3.70371 | 3.01e-05 | 71.78 | 15.47
f2 | 0.0182 | 0.002525 | 0.244025 | 0.001589 | 33.1326 | 0.009598 | 86.12 | 0.35
f3 | 316.4468 | 74.76106 | 80.001 | 24.37858 | 4.15842 | 3.14e-05 | 76.37 | 15.82
f4 | -6466.19 | -6718.49 | 643.4821 | 666.7723 | 0.42845 | 0.01666 | 3.9 | 1.49
f5 | 0.617222 | 0.072433 | 0.492993 | 0.21498 | 0.547031 | 0.071049 | 88.26 | 5.54
f6 | 1.70 | 0.077461 | 0.4530 | 0.06742 | 1.56783 | 0.60276 | 95.43 | 19.36
f7 | 271.793 | 28.938 | 208.325 | 98.7083 | 4.7239 | 0.10343 | 89.35 | 5.77
f8 | 15.2228 | 9.96414 | 3.652739 | 2.00684 | 63.3639 | 6.60096 | 34.54 | 6.91
f9 | 0.209776 | 0.169551 | 0.072407 | 0.625909 | 0.239009 | 0.02924 | 19.17 | 0.34
f10 | 4.22028 | 4.10447 | 0.416 | 0.460416 | 14.4341 | 0.806342 | 2.74 | 1.02

TABLE III. COMPARISON RESULTS OF PSO AND BMPSO FOR FUNCTIONS f1-f6 WITH DIMENSIONS 10 AND 50

Function | Dimension | PSO | BMPSO | Improvement (%) of BMPSO over PSO
f1 | 10 | 5.5572 | 0.012055 | 99.78307
f1 | 50 | 104.03725 | 27.889415 | 73.19286
f2 | 10 | 0.0919 | 3.726e-06 | 99.99595
f2 | 50 | 0.014701 | 0.00415 | 71.77063
f3 | 10 | 96.1715 | 7.864022 | 91.82292
f3 | 50 | 533.64808 | 178.32484 | 66.58381
f4 | 10 | -2389.365 | -2764.422 | 15.6969
f4 | 50 | -10473.09 | -10605.52 | 1.26448
f5 | 10 | 0.502671 | 0.405912 | 19.24897
f5 | 50 | 0.788322 | 0.409282 | 48.08188
f6 | 10 | 6.965e-12 | 0.004697 | -
f6 | 50 | 3.43866 | 0.313801 | 90.87432

V. CONCLUSION

In this paper, a simple, modified version of Particle Swarm Optimization named BMPSO has been proposed. The novelty of the present work is the use of diversity for activating the mutation; another new feature is the use of a Beta distributed mutation operator. The performance of the BMPSO algorithm was tested on ten standard benchmark functions at dimensions 10, 30 and 50. The empirical results show that the proposed BMPSO algorithm is more efficient than the original PSO algorithm and is quite competent for solving problems of dimensions up to 50. Thus, from the numerical results it is concluded that BMPSO is a simple yet effective algorithm for solving different kinds of optimization problems.

REFERENCES

[1] Kennedy, J. and Eberhart, R. C., "Particle Swarm Optimization", IEEE International Conference on Neural Networks (Perth, Australia), IEEE Service Center, Piscataway, NJ, Vol. IV, 1995, pp. 1942-1948.
[2] Robinson, J., Sinton, S. and Rahmat-Samii, Y., "Particle Swarm, Genetic Algorithm, and Their Hybrids: Optimization of a Profiled Corrugated Horn Antenna", In Proc. of the IEEE Antennas and Propagation Society International Symposium and URSI National Radio Science Meeting, Vol. 1, 2002, pp. 314-317.
[3] Shi, Y. and Krohling, R. A., "Co-Evolutionary Particle Swarm Optimization to Solve Min-Max Problems", In Proc. of the IEEE Congress on Evolutionary Computation, Vol. 2, 2002, pp. 1682-1687.
[4] Shi, X., Hao, J., Zhou, J. and Liang, Y., "Hybrid Evolutionary Algorithms Based on PSO and GA", In Proc. of the IEEE Congress on Evolutionary Computation, Vol. 4, 2003, pp. 2393-2399.
[5] Zhang, W.-J. and Xie, X.-F., "DEPSO: Hybrid Particle Swarm with Differential Evolution Operator", In Proc. of IEEE Int. Conf. on Systems, Man & Cybernetics, 2003, pp. 3816-3821.
[6] Hao, Z.-F., Guo, G.-H. and Huang, H., "A Particle Swarm Optimization Algorithm with Differential Evolution", In Proc. of the 6th Int. Conf. on Machine Learning and Cybernetics, 2007, pp. 1031-1035.
[7] Yang, B., Chen, Y. and Zhao, Z., "A Hybrid Evolutionary Algorithm by Combination of PSO and GA for Unconstrained and Constrained Optimization Problems", In Proc. of the IEEE Int. Conf. on Control and Automation, 2007, pp. 166-170.
[8] Hu, X., Eberhart, R. C. and Shi, Y., "Swarm Intelligence for Permutation Optimization: A Case Study on n-Queens Problem", In Proc. of IEEE Swarm Intelligence Symposium, 2003, pp. 243-246.
[9] Juang, C.-F., "A Hybrid of Genetic Algorithm and Particle Swarm Optimization for Recurrent Network Design", IEEE Trans. on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 34(2), 2003, pp. 997-1006.
[10] Engelbrecht, A. P., "Fundamentals of Computational Swarm Intelligence", England: John Wiley & Sons Ltd, 2005.
[11] Wei, C., He, Z., Zheng, Y. and Pi, W., "Swarm Directions Embedded in Fast Evolutionary Programming", In Proc. of IEEE Congress on Evolutionary Computation, Vol. 2, 2002, pp. 1278-1283.
[12] Higashi, H. and Iba, H., "Particle Swarm Optimization with Gaussian Mutation", In Proc. of the IEEE Swarm Intelligence Symposium, 2003, pp. 72-79.
[13] Secrest, B. R. and Lamont, G. B., "Visualizing Particle Swarm Optimization: Gaussian Particle Swarm Optimization", In Proc. of the IEEE Swarm Intelligence Symposium, 2003, pp. 198-204.
[14] Sriyanyong, P., "Solving Economic Dispatch Using Particle Swarm Optimization Combined with Gaussian Mutation", In Proc. of ECTI-CON, 2008, pp. 885-888.
[15] Krohling, R. A., "Gaussian Particle Swarm with Jumps", In Proc. of the IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2005, pp. 1226-1231.
[16] Stacey, A., Jancic, M. and Grundy, I., "Particle Swarm Optimization with Mutation", In Proc. of the IEEE Congress on Evolutionary Computation, 2003, pp. 1425-1430.
[17] Dong, D., Jie, J., Zeng, J. and Wang, M., "Chaos-Mutation-Based Particle Swarm Optimizer for Dynamic Environment", In Proc. of the 3rd Int. Conf. on Intelligent System and Knowledge Engineering, 2008, pp. 1032-1037.
[18] Yang, M., Huang, H. and Xiao, G., "A Novel Dynamic Particle Swarm Optimization Algorithm Based on Chaotic Mutation", Int. Workshop on Knowledge Discovery and Data Mining, 2009, pp. 656-659.
[19] Yue-lin, G., Xiao-hui, A. and Jun-min, L., "A Particle Swarm Optimization Algorithm with Logarithm Decreasing Inertia Weight and Chaos Mutation", In Proc. of Int. Conference on Computational Intelligence and Security, 2008, pp. 61-65.
[20] Eberhart, R. C. and Shi, Y., "Particle Swarm Optimization: Developments, Applications and Resources", In Proc. of IEEE Congress of Evolutionary Computation, Vol. 1, 2001, pp. 27-30.
[21] Kennedy, J., "The Particle Swarm: Social Adaptation of Knowledge", In Proc. of the IEEE Int. Conf. on Evolutionary Computation, 1997, pp. 303-308.
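For readers who want to experiment, the following is a minimal Python sketch of the BMPSO loop assembled from Eqs. (1)-(5) and the steps of Section III. The Beta parameters (0.5, 0.5), the initial sigma, and the d_low value are illustrative choices; the paper only requires Beta parameters less than 1 and a user-defined d_low:

```python
import numpy as np

def bmpso(f, lb, ub, n=20, iters=1000, d_low=1e-3, w0=0.9, w1=0.4, c1=2.0, c2=2.0):
    """Minimization with BMPSO: PSO attraction phase plus beta-distributed
    mutation (Eq. 3) whenever swarm diversity (Eq. 5) drops below d_low."""
    dim = lb.size
    x = np.random.uniform(lb, ub, (n, dim))
    v = np.zeros((n, dim))
    sigma = np.full((n, dim), 0.1 * (ub - lb))          # illustrative initial sigma
    tau, tau_p = 1 / np.sqrt(2 * np.sqrt(dim)), 1 / np.sqrt(2 * dim)
    pbest = x.copy()
    pf = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pf)]
    for t in range(iters):
        diversity = np.mean(np.linalg.norm(x - x.mean(0), axis=1))   # Eq. (5)
        if diversity < d_low:                           # mutation phase, Eq. (3)
            sigma *= np.exp(tau * np.random.randn(n, 1) + tau_p * np.random.randn(n, dim))
            x = x + sigma * np.random.beta(0.5, 0.5, (n, dim))
        else:                                           # attraction phase, Eqs. (1)-(2)
            w = w0 - (w0 - w1) * t / iters              # linearly decreasing inertia
            r1, r2 = np.random.rand(n, dim), np.random.rand(n, dim)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
        x = np.clip(x, lb, ub)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pf
        pbest[better], pf[better] = x[better], fx[better]
        g = pbest[np.argmin(pf)]
    return g, pf.min()
```

As in Eq. (3), the beta draw is added with a positive sign; the paper does not specify any re-initialization of sigma, so this sketch simply evolves it EP-style across mutation phases.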