
Vol.16, No.5, ©2005 Journal of Software 软件学报, 1000-9825/2005/16(05)0761


A Preference-Based Multi-Objective Concordance Genetic Algorithm

CUI Xun-Xue 1,2+, LIN Chuang 2

1(State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210093, China)
2(Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China)

+ Corresponding author: Phn: +86-551-5769700, E-mail: xxcui@…

Received 2003-11-30; Accepted 2004-05-11

Cui XX, Lin C. A preference-based multi-objective concordance genetic algorithm. Journal of Software, 2005,16(5):761–770. DOI: 10.1360/jos0761

Abstract: Recently, various evolutionary approaches have been developed for multi-objective optimization. Most of them take Pareto dominance as their selection strategy and do not require any preference information. However, these algorithms cannot perform well on problems involving many objectives. By introducing preferences among the different criteria, this paper proposes a multi-objective concordance genetic algorithm (MOCGA) to deal with such problems. As the number of objectives to be optimized simultaneously increases, weak dominance, informed by the decision-maker's preferences, is used to compare individuals. The algorithm is proven to guarantee convergence towards the global optimum. Experimental results on multi-objective optimization benchmark problems demonstrate the validity of the new algorithm.

Key words: genetic algorithm; multi-objective optimization; preference information; multi-criterion decision-making


* Supported by the National Grand Fundamental Research 973 Program of China under Grant No.2003CB314804; the National Natural Science Foundation of China under Grant Nos.90104002, 60303027; the Opening Foundation of the State Key Laboratory for Novel Software Technology, Nanjing University; and the Natural Science Foundation of Anhui Province of China

CUI Xun-Xue was born in 1969. He is a postdoctoral researcher at the Department of Computer Science and Technology, Tsinghua University. His research areas are evolutionary algorithms, routing algorithms, and optimization theory.
LIN Chuang was born in 1948. He is a professor and doctoral supervisor at the Department of Computer Science and Technology, Tsinghua University. His research areas are computer networks, stochastic Petri nets, and performance analysis of computer systems.



CLC number: TP18    Document code: A

1 Introduction

Many real-world problems are multi-objective in nature, because they involve several objectives that must be optimized simultaneously. For such problems there is no single optimal solution, but rather a set of alternative solutions. These solutions are optimal in the wider sense that no other solutions in the search space are superior to them when all objectives are considered [1]. Multi-objective optimization problems (MOPs) have received considerable attention in the field of Operations Research. Classical optimization methods such as multi-criterion decision-making methods suggest converting an MOP into a single-objective optimization problem by emphasizing one particular Pareto-optimal solution. When such a method is used to find multiple solutions, it has to be applied many times, yielding a different and possibly incompatible solution at each run, and it is then difficult for a decision-maker to select an optimal alternative among them.

Recently, MOPs have become a popular area of research within evolutionary computation, normally called evolutionary multi-objective optimization. Over the past decade, a number of multi-objective evolutionary algorithms (MOEAs) have been proposed [2]. EAs are well suited to MOPs because they process a set of solutions in parallel. Some researchers suggest that MOPs seem to belong to an area where EAs do better than other blind search strategies [3]. Although this statement must be qualified with regard to the "No Free Lunch" (NFL) theorems [4], up to now few alternatives solve MOPs as well as EAs do. In most cases these EAs are modifications of genetic algorithms, so the genetic algorithm is the most important evolutionary approach among MOEAs.

Selection is the key mechanism in evolutionary computation. In the case of multiple objectives, the selection operator steers the search in the direction of the nondominated front and is controlled by the individual's fitness, which reflects its utility in terms of Pareto-optimality. Therefore fitness assignment is the main issue in multi-objective optimization. Many MOEAs use Pareto-based fitness assignment, which is based directly on Pareto dominance and assigns all nondominated solutions equal reproduction probabilities. A scheme was introduced by Fonseca and Fleming in which an individual's rank corresponds to the number of solutions in the population by which it is dominated [5]. Other Pareto-based approaches include the Strength Pareto Evolutionary Algorithm [6] and the Nondominated Sorting Genetic Algorithm [7]. Although they do not require any preference information, the dimensionality of the search space influences their performance. Fonseca and Fleming pointed out that pure Pareto EAs cannot be expected to perform well on problems involving many competing objectives, and may simply fail to produce satisfactory solutions due to the large dimensionality and size of the trade-off surface [3].

Aiming at this problem, preferences are integrated into the multi-objective environment in this paper. By introducing preference information among the different objectives, the evolutionary population is weakly ranked as the selection strategy, according to the theory of multi-criterion decision-making. The advantage of the algorithm lies in the fact that it can solve optimization problems with many objectives. Other work in the literature has reported the use of preferences [8]; recently, Cvetković and Parmee proposed a fuzzy preference method in which preferences were developed with the goal of reducing cognitive overload [9]. But those techniques were used in different circumstances and for different purposes. A survey on the use of preferences in a multi-objective context was provided by Coello [10].

The next section introduces some concepts used in the field of evolutionary multi-objective optimization and the preference relationship. Section 3 presents MOCGA. Section 4 evaluates its performance on high-dimensional optimization problems. The convergence property is discussed in Section 5. The paper concludes with Section 6.

2 Main MOP Concepts

2.1 Multi-objective optimization

A general multi-objective problem can be described as a vector function f that maps a tuple of m decision variables to a tuple of n objectives (criteria). Formally:

Min/Max  y = f(x) = (f1(x), f2(x), …, fn(x))    (1)
subject to  x = (x1, x2, …, xm) ∈ X,  y = (y1, y2, …, yn) ∈ Y,

where x is called the decision vector, X the parameter space, y the objective vector, and Y the objective space.

If the set of solutions consists of all decision vectors for which the corresponding objective vectors cannot be improved in any dimension without degradation in another, these vectors are known as Pareto optimal . Several concepts are mathematically defined as follows:

Definition 1 (Pareto Dominance). In a minimization problem, a vector u = (u1, u2, …, un) is said to dominate v = (v1, v2, …, vn) iff u is partially less than v, i.e., ∀i ∈ {1, …, n}: ui ≤ vi ∧ ∃i ∈ {1, …, n}: ui < vi.

Definition 2 (Pareto Optimality). A solution x ∈ X is said to be Pareto optimal with respect to X iff there is no x′ ∈ X for which v = f(x′) = (f1(x′), …, fn(x′)) dominates u = f(x) = (f1(x), …, fn(x)).

2.2 Semantics of preference

For a multi-objective preference system, the initial set of atomic propositions is given by a fundamental binary relation R , defined on a finite set A of alternatives with the following general semantics:

Definition 3 (Outranking Relation). ∀a, b ∈ A: (a R b) ≡ "a more or less outranks b".

On the basis of the general outranking relation R, adequate preference, indifference and incomparability relations can be defined, denoted respectively P, I and J. The semantics of each relation is given as follows, ∀i, j ∈ A:
(i P j) ≡ "i is more or less preferred to j"
(i I j) ≡ "i is more or less indifferent to j"
(i J j) ≡ "i is more or less incomparable to j"

These three relations differ from those of ordinary multi-objective optimization and provide a richer comparison between any two alternatives. They give rise to two concepts, the concordance index and the discordance index, which are used in MOCGA.

The concordance index between any two alternatives i and j is a weighted measure of the criteria for which alternative i is preferred to alternative j (denoted i ≻ j) or for which i is equal to j (denoted i ~ j). It is given as:

C(i, j) = Σ_{k∈A(i,j)} ω(k) / Σ_{k=1}^{n} ω(k)    (2)

where ω(k) is the weight of criterion k, k = 1, …, n, A(i, j) = {k | i ≻ j ∨ i ~ j}, and 0 ≤ C(i, j) ≤ 1. Concordance can be viewed as the weighted percentage of criteria for which one alternative is preferred to another.

Determination of the discordance between i and j requires that an interval scale common to all criteria be defined. The scale is used to compare the discomfort caused between the worst and the best criterion values for each pair of alternatives. A range may be chosen in which the best rating is assigned the highest value and the worst rating the lowest. Each criterion can have a different range, reflecting the leeway available for that criterion. The discordance index is defined as:

D(i, j) = max_{k=1,…,n} [ (r(j, k) − r(i, k)) / R* ]    (3)

where r(j, k) is the evaluation of alternative j with respect to criterion k, and R* is the largest of the n criterion scales.


By construction, 0 ≤ D(i, j) ≤ 1.

3 MOCGA

3.1 Multi-objective model

The advantage of the outranking relation is that it can handle intransitivity and contradictions in a local manner and allows the incomparability relation J. A typical technique is the ELECTRE method proposed by Roy [11], which includes versions I and II. "ELECTRE" is the abbreviated form of the French name ELimination Et Choix Traduisant la REalité (Elimination and Choice Expressing Reality). The method provides a partial and weak ordering of the nondominated alternatives. The ordering is accomplished by constructing outranking relationships from the preferences. Version I of ELECTRE is embedded in the selection process of MOCGA. Its characteristic is to choose those alternatives that are preferred for at least a plurality of the criteria and yet do not cause an unacceptable level of discontent on any one criterion. ELECTRE I operates as follows:

i) A set of m alternatives (R ) and a set of n objectives (criteria) are first given. From the decision matrix, a normalized one is constructed to take each criterion value as the same unit vector:

∑==n i ij ij

ij r

r r 12

: (4) If ωj denotes the weight of criterion j , a weighted normalized decision matrix is then formed whose elements are given by:

ii) Weak outranking relation is constructed with threshold values from the concordance and discordance matrices. Then an ELECTRE’s relation graph is designed where each node corresponds to a nondominated alternative and the arrows indicate preference dominances between any two nodes.

ij j ij r v ω= (5)

① Concordance index. If a criterion value of an alternative x_k is better than the same criterion value of another alternative x_l, x_k is said to be preferred in terms of that criterion. If x_k is preferred to x_l (x_k P x_l), any normalized criterion value supporting this assumption is said to be concordant with x_k P x_l, where P denotes the adequate preference relation defined before. A concordance set and a discordance set are built from the weighted normalized decision matrix:

Concordance set: C_kl = {j | r_kj ≥ r_lj}    (6)
Discordance set: D_kl = {j | r_kj < r_lj} = M − C_kl    (7)

where M is the set {1, 2, …, n}; the meanings of C_kl and D_kl here differ from those of C(i, j) and D(i, j) above. With C′_kl = {j | r_kj = r_lj}, two concordance indexes α_kl and α̂_kl are defined respectively:

α_kl = Σ_{j∈C_kl} ω_j    (8)

α̂_kl = ( Σ_{j∈C_kl} ω_j − Σ_{j∈C′_kl} ω_j ) / Σ_{j∈D_kl} ω_j    (9)

If α_kl ≥ α₀ and α̂_kl ≥ 1, the concordance check is passed. α₀ denotes the lowest concordance threshold value, determined by the decision maker. For example, α₀ can be taken as the average of the concordance indexes α_kl:

α₀ = Σ_{k=1,k≠l}^{m} Σ_{l=1,l≠k}^{m} α_kl / (m(m−1))    (10)


② Discordance index. The discordance index is defined as follows:

β_kl = max_{j∈D_kl} |v_kj − v_lj| / max_{j∈M} |v_kj − v_lj|    (11)

If β_kl ≤ β₀, the discordance check is passed. β₀ denotes the highest discordance threshold value, determined by the decision maker. The weights, α₀ and β₀ reflect the preference of the decision maker. For example, β₀ can be taken as the average of the discordance indexes β_kl:

β₀ = Σ_{k=1,k≠l}^{m} Σ_{l=1,l≠k}^{m} β_kl / (m(m−1))    (12)

iii) A minimal dominance subset R_D is obtained from R.

iv) If the number of designs in R_D is small enough for preference alternatives to be chosen from them, the iteration ends; otherwise, steps (i) to (iv) are repeated after the concordance level is adjusted.

3.2 Detailed description of MOCGA

This subsection describes the details of MOCGA, with emphasis on integrating the preference model into population evolution. In a finite population, each individual is weakly sorted (ranked) by the ELECTRE method, which avoids the difficulty of sorting individuals under several objectives with the usual strict Pareto-based comparison. The main procedure of MOCGA is as follows.

(1) Initialize control parameters: population size N , the size of external nondominated set N ′, number of evolutionary iteration gen , crossover probability P c , mutation probability P m , the weight of every objective ωi , the lowest concordance threshold value α0, and the highest discordance threshold value β0;

(2) Generate an initial population P and an empty external nondominated set P′ at gen = 0;

(3) Sort individuals in P with ELECTRE and copy the produced members in the minimal dominance set to P ′;

(4) Remove solutions in P′ that are covered by any other member of P′ (this step is skipped on the first copy, as there are no dominated solutions in P′ yet);

(5) If the number of externally stored nondominated solutions exceeds a given maximum value, prune P′ by means of average-linkage clustering;

(6) Calculate the fitness of each individual in P and P ′;

(7) Select individuals from P + P′ by binary tournament among those satisfying the constraint values, until the mating pool is filled; then gen := gen + 1;

(8) Apply crossover and mutation operators appropriate to the problem being solved;

(9) If the maximum number of generations is reached, stop algorithm, else go to (3);

(10) Users adjust α0 and β0 values according to their needs, then go to step (1).

The fitness assignment in step (6) consists of two stages and uses the strength concept proposed by Zitzler and Thiele [6]:

Step 1: Each solution i ∈ P′ is assigned a real value s_i ∈ [0,1] as its strength; s_i is proportional to the number of population members j ∈ P that i covers (i ≽ j). Let k denote the number of individuals in P covered by i, and let N be the size of P. Then s_i is defined as s_i = k/(N+1). The fitness f_i of i equals its strength: f_i = s_i.

Step 2: The fitness of an individual j ∈ P is calculated by summing the strengths s_i of all external nondominated solutions i ∈ P′ that cover j. Fitness is to be minimized (here f_j ∈ [1, N]):


f_j = 1 + Σ_{i, i ≽ j} s_i    (13)

The fitness assignment method and the clustering in step (5) avoid genetic drift, which would harm population diversity. Note that, as fitness is calculated during the algorithm's execution, the dominated or nondominated relation between any two individuals means an absolute Pareto dominance or nondominance relation. Step (7) in practice eliminates unqualified individuals whose constraints exceed the bounds.

4 Performance Evaluation and Analysis

4.1 Multi-Objective optimization problem

4.1.1 4-D objective problem

To verify the effectiveness of MOCGA, a 4-D problem is tested, which has one more dimension than the usual benchmark problems. The paper designs a four-objective functional optimization problem of a single variable without constraints:

Min F (x )=(f 1(x ),f 2(x ),f 3(x ),f 4(x )) (14)

f1(x) = x², f2(x) = (x−2)², f3(x) = (x−1)², f4(x) = 0.5(x−1)²

The decision variable x takes real values in [−10, 10]. When the weights of all sub-objectives are equal, i.e., 0.25, the optimal solution x = 1.0 is obtained; this was solved in advance by the linear weighted method of multi-objective programming. In this paper the well-known Strength Pareto Evolutionary Algorithm (SPEA) [6] is compared with MOCGA on this problem. The main difference between the two algorithms lies in the fact that MOCGA uses weak outranking to sort individuals, while SPEA uses a purely Pareto-based superiority relation.

Here N = 100, N′ = 30, Pc = 0.75, Pm = 0.01. The values of α₀ and β₀ in MOCGA are set to the corresponding averages, i.e., α₀ is taken as the average of the concordance indexes and β₀ as the average of the discordance indexes. For a four-objective problem the comparative results cannot be shown visually in graphical form to indicate which algorithm performs better, so the evaluation compares the number of optimal solutions produced. The maximal generation is taken as 50, 100, 200 and 500, respectively. For each setting, ten experiments were run with the two algorithms using the same initial population. The results are given in Table 1, where the numbers before and after the commas represent the numbers of optimal solutions in the external set created by MOCGA and SPEA, respectively.

Table 1 Results of a 4-D objective problem by MOCGA and SPEA

Generations: 50, 100, 200, 500; ten runs each. Each entry is a pair "MOCGA, SPEA" of optimal-solution counts, in the order extracted:
13,4  5,2  8,3  11,5  8,4  7,2  11,6  5,2  8,2  6,3  7,3  14,3  14,2  17,4  11,4  15,4  10,2  20,2  7,2  14,3  12,3  19,4  6,2  16,4  11,2  9,3  12,2  15,3  5,2  7,4

In addition, at the end of each experiment the optimal solutions produced by MOCGA and SPEA were mixed, and the dominance relations among them were assessed to determine whether any dominated solutions existed. It was found that most of these optimal solutions were nondominated by the others, with a few exceptions. This shows that MOCGA, like SPEA, creates an evolutionary optimal set along the Pareto front, although it adopts a rather weak ranking instead of the strictly Pareto-based dominance relation. For this reason, the two algorithms can be compared by the number of final optimal solutions they generate.

It is found from Table 1 that the number of optimal solutions from SPEA is much smaller than that from MOCGA, and the trend becomes more evident as the number of generations grows. When the dimensionality of the optimization objectives is high, SPEA has difficulty sorting the individuals of a finite genetic population by Pareto-based rank; a situation can even arise in which all individuals in a population are Pareto-optimal with respect to each other. The normal iteration process might thus be interrupted under a pure Pareto selection strategy, whereas MOCGA does not have this disadvantage.

4.1.2 10-D objective problem

Can MOCGA deal with many competing objectives even more efficiently? Showing that it handles a larger number of objectives would be far more convincing. This paper therefore tests a ten-objective instance of the DTLZ2 problem proposed by Deb [12], which is scalable to any number of objectives. It is a genuinely challenging problem, well suited to judging the performance of MOCGA in a high-dimensional objective space. The optimization task considers a real-parameter function and investigates the EA's ability to scale up its performance to a large number of objectives. It is defined as follows.

Minimize f1(x) = (1 + g(Xn)) cos(x1π/2) cos(x2π/2) … cos(x_{n−2}π/2) cos(x_{n−1}π/2)
Minimize f2(x) = (1 + g(Xn)) cos(x1π/2) cos(x2π/2) … cos(x_{n−2}π/2) sin(x_{n−1}π/2)
…

Minimize f n (x )=(1+g (X n ))sin(x 1π/2) (15) where g (X n )=∑∈?n

i X x i x 2)5.0(, with X n =[x n ,…,x m ], and 0≤x i ≤1, for i =1,2,…,m , with m =n +k ?1. As the above definitions, here n is the number of objectives, m is the number of variable, and k is a difficulty parameter. In this study n =10, m =11, and k =2. The function code design is followed in C language:

Pi = 4.0*atan(1.0);
xkk = 0.0;
for (j = n-1; j < m; j++)               /* g(X_n): sum over x_n, ..., x_m */
    xkk += (x[j]-0.5)*(x[j]-0.5);
for (j = 0; j < n; j++) {               /* f[0] = f_1, ..., f[n-1] = f_n  */
    temp = 1.0;
    for (i = 0; i < n-1-j; i++)
        temp *= cos(x[i]*Pi/2.0);
    f[j] = (1.0+xkk)*temp * ((j > 0) ? sin(x[n-1-j]*Pi/2.0) : 1.0);
}

The Pareto-optimal solutions correspond to xi* = 0.5 (xi* ∈ Xn), and all objective function values must satisfy Σ_{i=1}^{n} (fi*)² = 1. MOCGA was compared with the important NSGA-II algorithm proposed by Deb [7]. In Deb's view, the distribution of solutions obtained with NSGA-II on this problem (n = 10) is poor [12]. That view was validated here, whereas MOCGA obtained a good approximation set. If N_ds denotes the size of the nondominated set (N′) and the theoretical value of Σ_{i=1}^{n} (fi*)² is taken as 1.0, an evaluation measure can be constructed from the average deviation:

Δ = Σ_{j∈N′} | Σ_{i=1}^{n} (fi^(j))² − 1.0 | / N_ds    (16)

The benchmark problem was tested using the same genetic parameters as in the 4-D experiment. Table 2 shows the ten comparison results (Δ) after 100 generations. From these results, the Pareto-optimal solutions of MOCGA deviate only slightly, whereas those of NSGA-II deviate substantially; NSGA-II did not work well on this problem, as Deb observed [12]. NSGA-II is a famous algorithm, used in many studies, for solving search and optimization problems involving multiple conflicting objectives; according to this paper, however, its selection mechanism is not well suited to high-dimensional objective optimization problems. This result also supports the point of Fonseca and Fleming that pure Pareto EAs cannot perform well on problems involving many objectives [3]. MOCGA sorts individuals by a weak dominance relation into which preference information is integrated. After all individuals are compared under weak dominance, some are selected as local Pareto-optimal solutions of the current population. Thus, after a finite number of generations, many Pareto-based solution points are obtained that approximate the front of the optimal set.

4.2 Analysis

It should be emphasized that the overall performance of MOCGA is not superior on all MOPs, despite its success on these numeric problems. Any algorithmic approach is bound to have some advantages and shortfalls when applied to certain problems, as proved by the NFL theorems [4]. Because many concordance and discordance checks must be performed in determining the outranking relations, the computational complexity of MOCGA is increased. Thus MOCGA is well suited to high-dimensional optimization problems but may not be the best choice for MOPs of low dimensionality. Such multi-objective problems grow harder as more objectives are added, so the extra computational cost of MOCGA is offset by the difficulty of the problem, and its significance is then obvious. Moreover, when there is no real-time requirement and computation time is not critical, the computational complexity of MOCGA is acceptable to users.

5 Convergence Property

5.1 Regular MOEA theory

During a genetic process, the local set of Pareto optimal solutions determined at the t-th generation, with respect to the current population, is termed P_current(t). These solutions are added to a secondary population termed P_known(t), and the process continues until termination. P_known(0) is always an empty set. Some MOEAs, including MOCGA, use an external set to store the known optimal solutions. All corresponding vectors of P_known(t) are tested at each generation, and the dominated solutions are removed. The union P_current(t) ∪ P_known(t) creates the optimal solutions P_known(t+1) of the next generation. The result is a final set of Pareto optimal solutions found by the algorithm, denoted P_known without the symbol t. The actual Pareto optimal solution set (termed P_true) is usually unknown for a problem; P_true is fixed and does not change from generation to generation.

P_current(t) and P_known(t) are sets of EA genotypes. Fitness is judged via phenotypes, i.e., points on a Pareto front. The Pareto fronts associated with the above sets are denoted PF_current(t), PF_known(t) and PF_true, respectively. The global optimum of an MOP is a set of vectors; thus the Pareto front PF_true, determined by evaluating the Pareto optimal set P_true, is the global optimum of the MOP.

5.2 MOCGA convergence

Assumption 1. When using an EA to solve MOPs, the implicit qualification is that one of the following holds: P_known = P_true, P_known ⊂ P_true, or PF_known ∈ [PF_true − ε, PF_true] over some norm.

Lemma 1. Under the condition of Assumption 1, given any non-empty solution set, at least one Pareto optimal solution exists within that set.

The proof of the lemma was presented by Van Veldhuizen and Lamont in Ref.[13].

Theorem 1 (MOCGA convergence). MOCGA with an infinite population converges to the global optimum of an MOP with probability one. Namely, for a Pareto front F*, composed of at most an infinite number of vectors V1*, V2*, …, Vn*:

P{ F* ∈ lim_{t→∞} P_current(t) } = 1    (17)

Proof. It has been proved that an EA converges with probability one if it fulfills the following two conditions [13]: (1) ∀F, F′ ∈ I, F′ is reachable from F by means of mutation and recombination; (2) the population sequence P(0), P(1), … is monotone, i.e., ∀t:

Min{Φ(F(t+1)) | F(t+1) ∈ P(t+1)} ≤ Min{Φ(F(t)) | F(t) ∈ P(t)}    (18)

MOCGA is assumed to have infinite precision, provided that it operates on real values, a minimization MOP, an infinite population size, and appropriate mutation and crossover operators allowing every point in the search space to be visited. Thus ∀F, F′ ∈ I, F′ is reachable from F, and MOCGA satisfies Bäck's condition (1).

To prove the monotone character of the population sequence in MOCGA, both its fitness function and its selection must be shown to be monotone. When Pareto-based fitness assignment is used, any given pair of Pareto optimal solutions receives identical fitness values, and both receive better fitness than dominated solutions. Therefore any fitness function assigning fitness in this way, as MOCGA's does, is monotone.

Ideally, MOCGA sorts the individuals of a current population of infinite size. The minimal dominance set produced by the ELECTRE method forms P_current(t), which consists at least of local Pareto-optimal solutions and is never empty. After the union P_current(t) ∪ P_known(t) is formed and the dominated solutions in it are removed, the new P_known(t+1) can be considered a subset of P_true, i.e., P_known(t+1) ⊆ P_true. Thus MOCGA satisfies Assumption 1.

The selection of MOCGA is naturally classified as a (μ+λ) selection strategy. The plus strategy applies elitist selection, choosing the μ best individuals from the λ children and μ parents together as parents of the next generation. According to Lemma 1, at least one Pareto optimal solution exists within P_known(t). That solution is retained in P_known(t+1), so the best fitness at the (t+1)-th generation is at least equal to that at the t-th generation. Moreover, better solutions may appear in P_current(t) through crossover and mutation, and these obtain better fitness in P_known(t+1) after the union P_current(t) ∪ P_known(t). The evolutionary results at the (t+1)-th generation are thus at least not inferior to those at the t-th generation. Consequently, the selection strategy of MOCGA is monotone.

MOCGA satisfies the above two conditions; therefore, it converges to the global optimum of an MOP with probability one, and Theorem 1 holds.

6 Conclusions

The paper proposes a feasible preference-based multi-objective evolutionary algorithm for multi-objective optimization problems. The algorithm differs from existing MOEAs in that it uses a different selection strategy, based on weak dominance comparison between any two individuals. The results on test problems from the literature show that the proposed MOCGA can scale up its performance to a large number of objectives. The stochastic convergence property of MOCGA is also analyzed. Experimental results on multi-objective optimization benchmark problems demonstrate the validity of the new algorithm.

References:

[1] Van Veldhuizen DA, Lamont GB. Multiobjective evolutionary algorithms: Analyzing the state-of-the-art. Evolutionary Computation, 2000,8(2):125–147.
[2] Coello CAC. List of references on evolutionary multiobjective optimization. http://www.lania.mx/~ccoello/EMOO/EMOObib.html
[3] Fonseca CM, Fleming PJ. An overview of evolutionary algorithms in multiobjective optimization. Evolutionary Computation, 1995,3(1):1–16.
[4] Wolpert DH, Macready WG. No free lunch theorems for optimization. IEEE Trans. on Evolutionary Computation, 1997,1(1):67–82.
[5] Fonseca CM, Fleming PJ. Genetic algorithms for multiobjective optimization: Formulation, discussion and generalization. In: Forrest S, ed. Proc. of the 5th Int'l Conf. on Genetic Algorithms. San Mateo: Morgan Kaufmann Publishers, 1993. 416–423.
[6] Zitzler E, Thiele L. Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Trans. on Evolutionary Computation, 1999,3(4):257–271.
[7] Deb K, Pratap A, Agarwal S, Meyarivan T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. on Evolutionary Computation, 2002,6(2):182–197.
[8] Cvetković D, Parmee IC. Designer's preferences and multi-objective preliminary design processes. In: Parmee IC, ed. Proc. of the 4th Int'l Conf. on Adaptive Computing in Design and Manufacture. London: Springer, 2000. 249–260.
[9] Cvetković D, Parmee IC. Preferences and their application in evolutionary multiobjective optimization. IEEE Trans. on Evolutionary Computation, 2002,6(1):42–57.
[10] Coello CAC. Handling preferences in evolutionary multiobjective optimization: A survey. In: Proc. of the Congress on Evolutionary Computation. Piscataway: IEEE Service Center, 2000. 30–37.
[11] Triantaphyllou E, Shu B, Nieto SS. Multi-criteria decision making: An operations research approach. In: Webster JG, ed. Encyclopedia of Electrical and Electronics Engineering. New York: John Wiley & Sons, 1998. 175–186.
[12] Deb K, Thiele L, Laumanns M, Zitzler E. Scalable multi-objective optimization test problems. In: Proc. of the 2002 Congress on Evolutionary Computation. Piscataway: IEEE Service Center, 2002. 825–830.
[13] Van Veldhuizen DA, Lamont GB. Evolutionary computation and convergence to a Pareto front. In: Koza JR, ed. Late Breaking Papers at the Genetic Programming 1998 Conf. Stanford: Stanford University Bookstore, 1998. 221–228.
