
BP Neural Network Principle and MATLAB Simulation

Xiong Xin, Nie Mingxin

School of Information Engineering, Wuhan University of Technology

Wuhan, P. R. China 430070


Abstract

This paper introduces the prevalent BP algorithm in neural networks, discusses the advantages, problems and training process of the BP neural network, and uses MATLAB to simulate digit recognition on this basis. Finally, several improved BP training algorithms are compared.

Keywords: BP neural network; digit recognition; MATLAB

1 Introduction

Neural networks have developed rapidly since the first neural network model, the MP model, was proposed in 1943 [1]. The Hopfield network proposed in 1982 and the error back-propagation algorithm proposed by Rumelhart in 1985 made the Hopfield model and the multilayer feedforward model the prevalent neural network models. They are effective in many application fields such as speech recognition, pattern recognition, image processing and industrial control.

A neural network is a theory that imitates biological information processing to achieve intelligent processing of information. It handles pattern information that is hard to express in explicit rules, working bottom-up in a parallel, distributed way formed by self-learning, self-organization and non-linear dynamics. A neural network is a parallel, distributed information-processing architecture. It is generally composed of massive numbers of neurons, each of which has a single output that can connect to many other neurons. The interaction between neurons is embodied in their connection weights, and a neuron's output is a function of its inputs. Commonly used activation functions include the linear function, the sigmoid function and the threshold function.
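As an illustration, here is a minimal MATLAB sketch of this neuron model (the input values, weights and bias are made-up numbers, not from the paper):

x = [0.5; -1.2; 0.3];         % inputs arriving from three other neurons
w = [0.8; -0.4; 0.1];         % connection weights (assumed values)
b = 0.2;                      % bias (threshold)
f = @(n) 1./(1 + exp(-n));    % sigmoid activation function
y = f(w'*x + b)               % the neuron's single output value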

The learning process of a BP neural network has two phases: forward propagation and error back-propagation [2]. An external input signal is processed layer by layer by the neurons of the input layer and hidden layer, spreads to the output layer, and produces the result. If the expected output is not obtained at the output layer, the process switches to back-propagation: the error between the target value and the network output is propagated back along the original connection paths. The error is reduced by modifying the connection weights of the neurons in every layer; the process then returns to forward propagation and iterates until the error is smaller than a given value. Take a three-layer network as an example, composed of N input neurons, K hidden neurons and M output neurons (as shown in Fig. 1). O2pm and O1pk are the output values of the output layer and hidden layer respectively; w2km and w1nk are the connection weights from the hidden layer to the output layer and from the input layer to the hidden layer respectively. Suppose the input training sample is Xpn; its corresponding expected output value is tpm.

Fig.1 BP neural network configuration
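A minimal MATLAB sketch of one forward pass through this N-K-M network, using the notation above (layer sizes and values are assumed for illustration):

N = 5; K = 4; M = 3;              % assumed layer sizes
Xp = rand(N,1);                   % one input sample Xpn
w1 = randn(N,K); w2 = randn(K,M); % w1nk and w2km, randomly initialized
f = @(n) 1./(1 + exp(-n));        % sigmoid activation
O1p = f(w1'*Xp);                  % hidden-layer outputs O1pk
O2p = f(w2'*O1p);                 % output-layer outputs O2pm
tp = rand(M,1);                   % expected outputs tpm
E = 0.5*sum((tp - O2p).^2)        % squared error to be back-propagated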

2 BP Neural Network

2.1 The Discussion about the Advantages and Disadvantages of BP Neural Network

The BP neural network is currently the most widely applied form of neural network [3], but it is far from perfect. To understand how to apply it to solve problems, we discuss its advantages and disadvantages here.

The advantages of the BP neural network:

① The network realizes a mapping from input to output, and mathematical theory has proved that it can approximate any complex non-linear mapping;

② The network can extract reasonable solution rules automatically by learning from examples with correct answers; it has the ability of self-learning;

③ The network has a certain ability of generalization.

The disadvantages of the BP neural network:

① The learning speed of the BP algorithm is very slow. The main causes are:

a. The BP algorithm is essentially a gradient descent method and the objective function it optimizes is very complex, so a "sawtooth phenomenon" is bound to appear, which makes the algorithm inefficient;

b. A paralysis phenomenon exists. Because the optimized objective function is very complex, flat regions appear where the neuron outputs approach 0 or 1. In these regions the weight error changes very little, which nearly brings the training process to a standstill;

c. To execute the BP algorithm, the traditional one-dimensional search method cannot be used to determine the iteration step at each update; the step-update rule must be given to the network in advance, which makes the algorithm inefficient.

② Network training is quite likely to fail, for the reasons below: a. From a mathematical perspective, the BP algorithm is a local search optimization method, but it is used to seek the global extremum of a complex non-linear function, so it is likely to fall into a local extremum and make the training fail;

b. The approximation and generalization abilities are closely linked to how representative the training samples are, and choosing a training set composed of typical samples is a hard problem.

③ The contradiction between the scale of the samples and that of the network is hard to resolve; this concerns the relationship between the possibility and feasibility of network capacity, i.e., the problem of learning complexity;

④ There is still no uniform and complete theoretical guidance for choosing the network architecture, which is generally selected by experience; for this reason some people call the structure selection of a neural network an art. The network structure directly affects the approximation ability and generalization, so how to choose an appropriate structure is an important problem;

⑤ New samples can affect a network that has already been trained successfully, and the number of features describing every input sample must be equal;

⑥ There is a contradiction between the predictive ability of the network (also called the generalization or extension ability) and its training ability (also called the approximation or learning ability). Usually when the training ability is poor, the predictive ability is also poor, and to a certain extent the predictive ability improves as the training ability improves. However, this trend has a limit: beyond it, the predictive ability declines as the training ability keeps improving, the so-called over-fitting phenomenon. At that point the network has learned too much detail of the samples and cannot reflect the underlying laws embedded in them.

2.2 BP Network Algorithm

The training process of a BP network is as follows [4] (a from-scratch MATLAB sketch is given after the list).

(1) Initialization: endow every connection weight and threshold with a small random value.

(2) Input a component of an eigenvector Xpk = (Xpk1, Xpk2, Xpk3, …, Xpkn) to the corresponding neurons in the input layer.

(3) Use the eigenvector of the input sample to calculate the corresponding outputs O1pk = f(Xpkn) of the neurons in the hidden layer.

(4) Use each hidden-layer output O1pk to calculate the input to each output-layer unit, and from it the corresponding output O2pm = f(O1pk) of each unit in the output layer.

(5) Calculate the generalized error of each unit in the output layer from the teaching signals.

(6) Use the connection weights w2km between the middle (hidden) layer and the output layer, the generalized error δi of each output-layer unit, and the output O1pk of each middle-layer unit to calculate the generalized error of each unit in the middle layer.

(7) Use the generalized error δj of each output-layer unit and the output O1pk of each middle-layer unit to modify the weights w2km between the output layer and the middle layer and the threshold Yj of each output-layer unit.

(8) Use the generalized error ei of each middle-layer unit and the input Xpkn of each input-layer unit to modify the weights w1nk between the input layer and the middle layer and the threshold θi of each hidden-layer unit.

(9) Select the next sample in order and return to step (2), until all samples in the training set have been learned.

(10) Return to step (2) again, until the error function falls below the predetermined value (the network converges) or the number of training iterations exceeds the predetermined value.
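As referenced above, the following is a from-scratch MATLAB sketch of steps (1)-(10) for a single hidden layer with sigmoid activations and plain gradient descent; the data, layer sizes and learning rate are assumed, and the thresholds (biases) are omitted for brevity:

N = 4; K = 6; M = 2; Q = 20;               % assumed sizes: Q training samples
X = rand(N,Q); T = double(rand(M,Q) > 0.5);% assumed samples and targets
W1 = 0.1*randn(N,K); W2 = 0.1*randn(K,M);  % (1) small random initialization
f = @(n) 1./(1 + exp(-n));                 % sigmoid activation
lr = 0.5; goal = 1e-3;
for epoch = 1:10000                        % (10) repeat until the error is small
    err = 0;
    for q = 1:Q                            % (9) take the samples in order
        x = X(:,q); t = T(:,q);            % (2) present one sample
        o1 = f(W1'*x);                     % (3) hidden-layer outputs O1pk
        o2 = f(W2'*o1);                    % (4) output-layer outputs O2pm
        d2 = (t - o2).*o2.*(1 - o2);       % (5) generalized error, output layer
        d1 = (W2*d2).*o1.*(1 - o1);        % (6) generalized error, hidden layer
        W2 = W2 + lr*o1*d2';               % (7) modify w2km
        W1 = W1 + lr*x*d1';                % (8) modify w1nk
        err = err + 0.5*sum((t - o2).^2);
    end
    if err < goal, break; end
end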

3 The Application of BP Networks in Digit Recognition

First, the digits 0~9 are digitized to constitute the input samples. A 0-1 image on a 5×5 grid represents each digit clearly; a neural network is then designed and trained to recognize the ten digits 0~9. When the trained network is given an input representing some digit, it reports that digit correctly through an 8421 BCD code at the output terminal. By learning and training, the network can remember all ten digits. The training is supervised: ten arrays representing the digits 0~9 are trained so that the corresponding four-bit binary code of each digit appears at the output terminal.

3.1 The Design of the Network Structure

From the analysis above, the neural network needs N = 5×5 input neurons and M = 4 output neurons. A two-layer logsig/logsig network, adopting the logarithmic sigmoid transfer function with range (0,1), is quite effective for 0/1 Boolean values. So the network adopts an N-K-M structure with one hidden layer, where N is the number of neurons in the input layer, K the number in the hidden layer and M the number in the output layer. The digits '0' to '9' can be represented by 0-1 charts as:

le0 = [1 1 1 1 1; 1 0 0 0 1; 1 0 0 0 1; 1 0 0 0 1; 1 1 1 1 1]
le1 = [0 0 1 0 0; 0 0 1 0 0; 0 0 1 0 0; 0 0 1 0 0; 0 0 1 0 0]
le2 = [1 1 1 1 1; 0 0 0 0 1; 1 1 1 1 1; 1 0 0 0 0; 1 1 1 1 1]
le3 = [1 1 1 1 1; 0 0 0 0 1; 1 1 1 1 1; 0 0 0 0 1; 1 1 1 1 1]
le4 = [1 0 0 0 1; 1 0 0 0 1; 1 1 1 1 1; 0 0 0 0 1; 0 0 0 0 1]
le5 = [1 1 1 1 1; 1 0 0 0 0; 1 1 1 1 1; 0 0 0 0 1; 1 1 1 1 1]
le6 = [1 1 1 1 1; 1 0 0 0 0; 1 1 1 1 1; 1 0 0 0 1; 1 1 1 1 1]
le7 = [1 1 1 1 1; 0 0 0 1 0; 0 0 1 0 0; 0 1 0 0 0; 1 0 0 0 0]
le8 = [1 1 1 1 1; 1 0 0 0 1; 1 1 1 1 1; 1 0 0 0 1; 1 1 1 1 1]
le9 = [1 1 1 1 1; 1 0 0 0 1; 1 1 1 1 1; 0 0 0 0 1; 1 1 1 1 1]

3.2 Network MATLAB Simulation

For the BP network above, we can use the functions of the MATLAB Neural Network Toolbox to simulate it [5]. The toolbox, built on artificial neural network theory and written in the MATLAB language, provides the typical excitation (transfer) functions of neural networks, such as the S-type (sigmoid), linear, competitive and saturated-linear functions. With these functions, the designer can reduce the output computation of the selected network to calls of the transfer functions.
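For example, the logarithmic sigmoid used below is an ordinary toolbox function that can be evaluated and plotted directly (assuming the Neural Network Toolbox is installed):

n = -5:0.1:5;
a = logsig(n);                 % same values as 1./(1 + exp(-n))
plot(n, a); title('logsig transfer function');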

The hidden layer is given 10 neurons based on experience, so the network structure is 25-10-4. The input layer has 25 neurons: each digit is represented by the 0-1 chart of a 5×5 matrix whose elements form a numeric column vector, and the 10 digits together form a 25×10 input matrix composed of the 10 column vectors. This matrix is assigned to the variable P:

P= [le0, le1, le2, le3, le4, le5, le6, le7, le8, le9]

The output layer has 4 neurons, because the target vector expects that when each digit is input, the correct four-bit binary code of 0~9 appears at the output terminal.

The design parameters of the network are: maximum training epochs 5000~20000 (depending on the training function); noise-free recognition training precision 0.1; learning rate 0.01; momentum constant 0.95; maximum error ratio 1.05.

The program is as follows:

le0=[1 1 1 1 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 1 1 1 1]';

le1=[0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0]';

le2=[1 1 1 1 1 0 0 0 0 1 1 1 1 1 1 1 0 0 0 0 1 1 1 1 1]';

le3=[1 1 1 1 1 0 0 0 0 1 1 1 1 1 1 0 0 0 0 1 1 1 1 1 1]';

le4=[1 0 0 0 1 1 0 0 0 1 1 1 1 1 1 0 0 0 0 1 0 0 0 0 1]';

le5=[1 1 1 1 1 1 0 0 0 0 1 1 1 1 1 0 0 0 0 1 1 1 1 1 1]';

le6=[1 1 1 1 1 1 0 0 0 0 1 1 1 1 1 1 0 0 0 1 1 1 1 1 1]';

le7=[1 1 1 1 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0]';

le8=[1 1 1 1 1 1 0 0 0 1 1 1 1 1 1 1 0 0 0 1 1 1 1 1 1]';

le9=[1 1 1 1 1 1 0 0 0 1 1 1 1 1 1 0 0 0 0 1 1 1 1 1 1]';

P=[le0,le1,le2,le3,le4,le5,le6,le7,le8,le9];

T=[0 0 0 0;0 0 0 1;0 0 1 0;0 0 1 1;0 1 0 0;0 1 0 1;0 1 1 0;0 1 1 1;1 0 0 0;1 0 0 1]';

%create a feedforward BP network whose training function is the
%Levenberg-Marquardt algorithm
net=newff(minmax(P),[10,4],{'logsig','logsig'},'trainlm');
%the maximum number of training epochs is 5000 here
net.trainParam.epochs=5000;
net.trainParam.goal=0.1;
%the following parameters take effect with the gradient-descent
%training functions of Table 1 (mc is the momentum constant,
%max_perf_inc the maximum error ratio); trainlm does not use them
net.trainParam.lr=0.01;
net.trainParam.mc=0.95;
net.trainParam.max_perf_inc=1.05;

net=train(net,P,T);

The resulting training curve is shown in Fig. 2, from which we can see that the network converges to the required precision very quickly. With the target convergence precision and the other training parameters unchanged, the simulations of several other BP training functions are compared; the results are shown in Table 1. From the table we can conclude that, without affecting precision, adopting the L-M optimized algorithm is the fastest.
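A sketch of how this comparison can be scripted, reusing P and T from the program above; the per-function epoch limits follow Table 1, and the actual epoch count is read from the training record returned by train:

funcs = {'traingd','traingda','traingdm','traingdx','trainlm'};
epochs = [10000, 5000, 20000, 5000, 5000];   % limits per Table 1
for i = 1:numel(funcs)
    net = newff(minmax(P),[10,4],{'logsig','logsig'},funcs{i});
    net.trainParam.epochs = epochs(i);
    net.trainParam.goal = 0.1;
    [net,tr] = train(net,P,T);               % tr records the training run
    fprintf('%-9s: %d epochs\n', funcs{i}, length(tr.epoch)-1);
end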

Fig.2 Training curve of the L-M algorithm

Table 1 Comparison of training effects of different BP algorithms

Training function | BP algorithm                                       | Max / actual epochs | Convergence precision
traingd           | gradient-descent BP                                | 10000 / 6842        | 0.0999982
traingda          | gradient-descent BP with adaptive lr               | 5000 / 130          | 0.0971766
traingdm          | gradient-descent BP with momentum                  | 20000 / 18373       | 0.0999995
traingdx          | gradient-descent BP with momentum and adaptive lr  | 5000 / 156          | 0.0957755
trainlm           | Levenberg-Marquardt BP                             | 5000 / 7            | 0.0994116

The weight adjustment of the Levenberg-Marquardt algorithm is [6]:

Δw = (J^T J + mI)^(-1) J^T E        (1)

where J is the Jacobian matrix formed by the derivatives of the errors with respect to the weights, E is the error vector, and m is a scalar that is adjusted adaptively during training. The method moves smoothly between two extreme cases: the Gauss-Newton method (when m → 0) and the steepest descent method (when m → ∞).
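A numerical sketch of one such adjustment in MATLAB (the Jacobian J, error vector E and scalar m are made-up values for illustration):

nE = 8; nW = 3;                       % assumed numbers of errors and weights
J = randn(nE,nW); E = randn(nE,1);    % Jacobian and error vector
m = 0.01;                             % the scalar m of Eq. (1)
dw = (J'*J + m*eye(nW)) \ (J'*E)      % weight adjustment of Eq. (1)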

4 Conclusions

Training the network with five kinds of BP algorithms shows that, in terms of convergence speed, the L-M algorithm is the fastest and its precision the highest, while the other algorithms do comparatively worse; the L-M algorithm is suitable for trainings with a large quantity of samples. In addition, the reliability of digit recognition with the neural network can be assessed by testing it with hundreds of vectors carrying random noise. If higher recognition precision is required, one can lengthen the network training and tighten the training error goal, or increase the number of neurons in the hidden layer. Alternatively, one can raise the resolution of the input vectors, for example by adopting a 16×16 grid.
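A sketch of such a noise test, reusing the trained net, P and T from Section 3 (the noise level 0.1 is an assumed value):

correct = 0; trials = 100;
for k = 1:trials
    Pn = P + 0.1*randn(size(P));           % noisy copies of the ten digits
    A = sim(net,Pn) > 0.5;                 % threshold outputs to binary code
    correct = correct + sum(all(A == (T > 0.5)));  % column-wise comparison
end
fprintf('recognized %d of %d noisy digits\n', correct, 10*trials);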

References:

[1] Ying Liandong. The Design and Application of BP Neural Network [J]. Information Technology, 2003, 27(6): 18-20.

[2] Ng S C, Cheung C C, Leung S H. Fast Convergence for Back-Propagation Network with Magnified Gradient Function [J]. IEEE, 2003, 9(3): 1903-1908.

[3] He Qingbi, Zhou Jianli. The Convergence and Improvements of BP Neural Network [J]. Journal of Chongqing Jiaotong University, 2005, 24(1): 143-145.

[4] Fan Lei, Zhang Yuntao, Chen Zhenjun. Application of Improved BP Neural Network Based on Matlab [J]. Journal of China West Normal University (Natural Sciences), 2005, 26(1): 70-73.

[5] Zhang Dexi, Bi Yuhua. The Application of MATLAB in Pattern Recognition [J]. Journal of Xuchang Teachers College, 2002, 21(5): 43-46.

[6] Yang Zhongjin, Shi Zhongke. Architecture Optimization for Neural Network [J]. Project and Application of Computer, 2004, 25: 52-53.
