AdaBoost (Complete Collection)


MATLAB Implementation of the AdaBoost Algorithm

clear all
clc

tr_n = 10;              % number of training samples (the toy set below has 10 points)
te_n = 10;              % number of test samples
weak_learner_n = 20;    % number of weak learners

tr_set = [1,5; 2,3; 3,2; 4,6; 4,7; 5,9; 6,5; 6,7; 8,5; 8,8];
te_set = [1,5; 2,3; 3,2; 4,6; 4,7; 5,9; 6,5; 6,7; 8,5; 8,8];
tr_labels = [2,2,1,1,2,2,1,2,1,1];
te_labels = [2,2,1,1,2,2,1,2,1,1];

figure;
subplot(2,2,1); hold on; axis square;
indices = tr_labels == 1;
plot(tr_set(indices,1), tr_set(indices,2), 'b*');
indices = ~indices;
plot(tr_set(indices,1), tr_set(indices,2), 'r*');
title('Training set');

subplot(2,2,2); hold on; axis square;
indices = te_labels == 1;
plot(te_set(indices,1), te_set(indices,2), 'b*');
indices = ~indices;
plot(te_set(indices,1), te_set(indices,2), 'r*');
title('Test set');

% Training and testing error rates
tr_error = zeros(1, weak_learner_n);
te_error = zeros(1, weak_learner_n);
for i = 1:weak_learner_n
    adaboost_model = adaboost_tr(@threshold_tr, @threshold_te, tr_set, tr_labels, i);
    [L_tr, hits_tr] = adaboost_te(adaboost_model, @threshold_te, tr_set, tr_labels);
    tr_error(i) = (tr_n - hits_tr) / tr_n;
    [L_te, hits_te] = adaboost_te(adaboost_model, @threshold_te, te_set, te_labels);
    te_error(i) = (te_n - hits_te) / te_n;
end

subplot(2,2,3);
plot(1:weak_learner_n, tr_error);
axis([1, weak_learner_n, 0, 1]);
title('Training Error'); xlabel('weak classifier number'); ylabel('error rate'); grid on;

subplot(2,2,4); axis square;
plot(1:weak_learner_n, te_error);
axis([1, weak_learner_n, 0, 1]);
title('Testing Error'); xlabel('weak classifier number'); ylabel('error rate'); grid on;

Two further functions have to be written separately: a training function (adaboost_tr) that builds the AdaBoost model, and a test function (adaboost_te) that evaluates a labelled sample set.
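The script assumes those two helpers without listing them. A minimal sketch of what adaboost_te might look like, assuming (this is not given in the original) that the model is a struct array with one element per boosting round holding the weak-classifier parameters and its voting weight, and that labels are coded 1/2 as in the script:

% Sketch of the test function referenced above. Assumed model layout:
%   model(k).weak   -- parameters of the k-th weak classifier
%   model(k).alpha  -- its voting weight
% weak_te is the weak-learner test handle with signature labels = weak_te(weak, samples),
% returning labels in {1,2}.
function [L, hits] = adaboost_te(model, weak_te, te_set, te_labels)
    n = size(te_set, 1);
    score = zeros(n, 1);                        % weighted-vote accumulator
    for k = 1:numel(model)
        pred = weak_te(model(k).weak, te_set);  % labels in {1,2}
        pred = 2 * (pred(:) - 1.5);             % map {1,2} -> {-1,+1}
        score = score + model(k).alpha * pred;
    end
    L = (sign(score) + 3) / 2;                  % map {-1,+1} back to {1,2}
    L(score == 0) = 1;                          % arbitrary tie-break
    hits = sum(L(:) == te_labels(:));           % number of correctly classified samples
end

adaboost_tr would be the mirror image: run the boosting loop and store model(k).weak and model(k).alpha for each round; a full training sketch appears later in this collection.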


AdaBoost and the Bagging Ensemble Algorithm

Introduction: before introducing AdaBoost it helps to understand a similar algorithm, bagging. Bagging is a technique for raising classification accuracy: a committee of classifiers votes, and the majority answer is taken as the best solution.

For example, suppose you are ill and visit n doctors in n hospitals, and each doctor writes you a prescription. Among the final results, the prescription that appears most often is the one most likely to be the best answer. That is easy to understand.

Bagging is exactly this idea.

Algorithm idea: the core idea of AdaBoost is still based on bagging, but with a small improvement. Above, every doctor's vote counts the same, so all of them have equal standing. If a weight is added, giving doctors in big cities a higher weight and doctors in small towns a lower one, then deciding by the weighted sum is more reasonable. That is the AdaBoost algorithm.

AdaBoost is an iterative algorithm: it stops only when the final classification error rate falls below a threshold. Different classifiers, called weak classifiers, are trained on the same training set and are finally combined as a weighted sum into a composite classifier — the strong classifier.

The main steps of the algorithm: 1. train a classifier Ci on the training set D; 2. classify the data with Ci and compute the error rate at this point; 3. raise the weights of the data misclassified in the previous step and lower the weights of the correctly classified data, so that the misclassified data stands out.

Why this is done will be explained later.

The complete AdaBoost algorithm is given below. The final sign function is the signum function: if the final value is positive the example is assigned to class +1, otherwise to class −1.

Let us plug an example into the procedure above to make it easier to understand.

AdaBoost walk-through: in the figure, "+" and "−" denote the two classes, and horizontal or vertical lines are used as the classifiers.

Step 1: according to the classification accuracy we obtain a sub-classifier h1 and a new sample distribution D2; the circled samples are the ones that were misclassified.

In the figure on the right, a larger "+" means that sample has been up-weighted.

The algorithm starts from a uniform distribution D.

So under h1 every point has the value 0.1.

After the split, three points are misclassified. By the error expression of the algorithm, the error is the sum of the values of the three misclassified points, so ε1 = 0.1 + 0.1 + 0.1 = 0.3, and from the weight expression α1 works out to about 0.42. The algorithm then increases the weights of the misclassified points.
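These numbers are easy to check. A small MATLAB calculation using the standard formulas α = ½ ln((1 − ε)/ε) and the exponential weight update; which three points are misclassified is hypothetical here, only the count matters:

d1 = ones(1, 10) / 10;                  % uniform initial distribution over 10 points
miss = [1 0 0 1 0 0 1 0 0 0];           % hypothetical indicator of the 3 misclassified points
eps1 = sum(d1(miss == 1))               % = 0.3
alpha1 = 0.5 * log((1 - eps1) / eps1)   % = 0.4236...
d2 = d1 .* exp(alpha1 * (2*miss - 1));  % errors multiplied by e^alpha, correct points by e^-alpha
d2 = d2 / sum(d2)                       % misclassified points rise to 1/6, the others drop to 1/14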

AdaBoost

Setting: training examples (x_n, y_n), n = 1, ..., N, with x_n ∈ R^d and y_n ∈ {±1}; the data come from an unknown distribution, x ∼ p(x) and y = f(x) (or y ∼ P(y | x)), and P(x, y) is unknown! Given a new x, predict y.

AdaBoost: Introduction
The Model: • Hypothesis class: • Loss:
Idea: • Simple hypotheses are not perfect! • Combining hypotheses increases accuracy.
Problems: • How to generate different hypotheses? • How to combine them?
Method: • Compute a distribution d_1, ..., d_N on the examples. • Find a hypothesis on the weighted training sample.

AdaBoost: Framework — Algorithm, for t = 1, ..., T:
1. Train a weak hypothesis h_t on the weighted sample.
2. Compute its weighted error ε_t = Σ_{n=1}^N d_n^(t) I(y_n ≠ h_t(x_n)).
3. Compute the hypothesis weight α_t = ½ ln((1 − ε_t)/ε_t).
4. Update the example distribution d_n^(t+1) = d_n^(t) exp(−α_t y_n h_t(x_n)) / Z_t, where Z_t is a normalization factor.
Output: final hypothesis f_Ens(x) = Σ_{t=1}^T α_t h_t(x).

Worked numbers from the slides: with N = 10 examples and initial weights d_n^(1) = 1/10, a first weak learner that misclassifies three of them has ε_1 = Σ_{n=1}^N d_n^(1) I(y_n ≠ h_1(x_n)) = 0.3.
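The "find a hypothesis on the weighted training sample" step can be realized either by a base learner that accepts example weights directly, or, when it cannot, by weighted resampling (a mechanism the book-chapter excerpt later in this collection also mentions). A minimal MATLAB sketch of the resampling route; inverse-CDF sampling and the function name are my own choices:

% Draw N example indices with probability proportional to the weights d; a base
% learner trained on the resampled (unweighted) data approximates training on
% the weighted sample.
function idx = weighted_resample(d, N)
    c = cumsum(d(:)) / sum(d);                   % cumulative distribution over examples
    idx = zeros(N, 1);
    for k = 1:N
        idx(k) = find(rand() <= c, 1, 'first');  % inverse-CDF sampling
    end
end

Usage would look like idx = weighted_resample(d, N); h_t = some_base_learner(X(idx,:), y(idx)), with some_base_learner standing in for whatever weak learner is used.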

One of the Top Ten Classic Data Mining Algorithms: AdaBoost (with detailed code)

Overview: AdaBoost is a boosting method that combines multiple weak classifiers into a strong classifier.

AdaBoost is short for "Adaptive Boosting" and was proposed by Yoav Freund and Robert Schapire in 1995.

Its adaptivity lies in this: samples misclassified by the previous weak classifier have their weights (the weights attached to the samples) strengthened, and the re-weighted samples are used again to train the next weak classifier.

In each round of training, a new weak classifier is trained on the whole (re-weighted) sample set, producing new sample weights and the "say" (voting weight) of that weak classifier; iteration continues until a preset error rate is reached or the specified maximum number of iterations is hit.

The relationship among population, sample, and individual needs to be kept clear. Population: N. Samples: {n_i}, i from 1 to M. Individual: e.g. n_1 = (1, 2); sample n_1 contains two individual components.

Algorithm outline:

(1) Initialize the weight distribution of the training data (each sample): with N samples, every training point is initially assigned the same weight, 1/N.

(2) Train a weak classifier.

During training, if a sample has already been classified correctly, its weight is lowered when the next training set is constructed; conversely, if a sample has not been classified correctly, its weight is raised.

At the same time, the "say" (voting weight) of the weak classifier is obtained.

The re-weighted sample set is then used to train the next classifier, and the whole training process iterates in this way.

(3) Combine the trained weak classifiers into a strong classifier.
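A compact MATLAB sketch of steps (1)–(3), using decision stumps as the weak classifiers; the stump choice, the function names, and the {−1, +1} label coding are my assumptions, not something fixed by the article:

% AdaBoost training: X is N x d, y is N x 1 with labels in {-1, +1}.
function model = adaboost_train(X, y, T)
    N = size(X, 1);
    w = ones(N, 1) / N;                            % (1) equal initial weights 1/N
    model = struct('stump', {}, 'alpha', {});
    for t = 1:T
        [stump, eps_t] = train_stump(X, y, w);     % (2) weak classifier on weighted data
        if eps_t >= 0.5, break; end                % no better than chance: stop
        eps_t = max(eps_t, 1e-12);
        alpha = 0.5 * log((1 - eps_t) / eps_t);    % the classifier's "say"
        pred  = stump_predict(stump, X);
        w = w .* exp(-alpha * (y .* pred));        % raise weights of misclassified samples
        w = w / sum(w);                            % renormalize to a distribution
        model(end + 1) = struct('stump', stump, 'alpha', alpha); %#ok<AGROW>
    end
end

% (3) Strong classifier: sign of the weighted vote of the weak classifiers.
function yhat = adaboost_predict(model, X)
    score = zeros(size(X, 1), 1);
    for t = 1:numel(model)
        score = score + model(t).alpha * stump_predict(model(t).stump, X);
    end
    yhat = sign(score);
end

function pred = stump_predict(stump, X)
    pred = stump.polarity * sign(X(:, stump.dim) - stump.thresh + eps);
end

function [stump, err] = train_stump(X, y, w)
    % exhaustive search over feature, threshold and polarity for the
    % minimum weighted 0/1 error
    err = inf;
    for j = 1:size(X, 2)
        for th = unique(X(:, j))'
            for p = [-1, 1]
                pred = p * sign(X(:, j) - th + eps);
                e = sum(w(pred ~= y));
                if e < err
                    err = e;
                    stump = struct('dim', j, 'thresh', th, 'polarity', p);
                end
            end
        end
    end
end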

Machine Learning Algorithm Series (6): AdaBoost

Ensemble learning combines multiple learners and can achieve generalization performance markedly better than that of any single learner. It rests on the idea that, for a complex task, suitably combining the judgments of several experts gives a better judgment than any single expert alone — intuitively, "three cobblers together beat Zhuge Liang." Having several decision makers jointly decide the class of an instance improves the classifier's generalization ability.

2.2 The AdaBoost Algorithm Procedure

We now state the AdaBoost algorithm. Given a binary classification training set T = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}, where each label belongs to the binary label set, i.e. y_i ∈ {+1, −1}, AdaBoost uses the following procedure to learn a sequence of weak (base) classifiers from the training data and combine them linearly into a strong classifier.

Step 1: initialize the weight distribution over the training data. Every training sample is initially given the same weight 1/N, so that each sample plays the same role when the first base classifier is learned; this guarantees that step 1 can learn the base classifier G_1(x) on the original data. Formally,
D_1 = (w_{11}, w_{12}, ..., w_{1i}, ..., w_{1N}), with w_{1i} = 1/N, i = 1, 2, ..., N.

From the discussion of the weight update: the weight of a misclassified sample is amplified by a factor of e^{2α_m} = (1 − e_m)/e_m, so misclassified samples play a larger role in the next round of learning. The training data itself is never changed; only the distribution of weights over the training data keeps changing, so that the same data plays different roles in the learning of the base classifiers — this is one characteristic of AdaBoost. A normalization factor Z_m is also introduced; its purpose is to make D_{m+1} a probability distribution. Weak classifiers with low error rates receive larger weights in the final classifier and those with higher error rates receive smaller weights; this yields the final classifier.

AdaBoost — Artificial Intelligence Lecture Notes

Training-error bound: the training error of the final classifier satisfies
R_tr(H_final) ≤ exp(−2 Σ_{t=1}^T γ_t²),
where γ_t = 1/2 − ε_t. Writing γ_t ≥ γ with γ > 0 for every round, this gives R_tr(H_final) ≤ e^{−2γ²T}.

Proof:
1. Expand D_{T+1} iteratively:
D_{T+1}(i) = D_T(i) · exp(−α_T y_i h_T(x_i)) / Z_T
= D_1(i) · exp(−y_i Σ_{t=1}^T α_t h_t(x_i)) / Π_{t=1}^T Z_t
= D_1(i) · exp(−y_i f(x_i)) / Π_{t=1}^T Z_t, where f(x) = Σ_{t=1}^T α_t h_t(x).
For example, in the figure below,
some line segments are needed to separate the red balls from the dark-blue balls; clearly a single straight line cannot separate them.
If a boosting algorithm is used for the partition, we first draw the line segment with the smallest error rate, as in figure (a). The dark-blue ball in the lower-left corner, however, is wrongly assigned to the red region, so the weight of the wrongly classified ball is increased. At the next split, the balls with large weights are taken into account more strongly, as in figure (c), and eventually an accurate partition is obtained, as shown in the figure. In other words, in a boosting algorithm the classification is not done by a single classifier; several classifiers are combined in a certain way to perform the partition. This is the AdaBoost algorithm.
Train a weak classifier h_t : X → {−1, +1}; the error of this weak classifier on the distribution D_t is
ε_t = Pr_{i∼D_t}[ h_t(x_i) ≠ y_i ].
Compute the weight of this weak classifier:
α_t = ½ ln((1 − ε_t) / ε_t).
Update the distribution over the training samples:
D_{t+1}(i) = D_t(i) · exp(−α_t y_i h_t(x_i)) / Z_t, where Z_t is a normalization constant.
The final strong classifier is
H_final(x) = sign( Σ_{t=1}^T α_t h_t(x) ).
Algorithm introduction (AdaBoost)
In the AdaBoost algorithm the different training sets are realized by adjusting the weight attached to each sample. At the beginning every sample has the same weight 1/n, where n is the number of samples, and a weak classifier is trained under this sample distribution. The weights of misclassified samples are then increased, and the weights of correctly classified samples are decreased, so the misclassified samples are highlighted and a new sample distribution is obtained. Under the new sample distribution the samples are trained again to obtain another weak classifier. Continuing in this way, after T rounds we have T weak classifiers, which are superimposed (boosted) with certain weights to give the desired strong classifier.

Top Ten Machine Learning Algorithms: AdaBoost

Chapter7AdaBoostZhi-Hua Zhou and Yang YuContents7.1Introduction (127)7.2The Algorithm (128)7.2.1Notations (128)7.2.2A General Boosting Procedure (129)7.2.3The AdaBoost Algorithm (130)7.3Illustrative Examples (133)7.3.1Solving XOR Problem (133)7.3.2Performance on Real Data (134)7.4Real Application (136)7.5Advanced Topics (138)7.5.1Theoretical Issues (138)7.5.2Multiclass AdaBoost (142)7.5.3Other Advanced Topics (145)7.6Software Implementations (145)7.7Exercises (146)References (147)7.1IntroductionGeneralization ability,which characterizes how well the result learned from a given training data set can be applied to unseen new data,is the most central concept in machine learning.Researchers have devoted tremendous efforts to the pursuit of tech-niques that could lead to a learning system with strong generalization ability.One of the most successful paradigms is ensemble learning[32].In contrast to ordinary machine learning approaches which try to generate one learner from training data, ensemble methods try to construct a set of base learners and combine them.Base learners are usually generated from training data by a base learning algorithm which can be a decision tree,a neural network,or other kinds of machine learning algorithms. Just like“many hands make light work,”the generalization ability of an ensemble is usually significantly better than that of a single learner.Actually,ensemble meth-ods are appealing mainly because they are able to boost weak learners,which are127128AdaBoostslightly better than random guess,to strong learners,which can make very accurate predictions.So,“base learners”are also referred as“weak learners.”AdaBoost[9,10]is one of the most influential ensemble methods.It took birth from the answer to an interesting question posed by Kearns and Valiant in1988.That is,whether two complexity classes,weakly learnable and strongly learnable prob-lems,are equal.If the answer to the question is positive,a weak learner that performs just slightly better than random guess can be“boosted”into an arbitrarily accurate strong learner.Obviously,such a question is of great importance to machine learning. Schapire[21]found that the answer to the question is“yes,”and gave a proof by construction,which is thefirst boosting algorithm.An important practical deficiency of this algorithm is the requirement that the error bound of the base learners be known ahead of time,which is usually unknown in practice.Freund and Schapire[9]then pro-posed an adaptive boosting algorithm,named AdaBoost,which does not require those unavailable information.It is evident that AdaBoost was born with theoretical signif-icance,which has given rise to abundant research on theoretical aspects of ensemble methods in communities of machine learning and statistics.It is worth mentioning that for their AdaBoost paper[9],Schapire and Freund won the Godel Prize,which is one of the most prestigious awards in theoretical computer science,in the year2003. 
AdaBoost and its variants have been applied to diverse domains with great success, owing to their solid theoretical foundation,accurate prediction,and great simplicity (Schapire said it needs only“just10lines of code”).For example,Viola and Jones[27] combined AdaBoost with a cascade process for face detection.They regarded rectan-gular features as weak learners,and by using AdaBoost to weight the weak learners, they got very intuitive features for face detection.In order to get high accuracy as well as high efficiency,they used a cascade process(which is beyond the scope of this chap-ter).As a result,they reported a very strong face detector:On a466MHz machine,face detection on a384×288image costs only0.067second,which is15times faster than state-of-the-art face detectors at that time but with comparable accuracy.This face detector has been recognized as one of the most exciting breakthroughs in computer vision(in particular,face detection)during the past decade.It is not strange that“boost-ing”has become a buzzword in computer vision and many other application areas. In the rest of this chapter,we will introduce the algorithm and implementations,and give some illustrations on how the algorithm works.For readers who are eager to know more,we will introduce some theoretical results and extensions as advanced topics.7.2The Algorithm7.2.1NotationsWefirst introduce some notations that will be used in the rest of the chapter.Let X denote the instance space,or in other words,feature space.Let Y denote the set of labels that express the underlying concepts which are to be learned.For example,we7.2The Algorithm129 let Y={−1,+1}for binary classification.A training set D consists of m instances whose associated labels are observed,i.e.,D={(x i,y i)}(i∈{1,...,m}),while the label of a test instance is unknown and thus to be predicted.We assume both training and test instances are drawn independently and identically from an underlying distribution D.After training on a training data set D,a learning algorithm L will output a hypoth-esis h,which is a mapping from X to Y,or called as a classifier.The learning process can be regarded as picking the best hypothesis from a hypothesis space,where the word“best”refers to a loss function.For classification,the loss function can naturally be0/1-loss,loss0/1(h|x)=I[h(x)=y]where I[·]is the indication function which outputs1if the inner expression is true and0otherwise,which means that one error is counted if an instance is wrongly classified.In this chapter0/1-loss is used by default,but it is noteworthy that other kinds of loss functions can also be used in boosting.7.2.2A General Boosting ProcedureBoosting is actually a family of algorithms,among which the AdaBoost algorithm is the most influential one.So,it may be easier by starting from a general boosting procedure.Suppose we are dealing with a binary classification problem,that is,we are trying to classify instances as positive and ually we assume that there exists an unknown target concept,which correctly assigns“positive”labels to instances belonging to the concept and“negative”labels to others.This unknown target concept is actually what we want to learn.We call this target concept ground-truth.For a binary classification problem,a classifier working by random guess will have50%0/1-loss. 
Suppose we are unlucky and only have a weak classifier at hand,which is only slightly better than random guess on the underlying instance distribution D,say,it has49%0/1-loss.Let’s denote this weak classifier as h1.It is obvious that h1is not what we want,and we will try to improve it.A natural idea is to correct the mistakes made by h1.We can try to derive a new distribution D from D,which makes the mistakes of h1more evident,for example,it focuses more on the instances wrongly classified by h1(we will explain how to generate D in the next section).We can train a classifier h2from D .Again,suppose we are unlucky and h2is also a weak classifier.Since D was derived from D,if D satisfies some condition,h2will be able to achieve a better performance than h1on some places in D where h1does not work well, without scarifying the places where h1performs well.Thus,by combining h1and h2in an appropriate way(we will explain how to combine them in the next section), the combined classifier will be able to achieve less loss than that achieved by h1.By repeating the above process,we can expect to get a combined classifier which has very small(ideally,zero)0/1-loss on D.130AdaBoostInput: Instance distribution D ; Base learning algorithm L ;Number of learning rounds T .Process:1. D 1 = D . % Initialize distribution2. for t = 1, ··· ,T :3. h t = L (D t ); % Train a weak learner from distribution D t4. єt = Pr x ~D t ,y I [h t (x )≠ y ]; % Measure the error of h t5. D t +1 = AdjustDistribution (D t , єt )6. endOutput: H (x ) = CombineOutputs ({h t (x )})Figure 7.1A general boosting procedure.Briefly,boosting works by training a set of classifiers sequentially and combining them for prediction,where the later classifiers focus more on the mistakes of the earlier classifiers.Figure 7.1summarizes the general boosting procedure.7.2.3The AdaBoost AlgorithmFigure 7.1is not a real algorithm since there are some undecided parts such as Ad just Distribution and CombineOutputs .The AdaBoost algorithm can be viewed as an instantiation of the general boosting procedure,which is summarized in Figure 7.2.Input: Data set D = {(x 1, y 1), (x 2, y 2), . . . , (x m , y m )};Base learning algorithm L ;Number of learning rounds T .Process:1. D 1 (i ) = 1/m . % Initialize the weight distribution2. for t = 1, ··· ,T :3. h t = L (D , D t ); % Train a learner h t from D using distribution D t4. єt = Pr x ~D t ,y I [h t (x )≠ y ]; % Measure the error of h t5. if єt > 0.5 then break6. αt = ½ ln ( ); % Determine the weight of h t7. D t +1 (i ) =8. 
endOutput: H (x ) = sign (Σt =1αt h t (x ))× {exp(–αt ) if h t (x i ) = y i exp(αt ) if h t (x i ) ≠ y i % Update the distribution, where % Z t is a normalization factor which% enables D t +1 to be distribution T 1– єt єt D t (i )Z t D t (i )exp(–αt y i h t (x i ))Z t Figure 7.2The AdaBoost algorithm.7.2The Algorithm 131Now we explain the details.1AdaBoost generates a sequence of hypotheses and combines them with weights,which can be regarded as an additive weighted combi-nation in the form of H (x )=T t =1αt h t (x )From this view,AdaBoost actually solves two problems,that is,how to generate the hypotheses h t ’s and how to determine the proper weights αt ’s.In order to have a highly efficient error reduction process,we try to minimize an exponential lossloss exp (h )=E x ∼D ,y [e −yh (x )]where yh (x )is called as the classification margin of the hypothesis.Let’s consider one round in the boosting process.Suppose a set of hypotheses as well as their weights have already been obtained,and let H denote the combined hypothesis.Now,one more hypothesis h will be generated and is to be combined with H to form H +αh .The loss after the combination will beloss exp (H +αh )=E x ∼D ,y [e −y (H (x )+αh (x ))]The loss can be decomposed to each instance,which is called pointwise loss,asloss exp (H +αh |x )=E y [e −y (H (x )+αh (x ))|x ]Since y and h (x )must be +1or −1,we can expand the expectation as loss exp (H +αh |x )=e −y H (x ) e −αP (y =h (x )|x )+e αP (y =h (x )|x ) Suppose we have already generated h ,and thus the weight αthat minimizes the loss can be found when the derivative of the loss equals zero,that is,∂loss exp (H +αh |x )∂α=e −y H (x ) −e −αP (y =h (x )|x )+e αP (y =h (x )|x ) =0and the solution isα=12ln P (y =h (x )|x )P (y =h (x )|x )=12ln 1−P (y =h (x )|x )P (y =h (x )|x )By taking an expectation over x ,that is,solving∂loss exp (H +αh )∂α=0,and denoting=E x ∼D [y =h (x )],we getα=12ln 1− which is the way of determining αt in AdaBoost.1Here we explain the AdaBoost algorithm from the view of [11]since it is easier to understand than the original explanation in [9].132AdaBoostNow let’s consider how to generate h.Given a base learning algorithm,AdaBoost invokes it to produce a hypothesis from a particular instance distribution.So,we only need to consider what hypothesis is desired for the next round,and then generate an instance distribution to achieve this hypothesis.We can expand the pointwise loss to second order about h(x)=0,whenfixing α=1,loss exp(H+h|x)≈E y[e−y H(x)(1−yh(x)+y2h(x)2/2)|x]=E y[e−y H(x)(1−yh(x)+1/2)|x]since y2=1and h(x)2=1.Then a perfect hypothesis ish∗(x)=arg minh loss exp(H+h|x)=arg maxhE y[e−y H(x)yh(x)|x]=arg maxhe−H(x)P(y=1|x)·1·h(x)+e H(x)P(y=−1|x)·(−1)·h(x) Note that e−y H(x)is a constant in terms of h(x).By normalizing the expectation ash∗(x)=arg maxh e−H(x)P(y=1|x)·1·h(x)+e H(x)P(y=−1|x)·(−1)·h(x)e P(y=1|x)+e P(y=−1|x)we can rewrite the expectation using a new term w(x,y),which is drawn from e−y H(x)P(y|x),ash∗(x)=arg maxhE w(x,y)∼e−y H(x)P(y|x)[yh(x)|x]Since h∗(x)must be+1or−1,the solution to the optimization is that h∗(x)holds the same sign with y|x,that is,h∗(x)=E w(x,y)∼e−y H(x)P(y|x)[y|x]=P w(x,y)∼e−y H(x)P(y|x)(y=1|x)−P w(x,y)∼e−y H(x)P(y|x)(y=−1|x)As can be seen,h∗simply performs the optimal classification of x under the distri-bution e−y H(x)P(y|x).Therefore,e−y H(x)P(y|x)is the desired distribution for a hypothesis minimizing0/1-loss.So,when the hypothesis h(x)has been learned andα=12ln1−has been deter-mined in the current round,the distribution 
for the next round should beD t+1(x)=e−y(H(x)+αh(x))P(y|x)=e−y H(x)P(y|x)·e−αyh(x)=D t(x)·e−αyh(x)which is the way of updating instance distribution in AdaBoost.But,why optimizing the exponential loss works for minimizing the0/1-loss? Actually,we can see thath∗(x)=arg minh E x∼D,y[e−yh(x)|x]=12lnP(y=1|x)P(y=−1|x)7.3Illustrative Examples133 and therefore we havesign(h∗(x))=arg maxyP(y|x)which implies that the optimal solution to the exponential loss achieves the minimum Bayesian error for the classification problem.Moreover,we can see that the function h∗which minimizes the exponential loss is the logistic regression model up to a factor 2.So,by ignoring the factor1/2,AdaBoost can also be viewed asfitting an additive logistic regression model.It is noteworthy that the data distribution is not known in practice,and the AdaBoost algorithm works on a given training set withfinite training examples.Therefore,all the expectations in the above derivations are taken on the training examples,and the weights are also imposed on training examples.For base learning algorithms that cannot handle weighted training examples,a resampling mechanism,which samples training examples according to desired weights,can be used instead.7.3Illustrative ExamplesIn this section,we demonstrate how the AdaBoost algorithm works,from an illustra-tion on a toy problem to real data sets.7.3.1Solving XOR ProblemWe consider an artificial data set in a two-dimensional space,plotted in Figure7.3(a). There are only four instances,that is,⎧⎪⎪⎪⎨⎪⎪⎪⎩(x1=(0,+1),y1=+1) (x2=(0,−1),y2=+1) (x3=(+1,0),y3=−1) (x4=(−1,0),y4=−1)⎫⎪⎪⎪⎬⎪⎪⎪⎭This is the XOR problem.The two classes cannot be separated by a linear classifier which corresponds to a line on thefigure.Suppose we have a base learning algorithm which tries to select the best of the fol-lowing eight functions.Note that none of them is perfect.For equally good functions, the base learning algorithm will pick one function from them randomly.h1(x)=+1,if(x1>−0.5)−1,otherwise h2(x)=−1,if(x1>−0.5)+1,otherwiseh3(x)=+1,if(x1>+0.5)−1,otherwise h4(x)=−1,if(x1>+0.5)+1,otherwise134AdaBoost(a) The XOR data(b) 1st round(c) 2nd round(d) 3rd round Figure7.3AdaBoost on the XOR problem.h5(x)=+1,if(x2>−0.5)−1,otherwise h6(x)=−1,if(x2>−0.5)+1,otherwiseh7(x)=+1,if(x2>+0.5)−1,otherwise h8(x)=−1,if(x2>+0.5)+1,otherwisewhere x1and x2are the values of x at thefirst and second dimension,respectively. 
Now we track how AdaBoost works:1.Thefirst step is to invoke the base learning algorithm on the original data.h2,h3,h5,and h8all have0.25classification errors.Suppose h2is picked as thefirst base learner.One instance,x1,is wrongly classified,so the error is1/4=0.25.The weight of h2is0.5ln3≈0.55.Figure7.3(b)visualizes the classification, where the shadowed area is classified as negative(−1)and the weights of the classification,0.55and−0.55,are displayed.2.The weight of x1is increased and the base learning algorithm is invoked again.This time h3,h5,and h8have equal errors.Suppose h3is picked,of which the weight is0.80.Figure7.3(c)shows the combined classification of h2and h3with their weights,where different gray levels are used for distinguishing negative areas according to classification weights.3.The weight of x3is increased,and this time only h5and h8equally have thelowest errors.Suppose h5is picked,of which the weight is1.10.Figure7.3(d) shows the combined classification of h2,h3,and h8.If we look at the sign of classification weights in each area in Figure7.3(d),all the instances are correctly classified.Thus,by combining the imperfect linear classifiers,AdaBoost has produced a nonlinear classifier which has zero error.7.3.2Performance on Real DataWe evaluate the AdaBoost algorithm on56data sets from the UCI Machine Learning Repository,2which covers a broad range of real-world tasks.We use the Weka(will be introduced in Section7.6)implementation of AdaBoost.M1using reweighting with /∼mlearn/MLRepository.html7.3Illustrative Examples 135AdaBoost with decision tree (unpruned)AdaBoost with decision tree (pruned)AdaBoost with decision stumpD e c i s i o n s t u m p D e c i s i o n t r e e (p r u n e d )1.000.800.600.400.200.00 1.000.800.600.400.200.000.000.200.400.600.80 1.000.000.200.400.600.80 1.00D e c i s i o n t r e e (u n p r u n e d )1.000.800.600.400.200.000.000.200.400.600.80 1.00Figure 7.4Comparison of predictive errors of AdaBoost against decision stump,pruned,and unpruned single decision trees on 56UCI data sets.50base learners.Almost all kinds of learning algorithms can be taken as base learning algorithms,such as decision trees,neural networks,and so on.Here,we have tried three base learning algorithms,including decision stump,pruned,and unpruned J4.8decision trees (Weka implementation of C4.5).We plot the comparison results in Figure 7.4,where each circle represents a data set and locates according to the predictive errors of the two compared algorithms.In each plot of Figure 7.4,the diagonal line indicates where the two compared algorithms have identical errors.It can be observed that AdaBoost often outperforms its base learning algorithm,with a few exceptions on which it degenerates the performance.The famous bias-variance decomposition [12]has been employed to empirically study why AdaBoost achieves excellent performance [2,3,34].This powerful tool breaks the expected error of a learning approach into the sum of three nonnegative quantities,that is,the intrinsic noise,the bias,and the variance.The bias measures how closely the average estimate of the learning approach is able to approximate the target,and the variance measures how much the estimate of the learning approach fluctuates for the different training sets of the same size.It has been observed [2,3,34]that AdaBoost primarily reduces the bias but it is also able to reduce the variance.136AdaBoostFigure7.5Four feature masks to be applied to each rectangle.7.4Real ApplicationViola and Jones[27]combined AdaBoost with 
a cascade process for face detection. As the result,they reported that on a466MHz machine,face detection on a384×288 image costs only0.067seconds,which is almost15times faster than state-of-the-art face detectors at that time but with comparable accuracy.This face detector has been recognized as one of the most exciting breakthroughs in computer vision(in particular,face detection)during the past decade.In this section,we briefly introduce how AdaBoost works in the Viola-Jones face detector.Here the task is to locate all possible human faces in a given image.An image is first divided into subimages,say24×24squares.Each subimage is then represented by a feature vector.To make the computational process efficient,very simple features are used.All possible rectangles in a subimage are examined.On every rectangle, four features are extracted using the masks shown in Figure7.5.With each mask, the sum of pixels’gray level in white areas is subtracted by the sum of those in dark areas,which is regarded as a feature.Thus,by a24×24splitting,there are more than 1million features,but each of the features can be calculated very fast.Each feature is regarded as a weak learner,that is,h i,p,θ(x)=I[px i≤pθ](p∈{+1,−1})where x i is the value of x at the i-th feature.The base learning algorithm tries tofind the best weak classifier h i∗,p∗,θ∗that minimizes the classification error,that is,E(x,y)I[h i,p,θ(x)=y](i∗,p∗,θ∗)=arg mini,p,θFace rectangles are regarded as positive examples,as shown in Figure7.6,while rectangles that do not contain any face are regarded as negative examples.Then,the AdaBoost process is applied and it will return a few weak learners,each corresponds to one of the over1million features.Actually,the AdaBoost process can be regarded as a feature selection tool here.Figure7.7shows thefirst two selected features and their position relative to a human face.It is evident that these two features are intuitive,where thefirst feature measures how the intensity of the eye areas differ from that of the lower areas,while7.4Real Application137Figure7.6Positive training examples[27].the second feature measures how the intensity of the two eye areas differ from the area between two eyes.Using the selected features in order,an extremely imbalanced decision tree is built, which is called cascade of classifiers,as illustrated in Figure7.8.The parameterθis adjusted in the cascade such that,at each tree node,branching into“not a face”means that the image is really not a face.In other words,the false negative rate is minimized.This design owes to the fact that a nonface image is easier to be recognized,and it is possible to use a few features tofilter out a lot of candidate image rectangles,which endows the high efficiency.It was reported[27]that10 features per subimage are examined in average.Some test results of the Viola-Jones face detector are shown in Figure7.9.138AdaBoost7.5Advanced Topics7.5.1Theoretical IssuesComputational learning theory studies some fundamental theoretical issues of machine learning.First introduced by Valiant in1984[25],the Probably Approx-imately Correct(PAC)framework models learning algorithms in a distribution free manner.Roughly speaking,for binary classification,a problem is learnable or stronglytimelearnable if there exists an algorithm that outputs a hypothesis h in polynomial7.5Advanced Topics139Figure7.9Outputs of the Viola-Jones face detector on a number of test images[27]. 
such that for all0<δ, ≤0.5,PE x∼D,y[I[h(x)=y]]<≥1−δand a problem is weakly learnable if the above holds for all0<δ≤0.5but only when is slightly smaller than0.5(or in other words,h is only slightly better than random guess).In1988,Kearns and Valiant[15]posed an interesting question,that is,whether the strongly learnable problem class equals the weakly learnable problem class.This question is of fundamental importance,since if the answer is“yes,”any weak learner is potentially able to be boosted to a strong learner.In1989,Schapire[21]proved that the answer is really“yes,”and the proof he gave is a construction,which is thefirst140AdaBoostboosting algorithm.One year later,Freund[7]developed a more efficient algorithm. Both algorithms,however,suffered from the practical deficiency that the error bound of the base learners need to be known ahead of time,which is usually unknown in ter,in1995,Freund and Schapire[9]developed the AdaBoost algorithm, which is effective and efficient in practice.Freund and Schapire[9]proved that,if the base learners of AdaBoost have errors 1, 2,···, T,the error of thefinal combined learner, ,is upper bounded as=E x∼D,y I[H(x)=y]≤2TTt=1t t≤e−2Tt=1γ2twhereγt=0.5− t.It can be seen that AdaBoost reduces the error exponentially fast.Also,it can be derived that,to achieve an error less than ,the round T is upperbounded asT≤12γln1where it is assumed thatγ=γ1=γ2=···=γT.In practice,however,all the operations of AdaBoost can only be carried out on training data D,that is,D=E x∼D,y I[H(x)=y]and thus the errors are training errors,while the generalization error,that is,the error over instance distribution DD=E x∼D,y I[H(x)=y]is of more interest.The initial analysis[9]showed that the generalization error of AdaBoost is upperbounded asD≤ D+˜OdTmwith probability at least1−δ,where d is the VC-dimension of base learners,m is the number of training instances,and˜O(·)is used instead of O(·)to hide logarithmic terms and constant factors.The above bound suggests that in order to achieve a good generalization ability, it is necessary to constrain the complexity of base learners as well as the number of learning rounds;otherwise AdaBoost will overfit.However,empirical studies show that AdaBoost often does not overfit,that is,its test error often tends to decrease even after the training error reaches zero,even after a large number of rounds,such as 1000.For example,Schapire et al.[22]plotted the performance of AdaBoost on the letter data set from UCI Machine Learning Repository,as shown in Figure7.10(left), where the higher curve is test error while the lower one is training error.It can be observed that AdaBoost achieves zero training error in less than10rounds but the generalization error keeps on reducing.This phenomenon seems to counter Occam’s7.5Advanced Topics 141e r r o r r a t e r a t i o of t e s t s e t t θ20151050 1.00.5101001000-1-0.50.51Figure 7.10Training and test error (left)and margin distribution (right)of AdaBoost on the letter data set [22].Razor,that is,nothing more than necessary should be done,which is one of the basic principles in machine learning.Many researchers have studied this phenomena,and several theoretical explana-tions have been given,for example,[11].Schapire et al.[22]introduced the margin -based explanation.They argued that AdaBoost is able to increase the margin even after the training error reaches zero,and thus it does not overfit even after a large number of rounds.The classification margin of h on x is defined as yh (x ),and that of H (x )= T t 
=1αt h t (x )is defined asy H (x )= Tt =1αt yh t (x ) Tt =1αtFigure 7.10(right)plots the distribution of y H (x )≤θfor different values of θ.It was proved in [22]that the generalization error is upper bounded asD ≤P x ∼D ,y (y H (x )≤θ)+˜O d m θ2+ln 1δ ≤2TT t =1 1−θt (1− )1+θ+˜O m θ2+ln δ with probability at least 1−δ.This bound qualitatively explains that when other variables in the bound are fixed,the larger the margin,the smaller the generalization error.However,this margin-based explanation was challenged by Brieman [4].Using minimum margin ,=min x ∈Dy H (x )Breiman proved a generalization error bound is tighter than the above one using minimum margin.Motivated by the tighter bound,the arc-gv algorithm,which is a variant of AdaBoost,was proposed to maximize the minimum margin directly,by142AdaBoost updatingαt according toαt=12ln1+γt1−γt−12ln1+ t1− tInterestingly,the minimum margin of arc-gv is uniformly better than that of AdaBoost, but the test error of arc-gv increases drastically on all tested data sets[4].Thus,the margin theory for AdaBoost was almost sentenced to death.In2006,Reyzin and Schapire[20]reported an interestingfinding.It is well-known that the bound of the generalization error is associated with margin,the number of rounds,and the complexity of base learners.When comparing arc-gv with AdaBoost, Breiman[4]tried to control the complexity of base learners by using decision trees with the same number of leaves,but Reyzin and Schapire found that these are trees with very different shapes.The trees generated by arc-gv tend to have larger depth, while those generated by AdaBoost tend to have larger width.Figure7.11(top) shows the difference of depth of the trees generated by the two algorithms on the breast cancer data set from UCI Machine Learning Repository.Although the trees have the same number of leaves,it seems that a deeper tree makes more attribute tests than a wider tree,and therefore they are unlikely to have equal complexity. So,Reyzin and Schapire repeated Breiman’s experiments by using decision stump, which has only one leaf and therefore is with afixed complexity,and found that the margin distribution of AdaBoost is actually better than that of arc-gv,as illustrated in Figure7.11(bottom).Recently,Wang et al.[28]introduced equilibrium margin and proved a new bound tighter than that obtained by using minimum margin,which suggests that the mini-mum margin may not be crucial for the generalization error of AdaBoost.It will be interesting to develop an algorithm that maximizes equilibrium margin directly,and to see whether the test error of such an algorithm is smaller than that of AdaBoost, which remains an open problem.7.5.2Multiclass AdaBoostIn the previous sections we focused on AdaBoost for binary classification,that is, Y={+1,−1}.In many classification tasks,however,an instance belongs to one of many instead of two classes.For example,a handwritten number belongs to1of10 classes,that is,Y={0,...,9}.There is more than one way to deal with a multiclass classification problem.AdaBoost.M1[9]is a very direct extension,which is as same as the algorithm shown in Figure7.2,except that now the base learners are multiclass learners instead of binary classifiers.This algorithm could not use binary base classifiers,and requires every base learner have less than1/2multiclass0/1-loss,which is an overstrong constraint. SAMME[35]is an improvement over AdaBoost.M1,which replaces Line5of AdaBoost.M1in Figure7.2byαt=12ln1− tt+ln(|Y|−1)。


• In this way, when a number between the current element's feature value F_j and the preceding feature value F_{j−1} is chosen as the threshold, the resulting weak classifier splits the samples exactly at the current element — that is, the weak classifier corresponding to this threshold classifies all elements before the current one as faces (or non-faces) and all elements from the current one onward as non-faces (or faces). The classification error brought by this threshold can be taken to be:
• Later, Lienhart et al. proposed the extended Haar-like features shown in the figure below; each feature consists of 2–3 rectangles, detecting edge, line, and center features respectively.
• Once the feature forms are fixed, the number of Haar-like features depends on the size of the training sample image matrix. When the face and non-face sample images are 24×24 (the usual size of face-detection training samples), the four kinds of features in the original Haar-like feature library total 162,336 features, as shown in the table below.
1. How are features chosen? — Haar-like features
1.1 Rectangle features
• Viola and Jones applied AdaBoost to face detection and, while preserving the detection rate, made face detection reach real-time speed for the first time. • To guarantee the classification ability of the AdaBoost classifier, the weak classifiers it selects should generally be as simple as possible — usually a single simple rule that makes a simple judgment about one feature of the object.
so that h_t(x) = h(x, f_t).
4. Adjust the weights according to this best weak classifier:
w_{t+1,i} = w_{t,i} · β_t^{1−e_i}, with β_t = ε_t / (1 − ε_t),
where e_i = 0 means x_i was classified correctly and e_i = 1 means x_i was misclassified.
5. The final strong classifier is:
C(x) = 1 if Σ_{t=1}^T α_t h_t(x) ≥ ½ Σ_{t=1}^T α_t, and C(x) = 0 otherwise.

Count(s, t) = [ Σ_{x=1}^{⌊m/s⌋} (m − s·x + 1) ] · [ Σ_{y=1}^{⌊n/t⌋} (n − t·y + 1) ]
Here m and n are the width and height of the sample image, and s and t are the width and height of the base Haar rectangle feature. The formula gives the total number of instances of one Haar feature over all the scales at which it still fits inside the sample image. • The AdaBoost algorithm picks the best features out of this large pool of Haar features, converts them into the corresponding weak classifiers, and uses them for classification, thereby classifying the target.
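The count formula can be evaluated directly. The MATLAB snippet below assumes (my reading of the table, not stated explicitly above) that the total covers the five basic rectangle shapes of the original feature set — two two-rectangle masks, two three-rectangle masks, and one four-rectangle mask; under that assumption it reproduces the 162,336 figure for a 24×24 window:

m = 24; n = 24;                          % sample image size
shapes = [2 1; 1 2; 3 1; 1 3; 2 2];      % base sizes (s, t) of the rectangle masks
total = 0;
for k = 1:size(shapes, 1)
    s = shapes(k, 1);  t = shapes(k, 2);
    cx = sum(m - s*(1:floor(m/s)) + 1);  % positions over all horizontal scales
    cy = sum(n - t*(1:floor(n/t)) + 1);  % positions over all vertical scales
    total = total + cx * cy;             % Count(s, t) = cx * cy
end
total                                    % = 162336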
e = min( S⁺ + (T⁻ − S⁻), S⁻ + (T⁺ − S⁺) )

Thus, by scanning this sorted table once from beginning to end, the threshold that minimizes the classification error (the optimal threshold) can be selected for the weak classifier — that is, a best weak classifier is selected for this feature. Applying this threshold-finding method to every feature yields the weak classifiers corresponding to all of the features; together they form the weak classifier set that serves as the input to training.
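A minimal MATLAB sketch of this single scan for one feature, using the quantities T⁺, T⁻, S⁺, S⁻ defined in the accompanying slide; the face/non-face label coding (1/0), the midpoint threshold, and the polarity encoding are my assumptions:

% f: feature values of all samples; y: labels, 1 = face, 0 = non-face;
% w: current normalized sample weights. Returns the error-minimizing threshold.
function [theta, p, best_e] = best_threshold(f, y, w)
    [fs, order] = sort(f(:));
    ys = y(order);  ws = w(order);
    Tpos = sum(ws(ys == 1));  Tneg = sum(ws(ys == 0));  % T+ and T-
    Spos = 0;  Sneg = 0;                                % S+ and S-: weight before the current element
    best_e = inf;
    for j = 1:numel(fs)
        e1 = Spos + (Tneg - Sneg);   % error if the part before j is labeled non-face, the rest face
        e2 = Sneg + (Tpos - Spos);   % error if the part before j is labeled face, the rest non-face
        if min(e1, e2) < best_e
            best_e = min(e1, e2);
            if j > 1, theta = (fs(j-1) + fs(j)) / 2; else, theta = fs(1) - 1; end
            p = 2*(e1 <= e2) - 1;    % records which of the two labelings won
        end
        if ys(j) == 1, Spos = Spos + ws(j); else, Sneg = Sneg + ws(j); end
    end
end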
• The cascade classifier chains several strong classifiers in series, each stage more complex and stricter than the one before. During detection, non-target images are rejected at the front of the cascade; only target images pass the detection of every strong classifier in the chain. In addition, because non-target images are quickly rejected by the first few stages of the cascade, the detection speed of the AdaBoost algorithm is increased.
5. Replenishing negative samples
• After one stage of training ends, the cascade classifier is applied to the negative samples; the misclassified negatives are kept and new negatives are added, to serve as the negative samples for the next stage of training.
where α_t = log(1/β_t). Note (an alternative to looping a fixed number of times: loop until the detection rate and false-alarm rate are reached): after each pass of the loop, use step 5 to check whether the weighted decision of the weak classifiers obtained so far already achieves the required detection rate and false-alarm rate; if it does, stop the loop and output directly; if not, continue.
Forming the strong classifier • After T iterations, T best weak classifiers h_1(x), ..., h_T(x) have been obtained; they can be combined into a strong classifier as follows:
C(x) = 1 if Σ_{t=1}^T α_t h_t(x) ≥ ½ Σ_{t=1}^T α_t, and C(x) = 0 otherwise,
• where α_t = log(1/β_t). • When this strong classifier processes an image to be detected, it in effect lets all the weak classifiers vote, takes a weighted sum of the votes using the weak classifiers' error rates, and compares this weighted vote sum with the average vote result to obtain the final result.
3. The basic principle of the AdaBoost algorithm
• Strong classifier: if a learning algorithm, after learning from a set of samples, can reach the desired recognition rate, it is called a strong classifier. • Weak classifier: if a learning algorithm's recognition rate is only slightly better than random guessing, it is called a weak classifier. Usually, for a concrete recognition problem it is very hard to find an ideal strong classifier, while weak classifiers are generally plentiful. Based on this observation, Freund and Schapire proposed the AdaBoost algorithm: through a certain procedure, a set of weak classifiers can be boosted into a strong classifier.
4. The cascade classifier
• Using the weak classifiers obtained during training, the formula above combines subsets of the weak classifiers into several strong classifiers, each of which has a strong ability to detect the target. If several strong classifiers are cascaded, then an object that can pass the detection of every stage is also the one most likely to be a face. Following this principle, the AdaBoost algorithm introduces a waterfall-style classifier — the cascade classifier. The detection scheme of the cascade classifier is shown in the figure below (the false-positive rate of the classifiers keeps dropping: what is determined not to be a positive sample is rejected, and what remains uncertain goes on to the next classifier).
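A minimal sketch of how such a cascade is applied to one candidate window; the stage representation (a score function handle plus a stage threshold) and the function names are assumptions made for illustration, not part of the slides:

% stages: struct array, one element per strong classifier, holding a score
% function handle and a stage threshold tuned for a very low false-negative rate.
function is_face = cascade_detect(stages, window)
    is_face = true;
    for k = 1:numel(stages)
        score = stages(k).score(window);     % weighted vote of the stage-k weak classifiers
        if score < stages(k).threshold
            is_face = false;                 % rejected early: most non-faces stop at the first stages
            return;
        end
    end                                       % only windows that pass every stage are kept
end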
3.1 The AdaBoost algorithm in detail:
Input: a training set (x_1, y_1), ..., (x_n, y_n), where x_i is the sample description and y_i is the sample label, y_i ∈ {0, 1}; 0 and 1 denote negative and positive examples respectively. In face detection, 0 can be defined as non-face and 1 as face.
Initialization: set the initial sample weights to w_{1,j} = 1/n. (This may leave the positive samples with a very small share of the weight, so in practice, with m positive and n negative samples, the positive weights are set to 1/(2m) and the negative weights to 1/(2n), so that the positive and negative parts each account for 1/2.)
For t = 1, 2, ..., T (T is the number of rounds, i.e. the number of weak classifiers to find), repeat the following steps:
1. Normalize the weights: q_{t,i} = w_{t,i} / Σ_{j=1}^n w_{t,j}
The AdaBoost Algorithm Based on Haar-like Features
Main contents:
The training system is divided into a "training part" and a "supplement part": steps 1–4 are the training part and step 5 is the supplement part. 1. With the sample set as input, compute and obtain the rectangle feature set under the given rectangle-feature prototypes. 2. With the feature set as input, determine the thresholds according to the given weak learning algorithm, pairing each feature with a weak classifier to obtain the weak classifier set. 3. With the weak classifier set as input, and under the constraints on the training detection rate and false-positive rate, use the AdaBoost algorithm to select the best weak classifiers and form strong classifiers. 4. With the strong classifier set as input, combine them into a cascade classifier. 5. With a set of non-face images as input, combine the strong classifiers into a temporary cascade classifier, and use it to filter and replenish the non-face samples.
Three questions in detail: 1. Why are the strong classifiers cascaded?
3. The strong classifier formula and the choice of weights?
1. Why are the strong classifiers cascaded? Every stage of training produces a strong classifier together with a set of features, and the number of features grows as the training stages increase. For example, the first stage may end with 15 features while the sixth stage produces 50. If only the sixth-stage classifier were used for recognition, it would run slowly because of the larger number of features. By cascading the classifiers of all stages, the samples that are easy to judge are decided by the first-stage classifier and removed from the sample set, so fewer and fewer samples remain and the speed becomes faster and faster.
3. The strong classifier formula and the choice of weights
There are m samples in total and Y = {−1, +1}, with −1 denoting a negative sample. Error rate:
ε_f = Σ_i q_i |h(x_i, f) − y_i|
Weight of the hypothesis:
α_t = ½ ln((1 − ε_t) / ε_t)
ii(x, y) = Σ_{x′ ≤ x, y′ ≤ y} i(x′, y′)
where ii(x, y) is the integral image and i(x, y) is the original image, as shown in the figure below; x and y are the pixel coordinates of the image. The formula sums the pixels above and to the left of (x, y).
• With the integral image, a rectangle feature value can be obtained with very little computation. The sum of the pixels inside any rectangle can be obtained from the integral image values at the corresponding four points. As in figure 6(b), with rectangles A, B, C, D whose pixel sums are A, B, C, D respectively: the integral value at point 1 is A, at point 2 it is A + B, at point 3 it is A + C, and at point 4 it is A + B + C + D. To get the sum of the pixel values inside region D, compute D = 4 + 1 − (2 + 3); in effect, only the integral values at D's four corner points are needed. It follows that the computation of a rectangle feature's value depends only on the integral image at the feature's end points and not on the image coordinates themselves. The introduction of the integral image therefore greatly increases the detection speed, and this is the fundamental reason the Viola method is so fast.
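A minimal MATLAB sketch of the integral image and the four-corner rectangle sum described above; the cumsum-based construction and the toy image are my own choices, the slides only give the definition:

img = double(magic(8) > 32);                      % toy 8 x 8 binary image
ii  = cumsum(cumsum(img, 1), 2);                  % integral image: ii(x, y) = sum over x' <= x, y' <= y
iiP = zeros(size(ii) + 1);                        % zero-pad so border rectangles need no special case
iiP(2:end, 2:end) = ii;

% sum of pixels in rows r1..r2 and columns c1..c2 from the four corner values,
% i.e. the "D = 4 + 1 - (2 + 3)" rule described above
rect_sum = @(r1, c1, r2, c2) iiP(r2+1, c2+1) - iiP(r1, c2+1) - iiP(r2+1, c1) + iiP(r1, c1);

[rect_sum(3, 2, 7, 5), sum(sum(img(3:7, 2:5)))]   % the two numbers agree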
q_{t,i} = w_{t,i} / Σ_{j=1}^n w_{t,j}
2. For each feature f, train a weak classifier h(x, f); compute the weighted error rate of the weak classifiers of all features:
ε_f = Σ_i q_i |h(x_i, f) − y_i|
3. Select (among all features) the best weak classifier h_t(x), i.e. the one with the minimum error rate:
ε_t = min_f Σ_i q_i |h(x_i, f) − y_i| = Σ_i q_i |h(x_i, f_t) − y_i|

Selecting a best weak classifier means selecting the weak classifier (feature) whose classification error over all training samples is the lowest among all weak classifiers. • For each feature f, compute the feature values of all training samples and sort them. By scanning the sorted feature values once, an optimal threshold can be determined for this feature, training it into a weak classifier. Concretely, for each element of the sorted table, compute the following four values: 1) the total weight of all face samples, T⁺; 2) the total weight of all non-face samples, T⁻; 3) the total weight of the face samples before this element, S⁺; 4) the total weight of the non-face samples before this element, S⁻.
2. Weak classifiers and their selection
• A weak classifier h(x, f, p, θ) consists of a feature f, a threshold θ, and a parity p that indicates the direction of the inequality:
h(x, f, p, θ) = 1 if p·f(x) < p·θ, and 0 otherwise.
• Training a weak classifier (feature f) means determining the optimal threshold for f under the current weight distribution, so that this weak classifier (feature f) has the lowest classification error over all training samples.
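The weak classifier defined above is a one-line predicate; a hedged MATLAB sketch, in which the feature extraction f(x) is abstracted as a function handle and the demo feature is entirely hypothetical:

% h(x, f, p, theta) = 1 if p * f(x) < p * theta, and 0 otherwise
h = @(x, f, p, theta) double(p * f(x) < p * theta);

% hypothetical example: the "feature" just reads the 7th rectangle-feature value of x
f_demo = @(x) x(7);
x_demo = rand(1, 20);                 % a hypothetical feature vector
h(x_demo, f_demo, 1, 0.5)             % 1 when the feature value is below the threshold
h(x_demo, f_demo, -1, 0.5)            % parity -1 flips the direction of the inequality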
Analysis of the AdaBoost algorithm
For a boosting algorithm there are two problems: 1. how to adjust the training set so that weak classifiers can be trained on it; 2. how to join the trained weak classifiers together to form a strong classifier. AdaBoost addresses these two problems as follows: 1. it uses training data selected by weight instead of randomly selected training samples, which concentrates the training on the samples that are harder to classify; 2. it joins the weak classifiers together with weighted voting instead of average voting, so that weak classifiers with good classification performance get larger weights and classifiers with poor performance get smaller weights.