2015-VEC-研究生-2015 -ok

合集下载

基于TWSVM算法的发动机故障识别方法

基于TWSVM算法的发动机故障识别方法

第37卷(2019)第1期内 燃 机 学 报 Transactions of CSICEV ol.37(2019)No.1收稿日期:2018-03-23;修回日期:2018-07-26.基金项目:国家自然科学基金资助项目(51779050);黑龙江省自然科学基金资助项目(F2016022). 作者简介:柳长源,博士,副教授,硕士研究生导师,E-mail :5616171@.DOI: 10.16236/ki.nrjxb.201901012基于TWSVM 算法的发动机故障识别方法柳长源1, 2,车路平1,毕晓君2(1. 哈尔滨理工大学 电气与电子工程学院,黑龙江 哈尔滨 150080; 2. 哈尔滨工程大学 信息与通信工程学院,黑龙江 哈尔滨 150009)摘要:为了快速有效地诊断出汽油发动机故障,提出了一种基于孪生支持向量机(TWSVM )的发动机故障诊断方法.该方法利用HC 、CO 、CO 2、O 2和NO x 共5种尾气参数值,并对其进行规范化处理,然后把这些数据作为特征向量,用于孪生支持向量机构成的多分类器中进行训练和测试,从而达到识别故障类别的目的.试验结果表明:采用孪生支持向量机分类方法比利用传统支持向量机具有更好的分类效果,且训练速度更快;在小样本数据情况下,故障诊断正确率可达到98.4%,能有效描述汽车尾气成分变化与发动机故障状态之间的复杂关系. 关键词:汽油机;故障诊断;孪生支持向量机;汽车尾气;分类器;核函数中图分类号:TK418 文献标志码:A 文章编号:1000-0909(2019)01-0084-06Engine Fault Identification Based on TWSVM AlgorithmLiu Changyuan 1, 2,Che Luping 1,Bi Xiaojun 2(1. School of Electrical and Electronic Engineering ,Harbin University of Science and Technology ,Harbin 150080,China ;2. Institute of Information and Communication ,Harbin Engineering University ,Harbin 150009,China ) Abstract :In or der to diagnose the failur e of gasoline engine quickly and effectively ,an engine fault diagnosismethod based on twin support vector machine was proposed. The method is to use the five exhaust gas parameters such as HC ,CO ,CO 2,O 2,NO x ,and normalize the data ,extract the eigenvector as the learning sample ,and then use the multi-classifier based on the twin support vector machine to train and test ,so as to achieve the purpose of identify-ing the type of failure. The experimental results show that the classification method of twin support vector machine has stronger classification ability than that of the traditional support vector machine ,and the training speed is faster. In the case of less samples data ,the correctness of fault diagnosis can reach 98.4%. This can effectively describe the compli-cated relationship between the car exhaust composition changes and the engine fault state.Keywords :gasoline engine ;fault diagnosis ;twin suppo t vecto machine (TWSVM );automobile exhaust ;classifier ;kernel function发动机工作状态的好坏直接影响着汽车的性能,由于发动机排气中各成分体积分数值含有大量燃烧过程的信息,不同故障情况下,其尾气中各成分体积分数值不同,尤其是HC 、CO 2、CO 、O 2和NO x 气体的体积分数含量有很大的变化.因此,利用发动机排气成分中的各气体体积分数值的不同来推断发动机故障,是现在汽车发动机故障诊断的重要趋势,如今许多文献都对汽车尾气成分含量和发动机故障之间的关系进行了研究[1-4]. 由于支持向量机(support vector machine ,SVM )具有较强的泛化能力和分类能力,所以运用支持向量机建立尾气成分含量和发动机故障的模型是目前发动机故障诊断的一种常用方法[5],然而SVM 的训练算法训练时间长,且不适合大量样本的分类.针对这些问题,Jayadeva 等在传统SVM 的基础上提出了一种新的学习算法,称为孪生支持向量机(twin support vector machine ,TWSVM ).TWSVM 的训练学习速度更快,计算复杂度更低,仅为传统SVM2019年1月 柳长源等:基于TWSVM 算法的发动机故障识别方法 ·85·训练时间的1/4[6-8].随着时间的发展,TWSVM 算法针对不同的需求有了很多的改进,如最小二乘法的孪生支持向量机、带有局部信息的加权孪生支持向量机等,但是孪生支持向量机的本质是相同的,即解决两个二次规划问题,也是对传统支持向量机的一个有效拓展,现如今孪生支持向量机已得到广泛应用,如徐凯[9]的滚动轴承故障诊断和王立国等[10]的图像半监督分类等.笔者引入孪生支持向量机到汽车发动机故障诊断这个领域,结合孪生支持向量机和汽车尾气成分的变化规律的特点,并在其基础上进行发动机故障诊断模型分析,通过试验比较传统支持向量机和孪生支持向量机的诊断结果,以获得发动机故障诊断的最优 结果.1 孪生支持向量机1.1 TWSVM 的基本思想TWSVM 的基本原理是寻求两个不平行的超平面,并且使每个超平面尽可能接近一个类而远离另一个,与SVM 利用支持向量构造与两类样本等距离平行的超平面不同[11].图1所示方块表示一类,圆形表示另一类.TWSVM 是通过产生两个非平行超平面,来求解一对小尺寸的二次规划问题(QPP ),使每个超平面更接近一类而远离另一类,而不是像传统支持向量机那样求解一个大尺寸的二次规划问题来进行分类.理论上,TWSVM 相比传统的SVM 来说,TWSVM 的训练学习速度更快,计算复杂度更低,仅为传统SVM 的1/4[12].图1 TWSVM 的分类面构成Fig.1 Classification of TWSVM1.2 孪生支持向量机分类算法孪生支持向量机是基于支持向量机的思想提出来的,是采用两个非平行的分类超平面来对相应的样本进行拟合.假设存在一个训练样本集为:(,)i j j x y ,i =1,2; j =1,2,…,l ;∈i n j x R {1,1}∈−j y ,其中,i j x 表示第i 类样本中的第j 个样本,j y 表示训练样本的标签.训练样本集的总数为l =l 1+l 2,其中,1l 表示第一类样本的总数,2l 表示第二类样本的总数.令矩阵1×∈l n R A 表示第一种类别的样本集合,矩阵2×∈l nR B 表示第二种类别的样本集合.TWSVM 的优化问题就可以表示为一对不等式优化问题TWSVM 1和TWSVM 2[6],即(1)(1)T (1)(1)T111(TWSVM )min ()()2w b w b c ξ+++112A e A e e (1)(1)s.t (),0w b ξξ−++22B e e ≥≥ (1)(2)(2)T (2)(2)T221(TWSVM )min ()()2w b w b c η+++221B e B e e (2)(2)s.t (),0w b ηη−++12A e e ≥≥ (2)式中:(1)w 和(2)w 表示两个超平面的法向量;(1)b 和(2)b 表示两个超平面的偏移量;1T (1,,1)l =∈1 e R ;2T (1,,1)l =∈2 e R .对于TWSVM 1不等式的优化问题来说,通过引入拉格朗日函数,对其求解得式(3).(1)(1)(1)(1)T 1(,,,,)()2L w b w b βξ=+⋅1A e α(1)(1)T T11()w b c c ξξ+++−122A e e e T (1)(1)T ()w b αξξ⎡⎤−++−−⎣⎦22B e e β (3)式中:2T 1(,,)l αα=⋅⋅⋅α;2T 1(,,)l ββ=⋅⋅⋅β.令(1)(1)(,,,,)L w b ξαβ对(1)w 、(1)b 和ξ的偏导数为零,由此可得:T (1)(1)T ()0w b ++=1A A e βα(4)T (1)(1)T()0w b ++=112e A e e α (5)10c −−=2e αβ (6)由式(6)可知: 10c ≤≤α (7)联合式(4)和(5)可得:T T (1)(1)T T T[][ ][][]0w b +=112A e A e B e α (8)若定义T T []=1H A e ,T T[]=2G B e ,(1)(1)T []w b =u ,则式(8)变为 T T 0+=H Hu G α(9)根据式(4)~(7)可得优化问题TWSVM 1的对偶问题为T T 1T T1min ()2−−2G H H G e αα1(0)c ≤≤α (10)同理可得到优化问题TWSVM 2的对偶问题为T T 1T T 11min ()2P γγ−−Q Q P e γ2(0)c ≤≤γ (11)一旦求得u 和v ,就可得出两个决策超平面,即T (1)(1)0w b +=x(12)·86· 内 燃 机 学 报 第37卷 第1期T (2)(2)0w b +=x(13)因此,TWSVM 分类决策函数为T ()()Label()arg min{}arg min i i i d w b ==+=x x12class 1;1,2class 2d i d ⇒⎧⎫=⎨⎬⇒⎩⎭(14)通过引入具体的核函数,可将TWSVM 进一步推广到非线性分类情况.对于非线性二分类问题,用K (x ,y )表示核函数,构造两个基于核函数的超平 面[13]为T T T T 1122(,)0(,)0K u b K u b +=+=x c x c 和(15)为得到上述两个超平面,非线性TWSVM 求解优化问题为2T T11111(TWSVM )min (,)2K u b c ξ++12A C e e T 11s.t (,),0K u b ξξ⎡⎤−++⎣⎦22BC e e ≥≥(16)2T T22221(TWSVM )min (,)2K u b c η++21B C e eT12s.t (,),0K u b ηη⎡⎤−++⎣⎦12A C e e ≥≥(17)其中矩阵C 代表所有训练样本,它的每一行即为一个训练样本.非线性情况下,TWSVM 的决策规则仍可归纳为:测试样本离哪个超平面近就被归于哪个类.具体的决策函数为T TLabel()arg min (,)k kx K u b ⎡=+⎣x C(1,2,,)=⋅⋅⋅i k (18)1.3 分类器的设计由于孪生支持向量机本身是基于SVM 发展而来的,属于二元分类器,对于多类别分类诊断,就需要构造出多元分类器.“一对一”(one-versus-one ,OVO )策略分类方法最早是由Knerr 为多分类SVM 提出的,将该策略与传统TWSVM 结合得到的一对一孪生支持向量机(OVOTWSVM )具有比OVOSVM 更好的分类性 能[13-15].假设对于M 类分类问题来说,“一对一”分类方法需要在任意两类样本之间构建一个二分类TWSVM 子分类器,需要构造m (m -1)/2个二分类器,也即构造m (m -1)个超平面来区分m 个类别.在对测试样本进行分类前,先要将所有的训练样本在每一个二分类向量机模型中进行独立训练,从而得到m (m -1)个二分类TWSVM 子分类器,当输入一个新的测试样本时,先利用每个二分类器对其进行单独判别.设任意一个分类器的分类函数为f ij ,当 f ij <0时,则判定x 属于第i 类,则i 类就可以得到一票;当f ij >0时,则判定x 属于第j 类,则j 类就得一票,累加所有m (m -1)个子分类器的得票数,最后得票最多的类别即为该测试样本的类别[13,16].“一对一”孪生支持向量机的示意,如图2所示.图2 OVOTWSVM 示意Fig.2 Schematic diagram of OVOTWSVM2 基于TWSVM 的发动机故障诊断2.1 试验方案及特征提取以桑塔纳AJR 汽油发动机为台架试验对象,所用的桑塔纳AJR 发动机(1.8L 2VQSEA827NF )为2气门、横流扫气的汽油发动机.发动机性能的具体参数,如表1所示.表1 发动机性能参数Tab.1 Engine performance parameters参数数值缸径/mm 81.0 活塞行程/mm 86.4 排量/L 1.781 压缩比 9.5 最大功率/kW 74(5200r/min ) 最大转矩/(N ·m ) 155(3800r/min )利用发动机机故障试验平台将发动机状态设置为正常、氧传感器信号故障、节气门控制器电位计反馈信号故障、2缸和3缸点火圈故障以及2缸喷油器控制故障共5种状态,并用K81故障解码器验证故障设置准确性,将这5种状态的故障编码f 分别设置为(1,2,3,4,5),利用A VL4000尾气分析仪测取发动机怠速和加速时不同故障状态下的尾气排放试验数据,通过K81解码器读取发动机运行状况的数据流即发动机转数和负荷(每循环喷油持续时间).每组试验包含怠速和加速时的数据,每个故障重复采集50次共100组数据,即5种状态下500组试验数据,输出为每一组向量对应的故障码.每种状态的输入特征向量为{CO ,CO 2,O 2,HC ,NO x },其中CO 、CO 2、O 2、HC 和NO x 分别为汽车尾气排放的体积分数或气体的质量分数.部分数据见表2.2019年1月 柳长源等:基于TWSVM 算法的发动机故障识别方法 ·87·表2 发动机尾气排放与不同故障状态关系的部分数据Tab.2 Relationship between engine exhaust emissions and different fault conditions of partial data故障编码 CO 排放/% CO 2排放/%O 2排放/% HC 排放/10-3NO x 排放/10-3发动机转速/(r ·min -1) 喷油持续时间/ms1 2.65 4.10 13.2 6.166 0.546 800 3.55 1 5.82 4.70 10.4 7.709 0.572 2480 3.25 1 2.77 4.10 13.4 6.270 0.578 760 3.50 1 5.55 4.90 10.4 7.784 0.590 2560 3.25 1 2.57 4.00 13.4 6.192 0.570 800 3.50 1 5.69 4.80 10.4 7.955 0.617 2560 3.30 2 0.42 4.70 13.5 2.045 0.581 800 3.05 2 0.77 6.70 10.8 6.517 1.198 2560 2.75 2 0.16 4.70 13.7 5.032 0.607 800 2.75 2 0.75 6.70 10.8 6.489 1.145 2600 2.65 2 0.13 4.70 13.8 4.978 0.655 760 2.75 2 0.60 6.90 10.7 6.552 1.210 2720 2.65 3 2.36 4.00 13.4 5.638 0.302 800 3.05 3 5.48 4.80 10.4 7.511 0.360 2480 2.80 3 2.51 4.10 13.4 5.837 0.327 760 3.25 3 5.51 4.80 10.5 7.440 0.365 2440 2.80 3 2.53 3.70 13.6 5.109 0.323 800 2.95 3 5.53 4.80 10.3 7.188 0.338 2520 2.70 4 2.36 3.80 13.5 5.621 0.232 800 2.45 4 4.95 5.00 10.4 7.360 0.329 2400 2.55 4 2.51 3.80 13.4 5.697 0.241 800 2.40 4 5.29 4.90 10.3 7.645 0.337 2360 2.55 4 2.58 3.60 13.6 5.502 0.204 800 2.50 4 5.23 4.90 10.3 7.464 0.296 2320 2.60 5 0.18 4.60 13.8 2.944 0.354 800 2.75 5 0.68 6.80 10.7 3.687 0.931 2200 2.55 5 0.21 4.50 13.9 2.902 0.346 800 2.75 5 0.66 6.80 10.7 3.657 0.893 22402.455 0.13 4.50 14.0 2.889 0.290 800 2.65 5 0.58 6.84 10.4 3.715 0.87721602.50由于输入特征向量的取值范围不同,需对每一组样本数据进行归一化处理.为了提高数据的稳定性,采用归一化公式为min max min ()=−−i i y x x x x (19)式中:x i 、y i 为样本转换前、后的值;x min 、x max 为样本的最小值和最大值.利用该归一化公式将每一类别的样本数据值归一化到[0,1]区间,从而使诊断结果更加准确[17]. 2.2 孪生支持向量机诊断模型的构建笔者选择的是“一对一”分类器算法,并在小样本训练数据的情况下,对发动机故障诊断分类效果进行分析.首先,把要进行训练和测试的样本集以及对应的标签设置好;再用归一化的方式对数据进行预处理,根据类别数目的多少构建分类器;接下来把数据放入分类器进行训练和测试,根据“一对一”分类投票原则进行票数统计,从而得到分类结果.其模型流程示意,如图3所示.图3 OVOTWSVM 流程示意Fig.3 OVOTWSVM process·88·内 燃 机 学 报第37卷 第1期3数据测试3.1不同条件下故障检测准确率对比试验同传统支持向量机方法相似的是,TWSVM对于核函数的选择也比较敏感,为了找到合适的核函数,设置了3种核函数的试验,分别对应线性核函数、径向基核函数和多项式核函数[6].为了得到更好的试验结果,对支持向量机和孪生支持向量机进行了试验比较,选取相同数量的样本数据,通过比较在3种不同核函数情况下的准确率,选取对两种算法的准确率影响最大的核函数.因为这每一组测试样本数据都有对应的实际故障状态,把测试样本的实际故障状态与通过算法预测出来的故障状态(即故障码)进行比对,如果两者相同,则预测结果正确.用所有预测结果正确的数量除以所有测试样本的数量就可以得到其准确率.试验过程为:(1)表2的500组样本数据集是由5种故障状态且每种故障状态有100组数据组成的,数据采集后,首先对表2的样本数据进行归一化处理,在归一化数据的基础上进行了试验;(2)试验是从整个500组样本数据随机选取75%,即375组数据进行训练,剩余25%,即125组数据进行测试,共进行20次试验,取平均值作为最终准确率.试验数据结果如表3所示.表中数据都是20次试验结果的平均值,很明显能够看出相同样本数据情况下,对孪生支持向量机影响最大的核函数是线性核函数,对标准支持向量机影响最大的核函数是径向基核函数.标准支持向量机最好的结果是在选取径向基核函数的时候,其准确率为95.4%.孪生支持向量机最好的核函数是线性核函数,其准确率为98.4%.表3不同核函数的诊断准确率比较Tab.3Comparison of diagnostic accuracy of different kernel functions %参数线性核函数径向基核函数多项式核函数支持向量机 92.3 95.4 88.2 孪生支持向量机 98.4 93.5 91.6 为了更直观地对比SVM和TWSVM之间的差别,用MATLAB编程,把测试集实际分类和测试集预测分类以图片的形式表现出来.选取了以375组样本数据训练,125组样本数据测试.由于标准支持向量机最好的结果是在选取径向基核函数的时候,故选取了SVM径向基核函数20次试验分类结果图中的一个,如图4a所示.同样孪生支持向量机最好的结果是选取线性核函数的时候,故选取了TWSVM线性核函数20次试验分类结果图中的一个如图4b所示.(a)SVM分类(b)TWSVM分类图4SVM和TWSVM分类Fig.4SVM and TWSVM classification diagram由图4可知,测试集的实际分类结果用圆圈“○”表示,测试集的预测结果用十字“+”表示,如果符号为“,则说明预测结果和实际结果相对应,即试验结果是预测正确的.在图4a中可以明显看出,有较多标识符“+”没有与“○”相对应,形成“,说明预测结果和实际结果不匹配,即预测错误.图4b预测的基本上都是“,即预测结果成功率高,说明孪生支持向量机的预测效果比传统支持向量机的预测效果要好.3.2数据运行时间的对比汽车发动机故障诊断软件的运行环境为:操作系统为Window7,CPU:Core(TM)i5-3210M,主频为2.5 GHZ,硬盘为500G,运行内存为4GB,MATLAB2012b.为了进一步比较TWSVM与SVM 之间的差别,对其运行时间进行了比较,对每一个不同数据模型进行测试,选取20次试验的运行时间的平均数进行了最终的统计,如表4所示.2019年1月柳长源等:基于TWSVM算法的发动机故障识别方法 ·89·由表4可知,随数据样本的增多,孪生支持向量机的运行时间会比支持向量机的运行时间大大减少.样本数据越多,孪生支持向量机的运行时间就越占优势,两者的运行时间差距甚至成级数上涨.试验进一步验证了利用汽车尾气分析发动机故障,孪生支持向量机比传统支持向量机的诊断效果更有优势.表4不同数据模型的运行时间对比Tab.4Comparison of running time of different data models运行时间/s试验样本数SVM TWSVM 100 0.84 0.15200 3.71 0.84500 20.83 4.39在工程应用上,尤其是在诊断大量汽车发动机故障和紧急状况的时候,用这个诊断方法能够节省大量的人力和物力,减少了诊断故障的时间,能够为汽修人员快速地指明故障类型,从而高效地解决汽车发动机故障.4结 论(1) 线性孪生支持向量机比标准支持向量机的故障诊断精度高且达到98.4%.(2) 孪生支持向量机的运行时间也比传统支持向量机短,随着样本数据的扩大,这种优势更加明显.(3) 基于孪生支持向量机的分类方法具有更好的诊断效果.参考文献:[1]张光磊,刘堂胜. 汽车发动机故障诊断与检修[J]. 装备制造技术,2015(8):248-250.[2]田径,刘忠长,许允. 柴油机瞬变过程烟度排放的劣变分析[J]. 内燃机学报,2016,34(2):125-134.[3]柳长源. 相关向量机多分类算法的研究与应用[D]. 哈尔滨:哈尔滨工程大学信息与通信工程学院,2013.[4]徐亚丹,王俊. 基于BP人工神经网络和尾气分析的汽车故障诊断方法研究[J]. 农业装备与车辆工程,2008,208(11):15-17.[5]李增芳,金春华,何勇. 基于废气成分分析和支持向量机的发动机故障诊断[J]. 农业工程学报,2010,26(4):143-146.[6]邱建坤. 基于孪生支持向量机的特征选择与多类分类算法研究[D]. 秦皇岛:燕山大学电气工程学院,2015.[7]Jayadeva,Khemchandani R,Chandra S. Twin support vector machines for pattern classification[J]. IEEET ransactions on Pattern Analysis & MachineIntelligence,2007,29(5):905-910.[8]Qi Z,T ian Y,Shi Y. Robust twin support vector ma-chine for pattern classification[J]. Pattern Recognition,2013,46(1):305-316.[9]徐凯. 孪生支持向量机在滚动轴承振动故障诊断中的应用[J]. 煤矿机械,2016,37(4):147-150. [10]王立国,杜心平. K均值聚类和孪生支持向量机相结合的高光谱图像半监督分类[J]. 应用科技,2017,44(3):1-7.[11]Li D,T ian Y,Xu H. Deep twin support vector ma-chine[J]. IEEE International Conference on Data MiningWorkshop,2015,29(1):65-73.[12]Xu Y. Maximum margin of twin spheres support vector machine for imbalanced data classification[J]. IEEET ransactions on Cybernetics,2016,47(6):1540-1550.[13]丁世飞,张建,张谢锴,等. 多分类孪生支持向量机研究进展[J]. 软件学报,2014,10(11):65-73. [14]于俊钊. 孪生支持向量机及其优化方法研究[D]. 北京:中国矿业大学计算机科学与技术学院,2014. [15]Shao Y H,Chen W J,Wang Z,et al. Weighted linear loss twin support vector machine for large-scale classifi-cation[J]. Knowledge-Based Systems,2015,73(1):276-288.[16]T omar D,Agarwal S. Multi-class twin support vector machine for pattern classification[C]// Proceedings of3rd International Conference on Advanced Computing,Networking and Informatics,Springer India,2016:97-110.[17]孙逊. 基于BP神经网络的电控发动机故障诊断研究[D]. 北京:北京林业大学车辆工程学院,2012.。

第一讲_20110528

第一讲_20110528

心理与教育统计学卢春明北京师范大学认知神经科学与学习研究所chmlubnu@25101520-1.5-1.0-0.50.00.51.01.5Indexy5101520-1.5-1.0-0.50.00.51.01.5Indexy5101520-1.5-1.0-0.50.00.51.01.5y5101520-1.5-1.0-0.50.00.51.01.5y36810121416-10-50510Fitted v alues R e s i d u a l sResiduals vs FittedZambia-2-1012-2-10123Theoretical QuantilesS t a n d a r d i z e d r e s i d u a l sNormal Q-QZambia68101214160.00.51.01.5S t a n d a r d i z e d r e s i d u a l sScale-LocationZambia0.00.10.20.30.40.5-2-10123S t a n d a r d i z e d r e s i d u a l sCook's distance10.50.51Residuals vs LeverageLibya4HeightBanner of agnes(x = iris[subset, 1:4])0.511.52 2.533.5gg S g g S S V g V g S V V S g V V V S g ggg gggV V V V V VVS S SS SS0.00.51.01.52.02.53.03.5Dendrogram of agnes(x = iris[subset, 1:4])H e i g h t5A BCDEFGH2510205010ABCDEFGH2510205010R MR FU MU F102030RMRFUMUF1020306persp()606570758085902040608symbols()2020200220220220220240260280300320320340343603638contour()image()7Sepal.Length2.03.04.00.5 1.5 2.54.56.07.52.03.04.Sepal.WidthPetal.Length13574.55.56.57.50.51.52.51234567Petal.WidthMazda RX4Mazda RX4 WagDatsun 710Hornet 4 Driv e Hornet SportaboutValiantDuster 360Merc 240DMaleFemale C h i l dA d u l tNoYesNoYes8RMRF UM UF RM RF UM UF RM RF UM UF 50-5455-5960-64010203040-303-303WashingtonOregon Wyoming Oklahoma VirginiaRhode Island Massachusetts New Jersey Missouri Arkansas Tennessee Georgia Colorado Texas C a l i f o r n i aM a r y l a n d A r i z o n aN e w M e x i c oD e l a w a r eA l a b a m aL o u i s i a n a I l l i n o i sN e wY o r kM i c h i g a nN e v a d aA l a s k aM i s s i s s i p p iS o u t h C a r o l i n a910050100150200250300350020*******800temperaturep r e s s u r ePressure (mm Hg)versusTemperature (Celsius)11048121602460246Travel Time (s)R e s p o n s e s p e r T ra v el Re spons e sperS ec o nd Bird 131Histogram of Y Y De n s i t y -3-2-101230.00.10.20.30.40.5050100150200Rural Male Rural Female Urban Male Urban Female11.718.126.941668.711.720.330.954.315.424.33754.671.18.413.619.335.1500.512t o o t h l e n g t h Vitamin C dose (mg)0.51205101520253035Ascorbic acid Orange juice x yzBlueberry Cherry Apple Boston Cream OtherVanilla120.00.20.40.60.8 1.0050100150200250300350234 (65%)159 (44%)1.2N u m b e r o f V e s s e l s Sampling Fraction C o m p l e t e n e s s 1.01.21.41.61.82.0N = 360 brokenness = 0.513Barley Yield (bushels/acre) Svansota No. 462M anchuria No. 475VelvetPeatland GlabronNo. 457Wisconsin No. 38Trebi2030405060Grand RapidsSvansota No. 462Manchuria No. 475VelvetPeatland GlabronNo. 457Wisconsin No. 38TrebiDuluthSvansota No. 462Manchuria No. 475VelvetPeatland GlabronNo. 457Wisconsin No. 38TrebiUniversity FarmSvansota No. 462Manchuria No. 475VelvetPeatland GlabronNo. 457Wisconsin No. 38TrebiMorrisSvansota No. 462Manchuria No. 475VelvetPeatland GlabronNo. 457Wisconsin No. 38TrebiCrookstonSvansotaNo. 462Manchuria No. 475Velvet Peatland Glabron No. 457Wisconsin No. 38Trebi Waseca19321931Auckland1415ari1asg 2n g n g 5n g n gv p < 0.001≤0.059>0.059v p < 0.001≤0.066>0.066Node 3 (n = 79)o r m l a u 00.20.40.60.81Node 4 (n = 8)o r m l a u 00.20.40.60.81tmsp = 0.049≤-0.066>-0.066Node 6 (n = 65)o r m l a u 00.20.40.60.81Node 7 (n = 44)or m l a u 00.20.40.60.81年1960197019801990200020102020入学率828486889092949698100102R 2=0.86, p < 0.0001什么是统计?统计就是指整理、总结并解释信息的一系列数学过程。

计量经济学2009

计量经济学2009

考博圈:考博路上,彼此陪伴!吉林大学2009年攻读博士研究生入学考试试题考试科目:计量经济学(满分100分)一、(25分)经典线性模型的基本理论1.假设i y 是随机变量,'i x 是k 元解释变量,则一元线性模型为:ni x y i i i ,...,2,1,=+'=εβ请给出关于残差项i ε的高斯--马尔科夫假设。

2.假设解释变量的数据矩阵x 满足:k x rank rank =)(,试推导出上述模型参数β和残差项方差2σ的普通最小二乘估计。

3.假设线性模型满足高斯--马尔科夫假设并具有常数项,n 是样本容量,y=(y 1,y 2,...,y n ),,x 是解释变量构成的数据矩阵,b 是回归系数向量的最小二乘估计,并令:y y e bx yˆ,ˆ-==试证明:(a)最小二乘估计是无偏估计,而2σ的普通最小二乘估计是有偏估计;(b)最小二乘估计的残差项的和为零,即:01=∑=n i i e;(c)回归超平面通过数据的“中心”,即满足:y b x y ,=和x 是样本均值;(d)从回归方程中获得的拟合值的均值等于样本观测值的均值,即y y=ˆ;(e)线性回归模型的平方和分解公式成立:e e y yy y '+'='ˆˆ二、(15分)线性模型的参数估计和假设检验1.线性模型诊断(a)请说明什么是线性模型中解释变量的多重共线性?(b)当线性模型中出现多重共线性后,如果采用最小二乘估计导致什么问题?(c)如果出现多重共线性,如何进行模型改进和采用什么估计方法?2.线性模型检验(a)请给出线性模型中单个解释变量系数显著性的t—检验统计量和检验过程;(b)请给出线性模型中多个解释变量系数线性约束条件的F—检验统计量和检验过程。

三、(20分)时间序列分析的基本理论1.时间序列概念与模型(a)请给出了时间序列的“时域表示”和“频域表示”方法;(b)请说明什么是时间序列的自相关函数和偏自相关函数,这些函数具有什么重要应用?(c)请给出条件异方差模型(ARCH模型)和广义条件异方差模型(GARCH模型),并说明如何在此类模型中描述条件波动率影响的零对称性。

一种有效的基于卷积神经网络的车辆检索算法

一种有效的基于卷积神经网络的车辆检索算法

一种有效的基于卷积神经网络的车辆检索算法车辆检索是基于图像的相似搜索的子任务,在电子商务和智能安防领域有着重要的实际应用价值。

文章提出了一种有效的基于卷积神经网络车辆检索的算法。

利用YOLOV2检测算法提取图片中的车辆位置减少背景对车辆检索造成的影响。

提出了改变训练集中车辆的颜色进行数据增强,缓解训练数据集较少的问题。

基于微调的ResNet50网络提取车辆的特征用来进行相似匹配。

在香港大学车辆数据集上的实验结果表明文中提出的算法取得了不错的检索效果。

标签:车辆检索;以图搜图;卷积神经网络;深度学习;电子商务Abstract:Vehicle retrieval is a sub-task of similar search based on image,which has important practical application value in the field of e-business and intelligent security. This paper presents an effective vehicle retrieval algorithm based on convolution neural network. YOLOV2 detection algorithm is used to extract the vehicle position in the image to reduce the impact of background on the vehicle retrieval. The problem of changing the color of vehicle in training concentration to enhance the data is put forward to alleviate the problem of less training data set. The ResNet50 network based on fine tuning is used to extract vehicle features for similar matching. The experimental results on the vehicle data set of the University of Hong Kong show that the proposed algorithm has achieved good retrieval results.Keywords:vehicle retrieval;graph searching via graph;convolutional neural network;deep learning;e-business引言隨着国家经济的发展和家庭总收入的不断增加,汽车作为一种舒适的交通工具已经成为了一个必备的消费需求。

A Study of Synthetic and Observed H_alpha Spectra of TT Hydrae

A Study of Synthetic and Observed H_alpha Spectra of TT Hydrae
Subject headings: Radiative transfer — Accretion, accretion discs — Stars: binaries: eclipsing — Stars: binaries: close — Stars: novae, cataclysmic variables — Stars: individual(TT Hya)
arXiv:astro-ph/0501108v1 7 Jan 2005
ABSTRACT
The formation and properties of accretion discs and circumstellar material in Algol-type systems is not very well understood. In order to study the underlying physics of these structures, we have calculated synthetic Hα spectra of TT Hya, which is an Algol-type eclipsing binary with an accretion disc. Both the primary and secondary stars were considered in the calculations as well as a disc surrounding the primary. The Roche model for the secondary star was assumed. The synthetic spectra cover all the phases including primary eclipse and are compared with the observed spectra. The influence of various effects and free parameters of the disc on the emerging spectrum was studied. This enabled us to put some constraints on the geometry, temperature, density and velocity fields within the disc. Differences found between the observed and synthetic spectra unravel the existence of a gas stream as well as a hotter disc-gas interaction region. An additional cooler circumstellar region between the C1 and C2 Roche surfaces is suggested to account for various observed effects. A new computer code called shellspec was created for this purpose and is briefly described in this work as well. It is designed to solve simple radiative transfer along the line of sight in 3D moving media. The scattered light from a central object is taken into account assuming an optically thin environment. Output intensities are then integrated through the 2D projection surface of a 3D object. The assumptions of the code include LTE and optional known state quantities and velocity fields in 3D.

2015年全国研究生数学建模大赛优秀论文D题11

2015年全国研究生数学建模大赛优秀论文D题11

ati Q ati Sti
e(ti )
ti 时刻列车牵引加速度 ti 时刻列车实际加速度
计算距离,是列车到刚通过的一站的距离 列车在第 N 段中 ti 时刻的能耗
1t
i
ti 时刻牵引加速度与最大加速度百分比
i
2t
T
ti 时刻制动加速度与最大加速度百分比
第 N 段列车总运行时间
vti
ST
ti 时刻列车运行速度
A1
始发站
Ai
第i站
Ai+1
第i+1站
A14
终点站
运行方向
图 3.1 列车参考坐标系
图中, A1 站为始发站, A14 站为终点站,列车由始发站 A1 向终点站 A14 运行, A1 位 于公里标 22903m 处, A14 位于公里标 175m 处,起始公里标 0 位于终点站右侧。其中计 算公里标( m)是到起点的距离,计算距离( m)是到刚通过的一站的距离。根据公里 标得到 A6 站到 A7 站的距离是 1354m。 在两车站间运行时间一定的条件下,计算寻找列车从 A6 站出发到达 A7 站的最节能 运行的速度距离曲线,问题的本质是制定一种列车在约束条件下的运行策略,使得发动 机的总能耗最低。建立的单列车单区间节能优化模型的如下: 目标函数: min E min e(t , c(t ))dt
-2-
一 问题重述
轨道交通系统的能耗是指列车牵引、通风空调、电梯、照明、给排水、弱电等设备 产生的能耗。在低碳环保、节能减排日益受到关注的情况下,针对减少列车牵引能耗的 列车运行优化控制近年来成为轨道交通领域的重要研究方向。 本题给出列车运行过程、列车动力学模型、运行时间和运行能耗的关系以及再生能 量利用原理。并给定列车参数和线路参数。 根据已知内容,需要解决的问题如下: 问题一:单列车节能运行优化控制问题 1、计算寻找一条列车从 A6 站出发到达 A7 站的最节能运行的速度距离曲线,其中两 车站间的运行时间为 110 秒。 2、计算寻找一条列车从 A6 站出发到达 A8 站的最节能运行的速度距离曲线,其中要 求列车在 A7 车站停站 45 秒, (不包括停站时间) 。 A6 站和 A8 站间总运行时间规定为 220 秒 问题二:多列车节能运行优化控制问题 列车运行总能耗最低的间隔 H。 2、 重新制定运行图和相应的速度距离曲线, 考虑高峰时间 (早高峰 7200 秒至 12600 秒,晚高峰 43200 至 50400 秒)发车间隔不大于 2.5 分钟且不小于 2 分钟,其余时间发 车间隔不小于 5 分钟,每天 240 列。 问题三:列车延误后运行优化控制问题 1、若列车 i 在车站 A j 延误 DT j (10 秒)发车,找出在确保安全的前提下,首先使 所有后续列车尽快恢复正点运行,其次恢复期间耗能最少的列车运行曲线。 2、随机变量延误 DT j 发车,在尽快恢复正点运行,恢复期间耗能最少的目标函数 下,给出列车运行曲线。

Evolutionary Support Vector Machines and their

U N I V E R S I T ¨AT D O R T M U N D REIHE COMPUTATIONAL INTELLIGENCE S O N D E R F O R S C H U N G S B E R E I C H 531Design und Management komplexer technischer Prozesse und Systeme mit Methoden der Computational Intelligence Evolutionary Support Vector Machines and theirApplication for ClassificationRuxandra Stoean,Mike Preuss,Catalin Stoeanand D.DumitrescuNr.CI-212/06Interner Bericht ISSN 1433-3325June 2006Sekretariat des SFB 531·Universit ¨a t Dortmund ·Fachbereich Informatik/XI 44221Dortmund ·GermanyDiese Arbeit ist im Sonderforschungsbereich 531,”Computational Intelligence“,der Universit ¨a t Dortmund entstanden und wurde auf seine Veranlassung unter Verwendung der ihm von der Deutschen Forschungsgemeinschaft zur Verf ¨u gung gestellten Mittel gedruckt.Evolutionary Support Vector Machines and theirApplication for ClassificationRuxandra Stoean1,Mike Preuss2,Catalin Stoean1,and D.Dumitrescu31Department of Computer Science,Faculty of Mathematics and Computer Science,University of Craiova,Romania{ruxandra.stoean,catalin.stoean}@inf.ucv.ro2Chair of Algorithm Engineering,Department of Computer Science,University of Dortmund,Germanymike.preuss@uni-dortmund.de3Department of Computer Science,Faculty of Mathematics and Computer Science,”Babes-Bolyai”University of Cluj-Napoca,Romaniaddumitr@cs.ubbcluj.roAbstract.We propose a novel learning technique for classification as result ofthe hybridization between support vector machines and evolutionary algorithms.Evolutionary support vector machines consider the classification task as in sup-port vector machines but use evolutionary algorithms to solve the optimizationproblem of determining the decision function.They can acquire the coefficientsof the separating hyperplane,which is often not possible within classical tech-niques.More important,ESVMs obtain the coefficients directly from the evolu-tionary algorithm and can refer them at any point during a run.The concept isfurthermore extended to handle large amounts of data,a problem frequently oc-curring e.g.in spam mail detection,one of our test cases.Evolutionary supportvector machines are validated on this and three other real-world classificationtasks;obtained results show the promise of this new technique.Keywords:support vector machines,coefficients of decision surface,evolution-ary algorithms,evolutionary support vector machines,parameter tuning1IntroductionSupport vector machines(SVMs)represent a state-of-the-art learning technique that has managed to reach very competitive results in different types of classification and regression tasks.Their engine,however,is quite complicated,as far as proper under-standing of the calculus and correct implementation of the mechanisms are concerned. This paper presents a novel approach,evolutionary support vector machines(ESVMs), which offers a simpler alternative to the standard technique inside SVMs,delivered by evolutionary algorithms(EAs).Note that this is not thefirst attempt to hybridize SVMs and EAs.Existing alternatives are discussed in§2.2.Nevertheless,we claim that our approach is significantly different from these.ESVMs as presented here are constructed solely based on SVMs applied for classi-fication.Validation is achieved by considering four real-world classification tasks.Be-sides comparing results,the potential of the utilized,simplistic EA through parametriza-tion is investigated.To enable handling large data sets,thefirst approach is enhanced2Ruxandra Stoean,Mike Preuss,Catalin Stoean,D.Dumitrescuby use of a chunking technique,resulting in a more versatile algorithm.A second so-lution for dealing with a high number of samples is brought by a reconsideration of the elements of thefirst evolutionary algorithm.Obtained results prove suitability and competitiveness of the new approach,so ESVMs qualify as viable simpler alternative to standard SVMs in this context.However,this is only afirst attempt with the new approach.Many of its components still remain to be improved.The paper is organized as follows:§2outlines the concepts of classical SVMs to-gether with existing evolutionary approaches aimed at boosting their performance.§3 presents the new approach of ESVMs.Their validation is achieved on real-world exam-ples in§4.Improved ESVMs with two new mechanisms for reducing problem size in case of large data sets are presented in§5.The last section comprises conclusions and outlines ideas for further improvement.2PrerequisitesGiven{f t∈T|,f t:R n→{1,2,...,k}},a set of functions,and{(x i,y i)}i=1,2,...,m, a training set where every x i∈R n represents a data vector and each y i corresponds to a class,a classification task consists in learning the optimal function f t∗that mini-mizes the discrepancy between the given classes of data vectors and the actual classes produced by the learning machine.Finally,accuracy of the machine is computed on previously unseen test data vectors.In the classical architecture,SVMs reduce k-class classification problems to many binary classification tasks that are separately consid-ered and solved.A voting system then decides the class for data vectors in the test set. SVMs regard classification from a geometrical point of view,i.e.they assume the ex-istence of a separating surface between two classes labelled as-1and1,respectively. The aim of SVMs then becomes the discovery of this decision hyperplane,i.e.its coef-ficients.2.1Classical Support Vector MachinesIf training data is known to be linearly separable,then there exists a linear hyperplane that performs the partition,i.e. w,x −b=0,where w∈R n is the normal to the hy-perplane and represents hyperplane orientation and b∈R denotes hyperplane location. The separating hyperplane is thus determined by its coefficients,w and b.Consequently, the positive data vectors lie on the corresponding side of the hyperplane and their neg-ative counterparts on the opposite side.As a stronger statement for linear separability, the positive and negative vectors each lie on the corresponding side of a matching sup-porting hyperplane for the respective class(Figure1a)[1],written in brief as:y i( w,x i −b)>1,i=1,2,...,m.In order to achieve the classification goal,SVMs must determine the optimal values for the coefficients of the decision hyperplane that separates the training data with as few exceptions as possible.In addition,according to the principle of Structural Risk Minimization from Statistical Learning Theory[2],separation must be performed withEvolutionary Support Vector Machines for Classification3 a maximal margin between classes.Summing up,classification of linear separable data with a linear hyperplane through SVMs leads thus to the following optimization prob-lem: find w∗and b∗as to minimize w∗ 22subject to y i( w∗,x i −b∗)≥1,i=1,2,...,m.(1)Training data are not linearly separable in general and it is obvious that a linear sepa-rating hyperplane is not able to build a partition without any errors.However,a linear separation that minimizes training error can be tried as a solution to the classification problem.Training errors can be traced by observing the deviations of data vectors from the corresponding supporting hyperplane,i.e.from the ideal condition of data separabil-ity.Such a deviation corresponds to a value of±ξiw ,ξi≥0.These values may indicatedifferent nuanced digressions(Figure1b),but only aξi higher than unity signals a classification error.Minimization of training error is achieved by adding the indicator of error for every training data vector into the separability statement and,at the same time,by minimizing the sum of indicators for errors.Adding up,classification of lin-ear nonseparable data with a linear hyperplane through SVMs leads to the following optimization problem,where C is a hyperparameter representing the penalty for errors: find w∗and b∗as to minimize w∗ 22+C m i=1ξi,C>0subject to y i( w∗,x i −b∗)≥1−ξi,ξi≥0,i=1,2,...,m.(2)If a linear hyperplane does not provide satisfactory results for the classification task, then a nonlinear decision surface can be appointed.The initial space of training data vectors can be nonlinearly mapped into a high enough dimensional feature space,where a linear decision hyperplane can be subsequently built.The separating hyperplane will achieve an accurate classification in the feature space which will correspond to a non-linear decision function in the initial space(Figure1c).The procedure leads therefore to the creation of a linear separating hyperplane that would,as before,minimize train-ing error,only this time it would perform in the feature space.Accordingly,a nonlinear mapΦ:R n→H is considered and data vectors from the initial space are mapped into H.w is also mapped throughΦinto H.As a result,the squared norm that is in-volved in maximizing the margin of separation is now Φ(w) 2.Also,the equation of the hyperplane consequently changes to Φ(w),Φ(x i) −b=0.Nevertheless,as simple in theory,the appointment of an appropriate mapΦwith the above properties is a difficult task.As in the training algorithm vectors appear only as part of dot products,the issue would be simplified if there were a kernel function K that would obey K(x,y)= Φ(x),Φ(y) ,where x,y∈R n.In this way,one would use K everywhere and would never need to explicitly even know whatΦis.The remaining problem is that kernel functions with this property are those that obey the corresponding conditions of Mercer’s theorem from functional analysis,which are not easy to check. There are,however,a couple of classical kernels that had been demonstrated to meet Mercer’s condition,i.e.the polynomial classifier of degree p:K(x,y)= x,y p andthe radial basis function classifier:K(x,y)=e x−y 2σ,where p andσare also hyper-parameters of SVMs.In conclusion,classification of linear nonseparable data with a4Ruxandra Stoean,Mike Preuss,Catalin Stoean,D.Dumitrescu(a)(b)(c)Fig.1:(a)Decision hyperplane(continuous line)that separates between circles(positive)and squares(negative)and supporting hyperplanes(dotted lines).(b)Position of data and corre-sponding indicators for errors-correct placement,ξi=0(label1)margin position,ξi<1 (label2)and classification error,ξi>1(label3).(c)Initial data space(left),nonlinear map into the higher dimension/its linear separation(right),and corresponding nonlinear surface(bottom). nonlinear hyperplane through SVMs leads to the same optimization problem as in(2) which is now considered in the feature space and with the use of a kernel function: find w∗and b∗as to minimize K(w∗,w∗)2+C m i=1ξi,C>0(3)subject to y i(K(w∗,x i)−b∗)≥1−ξi,ξi≥0,i=1,2,...,m.The optimization problem(corresponding to either situation above)is subsequently solved.Accuracy on the test set in then computed,i.e.the side of the decision boundary on which each new data vector lies is determined.Classical SVMs approach the op-timization problem through a generalized form of the method of Lagrange multipliers [3].But the mathematics of the technique can be found to be very difficult both to grasp and apply.This is the reason why present approach aims to simplify(and improve) SVMs through a hybridization with EAsby utilizing these in order to determine optimal values for the coefficients of the separating hyperplane(w and b)directly.2.2Evolutionary Approaches to Support Vector MachinesEAs have been widely used in hybridization with SVMs in order to boost performance of classical architecture.Their combination envisaged two different directions:model selection and feature selection.Model selection concerns adjustment of hyperparame-ters(free parameters)within SVMs,i.e.the penalty for errors,C,and parameters of the kernel,p orσwhich,in standard variants,is performed through grid search or gradi-ent descent methods.Evolution of hyperparameters can be achieved through evolution strategies[4].When dealing with high dimensional classification problems,feature se-lection regards the choice of the most relevant features as input for a SVM.The optimal subset of features can be evolved using genetic algorithms in[5]and genetic program-ming in[6].To the best of our knowledge,evolution of coefficients of the separating hyperplane within SVMs has not been accomplished yet.3Evolutionary Support Vector Machines for ClassificationWithin the new hybridized technique of ESVMs separation of positive and negative vectors proceeds as in standard SVMs,while the optimization problem is solved byEvolutionary Support Vector Machines for Classification5 means of EAs.Therefore,the coefficients of the separating hyperplane,i.e.w and b,are encoded in the representation of the EA and their evolution is performed with respect to the objective function and the constraints in the optimization problem(3)within SVMs, which is considered for reasons of generality.3.1Evolving the Coefficients of the Separating HyperplaneRepresentation An individual encodes the coefficients of the separating hyperplane,w and b.Since indicators for errors of classification,ξi,i=1,2,...,m,appear in the con-ditions for hyperplane optimality,ESVMs handle them through inclusion in the struc-ture of an individual,as well:c=(w1,...,w n,b,ξ1,....,ξm).(4) After termination of the algorithm,the best individual from all generations gives ap-proximately optimal values for the coefficients of the decision hyperplane.If proper values for parameters of the evolutionary algorithm are chosen,training errors of clas-sification can also result from the optimal individual(those indicators whose values are higher than unity)but with some loss in accuracy;otherwise,indicators grow in the direction of errors driving the evolutionary cycle towards its goal,but do not reach the limit of unity when the evolutionary process stops.An example of an ESVM which also provides the errors of classification is nevertheless exhibited for simple artificial 2-dimensional data sets separated by various kernels(Figure2).In such a situation,the number of samples and the dimensionality of data are both low,thus accuracy and run-time are not affected by the choice of parameters which leads to the discovery of all training errors.Initial population Individuals are randomly generated such that w i∈[−1,1],i= 1,2,...,n,b∈[−1,1]andξj∈[0,1],j=1,2,...,m.Fitness evaluation Thefitness function derives from the objective function of the optimization problem and has to be minimized.Constraints are handled by penalizing the infeasible individuals by appointing t:R→R which returns the value of the argu-ment if negative while otherwise0.The expression of the function is thus as follows:f(w,b,ξ)=K(w,w)+Cmi=1ξi+m i=1[t(y i(K(w,x i)−b)−1+ξi)]2.(5)Genetic operators Operators were chosen experimentally.Tournament selection, intermediate crossover and mutation with normal perturbation are applied.Mutation of errors is constrained,preventing theξi s from taking negative values.Stop condition The algorithm stops after a predefined number of generations.As the coefficients of the separating hyperplane are found,the class for a new,unseen test data vector can be determined,following class(x)=sgn(K(w,x)−b).This is unlike classical SVMs where it is seldom the case that coefficients can be determined following the standard technique,as the mapΦcannot be always explicitly determined. In this situation,the class for a new vector follows from computational artifices.6Ruxandra Stoean,Mike Preuss,Catalin Stoean,D.Dumitrescu(a)(b)(c)Fig.2:Vizualization of ESVMs on 2-dimensional points.Errors of classification are squared.Odd (a),even (b)polynomial and radial (c)kernels3.2ESVMs for Multi-class ClassificationMulti-class ESVMs employ one classical and very successful SVM technique,the ONE-AGAINST-ONE (1-a-1)[7].As the classification problem is k -class,k >2,1-a-1considers k (k−1)2SVMs,where each machine is trained on data from every two classes,i and j ,where i corresponds to 1and j to -1.For every SVM,the class of x is computed and if x is in class i ,the vote for the i -th class is incremented by one;conversely,the vote for class j is added by one.Finally,x is taken to belong to the class with the largest vote.In case two classes have identical number of votes,the one with the smaller index is selected.Consequently,1-a-1multi-class ESVMs are straightforward.k (k−1)2ESVMs are built for every two classes and voting is subsequently conducted.4Experimental Evaluation:Real-World ClassificationExperiments have been conducted on four data sets (with no missing values)concerning real-world problems coming from the UCI Repository of Machine Learning Databases 4,i.e.diabetes mellitus diagnosis,spam detection,iris recognition and soybean disease diagnosis.The motivation for the choice of test cases was manifold.Diabetes and spam are two-class problems,while soybean and iris are multi-class.Differentiating,on the one hand,diagnosis is a better-known benchmark,but filtering is an issue of current major concern;moreover,the latter has a lot more features and samples,which makes a huge difference for classification as well as for optimization.On the other hand,iris has a lot more samples while soybean has a lot more attributes.For all reasons mentioned above,the selection of test problems certainly contains all the variety of situations that is necessary for the objective validation of the new approach of ESVMs.Brief information on the classification tasks and SVM and EA parameter values are given in Table 1.The error penalty was invariably set to 1.For each data set,30runs of the ESVM were conducted;in every run 75%ran-dom cases were appointed to the training set and the remaining 25%went into test.Experiments showed the necessity for data normalization in diabetes,spam and iris.4Available at /mlearn/MLRepository.htmlEvolutionary Support Vector Machines for Classification7 Table1:Data set properties and ESVM algorithm parameter values.Rightmost columns hold tuned parameter sets for modified ESVMs,ESVMs with chunking,and utilized parameter bounds.Diabetes Iris Soybean SpamData set description modif.chunk.bounds Number of samples768150474601(434)(10/500) Number of attributes843557Number of classes2342ESVM parameter values man.tun.man.tun.man.tun.man.tun.tun.tun.Kernel parameter(p orσ)p=2σ=1p=1p=1p=1 Population size10019810046100162100154519010/200 Number of generations25029610022010029325028718028650/300 Crossover probability0.400.870.300.770.300.040.300.840.950.110.01/1 Mutation probability0.400.210.500.570.500.390.500.200.030.080.01/1 Error mutation probability0.500.200.500.020.500.090.500.07—0.800.01/1 Mutation strength0.104.110.104.040.100.160.103.32 3.750.980.001/5 Error mutation strength0.100.020.103.110.103.800.100.01—0.010.001/5 No further modification of the data was carried out and all data was used in the ex-periments.Obtained test accuracies are presented in Table2.Differentiated(spam/non spam for spamfiltering and ill/healthy for diabetes)accuracies are also depicted.In or-der to validate the manually found EA parameter values,the parameter tuning method SPO[8]was applied with a budget of1000optimization runs.The last4columns of Table2hold performances and standard deviations of the best configuration of an ini-tial latin hypersquare sample(LHS)with size10×#parameters,and thefinally found best parameter setting.Resulting parameter values are depicted in Table1.They indi-cate that for all cases,except for soybean data and chunking enhanced ESVM on the spam data,crossover probabilities were dramatically increased,while often reducing mutation probabilities,especially for errors.It must be stated that in most cases,results achieved with manually determined parameter values are only improved by increasing effort(population size or number of generations).Comparison to worst and best found results of different techniques,either SVMs or others,was conducted.Assessment cannot be objective,however,as outlined meth-ods either use different sizes for the training/test sets or other types of cross-validation and number of runs or employ various preprocessing procedures for feature or sam-ple selection.Diabetes diagnosis was approached in SV M light[9]where an accuracy of76.95%was obtained and Critical SVMs[10]with a result of82.29%.Accuracy on spam detection is reported in[11]where k-Nearest Neighbourhood on non preprocessed data resulted in66.5%accuracy and in[12]where functional link network wrapped into a genetic algorithm for input and output feature selection gave a result of92.44%.1-a-1 multi-class SVMs on the Iris data set were perfomed in[7](accompanied by a shrink-ing technique)and[13];obtained results were of97.33%and98.67%,respectively. Results for the Soybean small data set were provided in[14],where,among others, Naive Bayes was applied and provided an accuracy of95.50%,reaching100%when a pair-wise classification strategy was employed.8Ruxandra Stoean,Mike Preuss,Catalin Stoean,D.DumitrescuTable2:Accuracies of different ESVM versions on the considered test sets,in percent.Average Worst Best StD LHS best StD SPO StD ESVMsDiabetes(overall)76.3071.3580.73 2.2475.82 3.2777.312.45 Diabetes(ill)50.8139.1960.27 4.5349.357.4752.645.32 Diabetes(healthy)90.5484.8096.00 2.7189.60 2.3690.212.64 Iris(overall)95.1891.11100.0 2.4895.11 2.9595.112.95 Soybean(overall)99.0294.11100.0 2.2399.61 1.4799.801.06 Spamfiltering(overall)87.7485.7489.83 1.0689.27 1.3790.590.98 Spamfiltering(spam)77.4870.3182.50 2.7780.63 3.5183.762.21 Spamfiltering(non spam)94.4192.6296.300.8994.820.9495.060.62 ESVMs with ChunkingSpamfiltering(overall)87.3083.1390.00 1.7787.52 1.3188.371.15 Spamfiltering(spam)83.4775.5486.81 2.7886.26 2.6686.352.70 Spamfiltering(non spam)89.7884.2292.52 2.1188.33 2.4889.682.06 Modified ESVMsSpamfiltering(overall)88.4086.5290.35 1.0290.060.9991.250.83 Spamfiltering(spam)79.6375.3984.70 2.1682.73 2.2885.522.08 Spamfiltering(non spam)94.1791.3495.84 1.0594.890.9295.000.72 5Improving Training TimeObtained results for the tasks we have undertaken to solve have proven to be competitive as compared to accuracies of different powerful classification techniques.However,for large problems,i.e.spamfiltering,the amount of runtime needed for training is≈800s. This stems from the large genomes employed,as indicators for errors of every sample in the training set are included in the representation.Consequently,we tackle this prob-lem with two approaches:an adaptation of a chunking procedure inside ESVMs and a modified version of the evolutionary approach.5.1Reducing Samples for Large ProblemsWe propose a novel algorithm to reduce the number of samples for one run of ESVMs which is an adaptation of the widely known shrinking technique within SVMs,called chunking[15].In standard SVMs,this technique implies the identification of Lagrange multipliers that denote samples which do not fulfill restrictions.As we renounced the standard solving of the optimization problem within SVMs for the EA-based approach, the chunking algorithm was adapted tofit our technique(Algorithm1).ESVM with chunking was applied to the spam data set.Values for parameters were set as before,except the number of generations for each EA which is now set to100. The chunk size,i.e N,was chosen as200and the number of iterations with no im-provement was designated to be5.Results are shown in Table2.Average runtime was of103.2s/run.Therefore,the novel algorithm of ESVM with chunking reached its goal, running8times faster than previous one,at a cost of loss in accuracy of0.4%.Evolutionary Support Vector Machines for Classification9Algorithm1ESVM with ChunkingRandomly choose N samples from the training data set,equally distributed,to make a chunk; while a predefined number of iterations passes with no improvement doiffirst chunk thenRandomly initialize population of a new EA;elseUse best evolved hyperplane coefficients and random indicators for errors tofill half of the population of a new EA and randomly initialize the other half;end ifApply EA andfind coefficients of the hyperplane;Compute side of all samples in the training set with evolved hyperplane coefficients;From incorrectly placed,randomly choose(if available)N/2samples,equally distributed;Randomly choose the rest up to N from the current chunk and add all to the new chunk if obtained training accuracy if higher than the best one obtained so far then Update best accuracy and best evolved hyperplane coefficients;set improvement to true;end ifend whileApply best obtained coefficients on the test set and compute accuracy5.2A Reconsideration of the Evolutionary AlgorithmSince ESVMs directly provide hyperplane coefficients at all times,we propose to drop the indicators for errors from the EA representation and,instead,compute their val-ues in a simple geometrical fashion.Consequently,this time,individual representation contains only w and b.Next,all indicatorsξi,i=1,2,...,m are computed in order to be referred in thefitness function(5).The current individual(which is the current separating hyperplane)is considered and,following[1],supporting hyperplanes are de-termined.Then,for every training vector,deviation to the corresponding supporting hyperplane,following its class,is calculated.If sign of deviation equals class,corre-spondingξi=0;else,deviation is returned.The EA proceeds with the same values for parameters as in Table1(certainly except probabilities and step sizes for theξi s)and,in the end of the run,hyperplane coefficients are again directly acquired.Empirical results on the spam data set(Table2)and the average runtime of600s seem to support the new approach.It is interesting to remark that the modified algorithm is not that much faster, but provides some improvement in accuracy.In contrast to the chunking approach,it also seems especially better suited for achieving high non spam recognition rates,in or-der to prevent erroneous deletion of good mail.Surprisingly,SPO here decreases effort while increasing accuracies(Table1),resulting in further speedup.6Conclusions and Future WorkProposed new hybridized learning technique incorporates the vision upon classification of SVMs but solves the inherent optimization problem by means of evolutionary algo-rithms.ESVMs present many advantages as compared to SVMs.First of all,they are definitely much easier to understand and use.Secondly and more important,the evo-lutionary solving of the optimization problem enables the acquirement of hyperplane10Ruxandra Stoean,Mike Preuss,Catalin Stoean,D.Dumitrescucoefficients directly and at all times within a run.Thirdly,accuracy on several bench-mark real-world problems is comparable to those of state-of-the-art SVM methods or to results of other powerful techniques from different other machine learningfields.In order to enhance suitability of the new technique for any classification issue,two novel mechanisms for reducing size in large problems are also proposed;obtained results support their employment.Although already competitive,the novel ESVMs for classification can still be im-proved.Other kernels may be found and used for better separation;also,the two ap-pointed classical kernels may have parameters evolved by means of evolutionary algo-rithms as in some approaches for model selection.Also,other preprocessing techniques can be considered;feature selection through an evolutionary algorithm as found in lit-erature can surely boost accuracy and runtime.Definitely,other ways to handle large data sets can be imagined;a genetic algorithm can also be used for sample selection. Finally,the construction of ESVMs for regression problems is a task for future work. References1.Bosch,R.A.,Smith,J.A.:Separating Hyperplanes and the Authorship of the Disputed Feder-alist Papers.American Mathematical Monthly,V ol.105,No.7,(1998),601-6082.Vapnik,V.,Statistical Learning Theory.Wiley,New York(1998)3.Haykin,S.:Neural Networks:A Comprehensive Foundation.Prentice Hall,New Jersey(1999)4.Friedrichs,F.,Igel,C.:Evolutionary tuning of multiple SVM parameters,In Proc.12th Euro-pean Symposium on Artificial Neural Networks(2004)519-5245.Feres de Souza,B.,Ponce de Leon F.de Carvalho,A.:Gene selection based on multi-class support vector machines and genetic algorithms.Journal of Genetics and Molecular Research, V ol.4,No.3(2005)599-6076.Eads,D.,Hill,D.,Davis,S.,Perkins,S.,Ma,J.,Porter,R.,Theiler,J.:Genetic Algorithms and Support Vector Machines for Time Series Classification.5th Conf.on the Applications and Science of Neural Networks,Fuzzy Systems,and Evolutionary Computation,Proc.Symposium on Optical Science and Technology,SPIE,Seattle,W A,4787(2002)74-857.Hsu,C.-W.,Lin,C.-J.:A Comparison of Methods for Multi-class Support Vector Machines. IEEE Transactions on Neural Networks,V ol.13,No.2(2002)415-4258.Bartz-Beielstein,T.:Experimental research in evolutionary computation-the new experimen-talism.Natural Computing Series,Springer-Verlag(2006)9.Joachims,T.:Making Large-Scale Support Vector Machine Learning Practical.In Advances in Kernel Methods:Support Vector Learning(1999)169-18410.Raicharoen,T.and Lursinsap,C.:Critical Support Vector Machine Without Kernel Function. In Proc.9th Intl.Conf.on Neural Information Processing,Singapore,V ol.5(2002)2532-2536 11.Tax,D.M.J.:DDtools,the data description toolbox for Matlab(2005)http://www-ict.ewi.tudelft.nl/davidt/occ/index.html12.Sierra,A.,Corbacho,F.:Input and Output Feature Selection.ICANN2002,LNCS2415 (2002)625-63013.Weston,J.,Watkins,C.:Multi-class Support Vector Machines.Technical Report,CSD-TR-98-04,Royal Holloway,University of London,Department of Computer Science(1998) 14.Bailey,J.,Manoukian,T.,Ramamohanarao,K.:Classification Using Constrained Emerging Patterns.In Proc.of W AIM2003(2003)226-23715.Perez-Cruz,F.,Figueiras-Vidal,A.R.,Artes-Rodriguez,A.:Double chunking for solving SVMs for very large datasets.Learning’04,Elche,Spain(2004)/ archive/00001184/01/learn04.pdf.。

2015-Inflammation-研究生-ok

(Different mediators may have: similar actions amplification of response / opposing effects control response)
炎症介质一旦被激活或由细胞释放,存在时间很短暂
(Once activated and released from the cell, mediators quickly decay……..)
neutrophil
platelets
Mast cells: Release histamine, cytokines, and proteases;
let the body know something is wrong. Lymphocytes: Firepower for the immune response. Platelets: They aggregate to form a blood clot to close the wound.
immune reactions
组织胺与5-羟色胺(Histamine & Serotonin )
acute allergic responses
1. 血管活性胺(Vasoactive Amines)
(1)组织胺(Histamine)
(广泛分布于各组织中,以皮肤、肺支气管平滑肌、胃肠粘膜组织内含量最高) 体内缺少分解组织胺的酶
由细胞、组织或血浆产生和释放,参与并介导 炎症反应发生,且具有致炎或抑炎作用的自体活性 物质。
促炎介质
TNF, IL-1, IL-6, IL-8
抑(抗)炎介质
PGE2, IL-4,IL-10,IL-11, IL-13,NO

Maxwell使用2015-3-23

Maxwell使⽤2015-3-23Maxwell使⽤要点⼀、安装不要采⽤中⽂⽂件夹。

⼆、Maxwell环境内建模1、在Maxwell环境内只能画直线、圆弧、圆等简单形状。

2、由线构造⾯(1)构成⾯域的多个线段须闭合,形成⾯域的边界闭合边界(2)如果该闭合边界的线段是由多个线段组成,则须线合并成⼀体,然后转换成⾯域。

合并成⼀体:选中所有线段ModelerBooleanUnite转换成⾯域:选中所有线段ModelerSurfaceCover Lines由多个线段构成 Unite成⼀体 Cover成⾯域三、AutoCAD建模后导⼊Maxwell1、在AutoCAD建模时,定转⼦圆⼼应取(0,0)点。

2、各个⾯域须⾃⾏封闭。

3、模型转换成⾯域。

建模时,须保证构成⾯域的边界是可靠连接的。

然后绘图⾯域,选择模型,按回车键,将所绘制的模型转换成⾯域。

转换成⾯域后,可以通过⿏标移动,删除、Undo,来检查模型的正确性。

4、输出.sat格式模型⽂件⽂件输出,选择ACIS(*.sat)⽂件类型,取合适的⽂件名,点击“保存”,框选要输出的模型,回车。

此时.sat⽂件已经保存,可以在相应⽂件夹找到.sat ⽂件。

5、将.sat模型导⼊MaxwellModelerImport,找到已输出的.sat⽂件,点击“打开”,则可输⼊有限元模型。

6、有限元模型可以⼀次性建模输⼊,也可以将各零件分别建模、转换.sat、分别输⼊。

四、Maxwell前处理及求解设置1、选择模型的每⼀⾯域,修改其名称和颜⾊。

⾸先需正确对绕组分相2、将属于同⼀类的⾯域合并,以便施加励磁、材料,提⾼⼯作效率。

按住Ctrl键,选中需要合并的各个⾯域,ModelerBooleanUnite3、建⽴磁钢、冲⽚等材料。

(1)准备磁钢材料的Br、Hc,冲⽚材料的B-H曲线。

B-H曲线宜⽤记事本等编辑器,按要求的格式,输⼊磁化曲线数据,保存为.bh后缀名⽂件。

磁化曲线格式为:(2)⿏标右键点击Materials,选择Edit Library…打开材料库A、添加B-H磁化曲线在打开的材料窗体,点击Add Material…添加材料在相对磁导率类型下拉列表选择“Nonlinear”,使得Value出现“B-HCurve…”按钮点击按钮进⼊B-H曲线输⼊窗体。

12746667_急性脑梗死rt-PA溶栓治疗进展____

㊃综 述㊃[收稿日期]2015-09-06;[修回日期]2016-01-11[基金项目]河北省医学科学研究重点课题(20120184)[作者简介]刘静(1973-),女,河北唐山人,河北省唐山市工人医院主任医师,医学博士,从事神经危重症及免疫性疾病诊治研究㊂*通讯作者㊂E -m a i l :Y i b i n 07@s i n a .c o m急性脑梗死r t -P A 溶栓治疗进展刘 静,吴雅坤(综述),吕宪民,曹亦宾*(审校)(河北省唐山市工人医院神经内二科,河北唐山063000)[关键词] 脑梗死;组织型纤溶酶原激活物;阿加曲班;综述文献 d o i :10.3969/j .i s s n .1007-3205.2016.03.031[中图分类号] R 743.33 [文献标志码] A [文章编号] 1007-3205(2016)03-0355-03脑卒中发病率高㊁致死率高㊁致残率高,其中急性脑梗死是主要的卒中类型,占所有卒中的近80%,不仅给患者带来躯体和精神的痛苦,还给社会带来巨大的经济负担㊂对于急性脑梗死的治疗,在对其发病机制和病理学改变的认识以及影像学技术迅速发展的情况下,越来越强调早发现㊁早治疗的重要性㊂而越早接受治疗,其神经功能恢复的可能性就越大㊂急性脑梗死的治疗关键在于血管再通恢复血流㊁挽救缺血半暗带减少最后梗死面积及预防血管再闭塞,从而改善临床预后㊂目前公认的急性脑梗死的早期药物治疗当属静脉溶栓以再通血管治疗,但溶栓治疗后续如何治疗以避免血管再闭塞等尚存争议,现就相关进展情况综述如下㊂1 溶栓治疗阿替普酶是重组组织型纤溶酶原激活剂(r e c o m b i n a n t t i s s u e p l a s m i n o ge n a c t i v a t o r ,r t -P A ),它能通过激活纤溶酶原成为纤溶酶而溶解血块㊂r t -P A 的半衰期仅4~6m i n ,它对身体凝血功能和各组分的系统性作用比较小,因而出血不良反应少,故其具有强力溶解血栓的作用,且特异性较高㊁比较安全㊂目前国际推荐静脉应用r t -P A 进行溶栓治疗,这种血管再通治疗是目前最具有临床证据的方法㊂1995年发表的国立神经病与中风研究所(N a t i o n a l I n s t i t u t eo fN e u r o l o gi c a lD i s e a s ea n d S t r o k e ,N I N D S )临床试验结果表明,发病3h 内的急性脑梗死患者,采用静脉r t -P A 溶栓治疗,3个月后随访患者,神经功能完全或接近完全恢复者(31%~50%)显著高于安慰剂对照组(20%~28%),且随访3个月和1年的患者病死率相近[1]㊂同期的非介入性治疗(A l t e p l a s eT h r o m b o l ys i s f o r A c u t e N o n i n t e r v e n t i o n a l T h e r a p y in I s c h e m i c S t r o k e ,A T L A N T I S )试验,欧洲急性中风研究(E u r o p e -a n C o o p e r a t i v e A c u t e S t r o k e S t u d y ,E C A S S )Ⅰ㊁Ⅱ临床试验进一步证实了这一治疗方法的有效性,从而确立了进行脑梗死发病3h 的溶栓时间窗以及静脉r t -P A 的应用方法,规定标准剂量是0.9m g /k g ㊂中国也于2004年批准使用r t -P A 治疗急性脑梗死㊂2008㊁2009年分别发表了E C A S S Ⅲ试验结果,这是多中心双盲临床试验,再度证实了静脉r t -P A 溶栓治疗的有效性和安全性,在这些试验中溶栓时间窗延长到4.5h ,发现也能使患者获得显著的㊁中等度的疗效,增加了30%的预后良好的机会,且能改善不同亚型脑梗死患者30d 和90d 的神经功能[2-3]㊂E C A S SⅢ可以说是急性卒中治疗中的一个新的里程碑㊂在一项对r t -P A溶栓临床试验的分析中(C o c h r a n e 系统评价,包括8个随机对照试验(2889例患者)发现,静脉r t -P A 溶栓时间窗延长到6h ,可增加致死性颅内出血率(O R =3.60,95%C I =2.28~5.68),但可显著降低远期病死率或残疾率(O R =0.80,95%C I =0.69~0.93),即每治疗1000例患者将减少55例死亡或残疾[4]㊂另一项2010年发表的荟萃分析也表明,脑梗死患者发病4.5h 内应用r t -P A 溶栓治疗可使其获益,超过这一时间窗进行溶栓的风险可能高于获益[5]㊂但随着影像学的发展,近来也有研究报道认为,如果磁共振灌注加权成像/弥散加权成像失匹配>20%,可以考虑将溶栓时间窗延长至6.0h [6]㊂另有报道显示不同种族人群可能对r t -P A 的最佳用量不一致㊂2003年日本在其国内进行了对3h 内急性脑梗死的r t -P A 静脉溶栓治疗的大规模开放试验,选择剂量0.6m g /k g ,最大剂量60m g ,发现其安全性和有效性结果与N I N D S 试验相似㊂故日本指南推荐r t -P A 静脉溶栓剂量为0.6m g /k g,最大㊃553㊃第37卷第3期2016年3月河北医科大学学报J O U R N A L O F H E B E I M E D I C A L U N I V E R S I T YV o l .37 N o .3M a r . 2016 Copyright ©博看网. All Rights Reserved.剂量60m g[7]㊂2012年我国陈宝龙等[8]研究报道,纳入69例急性缺血性卒中患者,采用静脉r t-P A (0.9m g/k g,最大剂量90m g)溶栓治疗,同时加用自由基清除剂依达拉奉,也得到了与N I N D S试验相似的结果㊂2联合药物治疗急性缺血性卒中应用r t-P A的效果与血栓溶解的速度和程度以及动脉再通有关[9-11]㊂研究表明,仅有20%~30%患者在静脉r t P A治疗2h内经颅多普勒超声提示会有完全的血管再通,60%的患者仅部分再通,而再通后又有34%的患者会再闭塞[12-13]㊂再闭塞会导致神经功能恶化以及更高的医院病死率,因此迫切需要能安全放大r t-P A疗效并便于完成的治疗方法㊂虽然目前指南不推荐在溶栓24h内应用抗血小板药物或抗凝药物,但为了防止再通的血管再闭塞,增加r t-P A的疗效,众多学者对联合应用抗血小板聚集药物及抗凝药物如肝素等进行了大量研究,但在具体剂量㊁治疗时间窗及何种药物更安全有效等方面尚未取得一致意见[14]㊂研究表明,大量凝血酶在血栓形成的同时被结合在纤维蛋白血栓中,因此血栓溶解后就会有大量的凝血酶被释放出来,这可导致血栓溶解后的高凝状态㊁血栓再形成及血管再闭塞,所以这也是部分研究者在溶栓后采用抗凝治疗的依据㊂但常用抗凝剂如肝素㊁低分子肝素由于相对分子质量大,仅能拮抗血液中的凝血酶,对结合在血栓中的凝血酶很难发挥抗凝作用[15]㊂阿加曲班是凝血酶抑制剂,选择性抑制自由的和结合的凝血酶[16-17],它可直接灭活凝血酶,既能结合血液循环中的凝血酶还能结合凝结血块中的凝血酶,而肝素只能结合血液循环中的凝血酶[18];另外,可间接抑制凝血酶的产生,治疗剂量下对血小板功能无影响,不会像肝素那样导致血小板减少;对凝血酶原具有高度的选择性,不像肝素和肝素类似物[19]那样需要辅助因子抗凝血酶才能发挥抗凝作用㊂因此,阿加曲班可以用于抗凝血酶缺乏或功能低下患者的抗凝㊂而且,因其与活化部分凝血酶时间(a c t i v a t e d p a r t i a l t h r o m b o p l a st i n t i m e,A P T T)或活化凝血时间(a c t i v e c l o t t i n g t i m e,A C T)有良好的相关性[20],其效果和安全性可以预测㊂阿加曲班还有以下优点[21-23]:①半衰期39~51m i n,且不受性别㊁年龄和肾功能的影响,允许在出血的病例中快速抵消它的作用,停药后2~4h内A P T T恢复正常,故容易控制药物抗凝水平;②它在相当宽的剂量范围内或抗栓浓度范围内无出血等不良反应㊂③无免疫原性,不像肝素会产生抗体,也不像肝素类似物和低分子肝素那样会与肝素抗体发生交叉反应,不产生任何中和或非中和抗体,效价恒定㊂目前,日本㊁韩国已批准阿加曲班可以用于急性脑血栓形成和慢性动脉闭塞症;美国食品和药物管理局也批准阿加曲班可以用于肝素诱导的血小板减少症患者,预防其血栓形成;加拿大也批准应用于临床[24]㊂阿加曲班单独应用㊁联合溶栓药物或阿司匹林应用的安全性已在急性心肌梗死的患者中得到证实[25-27]㊂在一项阿加曲班㊁阿司匹林分别联合静脉溶栓治疗急性心肌梗死的随机对照试验中,与阿司匹林比较,应用阿加曲班使冠状动脉再灌注的发生率更高[28]㊂在动物卒中模型中,阿加曲班通过改善微循环血流㊁提高血管再通的速度和完全性以及阻止再闭塞而增加了r t-P A的疗效,并且安全[29-31]㊂阿加曲班和静脉溶栓联合治疗卒中的临床研究报道于2012年1月,B a r r e t o等[32]在S t r o k e杂志发表一项 阿加曲班-r t P A卒中研究 的临床结果表明,在标准剂量静脉r t P A治疗期间,应用阿加曲班100μg/k g静脉注射后再持续静脉48h治疗,在近端颅内动脉闭塞导致的中度神经功能缺失的65例患者中,4例患者发生了有意义的颅内出血(6.2%,95%C I=1.7~15.0),其中3例是症状性脑出血(4.6%,95%C I=0.9~12.9);7例(10.8%)患者在最初7天内死亡;在2h的监测期间,47例经颅多普勒超声中有29例(61.7%)再通,其中完全再通19例(40.4%),部分再通10例(21.3%)㊂与既往研究报道比较,这一联合治疗方法可能是安全的;两者联合比单纯应用r t P A能使更多的闭塞血管完全再通㊂该研究结果的发表为安全提高急性缺血性卒中溶栓治疗效果,提高血管再通的速度和完全性㊁减少再闭塞提供了很好的前景㊂目前国内尚未见相关研究报道,我们正在研究探讨之中㊂[参考文献][1] T h eN a t i o n a l I n s t i t u t eo fN e u r o l o g i c a lD i s o r d e r sa n dS t r o k er t-P AS t r o k eS t u d y G r o u p.T i s s u e p l a s m i n o g e na c t i v a t o r f o ra c u t e i s c h e m i cs t r o k e[J].N E n g lJ M e d,1995,333(24):1581-1587.[2] B l u h m k i E,C h a m o r r o A,D v a l o sA,e ta l.S t r o k e t r e a t m e n tw i t ha l t e p l a s e g i v e n3.0-4.5ha f t e r o n s e t o f a c u t e i s c h a e m i cs t r o k e(E C A S SⅢ):a d d i t i o n a l o u t c o m e s a n d s u b g r o u pa n a l y s i s o f ar a n d o m i z e dc o n t r o l l e dt r i a l[J].L a n c e tN e u r o l,2009,8(12):1095-1102.[3] H a c k e W,K a s t e M,B l u h m k iE,e ta l.T h r o m b o l y s i s w i t ha l t e p l a s e3t o4.5h o u r sa f t e ra c u t ei s c h e m i cs t r o k e[J].N㊃653㊃河北医科大学学报第37卷第3期Copyright©博看网. All Rights Reserved.E n g l JM e d,2008,359(13):1317-1329.[4] W a r d l a wJ M,d e lZ o p p oG,Y a m a g u c h iT.T h r o m b o l y s i s f o ra c u t ei s c h a e m i cs t r o k e[J].C o c h r a n e D a t ab a s e S y s t R e v,2000,(2):C D000213.[5] L e e s K R,B l u h m k i E,v o n K u mm e r R,e t a l.T i m e t ot r e a t m e n tw i t hi n t r a v e n o u sa l t e p l a s ea n do u t c o m e i ns t r o k ea nu p d a t e d p o o l e da n a l y s i so fE C A S S,A T L A N T I S,N I N D S,a n dE P I T H E Tt r i a l s[J].L a n c e t,2010,375(9727):1695-1703.[6] B iM,M a Q,Z h a n g S,e ta l.L o c a l m i l dh y p o t h e r m i a w i t ht h r o m b o l y s i s f o ra c u t e i s c h e m i cs t r o k ew i t h i na6-h w i n d o w[J].C l i nN e u r o lN e u r o s u r g,2011,113(9):768-773. [7] Y a m a g u c h iT,M o r iE,M i n e m a t s u K,e ta l.A l t e p l a s ea t0.6m g/k g f o ra c u t e i s c h e m i cs t r o k e w i t hi n3h o u r so fo n s e t:J a p a n A l t e p l a s e C l i n i c a l T r i a l[J].S t r o k e,2006,37(7):1810-1815.[8]陈宝龙,李俊杰,刘亚华,等.超早期脑梗死溶栓治疗联合应用依达拉奉后神经功能缺损症状的临床观察[J].空军医学杂志,2012,28(2):78-80.[9] A l e x a n d r o vA V,B u r g i n W S,D e m c h u k AM,e ta l.S p e e do fi n t r a c r a n i a lc l o tl y s i s w i t hi n t r a v e n o u st i s s u e p l a s m i n o g e na c t i v a t o rt h e r a p y:s o n o g r a p h i cc l a s s i f i c a t i o n a n ds h o r t-t e r mi m p r o v e m e n t[J].C i r c u l a t i o n,2001,103(24):2897-2902.[10] D e m c h u kAM,B u r g i n W S,C h r i s t o u I,e t a l.T h r o m b o l y s i s I nB r a i nI s c h e m i a(T I B I)t r a n s c r a n i a l D o p p l e r f l o w g r a d e sp r e d i c tc l i n i c a l s e v e r i t y,e a r l y r e c o v e r y,a n d m o r t a l i t y i np a t i e n t s t r e a t e dw i t h i n t r a v e n o u s t i s s u e p l a s m i n o g e n a c t i v a t o r[J].S t r o k e,2001,32(1):89-93.[11] L a b i c h eL A,A l-S e n a n i F,W o j n e rAW,e t a l.I s t h eb e n e f i t o fe a r l y r e c a n a l i z a t i o n s u s t a i n e d a t3m o n t h s A p r o s p e c t i v ec o h o r t s t ud y[J].S t r o k e,2003,34(3):695-698.[12] A l e x a n d r o vA V,D e m c h u kAM,B u r g i nW S,e t a l.U l t r a s o u n d-e n h a n c e dt h r o m b o l y s i sf o ra c u t ei s c h e m i cs t r o k e:p h a s eⅠ.F i n d i n g s o f t h eC L O T B U S Tt r i a l[J].JN e u r o i m a g i n g,2004,14(2):113-117.[13] A l e x a n d r o v A V,G r o t t aJ C.A r t e r i a lr e o c c l u s i o ni n s t r o k ep a t i e n t s t r e a t e dw i t h i n t r a v e n o u s t i s s u e p l a s m i n o g e n a c t i v a t o r[J].N e u r o l o g y,2002,59(6):862-867.[14] F u l g h a mJ R,I n g a l lT J,S t e a dL G,e t a l.M a n a g e m e n t o f a c u t ei s c h e m i cs t r o k e[J].M a y o C l i n P r o,2004,79(11):1459-1469.[15]赵智江,张微微,赵秀欣,等.阿加曲班治疗急性缺血性脑卒中疗效及应用时机的临床研究[J].中华老年心脑血管病杂志,2012,14(2):181-182.[16] W a l e n g aJ M.A no v e r v i e w o ft h ed i r e c tt h r o m b i ni n h i b i t o ra r g a t r ob a n[J].P a t h o p h y s i o l H a e m o s t T h r o m b,2002,32(S u p p l3):9-14.[17] T a n a k a K A,S z l a m F,K a t o r i N,e t a l.T h e e f f e c t s o fa r g a t r ob a no n t h r o m b i n g e n e r a t i o na n dh e m o s t a t ic a c t i v a t i o ni nv i t r o[J].A n e s t hA n a l g,2004,99(5):1283-1289.[18] B e r r y C N,G i r a r d o t C,L e c o f f r e C,e t a l.E f f e c t s o ft h es y n t h e t i ct h r o m b i ni n h i b i t o ra r g a t r o b a n o nf i b r i n-o rc l o t-i n c o r p o r a t e d t h r o m b i n:c o m p a r i s o n w i t h h e p a r i n a n dr e c o m b i n a n t H i r u d i n[J].T h r o m b H a e m o s t,1994,72(3):381-386.[19] K u m a d a T,A b i k o Y.C o m p a r a t i v es t u d y o nh e p a r i na n das y n t h e t i c t h r o m b i n i n h i b i t o r n o.805(M D-805*)i ne x p e r i m e n t a l a n t i t h r o m b i nⅢ-d ef i c i e n ta n i m a l s[J].T h r o m bR e s,1981,24(4):285-298.[20] F a r e e dJ,J e s k e W P.S m a l l-m o l e c u l e d i r e c ta n t i t h r o m b i n s:a r g a t r ob a n[J].B e s tP r ac tR e sC l i n H a e m a t o l,2004,7(1):127-138.[21]朱晓冬,常宝成,程焱.阿加曲班临床应用的进展[J].中国医师进修杂志,2014,37(19):73-75.[22]刘君,张冠群,崔晓.阿加曲班治疗急性进展性脑梗死的临床观察[J].卒中与神经疾病,2014,21(1):45-47.[23] W a r k e n t i n T E.H e p a r i n-i n d u c e d t h r o m b o c y t o p e n i a:ac l i n i c o p a t h o l o g i cs y nd r o m e[J].T h r o m b H ae m o s t,1999,82(2):439-447.[24]I k o m aH.D e v e l o p m e n t o f a r g a t r o b a n a s a n a n t i c o a g u l a n t a n da n t i t h r o mb i n a g e n ti n J a p a n[J].P a t h o p h y s i o l H a e m o s tT h r o m b,2002,32(S u p p l3):23-28.[25]J a n g I K,B r o w n D F,G i u g l i a n o R P,e ta l.A m u l t i c e n t e r,r a n d o m i z e d s t u d y o f a r g a t r o b a nv e r s u sh e p a r i na sa d j u n c t t ot i s s u e p l a s m i n o g e n a c t i v a t o r(t P A)i n a c u t e m y o c a r d i a li n f a r c t i o n:m y o c a r d i a li n f a r c t i o n w i t h n o v a s t a n a n d T P A(M I N T)s t u d y[J].JA m C o l lC a r d i o l,1999,33(7):1879-1885.[26] M o l e d i n aM,C h a k i rM,G a n d h i P J.As y n o p s i so f t h e c l i n i c a lu s e s o fa r g a t r o b a n[J].J T h r o m b T h r o m b o l y s i s,2001,12(2):141-149.[27] W y k r z y k o w s k a J J,K a t h i r e s a nS,J a n g I K.C l i n i c i a nu p d a t e:d i re c t t h r o m b i n i n h i b i t o r s i na c u t e c o r o n a r y s y n d r o m e s[J].JT h r o m bT h r o m b o l y s i s,2003,15(1):47-57.[28]J a n g I K,G o l d H K,L e i n b a c h R C,e ta l.I n v i v ot h r o m b i ni n h i b i t i o ne n h a n c e sa n ds u s t a i n sa r t e r i a lr e c a n a l i z a t i o n w i t hr e c o m b i n a n t t i s s u e-t y p e p l a s m i n o g e na c t i v a t o r[J].C i r cR e s,1990,67(6):1552-1561.[29] K a w a iH,U m e m u r a K,N a k a s h i m a M.E f f e c to fa r g a t r o b a no n m i c r o t h r o m b if o r m a t i o n a n d b r a i n d a m a g ei n t h e r a tm i d d l e c e r e b r a l a r t e r y t h r o m b o s i s m o d e l[J].J p n JP h a r m a c o l,1995,69(2):143-148.[30] M o r r i s D C,Z h a n g L,Z h a n g Z G,e ta l.E x t e n s i o n o ft h et h e r a p e u t i c w i n d o w f o r r e c o m b i n a n t t i s s u e p l a s m i n o g e na c t i v a t o rw i t h a r g a t r ob a n i n a r a tm o d e l o f e m b o l ic s t r o k e[J].S t r o k e,2001,32(11):2635-2640.[31] T a m a o Y,K i k u m o t o R.E f f e c to f A r g a t r o b a n,a s e l e c t i v et h r o m b i n i n h i b i t o r,o na n i m a lm o d e l so f c e r e b r a l t h r o m b o s i s[J].S e m i nT h r o m bH e m o s t,1997,23(6):523-530.[32] B a r r e t oA D,A l e x a n d r o vA V,L y d e nP,e t a l.T h ea r g a t r o b a na n d t i s s u e-t y p e p l a s m i n o g e n a c t i v a t o r s t r o k e s t u d y:f i n a lr e s u l t s o f a p i l o t s a f e t y s t u d y[J].S t r o k e,2012,43(3):770-775.(本文编辑:许卓文)㊃753㊃刘静等急性脑梗死r t-P A溶栓治疗进展Copyright©博看网. All Rights Reserved.。

  1. 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
  2. 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
  3. 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。
(细胞桥粒),
由Cadherins (钙依赖性黏附蛋白) 通过同嗜作用形成
Tight Junction (TJ)
形成转运屏障
TJ对细胞转运具有高度抗性,主要见于脑的微循环VEC
Tight junction (TJ) and adherens junction (AJ)
Endothelial cell F-actin
inhibition of Ca2+ influx into smooth muscle cell
Galley & Webster 2004
W. Rose
(1) Nitric Oxide (NO)
• • • • • • Actions Vasodilator Effects Most important intrinsic dilator
内皮素(ET) VEC 转化血液中的血管紧张素Ⅰ(AngⅠ) 生成的AngⅡ 血小板活化因子(PAF)、血栓素A2(TXA2) 等
Vasoconstriction
1.舒血管物质
(1) Nitric Oxide (NO) / EDRF:
合成途径(EC): Endothelial cell
e
舒血管机制(VSMC): Smooth muscle cell
VSMC
PGI2:
增强EDRF/NO的舒血管效应; 减少ET的合成释放,减弱其缩血管效应; 最强的血小板粘附、聚集抑制剂
有选择性的通透

内皮细胞在血脑屏障中的作用
(二) 屏障与物质交换功能
VEC能选择性的调节小分子 甚至超大分子物质通过血管壁
物质交换:(1) 穿EC的小管运输; (2)吞饮小泡(囊泡)的转运;(3)受体调节的转运;
(4) 脂溶性物质经EC膜弥散;(5) 通过EC的间隙扩散(受多种物质调控)
组胺、激肽、5-HT Luminal side
(三) VEC的结构特点
4. Weibel-Palade body(W-P小体)
Von Willebrand 因子贮存和加工器官 怀布尔-帕拉德小体
小体来源于高尔基体
特殊颗粒
VEC特异的细胞器
EC的形态学标志
是vWF贮存和加工的器官
Lung Microvascular Endothelial Cells - HMVEC-L stained for von Willebrand factor (vWF)
(一) 衬里功能
血管壁:由内膜、中膜、外膜构成
内膜:由内皮和内皮下层构成
内弹性膜
内皮细胞:为血流提供光滑的表面
重要机制: VEC通过合成与分泌相关物质, 为血管提供一个抗血栓的表面, 保证血液的正常流动。
(二) 屏障功能与物质交换功能
屏障功能
VEC的紧密连接,
TJ
能防止血浆大分子物质 和血细胞漏出 BBB: 阻止某些物质(有害的) 由血液进入脑组织
ZO-1
Paracellular space
junctional adhesion molecules (JAM) (连接粘附分子) JAM 闭合蛋白
Myosin
小带闭合蛋白 ZO-1/2
Occludin
封闭蛋白
Cingulin
ZO-1 TJ
Claudin
紧密连接蛋白高表达
α
in n te a -c
人肺微血管内皮细胞
Von Willebrand staining(血管性血友病因子/ 凝血因子8相关抗原)
1. EC的特异性
Human Endothelial Cell Surface Marker
PECAM-1 or CD31: Platelet-Endothelial Cells Adhesion Molecule-1
HUVECs F-actin microfilaments yellowish
von Willebrand factor(vWF) (Weibel-Palade bodies) orange
黏着斑蛋白
vinculin plaques (greenish) nuclei (blue) (DAPI)
粘着斑蛋白、细胞骨架蛋白及肌动蛋白相结合,在细胞粘附、伸展、运动、增殖、存活等过程中起重要作用.
Ca2+
β
Fig. 1
in n te a -c
Cadherin
AJ
钙依赖性的黏附蛋白
内皮屏障功能的结构基础 细胞间连接结构的组成 Simplified schematic image of tight-junction (TJ) and adherens junction (AJ)
Gap Junction (GJ)
(二)血管内皮细胞分类
有窗 不同部位的VEC的形态、结构有一定的差异
有孔EC
连续EC:内皮细胞连续完整。(CNS的VEC) 有窗/有孔EC: 胞质局部薄弱,内外两层融合成一层; 或EC上有洞隙,允许较大的分子通过。
存在于胃肠粘膜、肾小球、肾小管周围的毛细血管
(三) VEC的结构特点
1. 质膜上的小凹(Caveolae)
Endothelial transport pathways
Abluminal side
(三)释放血管活性物质,调节血管张力
1. 舒血管作用
内皮依赖性舒张因子(NO/EDRF)
Vasodilatation
前列环素(PGI2 )
血管内皮细胞超极化因子(EDHF ) 缓激肽、乙酰辅酶A等
2. 缩血管作用
参与细胞间的信息传递
(五) VEC的异质性
(VEC heterogeneity)
用DNA芯片技术分析表明,来自不同种属、不同
组织的血管内皮细胞 (VEC) ,其基因表达存在差异,
影响其对生理和病理变化的适应,表现为功能的异质
性。
(五)VEC的异质性(VEC heterogeneity)
分子表达的异质性: 表面受体表达(组胺、P物质受体等) 黏附分子表达 (p-selectin, ICAM-1) 表面复合糖的表达(唾液酸残基亚型)…
endothelial marker genes in mouse organs
endothelial marker genes and adhesion molecules in different microvascular
2. VEC的功能异质性
All endothelial cells inhibit coagulation of the blood.
PECAM-1 is typically an antigen that is shared by both endothelial and distinct hematopoietic (造血) cells.
血管内皮钙黏蛋白(VE-cadherin) 内皮细胞一氧化氮合酶(eNOS)、血管紧张素酶
(一) 血管内皮细胞
What’s the vessel endothelial cells(VEC)?
VEC:循环血液与血管壁内皮下组织之间的单层细胞,通过结缔组织, 附着于内皮下组织。
VEC形态呈梭形、核居中,细胞相互排列紧密,呈“鹅卵石”样外观 。
(基膜)
血管内皮细胞为扁平的鳞状细胞,随血流呈单层纵向排列。 内皮细胞产生和分泌的纤维连接蛋白(fibronecti,FN) 将内皮细胞与其下面对胶原组织粘连在一起。
Anti-thrombotic Prevents cell adhesion & Platelet aggregation Anti-atherogenic Inhibits lipidoxidation & monocyte migration Growth inhibitor inhibits cellular growth & migration Antioxidant Scavenges superoxide anions Prevents generation of thrombosis
血管内皮细胞
(Vascular endothelia cell, VEC)
被覆功能→ 屏障功能→ 炎症反应→ 代谢及内分泌器官
血管内皮细胞
(Vascular endothelia cell, VEC)
一、VEC的结构
二、VEC的功能
三、VEC在炎症反应中的作用
四、内皮功能障碍与心血管系统疾病
一、血管内皮细胞(VEC)的结构特点
1. EC的特异性
微静脉,具有对炎性介质高度敏感的特性。
在组胺、缓激肽和5-HT等的刺激下,微静脉比微动脉和毛细血管更
易引起血浆外渗、细胞粘附。
VEC功能的 异质性:
Vasodilation Fluid filtration Albumin leakage Leukocyte-capillary plugging Oxidant production
(三) VEC的结构特点
3. VEC表面突起 Microvilli(微绒毛) and Glycoprotein
Surface Coat
VEC向管腔内伸出的胞质突起, 起吸收作用,炎症时与捕捉白细胞有关。
生理条件下,发挥血管保护作用:
Microvilli of Cultured HUVEC (EM)
Cav-1, Cav-2 and Cav-3
由细胞膜内馅形成,其内存在许多信号分子,包括:PDGF受体, PKC,PLC, PI3K等,与细胞信号转导有关。 通过这些信号分子,实现对内皮细胞功能的调节作用
小凹蛋白与心血管疾病
1) 研究发现,在VEC及心肌细胞,Cav-1 能够抑制eNOS 活性。
VEC沉默Cav-1,导致eNOS 活性增加和NO 的释放,引起血管弹性改变。 调节内皮功能 相反,过表达Cav-1,将导致NO 介导的血管舒缩反应受抑制。 2) eNOS 和NADPH 氧化酶均定位于小凹处,提示小凹可能为NO 和O2-.的产生提供场 所。 3) VEGFR定位于小凹中,Cav-1 通过正或负性调控VEGFR2 而调节VEGF 信号通路。 参与血管生成 4) 心肌细胞过表达Cav-3,导致严重的心肌组织退化,纤维化及心肌功能障碍。 而Cav-1 与Cav-3 双重基因缺陷小鼠,小鼠表现为左心室壁肥厚、室间隔肥厚。 5)在体人的研究发现,高胆固醇血症患者Cav-1 水平高于正常人,导致eNOS 活性降低引 起的体内NO水平减少。 Cav和小凹在高血压、动脉粥样硬化、心肌病、糖尿病大血管病变及心肌IR损伤中都有重要作用
相关文档
最新文档