Tianyao Huang_SPL2014_Adaptive Compressed Sensing via Minimizing Cramer–Rao Bound


Noise MIMO Radar Target Imaging Based on Bayesian Compressive Sensing


Journal of Nanjing University of Science and Technology, Vol. 37, No. 2, 2013.
Wang Chaoyu, He Yapeng, Hu Heng, Zhu Xiaohua (School of Electronic Engineering and Optoelectronic Technology, Nanjing University of Science and Technology, Nanjing 210094, China)

Abstract: To improve the imaging performance of compressive sensing radar at low signal-to-noise ratio, this paper proposes a Bayesian-compressive-sensing-based imaging method for noise multiple-input multiple-output (MIMO) radar.

The sparse sensing model of the noise MIMO radar system is given, a Bayesian probability density function is constructed, and the objective function is solved by a maximum a posteriori (MAP) optimization method.

The optimized estimate approaches the optimal sparsity level. Compared with conventional compressive sensing reconstruction methods, the proposed method effectively reduces the estimation error of the target-scene vector, improves the quality of the two-dimensional target image, and is more robust to noise.

Simulation results verify the effectiveness of the method.

Keywords: Bayesian compressive sensing; noise multiple-input multiple-output radar; target imaging
CLC classification: TN958.8. Article ID: 1005-9830(2013)02-0262-07.
English title: Noise MIMO radar target imaging based on Bayesian compressive sensing, by Wang Chaoyu, He Yapeng, Hu Heng, and Zhu Xiaohua (School of Electronic Engineering and Optoelectronic Technology, NUST, Nanjing 210094, China).
Received: 2011-10-13; revised: 2012-07-10. Supported by the NUST Independent Research Program (2010ZDJH05).
About the authors: Wang Chaoyu (b. 1985), male, Ph.D. candidate; research interest: sparse imaging; e-mail: wangchaoyu@yahoo.cn. Corresponding author: Zhu Xiaohua (b. 1966), male, professor and doctoral advisor; research interests: radar systems and high-speed digital signal processing; e-mail: zxh@mail.njust.edu.cn.
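The abstract above describes recovering a sparse target-scene vector from noisy radar measurements via MAP optimization. The paper's own Bayesian algorithm is not reproduced here; as a generic stand-in, the sketch below uses the same sparse model $y = \Phi x + n$ with a Laplace-prior MAP (i.e., l1-regularized least squares) solved by plain ISTA. All dimensions, the regularization weight, and the random sensing matrix are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse scene vector: d cells, only k occupied by scatterers.
d, n, k = 100, 50, 5
x_true = np.zeros(d)
x_true[rng.choice(d, k, replace=False)] = rng.normal(0, 1, k)

# Random sensing matrix (stand-in for the actual radar measurement operator).
Phi = rng.normal(0, 1.0 / np.sqrt(n), (n, d))
y = Phi @ x_true                      # noiseless measurements for the sketch

# MAP with a Laplace (sparsity-promoting) prior reduces to l1-regularized
# least squares; solve it with the iterative soft-thresholding algorithm.
lam = 1e-3
L = np.linalg.eigvalsh(Phi.T @ Phi).max()   # Lipschitz constant of the gradient
x = np.zeros(d)
for _ in range(3000):
    z = x + Phi.T @ (y - Phi @ x) / L       # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

With a small regularization weight the l1 minimizer essentially coincides with the true sparse scene in this noiseless toy setting.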

Joint Sparse Channel Estimation in the Angle and Delay Domains


Zhang Yueming; Zhang Bingshan; Gui Lin; Qin Qibo; Xiong Jian

Abstract: To address channel estimation for multiple-input multiple-output (MIMO) systems over doubly selective channels, and to exploit the channel's joint sparsity in the delay and angle domains, a new compressed-sensing-based joint sparse channel estimation scheme is proposed. First, channel estimation is formulated as a structured compressed sensing problem using a basis expansion model. Based on this model, two new greedy algorithms are proposed that effectively recover the time-varying channel parameters. The two-stage simultaneous orthogonal matching pursuit (TS-SOMP) algorithm first locates all nonzero tap positions in the delay domain and then estimates the nonzero angle-domain coefficients. The two-loop simultaneous orthogonal matching pursuit (TL-SOMP) algorithm consists of an outer and an inner loop: once a nonzero tap position is found in the outer loop, the nonzero angle-domain coefficients are solved directly in the inner loop. Finally, simulated normalized mean square error (NMSE) curves verify the effectiveness of the algorithms.

Journal: Journal of Shanghai Normal University (Natural Sciences), 2018, 47(2): 192-197.
Keywords: channel estimation; compressed sensing; doubly selective; MIMO systems; angle domain
Affiliations: School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240; Beijing Institute of Tracking and Telecommunications Technology, Beijing 100094.
CLC classification: TN929.5

0 Introduction

In high-mobility environments, wideband wireless systems suffer not only frequency-selective fading but also time-selective fading; this scenario is referred to as a doubly selective (DS) channel [1]. For MIMO systems over DS channels, accurate channel state information (CSI) is difficult to obtain because of the large number of unknown channel parameters. To acquire CSI efficiently, several channel estimation schemes for MIMO systems over DS channels have been proposed [2-3]. However, these schemes assume rich multipath and therefore incur a large pilot overhead. A growing body of work has confirmed that many practical wireless channels are sparse, so compressed sensing (CS) theory can be applied to channel estimation [4]. Reference [5] exploits the delay-domain sparsity of the channel and uses CS methods to improve estimation accuracy. In practice, because the scatterers around the base station (BS) are limited, MIMO channels usually exhibit sparsity in the angle domain as well [6]. References [7] and [8] exploit delay- and angle-domain sparsity simultaneously in CS-based MIMO channel estimation. However, all of the above schemes assume flat-fading or time-invariant channel models; for MIMO systems over DS channels, no prior work has exploited the joint sparsity of the delay and angle domains.

For MIMO systems over DS channels, this paper proposes a new CS-based joint sparse channel estimation scheme. The complex-exponential basis expansion model (CE-BEM) is first used to model the time variation of the DS channel, turning the channel estimation target into an angle-domain coefficient recovery problem. The sparse structure of the coefficient matrix to be estimated is then analyzed in detail, two new greedy algorithms are proposed to recover the channel parameters, and MATLAB simulations verify that the algorithms perform well.

1 System model

1.1 Complex-exponential basis expansion model over DS channels

We study MIMO orthogonal frequency-division multiplexing (OFDM) downlink transmission, where the base station is equipped with Nt transmit antennas and the user has a single antenna. The received signal at the user is given by (1), where F is the Fourier transform matrix, X_nt (nt in [1, Nt]) is the data transmitted by the nt-th antenna, W denotes white Gaussian noise, and the remaining factor is the time-domain channel matrix. The DS channel is modeled with the CE-BEM as in (2), where the channel gain of the l-th discrete path between the nt-th transmit antenna and the user is expanded over the CE-BEM basis functions b_q (q in [0, Q-1]) with CE-BEM coefficients, and ξ_l denotes the modeling error. Substituting (2) into (1) yields (3), where the effective channel matrix is formed from the first L columns and Z collects the white Gaussian noise and the CE-BEM modeling error.

To reduce the pilot overhead of the MIMO system, a non-orthogonal pilot pattern is adopted, i.e., the pilot positions of the different transmit antennas coincide. In addition, the frequency-domain Kronecker delta (FDKD) pilot arrangement is used: each of the G effective pilots is flanked on each side by Q-1 guard pilots [9], where the effective pilot values are set to random +1 or -1 and the guard pilots are set to 0. Let the effective pilot sequence be κ_val = {k_0, ..., k_{G-1}}; then the full pilot sequence (effective plus guard pilots) is κ = ∪ {k-Q+1, ..., k, ..., k+Q-1}, k in κ_val. We further define Q new subsets as in (4). Based on the CE-BEM model and this sparse pilot pattern, the received pilot subcarriers corresponding to κ_q are given by (5), where the right-hand side contains the effective pilot values [10].

1.2
Modeling and sparsity analysis

Transforming the channel model into the angle domain, the angle-domain channel matrix associated with the l-th channel tap is given by (6), where U_t is a unitary matrix with a prescribed (m, n) entry and I_n denotes the identity of order n. Defining the q-th CE-BEM coefficient vector of the l-th channel tap and the corresponding angle-domain coefficient vector, the two satisfy (7). Combining (2), (6), and (7), the angle-domain channel matrix can be expressed as in (8), and the received pilot subcarriers become (9). This leads to the final structured compressed channel estimation model (10), where R = ([Y]_{κ1} ... [Y]_{κQ}), M = I_N ⊗ ..., the symbol ⊗ denotes the Kronecker product, and S is the coefficient matrix to be estimated. The channel estimation target is thus converted into solving for S.

Next, the sparse structure of the matrix S is analyzed. First, consider the sparsity of the channel in the delay domain. In wideband systems, the delay spacing is usually much larger than the sampling period [5], so many of the sub-matrices are zero matrices or have all coefficients approximately equal to zero. Let the delay-domain sparsity be K_d, i.e., only K_d sub-matrices (corresponding to the index set ι = {l_{t1}, ..., l_{tKd}}) have relatively large coefficients, while the remaining sub-matrices with small coefficients can be neglected. Hence, for all nt in [1, Nt] and l not in ι, relation (11) holds, so each coefficient vector contains only K_d nonzero sub-vectors.

Second, consider the sparsity of the channel in the angle domain. In practical MIMO channels the base station is usually higher than the surrounding buildings [6], so the useful signal is concentrated in only a few directions and the angle domain exhibits sparsity. Let the angle-domain sparsity be K_a, i.e., only K_a columns (with the corresponding index set) have relatively large coefficients, while the remaining columns with small coefficients can be neglected. Similarly to (11), relation (12) holds for the other indices. Clearly, each such vector should have sparsity K_a, and the nonzero elements of these vectors occupy the same positions. In summary, the coefficient vectors are nonzero if and only if l is in ι (|ι| = K_d), and the nonzero vectors for each such l share the same nonzero positions.

2 Greedy algorithms

Based on the structured compressed sensing model, two new greedy algorithms are proposed to compute the channel parameters. The two-stage simultaneous orthogonal matching pursuit (TS-SOMP) algorithm (Fig. 1) comprises two stages. Stage 1 finds all nonzero tap positions: search for the best index m_i in [0, L-1] that minimizes the residual, update the support set Ω and the matrix Θ according to the obtained m_i, and compute the new residual. Stage 2 estimates the nonzero angle-domain coefficients with the simultaneous orthogonal matching pursuit (SOMP) algorithm [11], using the selected matrix Θ, the received signal R, and the sparsity K_d × K_a as inputs, and S_Ω as output.

The two-loop simultaneous orthogonal matching pursuit (TL-SOMP) algorithm comprises an outer and an inner loop. In each outer iteration, the best index m_i in [0, L-1] that minimizes the residual is found. In each inner iteration, the optimal index k_j in [1, Nt] that maximizes the correlation is computed. Based on m_i and k_j, the support set Ω and the selection matrix Θ are updated, and the new residual is computed. Finally, the nonzero coefficients are obtained as S_Ω = Θ† R.

The orthogonal matching pursuit (OMP) and SOMP algorithms can also estimate the sparse vectors. However, OMP ignores the joint sparsity of the different coefficient vectors, and SOMP searches for K_d × K_a nonzero rows among N_t L rows, a large search dimension that yields low accuracy. In the proposed TS-SOMP algorithm, after the nonzero tap positions are obtained in Stage 1, the number of unknown rows in Stage 2 is reduced to K_d × N_t ≪ N_t × L, which improves the estimation accuracy. Furthermore, as soon as TL-SOMP finds one nonzero tap position in the delay domain, K_a nonzero rows can be estimated from only N_t unknown rows, so this algorithm achieves even higher estimation accuracy.

From the estimated coefficient vectors, the CE-BEM coefficients are obtained via (7). The estimated CE-BEM coefficients are smoothed using the discrete prolate spheroidal sequences (DPSSs) of [11], and the channel matrices H_l are then computed from (2).

3 Simulation results and analysis

The performance of the proposed algorithms is verified by MATLAB simulation. Table 1 lists the MIMO-OFDM system parameters.
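The SOMP building block used by both proposed algorithms can be sketched generically. The snippet below is a plain multiple-measurement-vector SOMP over an arbitrary dictionary; the matrix sizes, sparsity, and data are illustrative assumptions, not the paper's Θ, R, or system parameters:

```python
import numpy as np

def somp(Phi, Y, k):
    """Simultaneous OMP: greedily pick k atoms that jointly explain all
    measurement vectors (columns of Y), with a joint LS re-fit each round."""
    residual = Y.copy()
    support = []
    for _ in range(k):
        corr = np.abs(Phi.T @ residual).sum(axis=1)  # joint correlation score
        corr[support] = -np.inf                      # do not reselect atoms
        support.append(int(np.argmax(corr)))
        Phi_s = Phi[:, support]
        S, *_ = np.linalg.lstsq(Phi_s, Y, rcond=None)
        residual = Y - Phi_s @ S
    return support, S

rng = np.random.default_rng(1)
m, n, k, vecs = 40, 80, 3, 4                 # measurements, atoms, sparsity, MMVs
Phi = rng.normal(0, 1 / np.sqrt(m), (m, n))
rows = rng.choice(n, k, replace=False)       # shared row support across vectors
Xs = np.zeros((n, vecs))
Xs[rows] = rng.normal(size=(k, vecs))
Y = Phi @ Xs
support, S = somp(Phi, Y, k)
```

Summing correlation magnitudes over all measurement vectors is what lets SOMP exploit the shared support; TS-SOMP and TL-SOMP restrict the atom set before this step to shrink the search dimension.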
Table 1. Simulation parameters
Number of subcarriers: 1024
Number of transmit antennas: 8
CP length: 64
Pilot groups: 40
CE-BEM order: 3
Subcarrier spacing: 15 kHz
Carrier frequency: 3 GHz
Modulation: QPSK

In the simulations, the mobile speed is 350 km/h, K_d = 3, and K_a = 3. The Stanford University Interim-1 channel model is used to generate the channel parameters; the channel tap delays are [0, 0.4, 0.9] μs and the gains are [0, -15, -20] dB. The number of pilot subcarriers is P = (2Q-1)G = 200, and the pilot pattern is obtained by the random algorithm of [11]. Channel estimation performance is evaluated with the normalized mean square error (NMSE) between the true channel parameters and their estimates.

Fig. 1 shows the NMSE curves versus signal-to-noise ratio (SNR). The two proposed algorithms outperform the conventional SOMP/OMP algorithms. At NMSE = -20 dB, the TL-SOMP algorithm achieves an SNR gain of about 2 dB over conventional SOMP. This is because, once the nonzero tap positions in the delay domain have been found, the measurement matrix can be rebuilt with fewer columns, which effectively reduces the estimation error.

[Fig. 1: NMSE performance comparison of the different algorithms]

4 Conclusions

For MIMO-OFDM systems over DS channels, this paper exploits the sparsity of the delay and angle domains simultaneously, proposes a new joint sparse channel estimation model, and develops two new greedy algorithms based on it. TS-SOMP first finds all nonzero tap positions in the delay domain and then estimates the nonzero angle-domain coefficients; TL-SOMP solves for the nonzero angle-domain coefficients directly in its inner loop as soon as a nonzero tap position is found in the outer loop. Simulation results show that the proposed algorithms achieve higher estimation accuracy than the conventional SOMP/OMP algorithms.

References:
[1] Ren X, Chen W, Tao M X. Position-based compressed channel estimation and pilot design for high-mobility OFDM systems [J]. IEEE Transactions on Vehicular Technology, 2015, 64(5): 1918-1929.
[2] Aboutorab N, Hardjawana W, Vucetic B. A new iterative Doppler-assisted channel estimation joint with parallel ICI cancellation for high-mobility MIMO-OFDM systems [J]. IEEE Transactions on Vehicular Technology, 2012, 61(4): 1577-1589.
[3] Muralidhar K, Sreedhar D. Pilot design for vector state-scalar observation Kalman channel estimators in doubly-selective MIMO-OFDM systems [J]. IEEE Wireless Communications Letters, 2013, 2(2): 147-150.
[4] Zhang Y, Venkatesan R, Dobre O A, et al. Novel compressed sensing-based channel estimation algorithm and near-optimal pilot placement scheme [J]. IEEE Transactions on Wireless Communications, 2016, 15(4): 2590-2603.
[5] Qi C H, Yue G S, Wu L A, et al. Pilot design schemes for sparse channel estimation in OFDM systems [J]. IEEE Transactions on Vehicular Technology, 2015, 64(4): 1493-1505.
[6] Rao X B, Lau V K N. Distributed compressive CSIT estimation and feedback for FDD multi-user massive MIMO systems [J]. IEEE Transactions on Signal Processing, 2014, 62(12): 3261-3271.
[7] Kim S. Angle-domain frequency-selective sparse channel estimation for underwater MIMO-OFDM systems [J]. IEEE Communications Letters, 2012, 16(5): 685-687.
[8] Pan Y Q, Meng X, Gao X M. A new
sparse channel estimation for 2D MIMO-OFDM systems based on compressive sensing [C]. Proceedings of the 6th International Conference on Wireless Communications and Signal Processing, Hefei: IEEE, 2014.
[9] Hrycak T, Das S, Matz G, et al. Practical estimation of rapidly varying channels for OFDM systems [J]. IEEE Transactions on Communications, 2011, 59(11): 3040-3048.
[10] Gong B, Gui L, Qin Q B, et al. Block distributed compressive sensing-based doubly selective channel estimation and pilot design for large-scale MIMO systems [J]. IEEE Transactions on Vehicular Technology, 2017, 66(10): 9149-9161.
[11] Cheng P, Chen Z, Rui Y, et al. Channel estimation for OFDM systems over doubly selective channels: a distributed compressive sensing based approach [J]. IEEE Transactions on Communications, 2013, 61(10): 4173-4185.

2014 Annual Table of Contents



2014 Shanghai Science and Technology Award: Preliminary Evaluation Publicity

上海杰图天下网络科技有限公司, 上海杰图软件技术有限公司
Pudong New Area
No. 27

TD-LTE 4G next-generation mobile communication system
季利军, 叶建峰, 张怡, 范瑾, 张建林, 林凌峰, 张鹏杰, 蒋智宁, 张晓文, 李在清, 马雨出, 赵昆, 张成安, 解安亮, 周宝龙
上海贝尔股份有限公司
Pudong New Area
No. 28

Regional health service platform based on the O2O model
江萍, 储泰山, 黄岳嵘, 周燿军, 李超, 邹瑜, 池捷
阮为民, 朱武标, 甘靖戈, 刘玉兵, 梁世升, 乔进友, 李洁
上海三菱电梯有限公司
Minhang District
No. 20

Research and application of reliability, service-life, and safety assurance technologies for power station boilers
史进渊, 杨宇, 李立人, 吾之英, 邓志成, 窦文宇, 陈朝松, 王延峰, 汪勇, 何毅
上海发电设备成套设计研究院, 上海上发院发电成套设备工程有限公司, 中国特种设备检测研究院
Minhang District

印丽萍, 王有福, 郭成亮, 薛华杰, 傅怡宁, 陈沁, 徐瑛, 刘卉秋, 杨轶, 阮祺琳
中华人民共和国上海出入境检验检疫局, 中华人民共和国辽宁出入境检验检疫局, 中华人民共和国秦皇岛出入境检验检疫局, 中华人民共和国宁波出入境检验检疫局, 上海大学
上海出入境检验检疫局
No. 50

Development of steam generators for Generation II+ million-kilowatt nuclear power plants
徐凯祥, 唐伟宝, 张茂龙, 江才林, 许遵言, 李双燕, 吴新华, 程嘉伟, 王志强, 孙志远, 盛旭婷, 周玉山, 江燕云, 顾佳磊, 苏玉

姚见儿, 王毅, 周雪雷, 蔡勇华
上海透景生命科技有限公司
Pudong New Area
No. 35

Standardized construction and application of biobanks for major diseases
郜恒骏, 王红阳, 李锦军, 朱明华, 杜祥, 王伟业, 顾健人
上海芯超生物科技有限公司, 中国人民解放军第二军医大学东方肝胆外科医院, 上海市肿瘤研究所, 上海长海医院, 复旦大学附属肿瘤医院


DISCRIMINATIVE COMMON VECTORS FOR FACE RECOGNITION

Hakan Cevikalp(1), Marian Neamtu(2), Mitch Wilkes(1), and Atalay Barkana(3)

(1) Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, USA.
(2) Center for Constructive Approximation, Department of Mathematics, Vanderbilt University, Nashville, Tennessee, USA.
(3) Department of Electrical and Electronics Engineering, Osmangazi University, Eskisehir, Turkey.

CORRESPONDENCE ADDRESS: Prof. Mitch Wilkes, Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, USA. Tel: (615) 343-6016. Fax: (615) 322-7062. E-mail: mitch.wilkes@

Abstract

In face recognition tasks, the dimension of the sample space is typically larger than the number of samples in the training set. As a consequence, the within-class scatter matrix is singular and the Linear Discriminant Analysis (LDA) method cannot be applied directly. This problem is known as the "small sample size" problem. In this paper, we propose a new face recognition method called the Discriminative Common Vector method, based on a variation of Fisher's Linear Discriminant Analysis for the small sample size case. Two different algorithms are given to extract the discriminative common vectors representing each person in the training set of the face database. One algorithm uses the within-class scatter matrix of the samples in the training set, while the other uses the subspace methods and the Gram-Schmidt orthogonalization procedure to obtain the discriminative common vectors. Then the discriminative common vectors are used for classification of new faces. The proposed method yields an optimal solution for maximizing the modified Fisher's Linear Discriminant criterion given in the paper.
Our test results show that the Discriminative Common Vector method is superior to other methods in terms of recognition accuracy, efficiency, and numerical stability.

Index Terms: Common Vectors, Discriminative Common Vectors, Face Recognition, Fisher's Linear Discriminant Analysis, Principal Component Analysis, Small Sample Size, Subspace Methods.

I. INTRODUCTION

Recently, due to military, commercial, and law enforcement applications, there has been much interest in automatically recognizing faces in still and video images. This research spans several disciplines, such as image processing, pattern recognition, computer vision, and neural networks. The data come from a wide variety of sources. One group of sources is the relatively controlled-format images such as passports, credit cards, photo IDs, driver's licenses, and mug shots. A more challenging class of application imagery includes real-time detection and recognition of faces in surveillance video images, which present additional constraints in terms of speed and processing requirements [1].

Face recognition can be defined as the identification of individuals from images of their faces by using a stored database of faces labeled with people's identities. This task is complex and can be decomposed into the smaller steps of detection of faces in a cluttered background, localization of these faces, extraction of features from the face regions, and finally recognition and verification [2]. It is a difficult problem, as there are numerous factors such as 3-D pose, facial expression, hair style, makeup, and so on, which affect the appearance of an individual's facial features. In addition to these varying factors, lighting, background, and scale changes make this task even more challenging. Additional problematic conditions include noise, occlusion, and many other possible factors.

Many methods have been proposed for face recognition within the last two decades [1], [3].
Among these methods, appearance-based approaches operate directly on images or appearances of face objects and process the images as two-dimensional (2-D) holistic patterns. In these approaches, a two-dimensional image of size $w$ by $h$ pixels is represented by a vector in a $wh$-dimensional space. Therefore, each facial image corresponds to a point in this space. This space is called the sample space or the image space, and its dimension typically is very high [4]. However, since face images have a similar structure, the image vectors are correlated, and any image in the sample space can be represented in a lower-dimensional subspace without losing a significant amount of information. The Eigenface method has been proposed for finding such a lower-dimensional subspace [5]. The key idea behind the Eigenface method, which uses Principal Component Analysis (PCA), is to find the set of projection directions in the sample space that maximize the total scatter across all images, i.e.,

$J_{PCA}(W_{opt}) = \arg\max_W |W^T S_T W|$,

where $S_T$ is the total scatter matrix of the training set samples and $W$ is the matrix whose columns are the orthonormal projection vectors. The projection directions are also called the eigenfaces. Any face image in the sample space can be approximated by a linear combination of the significant eigenfaces. The sum of the eigenvalues that correspond to the eigenfaces not used in reconstruction gives the mean square error of reconstruction. This method is an unsupervised technique, since it does not consider the classes within the training set data. In choosing a criterion that maximizes the total scatter, this approach tends to model unwanted within-class variations such as those resulting from differences in lighting, facial expression, and other factors [6], [7]. Additionally, since the criterion does not attempt to minimize the within-class variation, the resulting classes may tend to have more overlap than with other approaches.
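The reconstruction-error property of the eigenfaces stated above is easy to verify numerically; the data below are synthetic and the sizes arbitrary (for mean-centered data, the total squared reconstruction error equals the sum of the discarded scatter-matrix eigenvalues, up to the $1/M$ normalization of a "mean" square error):

```python
import numpy as np

rng = np.random.default_rng(0)
M, d, k = 20, 10, 4                  # samples, dimension, kept eigenfaces
X = rng.normal(size=(M, d))
Xc = X - X.mean(axis=0)              # center the data

S_T = Xc.T @ Xc                      # total scatter matrix
evals, evecs = np.linalg.eigh(S_T)   # eigenvalues in ascending order
W = evecs[:, -k:]                    # top-k eigenfaces

recon = Xc @ W @ W.T                 # project onto eigenfaces and reconstruct
sq_err = np.sum((Xc - recon) ** 2)   # total squared reconstruction error
discarded = evals[:-k].sum()         # eigenvalues of the unused eigenfaces
```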
Thus, the projection vectors chosen for optimal reconstruction may obscure the existence of the separate classes.

The Linear Discriminant Analysis (LDA) method is proposed in [6] and [7]. This method overcomes the limitations of the Eigenface method by applying Fisher's Linear Discriminant criterion. This criterion tries to maximize the ratio

$J_{FLD}(W_{opt}) = \arg\max_W \frac{|W^T S_B W|}{|W^T S_W W|}$,

where $S_B$ is the between-class scatter matrix and $S_W$ is the within-class scatter matrix. Thus, by applying this method, we find the projection directions that on one hand maximize the Euclidean distance between the face images of different classes and on the other minimize the distance between the face images of the same class. This ratio is maximized when the column vectors of the projection matrix $W$ are the eigenvectors of $S_W^{-1} S_B$. In face recognition tasks, this method cannot be applied directly since the dimension of the sample space is typically larger than the number of samples in the training set. As a consequence, $S_W$ is singular in this case. This problem is also known as the "small sample size problem" [8].

In the last decade numerous methods have been proposed to solve this problem. Tian et al. [9] used the Pseudo-Inverse method, replacing $S_W^{-1}$ with its pseudo-inverse. The Perturbation method is used in [2] and [10], where a small perturbation matrix $\Delta$ is added to $S_W$ in order to make it nonsingular. Cheng et al. [11] proposed the Rank Decomposition method based on successive eigen-decompositions of the total scatter matrix $S_T$ and the between-class scatter matrix $S_B$. However, the above methods are typically computationally expensive since the scatter matrices are very large (e.g., images of size 256 by 256 yield scatter matrices of size 65,536 by 65,536).
Swets and Weng [7] proposed a two-stage PCA+LDA method, also known as the Fisherface method, in which PCA is first used for dimension reduction so as to make $S_W$ nonsingular before the application of LDA. In this method the final optimal projection matrix becomes $W_{opt} = W_{PCA} W_{FLD}$, where $W_{PCA} = \arg\max_W |W^T S_T W|$ and

$W_{FLD} = \arg\max_W \frac{|W^T W_{PCA}^T S_B W_{PCA} W|}{|W^T W_{PCA}^T S_W W_{PCA} W|}$.

However, in order to make $S_W$ nonsingular, some directions corresponding to the small eigenvalues of $S_T$ are thrown away in the PCA step. Thus, applying PCA for dimensionality reduction has the potential to remove dimensions that contain discriminative information [12]-[16]. Chen et al. [17] proposed the Null Space method based on the modified Fisher's Linear Discriminant criterion,

$J_{MFLD}(W_{opt}) = \arg\max_W \frac{|W^T S_B W|}{|W^T S_T W|}$.

This method was proposed to be used when the dimension of the sample space is larger than the rank of the within-class scatter matrix $S_W$. It has been shown in [18] that the original Fisher's Linear Discriminant criterion can be replaced by the modified Fisher's Linear Discriminant criterion in the course of solving for the discriminant vectors of the optimal set. In this method, all image samples are first projected onto the null space of $S_W$, resulting in a new within-class scatter that is a zero matrix. Then PCA is applied to the projected samples to obtain the optimal projection vectors. Chen et al. also proved that by applying this method, the modified Fisher's Linear Discriminant criterion attains its maximum. However, they did not propose an efficient algorithm for applying this method in the original sample space. Instead, a pixel grouping method is applied to extract geometric features and reduce the dimension of the sample space. Then they applied the Null Space method in this new reduced space.
In our experiments, we observed that the performance of the Null Space method depends on the dimension of the null space of $S_W$, in the sense that a larger dimension provides better performance. Thus, any kind of pre-processing that reduces the original sample space should be avoided.

Another novel method, the PCA+Null Space method, was proposed by Huang et al. in [15] for dealing with the small sample size problem. In this method, PCA is first applied to remove the null space of $S_T$, which contains the intersection of the null spaces of $S_B$ and $S_W$. Then the optimal projection vectors are found in the remaining lower-dimensional space by using the Null Space method. The difference between the Fisherface method and the PCA+Null Space method is that for the latter, the within-class scatter matrix in the reduced space is typically singular. This occurs because all eigenvectors corresponding to the nonzero eigenvalues of $S_T$ are used for dimension reduction. Yang et al. applied a variation of this method in [16]. After dimension reduction, they split the new within-class scatter matrix $\tilde{S}_W = P_{PCA}^T S_W P_{PCA}$ (where $P_{PCA}$ is the matrix whose columns are the orthonormal eigenvectors corresponding to the nonzero eigenvalues of $S_T$) into its null space $N(\tilde{S}_W) = \mathrm{span}\{\xi_{r+1}, \ldots, \xi_t\}$ and its orthogonal complement (i.e., range space) $R(\tilde{S}_W) = \mathrm{span}\{\xi_1, \ldots, \xi_r\}$ (where $r$ is the rank of $S_W$ and $t = \mathrm{rank}(S_T)$ is the dimension of the reduced space). Then all the projection vectors that maximize the between-class scatter in the null space are chosen. If, according to some criterion, more projection vectors are needed, the remaining projection vectors are obtained from the range space. Although the PCA+Null Space method and the variation proposed by Yang et al. use the original sample space, applying PCA and using all eigenvectors corresponding to the nonzero eigenvalues make these methods impractical for face recognition applications when the training set size is large.
This is due to the fact that the computational expense of training becomes very large.

Lastly, the Direct-LDA method is proposed in [12]. This method uses the simultaneous diagonalization method [8]. First, the null space of $S_B$ is removed, and then the projection vectors that minimize the within-class scatter in the transformed space are selected from the range space of $S_B$. However, removing the null space of $S_B$ by dimensionality reduction will also remove part of the null space of $S_W$ and may result in the loss of important discriminative information [13], [15], [16]. Furthermore, $S_B$ is whitened as a part of this method. This whitening process can be shown to be redundant and therefore should be skipped.

In this paper, a new method is proposed which addresses the limitations of other methods that use the null space of $S_W$ to find the optimal projection vectors. Thus, the proposed method can only be used when the dimension of the sample space is larger than the rank of $S_W$. The remainder of the paper is organized as follows. In Section II, the Discriminative Common Vector approach is introduced. In Section III, we describe the data sets and experimental results. Finally, we formulate our conclusions in Section IV.

II. DISCRIMINATIVE COMMON VECTOR APPROACH

The idea of common vectors was originally introduced for isolated word recognition problems in the case where the number of samples in each class was less than or equal to the dimensionality of the sample space [19], [20]. These approaches extract the common properties of the classes in the training set by eliminating the differences of the samples in each class. A common vector for each individual class is obtained by removing all the features that are in the direction of the eigenvectors corresponding to the nonzero eigenvalues of the scatter matrix of its own class. The common vectors are then used for recognition.
In our case, instead of using a given class's own scatter matrix, we use the within-class scatter matrix of all classes to obtain the common vectors. We also give an alternative algorithm based on the subspace methods and the Gram-Schmidt orthogonalization procedure to obtain the common vectors. Then a new set of vectors, called the discriminative common vectors, which will be used for classification, are obtained from the common vectors. We introduce algorithms for obtaining the common vectors and the discriminative common vectors below.

A. Obtaining the Discriminative Common Vectors by Using the Null Space of $S_W$

Let the training set be composed of $C$ classes, where each class contains $N$ samples, and let $x_m^i$ be a $d$-dimensional column vector which denotes the $m$-th sample from the $i$-th class. There will be a total of $M = NC$ samples in the training set. Suppose that $d > M - C$. In this case $S_W$, $S_B$, and $S_T$ are defined as

$S_W = \sum_{i=1}^{C} \sum_{m=1}^{N} (x_m^i - \mu_i)(x_m^i - \mu_i)^T$,  (1)

$S_B = \sum_{i=1}^{C} N (\mu_i - \mu)(\mu_i - \mu)^T$,  (2)

and

$S_T = \sum_{i=1}^{C} \sum_{m=1}^{N} (x_m^i - \mu)(x_m^i - \mu)^T = S_W + S_B$,  (3)

where $\mu$ is the mean of all samples and $\mu_i$ is the mean of the samples in the $i$-th class.

In the special case where $w^T S_W w = 0$ and $w^T S_B w \neq 0$ for all $w \in \mathbb{R}^d \setminus \{0\}$, the modified Fisher's Linear Discriminant criterion attains a maximum. However, a projection vector $w$ satisfying the above conditions does not necessarily maximize the between-class scatter. In this case, a better criterion is given in [6] and [13], namely

$J(W_{opt}) = \arg\max_{|W^T S_W W| = 0} |W^T S_B W| = \arg\max_{|W^T S_W W| = 0} |W^T S_T W|$.  (4)

To find the optimal projection vectors $w$ in the null space of $S_W$, we project the face samples onto the null space of $S_W$ and then obtain the projection vectors by performing PCA. To do so, vectors that span the null space of $S_W$ must first be computed. However, this task is computationally intractable since the dimension of this null space can be very large.
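As a quick numerical check before continuing, definitions (1)-(3) and the singularity of $S_W$ in the small sample size case ($d > M - C$) can be reproduced on toy data; the sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
C, N, d = 3, 4, 20                  # classes, samples per class, dimension
M = N * C                           # here d = 20 > M - C = 9
X = rng.normal(size=(C, N, d))      # X[i, m] is sample x_m^i

mu_i = X.mean(axis=1)               # class means
mu = X.reshape(M, d).mean(axis=0)   # global mean

S_W = sum((X[i, m] - mu_i[i])[:, None] @ (X[i, m] - mu_i[i])[None, :]
          for i in range(C) for m in range(N))              # eq. (1)
S_B = sum(N * (mu_i[i] - mu)[:, None] @ (mu_i[i] - mu)[None, :]
          for i in range(C))                                # eq. (2)
S_T = sum((X[i, m] - mu)[:, None] @ (X[i, m] - mu)[None, :]
          for i in range(C) for m in range(N))              # eq. (3)
```

Each class contributes at most $N - 1$ to the rank of $S_W$, so $\mathrm{rank}(S_W) = M - C < d$ and $S_W$ is singular, which is exactly the small sample size situation.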
A more efficient way to accomplish this task is by using the orthogonal complement of the null space of $S_W$, which typically is a significantly lower-dimensional space.

Let $\mathbb{R}^d$ be the original sample space, $V$ be the range space of $S_W$, and $V^\perp$ be the null space of $S_W$. Equivalently,

$V = \mathrm{span}\{\alpha_k \mid S_W \alpha_k \neq 0,\ k = 1, \ldots, r\}$  (5)

and

$V^\perp = \mathrm{span}\{\alpha_k \mid S_W \alpha_k = 0,\ k = r+1, \ldots, d\}$,  (6)

where $r < d$ is the rank of $S_W$, $\{\alpha_1, \ldots, \alpha_d\}$ is an orthonormal set, and $\{\alpha_1, \ldots, \alpha_r\}$ is the set of orthonormal eigenvectors corresponding to the nonzero eigenvalues of $S_W$.

Consider the matrices $Q = [\alpha_1 \cdots \alpha_r]$ and $\bar{Q} = [\alpha_{r+1} \cdots \alpha_d]$. Since $\mathbb{R}^d = V \oplus V^\perp$, every face image $x_m^i \in \mathbb{R}^d$ has a unique decomposition of the form

$x_m^i = y_m^i + z_m^i$,  (7)

where $y_m^i = P x_m^i = Q Q^T x_m^i \in V$, $z_m^i = \bar{P} x_m^i = \bar{Q} \bar{Q}^T x_m^i \in V^\perp$, and $P$ and $\bar{P}$ are the orthogonal projection operators onto $V$ and $V^\perp$, respectively. Our goal is to compute

$z_m^i = x_m^i - y_m^i = x_m^i - P x_m^i$.  (8)

To do this, we need to find a basis for $V$, which can be accomplished by an eigen-analysis of $S_W$. In particular, the normalized eigenvectors $\alpha_k$ corresponding to the nonzero eigenvalues of $S_W$ will be an orthonormal basis for $V$. The eigenvectors can be obtained by calculating the eigenvectors of the smaller $M$ by $M$ matrix $A^T A$, defined such that $S_W = A A^T$, where $A$ is the $d$ by $M$ matrix of the form

$A = [x_1^1 - \mu_1 \;\; \ldots \;\; x_N^1 - \mu_1 \;\; x_1^2 - \mu_2 \;\; \ldots \;\; x_N^C - \mu_C]$.  (9)

Let $\lambda_k$ and $v_k$ be the $k$-th nonzero eigenvalue and the corresponding eigenvector of $A^T A$, where $k \leq M - C$. Then $\alpha_k = A v_k$ will be the eigenvector that corresponds to the $k$-th nonzero eigenvalue of $S_W$. The sought-for projection onto $V^\perp$ is achieved by using (8). In this way, it turns out, we obtain the same unique vector for all samples of the same class,

$x_{com}^i = x_m^i - Q Q^T x_m^i = \bar{Q} \bar{Q}^T x_m^i$, $\quad m = 1, \ldots, N$, $i = 1, \ldots, C$,  (10)

i.e., the vector on the right-hand side of (10) is independent of the sample index $m$. We refer to the vectors $x_{com}^i$ as the common vectors.
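The construction just described, forming $A$ as in (9), diagonalizing the small matrix $A^T A$, and projecting with (10), can be sketched directly; the toy sizes are illustrative, and the final check confirms that every sample of a class yields the same common vector:

```python
import numpy as np

rng = np.random.default_rng(2)
C, N, d = 3, 4, 20                        # classes, samples per class, dimension
M = N * C
X = rng.normal(size=(C, N, d))            # X[i, m] is sample x_m^i
mu_i = X.mean(axis=1)

# A is d x M with columns x_m^i - mu_i, so S_W = A A^T  (eq. (9)).
A = np.hstack([(X[i] - mu_i[i]).T for i in range(C)])

# Eigen-analysis of the small M x M matrix A^T A instead of the d x d S_W.
lam, V = np.linalg.eigh(A.T @ A)
keep = lam > 1e-10 * lam.max()            # nonzero eigenvalues: rank(S_W) = M - C
Q = (A @ V[:, keep]) / np.sqrt(lam[keep]) # alpha_k = A v_k / sqrt(lambda_k)

# Common vectors x_com^i = x_m^i - Q Q^T x_m^i  (eq. (10)).
x_com = X - (X @ Q) @ Q.T
# Within each class, every sample must give the same common vector.
spread = max(np.ptp(x_com[i], axis=0).max() for i in range(C))
```

The normalization $\alpha_k = A v_k / \sqrt{\lambda_k}$ follows from $\|A v_k\|^2 = v_k^T A^T A v_k = \lambda_k$.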
The above fact is proved in the following theorem.

Theorem 1: Suppose $\bar{Q}$ is a matrix whose column vectors are the orthonormal vectors that span the null space $V^\perp$ of $S_W$. Then the projections of the samples $x_m^i$ of the class $i$ onto $V^\perp$ produce a unique common vector $x_{com}^i$ such that

$x_{com}^i = \bar{Q} \bar{Q}^T x_m^i$, $\quad m = 1, \ldots, N$, $i = 1, \ldots, C$.  (11)

Proof: By definition, a vector $\alpha \in \mathbb{R}^d$ is in $V^\perp$ if $S_W \alpha = 0$. Let $\mu_i$ be the mean vector of the $i$-th class, $G$ be the $N$ by $N$ matrix whose entries are all $N^{-1}$, and $X_i$ be the $d$ by $N$ matrix whose $m$-th column is the sample $x_m^i$. Thus, multiplying both sides of the identity $S_W \alpha = 0$ by $\alpha^T$ and writing

$S_W = \sum_{i=1}^{C} S_i$,  (12)

with

$S_i = \sum_{m=1}^{N} (x_m^i - \mu_i)(x_m^i - \mu_i)^T = (X_i - X_i G)(X_i - X_i G)^T$,  (13)

immediately leads to

$0 = \alpha^T S_W \alpha = \sum_{i=1}^{C} \alpha^T X_i (I - G)(I - G)^T X_i^T \alpha = \sum_{i=1}^{C} \| (I - G)^T X_i^T \alpha \|^2$,  (14)

where $\|\cdot\|$ denotes the Euclidean norm. Thus, (14) holds if $(I - G)^T X_i^T \alpha_k = 0$, or $X_i^T \alpha_k = G X_i^T \alpha_k$. From this relation we can see that

$(x_m^i)^T \alpha_k = \mu_i^T \alpha_k$, $\quad m = 1, \ldots, N$, $i = 1, \ldots, C$, $k = r+1, \ldots, d$.  (15)

Thus, the projection of $x_m^i$ onto $V^\perp$,

$x_{com}^i = \sum_{k=r+1}^{d} \langle x_m^i, \alpha_k \rangle \alpha_k = \sum_{k=r+1}^{d} \langle \mu_i, \alpha_k \rangle \alpha_k$,  (16)

is independent of $m$, which proves the theorem.

The theorem states that it is enough to project a single sample from each class. This will greatly reduce the computational burden of the calculations.
This computational savings has not been previously reported in the literature.

After obtaining the common vectors $x_{com}^i$, the optimal projection vectors will be those that maximize the total scatter of the common vectors,

$J(W_{opt}) = \arg\max_{|W^T S_W W| = 0} |W^T S_B W| = \arg\max_{|W^T S_W W| = 0} |W^T S_T W| = \arg\max_W |W^T S_{com} W|$,  (17)

where $W$ is a matrix whose columns are the orthonormal optimal projection vectors $w_k$, and $S_{com}$ is the scatter matrix of the common vectors,

$S_{com} = \sum_{i=1}^{C} (x_{com}^i - \mu_{com})(x_{com}^i - \mu_{com})^T$,  (18)

where $\mu_{com}$ is the mean of all common vectors, $\mu_{com} = \frac{1}{C} \sum_{i=1}^{C} x_{com}^i$.

In this case the optimal projection vectors $w_k$ can be found by an eigen-analysis of $S_{com}$. In particular, all eigenvectors corresponding to the nonzero eigenvalues of $S_{com}$ will be the optimal projection vectors. $S_{com}$ is typically a large $d$ by $d$ matrix, and thus we can use the smaller $C$ by $C$ matrix $A_{com}^T A_{com}$ to find the nonzero eigenvalues and the corresponding eigenvectors of $S_{com} = A_{com} A_{com}^T$, where $A_{com}$ is the $d$ by $C$ matrix of the form

$A_{com} = [x_{com}^1 - \mu_{com} \;\; \ldots \;\; x_{com}^C - \mu_{com}]$.  (19)

There will be $C - 1$ optimal projection vectors since the rank of $S_{com}$ is $C - 1$ if all common vectors are linearly independent. If two common vectors are identical, then the two classes represented by this vector cannot be distinguished. Since the optimal projection vectors $w_k$ belong to the null space of $S_W$, it follows that when the image samples $x_m^i$ of the $i$-th class are projected onto the linear span of the projection vectors $w_k$, the feature vector $\Omega_i = [\langle x_m^i, w_1 \rangle, \ldots, \langle x_m^i, w_{C-1} \rangle]^T$ of the projection coefficients $\langle x_m^i, w_k \rangle$ will also be independent of the sample index $m$. Thus, we have

$\Omega_i = W^T x_m^i$, $\quad m = 1, \ldots, N$, $i = 1, \ldots, C$.  (20)

We call the feature vectors $\Omega_i$ the discriminative common vectors, and they will be used for classification of face images.
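A compact end-to-end sketch of the construction through (20), followed by a Euclidean nearest-class test on a held-out sample, is given below. The synthetic data, sizes, and noise level are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(3)
C, N, d = 4, 5, 30                       # classes, samples per class, dimension
X = rng.normal(size=(C, N, d))           # X[i, m] is sample x_m^i
mu_i = X.mean(axis=1)

# Orthonormal basis Q of range(S_W) from the small matrix A^T A  (eq. (9)).
A = np.hstack([(X[i] - mu_i[i]).T for i in range(C)])
lam, V = np.linalg.eigh(A.T @ A)
keep = lam > 1e-10 * lam.max()
Q = (A @ V[:, keep]) / np.sqrt(lam[keep])

# One sample per class suffices for the common vectors  (eq. (10)).
x_com = np.stack([X[i, 0] - Q @ (Q.T @ X[i, 0]) for i in range(C)])

# Projection matrix W from the C x C matrix A_com^T A_com  (eq. (19)).
A_com = (x_com - x_com.mean(axis=0)).T
lam_c, V_c = np.linalg.eigh(A_com.T @ A_com)
keep_c = lam_c > 1e-10 * lam_c.max()                    # C - 1 directions
W = (A_com @ V_c[:, keep_c]) / np.sqrt(lam_c[keep_c])

# Discriminative common vectors (eq. (20)): identical for every m.
feats = X @ W                            # (C, N, C-1) training feature vectors
spread = max(np.ptp(feats[i], axis=0).max() for i in range(C))
Omega = x_com @ W                        # one representative per class

# Classify a noisy copy of a training sample by nearest Omega_i.
test = X[2, 1] + 0.01 * rng.normal(size=d)
pred = int(np.argmin(np.linalg.norm(Omega - test @ W, axis=1)))
```

Because the columns of $W$ lie in the null space of $S_W$, all $N$ feature vectors of a class collapse onto a single point, which is what makes one comparison per class sufficient at test time.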
The fact that $\Omega_i$ does not depend on the index $m$ in (20) guarantees 100% accuracy in the recognition of the samples in the training set. This guarantee has not been reported in connection with other methods [15], [17].

To recognize a test image $x_{test}$, the feature vector of this test image is found by

$\Omega_{test} = W^T x_{test}$,  (21)

which is then compared with the discriminative common vector $\Omega_i$ of each class using the Euclidean distance. The discriminative common vector found to be the closest to $\Omega_{test}$ is used to identify the test image.

Since $\Omega_{test}$ is only compared to a single vector for each class, the recognition is very efficient for real-time face recognition tasks. In the Eigenface, the Fisherface, and the Direct-LDA methods, the test sample feature vector $\Omega_{test}$ is typically compared to all feature vectors of samples in the training set, making these methods impractical for real-time applications with large training sets.

The above method can be summarized as follows:

Step 1: Compute the nonzero eigenvalues and corresponding eigenvectors of $S_W$ by using the matrix $A^T A$, where $S_W = A A^T$ and $A$ is given by (9). Set $Q = [\alpha_1 \cdots \alpha_r]$, where $r$ is the rank of $S_W$.

Step 2: Choose any sample from each class and project it onto the null space of $S_W$ to obtain the common vectors

$x_{com}^i = x_m^i - Q Q^T x_m^i$, $\quad m \in \{1, \ldots, N\}$, $i = 1, \ldots, C$.  (22)

Step 3: Compute the eigenvectors $w_k$ of $S_{com}$ corresponding to the nonzero eigenvalues by using the matrix $A_{com}^T A_{com}$, where $S_{com} = A_{com} A_{com}^T$ and $A_{com}$ is given in (19). There are at most $C - 1$ eigenvectors that correspond to the nonzero eigenvalues. Use these eigenvectors to form the projection matrix $W = [w_1 \cdots w_{C-1}]$, which will be used to obtain the feature vectors in (20) and (21).

B.
Obtaining the Discriminative Common Vectors by Using Difference Subspaces and the Gram–Schmidt Orthogonalization Procedure

To find an orthonormal basis for the range of $S_W$, the algorithm described above uses the eigenvectors corresponding to the nonzero eigenvalues of the $M \times M$ matrix $A^T A$, where $S_W = A A^T$. Assuming that $\mathrm{rank}(S_W) = M - C$, this approach requires on the order of $M^2 d$ floating point operations (flops) to form $A^T A$, $O(l M^3)$ flops for the iterative eigen-decomposition (where $l$ is the number of iterations required for convergence of the eigen-decomposition algorithm), and $O(M(M-C)d)$ flops to map the eigenvectors back and obtain an orthonormal basis spanning the range of $S_W$. The computations may thus become expensive and numerically unstable for large values of $M$. Since we do not need to find the eigenvalues (i.e., an explicit symmetric Schur decomposition) of $S_W$, the following algorithm can be used for finding the common vectors efficiently. It requires only $2d(M-C)^2 + d(M-C)$ flops to find an orthonormal basis for the range of $S_W$, and is based on the subspace methods and the Gram–Schmidt orthogonalization procedure.

Suppose that $d > M - C$. In this case, the subspace methods can be applied to obtain the common vectors $x_{com}^i$ for each class $i$. To do this, we choose any one of the image vectors from the $i$-th class as the subtrahend vector and then obtain the difference vectors $b_k^i$ of the so-called difference subspace of the $i$-th class [20]. Thus, assuming that the first sample of each class is taken as the subtrahend vector, we have $b_k^i = x_{k+1}^i - x_1^i$, $k = 1,\dots,N-1$.

The difference subspace $B_i$ of the $i$-th class is defined as $B_i = \mathrm{span}\{b_1^i, \dots, b_{N-1}^i\}$. These subspaces can be summed up to form the complete difference subspace

$$B = B_1 + \dots + B_C = \mathrm{span}\{b_1^1, \dots, b_{N-1}^1, \dots, b_1^C, \dots, b_{N-1}^C\}. \quad (23)$$

The number of independent difference vectors $b_k^i$ will be equal to the rank of $S_W$. For simplicity, suppose there are $M-C$ independent difference vectors.
Since, by Theorem 3, $B$ and the range space $V$ of $S_W$ are the same space, the projection matrix onto $B$ is the same as the matrix $P$ (the projection matrix onto the range space of $S_W$) defined previously in Section II-A. This matrix can be computed as

$$P = D (D^T D)^{-1} D^T, \quad (24)$$

where $D = [\,b_1^1 \dots b_{N-1}^1 \;\dots\; b_1^C \dots b_{N-1}^C\,]$ is a $d \times (M-C)$ matrix [21]. This involves finding the inverse of an $(M-C) \times (M-C)$ nonsingular, positive definite symmetric matrix $D^T D$. A computationally more efficient method of applying the projection uses an orthonormal basis for $B$. In particular, the difference vectors $b_k^i$ can be orthonormalized by using the Gram–Schmidt orthogonalization procedure to obtain orthonormal basis vectors $\beta_1, \dots, \beta_{M-C}$. The complement of $B$ is the indifference subspace $B^\perp$ such that

$$U = [\beta_1 \dots \beta_{M-C}], \quad P = U U^T, \quad (25)$$
$$\bar{U} = [\beta_{M-C+1} \dots \beta_d], \quad \bar{P} = \bar{U} \bar{U}^T, \quad (26)$$

where $P$ and $\bar{P}$ are the orthogonal projection operators onto $B$ and $B^\perp$, respectively. Thus the matrices $P$ and $\bar{P}$ are symmetric and idempotent, and satisfy $P + \bar{P} = I$. Any sample from each class can now be projected onto the indifference subspace $B^\perp$ to obtain the corresponding common vectors of the classes,

$$x_{com}^i = \bar{P} x_m^i = x_m^i - P x_m^i = \bar{U} \bar{U}^T x_m^i = x_m^i - U U^T x_m^i, \quad m = 1,\dots,N, \;\; i = 1,\dots,C. \quad (27)$$

The common vectors do not depend on the choice of the subtrahend vectors, and they are identical to the common vectors obtained by using the null space of $S_W$. This follows from Theorem 3 below, which uses the results of Lemma 1 and Theorem 2.

Theorem 2: Let $V_i^\perp$ be the null space of the scatter matrix $S_i$, and let $B_i^\perp$ be the orthogonal complement of the difference subspace $B_i$. Then $V_i^\perp = B_i^\perp$ and $V_i = B_i$.

Proof: See [20].

Lemma 1: Suppose that $S_1, \dots, S_C$ are positive semi-definite scatter matrices.
Then

$$N(S_1 + \dots + S_C) = \bigcap_{i=1}^{C} N(S_i), \quad (28)$$

where $N(\cdot)$ denotes the null space.

Proof: The null space on the left-hand side of the above identity contains elements $\alpha$ such that

$$(S_1 + \dots + S_C)\,\alpha = 0 \quad (29)$$

or

$$\alpha^T (S_1 + \dots + S_C)\,\alpha = \alpha^T S_1 \alpha + \dots + \alpha^T S_C \alpha = 0, \quad (30)$$

by the positive semi-definiteness of $S_1 + \dots + S_C$. Thus, again by the positive semi-definiteness, $\alpha \in N(S_1 + \dots + S_C)$ if and only if

$$\alpha^T S_i \alpha = 0, \quad i = 1,\dots,C, \quad (31)$$

or, equivalently, $\alpha \in \bigcap_{i=1}^{C} N(S_i)$.

Theorem 3: Let $S_1, \dots, S_C$ be positive semi-definite scatter matrices. Then

$$B = R(S_W) = R(S_1 + \dots + S_C) = R(S_1) + \dots + R(S_C) = B_1 + \dots + B_C, \quad (32)$$

where $R(\cdot)$ denotes the range.

Proof: Since it is well known that the null space and the range of a symmetric matrix are complementary spaces, using the previous Lemma 1 we have

$$R(S_1 + \dots + S_C) = \big(N(S_1 + \dots + S_C)\big)^\perp = \Big(\bigcap_{i=1}^{C} N(S_i)\Big)^\perp = \big(N(S_1)\big)^\perp + \dots + \big(N(S_C)\big)^\perp = R(S_1) + \dots + R(S_C) = B_1 + \dots + B_C, \quad (33)$$

where the last equality is a consequence of Theorem 2.

After calculating the common vectors, the optimal projection vectors can be found by performing PCA as described previously in Section II-A. The eigenvectors corresponding to the nonzero eigenvalues of $S_{com}$ will be the optimal projection vectors. However, the optimal projection vectors can also be obtained more efficiently by computing a basis of the difference subspace $B_{com}$ of the common vectors, since we are only interested in finding an orthonormal basis for the range of $S_{com}$.

The algorithm based on the Gram–Schmidt orthogonalization can be summarized as follows.

Step 1: Find the linearly independent vectors $b_k^i$ that span the difference subspace $B$ and set $B = \mathrm{span}\{b_1^1, \dots, b_{N-1}^1, \dots, b_1^C, \dots, b_{N-1}^C\}$.
There are in total $r$ linearly independent vectors, where $r$ is at most $M-C$.

Step 2: Apply the Gram–Schmidt orthogonalization procedure to obtain an orthonormal basis $\beta_1, \dots, \beta_r$ for $B$ and set $U = [\beta_1 \dots \beta_r]$.

Step 3: Choose any sample from each class and project it onto $B^\perp$ to obtain the common vectors by using (27).

Step 4: Find the difference vectors that span $B_{com}$ as

$$b_k^{com} = x_{com}^{k+1} - x_{com}^1, \quad k = 1,\dots,C-1. \quad (34)$$
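Steps 1–4 of this Gram–Schmidt variant, together with the equivalence of the two projectors in (24) and (25), can be sketched as follows. The sizes and random class data are illustrative assumptions, and `np.linalg.qr` stands in for explicit Gram–Schmidt (both produce an orthonormal basis for the column span).

```python
import numpy as np

# Sketch of the difference-subspace route to the common vectors.
rng = np.random.default_rng(3)
d, C, N = 40, 3, 5                  # illustrative sizes
X = [rng.standard_normal((d, N)) for _ in range(C)]

# Steps 1-2: difference vectors b_k^i = x_{k+1}^i - x_1^i, orthonormalized.
D = np.hstack([Xi[:, 1:] - Xi[:, [0]] for Xi in X])
U, _ = np.linalg.qr(D)              # QR plays the role of Gram-Schmidt

# Eq. (24) vs. (25): both give the same orthogonal projector onto B.
P_normal = D @ np.linalg.inv(D.T @ D) @ D.T
P_basis = U @ U.T
print(np.allclose(P_normal, P_basis))

# Step 3: common vectors via the complementary projector I - U U^T.
# The result does not depend on which sample of the class is projected.
x_com = np.stack([X[i][:, 0] - U @ (U.T @ X[i][:, 0]) for i in range(C)], axis=1)
x_alt = np.stack([X[i][:, 1] - U @ (U.T @ X[i][:, 1]) for i in range(C)], axis=1)
print(np.allclose(x_com, x_alt))

# Step 4: difference vectors of the common vectors span B_com (eq. 34).
B_com = x_com[:, 1:] - x_com[:, [0]]
print(B_com.shape)                  # (d, C-1)
```

The projector check makes the efficiency argument concrete: forming $UU^T$ avoids the $(M-C)\times(M-C)$ inverse in (24) while projecting onto exactly the same subspace.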

Optical fiber gyro noise coefficient estimation method based on adaptive Kalman filtering

LI Ang¹, LI An¹٬², QIN Fang-jun¹, HU Bai-qing¹
(1. College of Electrical Engineering, Naval University of Engineering, Wuhan 430033, China; 2. Office of Research and Development, Naval University of Engineering, Wuhan 430033, China)

Abstract: To address the shortcomings of the Allan variance method for determining the angle random walk (ARW) noise coefficient of a fiber optic gyro — such as the need to store large amounts of data, non-real-time processing, heavy computational burden, and long processing time — an online ARW coefficient estimation method based on adaptive Kalman filtering is proposed. Building on an analysis of the mathematical characteristics of the main noise sources (angle random walk, bias instability, and rate random walk), a modern state-space noise error model of the fiber optic gyro is established. Based on iterative computation of the measurement noise covariance matrix in an innovation-based adaptive Kalman filter, the ARW coefficient is estimated online and in real time, which avoids storing large amounts of historical data, markedly reduces the computational load, and shortens the gyro data processing time. Both digital simulation experiments and tests on measured fiber optic gyro data verify the feasibility and effectiveness of the proposed method.

Keywords: fiber optic gyro; angle random walk; adaptive; Kalman filtering
CLC number: TN919; Document code: A; Article ID: 1674-6236(2013)21-0067-03
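The core idea of the abstract — estimating the white (angle-random-walk) measurement noise variance online from the Kalman filter innovation sequence instead of from offline Allan-variance analysis — can be sketched on a toy model. The one-state model (constant bias plus white noise), the forgetting factor, and all numeric values are illustrative assumptions, not the paper's full noise model.

```python
import numpy as np

# Toy innovation-based adaptive Kalman filter for a gyro bias + white noise.
rng = np.random.default_rng(4)
true_bias, true_R = 0.02, 0.5**2        # bias and white-noise variance (toy units)
z = true_bias + np.sqrt(true_R) * rng.standard_normal(20000)

x, P, Q = 0.0, 1.0, 1e-12               # state (bias), covariance, tiny drift
R_hat, alpha = 1.0, 0.001               # adaptive R estimate, forgetting factor
for zk in z:
    P = P + Q
    v = zk - x                          # innovation
    # Since E[v^2] = P + R, the sample statistic v^2 - P tracks R on average;
    # an exponentially weighted average gives the online estimate.
    R_hat = (1 - alpha) * R_hat + alpha * max(v * v - P, 1e-12)
    K = P / (P + R_hat)
    x = x + K * v
    P = (1 - K) * P

print(f"bias estimate = {x:.4f}, noise sigma estimate = {np.sqrt(R_hat):.3f}")
```

The square root of the converged `R_hat` plays the role of the ARW coefficient here: it is recovered sample by sample, with no stored history beyond the running filter state.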

Adaptive multi-phase parallel surrogate-based optimization algorithm based on the Kriging model

Computer Integrated Manufacturing Systems, Vol. 27, No. 11, Nov. 2021. DOI: 10.13196/j.cims.2021.11.016

YUE Chunyu, MA Yizhong (School of Economics and Management, Nanjing University of Science and Technology, Nanjing 210094, China)

Abstract: To make full use of computing resources and reduce the number of iterations, a surrogate-based optimization algorithm that can add batch points was proposed. The expected improvement criterion and the WB2 (Watson and Barnes) criterion were used, respectively, to explore for the optimum solution and to exploit the region around the current optimum. The constraint boundary was characterized using the probability of feasibility and a multi-objective optimization framework. Two corresponding multi-point infilling algorithms were designed for the exploration and exploitation phases, and an adaptive strategy for switching between the two phases was designed according to the distance between new sample points and known sample points. The performance of the algorithm was verified on three different types of numerical benchmarks and one engineering example. The results showed that the proposed algorithm converges faster, and its solutions are more precise and robust.

Keywords: Kriging model; surrogate-based optimization; infill sampling criteria; probability of feasibility; multi-point infill

Introduction (excerpt): In modern engineering design optimization, high-fidelity simulation models such as finite element analysis and computational fluid dynamics are commonly used to obtain data. Calling such expensive models as few times as possible during optimization, to improve optimization efficiency, is therefore particularly important.
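The expected-improvement (EI) infill criterion the abstract refers to can be written down compactly: given a Kriging prediction with mean `mu` and standard deviation `s` at a candidate point, and the current best observed value `f_best` (minimization), EI is the expected amount by which the candidate improves on `f_best`. The sketch below uses the standard closed form; the numeric values are illustrative.

```python
import math

def expected_improvement(mu: float, s: float, f_best: float) -> float:
    """Closed-form EI for minimization under a Gaussian prediction."""
    if s <= 0.0:                       # no predictive uncertainty left
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / s
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (f_best - mu) * cdf + s * pdf

# A candidate predicted slightly worse than f_best but with large
# uncertainty can out-score a nearly certain, marginal improvement —
# this is exactly the exploration behavior the paper assigns to EI.
print(expected_improvement(mu=1.05, s=0.5, f_best=1.0))
print(expected_improvement(mu=0.99, s=1e-6, f_best=1.0))
```

The WB2 criterion mentioned in the abstract tempers this exploratory behavior by combining EI with the predicted value itself, which is why the paper reserves it for the exploitation phase.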

Final acceptance results of the 2012 Xi'an Jiaotong University undergraduate research training and practice innovation fund projects

| Project ID | School | Project Title | Leader | Members | Advisor(s) | Result |
| --- | --- | --- | --- | --- | --- | --- |
| | | | 陈正涛 | 王威、胡志勇、轩杨、李卓强 | 王晶 | Pass (合格) |
| 2012040 | School of Mechanical Engineering | In vitro construction of three-dimensional smooth-muscle microtissue by cell printing | 刘俊聪 | 黄熠文 | 徐峰 | Pass |
| 2012068 | School of Mechanical Engineering | Design and manufacture of an energy-saving vehicle shell | 汪洋 | 丁宝庆、丁文朝、高飞、于罗钦 | 王晶 | Pass |
| 2012037 | School of Mechanical Engineering | Tilting three-ducted-fan multifunctional VTOL aircraft | 徐廷中 | 杨亚军、金博龙、陈立功、姚团结 | 赵立波、张安峰 | |
| | | | 孙勍铉 | 曹旭、王来升、夏广辉、俞亦钊 | 王江峰 | Excellent (优秀) |
| 2012020 | School of Electronic and Information Engineering | Design of a 3+1 helicopter and implementation of its camera and other functions | 张辉 | 万日栋、石勇义、张弛、段泽能 | 杜清河 | Excellent |
| 2012005 | School of Electronic and Information Engineering | CAN-bus-based intelligent multi-motor control system | 董奭 | 杨志宇、谢磊、杨昂、吴彝丹 | 冯祖仁 | Excellent |
| 2012006 | School of Electronic and Information Engineering | Electromagnetically induced transparency optical switch and router | 封玮康 | 黄高坪、元佳敏 | | |
| | | | 苏旭 | 秦远智、张峰、邵栋 | 李景银 | Pass |
| 2012058 | School of Energy and Power Engineering | Low-temperature solid oxide fuel cells with co-ionic-conducting composite electrolytes prepared by a filter-coating process | 洪海峰 | 王峰、朱旭东 | 黄建兵 | Pass |
| 2012002 | School of Electrical Engineering | Lidar-based target recognition and tracking system | 孙力 | 李卓强、胡志勇、吴彝丹、魏潇然 | 牟轩沁 | Pass |
| 2012007 | School of Electronic and Information Engineering | Research and construction of a self-balancing vehicle | 李维启 | 徐希楠、刘峰、程思婧、王映周 | 樊亚萍 | Pass |
| 2012090 | School of Humanities and Social Sciences | | | | | |

Application of compressed sensing in non-contrast-enhanced renal artery MRA

YUAN Ying, REN Hao, HAN Xinjun, ZHONG Zhaohui, XU Hui
Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
DOI: 10.3969/j.issn.1672-0512.2023.05.020. Funding: Ministry of Science and Technology Key R&D Program (2022YFC2409403); National Natural Science Foundation of China (52227814). Corresponding author: XU Hui.

[Abstract] Objective: To explore the value of a balanced steady-state free precession based, non-contrast-enhanced respiratory-triggered acquisition technique (B-TRANCE) combined with compressed sensing (CS) in non-contrast-enhanced renal artery MRA. Methods: Thirty-four healthy volunteers were recruited for non-contrast-enhanced renal artery MRA. The scan protocols included a conventional respiratory-triggered B-TRANCE sequence (conventional group), a respiratory-triggered B-TRANCE sequence combined with CS (CS-triggered group), and a breath-hold B-TRANCE sequence combined with CS (CS breath-hold group). Two senior radiologists, blinded to the protocol, independently evaluated subjective image quality of the three groups on four aspects: depiction of renal artery branches, vessel sharpness, image artifacts, and overall image score. One radiologist measured the signal intensity (SI) and noise value (SD) of the main renal artery and the erector spinae muscle on the source images at the level of the renal arteries, and computed the renal artery SNR and CNR for objective image-quality evaluation. Results: Agreement between the two readers was good for all three groups, and the overall image score exceeded 3 in every group, meeting diagnostic requirements. The three groups differed significantly in depiction of renal artery branches, vessel sharpness, and overall image score (all P < 0.05), with no significant difference in image artifacts (P > 0.05). There was no significant difference in subjective evaluation between the conventional group and the CS-triggered group (P > 0.05).

Introduction (excerpt): Renal artery stenosis is one of the important causes of secondary hypertension and renal insufficiency, and establishing whether renal artery stenosis is present, and how severe it is, is crucial for the clinical diagnosis and treatment of hypertension [1]. Clinically, renal artery imaging mostly relies on exogenous contrast agents, including DSA, CTA, and contrast-enhanced MRA. DSA is the gold standard for diagnosing renal artery stenosis, but it is an invasive examination and unsuitable for screening. CTA is widely …
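The objective image-quality measures used in the study can be computed directly from the measured signal intensities (SI) and the noise value (SD). The definitions below (SNR = SI/SD; CNR as the vessel-to-background intensity difference over SD) are the conventional ones; the numeric values are illustrative, not figures from the study.

```python
# Conventional SNR/CNR definitions for MRA image-quality assessment.
def snr(si_vessel: float, sd_noise: float) -> float:
    """Signal-to-noise ratio of the vessel."""
    return si_vessel / sd_noise

def cnr(si_vessel: float, si_background: float, sd_noise: float) -> float:
    """Contrast-to-noise ratio of the vessel against a background tissue."""
    return (si_vessel - si_background) / sd_noise

# Illustrative measurements: renal artery vs. erector spinae muscle.
si_artery, si_muscle, sd = 820.0, 190.0, 35.0
print(snr(si_artery, sd))
print(cnr(si_artery, si_muscle, sd))
```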

SAR Fundamentals (lecture slides, September 2014) — Dr. W. Hong, MITL/IECAS

Detection — what? RCS (radar cross section); where? LOS (line of sight).
Ranging — how far? (e.g., a mountain area): R = cτ/2, where τ is the round-trip delay of the echo.
RADAR: RAdio Detection And Ranging.

Radar block diagram (slide figure): transmitter, circulator, antenna, receiver, ADC, DSP, data recorder/display; antenna beam pattern with a −3 dB mainlobe (3 dB opening angle θ_a = kλ/D) and sidelobes; backscatter from the ground surface and point targets. A slide also poses the SAR radar equation.

SAR data acquisition — imaging geometry and coordinate systems: platform, target, and ground coordinate systems; the data acquisition (slant-range) plane versus the ground-range plane; the two image axes are along-track (azimuth) and cross-track (range, as slant range or ground range).

Spaceborne SAR timeline (slide): 1996; 2000–: NASA/DLR/ASI, ESA; 2006–: JAXA, OHB, CSA, DLR/ASTRIUM, ESA, …
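The ranging relation on the slides, R = cτ/2, converts the round-trip delay τ of an echo into target range. The delay value below is illustrative.

```python
# Radar ranging: range = speed of light * round-trip delay / 2.
C_LIGHT = 299_792_458.0            # speed of light in vacuum, m/s

def target_range(tau_seconds: float) -> float:
    """Target range in meters from the echo's round-trip delay."""
    return C_LIGHT * tau_seconds / 2.0

print(target_range(66.7e-6))       # a ~66.7 us round trip puts the target near 10 km
```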

270 IEEE SIGNAL PROCESSING LETTERS, VOL. 21, NO. 3, MARCH 2014

Adaptive Compressed Sensing via Minimizing Cramer–Rao Bound

Tianyao Huang, Yimin Liu, Member, IEEE, Huadong Meng, Member, IEEE, and Xiqin Wang

Abstract—This letter considers the problem of observation strategy design for compressed sensing. An adaptive method, based on Cramer–Rao bound minimization, is proposed to design the sensing matrix. Simulation results demonstrate that the adaptively constructed sensing matrix can lead to much lower recovery errors than those of traditional Gaussian matrices and some existing adaptive approaches.

Index Terms—Adaptive sampling, compressed sensing, Cramer–Rao bound, subspace pursuit.

I. INTRODUCTION

COMPRESSED SENSING (CS) has been successfully applied in various areas, e.g., signal processing, communication, and image processing. In a noise-free case, CS methods promise to exactly reconstruct a sparse signal with just a few samples (much fewer than the length of the signal) [1]–[4], and are still stable in noisy environments. For a CS framework, the sampling matrix plays a key role. In conventional approaches, a sampling matrix is generated offline. Random matrices are usually preferred, e.g., Gaussian or Bernoulli matrices, because such matrices have the restricted isometry property (RIP) with overwhelming probability [5], which guarantees the stability of recovery algorithms. However, such a predefined random matrix may waste precious sampling resources. Intuitively, once we obtain some initial measurements and have basic information on the sparse signal, we can focus on the entries that are likely nonzero, with the hope of increasing the sensing efficiency and the signal-to-noise ratio per entry. There is no need to allocate sampling effort to zero entries. Such a sensing strategy is called adaptive CS, where the sensing matrix is designed online and depends on previous recovery results. Adaptive CS has shown potential for enhancing recovery accuracy [6]–[9]. In this letter, we propose a novel adaptive CS method, in which the sampling matrix is optimized via minimizing the Cramer–Rao bound of recovery errors. A sequential approach

Manuscript received October 28, 2013; revised December 15, 2013; accepted January 09, 2014. Date of publication January 13, 2014; date of current version January 20, 2014. This work was supported in part by the National Natural Science Foundation of China (Grants 40901157 and 61201356), and in part by the National Basic Research Program of China (973 Program, Grant 2010CB731901). The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Chandra Sekhar Seelamantula. The authors are with the Department of Electronic Engineering, Tsinghua University, Beijing 100084, China (e-mail: huangtianyao2009@gmail.com; yiminliu@tsinghua.edu.cn; menghd@tsinghua.edu.cn; wangxq_ee@tsinghua.edu.cn). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/LSP.2014.2299814

is developed, and simulation results show that our method dramatically improves on the nonadaptive counterpart, and also outperforms existing adaptive methods [6], [7].

II. COMPRESSED SENSING AND SUBSPACE PURSUIT

The goal of compressed sensing (or sparse recovery) is to reconstruct a sparse high-dimensional signal from much fewer samples. At time instant $t$, the sparse signal $\mathbf{x} \in \mathbb{R}^N$ is sampled with a sensing vector $\mathbf{a}_t$,

$$y_t = \mathbf{a}_t^T \mathbf{x} + e_t \quad (1)$$

for $t = 1,\dots,M$, where $e_t$ is an additive noise obeying a Gaussian distribution. The unknown vector $\mathbf{x}$ is said to be sparse when the number of nonzero entries $K \ll N$. Combining all the observations, (1) can be rewritten in a matrix form,

$$\mathbf{y} = \mathbf{A}\mathbf{x} + \mathbf{e}, \quad (2)$$

where $\mathbf{y} = [y_1, \dots, y_M]^T$, $\mathbf{A} = [\mathbf{a}_1, \dots, \mathbf{a}_M]^T$, $\mathbf{e} = [e_1, \dots, e_M]^T$. Assume that the elements of $\mathbf{e}$ are independent of each other. Roughly speaking, there are two kinds of popular algorithms applied in CS: optimization methods, e.g., Basis Pursuit [1] and the Dantzig Selector [2], and greedy methods, e.g., Orthogonal Matching Pursuit [3] and Subspace Pursuit (SP) [4]. In this letter, SP is adopted as the reconstruction algorithm for its advantages in both provable stability and computational efficiency. Refer to [4] for the procedures of SP.
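The measurement model (2) and an SP-style recovery can be sketched as follows. This is a compact sketch, not the full algorithm of [4]: the dimensions, sparsity level, noise level, and the fixed iteration count are illustrative assumptions.

```python
import numpy as np

# Gaussian sensing of a K-sparse signal, model (2), plus SP-style recovery.
rng = np.random.default_rng(5)
N, M, K = 256, 80, 5                           # signal length, samples, sparsity
x = np.zeros(N)
support = rng.choice(N, K, replace=False)
x[support] = (1.0 + rng.random(K)) * rng.choice([-1.0, 1.0], K)
A = rng.standard_normal((M, N)) / np.sqrt(M)   # Gaussian sensing matrix
y = A @ x + 0.01 * rng.standard_normal(M)      # noisy observations

def subspace_pursuit(A, y, K, n_iter=10):
    """Sketch of subspace pursuit: refine a size-K support estimate."""
    T = np.argsort(np.abs(A.T @ y))[-K:]       # initial support estimate
    for _ in range(n_iter):
        r = y - A[:, T] @ np.linalg.lstsq(A[:, T], y, rcond=None)[0]
        T2 = np.union1d(T, np.argsort(np.abs(A.T @ r))[-K:])   # expand
        b = np.linalg.lstsq(A[:, T2], y, rcond=None)[0]
        T = T2[np.argsort(np.abs(b))[-K:]]     # prune back to K entries
    x_hat = np.zeros(A.shape[1])
    x_hat[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0]
    return x_hat

x_hat = subspace_pursuit(A, y, K)
print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))   # small relative error
```

With $M = 80$ random Gaussian samples of a $K = 5$-sparse length-256 signal, the support is recovered reliably and the residual error is set by the noise floor, which is the regime the letter's simulations operate in.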

III. ADAPTIVE COMPRESSED SENSING

In this section, we develop an adaptive sensing strategy to sample a sparse signal. In the proposed method, $\mathbf{x}$ is reconstructed sequentially, and the sampling vector $\mathbf{a}_t$ is designed according to the previous estimate of $\mathbf{x}$. By exploiting the prior information from previous estimates, we can expect a more accurate reconstruction of the sparse signal.

A. Motivation

Our goal is to minimize the mean square error (MSE) of the recovery results, i.e.,

$$\mathrm{MSE} = E\big[\|\hat{\mathbf{x}} - \mathbf{x}\|_2^2\big], \quad (3)$$

where $\hat{\mathbf{x}}$ is the recovery result. However, since the ground truth is actually unknown during the sensing procedures, the criterion (3) is not applicable in practice. Instead, we mimic the idea of a cognitive tracking radar [10], minimizing the Cramer–Rao (lower) bound (CRB) of the MSE. The CRB
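The design criterion described above can be illustrated on a toy case. For a linear Gaussian model with a known (here: assumed) support $S$, the CRB of the MSE is $\sigma^2\,\mathrm{tr}\big((\mathbf{A}_S^T\mathbf{A}_S)^{-1}\big)$, so the next sensing vector can be chosen to make that trace as small as possible. Picking the best of a batch of random candidates, as below, is a simplification for illustration, not the letter's exact optimization.

```python
import numpy as np

# CRB-guided selection of the next sensing row, given an estimated support.
rng = np.random.default_rng(6)
N, M0, sigma2 = 64, 20, 0.01
S = np.array([3, 17, 40])               # estimated support (assumed known here)
A = rng.standard_normal((M0, N)) / np.sqrt(M0)   # rows measured so far

def crb(A_rows, S, sigma2):
    """CRB of the MSE restricted to the support S."""
    As = A_rows[:, S]
    return sigma2 * np.trace(np.linalg.inv(As.T @ As))

# Unit-energy candidate sensing vectors; keep the one minimizing the CRB.
candidates = rng.standard_normal((200, N))
candidates /= np.linalg.norm(candidates, axis=1, keepdims=True)
scores = [crb(np.vstack([A, c]), S, sigma2) for c in candidates]
best = candidates[int(np.argmin(scores))]

# Appending a row adds a positive semi-definite term to A_S^T A_S, so the
# bound never increases; the selected row also beats an arbitrary candidate.
print(crb(np.vstack([A, best]), S, sigma2) <= crb(A, S, sigma2))
print(crb(np.vstack([A, best]), S, sigma2) <= crb(np.vstack([A, candidates[0]]), S, sigma2))
```

Iterating this step — recover, re-estimate the support, design the next row, measure — gives the sequential adaptive procedure the letter develops.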

1070-9908 © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
