Combined Adaptive Filter with LMS-Based Algorithms (Chinese–English translation)
Design of an Adaptive Filter Based on the LMS Algorithm

This paper describes the factors that affect the LMS algorithm and their causes.
Finally, several well-known adaptive beamforming methods are briefly introduced and compared, the widely used LMS adaptive algorithm is improved, and simulations are carried out on the MATLAB platform.
1. Principle
2. Study of the adaptive LMS algorithm
3. Comparison of LMS algorithms and array analysis
4. Overview of adaptive filters
Development of adaptive filtering: since B. Widrow et al. proposed adaptive filtering theory in 1975, signal processing centered on adaptive filtering has become an important branch of information science. Adaptive filtering is applied in channel equalization, echo cancellation, spectral-line enhancement, noise suppression, adaptive antenna sidelobe suppression, radar …
The general form of an adaptive filter
Structures of adaptive filters
I. Infinite impulse response (IIR) filters. The transfer function of an IIR filter has both zeros and poles. Its main drawbacks are poor stability and phase characteristics that are difficult to control.
II. Finite impulse response (FIR) filters. An FIR filter is an all-zero filter: it is always stable and can realize a linear phase characteristic, so it is the most widely used structure in adaptive filtering. Implementation structures: transversal, symmetric transversal, and lattice.
Desired output (simulation result)
Actual output (simulation result)
Error statistics
As the figures show, the mean-square error between the filter's actual output and the desired response is small, and the effect of the variable step size is evident. The computer simulations show that the proposed variable-step-size LMS algorithm based on error normalization converges quickly, tracks well, and has a small steady-state error, so it has strong application potential in adaptive antenna systems. The paper also analyzes the parameters …
Design of an Adaptive Filter Based on the LMS Algorithm

Course design report for Optimal Circuit Design: design of an adaptive filter based on the LMS algorithm. 1. Abstract: after studying the basic principles of adaptive filters and the LMS algorithm, a second-order weighted adaptive transversal filter was designed and simulated on the MATLAB platform, and the simulation results were analyzed.
2. Purpose: by designing an adaptive filter and simulating it in MATLAB, deepen the understanding of the adaptive filtering principle and the LMS adaptive algorithm.
3. Design principle: an adaptive filter consists of two parts, a digital filter with adjustable parameters and an adaptive algorithm.
The input signal x(n) passes through the adjustable digital filter to produce the output y(n), which is compared with the reference signal d(n) to form the error signal e(n).
e(n) drives an adaptive algorithm that adjusts the filter parameters so that the mean-square value of e(n) is ultimately minimized.
The goal of the least-mean-square (LMS) criterion is to minimize the statistical average of the squared error between the filter output and the desired signal.
Figure 1 shows the block diagram of the LMS adaptive transversal filter.
Figure 1: block diagram of the LMS adaptive transversal filter. The input vector of the adaptive filter is

  X(n) = [x(n) x(n-1) … x(n-M+1)]^T    (1)

the weight vector is

  W(n) = [W_1(n) W_2(n) … W_M(n)]^T    (2)

and the filter output is

  y(n) = sum_{i=1}^{M} w_i(n) x(n-i+1) = W^T(n) X(n) = X^T(n) W(n)    (3)

The error of y(n) relative to the desired output d(n) is

  e(n) = d(n) - y(n) = d(n) - W^T(n) X(n)    (4)

According to the minimum mean-square-error criterion, the optimal filter parameters minimize the performance function xi(n) = E[e^2(n)], called the mean-square-error performance function.
Assuming the input x(n) and the desired response d(n) are jointly stationary, the mean-square error at time n is a quadratic function of the weight vector:

  xi(n) = E[d^2(n)] - 2 P^T W(n) + W^T(n) R_x W(n)    (5)

where E[d^2(n)] is the variance of the desired response d(n), P = E[d(n) X(n)] is the cross-correlation vector between the input vector X(n) and the desired response d(n), and R_x = E[X(n) X^T(n)] is the autocorrelation matrix of the input vector X(n).
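The LMS recursion above can be sketched in a few lines of NumPy (shown in Python rather than the MATLAB used in the report; the two-tap "unknown system" and the step size are assumptions chosen for the demo, not values from the report):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 0.05                            # step size (assumed)
w_true = np.array([0.8, -0.5])       # hypothetical unknown system to identify
w = np.zeros(2)                      # adaptive weights W(n), start at zero
x = rng.standard_normal(2000)        # white input signal x(n)

for n in range(2, len(x)):
    xn = x[n:n - 2:-1]               # input vector X(n) = [x(n), x(n-1)]
    d = w_true @ xn                  # desired response d(n), noise-free here
    e = d - w @ xn                   # error e(n) = d(n) - W^T(n) X(n)
    w = w + 2 * mu * e * xn          # LMS weight update

print(np.round(w, 3))
```

With a white, noise-free input the weights converge geometrically toward the unknown system coefficients.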
Design of an FIR Adaptive Filter Using the LMS Algorithm

Design of an FIR adaptive filter using the LMS algorithm. Adaptive filters are an important part of statistical signal processing.
In practice, there is often not enough information to design a fixed-coefficient digital filter, or the design rules change while the filter is running, so adaptive filters must be studied.
Whenever signals produced in an unknown statistical environment, or nonstationary signals, must be processed, adaptive filters offer an attractive solution, and their performance is usually far better than that of fixed filters designed by conventional methods.
In addition, adaptive filters provide new signal-processing capabilities that non-adaptive methods cannot.
Through the course Modern Signal Processing, the basic theory, algorithms, and design methods of adaptive filters were mastered.
This paper carefully reviews the least-mean-square (LMS) algorithm, then uses an improved LMS algorithm to design an FIR adaptive filter, and simulates it in MATLAB.
1. Theoretical basis of adaptive filters. (1) Basic concepts: any device capable of processing signals can be called a filter.
Filters are used extremely widely in modern telecommunication equipment and all kinds of control systems; among all electronic components, filters are the most used and among the most technically complex.
The quality of the filters directly determines the quality of the product, so filter research and production have always been valued in every country.
A filter is a device used to eliminate interference and noise, filtering the input or output to obtain a clean signal.
From the basic filter building block, the second-order universal filter transfer function, you can derive the most common filter types: low-pass, band-pass, high-pass, notch, and elliptic filters.
The parameters of the transfer function, f0, d, hHP, hBP, and hLP, can be used to construct all of these filter types.
The corner frequency f0 is the frequency at which the s term begins to dominate.
Designers treat frequencies below this value as low frequencies, frequencies above it as high frequencies, and frequencies near it as in-band frequencies.
The damping d measures how the filter transitions from low frequency to high frequency; it is an indicator of the filter's tendency to oscillate, and practical damping values range from 0 to 2.
The high-pass coefficient hHP is the numerator coefficient that dominates for frequencies above the corner frequency.
The band-pass coefficient hBP is the numerator coefficient that dominates for frequencies near the corner frequency.
The low-pass coefficient hLP is the numerator coefficient that dominates for frequencies below the corner frequency.
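Under one common normalization (an assumption here; the source text does not give the formula), these parameters combine into the universal second-order transfer function:

```latex
H(s) = \frac{h_{HP}\,s^{2} + h_{BP}\,\omega_{0} s + h_{LP}\,\omega_{0}^{2}}
            {s^{2} + d\,\omega_{0} s + \omega_{0}^{2}},
\qquad \omega_{0} = 2\pi f_{0}
```

Setting hBP = hLP = 0 leaves a high-pass response, hHP = hLP = 0 a band-pass, and hHP = hBP = 0 a low-pass; keeping hHP and hLP while setting hBP = 0 gives a notch.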
Design of an Adaptive Filter Based on the LMS Algorithm

(b. 1964), female, associate professor; research interests: spread-spectrum communication technology and applications, signal processing.
…applications; in the 1970s adaptive filters developed mainly toward low power, high precision, small size, multiple functions, and high stability; today the focus is on applying all kinds of filters in products [2]. China began to use filters widely in the 1950s, mainly for voice-channel filtering and echo cancellation. With the rapid development of microelectronics and the growing internal resources of FPGAs, a broad field has opened up for the hardware implementation of digital filters. In an ordinary adaptive filter based on the LMS algorithm, coefficient updating and filtering cannot proceed simultaneously: only after the filtering result is produced can the error signal be obtained and the coefficients updated, so the parts cannot run in parallel and system efficiency is low. The FIR adaptive filter designed in this paper improves the LMS algorithm so that the data …
(School of Information Science and Engineering, Shenyang Ligong University, Shenyang 110159, Liaoning, China)
Abstract: An FIR adaptive filter based on the LMS algorithm was designed and simulated with MATLAB. Following a top-down design flow, and using DDS techniques and a pipelined design method, it was implemented in hardware in Verilog HDL on the Quartus II development platform. Experimental results show that, compared with an ordinary adaptive filter, the design is structurally simple, easy to implement, and fast.
Simulation and Application of an Adaptive Filter Based on the LMS Algorithm

Shanxi Coking Coal Science & Technology, No. 11-12, Dec. 2016. Experimental research: Simulation and application of an adaptive filter based on the LMS algorithm. Song Haiying (Guandi Mine, Xishan Coal and Electricity Group, Taiyuan, Shanxi 030022). Abstract: Because adaptive systems can learn from and adapt to external changes, overcoming the limitations of classical system design methods, they have attracted wide attention from scholars at home and abroad.
Building on classical filter design methods and combined with an improved LMS algorithm, this paper studies a noise-cancellation system based on an adaptive FIR (finite impulse response) filter.
In the simulation and application part, an adaptive FIR filter is designed and simulated in MATLAB, and the filtering effect is verified by processing an audio signal corrupted with noise.
The study shows that, compared with traditional methods, the adaptive FIR filter has smaller passband and stopband ripple, close to the ideal characteristic.
Keywords: adaptive system; improved LMS algorithm; adaptive FIR filter; noise-cancellation system. CLC number: TD67; document code: A; article ID: 1672-0652(2016)11-12-0040-04. Signal-processing technology plays an important role in advancing many fields, and filters are an important component of signal processing.
A filter, as an abstract device that removes irrelevant signals and outputs the desired signal, is usually used to remove noise or to extract signals in specific frequency bands.
In modern filtering, an adaptive filter is a time-varying optimal filter whose coefficients are adjusted by an adaptive algorithm to achieve optimal filtering.
Adaptive filters can be implemented with IIR, FIR, or lattice structures.
Among them, FIR filters have strictly linear phase, can be implemented with the fast Fourier transform (FFT), run faster than other systems, are stable, and have small finite-precision arithmetic errors [1,2]. In building adaptive filtering algorithms, professors Widrow and Hoff, while studying adaptive theory, proposed the LMS (least mean square) algorithm, which, being easy to implement, quickly found wide application and became the standard algorithm of adaptive filtering.
Principle of the LMS Adaptive Filter

Principle of the LMS adaptive filter. Introduction: the LMS (least mean square) adaptive filter is a widely used digital signal-processing technique, applied in adaptive filtering, signal denoising, communication systems, control systems, and other fields.
This article introduces the principle of the LMS adaptive filter and its applications.
1. Overview. The LMS adaptive filter is an adaptive filter based on the least-mean-square error criterion.
Its basic principle is to adjust the filter weights continuously so that the output signal approaches the desired output as closely as possible, achieving the filtering goal.
The LMS algorithm is iterative: by repeatedly updating the filter weights, it gradually approaches the optimal solution.
2. How the LMS adaptive filter works. (1) Filtering and error formation: the input signal passes through the filter to produce an output, the inner product of the input vector and the weight vector; comparing this output with the desired output yields the error signal.
(2) Weight update: the LMS algorithm repeatedly updates the filter weights so that the output approaches the desired output step by step.
The weights are updated in proportion to the product of the error signal and the input signal, scaled by an adaptation factor (the step size).
The choice of this adaptation factor strongly affects the convergence speed and stability of the algorithm.
(3) Convergence criterion: convergence is judged from the filter's average error; when the average error falls below a given threshold, the filter is considered to have reached steady state.
3. Applications of the LMS adaptive filter. LMS adaptive filters are widely used in signal denoising, communication systems, and control systems.
(1) Signal denoising: by continuously adjusting the filter weights, an LMS adaptive filter can remove noise from the input signal.
It has important applications in speech processing, image processing, and related fields.
(2) Communication systems: LMS adaptive filters can be used for equalization in communication systems.
In a communication channel, noise and distortion during transmission cause the signal to be distorted and attenuated.
By adjusting the filter weights appropriately, an LMS adaptive filter can equalize the signal and improve the performance of the communication system.
(3) Control systems: LMS adaptive filters are commonly used for system identification and adaptive control.
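As an illustration of the denoising use case above, here is a minimal adaptive noise canceller in Python/NumPy. The noise path, filter length, and step size are assumptions made for the demo: the primary input is signal plus correlated noise, a second sensor picks up the reference noise, and the adaptive filter's error output converges to the clean signal.

```python
import numpy as np

rng = np.random.default_rng(1)
n = np.arange(4000)
s = np.sin(2 * np.pi * 0.01 * n)            # clean signal (assumed)
v = rng.standard_normal(len(n))             # reference noise from a second sensor
noise = 0.9 * v + 0.4 * np.roll(v, 1)       # hypothetical FIR noise path
d = s + noise                               # primary input: signal + correlated noise

M, mu = 4, 0.01                             # filter length and step size (assumed)
w = np.zeros(M)
e = np.zeros(len(n))
for k in range(M, len(n)):
    xk = v[k:k - M:-1]                      # reference-noise tap vector
    y = w @ xk                              # estimate of the noise component
    e[k] = d[k] - y                         # error = estimate of the clean signal
    w += 2 * mu * e[k] * xk                 # LMS weight update

# after convergence the residual e tracks the clean signal s
err = np.mean((e[-1000:] - s[-1000:]) ** 2)
print(round(err, 4))
```

The residual mean-square error stays small because the reference noise is correlated with the noise in the primary input but uncorrelated with the signal.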
LMS Chinese–English Glossary
LMS 中英文对照表位置原文简体中文繁体中文菜单File 文件New 新建Open 打开Reopen 重新打开Save 保存SaveAs 另存为Revert 返回Load QuickSet File 载入快速设置文件Save QuickSet File夹保存快速设置文件Load GraphSetup File 载入图表设置文件Save GraphSetup File 保存图表设置文件Print 打印Editor 编辑器Preferences 参数选择Exit 退出Graph 图表Parameters 参数Curve Library 曲线库Notes & Comments 附注&注释Analyzer 分析仪Sweep Start/Stop 扫频开始/停止Osc On/Off Osc 开/关RLC meter RLC 表Microphone Setup 麦克风设置PAC Interface PAC接口Micro Run 宏运行Calibration 剪贴板Processing 处理Unary Math Operations 一元数学操作Binary Math Operations 二元数学操作Minimum Phase Transform 最小相位转换Delay Phase Transform 延迟相位转换Group Delay Transform 组延迟转换Inv Fast Fourier Transform 快速反傅立叶转换Fast Fourier Transform 快速傅立叶转换Speaker Parameters 喇叭参数Tail Correction 尾部修正Data Transfer 数据合并Data Splice 数据接合Data Realign 数据重组Curve Averaging 数据提升Curve Compare 数据对照Curve Integration 数据综合Utilities 实用程序Import Curve Data File 导入曲线数据文件Export Curve Data File 导出曲线数据文件Export Graphics to File 导出图表到文件Export Graphics to Clipboard 导出图表到剪贴板Curve Capture 曲线捕捉Curve Editor 曲线编辑器Macro Editor 宏编辑器MDF Editor MDF 编辑器Polar Convertor 极性转换器View Clipboard 查看剪贴板Transducer Model Derivation 传感器模式导出Scale 刻度Parameters 参数Auto 自动Up 向上Down 向下View 查看Zoom In 放大Zoom Out 缩小Zoom 缩放Redraw 刷新ToolBars 工具栏Show All 全部显示Hide All 全部隐藏ToolBox 工具框Help 帮助Contents 内容Index 索引Glossary 术语表About Modules 关于模块About Program 关于软件对话框(Graph Parameters)Frame 边框Background 背景Note Underline 注释下划线Large Frame Line 大边框线Small Frame Line 小边框线Grid 格子Border Line 边界线Major Div 主格Minor Div 次格Font 字体Title Block 工程明细表Map Legend 映射图例Note List 注释列表Graph Title 图表标题Scale Vertical 垂直刻度Scale Horizontal 水平刻度Typeface 字体Style 字形Size 大小Color 颜色(Curve Library)Data Curve 数据曲线Same Line Type 统一线型Left(Magnitude) 左坐标(量级) Right Lighter 右坐标浅色Right(Phase) 右坐标(相位)Info 信息Horz Data Range 频宽范围Left Vert 左坐标Right Vert 右坐标Points 点Style 类型Width 宽度Color 颜色(Note&Comments)Left Page 左页面Right Page 右页面Title Block Data 工程图明细表Person 个人Company 公司Project 工程Automatic Curve Info Notes 自动曲线信息标注(Analyzer Parameters)Oscillator 振荡器Output Level 输出水平Frequency 频率Mode 扫频模式Hi 
Speed Data 高速数据Precision Data 精密数据Gating 门控Off 关On 开Meter 仪表Data 数据Value 值Source 来源Freq 频率Sweep 扫频Lo Freq 低频Hi Freq 高频Direction 方向Control 控制Pulse 脉冲Gate Time Calculator 门控计时器Meter Filter 滤波器Filter Function 滤波函数Track Ratio 音轨率Gate Timing 门控时间(RLC Meter)Measurement 测量Resistance 电阻Inductance 电感Capacitance 电容Impedance 阻抗Limit Testing 测试限制Enable 启用MinValue 最小值Max Value 最大值(Microphone Setup)Mic Input 麦克风输入Line Input 信号输入Model 型号Acoustic Ref 声学参数Electric Ref 电学参数Serial 序列号Author 制造商Date 日期Load 载入(PAC Interface)Serial Port 串行端口Linking 正在链接Start Link 开始链接Automatic Link 自动链接Baud Rate 波特率System Power 系统供电Power Source 供电电源Battery Status 电池状态Charge Amps 功放充电System Status 系统状态Link Status 链接状态Port 端口Voltage 电压Battery V oltage 电池电压External V oltage 外部电压(Analyzer Calibration)Internal 内部External 外部Parameter Under Test 测试参数Results 结果(Unary Math Operations)Library Curve 曲线库Operation 操作Magnitude Offset 幅度补偿Phase Offset 相位偏移Delay Offset 延迟偏移Exponentiation 求幂Smooth Curve 平滑曲线Frequency Translation 频率转换Multiply by 乘Divide 除Real 正弦Imag(sin) 余弦Execute 执行(Binary Math Operations)Mul 乘Div 除Add 加Sub 减Operand 操作数(Minimum Phase Transform)Asymptotic Slope at Hi Freq Limit 频率上限斜率Asymptotic Slope at Lo Freq Limit 频率下限斜率Automatic Tail Correction/Mirroring for Impedance Curves 自动尾部修正/阻抗曲线镜像(Delay Phase Transform)Source Curve with Delay Data 来源延迟数据曲线Result Curve for Phase Data 结果相位数据曲线(Group Delay Transform)Source Curve with Phase Data 来源相位曲线Result Curve for Group Delay 结果组延迟曲线(Inverse Fast Fourier Transform)Linear Frequency Points 线性频率点Frequency Domain Data 频率范围数据Result Impulse Curve 结果推动曲线Time Domain Data 时间范围数据Result Step Curve 结果步骤曲线(Speaker Parameters)Method 方法Single Curve 单曲线Double Curve 双曲线Reference Curve 参考曲线Standard 标准Estimate 估算Optimize 优化Simulate 模拟Model Simulation 模拟模型Copy Binary to Clipboard 复制二元数据到剪贴板Copy Text to Clipboard 复制文本到剪贴板(Tail Correction)Slope Hi 高频Slope Lo 低频(Data Splice)Source Curve for Higher Data 来源高数据曲线Source Curve for Lower Data 来源低数据曲线Horz Splice Transition 水平接合转换(Data 
Realign)Linear 线性Log 对数Horz Lo Limit 水平下限Horz Hi Limit 水平上限Interpolation 插补Quadratic 平方Cubic 立方(Curve Averaging)Scalar 标量Vector 矢量(Curve Compare)Test Parameters 测试参数Absolute 绝对Relative 相对Tolerance 公差Relative Flatness 相对平面(Curve Integration)Average 平均Integrate 叠加(Import Curve Data)Left Vert Data 左坐标数据Right Ver Data 右坐标数据Polar Freq 极性频率Units 单位Prefix 前缀Curve Entry 曲线条目Special Processing 特殊处理Skip First Data Column 跳过首个数据列(Export Graphics)Artwork 作品Graph Name 图表名称Format 格式Raster 光栅Image Pixel Width 图像宽度(像素) Image Pixel Height 图像高度(像素) Image Bits per Pixels 图像位每像素Image Bytes 图像字节数(Clipboard Graphics Transfer)Enhanced Metafile 增强元文件Bitmap 位图(Curve Capture)Polar Freq 极性频率Graph Image 图表图像Reference Point Data 参考点数据Upper Right 右上Scan Direction 扫描方向Top to Bottom 顶到底Bottom to Top 底到顶Color Match 颜色匹配(Curve Editor)Control 控制Node 节点Insert 插入Snap 快照Guidelines 参考线Ruler Grid 标尺格Smooth 平滑(Transducer Model Derivation)Measurements 度量Profile 配置文件(Scale Parameters)Axis 轴Range 范围Divisions 分格Current 当前Volume 音量Acceleration 加速度Velocity 速率Excursion 偏移。
The Frequency-Domain LMS Algorithm
The frequency-domain least-mean-square (LMS) algorithm is an adaptive filtering algorithm widely used in signal processing.
Its main idea is to adjust the filter coefficients continuously so that the filter output approaches the desired output as closely as possible.
The frequency-domain LMS algorithm operates in the frequency domain and can process frequency-domain information directly, which gives it certain advantages.
At its core, the frequency-domain LMS algorithm adjusts the filter coefficients by minimizing the mean-square error.
Given a desired output sequence d(n) and an input sequence x(n), the goal is to find a coefficient vector W(n) that minimizes the error e(n) between the filter output y(n) and the desired output d(n).
By continually adjusting the filter coefficients so that e(n) is minimized, the signal is filtered.
The steps of the frequency-domain LMS algorithm are as follows. 1. Take the Fourier transform of the input sequence x(n) and the desired output sequence d(n), giving the frequency-domain input X(k) and desired output D(k).
2. Initialize the coefficient vector W(k) to the zero vector.
3. For each input block, compute the predicted output y(n) as the inverse Fourier transform of the filter's frequency-domain output.
4. Form the frequency-domain error E(k) = D(k) - Y(k), where Y(k) = W(k) X(k) is the Fourier transform of the filter output.
5. Update the coefficients: W(k) = W(k-1) + mu X*(k) E(k) / (|X(k)|^2 + lambda), where mu is the step size, lambda is a small regularization constant, and X*(k) is the complex conjugate of X(k).
6. Repeat steps 3-5 until the convergence condition is met.
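The steps above can be sketched as a per-bin normalized frequency-domain update in Python/NumPy. This is a simplified illustration, not the full overlap-save implementation: the "unknown" response is applied per bin so that D(k) = H(k) X(k), and the block length, step size, and regularization constant are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(2)
B = 64                                     # block (FFT) length (assumed)
# hypothetical unknown impulse response, zero-padded to the block length
H = np.fft.fft(np.r_[0.5, -0.3, 0.2, np.zeros(B - 3)])
W = np.zeros(B, dtype=complex)             # adaptive weights, one per frequency bin
mu, lam = 0.5, 1e-6                        # step size and regularization (assumed)

for _ in range(200):                       # one update per signal block
    x = rng.standard_normal(B)
    X = np.fft.fft(x)                      # step 1: transform the input block
    D = H * X                              # desired output in the frequency domain
    E = D - W * X                          # step 4: per-bin error E(k) = D(k) - Y(k)
    W = W + mu * np.conj(X) * E / (np.abs(X) ** 2 + lam)  # step 5: normalized update

err = np.max(np.abs(W - H))
```

Because the normalized step contracts the per-bin weight error by roughly the factor (1 - mu) each block, W converges to H geometrically.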
An advantage of the frequency-domain LMS algorithm is that it processes frequency-domain information and handles frequency-domain signals well.
It also converges quickly and has relatively low computational complexity, making it suitable for real-time signal processing.
However, the frequency-domain LMS algorithm also has drawbacks.
First, it requires forward and inverse Fourier transforms, which add computational overhead.
Second, it places high demands on the signal's steady-state behavior and handles nonstationary signals poorly.
In addition, it makes assumptions about the noise statistics and performs poorly on noise that is not white and Gaussian.
In short, the frequency-domain LMS algorithm is an adaptive filtering algorithm that processes signals in the frequency domain.
Combined Adaptive Filter with LMS-Based Algorithms (Chinese–English translation)
Combined Adaptive Filter with LMS-Based Algorithms

Abstract: A combined adaptive filter is proposed. It consists of parallel LMS-based adaptive FIR filters and an algorithm for choosing the better among them. As a criterion for comparison of the considered algorithms in the proposed filter, we take the ratio between bias and variance of the weighting coefficients. Simulation results confirm the advantages of the proposed adaptive filter.

Keywords: adaptive filter, LMS algorithm, combined algorithm, bias and variance trade-off

1. Introduction

Adaptive filters have been applied in signal processing and control, as well as in many practical problems [1, 2]. The performance of an adaptive filter depends mainly on the algorithm used for updating the filter weighting coefficients. The most commonly used adaptive systems are those based on the least mean square (LMS) adaptive algorithm and its modifications (LMS-based algorithms). The LMS is simple to implement and robust in a number of applications [1–3]. However, since it does not always converge in an acceptable manner, there have been many attempts to improve its performance by appropriate modifications: the sign algorithm (SA) [8], geometric-mean LMS (GLMS) [5], and variable-step-size LMS (VS LMS) [6, 7]. Each of the LMS-based algorithms has at least one parameter that should be defined prior to the adaptation procedure (the step for LMS and SA; the step and smoothing coefficients for GLMS; various parameters affecting the step for VS LMS). These parameters crucially influence the filter output during two adaptation phases: transient and steady state. The choice of these parameters is mostly based on some kind of trade-off between the quality of algorithm performance in the mentioned adaptation phases. We propose a possible approach for improving the performance of LMS-based adaptive filters.
Namely, we make a combination of several LMS-based FIR filters with different parameters, and provide the criterion for choosing the most suitable algorithm in different adaptation phases. This method may be applied to all LMS-based algorithms, although we here consider only several of them. The paper is organized as follows. An overview of the considered LMS-based algorithms is given in Section 2. Section 3 proposes the criterion for evaluation and combination of adaptive algorithms. Simulation results are presented in Section 4.

2. LMS-based algorithms

Let us define the input signal vector X_k = [x(k) x(k-1) … x(k-N+1)]^T and the vector of weighting coefficients W_k = [W_0(k) W_1(k) … W_{N-1}(k)]^T. The weighting-coefficient vector is calculated according to

  W_{k+1} = W_k + 2u E{e_k X_k}    (1)

where u is the algorithm step, E{·} is the estimate of the expected value, e_k = d_k - W_k^T X_k is the error at instant k, and d_k is a reference signal. Depending on the estimation of the expected value in (1), one obtains the various adaptive algorithms: the LMS, with E{e_k X_k} = e_k X_k; the GLMS, with E{e_k X_k} = sum_{i=0}^{k} a^{k-i} e_i X_i, 0 < a <= 1; and the SA, with E{e_k X_k} = X_k sign(e_k) [1, 2, 5, 8]. The VS LMS has the same form as the LMS, but the step u(k) is changed during adaptation [6, 7]. The considered adaptive filtering problem consists in adjusting the set of weighting coefficients so that the system output y_k = W_k^T X_k tracks a reference signal, assumed as d_k = W_k*^T X_k + n_k, where n_k is zero-mean Gaussian noise with variance s_n^2, and W_k* is the optimal weight vector (the Wiener vector). Two cases will be considered: W_k* = W is constant (stationary case), and W_k* is time-varying (nonstationary case). In the nonstationary case the unknown system parameters (i.e., the optimal vector W_k*) are time-variant.
It is often assumed that the variation of W_k* may be modeled as W_{k+1}* = W_k* + Z_k, where Z_k is a zero-mean random perturbation, independent of X_k and n_k, with autocorrelation matrix G = E[Z_k Z_k^T] = s_Z^2 I. Note that the analysis for the stationary case follows directly by setting s_Z^2 = 0. The weighting-coefficient vector converges to the Wiener one if the condition from [1, 2] is satisfied. Define the weighting-coefficient misalignment [1–3] as V_k = W_k - W_k*. It is due to both the effects of gradient noise (variations of the weighting coefficients around their average value) and the weighting-vector lag (the difference between the average and the optimal value) [3]. It can be expressed as

  V_k = (W_k - E{W_k}) + (E{W_k} - W_k*)    (2)

According to (2), the i-th element of V_k is

  V_i(k) = (W_i(k) - E{W_i(k)}) + (E{W_i(k)} - W_i*(k)) = rho_i(k) + bias(W_i(k))    (3)

where bias(W_i(k)) is the weighting-coefficient bias and rho_i(k) is a zero-mean random variable with variance s^2. The variance depends on the type of LMS-based algorithm, as well as on the external noise variance s_n^2. Thus, if the noise variance is constant or slowly varying, s^2 is time-invariant for a particular LMS-based algorithm. In that sense, in the analysis that follows we will assume that s^2 depends only on the algorithm type, i.e., on its parameters. An important performance measure for an adaptive filter is the mean-square deviation (MSD) of the weighting coefficients, given by [3]: MSD = lim_{k->inf} E[V_k^T V_k].

3. Combined adaptive filter

The basic idea of the combined adaptive filter lies in the parallel implementation of two or more adaptive LMS-based algorithms, with the choice of the best among them in each iteration [9]. The choice of the most appropriate algorithm, in each iteration, reduces to the choice of the best value for the weighting coefficients.
The best weighting coefficient is the one that, at a given instant, is closest to the corresponding value of the Wiener vector. Let W_i(k, q) be the i-th weighting coefficient of an LMS-based algorithm with chosen parameter q at instant k. Note that one may now treat all the algorithms in a unified way (LMS: q = u; GLMS: q = a; SA: q = u). The behavior of an LMS-based algorithm depends crucially on q. In each iteration there is an optimal value q_opt, producing the best performance of the adaptive algorithm. Analyze now a combined adaptive filter with several LMS-based algorithms of the same type but with different parameters q. The weighting coefficients are random variables distributed around W_i*(k), with bias(W_i(k, q)) and variance s_q^2, related by [4, 9]:

  |W_i(k, q) - W_i*(k) - bias(W_i(k, q))| <= kappa s_q    (4)

where (4) holds with probability P(kappa), dependent on kappa. For example, for kappa = 2 and a Gaussian distribution, P(kappa) = 0.95 (the two-sigma rule). Define the confidence intervals for W_i(k, q) [4, 9]:

  D_i(k) = [W_i(k, q) - 2 kappa s_q, W_i(k, q) + 2 kappa s_q]    (5)

Then, from (4) and (5), we conclude that, as long as bias(W_i(k, q)) < kappa s_q, we have W_i*(k) in D_i(k), independently of q. This means that, for small bias, the confidence intervals for different q's of the same LMS-based algorithm intersect. When, on the other hand, the bias becomes large, the central positions of the intervals for different q's are far apart, and they do not intersect. Since we do not have a priori information about bias(W_i(k, q)), we will use a specific statistical approach to obtain the criterion for the choice of the adaptive algorithm, i.e., for the values of q.
The criterion follows from the trade-off condition that bias and variance are of the same order of magnitude, i.e., bias(W_i(k, q)) ≅ kappa s_q [4]. The proposed combined algorithm (CA) can now be summarized in the following steps:

Step 1. Calculate W_i(k, q) for the algorithms with different q's from the predefined set Q = {q_1, q_2, …}.

Step 2. Estimate the variance s_q^2 for each considered algorithm.

Step 3. Check whether the intervals D_i(k) intersect for the considered algorithms. Start from the algorithm with the largest variance and go toward the ones with smaller variances. According to (4), (5), and the trade-off criterion, this check reduces to checking whether

  |W_i(k, q_m) - W_i(k, q_l)| < 2 kappa (s_qm + s_ql)    (6)

is satisfied, where q_m, q_l in Q and the following relation holds: for all h, s_qm^2 > s_qh^2 > s_ql^2 implies q_h not in Q. If no D_i(k) intersect (large bias), choose the algorithm with the largest variance. If the D_i(k) intersect, the bias is already small; so check a new pair of weighting coefficients or, if that was the last pair, choose the algorithm with the smallest variance. The first two intervals that do not intersect mean that the proposed trade-off criterion is achieved; then choose the algorithm with the larger variance.

Step 4. Go to the next instant of time.

The smallest number of elements of the set Q is L = 2. In that case, one of the q's should provide good tracking of rapid variations (the largest variance), while the other should provide small variance in the steady state. Observe that by adding a few more q's between these two extremes, one may slightly improve the transient behavior of the algorithm. Note that the only unknown values in (6) are the variances. In our simulations we estimate s_q^2 as in [4]:

  s_q^2 = (median(|W_i(k) - W_i(k-1)|) / 0.675)^2,  k = 1, 2, …, L,  with s_Z^2 << s_q^2    (7)

The alternative way is to estimate s_n^2 as

  s_n^2 ≈ (1/T) sum_{i=1}^{T} e_i^2, for x(i) = 0.    (8)
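The median-based estimate (7) is a robust spread estimate (0.675 is approximately the median of |N(0,1)|). A small sketch in Python/NumPy, under the assumption (made here for illustration) that successive weight estimates fluctuate independently around a constant optimum, so the differences W_i(k) - W_i(k-1) have twice the fluctuation variance:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 0.2                                        # true fluctuation std (assumed)
w_hist = 1.0 + sigma * rng.standard_normal(5000)   # W_i(k): optimum + gradient noise
diffs = np.abs(np.diff(w_hist))                    # |W_i(k) - W_i(k-1)|
sigma_q2 = (np.median(diffs) / 0.675) ** 2         # estimate (7)
# for independent fluctuations, Var(W_i(k) - W_i(k-1)) = 2 * sigma**2
print(round(sigma_q2, 3))
```

Because it uses the median of absolute differences rather than a sample variance, the estimate is insensitive to occasional outliers in the weight trajectory.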
Expressions relating s_n^2 and s_q^2 in the steady state, for different types of LMS-based algorithms, are known from the literature. For the standard LMS algorithm in the steady state, they are related by s_q^2 = q s_n^2 [3]. Note that any other estimate of s_q^2 is valid for the proposed filter. The complexity of the CA depends on the constituent algorithms (Step 1) and on the decision algorithm (Step 3). Calculating the weighting coefficients for the parallel algorithms does not increase the calculation time, since it is performed by a parallel hardware realization, thus increasing the hardware requirements. The variance estimations (Step 2) contribute negligibly to the increase in algorithm complexity, because they are performed at the very beginning of adaptation and use separate hardware realizations. A simple analysis shows that the CA increases the number of operations by, at most, N(L-1) additions and N(L-1) IF decisions, and needs some additional hardware with respect to the constituent algorithms.

4. Illustration of the combined adaptive filter

Consider system identification by the combination of two LMS algorithms with different steps. Here the parameter q is u, i.e., Q = {q_1, q_2} = {u, u/10}. The unknown system has four time-invariant coefficients, and the FIR filters have N = 4. We give the average mean-square deviation (AMSD) for both individual algorithms, as well as for their combination, in Fig. 1(a). Results are obtained by averaging over 100 independent runs (the Monte Carlo method), with u = 0.1. The reference d_k is corrupted by zero-mean uncorrelated Gaussian noise with s_n^2 = 0.01 and SNR = 15 dB, and kappa = 1.75. In the first 30 iterations the variance was estimated according to (7), and the CA picked the weighting coefficients calculated by the LMS with step u. As presented in Fig. 1(a), the CA first uses the LMS with u and then, in the steady state, the LMS with u/10.
Note the region between the 200th and 400th iterations, where the algorithm can take the LMS with either step size in different realizations. Here, the performance of the CA could be improved by increasing the number of parallel LMS algorithms with steps between these two extremes. Observe also that, in the steady state, the CA does not ideally pick the LMS with the smaller step; the reason is the statistical nature of the approach. The combined adaptive filter achieves even better performance if the individual algorithms, instead of starting an iteration with the coefficient values from their own previous iteration, take the ones chosen by the CA. Namely, if the CA chooses the weighting-coefficient vector W_P in the k-th iteration, then each individual algorithm calculates its weighting coefficients in the (k+1)-th iteration according to

  W_{k+1} = W_P + 2u E{e_k X_k}    (9)

Fig. 1. Average MSD for the considered algorithms. Fig. 2. Average MSD for the considered algorithms.

Fig. 1(b) shows this improvement, applied to the previous example. To compare the obtained results clearly, we calculated the AMSD for each simulation. For the first LMS (u) it was AMSD = 0.02865; for the second LMS (u/10), AMSD = 0.20723; for the CA (CoLMS), AMSD = 0.02720; and for the CA with modification (9), AMSD = 0.02371.

5. Simulation results

The proposed combined adaptive filter with various types of LMS-based algorithms is implemented for stationary and nonstationary cases in a system-identification setup. The performance of the combined filter is compared with that of the individual algorithms composing the particular combination. In all simulations presented here, the reference d_k is corrupted by zero-mean uncorrelated Gaussian noise with s_n^2 = 0.1 and SNR = 15 dB. Results are obtained by averaging over 100 independent runs, with N = 4, as in the previous section.

(a) Time-varying optimal weighting vector: The proposed idea may be applied to SA algorithms in a nonstationary case.
In the simulation, the combined filter is composed of three SA adaptive filters with different steps, i.e., Q = {u, u/2, u/8}, u = 0.2. The optimal vector is generated according to the presented model with s_Z^2 = 0.001 and kappa = 2. In the first 30 iterations the variance was estimated according to (7), and the CA takes the coefficients of the SA with step u (SA1). Figure 2(a) shows the AMSD characteristics for each algorithm. In the steady state the CA does not ideally follow SA3 (u/8), because of the nonstationary nature of the problem and the relatively small difference between the coefficient variances of SA2 and SA3. However, this does not affect the overall performance of the proposed algorithm. The AMSD for each considered algorithm was: AMSD = 0.4129 (SA1, u), AMSD = 0.4257 (SA2, u/2), AMSD = 1.6011 (SA3, u/8), and AMSD = 0.2696 (Comb).

(b) Comparison with the VS LMS algorithm [6]: In this simulation we take the improved CA (9) from Section 3.1 and compare its performance with the VS LMS algorithm [6] in the case of abrupt changes of the optimal vector. Since the considered VS LMS algorithm [6] updates its step size for each weighting coefficient individually, the comparison of these two algorithms is meaningful. All parameters of the improved CA are the same as in Section 3.1. For the VS LMS algorithm [6], the relevant parameter values are the sign-change counter m0 = 11 and the sign-continuity counter m1 = 7. Figure 2(b) shows the AMSD for the compared algorithms, where one can observe the favorable properties of the CA, especially after the abrupt changes. Note that abrupt changes are generated by multiplying all the system coefficients by -1 at the 2000th iteration (Fig. 2(b)). The AMSD for the VS LMS was 0.0425, while for the CA (CoLMS) it was 0.0323. For a complete comparison of these algorithms we now consider their calculation complexity, expressed as the respective increase in the number of operations with respect to the LMS algorithm.
The CA increases the number of required operations by N additions and N IF decisions. For the VS LMS algorithm, the respective increase is 3N multiplications, N additions, and at least 2N IF decisions. These values show the advantage of the CA with respect to calculation complexity.

6. Conclusion

A combination of LMS-based algorithms is proposed, resulting in an adaptive system that takes the favorable properties of these algorithms in tracking parameter variations. In the course of the adaptation procedure it chooses the better algorithms, all the way to the steady state, where it takes the algorithm with the smallest variance of the weighting-coefficient deviations from the optimal value.

Acknowledgement. This work is supported by the Volkswagen Stiftung, Federal Republic of Germany.
Adaptive Filtering Based on the LMS and RLS Algorithms

Adaptive Filtering Based on the LMS and RLS Algorithms. Authors: Xu Yan, Li Jing. Journal: Electronic Design Engineering, 2012, 20(12). Abstract: The theory and technology of adaptive signal processing have become popular in filtering and noise cancellation. This article mainly discusses the theory of adaptive filtering and the steps of the LMS- and RLS-based algorithms; simulations in MATLAB successfully demonstrated the theory.
The paper describes the principle of adaptive filtering and the principles and steps of the two basic adaptive algorithms, LMS and RLS.
MATLAB was used to simulate and implement adaptive filtering with both algorithms.
Pages: 4 (49-51, 54). Authors: Xu Yan; Li Jing (School of Information Engineering, Chang'an University, Xi'an 710064, Shaanxi, China). Language: Chinese. CLC number: TP312. Related literature: 1. Adaptive filtering based on the LMS and RLS algorithms and simulation analysis — Ma Guodong, Yan Shutian, He Chengzhu, Yang Chen. 2. Design and implementation of an RLS-based adaptive anti-interference power-frequency communication filter — Xu Ting, Tong Xuming. 3. qF-LMS-based adaptive filter and its FPGA implementation — Ding Zefeng, Bai Luyang, Yang Weiyi, Wang Yanfen. 4. Performance analysis of adaptive filters based on the LMS algorithm — Liu Jiantao, Xi Chuang, Jiang Haiyang. 5. RLS-based adaptive fundamental-detection method for active power filters (in English) — Jiang Xiaohua, Jin Ji, Ale Emedi.
Combined Adaptive Filter with LMS-Based AlgorithmsAbstract: A combined adaptive filter is proposed. It consists of parallel LMS-based adaptive FIR filters and an algorithm for choosing the better among them. As a criterion for comparison of the considered a lgorithms in the proposed filter, we take the ratio between bias and variance of the weighting coefficients. Simulations results confirm the advantages of the proposed adaptive filter. Keywords: Adaptive filter, LMS algorithm, Combined algorithm,Bias and varian ce trade-off 1.IntroductionAdaptive filters have been applied in signal processing and control, as well as in many practical problems, [1, 2]. Performance of an adaptive filter depends mainly on the algorithm used for updating the filter weighting coefficient s. The most commonly used adaptive systems are those based on the Least Mean Square (LMS) adaptive algorithm and its modifications (LMS-based algorithms).The LMS is simple for implementation and robust in a number of applications [1–3]. However, since it does not always converge in an acceptable manner, there have been many attempts to improve its performance by the appropriate modifications: sign algorithm (SA) [8], geometric mean LMS (GLMS) [5], variable step-size LMS(VS LMS) [6, 7].Each of the LMS-based algorithms has at least one parameter that should be defined prior to the adaptation procedure (step for LMS and SA; step and smoothing coefficients for GLMS; various parameters affecting the step for VS LMS). These parameters crucially influence the filter ou tput during two adaptation phases:transient and steady state. Choice of these parameters is mostly based on some kind of trade-off between the quality of algorithm performance in the mentioned adaptation phases.We propose a possible approach for the LMS-based adaptive filter performance improvement. 
Namely, we make a combination of several LMS-based FIR filters with different parameters, and provide the criterion for choosing the most suitable algorithm for different adaptation phases. This method may be applied to all the LMS-based algorithms, although we here consider only several of them.The paper is organized as follows. An overview of the considered LMS-based algorithms is given in Section 2.Section 3 proposes the criterion for evaluation and combination of adaptive algorithms. Simulation results are presented in Section 4.2. LMS based algorithmsLet us define the input signal vector T k N k x k x k x X )]1()1()([+--= and vector of weighting coefficients as T N k k W k W k W W )]()()([110-= .The weighting coefficients vector should be calculated according to:}{21k k k k X e E W W μ+=+ (1) where µ is the algorithm step, E{·} is the estimate of the expected value and k T k k k X W d e -=is the error at the in-stant k,and dk is a reference signal. Depending on the estimation of expected value in (1), one defines various forms of adaptive algorithms: the LMS {}()k k k k X e X e E =,the GLMS {}()()∑=--≤<-=k i ik i k i k k a X e a aX e E 010,1, and the SA {}()()k k k k e sign X X e E =,[1,2,5,8] .The VS LMS has the same form as the LMS, but in the adaptation the step µ(k) is changed [6, 7].The considered adaptive filtering p roblem consists in trying to adjust a set of weighting coefficients so that the system output,k T k k X W y =, tracks a reference signal, assumed as k k Tk k n X W d +=*,where k n is a zero mean Gaussian noise with the variance 2n σ,and*k W is the optimal weight vector (Wiener vector). Two cases will be considered:W W k =* is a constant (stationary case) and *k W is time-varying (nonstationary case). In nonstationary case the unknown system parameters( i.e. the optimal vector *k W )are time variant. 
It is often assumed thatvariation of *k W may be modeled as K k k Z W W +=+**1 is the zero-mean random perturbation,independent on k X and k n with the autocorrelation matrix []I Z Z E G Z T k k 2σ==.Note that analysis for the stationary case directly follows for 02=Z σ.The weighting coefficient vector converges to the Wiener one, if the condition from [1, 2] is satisfied.Define the weighting coefficientsmisalignment, [1–3],*k k k W W V -=. It is due to both the effects of gradient noise (weighting coefficients variations around the average value) and the weighting vector lag (difference between the average and the optimal value), [3]. It can be expressed as:()()()()*k k k k k W W E W E W V -+-=, (2)According to (2), the ith element of k V is:(3)where ()()k W bias i is the weighting coefficient bias and ()k i ρ is a zero-mean random variable with the variance 2σ.The variance depends on the type of LMS-based algorithm, as well as on the external noise variance2nσ.Thus, if the noise variance is constant or slowly-varying,2σ is time invariant for a particular ()()()()()()()()()()()()k k W bias k W E k W k W k W E k V i i i i i i i ρ+=-+-=*LMS-based algorithm. In that sense, in the analysis that follows we will assume that 2σ depends only on the algorithm type, i.e. on its parameters.An important performance measure for an adaptive filter is its mean square deviation (MSD) of weighting coefficients. For the adaptive filters, it is given by, [3]:[]k T k k V V E MSD ∞→=lim . 3. Combined adaptive filterThe basic idea of the combined adaptive filter lies in parallel implementation of two or more adaptive LMS-based algorithms, with the choice of the best among them in each iteration [9]. Choice of the most appropriate algorithm, in each iteration, reduces to the choice of the best value for the weighting coefficients. 
The best weighting coefficient is the one that is, at a given instant, closest to the corresponding value of the Wiener vector. Let $W_i(k,q)$ be the $i$-th weighting coefficient of an LMS-based algorithm with the chosen parameter $q$ at instant $k$. Note that all the algorithms may now be treated in a unified way (LMS: $q \equiv \mu$; GLMS: $q \equiv a$; SA: $q \equiv \mu$). The behavior of an LMS-based algorithm depends crucially on $q$: in each iteration there is an optimal value $q_{opt}$ producing the best performance of the adaptive algorithm. Analyze now a combined adaptive filter consisting of several LMS-based algorithms of the same type, but with different values of the parameter $q$.

The weighting coefficients are random variables distributed around $W_i^*(k)$, with bias $\mathrm{bias}(W_i(k,q))$ and variance $\sigma_q^2$, related by [4, 9]

$\left| W_i(k,q) - W_i^*(k) - \mathrm{bias}(W_i(k,q)) \right| \le \kappa \sigma_q$,   (4)

where (4) holds with probability $P(\kappa)$, dependent on $\kappa$. For example, for $\kappa = 2$ and a Gaussian distribution, $P(\kappa) = 0.95$ (the two-sigma rule).

Define the confidence intervals for $W_i(k,q)$ [4, 9]:

$D_i(k) = \left[\, W_i(k,q) - 2\kappa\sigma_q,\; W_i(k,q) + 2\kappa\sigma_q \,\right]$.   (5)

Then, from (4) and (5), we conclude that, as long as $\mathrm{bias}(W_i(k,q)) < \kappa\sigma_q$, we have $W_i^*(k) \in D_i(k)$, independently of $q$. This means that, for small bias, the confidence intervals for different $q$'s of the same LMS-based algorithm intersect. When, on the other hand, the bias becomes large, the central positions of the intervals for different $q$'s are far apart, and the intervals do not intersect.

Since we have no a priori information about $\mathrm{bias}(W_i(k,q))$, we use a statistical approach to obtain the criterion for the choice of the adaptive algorithm, i.e. of the value of $q$. The criterion follows from the trade-off condition that the bias and the variance are of the same order of magnitude, i.e. $\mathrm{bias}(W_i(k,q)) \approx \kappa\sigma_q$ [4].

The proposed combined algorithm (CA) can now be summarized in the following steps:

Step 1.
Calculate $W_i(k,q)$ for the algorithms with different $q$'s from the predefined set $Q = \{q_1, q_2, \ldots\}$.

Step 2. Estimate the variance $\sigma_q^2$ for each considered algorithm.

Step 3. Check whether the intervals $D_i(k)$ of the considered algorithms intersect. Start from the algorithm with the largest variance and go toward the ones with smaller variances. According to (4), (5) and the trade-off criterion, this check reduces to checking whether

$\left| W_i(k,q_m) - W_i(k,q_l) \right| < 2\kappa\left(\sigma_{q_m} + \sigma_{q_l}\right)$   (6)

is satisfied, where $q_m, q_l \in Q$ are adjacent in variance, i.e. $\forall q_h:\ \sigma_{q_m}^2 > \sigma_{q_h}^2 > \sigma_{q_l}^2 \Rightarrow q_h \notin Q$. If no $D_i(k)$ intersect (large bias), choose the algorithm with the largest variance. If the $D_i(k)$ intersect, the bias is already small, so check the next pair of weighting coefficients or, if that was the last pair, choose the algorithm with the smallest variance. The first two intervals that do not intersect indicate that the proposed trade-off criterion is reached; then choose the algorithm with the larger variance of that pair.

Step 4. Go to the next instant of time.

The smallest number of elements of the set $Q$ is $L = 2$. In that case, one of the $q$'s should provide good tracking of rapid variations (the larger variance), while the other should provide a small variance in the steady state. By adding a few more $q$'s between these two extremes, one may slightly improve the transient behavior of the algorithm.

Note that the only unknown values in (6) are the variances. In our simulations we estimate $\sigma_q^2$ as in [4]:

$\sigma_q^2 = \left(\mathrm{median}\left( \left| W_i(k) - W_i(k-1) \right| \right) / 0.675\right)^2$,   (7)

for $k = 1, 2, \ldots, L$ and $\sigma_Z^2 \ll \sigma_q^2$. An alternative is to estimate $\sigma_n^2$ as

$\sigma_n^2 \approx \frac{1}{T}\sum_{i=1}^{T} e_i^2, \quad \text{for } x(i) = 0,$   (8)

since expressions relating $\sigma_n^2$ and $\sigma_q^2$ in the steady state are known from the literature for the various types of LMS-based algorithms; for the standard LMS algorithm in the steady state they are related by $\sigma_q^2 = q\,\sigma_n^2$ [3]. Note that any other estimate of $\sigma_q^2$ is valid for the proposed filter.
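Steps 2 and 3 can be sketched per coefficient as follows. This is our reading of (6)-(7), with the 0.675 robust-scale rule taken from (7); the function names and argument conventions are ours:

```python
import numpy as np

def estimate_sigma(coef_history):
    """Eq. (7): robust scale estimate of one coefficient from its
    successive differences over the initial iterations."""
    diffs = np.abs(np.diff(np.asarray(coef_history)))
    return float(np.median(diffs) / 0.675)

def choose_branch(w, sigma, kappa):
    """Step 3 for one coefficient. `w` and `sigma` are listed in
    decreasing order of the variance estimate. Walk from the
    largest-variance algorithm toward smaller ones; the first
    adjacent pair violating (6) (intervals (5) disjoint, i.e. large
    bias) selects its larger-variance member. If all pairs
    intersect, the bias is small: the smallest-variance branch wins."""
    for j in range(len(w) - 1):
        if abs(w[j] - w[j + 1]) >= 2.0 * kappa * (sigma[j] + sigma[j + 1]):
            return j
    return len(w) - 1
```

For $L = 2$ this reduces to a single interval test per coefficient: disjoint intervals keep the fast (large-step) branch, intersecting intervals hand over to the low-variance one.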
The complexity of the CA depends on the constituent algorithms (Step 1) and on the decision algorithm (Step 3). The calculation of the weighting coefficients for the parallel algorithms does not increase the computation time, since it is performed by a parallel hardware realization, at the cost of increased hardware requirements. The variance estimation (Step 2) contributes negligibly to the algorithm complexity, because it is performed only at the very beginning of the adaptation, using separate hardware. A simple analysis shows that the CA increases the number of operations by, at most, $N(L-1)$ additions and $N(L-1)$ IF decisions, and needs some additional hardware with respect to the constituent algorithms.

4. Illustration of the combined adaptive filter

Consider system identification by a combination of two LMS algorithms with different steps. Here the parameter $q$ is $\mu$, i.e. $Q = \{q_1, q_2\} = \{\mu, \mu/10\}$. The unknown system has four time-invariant coefficients, and the FIR filters have $N = 4$. We give the average mean square deviation (AMSD) for both individual algorithms, as well as for their combination, in Fig. 1(a). The results are obtained by averaging over 100 independent runs (the Monte Carlo method), with $\mu = 0.1$. The reference $d_k$ is corrupted by zero-mean uncorrelated Gaussian noise with $\sigma_n^2 = 0.01$ and SNR = 15 dB, and $\kappa = 1.75$. In the first 30 iterations the variance was estimated according to (7), and the CA picked the weighting coefficients calculated by the LMS with step $\mu$.

As presented in Fig. 1(a), the CA first uses the LMS with step $\mu$ and then, in the steady state, the LMS with step $\mu/10$. Note the region between the 200th and the 400th iteration, where the algorithm can take the LMS with either step size in different realizations. Here, the performance of the CA could be improved by increasing the number of parallel LMS algorithms with steps between these two extremes. Observe also that, in the steady state, the CA does not ideally pick the LMS with the smaller step.
The reason is the statistical nature of the approach.

The combined adaptive filter achieves even better performance if the individual algorithms, instead of starting an iteration from the coefficient values of their own previous iteration, start from the ones chosen by the CA. Namely, if the CA chooses the weighting coefficient vector $W^p_k$ in the $k$-th iteration, then each individual algorithm calculates its weighting coefficients in the $(k+1)$-th iteration according to

$W_{k+1} = W^p_k + 2\mu E\{e_k X_k\}$.   (9)

Fig. 1. Average MSD for the considered algorithms.
Fig. 2. Average MSD for the considered algorithms.

Fig. 1(b) shows this improvement, applied to the previous example. In order to compare the obtained results clearly, we calculated the AMSD for each simulation: AMSD = 0.02865 for the first LMS ($\mu$), AMSD = 0.20723 for the second LMS ($\mu/10$), AMSD = 0.02720 for the CA (CoLMS), and AMSD = 0.02371 for the CA with modification (9).

5. Simulation results

The proposed combined adaptive filter with various types of LMS-based algorithms is implemented for stationary and nonstationary cases in a system identification setup. The performance of the combined filter is compared with that of the individual algorithms composing the particular combination. In all simulations presented here, the reference $d_k$ is corrupted by zero-mean uncorrelated Gaussian noise with $\sigma_n^2 = 0.1$ and SNR = 15 dB. Results are obtained by averaging over 100 independent runs, with $N = 4$, as in the previous section.

(a) Time-varying optimal weighting vector: The proposed idea may be applied to the SA algorithms in the nonstationary case. In the simulation, the combined filter is composed of three SA adaptive filters with different steps, i.e. $Q = \{\mu, \mu/2, \mu/8\}$, $\mu = 0.2$. The optimal vector is generated according to the presented model with $\sigma_Z^2 = 0.001$, and $\kappa = 2$.
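The two-step-size LMS combination of Section 4, including the restart modification (9), can be sketched end to end as follows. This is our minimal single-run reimplementation, not the authors' simulation code; the seed, the unknown system, the input model, and the warm-up handling are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, iters, kappa, warmup = 4, 600, 1.75, 30
w_opt = np.array([0.9, -0.5, 0.3, -0.1])   # assumed unknown system
mus = (0.1, 0.01)                          # Q = {mu, mu/10}

W = [np.zeros(N), np.zeros(N)]             # per-branch coefficients
hist = ([], [])                            # first-coefficient history for (7)
sigma = [1.0, 1.0]
xb = np.zeros(N)

for k in range(iters):
    xb = np.roll(xb, 1)
    xb[0] = rng.standard_normal()
    d = w_opt @ xb + 0.1 * rng.standard_normal()   # sigma_n^2 = 0.01
    for j, mu in enumerate(mus):
        e = d - W[j] @ xb
        W[j] = W[j] + 2.0 * mu * e * xb            # LMS form of update (1)
        hist[j].append(W[j][0])
    if k == warmup:                                # Eq. (7) on early iterations
        sigma = [float(np.median(np.abs(np.diff(h))) / 0.675) for h in hist]
    if k < warmup:
        choice = 0                                 # warm-up: large-step LMS
    else:
        # Step 3 with L = 2: if intervals (5) intersect for all
        # coefficients, hand over to the small-step branch
        near = all(abs(W[0][i] - W[1][i]) < 2.0 * kappa * (sigma[0] + sigma[1])
                   for i in range(N))
        choice = 1 if near else 0
    Wp = W[choice].copy()
    W = [Wp.copy(), Wp.copy()]                     # modification (9): restart from Wp

msd = float(np.sum((Wp - w_opt) ** 2))
```

With restart (9), both branches share the chosen vector each iteration, so the fast branch drives the transient and the slow branch takes over near the steady state, mirroring the behavior reported for Fig. 1(b).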
In the first 30 iterations the variance was estimated according to (7), and the CA takes the coefficients of the SA with step $\mu$ (SA1). Figure 2(a) shows the AMSD characteristics for each algorithm. In the steady state the CA does not ideally follow the SA3 with $\mu/8$, because of the nonstationary nature of the problem and the relatively small difference between the coefficient variances of SA2 and SA3. However, this does not affect the overall performance of the proposed algorithm. The AMSD for each considered algorithm was: AMSD = 0.4129 (SA1, $\mu$), AMSD = 0.4257 (SA2, $\mu/2$), AMSD = 1.6011 (SA3, $\mu/8$) and AMSD = 0.2696 (Comb).

(b) Comparison with the VS LMS algorithm [6]: In this simulation we take the improved CA with modification (9) from the previous section and compare its performance with the VS LMS algorithm [6] in the case of abrupt changes of the optimal vector. Since the considered VS LMS algorithm [6] updates its step size for each weighting coefficient individually, the comparison of these two algorithms is meaningful. All parameters of the improved CA are the same as before. For the VS LMS algorithm [6], the relevant parameter values are the sign-change counter $m_0 = 11$ and the sign-continuity counter $m_1 = 7$. Figure 2(b) shows the AMSD for the compared algorithms, where one can observe the favorable properties of the CA, especially after the abrupt changes. The abrupt changes are generated by multiplying all the system coefficients by $-1$ at the 2000th iteration (Fig. 2(b)). The AMSD for the VS LMS was 0.0425, while for the CA (CoLMS) it was 0.0323.

For a complete comparison of these algorithms we now consider their computational complexity, expressed as the respective increase in the number of operations with respect to the LMS algorithm.
The CA increases the required number of operations by $N$ additions and $N$ IF decisions. For the VS LMS algorithm, the respective increase is $3N$ multiplications, $N$ additions, and at least $2N$ IF decisions. These values show the advantage of the CA with respect to computational complexity.

6. Conclusion

A combination of LMS-based algorithms is proposed, resulting in an adaptive system that inherits the favorable properties of its constituent algorithms in tracking parameter variations. In the course of the adaptation procedure it chooses the better algorithms, all the way to the steady state, where it takes the algorithm with the smallest variance of the weighting coefficient deviations from the optimal value.

Acknowledgement. This work is supported by the Volkswagen Stiftung, Federal Republic of Germany.

References

[1] Widrow, B.; Stearns, S.: Adaptive Signal Processing. Prentice-Hall, Englewood Cliffs, NJ 07632.
[2] Alexander, S. T.: Adaptive Signal Processing – Theory and Applications. Springer-Verlag, New York, 1986.
[3] Widrow, B.; McCool, J. M.; Larimore, M. G.; Johnson, C. R.: Stationary and Nonstationary Learning Characteristics of the LMS Adaptive Filter. Proc. IEEE 64 (1976), 1151–1161.
[4] Stankovic, L. J.; Katkovnik, V.: Algorithm for the Instantaneous Frequency Estimation Using Time-Frequency Distributions with Variable Window Width. IEEE SP Letters 5 (1998), 224–227.
[5] Krstajic, B.; Uskokovic, Z.; Stankovic, Lj.: GLMS Adaptive Algorithm in Linear Prediction. Proc. CCECE'97 I, pp. 114–117, Canada, 1997.
[6] Harris, R. W.; Chabries, D. M.; Bishop, F. A.: A Variable Step (VS) Adaptive Filter Algorithm. IEEE Trans. ASSP 34 (1986), 309–316.
[7] Aboulnasr, T.; Mayyas, K.: A Robust Variable Step-Size LMS-Type Algorithm: Analysis and Simulations. IEEE Trans. SP 45 (1997), 631–639.
[8] Mathews, V. J.; Cho, S. H.: Improved Convergence Analysis of Stochastic Gradient Adaptive Filters Using the Sign Algorithm. IEEE Trans. ASSP 35 (1987), 450–454.
[9] Krstajic, B.; Stankovic, L. J.; Uskokovic, Z.; Djurovic, I.: Combined Adaptive System for Identification of Unknown Systems with Varying Parameters in a Noisy Environment. Proc. IEEE ICECS'99, Paphos, Cyprus, Sept. 1999.

Chinese translation: Combined Adaptive Filter Based on the LMS Algorithm. Abstract: An adaptive combined filter is proposed.