LMS Algorithm Graduation Thesis (Part 8)

2019-04-22 09:42

Conclusion

We know that if one does not wish to use the correlation matrix of the input signal vector to speed up the convergence of the LMS algorithm, a variable step-size method can be used to shorten the adaptive convergence process; one of the main methods of this kind is the normalized LMS algorithm. To achieve fast convergence, the value of the variable step size μ(n) must be chosen appropriately. One possible strategy is to reduce the instantaneous squared error as much as possible, that is, to use the instantaneous squared error as a simple estimate of the mean square error (MSE); this is the basic idea of the LMS algorithm.
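The stochastic-gradient update this idea leads to, w(n+1) = w(n) + 2μ e(n) x(n), is easy to demonstrate in a short system-identification experiment. The sketch below is illustrative only; the filter length, the step size and the "unknown" system h are arbitrary assumptions, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4                                  # filter length (assumed)
h = np.array([0.5, -0.3, 0.2, 0.1])    # "unknown" system to identify (assumed)
mu = 0.05                              # fixed step size

w = np.zeros(N)                        # adaptive filter weights
x_buf = np.zeros(N)                    # input vector x(n) = [x(n), ..., x(n-N+1)]
for n in range(2000):
    x_buf = np.concatenate(([rng.standard_normal()], x_buf[:-1]))
    d = h @ x_buf                      # reference (desired) signal d(n)
    e = d - w @ x_buf                  # instantaneous error e(n)
    w = w + 2 * mu * e * x_buf         # update using e^2(n) as the MSE estimate

print(np.round(w, 3))                  # w approaches h
```

In the noiseless case the error is driven to zero, so the weights converge to h itself.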

The simulations show that the value of the variable step size μ(n) is particularly important, especially when μ(n) takes large values. If μ(n) is very small, the MLMS algorithm is approximately equivalent to the LMS algorithm. Moreover, the update formula of the MLMS algorithm is very similar to a special form of the NLMS algorithm: in both, the variable step size is determined by the input signal power. The difference is that the signal and error sequences are offset by one delay, and the two algorithms tend to coincide as the number of iterations increases. Thus the normalized LMS, time-domain orthogonal LMS and modified LMS algorithms are all variable step-size LMS algorithms whose step is controlled by the input signal power, and they are exactly alike in using gradient information to adjust the filter weight coefficients toward their optimal values; their adaptation, however, is faster and their performance is greatly improved.
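The input-power-controlled step can be sketched as an NLMS-style update, with μ(n) = μ̄ / (ε + ‖x(n)‖²); the values of μ̄, ε and the test system below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def nlms_update(w, x_vec, d, mu_bar=0.5, eps=1e-6):
    """One NLMS step: the step size is normalized by the instantaneous input power."""
    e = d - w @ x_vec
    mu_n = mu_bar / (eps + x_vec @ x_vec)   # variable step mu(n) from input power
    return w + 2 * mu_n * e * x_vec, e

rng = np.random.default_rng(1)
N = 4
h = np.array([0.5, -0.3, 0.2, 0.1])         # assumed unknown system
w, x_buf = np.zeros(N), np.zeros(N)
for n in range(1000):
    x_buf = np.concatenate(([rng.standard_normal()], x_buf[:-1]))
    w, e = nlms_update(w, x_buf, h @ x_buf)

print(np.round(w, 3))
```

The ε term guards against division by zero during the first iterations, when the input buffer is still empty.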

We also know that the convergence factor μ(n) is an important parameter of the LMS adaptive filter: it controls the balance between convergence speed and steady-state misadjustment. In general, a smaller convergence factor gives slower convergence and smaller misadjustment. However, in a digital adaptive system, the LMS iteration stops once the iteration increment (i.e. the correction term) becomes smaller than half of the least significant bit (LSB) of the digital quantity, that is, once

|2μ e(n) x(n−i)| < LSB/2.

Consequently, decreasing μ(n) can degrade system performance: if the convergence factor μ(n) is reduced, the DRE increases significantly. Therefore, in a practical system, the convergence factor of the LMS algorithm cannot be decreased without limit; its lower bound is determined by the extent to which quantization and finite-precision arithmetic affect the system.
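The stalling effect can be reproduced by rounding every weight increment to a grid with step LSB: once |2μ e(n) x(n−i)| < LSB/2, the rounded increment is zero and adaptation freezes short of the optimum. A one-tap toy example (all parameter values are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
LSB = 2 ** -8        # quantization step of the weight register (assumed)
mu = 0.01            # convergence factor

w_true, w = 0.7, 0.0  # one-tap "system" and one-tap adaptive weight
for n in range(5000):
    x = rng.standard_normal()
    e = (w_true - w) * x               # error of the one-tap filter
    delta = 2 * mu * e * x             # ideal (infinite-precision) increment
    w += np.round(delta / LSB) * LSB   # increment rounded to the LSB grid

# once |2*mu*e*x| < LSB/2 the rounded increment is zero: w stalls near, not at, w_true
print(abs(w_true - w))
```

Since w can only move in multiples of LSB and w_true is not on that grid, a nonzero residual error remains no matter how long the loop runs.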

When one or more eigenvalues of the autocorrelation matrix of the input signal are zero, the adaptive filter may fail to converge because of nonlinear quantization effects; the leakage technique is usually employed to prevent this. Introducing a suitable nonlinear transformation into the update of the adaptive filter weight coefficients can, to some extent, simplify the multiplications in the update and thereby simplify the hardware or software implementation of the LMS adaptive filter; the sign (polarity) LMS adaptive algorithm introduced in this thesis is a typical algorithm of this kind. Introducing the sign function simplifies the computation of the adaptive filter, but the reduced precision of the signal or system degrades performance. Hence, when using such nonlinear transformations, the trade-off between computational cost and the other characteristics of the system must be weighed.
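The sign (polarity) LMS replaces the error e(n) in the update by sign(e(n)), so each tap update needs no multiplication by the error value. A brief sketch (the system h and all parameters are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
N, mu = 4, 0.005
h = np.array([0.5, -0.3, 0.2, 0.1])      # assumed unknown system

w = np.zeros(N)
x_buf = np.zeros(N)
for n in range(20000):
    x_buf = np.concatenate(([rng.standard_normal()], x_buf[:-1]))
    e = h @ x_buf - w @ x_buf
    w = w + 2 * mu * np.sign(e) * x_buf  # only the sign of e is used in the update

print(np.round(w, 2))                    # close to h, with a small error floor
```

The fixed-magnitude steps are what cause the residual error floor mentioned above: near convergence the weights keep hopping by about 2μ|x| around the optimum.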


References

[1] Hu Guangshu. Digital Signal Processing: Theory, Algorithms and Implementation [M]. Tsinghua University Press, 2004.
[2] Yao Tianren, Sun Hong. Modern Digital Signal Processing [M]. Huazhong University of Science and Technology Press, 1999.
[3] Chen Houjin, Xue Jian, Hu Jian, et al. Digital Signal Processing [M]. Higher Education Press, 2004.
[4] Pan Shixian. Spectral Estimation and Adaptive Filtering [M]. Beihang University Press, 1991.
[5] Li Yong, Xu Zhen, et al. MATLAB-Aided Modern Engineering Digital Signal Processing [M]. Xidian University Press, 2002.
[6] He Zhenya. Adaptive Signal Processing [M]. Science Press, 2002.
[7] Shen Fuming. Adaptive Signal Processing [M]. Xidian University Press, 2001.
[8] Qiu Tianshuang, Wei Dongxing, Tang Hong, Zhang Anqing, et al. Adaptive Signal Processing in Communications [M]. Publishing House of Electronics Industry, 2005.
[9] Bernard Widrow, Samuel D. Stearns. Adaptive Signal Processing [M]. China Machine Press, 2008.
[10] Liu Bo, Wen Zhong, Zeng Ya, et al. MATLAB Signal Processing [M]. Publishing House of Electronics Industry, 2006.
[11] Widrow, B.; Stearns, S. Adaptive Signal Processing [M]. Prentice-Hall, Inc., Englewood Cliffs, NJ.


Appendix I: English Original and Translation

English Original

Combined Adaptive Filter with LMS-Based Algorithms

Božo Krstajić, Ljubiša Stanković, and Zdravko Uskoković

Abstract: A combined adaptive filter is proposed. It consists of parallel LMS-based adaptive FIR filters and an algorithm for choosing the better among them. As a criterion for comparison of the considered algorithms in the proposed filter, we take the ratio between bias and variance of the weighting coefficients. Simulation results confirm the advantages of the proposed adaptive filter.

Keywords: Adaptive filter, LMS algorithm, Combined algorithm, Bias and variance trade-off

1. Introduction

Adaptive filters have been applied in signal processing and control, as well as in many practical problems, [1, 2]. Performance of an adaptive filter depends mainly on the algorithm used for updating the filter weighting coefficients. The most commonly used adaptive systems are those based on the Least Mean Square (LMS) adaptive algorithm and its modifications (LMS-based algorithms).

The LMS is simple for implementation and robust in a number of applications [1–3]. However, since it does not always converge in an acceptable manner, there have been many attempts to improve its performance by the appropriate modifications: sign algorithm (SA) [8], geometric mean LMS (GLMS) [5], variable step-size LMS (VS LMS) [6, 7].

Each of the LMS-based algorithms has at least one parameter that should be defined prior to the adaptation procedure (step for LMS and SA; step and smoothing coefficients for GLMS; various parameters affecting the step for VS LMS). These parameters crucially influence the filter output during two adaptation phases: transient and steady state. Choice of these parameters is mostly based on some kind of trade-off between the quality of algorithm performance in the mentioned adaptation phases.

We propose a possible approach for the LMS-based adaptive filter performance improvement. Namely, we make a combination of several LMS-based FIR filters with different parameters, and provide the criterion for choosing the most suitable algorithm for different adaptation phases. This method may be applied to all the LMS-based algorithms, although we here consider only several of them.

The paper is organized as follows. An overview of the considered LMS-based algorithms is given in Section 2. Section 3 proposes the criterion for evaluation and combination of adaptive algorithms. Simulation results are presented in Section 4.

2. LMS-based algorithms

Let us define the input signal vector X_k = [x(k) x(k−1) … x(k−N+1)]^T and the vector of weighting coefficients as W_k = [W_0(k) W_1(k) … W_{N−1}(k)]^T. The weighting coefficients vector should be calculated according to:

W_{k+1} = W_k + 2μ Ê{e_k X_k},   (1)

where μ is the algorithm step, Ê{·} is the estimate of the expected value and e_k = d_k − W_k^T X_k is the error at the instant k, and d_k is a reference signal. Depending on the estimation of the expected value in (1), one defines various forms of adaptive algorithms:

the LMS: Ê{e_k X_k} = e_k X_k; the GLMS: Ê{e_k X_k} = Σ_{i=0}^{k} a^i (1−a) e_{k−i} X_{k−i}, 0 < a < 1; and the SA: Ê{e_k X_k} = X_k sign(e_k), [1, 2, 5, 8]. The VS LMS has the same form as the LMS, but in the adaptation the step μ(k) is changed [6, 7].
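The three estimates above can be written as small update helpers; the recursive GLMS form Ê_k = a·Ê_{k−1} + (1−a) e_k X_k is an algebraic rewrite of the geometric sum. A sketch in Python (the function names are ours, not from the paper):

```python
import numpy as np

def lms_grad(e_k, x_k):
    # LMS: instantaneous estimate E{e_k X_k} ~= e_k X_k
    return e_k * x_k

def glms_grad(e_k, x_k, prev_est, a=0.9):
    # GLMS: geometric averaging, E_k = a*E_{k-1} + (1 - a)*e_k X_k
    return a * prev_est + (1 - a) * e_k * x_k

def sa_grad(e_k, x_k):
    # SA: only the sign of the error is kept
    return np.sign(e_k) * x_k
```

Each estimate is then plugged into the common update W_{k+1} = W_k + 2μ·(estimate) of equation (1).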

The considered adaptive filtering problem consists in trying to adjust a set of weighting coefficients so that the system output, y_k = W_k^T X_k, tracks a reference signal, assumed as d_k = W_k^{*T} X_k + n_k, where n_k is a zero-mean Gaussian noise with the variance σ_n², and W_k^* is the optimal weight vector (Wiener vector). Two cases will be considered: W_k^* = W^* is a constant (stationary case) and W_k^* is time-varying (nonstationary case). In the nonstationary case the unknown system parameters (i.e. the optimal vector W_k^*) are time variant. It is often assumed that the variation of W_k^* may be modeled as W_{k+1}^* = W_k^* + Z_k, where Z_k is a zero-mean random perturbation, independent of X_k and n_k, with the autocorrelation matrix G = E{Z_k Z_k^T} = σ_Z² I. Note that the analysis for the stationary case directly follows for σ_Z² = 0. The weighting coefficient vector converges to the Wiener one, if the condition from [1, 2] is satisfied.

Define the weighting coefficients misalignment, [1–3], V_k = W_k − W_k^*. It is due to both the effects of gradient noise (weighting coefficients variations around the average value) and the weighting vector lag (difference between the average and the optimal value), [3]. It can be expressed as:

V_k = (W_k − E{W_k}) + (E{W_k} − W_k^*).   (2)

According to (2), the i-th element of V_k is:

V_i(k) = (E{W_i(k)} − W_i^*(k)) + (W_i(k) − E{W_i(k)}) = bias(W_i(k)) + ρ_i(k),   (3)

where bias(W_i(k)) is the weighting coefficient bias and ρ_i(k) is a zero-mean random variable with the variance σ². The variance depends on the type of LMS-based algorithm, as well as on the external noise variance σ_n². Thus, if the noise variance is constant or slowly-varying, σ is time invariant for a particular LMS-based algorithm. In that sense, in the analysis that follows we will assume that σ depends only on the algorithm type, i.e. on its parameters.

An important performance measure for an adaptive filter is the mean square deviation (MSD) of its weighting coefficients. For the adaptive filters, it is given by, [3]: MSD = lim_{k→∞} E{V_k^T V_k}.

3. Combined adaptive filter


The basic idea of the combined adaptive filter lies in the parallel implementation of two or more adaptive LMS-based algorithms, with the choice of the best among them in each iteration [9]. The choice of the most appropriate algorithm, in each iteration, reduces to the choice of the best value for the weighting coefficients. The best weighting coefficient is the one that is, at a given instant, the closest to the corresponding value of the Wiener vector. Let W_i(k,q) be the i-th weighting coefficient for the LMS-based algorithm with the chosen parameter q at an instant k. Note that one may now treat all the algorithms in a unified way (LMS: q ≡ μ, GLMS: q ≡ a, SA: q ≡ μ). LMS-based algorithm behavior is crucially dependent on q. In each iteration there is an optimal value q_opt, producing the best performance of the adaptive algorithm. Analyze now a combined adaptive filter, with several LMS-based algorithms of the same type, but with different parameter q.

The weighting coefficients are random variables distributed around the W_i^*(k), with bias(W_i(k,q)) and the variance σ_q², related by [4, 9]:

|W_i(k,q) − W_i^*(k)| ≤ |bias(W_i(k,q))| + κσ_q,   (4)

where (4) holds with the probability P(κ), dependent on κ. For example, for κ = 2 and a Gaussian distribution, P(κ) = 0.95 (two sigma rule).

Define the confidence intervals for W_i(k,q), [4, 9]:

D_i(k) = [W_i(k,q) − 2κσ_q, W_i(k,q) + 2κσ_q].   (5)

Then, from (4) and (5) we conclude that, as long as |bias(W_i(k,q))| ≤ κσ_q, W_i^*(k) ∈ D_i(k), independently of q. This means that, for small bias, the confidence intervals for different q's of the same LMS-based algorithm intersect. When, on the other hand, the bias becomes large, the central positions of the intervals for different q's are far apart, and they do not intersect.

Since we do not have a priori information about bias(W_i(k,q)), we will use a specific statistical approach to get the criterion for the choice of the adaptive algorithm, i.e. for the values of q. The criterion follows from the trade-off condition that bias and variance are of the same order of magnitude, i.e. |bias(W_i(k,q))| = κσ_q, [4]. The proposed combined algorithm (CA) can now be summarized in the following steps:

Step 1. Calculate W_i(k,q) for the algorithms with different q's from the predefined set Q = {q_1, q_2, …}.

Step 2. Estimate the variance σ_q² for each considered algorithm.

Step 3. Check if the D_i(k) intersect for the considered algorithms. Start from the algorithm with the largest value of variance, and go toward the ones with smaller values of variances. According to (4), (5) and the trade-off criterion, this check reduces to the check whether

|W_i(k,q_m) − W_i(k,q_l)| ≤ 2κ(σ_{q_m} + σ_{q_l})   (6)

is satisfied, where q_m, q_l ∈ Q, and the following relation holds: there is no q_h ∈ Q such that σ_{q_m} > σ_{q_h} > σ_{q_l}.
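Steps 1–3 can be sketched for two parallel LMS filters with different steps q: for the LMS, a larger step gives a larger weight variance, and here σ_q is estimated by a sliding sample standard deviation of the weights. All names, the variance estimator and the parameter values are our assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4
h = np.array([0.5, -0.3, 0.2, 0.1])   # unknown system (demo assumption)
Q = [0.002, 0.05]                      # two LMS steps: slow (low variance), fast (high variance)
kappa = 2.0
L = 50                                 # window for the sample-deviation estimate of sigma_q

w = {q: np.zeros(N) for q in Q}
hist = {q: [] for q in Q}
x_buf = np.zeros(N)
w_comb = np.zeros(N)
for n in range(3000):
    x_buf = np.concatenate(([rng.standard_normal()], x_buf[:-1]))
    d = h @ x_buf + 0.01 * rng.standard_normal()
    sigma = {}
    for q in Q:                        # Step 1: run the parallel LMS updates
        e = d - w[q] @ x_buf
        w[q] = w[q] + 2 * q * e * x_buf
        hist[q].append(w[q].copy())
        sigma[q] = np.array(hist[q][-L:]).std(axis=0)   # Step 2: estimate sigma_q
    q_hi, q_lo = max(Q), min(Q)        # ordered from the larger to the smaller variance
    # Step 3 / eq. (6): if the confidence intervals intersect, the bias of the
    # low-variance filter is small, so take its weight; otherwise take the fast one.
    ok = np.abs(w[q_hi] - w[q_lo]) <= 2 * kappa * (sigma[q_hi] + sigma[q_lo])
    w_comb = np.where(ok, w[q_lo], w[q_hi])

print(np.round(w_comb, 2))
```

Per coefficient, the combined filter thus inherits the fast transient of the large-step filter and the low steady-state variance of the small-step one.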



