Proportionate-type Normalized Least Mean Square Algorithms

Kevin Wagner
Miloš Doroslovački

Description

The topic of this book is proportionate-type normalized least mean squares (PtNLMS) adaptive filtering algorithms, which attempt to estimate an unknown impulse response by adaptively giving gains proportionate to an estimate of the impulse response and the current measured error. These algorithms offer low computational complexity and fast convergence times for sparse impulse responses in network and acoustic echo cancellation applications. New PtNLMS algorithms are developed by choosing gains that optimize user-defined criteria, such as mean square error, at all times. PtNLMS algorithms are extended from real-valued signals to complex-valued signals. The computational complexity of the presented algorithms is examined.

Contents

1. Introduction to PtNLMS Algorithms
2. LMS Analysis Techniques
3. PtNLMS Analysis Techniques
4. Algorithms Designed Based on Minimization of User-Defined Criteria
5. Probability Density of WD for PtLMS Algorithms
6. Adaptive Step-Size PtNLMS Algorithms
7. Complex PtNLMS Algorithms
8. Computational Complexity for PtNLMS Algorithms

About the Authors

Kevin Wagner has been a physicist with the Radar Division of the Naval Research Laboratory, Washington, DC, USA since 2001. His research interests are in the area of adaptive signal processing and non-convex optimization.

Miloš Doroslovački has been with the Department of Electrical and Computer Engineering at George Washington University, USA since 1995, where he is now an Associate Professor. His main research interests are in the fields of adaptive signal processing, communication signals and systems, discrete-time signal and system theory, and wavelets and their applications.




Contents

Preface

Notation

Acronyms

1 Introduction to PtNLMS Algorithms

1.1. Applications motivating PtNLMS algorithms

1.2. Historical review of existing PtNLMS algorithms

1.3. Unified framework for representing PtNLMS algorithms

1.4. Proportionate-type NLMS adaptive filtering algorithms

1.5. Summary

2 LMS Analysis Techniques

2.1. LMS analysis based on small adaptation step-size

2.2. LMS analysis based on independent input signal assumptions

2.3. Performance of statistical LMS theory

2.4. Summary

3 PtNLMS Analysis Techniques

3.1. Transient analysis of PtNLMS algorithm for white input

3.2. Steady-state analysis of PtNLMS algorithm: bias and MSWD calculation

3.3. Convergence analysis of the simplified PNLMS algorithm

3.4. Convergence analysis of the PNLMS algorithm

3.5. Summary

4 Algorithms Designed Based on Minimization of User-Defined Criteria

4.1. PtNLMS algorithms with gain allocation motivated by MSE minimization for white input

4.2. PtNLMS algorithm obtained by minimization of MSE modeled by exponential functions

4.3. PtNLMS algorithm obtained by minimization of the MSWD for colored input

4.4. Reduced computational complexity suboptimal gain allocation for PtNLMS algorithm with colored input

4.5. Summary

5 Probability Density of WD for PtLMS Algorithms

5.1. Proportionate-type least mean square algorithms

5.2. Derivation of the conditional PDF of WD for the PtLMS algorithm

5.3. Applications using the conditional PDF

5.4. Summary

6 Adaptive Step-Size PtNLMS Algorithms

6.1. Adaptation of μ-law for compression of weight estimates using the output square error

6.2. AMPNLMS and AEPNLMS simplification

6.3. Algorithm performance results

6.4. Summary

7 Complex PtNLMS Algorithms

7.1. Complex adaptive filter framework

7.2. cPtNLMS and cPtAP algorithm derivation

7.3. Complex water-filling gain allocation algorithm for white input signals: one gain per coefficient case

7.4. Complex colored water-filling gain allocation algorithm: one gain per coefficient case

7.5. Simulation results

7.6. Transform domain PtNLMS algorithms

7.7. Summary

8 Computational Complexity for PtNLMS Algorithms

8.1. LMS computational complexity

8.2. NLMS computational complexity

8.3. PtNLMS computational complexity

8.4. Computational complexity for specific PtNLMS algorithms

8.5. Summary

Conclusion

Appendix 1: Calculation of , and

A1.1. Calculation of term

A1.2. Calculation of term

A1.3. Calculation of term

Appendix 2: Impulse Response Legend

Bibliography

Index

First published 2013 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd

27-37 St George’s Road

London SW19 4EU

UK

www.iste.co.uk

John Wiley & Sons, Inc.

111 River Street

Hoboken, NJ 07030

USA

www.wiley.com

© ISTE Ltd 2013

The rights of Kevin Wagner and Miloš Doroslovački to be identified as the authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Control Number: 2013937864

British Library Cataloguing-in-Publication Data

A CIP record for this book is available from the British Library

ISSN: 2051-2481 (Print)

ISSN: 2051-249X (Online)

ISBN: 978-1-84821-470-5

Preface

Aims of this book

The primary goal of this book is to impart additional capabilities and tools to the field of adaptive filtering. A large part of this book deals with the operation of adaptive filters when the unknown impulse response is sparse. A sparse impulse response is one in which only a few coefficients contain the majority of energy. In this case, the algorithm designer attempts to use the a priori knowledge of sparsity. Proportionate-type normalized least mean square (PtNLMS) algorithms attempt to leverage this knowledge of sparsity. However, an ideal algorithm would be robust and could provide superior channel estimation in both sparse and non-sparse (dispersive) channels. In addition, it would be preferable for the algorithm to work in both stationary and non-stationary environments. Taking all these factors into consideration, this book attempts to add to the state of the art in PtNLMS algorithm functionality for all these diverse conditions.
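To make this setting concrete, the following minimal sketch, in Python/NumPy, shows a generic proportionate-type NLMS iteration identifying a sparse impulse response. The magnitude-proportional gain rule and the parameter names (beta, rho, delta_p, eps) are illustrative assumptions in the spirit of the PNLMS family, not the specific algorithms developed in this book.

```python
import numpy as np

def ptnlms_update(w, x_buf, d, beta=0.5, delta_p=0.01, rho=0.01, eps=1e-4):
    # Output error for the current sample.
    e = d - x_buf @ w
    # PNLMS-style gains: proportional to coefficient magnitudes, with a floor
    # so that currently small (inactive) coefficients still adapt.
    g = np.maximum(rho * max(delta_p, np.max(np.abs(w))), np.abs(w))
    g = g * (len(w) / np.sum(g))          # normalize so the gains average to one
    # Proportionate, normalized LMS step.
    w_next = w + beta * g * x_buf * e / (x_buf @ (g * x_buf) + eps)
    return w_next, e

# Toy identification of a sparse impulse response from noisy observations.
rng = np.random.default_rng(0)
L = 64
w_true = np.zeros(L)
w_true[[3, 17, 40]] = [1.0, -0.5, 0.25]   # only three active coefficients
w_hat = np.zeros(L)
x = rng.standard_normal(5000)
for k in range(L, len(x)):
    x_buf = x[k:k - L:-1]                 # most recent L samples, newest first
    d = x_buf @ w_true + 0.01 * rng.standard_normal()
    w_hat, _ = ptnlms_update(w_hat, x_buf, d)
```

In this sketch, coefficients with larger estimated magnitudes receive larger gains, which is the mechanism that speeds up convergence when the true impulse response is sparse.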

Organization of this book

Chapter 1 introduces the framework of the PtNLMS algorithm. A review of prior work performed in the field of adaptive filtering is presented.

Chapter 2 describes classic techniques used to analyze the steady-state and transient regimes of the least mean square (LMS) algorithm.

In Chapter 3, a general methodology is presented for the steady-state and transient analysis of an arbitrary PtNLMS algorithm with white input signals. This chapter builds on the previous one and examines the usability and limitations of the assumption that the weight deviations are Gaussian.

In Chapter 4, several new algorithms are discussed which attempt to choose, at each time instant, a gain that minimizes a user-defined criterion, such as the mean square output error or the mean square weight deviation. The solution of this optimization problem results in a water-filling algorithm. The described algorithms are then tested in a wide variety of input signal and impulse response scenarios.
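For readers unfamiliar with the term, the sketch below illustrates the generic water-filling idea in isolation: a fixed gain budget is poured over per-coefficient "levels" until a common water level is reached. The function name, the bisection approach and the interpretation of the levels are illustrative assumptions; the actual gain-allocation laws derived in Chapter 4 follow from the MSE and MSWD minimizations described there.

```python
import numpy as np

def water_fill(levels, budget):
    # Find the water level mu by bisection so that sum(max(mu - levels, 0)) == budget.
    lo, hi = float(np.min(levels)), float(np.max(levels)) + budget
    for _ in range(60):
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - levels, 0.0).sum() > budget:
            hi = mu
        else:
            lo = mu
    return np.maximum(lo - levels, 0.0)      # allocation per coefficient

levels = np.array([0.10, 0.40, 0.05, 0.80])  # hypothetical per-coefficient levels
alloc = water_fill(levels, budget=1.0)
print(alloc, alloc.sum())                    # the budget concentrates on the lowest levels
```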

In Chapter 5, an analytic expression for the conditional probability density function of the weight deviations, given the preceding weight deviations, is derived. This joint conditional probability density function is then used to derive the steady-state joint probability density function for weight deviations under different gain allocation laws.

In Chapter 6, a modification of the µ-law PNLMS algorithm is introduced. Motivated by minimizing the mean square error (MSE) at all times, the adaptive step-size algorithms described in this chapter are shown to exhibit robust convergence properties.

In Chapter 7, the PtNLMS algorithm is extended from real-valued signals to complex-valued signals. In addition, several simplifications of the complex PtNLMS algorithm and their implementations are proposed. Finally, complex water-filling algorithms are derived.

In Chapter 8, the computational complexities of algorithms introduced in this book are compared to classic algorithms such as the normalized least mean square (NLMS) and proportionate normalized least mean square (PNLMS) algorithms.

Notation

The following notation is used throughout this book. Vectors are denoted by boldface lowercase letters, such as x. All vectors are column vectors unless explicitly stated otherwise. Scalars are denoted by Roman or Greek letters, such as x or v. The ith component of vector x is given by xi. Matrices are denoted by boldface capital letters, such as A. The (i, j)th entry of any matrix A is denoted as [A]ij ≡ aij. We frequently encounter time-varying vectors in this book. A vector at time k is given by x(k). For notational convenience, this time indexing is often suppressed so that the notation x implies x(k). Additionally, we use the definitions x+ ≡ x(k + 1) and x− ≡ x(k − 1) to represent the vector x at times k + 1 and k − 1, respectively.

For vector a with length L, we define the function Diag{a} as an L × L matrix whose diagonal entries are the L elements of a and all other entries are zero. For matrix A, we define the function diag{A} as a column vector containing the L diagonal entries from A. For matrices, Re{A} and Im{A} represent the real and imaginary parts of the complex matrix A.
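As a small illustration of these conventions, the lines below show Diag{·}, diag{·}, Re{·} and Im{·}, together with the weighted squared norm that appears in the list that follows, using NumPy; the variable names are chosen only for this example.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
A = np.arange(9.0).reshape(3, 3) + 1j * np.eye(3)     # a complex 3 x 3 matrix

Diag_a = np.diag(a)           # Diag{a}: 3 x 3 matrix with the elements of a on its diagonal
diag_A = np.diag(A)           # diag{A}: column vector of the diagonal entries of A
Re_A, Im_A = A.real, A.imag   # Re{A} and Im{A}: real and imaginary parts of A

x = np.array([1.0, -1.0, 2.0])
W = np.diag([1.0, 2.0, 3.0])                          # positive definite weighting matrix
weighted_norm_sq = x @ W @ x                          # ||x||^2_W = x^T W x (see list below)
```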

The list of notation is given below.

x : a vector

x : a scalar

A : a matrix

xi : the ith entry of vector x

[A]ij ≡ aij : the (i, j)th entry of any matrix A

Diag{a} : a diagonal matrix whose diagonal entries are the elements of vector a

diag{A} : a column vector whose entries are the diagonal elements of matrix A

I : identity matrix

E{x} : expected value of random vector x

(·)T : matrix transposition

(·)H : complex transposition (Hermitian transposition)

(·)* : complex conjugation

⊙ : the Hadamard product

Im{A} : imaginary part of complex matrix A

Re{A} : real part of complex matrix A

||x||^2 : squared Euclidean norm of the vector x

||x||^2_W : x^T W x, for column vector x and positive definite matrix W

Tr{A} : trace of the matrix A

Acronyms

The following acronyms are used in this book.

AEPNLMS : adaptive ε-proportionate normalized least mean square

AMPNLMS : adaptive μ-proportionate normalized least mean square

APAF : affine projection adaptive filter

ASPNLMS : adaptive segmented proportionate normalized least mean square

cCWF : complex colored water-filling

cLMS : complex least mean square

cMPNLMS : complex μ-proportionate normalized least mean square

cNLMS : complex normalized least mean square

cPNLMS : complex proportionate normalized least mean square

cPtAP : complex proportionate-type affine projection

cPtNLMS : complex proportionate-type normalized least mean square

CWF : colored water-filling

cWF : complex water-filling

DCT : discrete cosine transform

DCT-cPtNLMS : discrete cosine transform complex proportionate-type normalized least mean square

DCT-LMS : discrete cosine transform least mean square

DCT-NLMS : discrete cosine transform normalized least mean square

DCT-PNLMS : discrete cosine transform proportionate-type normalized least mean square

DCT-WF : discrete cosine transform water-filling

DFT : discrete Fourier transform

DWT : discrete wavelet transform

EPNLMS : ε-proportionate normalized least mean square

Haar-cPtNLMS : Haar complex proportionate-type normalized least mean square

Haar-NLMS : Haar normalized least mean square

Haar-PNLMS : Haar proportionate-type normalized least mean square

Haar-WF : Haar water-filling

IAF-PNLMS : individual activation factor proportionate normalized least mean square

IIPNLMS : improved improved proportionate normalized least mean square

IPNLMS : improved proportionate normalized least mean square

LMS : least mean square

MMSE : minimum mean square error

MPNLMS : μ-proportionate normalized least mean square

MSE : mean square error

MSWD : mean square weight deviation

MWD : mean weight deviation

NLMS : normalized least mean square

PDF : probability density function

PNLMS : proportionate normalized least mean square

PNLMS++ : proportionate normalized least mean square plus plus

PtLMS : proportionate-type least mean square

PtNLMS : proportionate-type normalized least mean square

RLS : recursive least squares

SNR : signal-to-noise ratio

SO-NLMS : self-orthogonalizing normalized least mean square

SO-PNLMS : self-orthogonalizing proportionate normalized least mean square

SO-WF : self-orthogonalizing water-filling

SPNLMS : segmented proportionate normalized least mean square

TD-CPtNLMS : transform domain complex proportionate-type normalized least mean square

VOIP : voice over IP

WD : weight deviation

WF : water-filling

1

Introduction to PtNLMS Algorithms

Read on in the full edition!
