R.D. DeGroat, E.M. Dowling, and D.A. Linebarger, "Subspace Tracking," Digital Signal Processing Handbook, CRC Press LLC, 2000. <http://www.engnetbase.com>.

Subspace Tracking

R.D. DeGroat, The University of Texas at Dallas
E.M. Dowling, The University of Texas at Dallas
D.A. Linebarger, The University of Texas at Dallas

66.1 Introduction
66.2 Background
    EVD vs. SVD • Short Memory Windows for Time Varying Estimation • Classification of Subspace Methods • Historical Overview of MEP Methods • Historical Overview of Adaptive, Non-MEP Methods
66.3 Issues Relevant to Subspace and Eigen Tracking Methods
    Bias Due to Time Varying Nature of Data Model • Controlling Roundoff Error Accumulation and Orthogonality Errors • Forward-Backward Averaging • Frequency vs. Subspace Estimation Performance • The Difficulty of Testing and Comparing Subspace Tracking Methods • Spherical Subspace (SS) Updating — A General Framework for Simplified Updating • Initialization of Subspace and Eigen Tracking Algorithms • Detection Schemes for Subspace Tracking
66.4 Summary of Subspace Tracking Methods Developed Since 1990
    Modified Eigen Problems • Gradient-Based Eigen Tracking • The URV and Rank Revealing QR (RRQR) Updates • Miscellaneous Methods
References

66.1 Introduction

Most high resolution direction-of-arrival (DOA) estimation methods rely on subspace or eigen-based information which can be obtained from the eigenvalue decomposition (EVD) of an estimated correlation matrix, or from the singular value decomposition (SVD) of the corresponding data matrix. However, the expense of directly computing these decompositions is usually prohibitive for real-time processing. Also, because the DOA angles are typically time-varying, repeated computation is necessary to track the angles. This has motivated researchers in recent years to develop low cost eigen and subspace tracking methods. Four basic strategies have been pursued to reduce computation: (1) computing only a few eigencomponents, (2) computing a subspace basis instead of individual eigencomponents, (3) approximating the eigencomponents or basis, and (4) recursively updating the eigencomponents or basis. The most efficient methods usually employ several of these strategies.

In 1990, an extensive survey of SVD tracking methods was published by Comon and Golub [7]. They classified the various algorithms according to complexity, and basically two categories emerge: O(n²r) and O(nr²) methods, where n is the snapshot vector size and r is the number of extreme eigenpairs to be tracked. Typically, r < n or r ≪ n, so the O(nr²) methods involve significantly fewer computations than the O(n²r) algorithms. However, since 1990, a number of O(nr) algorithms have been developed. This article will primarily focus on recursive subspace and eigen updating methods developed since 1990, especially the O(nr²) and O(nr) algorithms.

66.2 Background

66.2.1 EVD vs. SVD

Let X = [x_1 | x_2 | ... | x_N] be an n × N data matrix where the kth column corresponds to the kth snapshot vector, x_k ∈ C^n. With block processing, the correlation matrix for a zero mean, stationary, ergodic vector process is typically estimated as R = (1/N) X X^H, where the true correlation matrix is Φ = E[x_k x_k^H] = E[R]. The EVD of the estimated correlation matrix is closely related to the SVD of the corresponding data matrix. The SVD of X is given by X = U S V^H, where U ∈ C^{n×n} and V ∈ C^{N×N} are unitary matrices and S ∈ C^{n×N} is a diagonal matrix whose nonzero entries are real and positive. It is easy to see that the left singular vectors of X are the eigenvectors of X X^H = U S S^T U^H, and the right singular vectors of X are the eigenvectors of X^H X = V S^T S V^H. This is so because X X^H and X^H X are positive semidefinite Hermitian matrices (which have orthogonal eigenvectors and real, nonnegative eigenvalues).
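As a quick numerical sanity check of these relationships, the following sketch (numpy assumed; the sizes are illustrative) confirms that the eigenvalues of X X^H are the squared singular values of X:

```python
import numpy as np

# Illustrative sizes only; any n < N behaves the same way.
rng = np.random.default_rng(1)
n, N = 4, 10
X = rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))

U, s, Vh = np.linalg.svd(X)                 # X = U S V^H
lam = np.linalg.eigvalsh(X @ X.conj().T)    # eigenvalues of X X^H (ascending)

print(np.allclose(lam, np.sort(s**2)))      # True: eigenvalues = squared singular values
```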
Also note that the nonzero singular values of X are the positive square roots of the nonzero eigenvalues of X X^H and X^H X. Mathematically, the eigen information contained in the SVD of X or the EVD of X X^H (or X^H X) is equivalent, but the dynamic range of the eigenvalues is twice that of the corresponding singular values. With finite precision arithmetic, the greater dynamic range can result in a loss of information. For example, in rank determination, suppose the smallest singular value is ε, where ε is machine precision. The corresponding eigenvalue, ε², would be considered a machine precision zero, and the EVD of X X^H (or X^H X) would incorrectly indicate a rank deficiency. Because of the dynamic range issue, it is generally recommended to use the SVD of X (or a square root factor of R). However, because additive sensor noise usually dominates numerical errors, this choice may not be critical in most signal processing applications.

66.2.2 Short Memory Windows for Time Varying Estimation

Ultimately, we are interested in tracking some aspect of the eigenstructure of a time varying correlation (or data) matrix. For simplicity we will focus on time varying estimation of the correlation matrix, realizing that the EVD of R is trivially related to the SVD of X. A time varying estimator must have a short term memory in order to track changes. An example of long memory estimation is an estimator that involves a growing rectangular data window. As time goes on, the estimated quantities depend more and more on the old data, and less and less on the new data. The two most popular short memory approaches to estimating a time varying correlation matrix involve (1) a moving rectangular window and (2) an exponentially faded window. Unfortunately, an unbiased, causal estimate of the true instantaneous correlation matrix at time k, Φ_k = E[x_k x_k^H], is not possible if averaging is used and the vector process is truly time varying. However, it is usually assumed that the process is varying slowly enough within the effective observation window that the process is approximately stationary and some averaging is desirable. In any event, at time k, a length N moving rectangular data window results in a rank two modification of the correlation matrix estimate, i.e.,

$$R_k^{(\mathrm{rect})} = R_{k-1}^{(\mathrm{rect})} + \frac{1}{N}\left(x_k x_k^H - x_{k-N}x_{k-N}^H\right) \qquad (66.1)$$

where x_k is the new snapshot vector and x_{k−N} is the oldest vector, which is being removed from the estimate. The corresponding data matrix is given by X_k^(rect) = [x_k | x_{k−1} | ... | x_{k−N+1}] and R_k^(rect) = (1/N) X_k^(rect) (X_k^(rect))^H. Subtracting the rank one matrix from the correlation estimate is referred to as a rank one downdate. Downdating moves all the eigenvalues down (or leaves them unchanged); updating, on the other hand, moves all eigenvalues up (or leaves them unchanged). Downdating is potentially ill-conditioned because the smallest eigenvalue can move towards zero. An exponentially faded data window produces a rank one modification in

$$R_k^{(\mathrm{fade})} = \alpha R_{k-1}^{(\mathrm{fade})} + (1-\alpha)\,x_k x_k^H \qquad (66.2)$$

where α is the fading factor with 0 ≤ α ≤ 1. In this case, the data matrix is growing in size, but the older data is de-emphasized with a diagonal weighting matrix, X_k^(fade) = [x_k | x_{k−1} | ... | x_1] · diag(1, α, α², ..., α^{k−1})^{1/2}, and R_k^(fade) = (1 − α) X_k^(fade) (X_k^(fade))^H.
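In code, the two short memory recursions look like this (a minimal numpy sketch; the buffer management shown for the rectangular window is one possible arrangement):

```python
import numpy as np

def update_rect(R, X_buf, x_new):
    """Moving rectangular window, Eq. (66.1): a rank-one update with the new
    snapshot plus a rank-one downdate of the oldest one. X_buf holds the last
    N snapshots, newest first."""
    N = X_buf.shape[1]
    x_old = X_buf[:, -1]
    R = R + (np.outer(x_new, x_new.conj()) - np.outer(x_old, x_old.conj())) / N
    X_buf = np.column_stack([x_new, X_buf[:, :-1]])   # slide the window
    return R, X_buf

def update_fade(R, x_new, alpha):
    """Exponentially faded window, Eq. (66.2): a single rank-one modification."""
    return alpha * R + (1 - alpha) * np.outer(x_new, x_new.conj())
```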
Of course, the two windows could be combined to produce an exponentially faded moving rectangular window, but this kind of hybrid short memory window has not been the subject of much study in the signal processing literature. Similarly, not much attention has been paid to which short memory windowing scheme is most appropriate for a given data model. Since downdating is potentially ill-conditioned, and since two rank one modifications usually involve more computation than one, the exponentially faded window has some advantages over the moving rectangular window. The main advantage of a (short) rectangular window is in tracking sudden changes. Assuming stationarity within the effective observation window, the power in a rectangular window will be equal to the power in an exponentially faded window when

$$N \approx \frac{1}{1-\alpha} \quad\text{or equivalently}\quad \alpha \approx 1 - \frac{1}{N} = \frac{N-1}{N}. \qquad (66.3)$$

Based on a Fourier analysis of linearly varying frequencies, equal frequency lags occur when [14]

$$N \approx \frac{1+\alpha}{1-\alpha} \quad\text{or equivalently}\quad \alpha \approx \frac{N-1}{N+1}. \qquad (66.4)$$

Either one of these relationships could be used as a rule of thumb for relating the effective observation windows of the two most popular short memory windowing schemes.

66.2.3 Classification of Subspace Methods

Eigenstructure estimation can be classified as (1) block or (2) recursive. Block methods simply compute an EVD, SVD, or related decomposition based on a block of data. Recursive methods update the previously computed eigen information using new data as it arrives. We focus on recursive subspace updating methods in this article. Most subspace tracking algorithms can also be broadly categorized as (1) modified eigen problem (MEP) methods or (2) adaptive (or non-MEP) methods. With short memory windowing, MEP methods are adaptive in the sense that they can track time varying eigen information. However, when we use the word adaptive, we mean that exact eigen information is not computed at each update; rather, an adaptive method tends to move towards an EVD (or some aspect of an EVD) at each update. For example, gradient-based, perturbation-based, and neural network-based methods are classified as adaptive because on average they move towards an EVD at each update. On the other hand, rank one, rank k, and sphericalized EVD and SVD updates are, by definition, MEP methods because exact eigen information associated with an explicit matrix is computed at each update. Both MEP and adaptive methods are supposed to track the eigen information of the instantaneous, time varying correlation matrix.

66.2.4 Historical Overview of MEP Methods

Many researchers have studied SVD and EVD tracking problems. Golub [19] introduced one of the first eigen-updating schemes, and his ideas were developed and expanded by Bunch and co-workers in [3, 4]. The basic idea is to update the EVD of a symmetric (or Hermitian) matrix when it is modified by a rank one matrix. The rank-one eigen update was simplified in [37], when Schreiber introduced a transformation that makes the core eigenproblem real. Based on an additive white noise model, Karasalo [21] and Schreiber [37] suggested that the noise subspace be "sphericalized", i.e., that the noise eigenvalues be replaced by their average value so that deflation [4] could be used to significantly reduce computation. By deflating the noise subspace and only tracking the r dominant eigenvectors, the computation is reduced from O(n³) to O(nr²) per update.
DeGroat reduced computation further by extending this concept to the signal subspace [8]. By sphericalizing and deflating both the signal and the noise subspaces, the cost of tracking the r dimensional signal (or noise) subspace is O(nr), and no iteration is involved. To make eigen updating more practical, DeGroat and Roberts developed stabilization schemes to control the loss of orthogonality due to the buildup of roundoff error [10]. Further work related to eigenvector stabilization is reported in [15, 28, 29, 30]. Recently, a more stable version of Bunch's algorithm was developed by Gu and Eisenstat [20]. In [46], Yu extended rank one eigen updating to rank k updating.

DeGroat showed in [8] that forcing certain subspaces of the correlation matrix to be spherical, i.e., replacing the associated eigenvalues with a fixed or average value, is an easy way to deflate the size of the updating problem and reduce computation. Basically, a spherical subspace (SS) update is a rank one EVD update of a sphericalized correlation matrix. Asymptotic convergence analysis of SS updating is found in [11, 13]. A four level SS update capable of automatic signal subspace rank and size adjustment is described in [9, 11]. The four level and the two level SS updates are the only MEP updates to date that are O(nr) and noniterative. For more details on SS updating, see Section 66.3.6, Spherical Subspace (SS) Updating: A General Framework for Simplified Updating. In [42], Xu and Kailath present a Lanczos based subspace tracking method with an associated detection scheme to track the number of sources. A reference list for systolic implementations of SVD based subspace trackers is contained in [12].

66.2.5 Historical Overview of Adaptive, Non-MEP Methods

Owsley pioneered orthogonal iteration and stochastic-based subspace trackers in [32]. Yang and Kaveh extended Owsley's work in [44] by devising a family of constrained gradient-based algorithms. A highly parallel algorithm, denoted the inflation method, is introduced for the estimation of the noise subspace. The computational complexity of this family of gradient-based methods varies from (approximately) n²r to (7/2)nr for the adaptation equation. However, since the eigenvectors are only approximately orthogonal, an additional nr² flops may be needed if Gram Schmidt orthogonalization is used. It may be that a partial orthogonalization scheme (see Section 66.3.2, Controlling Roundoff Error Accumulation and Orthogonality Errors) can be combined with Yang and Kaveh's methods to improve orthogonality enough to eliminate the O(nr²) Gram Schmidt computation. Karhunen [22] also extended Owsley's work by developing a stochastic approximation method for subspace computation. Bin Yang [43] used recursive least squares (RLS) methods with a projection approximation approach to develop the projection approximation subspace tracker (PAST), which tracks an arbitrary basis for the signal subspace, and PASTd, which uses deflation to track the individual eigencomponents. A multi-vector eigen tracker based on the conjugate gradient method is developed in [18]; previous conjugate gradient-based methods tracked a single eigenvector only. Orthogonal iteration, lossless adaptive filter, and perturbation-based subspace trackers appear in [40], [36], and [5], respectively. A family of non-EVD subspace trackers is given in [16]. An adaptive subspace method that uses a linear operator, referred to as the Propagator, is given in [26].
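Of the methods just surveyed, PASTd is perhaps the simplest to write down. The sketch below follows the usual statement of the deflation recursion attributed to [43]; the initialization and gain handling shown are our assumptions, not a definitive implementation:

```python
import numpy as np

def pastd_update(W, d, x, beta=0.97):
    """One PASTd-style update (sketch). W (n x r) holds eigenvector estimates,
    d (length r) holds recursive power (eigenvalue) estimates, x is the new
    snapshot, and beta is a forgetting factor."""
    x = x.astype(complex)
    for i in range(W.shape[1]):
        y = np.vdot(W[:, i], x)                      # project onto i-th estimate
        d[i] = beta * d[i] + abs(y) ** 2             # update power estimate
        e = x - W[:, i] * y                          # projection error
        W[:, i] = W[:, i] + e * (np.conj(y) / d[i])  # quasi-RLS correction
        x = x - W[:, i] * y                          # deflate: strip component i
    return W, d
```

Each pass strips out the component just updated, so the next column converges towards the next dominant eigenvector; as noted above, the resulting estimates are only approximately orthogonal.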
Approximate SVD methods that are based on a QR update step followed by a single (or partial) Jacobi sweep to move the triangular factor towards a diagonal form appear in [12, 17, 30]. These methods can be described as approximate SVD methods because they will converge to an SVD if the Jacobi sweeps are repeated. Subspace estimation methods based on URV or rank revealing QR (RRQR) decompositions are referenced in [6]. These rank revealing decompositions can divide a set of orthonormal vectors into sets that span the signal and noise subspaces. However, a threshold (noise power) level that lies between the largest noise eigenvalue and the smallest signal eigenvalue must be known in advance. In some ways, the URV decomposition can be viewed as an approximate SVD. For example, the transposed QR (TQR) iteration [12] can be used to compute the SVD of a matrix, but if the iteration is stopped before convergence, the resulting decomposition is URV-like.

Artificial neural networks (ANN) have also been used to estimate eigen information [35]. In 1982, Oja [31] was one of the first to develop an eigenvector estimating ANN. Using a Hebbian type learning rule, this ANN adaptively extracts the first principal eigenvector. Much research has been done in this area since 1982. For an overview and a list of references, see [35].

66.3 Issues Relevant to Subspace and Eigen Tracking Methods

66.3.1 Bias Due to Time Varying Nature of Data Model

Because direction-of-arrival (DOA) angles are typically time varying, a range of spatial frequencies is usually included in the effective observation window. Most spatial frequency estimation methods yield frequency estimates that are approximately equal to the effective frequency average in the window. Consequently, the estimates lag the true instantaneous frequency. If the frequency variation is assumed to be linear within the effective observation window, this lag (or bias) can be easily estimated and compensated [14].

66.3.2 Controlling Roundoff Error Accumulation and Orthogonality Errors

Numerical algorithms are generally defined as stable if the roundoff error accumulates in a linear fashion. However, recursive updating algorithms cannot tolerate even a linear buildup of error if large (possibly unbounded) numbers of updates are to be performed. For real time processing, periodic reinitialization is undesirable. Most of the subspace tracking algorithms involve the product of at least k orthogonal matrices by the time the kth update is computed. According to Parlett [33], the error propagated by a product of orthogonal matrices is bounded as

$$\|U_k U_k^H - I\|_E \le (k+1)\,n^{1.5}\,\epsilon \qquad (66.5)$$

where the n × n matrix U_k = Q_k U_{k−1} = Q_k Q_{k−1} ··· Q_1 is a product of k matrices that are each orthogonal to working accuracy, ε is machine precision, and ‖·‖_E denotes the Euclidean matrix norm. Clearly, if k is large enough, the roundoff error accumulation can be significant. There are really only two sources of error in updating a symmetric or Hermitian EVD: (1) the eigenvalues and (2) the eigenvectors. Of course, the eigenvectors and eigenvalues are interrelated; errors in one tend to produce errors in the other. At each update, small errors may occur in the EVD update, so that the eigenvalues become slowly perturbed and the eigenvectors become slowly nonorthonormal. The solution is to prevent significant errors from ever accumulating in either. We do not expect the main source of error to be from the eigenvalues.
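The bound in Eq. (66.5) is easy to watch in action. The sketch below accumulates a long product of random Givens rotations and prints the growing orthogonality error (single precision and the specific parameters are our choices, used only to exaggerate the effect):

```python
import numpy as np

def orth_error(U):
    """Orthogonality error ||U U^H - I||_E from Eq. (66.5)."""
    return np.linalg.norm(U @ U.conj().T - np.eye(U.shape[0]))

rng = np.random.default_rng(0)
n = 8
U = np.eye(n, dtype=np.float32)        # single precision exaggerates the drift
for k in range(1, 100001):
    i, j = rng.choice(n, size=2, replace=False)
    th = rng.uniform(0.0, 2.0 * np.pi)
    Q = np.eye(n, dtype=np.float32)    # random Givens rotation in plane (i, j)
    Q[i, i] = Q[j, j] = np.cos(th)
    Q[i, j], Q[j, i] = np.sin(th), -np.sin(th)
    U = Q @ U
    if k % 25000 == 0:
        print(k, orth_error(U.astype(np.float64)))   # error grows with k
```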
According to Stewart [38], the eigenvalues of a Hermitian matrix are perfectly conditioned, having condition numbers of one. Moreover, it is easy to show that when exponential weighting is used, the accumulated roundoff error is bounded by a constant, assuming no significant errors are introduced by the eigenvectors. By contrast, if exponential windowing is not used, the bound on the accumulated error builds up in a linear fashion. Thus, the fading factor not only fades out old data, but also old roundoff errors that accumulate in the eigenvalues.

Unfortunately, the eigenvectors of a Hermitian matrix are not guaranteed to be well conditioned. An eigenvector will be ill-conditioned if its eigenvalue is closely spaced with other eigenvalues. In this case, small roundoff perturbations to the matrix may cause relatively large errors in the eigenvectors. The greatest potential for nonorthogonality, then, is between eigenvectors with adjacent (closely spaced) eigenvalues. This observation led to the development of a partial orthogonalization scheme known as pairwise Gram Schmidt (PGS) [10], which attacks the roundoff error buildup problem at the point of greatest numerical instability: nonorthogonality of adjacent eigenvectors. If the intervening rotations (orthogonal matrix products) inherent in the eigen update are random enough, the adjacent vector PGS can be viewed as a full orthogonalization spread out over time. When PGS is combined with exponential fading, the roundoff accumulation in both the eigenvectors and the eigenvalues is controlled. Although PGS was originally designed to stabilize Bunch's EVD update, it is generally applicable to any EVD, SVD, URV, QR, or orthogonal vector update.

Moonen et al. [29] suggested that the bulk of the eigenvector stabilization in the PGS scheme is due to the normalization of the eigenvectors. Simulations seem to indicate that normalization alone stabilizes the eigenvectors almost as well as the PGS scheme, but not to working precision orthogonality. Edelman and Stewart provide some insight into the normalization-only approach to maintaining orthogonality [15]. For additional analysis and variations on the basic idea of spreading orthogonalization out over time, see [30] and especially [28].

Many of the O(nr) adaptive subspace methods produce eigenvector estimates that are only approximately orthogonal, and normalization alone does not always provide enough stabilization to keep the orthogonality and other error measures small enough. We have found that PGS stabilization can noticeably improve both the subspace estimation performance and the DOA (or spatial frequency) estimation performance. For example, without PGS (but with normalization only), we found that Champagne's O(nr) perturbation-based eigen tracker (method PC) [5] sometimes gives spurious MUSIC-based frequency estimates. On the other hand, with PGS, Champagne's PC method produced improved subspace and frequency estimates. The orthogonality error was also significantly reduced. Similar performance boosts could be expected for any subspace or eigen tracking method, especially those that produce eigenvector estimates that are only approximately orthogonal, e.g., PAST and PASTd [43] or Yang and Kaveh's family of gradient based methods [44, 45]. Unfortunately, normalization only and PGS are both O(nr): adding this kind of stabilization to an O(nr) subspace tracking method could double its overall computation.
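A minimal sketch of the adjacent-pair PGS idea follows. This is our rendering of the scheme described above, not the exact implementation of [10]: each update, every adjacent column pair gets one Gram Schmidt step, and all columns are then renormalized.

```python
import numpy as np

def pgs_stabilize(U):
    """Pairwise Gram Schmidt (PGS) stabilization, sketched: orthogonalize each
    adjacent column pair (eigenvectors of adjacent eigenvalues are the most
    likely to drift towards each other), then renormalize. O(nr) flops."""
    r = U.shape[1]
    for i in range(r - 1):
        u = U[:, i]
        U[:, i + 1] -= u * (np.vdot(u, U[:, i + 1]) / np.vdot(u, u))
    U /= np.linalg.norm(U, axis=0)     # normalization of every column
    return U
```

A cyclic variant (discussed next) would instead orthogonalize a single pair per update and step through all pairs over time.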
Other variations on the original PGS idea involve symmetrizing the 2 × 2 transformation and making the pairwise orthogonalization cyclic [28]. The symmetric transformation assumes that the vector pairs are almost orthogonal so that higher order error terms can be ignored. If this is the case, the symmetric version can provide slightly better results at a somewhat higher computational cost. For methods that involve working precision orthogonal vectors, the original PGS scheme is overkill. Instead of doing PGS orthogonalization on each adjacent vector pair, cyclic PGS orthogonalizes only one pair of vectors per update, but cycles through all possible combinations over time. Thus, cyclic PGS covers all vector pairs without relying on the randomness of intervening rotations. Cyclic PGS spreads the orthogonalization process out in time even more than the adjacent vector PGS method. Moreover, cyclic PGS (or cyclic normalization) involves O(n) flops per update, but there is a small overhead associated with keeping track of the vector pair cycle.

In summary, we can say that stabilization may not be needed for a small number of updates. On the other hand, if an unbounded number of updates is to be performed, some kind of stabilization is recommended. For methods that yield nearly orthogonal vectors at each update, only a small amount of orthogonalization is needed to control the error buildup. In these cases, cyclic PGS may be best. However, for methods that produce vectors that are only approximately orthogonal, a more complete orthogonalization scheme may be appropriate; e.g., a cyclic scheme with two or three vector pairs orthogonalized per update will produce better results than a single pair scheme.

66.3.3 Forward-Backward Averaging

In many subspace tracking problems, forward-backward (FB) averaging can improve subspace as well as DOA (or frequency) estimation performance. Although FB averaging is generally not appropriate for nonstationary processes, it does appear to improve spatial frequency estimation performance if the frequencies vary linearly within the effective observation window. Based on Fourier analysis of linearly varying frequencies, we infer that this is probably due to the fact that the average frequency in the window is identical for both the forward and the backward cases [14]. Consequently, the frequency estimates are reinforced by FB averaging. Besides improved estimation performance, FB averaging can be exploited to reduce computation by as much as 75% [24] (a minimal sketch of the FB construction itself appears after the next subsection). FB averaging can also reduce computer memory requirements because (conjugate symmetric or anti-symmetric) symmetries in the complex eigenvectors of an FB averaged correlation matrix (or the singular vectors of an FB data matrix) can be exposed through appropriate normalization.

66.3.4 Frequency vs. Subspace Estimation Performance

It has recently been shown with asymptotic analysis that a better subspace estimate does not necessarily result in a better MUSIC-based frequency estimate [23]. In subspace tracking simulations, we have also observed that some methods produce better subspace estimates, but the associated MUSIC-based frequency estimates are not always better. Consequently, if DOA estimation is the ultimate goal, subspace estimation performance may not be the best criterion for evaluating subspace tracking methods.
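As promised in Section 66.3.3, here is the FB construction in its simplest form (a sketch; J is the exchange matrix, and the computational savings of [24] come from additionally exploiting the resulting symmetry, which is not shown):

```python
import numpy as np

def fb_average(R):
    """Forward-backward average of a correlation estimate:
    R_fb = (R + J conj(R) J) / 2, where J is the exchange (flip) matrix.
    R_fb is persymmetric, which is the structure the fast FB algorithms use."""
    J = np.fliplr(np.eye(R.shape[0]))
    return 0.5 * (R + J @ R.conj() @ J)
```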
66.3.5 The Difficulty of Testing and Comparing Subspace Tracking Methods

A significant amount of research has been done on subspace and eigen tracking algorithms in the past few years, and much progress has been made in making subspace tracking more efficient. Not surprisingly, all of the methods developed to date have different strengths and weaknesses. Unfortunately, there has not been enough time to thoroughly analyze, study, and evaluate all of the new methods. Over the years, several tests have been devised to "experimentally" compare various methods, e.g., convergence tests [44], response to sudden changes [7], and crossing frequency tracks (where the signal subspace temporarily collapses) [8]. Some methods do well on one test, but not so well on another. It is difficult to objectively compare different subspace tracking methods because optimal operating parameters are usually unknown and therefore unused, and the performance criteria may be ill-defined or contradictory.

66.3.6 Spherical Subspace (SS) Updating — A General Framework for Simplified Updating

Most eigen and subspace tracking algorithms are based directly or indirectly on tracking some aspect of the EVD of a time varying correlation matrix estimate that is recursively updated according to Eq. (66.1) or (66.2). Since Eqs. (66.1) and (66.2) involve rank one and rank two modifications of the correlation matrix, most subspace tracking algorithms explicitly or implicitly involve rank one (or two) modification of the correlation matrix. Since rank two modifications can be computed as two rank one modifications, we will focus on rank one updating.

Basically, spherical subspace (SS) updates are simplified rank one EVD updates. The simplification involves sphericalizing subsets of eigenvalues (i.e., forcing each subset to have the same eigenlevel) so that the sphericalized subspaces can be deflated. Based on an additive white noise signal model, Karasalo [21] and Schreiber [37] first suggested that the "noise" eigenvalues be replaced by their average value in order to reduce computation by deflation. Using Ljung's ODE-based method for analyzing stochastic recursive algorithms [25], it has recently been shown that, if the noise subspace is sphericalized, the dominant eigenstructure of a correlation matrix asymptotically converges to the true eigenstructure with probability one (under any noise assumption) [11]. It is important to realize that averaging the noise eigenvalues yields a spherical subspace in which the eigenvectors can be arbitrarily oriented as long as they form an orthonormal basis for the subspace. A rank-one modification affects only one component of the sphericalized subspace; thus, only one of the multiple noise eigenvalues is changed by a rank-one modification. Consequently, making the noise subspace spherical (by averaging the noise eigenvalues, or replacing them with a constant eigenlevel) deflates the eigenproblem to an (r + 1) × (r + 1) problem, which corresponds to a signal subspace of dimension r plus the single noise component whose power is changed. For details on deflation, see [4]. The analysis in [11] shows that any number of sphericalized eigenlevels can be used to track various subspace spans associated with the correlation matrix.
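The sphericalization step itself is a one-liner. For the noise-only case it amounts to the following sketch (eigenvalues assumed sorted in descending order):

```python
import numpy as np

def sphericalize_noise(d, r):
    """Replace the n - r smallest eigenvalues (the 'noise' eigenlevels) by
    their average, leaving the r dominant 'signal' eigenvalues untouched."""
    d = d.copy()
    d[r:] = d[r:].mean()
    return d
```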
For example, if both the noise and the signal subspaces are sphericalized (i.e., the dominant and subdominant sets of eigenvalues are replaced by their respective averages), the problem deflates to a 2 × 2 eigenproblem that can be solved in closed form, noniteratively. We will call this doubly deflated SS update SA2 (Signal Averaged, Two Eigenlevels) [8]. In [13] we derived the SA2 algorithm ODE and used a Lyapunov function to show asymptotic convergence to the true subspaces w.p. 1 under a diminishing gain assumption. In fact, the SA2 subspace trajectories can be described with Lie bracket notation and follow an isospectral flow as described by Brockett's ODE [2]. A four level SS update (called SA4) was introduced in [9] to allow for information theoretic source detection (based on the eigenvalues at the boundary of the signal and noise subspaces) and automatic subspace size adjustment. A detailed analysis of SA4 and an SA4 minimum description length (SA4-MDL) detection scheme can be found in [11, 41]. SA4 sphericalizes all the signal eigenvalues except the smallest one, and all the noise eigenvalues except the largest one, resulting in a 4 × 4 deflated eigenproblem. By tracking the eigenvalues that are on the boundary of the signal and noise subspaces, information theoretic detection schemes can be used to decide if the signal subspace dimension should be increased, decreased, or remain unchanged. Both SA2 and SA4 are O(nr) and noniterative. The deflated core problem in SS updating can involve any EVD or SVD method that is desired. It can also involve other decompositions, e.g., the URVD [34].

To illustrate the basic idea of SS updating, we will explicitly show how an update is accomplished when only the smallest (n − r) "noise" eigenvalues are sphericalized. This particular SS update is called a Signal Eigenstructure (SE) update because only the dominant r "signal" eigencomponents are tracked. This case is equivalent to that described by Schreiber [37], and an SVD version is given by Karasalo [21]. To simplify and more clearly illustrate the idea of SS updating, we drop the normalization factor, (1 − α), and the k subscripts from Eq. (66.2) and use the eigendecomposition of R = U D U^H to expose a simpler underlying structure for a single rank-one update:

$$
\begin{aligned}
\tilde R &= \alpha R + xx^H &(66.6)\\
 &= \alpha U D U^H + xx^H &(66.7)\\
 &= U(\alpha D + \beta\beta^H)U^H, \quad \beta = U^H x &(66.8)\\
 &= UG(\alpha D + \gamma\gamma^T)G^H U^H, \quad \gamma = G^H\beta &(66.9)\\
 &= UGH(\alpha D + \zeta\zeta^T)H^T G^H U^H, \quad \zeta = H^T\gamma &(66.10)\\
 &= UGH(Q\tilde D Q^T)H^T G^H U^H &(66.11)\\
 &= \tilde U \tilde D \tilde U^H, \quad \tilde U = UGHQ &(66.12)
\end{aligned}
$$

where G = diag(β_1/|β_1|, ..., β_n/|β_n|) is a diagonal unitary transformation that has the effect of making the matrix inside the parentheses real [37], H is an embedded Householder transformation that deflates the core problem by zeroing out certain elements of ζ (see the SE case below), and Q D̃ Q^T is the EVD of the simplified, deflated core matrix, (αD + ζζ^T). In general, H and Q will involve smaller matrices embedded in an n × n identity matrix. In order to more clearly see the details of deflation, we must concentrate on finding the eigendecomposition of the completely real matrix, S = (αD + γγ^T), for a specific case. Let us consider the SE update and assume that the noise eigenvalues contained in the diagonal matrix have been replaced by their average value, d^(n), to produce a sphericalized noise subspace. We must then apply block Householder transformations to concentrate all of the power in the new data vector into a single component of the noise subspace.
The update is thus deflated to an (r + 1) × (r + 1) embedded eigenproblem as shown below:

$$
\begin{aligned}
S &= \alpha D + \gamma\gamma^T &(66.13)\\
  &= H(\alpha D + \zeta\zeta^T)H^T, \quad \zeta = H^T\gamma &(66.14)\\
  &= \begin{bmatrix} I_r & 0\\ 0 & H^{(n)}_{n-r}\end{bmatrix}
     \left(\alpha\begin{bmatrix} D^{(s)}_r & 0\\ 0 & d^{(n)} I_{n-r}\end{bmatrix}
     + \zeta\zeta^T\right)
     \begin{bmatrix} I_r & 0\\ 0 & H^{(n)}_{n-r}\end{bmatrix}^T &(66.15)\\
  &= \begin{bmatrix} I_r & 0\\ 0 & H^{(n)}_{n-r}\end{bmatrix}
     \begin{bmatrix} Q_{r+1} & 0\\ 0 & I_{n-r-1}\end{bmatrix}
     \begin{bmatrix} \tilde D^{(s)}_r & 0 & 0\\ 0 & \tilde d^{(n)} & 0\\ 0 & 0 & \alpha d^{(n)} I_{n-r-1}\end{bmatrix}
     \begin{bmatrix} Q_{r+1} & 0\\ 0 & I_{n-r-1}\end{bmatrix}^T
     \begin{bmatrix} I_r & 0\\ 0 & H^{(n)}_{n-r}\end{bmatrix}^T &(66.16)\\
  &= H(Q\tilde D Q^T)H^T &(66.17)
\end{aligned}
$$

where

$$
\begin{aligned}
\zeta &= H^T\gamma = \left[\,\gamma^{(s)T},\ |\gamma^{(n)}|,\ 0^T_{(n-r-1)\times 1}\,\right]^T, &(66.18)\\
H^{(n)}_{n-r} &= I_{n-r} - 2\,\frac{v^{(n)}(v^{(n)})^T}{(v^{(n)})^T v^{(n)}}, &(66.19)\\
H &= \begin{bmatrix} I_r & 0\\ 0 & H^{(n)}_{n-r}\end{bmatrix}, &(66.20)\\
\gamma &= \begin{bmatrix} \gamma^{(s)}\\ \gamma^{(n)}\end{bmatrix}, \quad \gamma^{(s)} \in R^r,\ \gamma^{(n)} \in R^{n-r}, &(66.21)\\
v^{(n)} &= \gamma^{(n)} + |\gamma^{(n)}|\begin{bmatrix} 1\\ 0_{(n-r-1)\times 1}\end{bmatrix}. &(66.22)
\end{aligned}
$$

The superscripts (s) and (n) denote signal and noise subspace, respectively, and the subscripts denote the sizes of the various block matrices. In the actual implementation of the SE algorithm, the Householder transformations are not explicitly computed, as we will see below. Moreover, it should be stressed that the Householder transformation does not change the span of the noise subspace, but [...] the data space into two subspaces: the signal subspace is not sphericalized and all of its eigencomponents are explicitly tracked, whereas the noise subspace is sphericalized and not explicitly tracked (to save computation). Using the properties of the Householder transformation, it can be shown that the single component of the noise subspace that mixes with the signal subspace via Q_{r+1} is given by u^(n) [...]. Once the eigencomponents of the core (r + 1) × (r + 1) problem are found, the signal subspace eigenvectors can be updated as

$$
\begin{aligned}
\tilde U &= \left[\,\tilde U^{(s)},\ \tilde u^{(n)}\,\right] &(66.32)\\
         &= \left[\,\underbrace{U^{(s)} G^{(s)}}_{n\times r},\ \underbrace{u^{(n)}}_{n\times 1}\,\right] Q_{r+1}, &(66.33)
\end{aligned}
$$

where updating the new noise eigenvector is not necessary (if the noise subspace is resphericalized). The complexity of the core eigenproblem is O(r³) and updating the signal eigenvectors is O(nr²). Thus, the SE [...]
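To make the recursion concrete, the following is our numpy rendering of one SE update, assembled from Eqs. (66.6)-(66.22) and (66.32)-(66.33). It is a sketch under the stated model, not the authors' optimized implementation: the (1 − α) normalization is dropped as in Eq. (66.6), the Householder matrix is never formed explicitly, and the final resphericalization rule is one natural choice.

```python
import numpy as np

def se_update(Us, ds, dn, x, alpha):
    """One Signal Eigenstructure (SE) update (sketch). Us: n x r dominant
    eigenvectors; ds: their eigenvalues; dn: the common (sphericalized)
    noise eigenvalue; x: new snapshot; alpha: fading factor."""
    n, r = Us.shape
    beta = Us.conj().T @ x                    # signal components of x (Eq. 66.8)
    e = x - Us @ beta                         # residual in the noise subspace
    gn = np.linalg.norm(e)                    # |gamma^(n)|
    un = e / gn                               # u^(n): the single noise direction
                                              #   that mixes with the signal
    g = beta / np.abs(beta)                   # diag of G^(s); assumes no zeros
    z = np.concatenate([np.abs(beta), [gn]])  # deflated, real zeta
    # deflated (r+1) x (r+1) core problem: alpha*D + zeta zeta^T
    C = np.diag(alpha * np.append(ds, dn)) + np.outer(z, z)
    dt, Q = np.linalg.eigh(C)                 # EVD of the core matrix
    dt, Q = dt[::-1], Q[:, ::-1]              # sort descending
    Unew = np.column_stack([Us * g, un]) @ Q  # [U^(s) G^(s), u^(n)] Q_{r+1}
    # keep the r dominant components; resphericalize the noise level
    dn_new = (dt[r] + (n - r - 1) * alpha * dn) / (n - r)
    return Unew[:, :r], dt[:r], dn_new
```

Each update only ever touches the r tracked eigenpairs and two scalars; the per-update cost is dominated by the final n × (r + 1) matrix product, which is where the O(nr²) complexity quoted above comes from.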
[...] projection approximation approach that uses RLS techniques to update the signal subspace. The projection approximation subspace tracker (PAST) algorithm computes an arbitrary basis for the signal subspace in 3nr + O(r²) flops per update. The PASTd algorithm (which uses deflation to track the individual eigenvalues and vectors of the signal subspace) requires 4nr + O(n) flops per update. Both methods produce [...]

[Fragment of a summary table from Section 66.4 comparing subspace tracking methods; the column headings do not survive in this preview. Among the O(nr²) methods listed: stabilized signal eigenstructure (SE) update [8, 10]; sphericalized transposed QR SVD update [12]; sphericalized conjugate gradient SVD update [18]; SWEDE [16]; gradient-based EVD updates with Gram Schmidt orthogonalization [44, 45]. Among the O(nr) methods listed: signal averaged 2-level (SA2) update [8]; signal averaged 4-level (SA4) update [9, 11]; projection approximation [...]; neural network based updates [35].]

References

[...]
[7] Comon, P. and Golub, G.H., Tracking a few extreme singular values and vectors in signal processing, Proc. IEEE, 78(8), 1327–1343, Aug. 1990.
[8] DeGroat, R.D., Non-iterative subspace tracking, IEEE Trans. Sig. Proc., SP-40(3), 571–577, Mar. 1992.
[9] DeGroat, R.D. and Dowling, E.M., Spherical subspace tracking: analysis, convergence and detection schemes, in 26th Annual Asilomar Conf. on Signals, Systems, and Computers (invited paper), Oct. 1992.
[10] DeGroat, R.D. and Roberts, R.A., Efficient, numerically stabilized rank-one eigenstructure updating, IEEE Trans. ASSP, ASSP-38(2), 301–316, Feb. 1990.
[11] Dowling, E.M., DeGroat, R.D., Linebarger, D.A. and Ye, H., Sphericalized SVD updating for subspace tracking, in Moonen, M. and De Moor, B., Eds., SVD and Signal Processing III: Algorithms, Applications and Architectures, Elsevier, 1995, 227–234.
[12] Dowling, E.M., Ammann, L.P. and DeGroat, R.D., A TQR-iteration based SVD for real time angle and frequency tracking, [...].
[...]
[16] [...], On-line subspace algorithms for tracking moving sources, IEEE Trans. on Sig. Proc., 42(9), 2319–2330, Sept. 1994.
[17] Ferzali, W. and Proakis, J.G., Adaptive SVD algorithm and applications, in SVD and Signal Processing II, Elsevier, 1992, 14–21.
[18] Fu, Z. and Dowling, E.M., Conjugate gradient eigenstructure tracking for adaptive spectral estimation, IEEE Trans. Sig. Proc., 43(5), 1151–1160, May 1995.
[19] Golub, G.H., Some modified matrix eigenvalue problems, SIAM Review, 15, 318–334, 1973.
[...]
[21] Karasalo, I., Estimating the covariance matrix by signal subspace averaging, IEEE Trans. ASSP, ASSP-34(1), 8–12, Feb. 1986.
[22] Karhunen, J., Adaptive algorithms for estimating eigenvectors of correlation type matrices, in ICASSP-84, 14.6.1–14.6.4, 1984.
[23] Linebarger, D.A., DeGroat, R.D., Dowling, E.M., Stoica, P. and Fudge, G., Incorporating a priori information into MUSIC - algorithms and analysis, Signal Processing, 46(1), 85–104, 1995.
[...]
[34] [...], The RO-FST and TQR-SVD adaptive subspace tracking algorithms, IEEE Trans. SP, SP-43, 2016–2018, Aug. 1995.
[35] Reddy, V.U., Mathew, G. and Paulraj, A., Some algorithms for eigensubspace estimation, Digital Signal Processing, 5, 97–115, 1995.
[36] Regalia, P.A. and Loubaton, P., Rational subspace estimation using adaptive lossless filters, IEEE Trans. on Sig. Proc., 40, 2392–2405, Oct. 1992.
[37] Schreiber, R., [...].
[...]
[40] [...], International Conference on Acoustics, Speech and Sig. Proc., 1416–1419, 1995.
[41] Viberg, M. and Stoica, P., Eds., Signal Processing, 50(1–2), Special Issue on Subspace Methods for Detection and Estimation, April 1996.
[42] Xu, G., Zha, H., Golub, G. and Kailath, T., Fast and robust algorithms for updating signal subspaces, IEEE Trans. CAS, 41(6), 537–549, June 1994.
[43] Yang, B., Projection approximation subspace tracking, IEEE Trans. Sig. Proc., 43(1), 95–107, Jan. 1995.
[...]
