Dynamic Vision for Perception and Control of Motion - Ernst D. Dickmanns, Part 8





6.4 Recursive Estimation Techniques for Dynamic Vision

    P = E{[x̂(k) − x(k)] [x̂(k) − x(k)]^T}
      = E{[δx* − K (C δx* + v)] [δx* − K (C δx* + v)]^T}.            (6.14)

The output error covariance with δy = C δx* + v and Equation 6.11 is

    E{δy δy^T} = E{(C δx* + v)(C δx* + v)^T} = C P* C^T + R.         (6.15)

Finding the matrix K that minimizes P under the given side constraints yields

    K = P* C^T (C P* C^T + R)^(−1).                                  (6.16)

With this result, Equation 6.14 reduces to the well-known form for updating the error covariance matrix:

    P(k) = P*(k) − K(k) C(k) P*(k).                                  (6.17)

6.4.2.3 Complete Recursion Loop in Kalman Filtering

These results are summarized in the following table as algorithmic steps for the basic version of the extended Kalman filter for real-time vision (4-D approach):

1. Find a good guess for the n initial state components x*(0) to start with [initial hypothesis k = 0, x*(0) = x̂₀].
2. Find an estimate for the probability distribution of this initial state in terms of the first and second moments of a Gaussian distribution: the mean value x̂₀ and the (n×n) error covariance matrix P₀. The diagonal terms of P₀ are the variance components σᵢ².
3. Find an estimate for the covariance matrices Q = E{v v^T} of system noise v and R = E{w w^T} of measurement noise w.

Entry point for recursively running loop:

4. Increment time index k = k + 1.
5. Compute expected values for the state variables at time k [state prediction x*(k)]:
       x*(k) = A(k−1) x̂(k−1) + B(k−1) u(k−1).
6. Predict the expected error covariance matrix P*(k) (components: state prediction and noise corruption):
       P*(k) = A(k−1) P(k−1) A(k−1)^T + Q(k−1).
7. Compute the expected measurement values y*(k) = h[x*(k), p_m] and the (total) Jacobian matrix C = ∂y*/∂x|_N as first-order approximations around this point.
8. Compute the gain matrix for prediction-error feedback:
       K = P* C^T (C P* C^T + R)^(−1).
9. Update the state variables (innovation) to the best estimates, including the last measurement values:
       x̂(k) = x*(k) + K(k) [y(k) − y*(k)].
10. Update the error covariance matrix (innovation of statistical properties):
       P(k) = P*(k) − K(k) C(k) P*(k).
    Go back to step 4 for the next loop.

Steps for monitoring convergence and progress in the vision process will be discussed in connection with applications.

6.4.3 The Stabilized Kalman Filter

Equation 6.17 is not well conditioned for use on a computer with limited word length. Canceling of significant digits may lead to asymmetries not allowed by the definition of a covariance matrix P. Numerically better conditioned is the following equation (I = identity matrix):

    P = (I − K C) P* (I − K C)^T + K R K^T,                          (6.18)

which results from multiplying the reformulated Equation 6.16 by the braced term:

    K (C P* C^T + R) = P* C^T   →   K R = (I − K C) P* C^T.          (6.19)

Multiplying the right-hand form by K^T from the right and shifting terms to the left side of the equality sign yields

    −(I − K C) P* C^T K^T + K R K^T = 0.                             (6.20)

Adding this (= 0) to Equation 6.17 in the form P = (I − K C) P* yields Equation 6.18.

6.4.4 Remarks on Kalman Filtering

Filter tuning, that is, selecting proper parameter settings for the recursive algorithm, is essential for good performance of the method. A few points of view will be discussed in this section.
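For reference in the tuning discussion that follows, the recursion loop of Section 6.4.2.3 can be stated compactly in code. This is a minimal sketch in Python/NumPy with illustrative interfaces only: the names, and the functions h and jac_h standing for the measurement model and its Jacobian, are placeholders and not taken from the original text.

```python
import numpy as np

def ekf_loop(x0, P0, A, B, Q, R, h, jac_h, controls, measurements):
    """Basic EKF recursion for real-time vision (steps 4-10 above)."""
    x_hat, P = x0.copy(), P0.copy()
    for u, y in zip(controls, measurements):   # step 4: next time step k
        x_star = A @ x_hat + B @ u             # step 5: state prediction x*(k)
        P_star = A @ P @ A.T + Q               # step 6: covariance prediction P*(k)
        y_star = h(x_star)                     # step 7: predicted measurements y*(k)
        C = jac_h(x_star)                      #         Jacobian around x*(k)
        K = P_star @ C.T @ np.linalg.inv(C @ P_star @ C.T + R)  # step 8: gain
        x_hat = x_star + K @ (y - y_star)      # step 9: innovation
        P = P_star - K @ C @ P_star            # step 10: covariance update
        yield x_hat, P
```

The matrix inversion in step 8 is precisely what the sequential and factorized variants of Sections 6.4.5 and 6.4.6 avoid.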
6.4.4.1 Influence of the Covariance Matrices Q and R on the Gain Matrix K

The basic effects of the covariance matrices Q and R on the gain matrix K can be seen from the following simple scalar example. Consider the case of just one state variable x, which may be measured directly as the output variable r:

    x_k = a x_(k−1) + b u_(k−1) + v_(k−1),    Q(k) = E{v_k²} = σ_x²;
    y_k = c x_k + w_k, measured as r_m,       R(k) = E{w_k²} = σ_y².   (6.21)

The variance of the prediction error then is

    p*(k) = p(k−1) + σ_x²,                                           (6.22)

and for the scalar gain factor K, one obtains

    K(k) = [p(k−1) + σ_x²] / [p(k−1) + σ_x² + σ_y²].                 (6.23)

Two limiting cases for the noise terms σ_x² and σ_y² show the effect of different variance values on the progress of the iteration. For σ_y² << σ_x², i.e., very good measurements, a K value just below 1 results; for example,

    r̂(k) = r*(k) + 0.95 [r_m(k) − r*(k)] = 0.05 r*(k) + 0.95 r_m(k).   (6.24)

This tells us, for example, that when the initial conditions are poor guesses and the measurement data are rather reliable, the R elements should be chosen much smaller than the Q elements. But when the dynamic models are good and the measurements are rather noisy, they should be selected the other way round. For σ_y² >> σ_x², i.e., for very poor measurement data, a small filter gain factor K results; for example,

    r̂(k) = r*(k) + 0.1 [r_m(k) − r*(k)] = 0.9 r*(k) + 0.1 r_m(k).   (6.25)

With poor measurements, the variance of the estimation error will also be improved only a little:

    p(k) = p*(k) − K p*(k) = (1 − K) p*(k);
    e.g., K = 0.1  =>  p(k) = 0.9 p*(k).                             (6.26)

These considerations carry over to the multivariate case.

6.4.4.2 Kalman Filter Design

For the observer (not treated here), the specification of the filter gain matrix is achieved by assigning desired eigenvalues to the closed-loop matrix (pole positions). In the Kalman filter, the gain matrix K(k) is obtained automatically from the covariance matrices Q(k) and R(k), assumed to be known for all t_k, as well as from the initial error covariance matrix P*₀. K(k) thus results from P*₀, R(k), and Q(k): P*₀ may be obtained by estimating the quality of the in-advance knowledge about x₀ (how good/certain is x₀?). R(k) may be derived from an analysis of the measurement principle used. In contrast, finding good information about Q(k) is often hard: Usually, neither the inaccuracies of the dynamic model nor the perturbations acting on the process are known.

In practice, heuristic methods are often used to determine K(k): Initially, change Q(k) only, for fixed R(k), based on engineering judgment. Then, after achieving useful results, also change R(k) as long as the filter does not show the desired performance level (filter tuning).

The initial transient behavior is essentially determined by the choice of P*₀: The closer P*₀ is chosen to the (unknown) true P*(k), the better the convergence will be. However, the larger the initial covariance values P*₀ selected, the more the measured values will be taken into account. (Compare the single-variable case: Large initial values of p*(k) resulted in K values close to 1, i.e., strong influence of the measured values.) But note that this can be afforded only if the measurement noise R is not too large; otherwise, there may be no convergence at all. Filter tuning is often referred to as an art because of the difficulties in correctly grasping all the complex interrelationships.
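The limiting cases of the scalar example above can be checked numerically. A small sketch; the variance values are arbitrary illustrations, not from the text:

```python
def scalar_gain(p_prev, var_x, var_y):
    """Scalar gain of Equation 6.23 for a directly measured state (a = c = 1)."""
    p_star = p_prev + var_x              # Equation 6.22: prediction variance
    return p_star / (p_star + var_y)     # Equation 6.23

# Good measurements (sigma_y^2 << sigma_x^2): K just below 1.
print(scalar_gain(p_prev=1.0, var_x=1.0, var_y=0.1))   # ~0.95, cf. Eq. 6.24
# Poor measurements (sigma_y^2 >> sigma_x^2): small K.
print(scalar_gain(p_prev=0.1, var_x=0.1, var_y=1.8))   # ~0.1,  cf. Eq. 6.25
```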
6.4.4.3 Computational Load

For m measurement values, an (m × m) matrix has to be inverted to compute the filter gain matrix K(k). In the numerical approach, first a transformation into triangular form via the Householder transformation is done; then recursive elimination is performed.

Especially in the stabilized form (Equation 6.18, called the "Joseph" form), the computing load is considerable. This has led to alternative methods: Sequential Kalman filtering and UDU^T-factored Kalman filtering will be discussed briefly. Gauss-Markov estimators with postfiltering are not discussed here; in this approach, in which more measurement equations than state variables have to be available, the noise-corrupted state variables arrived at by a Gauss-Markov estimator are filtered in a second step. Its advantage lies in the fact that for each state variable a filter factor between 0 and 1 can be set separately. Details on several recursive filter types may be found in [Thrun et al. 2005].

6.4.5 Kalman Filter with Sequential Innovation

At time t_k, the m_k measurement values

    y(k) = h[x(k)] + w(k);    E{w} = 0;    E{w w^T} = R               (6.27)

are obtained. For the innovation of the state variables (see Section 6.4.2),

    x̂(k) = x*(k) + K(k) [y(k) − y*(k)],

the (n × m_k)-dimensional filter gain matrix

    K(k) = P*(k) C(k)^T [C(k) P*(k) C(k)^T + R(k)]^(−1)

is obtained by inversion of a matrix of dimension m_k × m_k. The central statement for so-called "sequential innovation" is:

Under certain conditions, an m_k-dimensional measurement vector can always be treated like m_k scalar single measurements that are exploited sequentially; this reduces the matrix inversion to a sequence of scalar divisions.

6.4.5.1 Preconditions

The m_k measurement values have to be uncorrelated; that means R(k) has to be (essentially) a diagonal matrix. If the covariance matrix R(k) is blockwise diagonal, the corresponding subvectors may be treated sequentially. Correlated measurements may always be transformed into uncorrelated pseudo-measurement data, which then may be treated sequentially [Maybeck 1979, p. 375].

In image processing, different features derived from video signals can be assumed to be uncorrelated if the attributes of such features are gained by different processors and with different algorithms. Attributes of a single feature (e.g., the y- and z-components in the image plane) may well be correlated.

6.4.5.2 Algorithm for Sequential Innovation

The first scalar measurement at a sampling point t_k,

    y₁(k) = g₁[x(k)] + w₁(k),    E{w₁} = 0,    E{w₁²} = s₁²,          (6.28)

leads to the first partial innovation [in the sequel, the index k will be given only when needed for clarity; k_i in the following is a column vector of Kalman gains]:

    dx̂₁ = k₁ (y₁ − y₁*).                                            (6.29)

The (n×1)-dimensional filter vector k₁ is computed by

    k₁ = P₀ c₁^T (c₁ P₀ c₁^T + s₁²)^(−1),    with P₀ = P*(k).        (6.30)

Here, c₁ is the first row (vector) of the Jacobian matrix C(k), corresponding to y₁. After this first partial innovation, the following terms may be computed:

    x̂₁(k) = x*(k) + dx̂₁(k)   (improved estimate for the state),
    P₁ = P₀ − k₁ c₁ P₀         (improved error covariance matrix).    (6.31)

For the I-th partial innovation,

    dx̂_I = dx̂_(I−1) + k_I (y_I − ŷ_I*),    I = 1, …, m_k,           (6.32)

momentarily the best estimate ŷ_I* for y_I has to be inserted.
This estimate is based on the improved estimated value for x,

    x̂_(I−1)(k) = x*(k) + dx̂_(I−1);                                  (6.33)

it may be computed either via the complete perspective mapping equation,

    ŷ_I*(k) = h_I[x̂_(I−1)(k)],                                      (6.34)

or, to save computer time, by correction of the measurement value y*_(I,0) predicted (at t_(k−1) for t_k):

    ŷ_I*(k) = y*_(I,0)(k) + c_I dx̂_(I−1)
    (simplification: c_I ≈ const for t = t_k).                        (6.35)

With this, the complete algorithm for sequential innovation is:

Initialization:
    y*_(I,0) = g_I[x*(k)],    I = 1, …, m_k;
    s_I² = E{w_I²};
    c_I = ∂g_I/∂x evaluated at x*(k),    I = 1, …, m_k;
    dx̂₀ = 0;    P₀ = P*(k).                                          (6.36)

Recursion for I = 1, …, m_k:
    ŷ_I* = y*_(I,0) + c_I dx̂_(I−1)   (simplification: c_I ≈ const for t = t_k);
    k_I = P_(I−1) c_I^T / (c_I P_(I−1) c_I^T + s_I²);
    dx̂_I = dx̂_(I−1) + k_I (y_I − ŷ_I*);
    P_I = P_(I−1) − k_I c_I P_(I−1).
Final step at I = m_k:
    x̂(k) = x*(k) + dx̂_(m_k),    P(k) = P_(m_k).                     (6.37)
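A direct transcription of Equations 6.36 and 6.37 might look as follows. This is a sketch assuming the constant-Jacobian simplification of Equation 6.35; all interfaces are illustrative placeholders:

```python
import numpy as np

def sequential_innovation(x_star, P_star, y, y_star0, C, s2):
    """Sequential innovation (Eqs. 6.36-6.37): one scalar division per
    measurement replaces the (m_k x m_k) matrix inversion.
    y_star0: measurements predicted for t_k; C: Jacobian rows c_I;
    s2: measurement noise variances s_I^2."""
    dx = np.zeros(len(x_star))               # dx_0 = 0
    P = P_star.copy()                        # P_0 = P*(k)
    for i in range(len(y)):
        c = C[i]
        y_pred = y_star0[i] + c @ dx         # Eq. 6.35
        k = P @ c / (c @ P @ c + s2[i])      # gain vector k_I (scalar division)
        dx = dx + k * (y[i] - y_pred)        # partial innovation
        P = P - np.outer(k, c @ P)           # P_I = P_(I-1) - k_I c_I P_(I-1)
    return x_star + dx, P                    # final step at I = m_k
```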
6.4.6 Square Root Filters

Appreciable numerical problems may be encountered even in the so-called stabilized form, especially with computers of limited word length (such as the navigation computers onboard vehicles of the not too distant past): Negative eigenvalues of P(k) could occur numerically for poorly conditioned models, for example, with
- very good measurements (eigenvalues of R small compared to those of P*), amplified, for example, by large eigenvalues of P₀;
- large differences in the observability of single state variables, i.e., large differences in the eigenvalues of P(k).

These problems led to the development of so-called square root filters [Potter 1964] for use in the Apollo onboard computer. The equations for recursive computation of P are substituted by equations for a recursion in the square root matrix P^(1/2). The filter gain matrix K then is computed directly from P^(1/2). This has the advantage that the range of eigenvalues of the covariance matrices P and R representable within the limited word length is increased considerably. For example, let the variance of the state variable "position of the space vehicle" be σ_x² = 10⁶ m² and the variance of the measurement value "angle" be σ_y² = 10⁻⁴ rad². The numerical range for P, R then is 10¹⁰, while the numerical range for P^(1/2), R^(1/2) is 10⁵. For modern general-purpose microprocessors, this aspect may no longer be of concern. However, specialized hardware could still gain from exploiting this problem formulation.

What is the "square root of a matrix"? For each positive semidefinite matrix P, there exist multiple roots P^(1/2) with the property P^(1/2) (P^(1/2))^T = P. Of special interest for numerical computations are triangular decompositions. Cholesky decompositions use "lower" triangular matrices L: P = L L^T. With the so-called Carlson filter, "upper" triangular matrices U′ are used: P = U′ U′^T. Their disadvantage is the relatively costly computation of at least n scalar square roots for each recursion step. This may be bypassed by using triangular matrices with 1s on the diagonal (unit upper triangular matrices U) and a diagonal matrix D that may be treated as a vector.

6.4.6.1 Kalman Filter with UDU^T-Factorized Covariance Matrix

This method was developed by Thornton and Bierman [Thornton, Bierman 1980]. It can be summarized by the following bullets:

- Use the decomposition of the covariance matrix

      P = U D U^T.                                                    (6.38)

  Due to the property U D^(1/2) = U′, this filter is considered to belong to the class of square root filters.

- In the recursion equations, the following replacement has to be made:

      P*  by  U* D* U*^T.                                             (6.39)

Modified innovation equations

- Start from the sequential formulation of the innovation equations (with k_i as column vectors of the Kalman gain matrix and c_i as row vectors of the Jacobian matrix):

      k_i = P_(i−1) c_i^T / (c_i P_(i−1) c_i^T + s_i²),               (6.40)
      dx̂_i = dx̂_(i−1) + k_i (y_i − ŷ_i*),                           (6.41)
      P_i = P_(i−1) − k_i c_i P_(i−1).                                (6.42)

  Equation 6.40 is introduced into Equation 6.42; substituting P_(i−1) by U⁻ D⁻ U⁻^T (with U⁻ D⁻ U⁻^T = U* D* U*^T for i = 1) and P_i by U⁺ D⁺ U⁺^T (with U⁺ D⁺ U⁺^T = U D U^T for i = m_k), there follows

      U⁺ D⁺ U⁺^T
      = U⁻ { D⁻ − [D⁻ (U⁻^T c_i^T)] [D⁻ (U⁻^T c_i^T)]^T / (c_i U⁻ D⁻ U⁻^T c_i^T + s_i²) } U⁻^T.   (6.43)

- With the definition of two column vectors with n components each,

      f = U⁻^T c_i^T,    e = D⁻ f,                                    (6.44)

  and a scalar q, called the innovation covariance,

      q = c_i U⁻ D⁻ U⁻^T c_i^T + s_i² = s_i² + f^T e = s_i² + Σ_j d_j f_j²,   (6.45)

  the innovation of U and D,

      U⁺ D⁺ U⁺^T = U⁻ (D⁻ − e e^T / q) U⁻^T,                          (6.46)

  is reduced to a decomposition of the symmetric matrix (D⁻ − e e^T/q). With (D⁻ − e e^T/q) = U⁰ D⁰ U⁰^T, there follows

      U⁺ D⁺ U⁺^T = (U⁻ U⁰) D⁰ (U⁻ U⁰)^T,
      that is,  U⁺ = U⁻ U⁰  and  D⁺ = D⁰.                             (6.47)

- Computation of the (n×1) filter vector k_i follows directly from Equation 6.40:

      k_i = U⁻ e / q.                                                 (6.48)

- The recursive innovation for each of the m_k measurement values then is:
  1. f = U^T c_i^T.
  2. q_i = s_i² + Σ_j D_jj f_j²,   j = 1, …, n.
  3. e = D f.
  4. k_i = U e / q_i.
  5. Recursive computation of U⁰_i and D⁰_i from the decomposition of (D − e e^T/q_i).
  6. U⁺_i = U U⁰_i.
  7. D⁺_i = D⁰_i.
  8. Set U = U⁺_i and D = D⁺_i for the next measurement value.
  9. Repeat steps 1 to 8 for i = 1, …, m_k.
  10. Set U = U⁺_(m_k) and D = D⁺_(m_k) as starting values for the prediction step.

Extrapolation of the covariance matrix in UDU^T-factorized form:

The somewhat involved derivation is based on a Gram-Schmidt vector orthogonalization and may be found in [Maybeck 1979, p. 396] or [Thornton, Bierman 1977].

- The starting point is the prediction of the covariance matrix,

      P*(k) = A(k−1) P(k−1) A(k−1)^T + Q(k−1),                        (6.49)

  with the decompositions P = U D U^T and Q = U_q D_q U_q^T (U_q = I if Q is diagonal).

- U* and D* are to be found such that

      U* D* U*^T = A U D U^T A^T + U_q D_q U_q^T.                     (6.50)

  For this purpose, the following matrices are defined:

      W = (A U, U_q),          dimension: n × 2n,                     (6.51)
      D̃ = diag(D, D_q),       dimension: 2n × 2n.                    (6.52)

- The recursion obtained then is:

  Initialization: W ≡ [a₁, a₂, …, a_n]^T (the vectors a_i are the rows of W).

  Recursion backwards for κ = n, n−1, …, 1:
  1. h = D̃ a_κ, i.e., h_j = D̃_jj a_(κ,j),  j = 1, 2, …, 2n;  dim{h}: 2n.   (6.53)
  2. D*_(κκ) = a_κ^T h   (scalar).                                    (6.54)
  3. d_κ = h / D*_(κκ)   (normalizes diagonal terms to 1).            (6.55)
  4a. For j = 1, …, κ−1:  U*_(jκ) = a_j^T d_κ.                        (6.56a)
  4b. Replace a_j by (a_j − U*_(jκ) a_κ).                             (6.56b)

6.4.6.2 Computational Aspects

The matrices U and D are stored as compact vectors. The recursion may be further sped up for sparse transition matrices A(k). In this case, index vectors for the rows of A indicate at what column index the nonzero elements start and end; only these elements contribute to the vector products.
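In practice, steps 1 to 8 and the backward Gram-Schmidt recursion are usually coded as in-place updates. The following sketches (Python/NumPy, illustrative interfaces; d and q denote the diagonals of D and of a diagonal Q) follow the common Bierman and Thornton formulations, which realize the same equations without ever forming a full covariance matrix; they are one possible implementation, not the book's original code:

```python
import numpy as np

def ud_measurement_update(x, U, d, c, y, s2):
    """Bierman-type scalar innovation (steps 1-8 above); U is unit upper
    triangular, d the diagonal of D; both are modified in place."""
    f = U.T @ c                      # step 1: f = U^T c_i^T
    b = d * f                        # step 3: e = D f (accumulates to U e below)
    alpha = s2                       # running innovation covariance
    for j in range(len(x)):
        alpha_prev = alpha
        alpha += f[j] * b[j]         # after last j: alpha = q_i (Eq. 6.45)
        d[j] *= alpha_prev / alpha   # diagonal of D+ (steps 5/7, implicit)
        p = -f[j] / alpha_prev
        for i in range(j):           # implicit U+ = U U0 (step 6)
            U_old = U[i, j]
            U[i, j] = U_old + b[i] * p
            b[i] += U_old * b[j]
    k = b / alpha                    # step 4: k_i = U e / q_i (Eq. 6.48)
    return x + k * (y - c @ x), U, d

def ud_time_update(U, d, A, q):
    """Thornton-type prediction via weighted Gram-Schmidt (Eqs. 6.51-6.56);
    assumes positive weights (nonsingular D)."""
    n = U.shape[0]
    W = np.hstack([A @ U, np.eye(n)])       # Eq. 6.51; rows are the vectors a_j
    Dt = np.concatenate([d, q])             # Eq. 6.52 stored as a 2n vector
    U_star, d_star = np.eye(n), np.zeros(n)
    for kappa in range(n - 1, -1, -1):      # backward recursion
        h = Dt * W[kappa]                   # Eq. 6.53
        d_star[kappa] = W[kappa] @ h        # Eq. 6.54
        dk = h / d_star[kappa]              # Eq. 6.55
        for j in range(kappa):
            U_star[j, kappa] = W[j] @ dk              # Eq. 6.56a
            W[j] = W[j] - U_star[j, kappa] * W[kappa] # Eq. 6.56b
    return U_star, d_star
```

Because both routines touch only the upper triangle and the diagonal, U and D can indeed be stored as the compact vectors mentioned above.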
In an example with n = 8 state variables and m_k = 6 measurement values, this approach needed half the computing time of a Kalman filter with sequential innovation, with simultaneously improved stability.

6.4.6.3 General Remarks

With the UDU^T-factored Kalman filter, an innovation, that is, an improvement of the existing state estimate, may be performed with just one measurement value (observability given). The innovation covariance of Equation 6.45,

    q_i = c_i U D U^T c_i^T + s_i² = s_i² + Σ_j d_j f_j²,
    f = U^T c_i^T,   j = 1, …, n,                                     (6.57)

is a measure of the momentary estimation quality; it provides information about an error zone around the predicted measurement value y_i* that may be used to judge the quality of the arriving measurement values. If a measured value is too far off, it may be wise to discard it altogether.
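A typical realization of this outlier test is a simple gate on the squared residual; the 3σ threshold below is a common engineering choice, not a value prescribed by the text:

```python
def accept_measurement(y_i, y_pred, q_i, n_sigma=3.0):
    """Gate an incoming measurement against the error zone around the
    prediction; q_i is the innovation covariance of Equation 6.57."""
    return (y_i - y_pred) ** 2 <= n_sigma ** 2 * q_i
```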
6.4.7 Conclusion of Recursive Estimation for Dynamic Vision

The background and general theory of recursive estimation underlying the 4-D approach to dynamic vision have been presented in this chapter. Before the overall integration for complex applications is discussed in detail, a closer look at major components, including the specific initialization requirements, is in order: This will be done for road detection and tracking in Chapters 7 to 10; for vehicle detection and tracking, it is discussed in Chapter 11. The more complex task with many vehicles on roads will be treated in Chapter 14, after the resulting system architecture for integrating the diverse types of knowledge needed has been looked at in Chapter 13.

The sequence of increasing complexity will start here with an ideally flat road, no perturbations in pitch on the vehicle, and the camera mounted directly onto the vehicle body. Horizontal road curvature and self-localization on the road are to be recognized [this has been dubbed SLAM (simultaneous localization and mapping) since the late 1990s]. As next steps, systematic variations in road (lane) width and recognition of vertical curvature are investigated. These items have to be studied in conjunction to disambiguate the image inversion problem in 3-D space over time. There is also a cross-connection between the pitching motion of the vehicle and the estimation of lane or road width. It even turned out that temporal frequencies have to be separated when precise state estimation is looked for: Depending on the loading conditions of cars, their stationary pitch angle will be different; due to gas consumption over longer periods of driving, this quasi-stationary pitch angle will slowly change over time. To adjust this parameter, which is important for visual range estimation, two separate estimation loops with different time constants and specific state variables are necessary (the actual dynamic pitch angle θ_V and the quasi-stationary bias angle θ_b). This shows that in the 4-D approach, even subtle points in understanding visual perception can be implemented straightforwardly.

Recursive estimation is not just a mathematical tool that can be applied in an arbitrary way (without having to bear the consequences); rather, the models both for motion in the real world and for perspective mapping (including motion blur!) have to be kept in mind when designing a high-performance dynamic vision system. Provisions for intelligent control of feature extraction in the task context will be given for these application domains.

As mentioned before, most gain in efficiency is achieved by looking at the perception and control process in closed-loop fashion and by exploiting the same spatiotemporal models for all subtasks involved. The following scheme summarizes the recursive estimation loop with sequential innovation and UDU^T-factorization as it has been used in standard form, with minor modifications, over almost two decades.

Complete scheme with recursion loops in sequential Kalman filtering and UDU^T-factorization:

1. Find a good guess for the n initial state components x*(0) to start with [initial hypothesis k = 0, x*(0) = x̂₀].
2. Find an estimate for the probability distribution of this initial state in terms of the first and second moments of a Gaussian distribution: the mean value x̂₀ and the (n×n) error covariance matrix P₀ in factorized form U₀ D₀ U₀^T. The terms on the main diagonal of matrix D₀ now are the variance components σᵢ².
3. If the plant noise covariance matrix Q is diagonal, the starting value for U_q is the identity matrix I, and with Q = U_q D_q U_q^T, D_q contains the (guessed or approximately known) values on the diagonal of the covariance matrix E{v v^T}. In the sequential formulation, the measurement covariance matrix R = E{w w^T} is replaced by the diagonal terms s_i² = E{w_i²} (Equation 6.28).

Entry point for recursively running main loop (over time):

4. Increment time index k = k + 1.
5. Compute expected values for the state variables [state prediction x*(k)]:
       x*(k) = A(k−1) x̂(k−1) + B(k−1) u(k−1).
6. Compute the expected measurement values y*(k) = h[x*(k), p_m] and the (total) Jacobian matrix C = ∂y*/∂x|_N as first-order approximations around this point.
7. Predict the expected error covariance matrix P*(k) in factorized form (Equations 6.43 and 6.49):
   7.1 Initialize the matrices W = (A U, U_q) (dimension n × 2n, Equation 6.51) and D̃ = diag(D, D_q) (dimension 2n × 2n, Equation 6.52).
   Entry point for sequential computation of the expected error covariance matrix:
   7.2 Recursion backward for κ = n, n−1, …, 1 (Equations 6.53 – 6.56):
       h = D̃ a_κ, i.e., h_j = D̃_jj a_(κ,j), j = 1, 2, …, 2n (dim{h}: 2n);
       D*_(κκ) = a_κ^T h (scalar);
       d_κ = h / D*_(κκ);
       for j = 1, …, κ−1:  U*_(jκ) = a_j^T d_κ;  replace a_j by (a_j − U*_(jκ) a_κ);
       go back to step 7.2 for prediction of the error covariance matrix.
8. Read the new measurement values y(k); m_k in size (may vary with time k).
9. Recursive innovation for each of the m_k measurement values; initialize with i = 1.
   Entry point for sequential innovation (Equations 6.40 – 6.48):
   9.1: f = U^T c_i^T;
   9.2: q_i = s_i² + Σ_j D_jj f_j²,  j = 1, …, n;
   9.3: e = D f;
   9.4: k_i = U e / q_i;
   9.5: recursive computation of U⁰_i and D⁰_i from (D − e e^T / q_i);
   9.6: U⁺_i = U U⁰_i;
   9.7: D⁺_i = D⁰_i;
   9.8: set U = U⁺_i and D = D⁺_i for the next measurement value. Increment i by 1 and go back to 9.1 for the next inner innovation loop as long as i ≤ m_k.
   (Loop 9 yields the matrices U⁺_(m_k) and D⁺_(m_k) for prediction step 7.)
10. Monitor progress of the perception process; go back to step 4 for the next time step k = k+1. This loop may run indefinitely, controlled by higher perception levels, possibly triggered by step 10.
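Tying the pieces together, one cycle of this scheme might look as follows; the sketch reuses the two U-D routines given with Section 6.4.6.2 above, and all interfaces remain illustrative assumptions:

```python
def udu_filter_cycle(x, U, d, A, B, u_ctrl, q, jac_h, y, s2):
    """One pass through steps 4-10 of the scheme: prediction in U-D form,
    then sequential scalar innovations. The predicted measurements y*(k)
    of step 6 are formed inside the scalar update (via c @ x)."""
    x = A @ x + B @ u_ctrl                 # step 5: state prediction x*(k)
    U, d = ud_time_update(U, d, A, q)      # step 7: P*(k) = U* D* U*^T
    C = jac_h(x)                           # step 6: Jacobian around x*(k)
    for i in range(len(y)):                # step 9: inner innovation loop
        x, U, d = ud_measurement_update(x, U, d, C[i], y[i], s2[i])
    return x, U, d                         # step 10: monitor, then repeat
```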
[…]

… components for spatiotemporal recognition of road borderlines while driving on one side of the road. It begins with the historic formulation of the dynamic vision process in the mid-1980s, which allowed driving at speeds up to ~ 100 km/h on a free stretch of Autobahn in 1987 with a half dozen 16-bit Intel 8086 microprocessors and 1 PC on …

Table 7.1. Pitch angles θ of the optical axes for certain look-ahead ranges L, for two camera elevations HK above a planar ground, and for two focal lengths (resolutions):

   (0)      (1)      (2)     (3)        (4)       |  (5)      (6)     (7)        (8)
            VaMoRs: camera elevation HK = 1.8 m   |  VaMP: HK = 1.3 m above ground
   L        θ        dL/dθ   dL/dpel    dL/dpel   |  θ        dL/dθ   dL/dpel    dL/dpel
   in m     in deg   m/deg   m/pixel    m/pixel   |  in deg   m/deg   m/pixel    m/pixel
                             (8 pel/°)  (30 pel/°)|                   (8 pel/°)  (30 pel/°)
   100     −1.03     97      12.1       3.2       |  −0.745    134     16.7      4.47
    60     −1.72     19.4     2.43      0.65      |  −1.24      48      6.0      1.6
    40     −2.58     15.5     1.94      0.52      |  −1.86      21.5    2.7      0.72
    20     −5.14      3.91    0.49      0.13      |  −3.72       5.4    0.68     0.18
    15     −6.84      2.2     0.275     0.073     |  −4.95       3.04   0.38     0.10
    10    −10.2       1.0     0.125     0.033     |  −7.4        1.36   0.17     0.045
     5    −19.8       0.27    0.034     0.009     | −14.6        0.358  0.045    0.012
   1.8    −45         0.06    0.008     0.002     | (−35.8)    (0.066) (0.008)  (0.002)

… per image row (pixel) for two different focal lengths realizing resolutions of 8, respectively, 30 pixels per degree; columns 3 and 7 are typical for images with 240 lines and a 30° vertical field of view, which has often been used as a standard lens, while columns 4 and 8 correspond to a tele-lens with an 8° vertical field of view. A 1° change in pitch angle at a look-ahead range of 20 m leads to look-ahead changes of 3.9 m (~ 20 %, column 2) for the van and 5.4 m (27 %, column 6) for the car; …
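The entries of Table 7.1 follow from simple flat-ground viewing geometry. The sketch below states the relations as I read them from the table (θ = arctan(HK/L) below the horizon; dL/dθ = HK/sin²θ scaled to degrees); it reproduces the tabulated magnitudes to rounding and is an interpretation, not a formula quoted from the book:

```python
import math

def look_ahead_sensitivity(HK, L, pel_per_deg):
    """Pitch angle of the ray hitting planar ground at range L for camera
    elevation HK, and the change of look-ahead range per degree and per
    pixel of pitch (the table lists the angles as negative, below horizon)."""
    theta = math.degrees(math.atan2(HK, L))                   # depression angle
    dL_dtheta = HK / math.sin(math.radians(theta)) ** 2 * math.pi / 180
    return theta, dL_dtheta, dL_dtheta / pel_per_deg

# VaMoRs (HK = 1.8 m) at L = 20 m with the standard lens (8 pel/deg):
print(look_ahead_sensitivity(1.8, 20.0, 8.0))  # ~(5.14 deg, 3.91 m/deg, 0.49 m/pixel)
```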
… while handling larger perturbations in pitch is treated in Section 9.3. The perception of crossroads is introduced in Chapter 10. Contrary to the approaches selected by the AI community in the early 1980s [Davis et al. 1986; Hanson, Riseman 1987; Kuan et al. 1987; Pomerleau 1989; Thorpe et al. 1987; Turk et al. 1987; Wallace et al. 1986], which start from quasi-static single-image interpretation, the 4-D approach …

… elementary parts of recursive estimation developed for dynamic vision have been introduced. They avoid the need for storing image sequences by combining 3-D shape models and 3-D motion models of objects with the theory of perspective mapping (measurement models) and feature extraction methods. All this together …

… general approach for dynamic scene understanding by research groups at UniBwM [Dickmanns, Graefe 1988; Dickmanns, Wünsche 1999]. Recognition of well-structured roads and the ego-state of the vehicle carrying the camera relative to the road was one of the first very successful application domains. This method has been extensively used long before the SLAM acronym (simultaneous localization and mapping) became …

… (e.g., for higher speeds). To gain experience, it has sometimes been included in the model; note that for five state variables, the number of entries in the upper left block of the F-matrix goes up by 56 % compared to just four.] The two models for vehicle state estimation (upper left) and for road perception (lower right) have been coded as separate fourth-order and third-order models with cross-feeds of V·C_0h into the yaw equation, and of the first three …

… small speeds, T_ψ … and T_β → 0 (Equation 3.38), a lateral motion model of third order according to the block diagram in Figure 7.3a results. [Figure 7.3a: third-order lateral motion model with steering rate input u = dλ/dt]
