Dynamic Vision for Perception and Control of Motion - Ernst D. Dickmanns, Part 7

5.3 The Unified Blob-edge-corner Method (UBM)

L_segmin = 4 pyramid pixels, or 16 original image pixels. Figure 5.35 shows the results for column search (top left), for row search (top right), and the superimposed results, where pixels missing in one reconstructed image have been added from the other one, if available.

The number of blobs to be handled is at least one order of magnitude smaller than for the full representation underlying Figure 5.34. For a human observer, recognizing the road scene is not difficult despite the missing pixels. Since homogeneous regions in road scenes tend to be more extended horizontally, the superposition "column over row" (bottom right) yields the more natural-looking result. Note, however, that up to now no merging of blob results from one stripe to the next has been done by the program. When humans look at a scene, they cannot help doing this; it happens involuntarily and apparently without special effort. For example, nobody has trouble recognizing the road by its almost homogeneously shaded gray values. The transition from 1-D blobs in separate stripes to 2-D blobs in the image, and on to a 3-D surface in the outside world, forms the next steps of interpretation in machine vision.

5.3.2.5 Extended Shading Models in Image Regions

The 1-D blob results from stripe analysis are stored in a list for each stripe and are accumulated over the entire image. Each blob is characterized by
1. the image coordinates of its starting point (row, respectively, column number and its position j_ref in it),
2. its extension L_seg in the search direction,
3. the average intensity I_c at its center, and
4. the average gradient components of the intensity, a_u and a_v.

This allows easy merging of the results of two neighboring stripes. Figure 5.36a shows the start of 1-D blob merging when the threshold conditions for a merger are satisfied in the region of overlap of adjacent stripes:

(1) The amount of overlap should exceed a lower bound, say, two or three pixels.
(2) The difference in image intensity at the center of overlap should be small. Since the 1-D blobs are given by their cg-position (u_bi = j_ref + L_seg,i/2), their "weights" (proportional to the segment length L_seg,i), and their intensity gradients, the intensities at the center of overlap can be computed in both stripes (I_covl1 and I_covl2) from the distance between the blob center and the center of overlap, exploiting the gradient information. This yields the condition for acceptance

$|I_{covl1} - I_{covl2}| < \mathrm{DelIthreshMerg}$ .   (5.37)

(3) The intensity gradients should also lie within small common bounds (difference < DelSlopeThrsh, see Table 5.1).

[Figure 5.36. Merging of overlapping 1-D blobs in adjacent stripes to a 2-D blob when intensity and gradient components match within threshold limits: (a) merging of the first two 1-D blobs (L_seg1 = 12, L_seg2 = 6) to a 2-D blob with S_2D = L_seg1 + L_seg2 = 18; (b) recursive merging of a 2-D blob with an overlapping 1-D blob (L_seg3 = 6) to an extended 2-D blob, cg_2D = cg_Sold shifted to cg_Snew.]
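A minimal sketch of such a merge step: the three acceptance tests just listed, plus the center-of-gravity update that the following paragraphs derive (Eqs. 5.38-5.40). All names, threshold values, and the data layout are assumptions for illustration, not from the book:

```python
from dataclasses import dataclass

# Hypothetical container for the 1-D blob attributes listed above (items 1-4).
@dataclass
class Blob1D:
    j_ref: float   # starting coordinate along the search direction
    L_seg: float   # extension in the search direction ("weight")
    I_c: float     # average intensity at the blob center
    a_u: float     # average intensity gradient along the search direction
    a_v: float     # average intensity gradient across stripes

    @property
    def u_b(self):
        # cg position along the stripe: u_b = j_ref + L_seg / 2
        return self.j_ref + 0.5 * self.L_seg

def try_merge(b1, b2, cg1, cg2,
              min_overlap=2.0,        # condition (1): two or three pixels
              DelIthreshMerg=8.0,     # condition (2), Eq. (5.37); value assumed
              DelSlopeThrsh=0.1):     # condition (3); value assumed
    """b1: blob (or accumulated 2-D blob) from the previous stripe, b2: new
    1-D blob; cg1/cg2: their (u, v) centers of gravity. Returns the merged
    cg and weight, or None if any acceptance test fails."""
    # (1) overlap of the two segments along the search direction
    lo = max(b1.j_ref, b2.j_ref)
    hi = min(b1.j_ref + b1.L_seg, b2.j_ref + b2.L_seg)
    if hi - lo < min_overlap:
        return None
    u_ovl = 0.5 * (lo + hi)            # center of the overlap region
    # (2) extrapolate both blob intensities to the overlap center
    # using the stored gradient along the search direction
    I1 = b1.I_c + b1.a_u * (u_ovl - b1.u_b)
    I2 = b2.I_c + b2.a_u * (u_ovl - b2.u_b)
    if abs(I1 - I2) > DelIthreshMerg:                  # Eq. (5.37)
        return None
    # (3) gradient components must agree within small common bounds
    if (abs(b1.a_u - b2.a_u) > DelSlopeThrsh or
            abs(b1.a_v - b2.a_v) > DelSlopeThrsh):
        return None
    # balance of moments, Eqs. (5.38)-(5.40): shift of the combined cg;
    # for the recursive case of Figure 5.36b, pass S_2Dold as b1.L_seg
    S_2D = b1.L_seg + b2.L_seg
    du_S = (cg2[0] - cg1[0]) * b2.L_seg / S_2D
    dv_S = (cg2[1] - cg1[1]) * b2.L_seg / S_2D
    return (cg1[0] + du_S, cg1[1] + dv_S), S_2D
```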
If these conditions are all satisfied, the position of the new cg after the merger is computed from a balance of moments on the line connecting the cg's of the regions to be merged; the new cg of the combined area S_2D thus has to lie on this line. This yields the equation (see Figure 5.36a)

$\delta u_S \, L_{seg1} - (\delta u_{cg} - \delta u_S) \, L_{seg2} = 0$ ,   (5.38)

and, solved for the shift δu_S with S_2D = L_seg1 + L_seg2, the relation

$\delta u_S = \delta u_{cg} \, L_{seg2} / (L_{seg1} + L_{seg2}) = \delta u_{cg} \, L_{seg2} / S_{2D}$   (5.39)

is obtained. The same is true for the v-component:

$\delta v_S = \delta v_{cg} \, L_{seg2} / (L_{seg1} + L_{seg2}) = \delta v_{cg} \, L_{seg2} / S_{2D}$ .   (5.40)

Figure 5.36b shows the same procedure for merging an existing 2-D blob, given by its weight S_2D, its cg-position cg_2D, and the segment boundaries in the last stripe. To have easy access to the latter data, the last stripe is kept in memory for one additional stripe-evaluation loop, even after the merger to 2-D blobs has been finished. The equations for the shift in cg are identical to those above if L_seg1 is replaced by S_2Dold. The case shown in Figure 5.36b demonstrates that the position of the cg is not necessarily inside the 2-D blob region.

A 2-D blob is finished when no area of overlap is found in the new stripe. The size S_2D of the 2-D blob is finally given by the sum of the L_seg values of all stripes merged. The contour of the 2-D blob is given by the concatenated lower and upper bounds of the 1-D blobs merged. The minimum (u_min, v_min) and maximum (u_max, v_max) coordinate values yield the encasing box of area

$A_{encbox} = (u_{max} - u_{min}) \cdot (v_{max} - v_{min})$ .   (5.41)

A measure of the compactness of a blob is the ratio

$R_{compBlob} = S_{2D} / A_{encbox}$ .

For close-to-rectangular shapes it is close to 1; for circles it is π/4; for a triangle it is 0.5; and for an oblique wide line it tends toward 0. The 2-D position of the blob is given by the coordinates of its center of gravity, u_cg and v_cg. This robust feature makes highly visible blobs attractive for tracking. (A code sketch of these descriptors follows at the end of Section 5.3.2.6.)

5.3.2.6 Image Analysis on Two Scales

Since coarse resolution may be sufficient for the near range and for the sky, fine-scale image analysis can be confined to that part of the image containing the regions farther away. After the road has been identified nearby, the boundaries of these image regions can easily be described around the subject's lane as looking like a "pencil tip" (possibly bent). Figure 5.37 shows results demonstrating that with highest resolution (within the white rectangles), almost no image details are lost, both for the horizontal (left) and the vertical search (right).

The size and position of the white rectangle can be adjusted to the actual situation, depending on the scene content analyzed by higher system levels. Conveniently, the upper left and lower right corners need to be given to define the rectangle; in general, the region of high resolution should be symmetrical around the horizon and around the center of the subject's lane at the look-ahead distance of interest.

[Figure 5.37. Foveal-peripheral differentiation of image analysis, shown by the "imagined scene" reconstructed from symbolic representations on different scales: outer part 44.11, inner part 11.11, from video fields compressed 2:1 after processing; left: horizontal search, right: vertical search, with the Hofmann operator.]
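Two small helpers for the quantities just defined, as a sketch with an assumed data layout: the encasing box and compactness of Eq. (5.41), and the two-corner definition of the high-resolution rectangle:

```python
def blob_shape_descriptors(bounds):
    """'bounds' lists (u_start, u_end) per merged stripe; a stripe height
    of 1 is assumed here. Returns the encasing-box area (Eq. 5.41) and
    the compactness ratio R_compBlob = S_2D / A_encbox."""
    u_min = min(b[0] for b in bounds)
    u_max = max(b[1] for b in bounds)
    v_extent = float(len(bounds))               # one stripe per entry
    S_2D = sum(b[1] - b[0] for b in bounds)     # sum of the L_seg values
    A_encbox = (u_max - u_min) * v_extent       # Eq. (5.41)
    return A_encbox, S_2D / A_encbox            # ~1 box, pi/4 circle, 0.5 triangle

def fovea_rectangle(horizon_row, lane_center_col, height, width):
    """High-resolution region for the two-scale analysis: symmetric around
    the horizon and the subject lane's center at the look-ahead distance;
    defined, as in the text, by its upper-left and lower-right corners."""
    return ((horizon_row - height // 2, lane_center_col - width // 2),
            (horizon_row + height // 2, lane_center_col + width // 2))
```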
5.3.3 The Corner Detection Algorithm

Many different types of nonlinearities may occur on different scales. For a long time, so-called 2-D features have been studied that allow avoiding the "aperture problem"; this problem occurs for features that are well defined in only one of the two degrees of freedom, such as edges (sliding along the edge). Since general texture analysis requires significantly more computing power than is available for real-time applications in the general case right now, we also concentrate on those points of interest that allow reliable recognition and computation of feature flow [Moravec 1979; Harris, Stephens 1988; Tomasi, Kanade 1991; Haralick, Shapiro 1993].

5.3.3.1 Background for Corner Detection

Based on the references just mentioned, the following algorithm for corner detection, fitting into the mask scheme for planar approximation of the intensity function, has been derived and proven efficient. The structural matrix

$N = \begin{pmatrix} n_{11} & n_{12} \\ n_{21} & n_{22} \end{pmatrix} = \begin{pmatrix} f_{r1N}^2 + f_{r2N}^2 & 2 f_{rN} f_{cN} \\ 2 f_{rN} f_{cN} & f_{c1N}^2 + f_{c2N}^2 \end{pmatrix}$   (5.42)

has been defined with the terms from Equations 5.17 and 5.18. Note that, compared to the terms used by the previously named authors, the entries on the main diagonal are formed from local gradients (in and between half-stripes), while those on the cross-diagonal are twice the product of the gradient components of the mask (averages of the local values). With Equation 5.18, this corresponds to half the sum of all four cross-products:

$n_{12} = n_{21} = 0.5 \sum_{i,j=1,2} f_{riN} \, f_{cjN}$ .   (5.43)

This selection yields proper tuning for separating corners from planar elements in all possible cases (see below). The determinant of the matrix is

$\det N = w = n_{11} n_{22} - n_{12}^2$ .   (5.44)

With the equations mentioned, this becomes

$\det N = 0.75 \, n_{11} n_{22} - 0.5 \, (n_{11} f_{c1} f_{c2} + n_{22} f_{r1} f_{r2}) - f_{r1} f_{r2} f_{c1} f_{c2}$ .   (5.45)

Haralick calls w = det N the "Beaudet measure of cornerness"; it is formed there, however, with a different cross-diagonal term, $n_{12} = \sum f_{ri} f_{ci}$.

The eigenvalues λ of the structural matrix are obtained from

$\det \begin{pmatrix} n_{11} - \lambda & n_{12} \\ n_{12} & n_{22} - \lambda \end{pmatrix} = 0, \quad \text{i.e.,} \quad \lambda^2 - (n_{11} + n_{22}) \, \lambda + (n_{11} n_{22} - n_{12}^2) = 0$ .   (5.46)

With the quadratic enhancement term Q,

$Q = (n_{11} + n_{22}) / 2$ ,   (5.47)

there follows for the two eigenvalues λ1, λ2:

$\lambda_{1,2} = Q \left[ 1 \pm \sqrt{1 - \det N / Q^2} \right]$ .   (5.48)

Normalizing these with the larger eigenvalue λ1 yields

$\lambda_{1N} = 1; \qquad \lambda_{2N} = \lambda_2 / \lambda_1 = \left( 1 - \sqrt{1 - \det N / Q^2} \right) / \left( 1 + \sqrt{1 - \det N / Q^2} \right)$ .   (5.49)

Haralick defines a measure of circularity q as

$q = \frac{4 \, \lambda_1 \lambda_2}{(\lambda_1 + \lambda_2)^2} = 1 - \left[ \frac{\lambda_1 - \lambda_2}{\lambda_1 + \lambda_2} \right]^2$ .   (5.50)

With Equation 5.48 this reduces to

$q = \det N / Q^2 = 4 \, (n_{11} n_{22} - n_{12}^2) / (n_{11} + n_{22})^2$ ,   (5.51)

and in normalized terms (see Equation 5.49), there follows

$q = 4 \, \lambda_{2N} / (1 + \lambda_{2N})^2$ .   (5.52)

It can thus be seen that the normalized second eigenvalue λ_2N and the circularity q are different expressions of the same property. In both terms, the absolute magnitudes of the eigenvalues are lost. Threshold values for corner points are chosen as lower limits for the determinant det N = w and the circularity q:

$w > w_{min}$ and $q > q_{min}$ .   (5.53)

In a postprocessing step, within a user-defined window, only the maximal value w = w* is selected.
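The quantities of Eqs. (5.42) through (5.53) follow directly from the four local (half-stripe) gradient components of a mask. A sketch of the computation; function names and the demo threshold values are assumptions:

```python
import math

def structure_matrix_terms(fr1, fr2, fc1, fc2):
    """Corner measures from the four local (half-stripe) gradient
    components of one mask, Eqs. (5.42)-(5.52)."""
    n11 = fr1**2 + fr2**2              # main diagonal: local gradients
    n22 = fc1**2 + fc2**2
    fr = 0.5 * (fr1 + fr2)             # mask (global) gradient components
    fc = 0.5 * (fc1 + fc2)
    n12 = 2.0 * fr * fc                # cross diagonal, Eqs. (5.42)/(5.43)
    detN = n11 * n22 - n12**2          # Eq. (5.44), the cornerness w
    Q = 0.5 * (n11 + n22)              # Eq. (5.47)
    if Q == 0.0:                       # completely flat mask
        return {"detN": 0.0, "Q": 0.0, "q": 0.0, "lam1": 0.0, "lam2N": 0.0}
    root = math.sqrt(max(0.0, 1.0 - detN / Q**2))
    return {"detN": detN,
            "Q": Q,
            "q": detN / Q**2,                       # circularity, Eq. (5.51)
            "lam1": Q * (1.0 + root),               # Eq. (5.48)
            "lam2N": (1.0 - root) / (1.0 + root)}   # Eq. (5.49)

def is_corner_candidate(t, w_min=1e-4, q_min=0.7):
    """Acceptance test of Eq. (5.53); both limits are domain dependent."""
    return t["detN"] > w_min and t["q"] > q_min
```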
Harris was the first to use the eigenvalues of the structural matrix for threshold definition. For each location in the image, he defined the performance value

$R_H(y, z) = \det N - \alpha \, (\mathrm{trace} \, N)^2$ ,   (5.54)

where

$\det N = \lambda_1 \lambda_2$ and $\mathrm{trace} \, N = 2Q = \lambda_1 + \lambda_2$ ,   (5.55)

yielding

$R_H = \lambda_1 \lambda_2 - \alpha \, (\lambda_1 + \lambda_2)^2$ .   (5.56a)

With κ = λ2/λ1 (= λ_2N, see Equation 5.49), there follows

$R_H = \lambda_1^2 \left[ \kappa - \alpha \, (1 + \kappa)^2 \right]$ .   (5.56b)

For R_H ≥ 0 and 0 ≤ κ ≤ 1, α has to be selected in the range

$0 \le \alpha \le \kappa / (1 + \kappa)^2 \le 0.25$ .   (5.57)

Corner candidates are points for which R_H > 0 is valid; larger values of α yield fewer corners, and vice versa. Values around α = 0.04 to 0.06 are recommended. This condition on R_H is equivalent to (from Equations 5.44, 5.53, and 5.54)

$\det N > 4 \, \alpha \, Q^2$ .   (5.58)

Kanade et al. (1991) (KLT) use the following corner criterion: After a smoothing step, the gradients are computed over a region D·D (2 ≤ D ≤ 10 pixels). The reference frame for the structural matrix is rotated so that the larger eigenvalue λ1 points in the direction of the steepest gradient in the region:

$\lambda_{1KLT} = f_r^2 + f_c^2$ .   (5.59)

λ1 is thus normal to a possible edge direction. A corner is assumed to exist if λ2 is sufficiently large (above a threshold value λ_2thr). From the relation det N = λ1·λ2, the corresponding value λ_2KLT can be determined:

$\lambda_{2KLT} = \det N / \lambda_{1KLT}$ .   (5.60)

If

$\lambda_{2KLT} > \lambda_{2thr}$ ,   (5.61)

the corresponding image point is put into a candidate list. At the end, this list is sorted in decreasing order of λ_2KLT, and all points in the neighborhood with smaller λ_2KLT values are deleted. The threshold value has to be derived from a histogram of λ2 by experience in the domain. For larger D, the corners tend to move away from the correct position.

5.3.3.2 Specific Items in Connection with Local Planar Intensity Models

Let us first have a look at the meaning of the threshold terms circularity (q in Equation 5.50) and trace N (Equation 5.55), as well as of the normalized second eigenvalue (λ_2N in Equation 5.49), for the specific case of four symmetrical regions in a 2 × 2 mask, as given in Figure 5.20. Let the perfect rectangular corner in intensity distribution, as in Figure 5.38b, be given by the local gradients f_r1 = f_c1 = 0 and f_r2 = f_c2 = −K. Then the global gradient components are f_r = f_c = −K/2. The determinant (Equation 5.44) then has the value det N = (3/4)·K⁴. The term Q (Equation 5.47) becomes Q = K², and the "circularity" q according to Equation 5.51 is

$q = \det N / Q^2 = 3/4 = 0.75$ .   (5.62)

The two eigenvalues of the structure matrix are λ1 = 1.5·K² and λ2 = 0.5·K², so that trace N = 2Q = 2·K²; this yields the normalized second eigenvalue λ_2N = 1/3. Table 5.2 contains this case as its second row. Other special cases, according to the intensity distributions given in Figure 5.38, are also shown. The maximum circularity of 1 occurs for the checkerboard corners of Figure 5.38a (row 1 in Table 5.2); the normalized second eigenvalue also assumes its maximal value of 1 in this case. The case of Figure 5.38c (third row in the table) shows the more general situation with three different intensity levels in the mask region; here, circularity is still close to 1 and λ_2N is above 0.8. The case of Figure 5.38e, with constant average mask intensity in the stripe, is shown in row 5 of Table 5.2: Circularity is rather high at q = 8/9 ≈ 0.89, and λ_2N = 0.5. Note that from the intensity and gradient values of the whole mask, this feature can be detected only by g_z (I_M and g_y remain constant along the search path). By setting the minimum required circularity as the threshold value for acceptance to

$q_{min} = 0.7$ ,   (5.63)

all significant cases of intensity corners will be picked.
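These special cases can be checked numerically with the structure_matrix_terms sketch from Section 5.3.3.1 above (K = 1):

```python
K = 1.0
# Fig. 5.38a, checkerboard corner: fr = (+K, -K), fc = (+K, -K)
t = structure_matrix_terms(K, -K, K, -K)
print(t["q"], t["lam2N"])               # 1.0 1.0
# Fig. 5.38b, ideal single corner: fr1 = fc1 = 0, fr2 = fc2 = -K
t = structure_matrix_terms(0.0, -K, 0.0, -K)
print(t["detN"], t["q"], t["lam2N"])    # 0.75 0.75 0.3333...
# Planar case: all four local gradients equal (-K)
t = structure_matrix_terms(-K, -K, -K, -K)
print(t["q"], t["lam2N"])               # 0.0 0.0
```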
[Figure 5.38. Local intensity gradients on the mel level for calculation of circularity q in corner selection: (a) ideal checkerboard corner, q = 1; (b) ideal single corner, q = 0.75; (c) slightly more general case (three intensity levels, closer to planar); (d) ideal shading, one direction only (linear case for interpolation, q ≈ 0); (e) demanding (idealized) corner feature for extraction (see text).]

Figure 5.38d shows an almost planar intensity surface with gradients −K in the column direction and a very small gradient ±ε in the row direction (K >> ε). In this case, all characteristic values (det N, circularity q, and the normalized second eigenvalue λ_2N) go to zero (row 4 in the table). The last case in Table 5.2 shows the special planar intensity distribution with the same value (−K) for all local and global gradients; this corresponds to Figure 5.20c. It can be seen that circularity and λ_2N are zero; this nice feature for the general planar case is achieved through the factor 2 on the cross-diagonal of the structure matrix, Equation 5.42.

When too many corner candidates are found, it is possible to reduce their number not by lifting q_min but by introducing another threshold value, traceN_min, that limits the sum of the two eigenvalues. According to the main diagonals of Equations 5.42 and 5.46, this means prescribing a minimal value for the sum of the squares of all local gradients in the mask. This parameter depends on the absolute magnitude of the gradients and thus has to be adapted to the actual situation at hand. It is interesting to note that the planarity check (on 2-D curvatures in the intensity space) for interpolating a tangent plane to the actual intensity data has a similar effect to a low bound on the threshold value traceN_min.

Table 5.2. Some special cases demonstrating the characteristic values of the structure matrix in corner selection as a function of a single gradient value K. Trace N is twice the value of Q (column 4).

Example      | Local gradient values            | det N (Eq. 5.44) | Q (Eq. 5.47) | Circularity q  | λ1            | λ_2N = λ2/λ1
Figure 5.38a | +K, −K (2 each)                  | 4 K⁴             | 2 K²         | 1              | 2 K²          | 1
Figure 5.38b | 0, −K (2 each)                   | (3/4) K⁴         | K²           | 0.75           | 1.5 K²        | 0.3333
Figure 5.38c | 0, −K (f_c1, f_r2), −2K          | 5 K⁴             | 3 K²         | 5/9 ≈ 0.556    | 5 K²          | 0.2
Figure 5.38d | f_ri = ±ε (ε << K); f_ci = −K    | 4 ε² K² ≈ 0      | ε² + K² ≈ K² | ≈ 4 ε²/K² ≈ 0  | ≈ 2 (K² − ε²) | ≈ 2 ε²/K² ≈ 0
Figure 5.38e | f_ri = ±K; f_c1 = 0, f_c2 = −2K  | 8 K⁴             | 3 K²         | 8/9            | 4 K²          | 0.5
Planar       | f_i,j = −K (4×)                  | 0                | 2 K²         | 0              | 4 K²          | 0

5.3.4 Examples of Road Scenes

Figure 5.39 (left) shows the nonplanar regions found in horizontal search (white bars) with ErrMax = 3%. Of these, only the locations marked by cyan crosses have been found to satisfy the corner conditions q_min = 0.6 and traceN_min = 0.11. The figure on the right-hand side shows results with the same parameters, except for a reduction of the threshold value to traceN_min = 0.09, which leaves an increased number of corner candidates (over 60 % more).
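The threshold filtering discussed here, together with the postprocessing step from Section 5.3.3.1 (keep only the locally maximal cornerness w = det N within a user-defined window), might look as follows; the candidate tuple layout and the window size are assumptions:

```python
def select_corners(candidates, q_min=0.6, traceN_min=0.11, win=5.0):
    """candidates: list of (u, v, w, q, traceN) tuples, one per nonplanar
    mask location, with w = det N. Threshold on circularity and trace,
    then keep only the strongest w within +-win pixels (w* selection)."""
    strong = [c for c in candidates if c[3] > q_min and c[4] > traceN_min]
    strong.sort(key=lambda c: -c[2])     # strongest cornerness first
    kept = []
    for c in strong:
        if all(abs(c[0] - k[0]) > win or abs(c[1] - k[1]) > win
               for k in kept):
            kept.append(c)
    return kept
```

Lowering traceN_min (e.g., from 0.11 to 0.09, as in Figure 5.39) admits weaker gradient sums and therefore more candidates, which matches the counts reported below.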
[Figure 5.39. Corner candidates derived from regions with planar interpolation residues > 3% (white bars), with parameters (m, n, m_c, n_c) = (3, 3, 2, 1). The circularity threshold q_min = 0.6 eliminates most of the candidates stemming from digitized edges (like lane markings). The number of corner candidates can be reduced by lifting the threshold on the sum of the eigenvalues, traceN_min, from 0.09 (right image: 103 and 121 candidates) to 0.11 (left image: 63 and 72 candidates); cyan = row search, red = column search.]

Note that all oblique edges (showing minor corners from digitization), which were picked up by the nonplanarity check, did not pass the corner test (no crosses in either figure). The crosses mark corner candidates; from neighboring candidates, the strongest one still has to be selected by comparing results from different scales. Here, m_c = 2 and n_c = 1 mean that two original pixels are averaged to a single cell value; nine of those form a mask element (18 pixels), so that the entire mask covers 18 × 4 = 72 original pixels.

Figure 5.40 demonstrates all results obtainable by the unified blob-edge-corner method (UBM) in a busy highway scene in one pass: The upper left subfigure shows the original full video image with shadows from the cars on the right-hand side. The image is analyzed on the pixel level with mask elements of size four pixels (total mask = 16 pixels). Recall that masks are shifted in steps of 1 in the search direction and in steps of mel size in the stripe direction; about 10⁵ masks result for the evaluation of each image. The lower two subfigures show the small nonplanarity regions detected (about 1540), marked by white bars. In the left figure, the edge elements extracted in row search (yellow, ≈ 1000) and in column search (red, ≈ 3214) are superimposed; even the shadow boundaries of the vehicles and the reflections from the subject's own motor hood (lower part) are picked up. The circularity threshold q_min = 0.6 and traceN_min = 0.2 filter up to 58 corner candidates out of the 1540 nonplanar mask results; row and column search yield almost identical results (lower right). More candidates can be found by lowering ErrMax and traceN_min.

[Figure 5.40. Features extracted with the unified blob-edge-corner method (UBM): bidirectionally nonplanar intensity distributions (white regions in the lower two subfigures, ≈ 1540), edge elements and corner candidates (column search in red), and linearly shaded blobs. One vertical and one horizontal example are shown (gray straight lines in the upper right subfigure, with dotted lines connecting to the intensity profiles between the images). Red and green are the intensity profiles in the two half-stripes used in UBM; about 4600 1-D blobs resulted, yielding an average of 15 blobs per stripe. The top right subfigure is reconstructed from symbolically represented features only (no original pixel values). Collections of features moving in conjunction designate objects in the world.]

Combining edge elements to lines and smooth curves, and merging 1-D blobs to 2-D (regional) blobs, will drastically reduce the number of features. These compound features are more easily tracked by prediction-error feedback over time. Sets of features moving in conjunction, e.g., blobs with adjacent edges and corners, are indications of objects in the real world; for these objects, motion can be predicted, and changes in feature appearance can be expected (see the following chapters). Computing power is lately becoming available for handling the features mentioned in several image streams in parallel. With these tools, machine vision is maturing for application to rather complex scenes with multiple moving objects. However, quite a bit of development work has yet to be done.

Conclusion of Section 5.3 (UBM): Figure 5.41 shows a road scene with all features extractable by the unified blob-edge-corner method (UBM) superimposed. The image-processing parameters were: MaxErr = 4%; m = n = 3, m_c = 2, n_c = 1 (33.21); anglefact = 0.8 and IntGradMin = 0.02 for edge detection; q_min = 0.7 and traceN_min = 0.06 for corner detection; Lseg_min = 4 and VarLim = 64 for shaded blobs. Features extracted were 130 corner candidates, 1078 nonplanar regions (1.7%), 4223 ~vertical edge elements, 5918 ~horizontal edge elements, and 1492 linearly shaded intensity blobs from row search plus 1869 from column search; the latter have been used only partially, to fill gaps remaining from the row search. The nonplanar regions remaining are the white areas. Only an image with several colors can convey the information contained to a human observer. The entire image is reconstructed from the symbolic representations of the features stored.

[Figure 5.41. "Imagined" feature set extracted with the unified blob-edge-corner method (UBM): linearly shaded blobs (gray areas), horizontally (green) and vertically (red) extracted edges, corners (blue crosses), and nonhomogeneous regions (white).]

The combination of linearly shaded blobs with edges and corners alleviates the generation of good object hypotheses, especially when characteristic sub-objects such as wheels can be recognized. With the background knowledge that wheels are circular (for smooth running on flat ground) with their center on a horizontal axis in 3-D space, the elliptical appearance in the image allows immediate determination of the aspect angle without any reference to the body on which the wheel is mounted. Knowing some state variables, such as the aspect angle, reduces the search space for object instantiation at the beginning of the recognition process after detection.
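The grouping of "sets of features moving in conjunction" mentioned above can be prototyped with a simple greedy clustering of per-feature displacement vectors between two frames; this sketch is illustrative only, not the book's method:

```python
import numpy as np

def group_by_common_motion(displacements, tol=0.5):
    """Greedily group features whose image displacement between two frames
    is nearly identical; such sets 'moving in conjunction' are candidates
    for rigid objects in the world."""
    unassigned = list(range(len(displacements)))
    groups = []
    while unassigned:
        seed = unassigned.pop(0)
        group = [seed]
        for i in unassigned[:]:
            if np.linalg.norm(displacements[i] - displacements[seed]) < tol:
                group.append(i)
                unassigned.remove(i)
        groups.append(group)
    return groups

# Two vehicles and static background yield three clusters of flow vectors.
d = np.array([[4.0, 0.1], [4.2, 0.0], [-1.0, 0.0], [-1.1, 0.2], [0.0, 0.0]])
print(group_by_common_motion(d))   # [[0, 1], [2, 3], [4]]
```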
5.4 Statistics of Photometric Properties of Images

According to the results of the planar shading models (Section 5.3.2.4), a host of information is now available for analyzing the distribution of image intensities, in order to adjust the parameters of image processing to the lighting conditions [Hofmann 2004]. For each image stripe, characteristic values are given with the parameters of the shading models of each segment. Let us assume that the intensity function of a stripe can be described by n_s segments. Then the average intensity b_S of the entire stripe, over all segments i of length l_i and average local intensity b_i, is given by

$b_S = \sum_{i=1}^{n_s} (l_i \, b_i) \Big/ \sum_{i=1}^{n_s} l_i$ .   (5.64)

For a larger region G, segmented into n_G image stripes, there then follows

$b_G = \left[ \sum_{j=1}^{n_G} \sum_{i=1}^{n_{Sj}} (l_{ij} \, b_{ij}) \right] \Big/ \left[ \sum_{j=1}^{n_G} \sum_{i=1}^{n_{Sj}} l_{ij} \right]$ .   (5.65)

The values of b_S and b_G are different from the mean value of the image intensity, since this is given by [...]
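A direct transcription of Eqs. (5.64) and (5.65); the nested (length, intensity) data layout is an assumption:

```python
def stripe_mean_intensity(segments):
    """Average intensity b_S of one stripe, Eq. (5.64).
    'segments' is a list of (l_i, b_i) pairs: segment length and average
    local intensity of the shading-model segment."""
    num = sum(l * b for l, b in segments)
    den = sum(l for l, _ in segments)
    return num / den

def region_mean_intensity(stripes):
    """Average intensity b_G of a region of several stripes, Eq. (5.65).
    'stripes' is a list of per-stripe segment lists as above."""
    num = sum(l * b for segs in stripes for l, b in segs)
    den = sum(l for segs in stripes for l, _ in segs)
    return num / den

# Example: two stripes; the length-weighted means need not equal the
# plain pixel mean when the segment models drop outlier pixels.
s1 = [(12, 100.0), (6, 140.0)]
s2 = [(10, 95.0), (8, 150.0)]
print(stripe_mean_intensity(s1))          # 113.33...
print(region_mean_intensity([s1, s2]))    # length-weighted over both stripes
```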
[...] length of edge segment; average intensity value on the left-hand side of the edge; average intensity value on the right-hand side of the edge; average segment length in direction of the search path; average segment length in the opposite direction of the search path; number of concatenated edge points; sum of the u-, respectively, v-coordinate values of concatenated edge points; sum of the squares of the u-, respectively, v-coordinate values of concatenated [...]

[Figure. Images of different brightness from a stereo system with corresponding histograms of the intensity: (a) left image, (b) right-hand-side image, and (c) right-hand image adapted to the intensity distribution of the left-hand image after the intensity transformation described (see Figure 5.43; after [Hofmann 2004]).]

The lower subfigure (c) shows the final result; it will be discussed after the transformation [...] the left-hand side and the resulting image on the right-hand side. It can be seen that, after the transformation, the intensity distribution in both images has become much more similar. Even though this transformation is only a coarse approximation, it shows that it can alleviate the evaluation of image information and the correspondence of features.

[...] dynamic models that contain rich information for interpretation of the motion process in a least-squares error sense, given the motion constraints, the features measured, and the statistical properties known. This integral use of (1) dynamic models for motion of and around the center of gravity, taking actual control outputs and time delays into account, (2) spatial (3-D) [...]

[...] machine vision. It was realized in the mid-1980s that the joint use of dynamic models and temporal predictions for several aspects of the overall problem in parallel was the key to achieving a quantum jump in the performance level of autonomous systems based on machine vision. Recursive state estimation has been introduced for the interpretation of the 3-D motion of physical objects observed and for control [...]

[...] situation and have to be taken into account for decision-making.

6 Recursive State Estimation

Real-time vision is not perspective inversion of a sequence of images. Spatial recognition of objects, as the first syllable (re-) indicates, requires previous knowledge of structural elements of the 3-D shape of an object seen. Similarly, understanding of motion requires knowledge about some basic properties of temporal [...] knowledge on egomotion for depth understanding from image sequences. This joint use of knowledge on motion processes of objects in all four physical dimensions (3-D space and time) has led to the designation "4-D approach" to dynamic vision [Dickmanns 1987].

6.1 Introduction to the 4-D Approach for Spatiotemporal Perception

Since the late 1970s, observer techniques as developed in systems dynamics [Luenberger ...] "saccadic" vision as known from vertebrates, and allows very much reduced data rates for a complex sense of vision. It trades the need for time-sliced attention control and scene reconstruction based on sampled data (the actual video image) for a data-rate reduction of one to two orders of magnitude compared to full resolution in the entire simultaneous field of view. The 4-D approach lends itself to this type of vision [...] out to be true and immediately allowed motion stereointerpretation of the dynamic scenes observed.

6.4.1 Introduction to Recursive Estimation

The n-vector of state variables x(t) of a dynamical system is defined by the differential equation

$\dot{x}(t) = f[x(t), u(t), z(t), p_s]$ ,   (6.1)

with u(t) an r-vector of control variables, z(t) an n-vector of disturbances, and p_s a vector of parameters of the system [...]
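Equation (6.1) is the model form on which recursive state estimation builds. The following is a minimal sketch of one prediction/update cycle (Euler propagation of the dynamic model plus innovation feedback, EKF-style); the double-integrator toy model and all numeric values are illustrative assumptions, not from the book:

```python
import numpy as np

def f(x, u):
    """Assumed toy dynamic model x_dot = f(x, u): 1-D position/velocity
    with acceleration command u; stands in for Eq. (6.1)."""
    return np.array([x[1], u[0]])

def predict(x, P, u, Q, dt):
    """Propagate state by the dynamic model (Euler step) and covariance
    by the local linearization F = df/dx of that step."""
    x_pred = x + dt * f(x, u)
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])          # Jacobian of the Euler step
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def update(x_pred, P_pred, y, H, R):
    """Innovation (prediction-error) feedback: the measurement y corrects
    the predicted state; this is the core of the tracking loop."""
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new

# One cycle: predict with the model, correct with a position measurement.
x = np.array([0.0, 10.0]); P = np.eye(2)
Q = 0.01 * np.eye(2); R = np.array([[0.25]]); H = np.array([[1.0, 0.0]])
x, P = predict(x, P, np.array([0.5]), Q, 0.04)   # dt = 40 ms video cycle
x, P = update(x, P, np.array([0.45]), H, R)
```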
[...] these types of motion processes of several objects and subjects of interest. Predictions and expectations allow directing perceptual resources and attention to what is considered most important for the behavior decision. A large part of mental activities thus is an essential ingredient for understanding the motion behavior of subjects. This field has hardly been covered in the past but will be important for future [...]

[...] the right- (R) and left-hand (L) sides are compared. Additionally, average intensities and segment lengths of adjacent segments may be checked for judging a feature in the context of neighboring [...]
