Computer-Aided Intelligent Recognition Techniques and Applications, Part 8 (PDF)

18 Fast Object Recognition Using Dynamic Programming from a Combination of Salient Line Groups

Dong Joong Kang, Jong Eun Ha
School of Information Technology, Tongmyong University of Information Technology, Busan 608-711, Korea

In So Kweon
Department of Electrical & Computer Science Engineering, Korea Advanced Institute of Science and Technology, Daejun, Korea

Computer-Aided Intelligent Recognition Techniques and Applications, Edited by M. Sarfraz. © 2005 John Wiley & Sons, Ltd

This chapter presents a new method of grouping and matching line segments to recognize objects. We propose a dynamic programming-based formulation that extracts salient line patterns by defining a robust and stable geometric representation based on perceptual organizations. As the end-point proximity cue, we detect several junctions from the image lines. We then search for junction groups by using the collinear constraint between the junctions. Junction groups similar to the model are sought in the scene through local comparison. A DP-based search algorithm reduces the time complexity of searching for the model lines in the scene, so the system can find reasonable line groups in a short time.

1. Introduction

This chapter describes an algorithm that robustly locates collections of salient line segments in an image. In computer vision and related applications, we often wish to find objects, based on stored models, in an image containing objects of interest [1–6]. To achieve this, a model-based object recognition system first extracts sets of features from the scene and the model, and then looks for matches between members of the respective sets. The hypothesized matches are then verified, and possibly extended, to be useful in various applications. Verification can be accomplished by hypothesizing enough matches to constrain the geometrical transformation from a 3D model to a 2D image under perspective projection.
We first extract junctions formed by two lines in the input image, and then find an optimal relation between the extracted junctions by comparing them with previously constructed model relations. The relation between junctions is described by a collinear constraint, and parallelism can also be imposed. Junction detection acts as a line filter that extracts salient line groups from the input image; the relations between the extracted groups are then searched to form more complex groups in an energy minimization framework. The method is successfully applied to images with some deformation and broken lines. Because the system can define a topological relation that is invariant to viewpoint variations, it can extract enough lines to guide 2D or 3D object recognition.

Conventionally, the DP-based algorithm as a search tool is an optimization technique for problems in which not all variables are interrelated simultaneously [7–9]. In an inhomogeneous problem such as object recognition, contextual dependency among all the model features always exists [10]; therefore, DP optimization does not necessarily give the true minimum. On the other hand, the DP method has the advantage of greatly reducing the time complexity of a candidate search based on local similarity. Silhouette or boundary matching problems that satisfy the locality constraint can be solved by DP-based methods using local comparison of the shapes. In these approaches, both the model and the matched scene have a sequentially connected form of lines, ordered pixels, or chained points [11–13]. There are also many vision problems in which the ordering or local neighborhood cannot be easily defined. For example, defining a meaningful line connection among noisy lines is not easy, because extracting the object boundary of an outdoor scene is itself a formidable segmentation job.
In this chapter, we do not assume known boundary lines or junctions; rather, we remain open to any connection possibility for arbitrary junction groups in the DP-based search. That is, the given problem is a local comparison between predefined, sequentially linked model junctions and all possible scene lines in an energy minimization framework.

Section 2 introduces previous research on feature grouping in object recognition. Section 3 explains a quality measure for detecting two-line junctions in an input image. Section 4 describes a combination model that forms local line groups and explains how junctions are linked to each other. Section 5 explains how related junctions are searched to form salient line groups in a DP-based search framework. Section 6 gives a criterion for testing the collinearity of lines. Section 7 tests the robustness of the junction detection algorithm by counting the number of detected junctions as a function of junction quality, and by checking whether a prominent junction from a single object is extracted under an experimentally decided quality threshold. Section 8 presents the results of experiments using synthetic and real images. Finally, Section 9 summarizes the results and draws conclusions.

2. Previous Research

Guiding object recognition by matching perceptual groups of features was suggested by Lowe [6]. In SCERPO, his approach is to match a few significant groupings formed by certain arrangements of lines found in images. Lowe successfully incorporated grouping into an object recognition system. First, he groups together lines thought particularly likely to come from the same object. Then, SCERPO looks for groups of lines that have some property invariant with respect to camera viewpoint. For this purpose, he proposes three major line groups: proximity, parallelism and collinearity.
Recent results in the field of object recognition, including those of Jacobs, Grimson and Huttenlocher, demonstrate the necessity of some type of grouping, or feature selection, to make the combinatorics of object recognition manageable [9,14]. Grouping of nonaccidental image features overcomes the unfavorable combinatorics of recognition by removing the need to search the space of all matches between image and model features. Grimson has shown that, with an intermediate grouping process, the combinatorics of the recognition process in cluttered environments using a constrained search is reduced from exponential to low-order polynomial [9]. Only those image features considered likely to come from a single object are included together in hypothetical matches, and these groups need only be matched with compatible groups of model features. For example, in a constrained tree search, grouping may tell us which parts of the search tree to explore first, or allow us to prune sections of the tree in advance.

This chapter is related to Lowe's work using perceptual groupings. However, the SCERPO grouping has a limitation: forming only small groups of lines limits the amount by which the search can be reduced. Our work extends the small groupings to bigger perceptual groups, including more complex shapes. Among Lowe's organization groups, the proximity of two or more image lines is an important clue for starting object recognition. When projected to the image plane, most man-made objects have a polyhedral face in which two or several sides give line junctions. First, we introduce a quality measure to detect meaningful line junctions denoting proximity. The quality measure must be carefully defined so as not to skip salient junctions in the input image. Extracted salient junctions are then combined to form more complex and important local line groups.
The combination of junctions is guided by collinearity, another of Lowe's perceptual groups. Henikoff and Shapiro [15] effectively use an ordered set of three lines representing a line segment with junctions at both ends. In their work, the line triples, or their relations as local representative patterns, broadly perform object recognition and shape indexing. However, their system cannot define the line triple when the common line sharing the two junctions is broken by image noise or object occlusion, and the triples and bigger local groups are defined separately, in low-level detection and discrete relaxation, respectively. The system proposed in this chapter can form line triples and bigger line groups in a consistent framework. Even when the common line is broken, the combination of the two junctions can be compensated by the collinearity of the broken lines. We introduce the following:

1. A robust and stable geometric representation based on perceptual organizations (i.e. the representation as a primitive search node includes two or more perceptual grouping elements).
2. A consistent search framework combining the primitive geometric representations, based on the dynamic programming formulation.

3. Junction Extraction

A junction is defined as any pair of line segments that intersect, and whose intersection point either lies on one of the line segments, or does not lie on either of the line segments. An additional requirement is that the acute angle between the two lines must lie in a range θ_min to θ_max. In order to avoid ambiguity with parallel or collinear pairs [6], θ_min can be chosen as a predefined threshold. Various junction types are well defined by Etemadi et al. [7]. A perfect junction (or two-line junction) is one in which the intersection point P lies precisely at the end points of the line segments. Figure 18.1 shows the schematic diagram of a typical junction.
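As a sketch of this definition, the following code intersects the two supporting lines and checks the acute-angle range. The bounds theta_min and theta_max are illustrative values, not thresholds taken from the chapter, and the on-segment cases of the definition are not distinguished here:

```python
import math

def intersect(l1, l2):
    """Intersection point of the infinite lines supporting segments l1 and l2.
    Each segment is ((x1, y1), (x2, y2)); returns None for (near-)parallel lines."""
    (x1, y1), (x2, y2) = l1
    (x3, y3), (x4, y4) = l2
    d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if abs(d) < 1e-12:
        return None
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / d
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def acute_angle(l1, l2):
    """Acute angle (radians) between the directions of the two segments."""
    a1 = math.atan2(l1[1][1] - l1[0][1], l1[1][0] - l1[0][0])
    a2 = math.atan2(l2[1][1] - l2[0][1], l2[1][0] - l2[0][0])
    d = abs(a1 - a2) % math.pi
    return min(d, math.pi - d)

def is_junction(l1, l2, theta_min=math.radians(15), theta_max=math.radians(75)):
    """True if the pair intersects with an acute angle in [theta_min, theta_max]."""
    if intersect(l1, l2) is None:
        return False
    return theta_min <= acute_angle(l1, l2) <= theta_max
```

The lower angle bound is what separates junction candidates from near-parallel or collinear pairs, which are handled by the collinearity test of Section 6 instead.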
Note that there are now two virtual lines that share the end point P. The points P_1 and P_4, lying on the opposite sides of P_2 and P_3, denote the remaining end points of the virtual lines, respectively. The junction quality factor is then:

Q_J = ((L_1 − σ∥1 − σ⊥2) / VL_1) · ((L_2 − σ∥2 − σ⊥1) / VL_2)    (18.1)

where VL_i (i = 1, 2) are the lengths of the virtual lines, as shown in Figure 18.1. The standard deviations σ∥i and σ⊥i, which incorporate the uncertainties of the line extraction process in the position of the end points along and perpendicular to the line direction, respectively, may be replaced by constants without affecting the basic grouping algorithms [7]. In this chapter, the two variance factors σ∥i and σ⊥i are ignored. The defined relation penalizes pairings in which either line is far away from the junction point. The quality factor also retains the symmetry property.

Figure 18.1 The junction. [Segments L_1 (end points P_1, P_2) and L_2 (end points P_3, P_4), with virtual lines VL_1 and VL_2 meeting at P at angle θ.]

4. Energy Model for the Junction Groups

The relational representation, built from the contextual relations of the model and scene features, provides a reliable means of computing correspondence information in the matching problem. Suppose that the model consists of M feature nodes. Then a linked node chain, given by the sequential connection of the nodes, can be constructed. If the selected features are sequentially linked, a potential energy can be calculated from the enumerated feature nodes. For example, assume that any two line features of the model correspond to two features f_I and f_{I+1} of the scene.
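With the variance terms dropped, as in this chapter, Equation (18.1) reduces to Q_J = (L_1/VL_1) · (L_2/VL_2). A minimal sketch, assuming the junction point p has already been computed from the two segments:

```python
import math

def junction_quality(l1, l2, p):
    """Simplified quality factor of Equation (18.1): the product, over both
    segments, of the segment length L divided by the virtual-line length VL,
    i.e. the distance from the junction point p to the segment's far end point.
    Equals 1 for a perfect junction whose lines terminate exactly at p."""
    q = 1.0
    for (e1, e2) in (l1, l2):
        L = math.hypot(e1[0] - e2[0], e1[1] - e2[1])
        VL = max(math.hypot(p[0] - e[0], p[1] - e[1]) for e in (e1, e2))
        q *= L / VL
    return q
```

A pair whose lines both terminate at p scores 1, and the score decays as either line recedes from the junction point, which is exactly the penalization described above.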
If the relational configuration of each line node depends only on the connected neighboring nodes, the energy potential obtained from the M line nodes can be represented as:

E_total(f_1, f_2, …, f_M) = E_1(f_1, f_2) + E_2(f_2, f_3) + ⋯ + E_{M−1}(f_{M−1}, f_M)    (18.2)

where

E_I(f_I, f_{I+1}) = Σ_{k=1}^{K} | r_2k(f_I, f_{I+1}) − R_2k(I, I+1) |    (18.3)

Here, r_2k and R_2k denote the binary relations of any two connected line features of the scene and the model, respectively, and the parameter K is the number of binary relations. For the relational representation of junctions, the model node I and scene node f_I in Equations (18.2) and (18.3) are replaced by the model junction and the corresponding scene junction, respectively. Figure 18.2(a) presents a schematic of the lines of an object, and Figure 18.2(b) shows the binary relations of sequentially connected junctions for line pattern matching. Equation (18.3) for junction chains can accordingly be rewritten as:

E_I(f_I, f_{I+1}) = α · |θ(f_I) − θ(I)| + β · |r(f_I, f_{I+1}) − R(I, I+1)|    (18.4)

Each junction has a unary angle relation from the two lines constituting the junction, as shown in the first term of Equation (18.4) and in Figure 18.1; θ(f_I) and θ(I) are the corresponding junction angles in the scene and the model, respectively. We do not use a relation depending on line length, because lines in a noisy scene can easily be broken.

Figure 18.2 Binary relations made from any two connected junction nodes: (a) line segments on a model; and (b) the combination of junctions by perceptual constraints, such as proximity, collinearity and parallelism.

The binary relation for the scene, r, and the model, R, in the second term is defined as a topological constraint or an angle relation between two junctions. For example, the following descriptions can represent the binary relations.

1. Two lines 1 and 4 should be approximately parallel (parallelism).
2. Scene lines corresponding to the two lines 2 and 3 must be a collinear pair [6] or the same line; that is, the two junctions are combined by the collinear constraint.
3. The line ordering of the two junctions J_1 and J_2 should be maintained, for example as clockwise or counter-clockwise, following the order of line 1, line 2, line 3 and line 4.

The relation defined by the two connected junctions includes all three perceptual organization groups that Lowe used in SCERPO. These local relations can be imposed selectively, according to the type of the given problem. For example, a convex line triplet [15] is defined simply by removing constraint 1 above and letting line 2 and line 3 of constraint 2 be equal to each other. The weighting coefficients α and β of the energy potential are given experimentally, by considering the variance of the line perturbation under image noise.

5. Energy Minimization

Dynamic Programming (DP) is an optimization technique suited to problems where not all variables are interrelated simultaneously [8,16]. Suppose that the global energy can be decomposed into the following form:

E(f_1, …, f_M) = E_1(f_1, f_2) + E_2(f_2, f_3) + ⋯ + E_{M−1}(f_{M−1}, f_M)    (18.5)

in which M is the number of model nodes, such as lines or junctions, and f_I is a scene label that can be assigned to model node I. Figure 18.3 shows a schematic DP diagram for finding a trapezoidal model among the scene lines. Figure 18.3(a) presents a typical case in which we cannot define an ordering for the scene lines due

Figure 18.3 The DP algorithm searches for a scene node corresponding to each model node. A model feature can be matched to one of the scene nodes 1, …, m+1 of a column, including the NULL node (NIL). (a) Line segments for the rear view of a vehicle; and (b) a DP-based search.
m is the number of junctions detected from (a), and M is the number of predefined model junctions.

to the cluttered background. Therefore, it is difficult to extract a meaningful object boundary corresponding to the given model. In this case, the DP-based search structure is formulated as the columns of Figure 18.3(b), in which all detected scene features are included simultaneously in each column. Each junction of the model can be assigned either a corresponding junction in the scene or a null node, which indicates no correspondence. The potential matches are scored by the energy accumulation form of Equation (18.5). Using the binary relations of junctions (the arrows in Figure 18.3(b)) defined between two neighboring columns, the local comparison-based method with the recursive energy accumulation table of Equation (18.5) gives a fast matching solution.

The DP algorithm generates a sequence that can be written in recursive form. For I = 1, …, M−1:

D_I(f_{I+1}) = min_{f_I} [ D_{I−1}(f_I) + E_I(f_I, f_{I+1}) ]    (18.6)

with D_0(f_1) = 0. The minimal energy solution is obtained by:

min_f E(f_1, …, f_M) = min_{f_M} D_{M−1}(f_M)    (18.7)

If each f_I takes on m discrete values, then to compute D_I(f_{I+1}) for each value of f_{I+1}, one must evaluate the summation D_{I−1}(f_I) + E_I(f_I, f_{I+1}) for the m different values of f_I. The overall minimization therefore involves (M−1)·m² evaluations of the summations, a large reduction from the exhaustive evaluation of E(f_1, …, f_M). Here, m is the number of scene junctions satisfying a threshold on the junction quality.

6. Collinear Criterion of Lines

Extraction of image features such as points or lines is influenced by the conditions during image acquisition.
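The recursion of Equations (18.6) and (18.7) can be sketched as a standard Viterbi-style accumulation with backtracking. The cost callback E stands in for Equation (18.4), and the NIL label is simply one more entry of the label set:

```python
def dp_match(M, scene_labels, E):
    """Minimize E_1(f1,f2) + ... + E_{M-1}(f_{M-1},f_M) over label assignments via
    D_I(f_{I+1}) = min_{f_I} [D_{I-1}(f_I) + E(I, f_I, f_{I+1})]   (Equation 18.6).
    Returns (minimal energy, minimizing label sequence). Cost is (M-1)*m^2
    evaluations of E, with m = len(scene_labels)."""
    D = {f: 0.0 for f in scene_labels}   # D_0 = 0
    back = []                            # back-pointers, one dict per transition
    for I in range(M - 1):
        newD, choice = {}, {}
        for f_next in scene_labels:
            best_f, best_v = None, float("inf")
            for f in scene_labels:
                v = D[f] + E(I, f, f_next)
                if v < best_v:
                    best_f, best_v = f, v
            newD[f_next], choice[f_next] = best_v, best_f
        D = newD
        back.append(choice)
    f_last = min(D, key=D.get)           # Equation (18.7)
    seq = [f_last]
    for choice in reversed(back):
        seq.append(choice[seq[-1]])
    seq.reverse()
    return D[f_last], seq
```

For a model of M junctions and m scene junctions plus the NIL label, this realizes exactly the (M−1)m² evaluation count stated above.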
Because image noise distorts object shapes in the images, we need to handle the effect of position perturbations of the features, and to decide a threshold or criterion for discarding excessive noise. In this section, a noise model and the error propagation for the collinearity test between lines are proposed. A Gaussian noise distribution on the two end points of a line is an effective and general approach, as used by Haralick [17] and Roh [18], among others. We use the Gaussian noise model to compute the error propagation for two-line collinearity, and obtain a threshold from the error variance test to decide whether two lines are collinear or not. Line collinearity is decomposed into two terms, parallelism and normal distance, defined between the two lines being evaluated.

6.1 Parallelism

Parallelism is a function of eight variables:

p = cos⁻¹( (a · b) / (|a| |b|) ),  where a = x_2 − x_1 and b = x_4 − x_3    (18.8)

or

p = p(x_1, x_2, x_3, x_4),  where x_i = (x_i, y_i, 1)^T    (18.9)

The x_i (i = 1, …, 4) denote the image coordinates of the four end points of the two lines, and |a| denotes the length of vector a. To avoid handling a trigonometric function when calculating the partial derivatives of p with respect to the image coordinates, we use the simpler function:

p′ = cos p = (a · b) / (|a| |b|) = p′(x_1, x_2, x_3, x_4)    (18.10)

Let (x_i, y_i) be the true values and (x̃_i, ỹ_i) the noisy observations of (x_i, y_i); then:

x̃_i = x_i + ξ_i    (18.11a)
ỹ_i = y_i + η_i    (18.11b)

where the noise terms ξ_i and η_i are independently distributed, with mean 0 and variance σ_0².
Hence:

E[ξ_i] = E[η_i] = 0    (18.12)

V[ξ_i] = V[η_i] = σ_0²    (18.13)

E[ξ_i ξ_j] = σ_0² if i = j, 0 otherwise;  E[η_i η_j] = σ_0² if i = j, 0 otherwise    (18.14a)

E[ξ_i η_j] = 0    (18.14b)

From these noisy measurements, we define the noisy parallel function:

p̃′(x̃_1, ỹ_1, x̃_2, ỹ_2, x̃_3, ỹ_3, x̃_4, ỹ_4)    (18.15)

To determine the expected value and variance of p̃′, we expand p̃′ as a Taylor series at (x_1, y_1, x_2, y_2, x_3, y_3, x_4, y_4):

p̃′ ≈ p′ + Σ_{i=1}^{4} [ (x̃_i − x_i) ∂p̃′/∂x̃_i + (ỹ_i − y_i) ∂p̃′/∂ỹ_i ] = p′ + Σ_{i=1}^{4} [ ξ_i ∂p̃′/∂x̃_i + η_i ∂p̃′/∂ỹ_i ]    (18.16)

Then the variance of the parallel function becomes:

Var(p′) = E[(p̃′ − p′)²] = σ_0² Σ_{i=1}^{4} [ (∂p̃′/∂x̃_i)² + (∂p̃′/∂ỹ_i)² ]    (18.17)

Hence, for a given pair of lines, we can determine a threshold:

ε_p = 3 · sqrt( E[(p̃′ − p′)²] )    (18.18)

Because the optimal p′ equals 1, any parallel pair of lines has to satisfy the following condition:

1 − p̃′ ≤ ε_p    (18.19)

6.2 Normal Distance

The normal distance between any two lines is selected from the two distances d_1 and d_2:

d_norm = max(d_1, d_2)    (18.20)

where

d_1 = |a_1 x′_m + b_1 y′_m + c_1| / (a_1² + b_1²)^{1/2}    (18.21a)

d_2 = |a_2 x_m + b_2 y_m + c_2| / (a_2² + b_2²)^{1/2}    (18.21b)

The a_i, b_i and c_i are the line coefficients of line i, (x_m, y_m) denotes the center point of the first line, and (x′_m, y′_m) denotes the center of the second line.
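The whole collinearity test of Section 6 can be sketched numerically: central differences stand in for the analytic partial derivatives of Equation (18.17), and the noise level SIGMA0 is an assumed value, not one given in the chapter:

```python
import math

SIGMA0 = 1.0  # assumed end-point noise standard deviation, in pixels

def parallel_measure(l1, l2):
    """p' of Equation (18.10): cosine of the angle between the direction vectors."""
    ax, ay = l1[1][0] - l1[0][0], l1[1][1] - l1[0][1]
    bx, by = l2[1][0] - l2[0][0], l2[1][1] - l2[0][1]
    return (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))

def normal_distance(l1, l2):
    """Equations (18.20)-(18.21): the larger of the two point-to-line distances
    between each line and the midpoint of the other."""
    def coeffs(l):
        (x1, y1), (x2, y2) = l
        a, b = y2 - y1, x1 - x2
        return a, b, -(a * x1 + b * y1)
    def mid(l):
        return ((l[0][0] + l[1][0]) / 2.0, (l[0][1] + l[1][1]) / 2.0)
    ds = []
    for la, lb in ((l1, l2), (l2, l1)):
        a, b, c = coeffs(la)
        xm, ym = mid(lb)
        ds.append(abs(a * xm + b * ym + c) / math.hypot(a, b))
    return max(ds)

def propagated_std(f, l1, l2, h=1e-5):
    """Numerical form of Equation (18.17): std of f under i.i.d. Gaussian noise of
    std SIGMA0 on the eight end-point coordinates, using central differences."""
    flat = [c for p in (*l1, *l2) for c in p]
    def call(v):
        return f(((v[0], v[1]), (v[2], v[3])), ((v[4], v[5]), (v[6], v[7])))
    s = 0.0
    for i in range(8):
        hi, lo = flat[:], flat[:]
        hi[i] += h
        lo[i] -= h
        s += ((call(hi) - call(lo)) / (2.0 * h)) ** 2
    return SIGMA0 * math.sqrt(s)

def is_collinear(l1, l2):
    """Combined test: conditions (18.19) and (18.25) with 3-sigma thresholds."""
    eps_p = 3.0 * propagated_std(parallel_measure, l1, l2)
    eps_d = 3.0 * propagated_std(normal_distance, l1, l2)
    return (1.0 - parallel_measure(l1, l2)) <= eps_p and \
           normal_distance(l1, l2) <= eps_d
```

For exactly collinear segments both residuals are zero and the test passes, while a large normal offset violates condition (18.25) and rejects the pair.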
Similarly to the parallel case of Section 6.1, the normal distance is also a function of eight variables:

d_norm = d(x_1, x_2, x_3, x_4)    (18.22)

Following the same process as the noise model of Section 6.1, we obtain:

Var(d) = E[(d̃ − d)²] = σ_0² Σ_{i=1}^{4} [ (∂d̃/∂x̃_i)² + (∂d̃/∂ỹ_i)² ]    (18.23)

For the given two lines, we can also determine a threshold for the normal distance:

ε_d = 3 · sqrt( E[(d̃ − d)²] )    (18.24)

Because the optimal d equals 0, the normal distance for any two collinear lines has to satisfy the following condition:

d̃ ≤ ε_d    (18.25)

[...] matching results. [Table 18.1: pairs of matched scene line indices for the junctions J1–J4 of Models 1 and 2; the table columns are garbled in this extract.]

8.2 Collinearity Tests for Random Lines

We tested the stability of [...] preserved, even in broken and distorted line sets.

Figure 18.9 A topological shape extraction for 3D object recognition. (a) Original image; (b) line extraction; and (c), (d) found topological shapes.

Table 18.2 The topological [...]

J1 and J2 junctions are combined with a collinear constraint, which also denotes the same rotating condition, i.e. the clockwise direction in the case of Figure 18.8(a). The three binary relations of Section 4 all appear in the topology of Figure 18.8. In the combination of J2 and J3, the rotating direction between the two junctions is reversed. In Figure 18.8(b), a similar topology to Figure 18.8(a) [...]
[...] indicating the line index, and Figure 18.9(c) and 18.9(d) give the matched line groups corresponding to the model shapes of Figure 18.8(a) and 18.8(b), respectively.

Figure 18.8 Topological shapes for model description. (a) A model with clockwise rotation of the starting junction; (b) a model with counter-clockwise direction for the starting junction.

In each extraction, there are enough line groups to guide a hypothesis for 3D object recognition. Table 18.2 presents the matching results in Figure 18.9 [...]

[...] segments and 2D/3D recognition problems, with a simple shape description sequentially represented.

References

[1] Ayache, N. and Faugeras, O. D. “HYPER: A New Approach for the Recognition and Positioning of Two-Dimensional Objects,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(1), pp. 44–54, 1986.
[2] Ballard, D. H. and Brown, C. M. Computer Vision, Prentice Hall, 1982.
[3] Grimson, W. E. L. and [...], IEEE Transactions on Pattern Analysis and Machine Intelligence, 9, pp. 469–482, 1987.
[4] Hummel, R. A. and Zucker, S. W. “On the foundation of relaxation labeling processes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 5(3), pp. 267–286, 1983.
[5] Li, S. Z. “Matching: invariant to translations, rotations and scale changes,” Pattern Recognition, 25, pp. 583–594, 1992.
[6] Lowe, D. G. “Three-Dimensional Object Recognition [...]
[9] [...] “[...] Transform for Object Recognition,” Proceedings of the Second International Conference on Computer Vision, pp. 700–706, 1988.
[10] Li, S. Z. Markov Random Field Modeling in Computer Vision, Springer Verlag, New York, 1995.
[11] Bunke, H. and Buhler, U. “Applications of Approximate String Matching to 2D Shape Recognition,” Pattern Recognition, 26, pp. 1797–1812, 1993.
[12] Cox, I. J., Higorani, S. L. and Rao, S. B. “A [...],” Computer Vision and Image Understanding, 63(3), pp. 542–567, 1996.
[13] Ohta, Y. and Kanade, T. “Stereo by intra- and inter-scanline search using dynamic programming,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 7(2), pp. 139–154, 1985.
[14] Jacobs, D. W. “Robust and Efficient Detection of Convex Groups,” IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–771, 1993.
[15] Henikoff, J. and [...] “[...] Matching,” Pattern Recognition, 26, pp. 1087–1098, 1993.
[16] Amini, A. A., Weymouth, T. E. and Jain, R. C. “Using dynamic programming for solving variational problems in vision,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(9), pp. 855–867, 1990.
[17] Haralick, Y. S. and Shapiro, R. M. “Error Propagation in Machine Vision,” Machine Vision and Applications, 7, pp. 93–114, 1994.
[18] Roh, K. S. and Kweon, [...]

[...] recognition and 3D reconstruction

1. Introduction

In order to survive worldwide competition, enterprises have tried to use the computer’s huge memory capacity, fast processing speed and user-friendly interactive graphics capabilities to automate and tie together cumbersome and separate engineering or production tasks, including design, analysis, [...]
