Introduction to Statistical Pattern Recognition, 2nd ed. - K. Fukunaga




Document information

Introduction to Statistical Pattern Recognition, Second Edition
Keinosuke Fukunaga
School of Electrical Engineering, Purdue University, West Lafayette, Indiana

This is a volume in COMPUTER SCIENCE AND SCIENTIFIC COMPUTING. Editor: Werner Rheinboldt.

Morgan Kaufmann, an imprint of Academic Press, A Harcourt Science and Technology Company.
San Diego, San Francisco, New York, Boston, London, Sydney, Tokyo.

This completely revised second edition presents an introduction to statistical pattern recognition. Pattern recognition in general covers a wide range of problems: it is applied to engineering problems, such as character readers and waveform analysis, as well as to brain modeling in biology and psychology. Statistical decision and estimation, which are the main subjects of this book, are regarded as fundamental to the study of pattern recognition. This book is appropriate as a text for introductory courses in pattern recognition and as a reference book for people who work in the field. Each chapter also contains computer projects as well as exercises.

This book is printed on acid-free paper.

Copyright 1990 by Academic Press. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher.

Academic Press, A Harcourt Science and Technology Company, 525 B Street, Suite 1900, San Diego, CA 92101-4495, USA (http://www.academicpress.com)
Academic Press, 24-28 Oval Road, London NW1 7DX, United Kingdom
Morgan Kaufmann, 340 Pine Street, Sixth Floor, San Francisco, CA 94104-3205 (http://mkp.com)

Library of Congress Cataloging-in-Publication Data
Fukunaga, Keinosuke.
Introduction to statistical pattern recognition / Keinosuke Fukunaga. - 2nd ed.
p. cm. Includes bibliographical references.
ISBN 0-12-269851-7
1. Pattern perception - Statistical methods. 2. Decision-making - Mathematical models. 3. Mathematical statistics. I. Title.
Q327.F85 1990 006.4 - dc20 89-18195 CIP
Printed in the United States of America.

To Reiko, Gen, and Nina.

Index (excerpt, pp. 584-591):

Factor analysis, 417
Feature: extraction (for classification, 442; general criterion for, 460; sequential, 480; for signal representation, 400; ideal, 444); selection, see Feature extraction; space, 402; subset selection, 489 (backward selection, 490; branch and bound, see Branch and bound; forward selection, 490; stepwise search technique, 490); vector, 402
Fisher: classifier, see Linear classifier; criterion, 134
Fixed increment rule, see Correction rule
Fourier transform: of likelihood ratio, 159; orthonormality of, 156; quadratic classifier of, 159; for stationary process, 421

Gamma: density, see Density function; function, 23, 574, 578
Gaussian pulse, 282, 472
Gaussian-Wishart distribution, see Distribution
Goodness-of-fit, see Chi-square
Gradient: of density function, see Density function; estimate of, see Estimate
Gradient correction rule, see Correction rule
Graph theoretic clustering, see Clustering
Grouped error estimate, see Estimate

Harmonic sequence, see Sequence
Hermite polynomial, 288
Holdout method, 220, 310
Hughes phenomena, 208
Hyperellipsoid: surface area, 314, 573; volume, 260, 572
Hypothesis test: composite, 83; multi-, 66; sequential, see Sequential (hypothesis) test; simple, 51; single, 67

Intrinsic dimensionality, see Dimensionality
Inverse matrix, see Matrix

k nearest neighbor (kNN) approach - volumetric, 305: classification, 303 (likelihood ratio for, 303); density estimation, 268, 575 (bias, 272; consistency, 273; metric, 275; moments, 270; approximation of, 270; optimal k, 273; minimum IMSE, 275; minimum MSE, 274; unbiasedness, 273; variance, 273); distance to kNN, 277; effect of parameters, 278; error estimation (bias, 347; L estimate of a covariance, 35; leave-one-out method, 303; metric, 303; resubstitution method, 303); progression, 552
k nearest neighbor (kNN) approach - voting, 305: asymptotic conditional risk and error, 307 (kNN, 306; multiclass, 309; NN, 305; 2NN, 306); branch and bound, see Branch and bound; condensed NN, 360; edited kNN, 358; finite sample analysis, 313 (bias: multiclass, 322; NN, 313; 2NN, 321)
Karhunen-Loève expansion, see Expansion
Kiefer-Wolfowitz method, 380
Kolmogorov-Smirnov test, 76, 83

Lagrange multiplier, 26, 59
Large number of classes, 284
Layered machine, 171
Learning, 368: machine, without teacher, 394
Leave-one-out method, 220: for k nearest neighbor approach, see kNN; for Parzen approach, see Parzen
Likelihood ratio, 52: characteristic function of, see Characteristic function; density function of, see Density function; Fourier transform of, see Fourier transform; for k nearest neighbor approach, see kNN; minus-log, 52; for normal distribution, see Distribution, normal; for Parzen approach, see Parzen; test (decision rule), see Decision rule; threshold of, 52
Linear classifier: Bayes, 55, 57, 129; effect of design samples, 208; error of, see Error; Fisher, 135; iterative design, 150; for minimum error, 136; for minimum mean-square error, 145, 147; for multiclass, 373; by nonparametric scatter matrix, 473; successive adjustment, 367
Linearly separable, 153, 371: convergence for, see Convergence
Local: dimensionality, see Dimensionality, intrinsic; mean, 535, 542
Log transformation, see Transformation

Mapped space, see Feature, space
Mapping: linear, 399, 448, 465, 470; nonlinear, 463, 480
Matched filter, 126
Matrix: autocorrelation, 15 (sample, see Sample); block Toeplitz, 162; correlation, 15; covariance, 13 (sample, see Sample); derivatives, 564 (of determinant, 567; of distance, 568; of inverse, 564; of trace, 565); determinant, 38; diagonal, 27; eigenvalue, 27; eigenvector, 27; inversion of, 41 (generalized, 44; pseudo-, 43); near-singular, 40; positive definite, 35; rank, 38; sample, 39, 149, 174, 556; singular, 38; Toeplitz, 160; trace, 36
Mean, see Expected value or vector; sample, see Sample
Merging, 13
Metric, 264, 275, 313: global, 13; local, 13
Minimax: for feature extraction, see Feature, extraction; test, see Decision rule
Minimum point finding problem, see Stochastic approximation
Minus-log-likelihood ratio, see Likelihood ratio
Mixture: autocorrelation matrix, see Scatter matrix, mixture; density function, see Density function; normalization, 516, 519; scatter matrix of, see Scatter matrix
Model validity test, 82
Moment, 18: central, 20; estimate, 18; sample, 18
Monotonicity, 492, 526
Multiclass, 66, 169, 373
Multicluster, 169
Multihypotheses test, see Hypothesis test
Multiple dichotomy, 13
Multi-sensor fusion, 114

Nearest local-mean reclassification rule, 542
Nearest mean reclassification rule, 517: convergence of, 18
Nearest neighbor decision rule, see kNN
Newton method, 376
Neyman-Pearson test, see Decision rule
Nonparametric: clustering, see Clustering; data reduction, see Data, reduction; density estimation (k nearest neighbor approach, see kNN; Parzen approach, see Parzen); discriminant analysis, see Discriminant analysis; scatter matrix, see Scatter matrix
Normal decomposition, 526: maximum likelihood estimation, 527; method of moments, 527; piecewise quadratic boundary, 526
Normal distribution, see Distribution, normal
Normality test, 75, 537
Normalization: of eigenvalues, see Eigenvalues; mixture, see Mixture
Notation

Operating characteristics, 63
Orthogonal, 27, 287
Orthonormal, 27, 386, 401: for binary inputs, see Binary input; of Fourier transform, see Fourier transform; transformation, see Transformation
Outlier, 235

Pairwise error, see Error
Parametric: clustering, see Clustering; data reduction, see Data, reduction; estimation, 184
Parzen: classification, 301 (likelihood ratio, 301; reduced, 553); density estimation, 255, 574 (bias, 259; consistency, 261; convolution expression, 257; kernel, 255; metric, 264; minimum IMSE, 265; size, 261; minimum IMSE, 264; minimum MSE, 263; moments, 257; approximation of, 258; for a normal kernel, 259; for a uniform kernel, 260; unbiasedness, 261; variance, 259); error estimation (direct estimation of the Bayes error, 344; kernel: L estimate of the kernel covariance, 339; shape, 336, 342; size, 322; leave-one-out method, 301; lower bound of the Bayes error, 30; resubstitution method, 301; sample size, 327; threshold, 328; upper bound of the Bayes error, 301)
Perceptron, 368
Perturbation: of eigenvalues, see Eigenvalues; of eigenvectors, see Eigenvectors; of a quadratic classifier, see Quadratic classifier
Piecewise classifier: linear, 170; quadratic, 169; successive adjustment, 373
Positive definiteness, see Matrix
Potential function, 387
Power spectrum, 421
Power transformation, see Transformation
Principal component, 28: analysis, 417
Probability: a posteriori, 12; a priori, 12; class, 12; coverage, see Coverage; of error, see Error; reject, see Reject
Process: random, 17; stationary, see Stationary process; whitening, 28

Quadratic classifier: Bayes, 54; bootstrap error for, 242; design of, 153; error of, see Error; error of the resubstitution method, 231; orthogonal subspace to, 480; perturbation of, 225; sequential selection, 480
Quadratic form (function), 16, 54, 125, 154: recursive computation of, 498

Radar Data, 47
Random process, see Process
Random variable, 11
Random vector, see Vector
Rank of determinant, see Matrix
Ranking procedure, 73
Reduced Parzen classifier, see Parzen
Reduced training sequence, see Sequence
Regression function, 376
Reject, 78: probability, 78; region, 78, 171; threshold, 78
Representative selection, 549
Reproducing pair, see Density function
Resubstitution method, 220: error for a quadratic classifier, 23; for k nearest neighbor approach, see kNN; for Parzen approach, see Parzen
Robbins-Monro method, 376
Root-finding problem, see Stochastic approximation
Row correlation coefficient, 164

Sample: autocorrelation matrix, 19; covariance matrix, 21; design (bias due to, 203, 216; effect of, 201; variance due to, 213, 218); estimate, 17; generation, 30; matrix, see Matrix; mean vector, 19; moment, see Moment; test (bias due to, 199, 216; effect of, 197; variance due to, 200, 218)
Scatter matrix: between-class, 446; generalized, 463; nonparametric, 467; of mixture, 446; within-class, 446 (nonparametric, 477, 542)
Scatter measure, 41
Schwarz inequality, 309
Separability criterion, 446
Sequence: flatter, 389; harmonic, 375; reduced training, 37
Sequential (hypothesis) test, 110: Wald, 114
Simple hypothesis test, see Hypothesis test
Single hypothesis test, see Hypothesis test
Singular value decomposition, 557
Skeleton hypersurface, 537
Small sample size problem, 39
Solution tree, 492, 523
Spherical coordinate, 484
Splitting, 13
Standard Data, 45
Standard deviation, 15
Stationary process, 55, 156, 420: autocorrelation function, 157, 420; mean, 157, 420
Stochastic approximation, 375: convergence, see Convergence; minimum point finding problem, 380; multidimensional extension, 382; root-finding problem, 376
Successive adjustment: of density function, see Successive Bayes estimation; of linear classifier, see Linear classifier; of piecewise classifier, see Piecewise classifier; of potential function, 385
Successive Bayes estimation, 389: of covariance matrix, 392, 393; of expected vector, 390, 393; supervised estimation, 390; unsupervised estimation, 394
Surface area, see Hyperellipsoid

Taylor series, 182, 258, 270, 313
Test sample, see Sample
Toeplitz matrix, see Matrix
Trace, see Matrix
Transformation: linear, 24, 401, 448, 465, 470; log, 108; orthonormal, 28, 35, 401, 417; power, 76, 104; variable, 47; whitening, 28, 128
Truth table, 290

Unbiased asymptotic: k nearest neighbor density estimate, see kNN; Parzen density estimate, see Parzen; estimate, see Estimate
Unsupervised: classification, see Classification; estimation, see Successive Bayes estimation

Valley-seeking technique, see Clustering
Variable transformation, see Transformation
Variance, 14
Vector: basis, see Basis; conditional expected, 13; desired output, see Desired output; expected, see Expected; feature, see Feature; penalty, 150; random, 11
Volume, see Hyperellipsoid
Volumetric k nearest neighbor, see kNN
Voting k nearest neighbor, see kNN

Wald sequential test, see Sequential (hypothesis) test
Walsh function, see Expansion
Weighting function, 469
White noise, 125
Whitening: filter, 128; process, see Process; transformation, see Transformation
Wishart distribution, see Distribution

ISBN 0-12-269851-7

About the author: Keinosuke Fukunaga received a B.S. degree in electrical engineering from Kyoto University, Kyoto, Japan, in 1953. In 1959 he earned an M.S.E.E. degree from the University of Pennsylvania in Philadelphia and then returned to Kyoto University to earn his Ph.D. in 1962. From 1953 to 1966 Dr. Fukunaga was employed by the Mitsubishi Electric Company in Japan. His first position was in the Central Research Laboratories, where he worked on computer applications in control systems. Later, he became the Hardware Development Manager in the Computer Division. He is currently a professor of Electrical Engineering at Purdue University, where he has been on the faculty since 1966. Dr. Fukunaga acted as an Associate Editor in pattern recognition for IEEE Transactions on Information Theory from 1977 to 1980. He is a member of Eta Kappa Nu.

From reviews of the first edition:
"Contains an excellent review of the literature on information theory, automata, and pattern recognition." - Choice
"Many topics are covered with dispatch. Also the book shows hints of the author's own careful and valuable research." - IEEE Transactions on Information Theory

Date posted: 31/03/2014, 16:24


Table of Contents

  • Cover

  • Frontmatter

    • Half Title Page

    • Title Page

    • Copyright

    • Dedication

    • Table of Contents

    • Preface

    • Acknowledgments

  • Chapter 1: Introduction

    • 1.1 Formulation of Pattern Recognition Problems

    • 1.2 Process of Classifier Design

    • Notation 1

    • References 1

  • Chapter 2: Random Vectors and Their Properties

    • 2.1 Random Vectors and Their Distributions

    • 2.2 Estimation of Parameters

    • 2.3 Linear Transformation

    • 2.4 Various Properties of Eigenvalues and Eigenvectors

    • Computer Projects 2

    • Problems 2

    • References 2

  • Chapter 3: Hypothesis Testing

    • 3.1 Hypothesis Tests for Two Classes

    • 3.2 Other Hypothesis Tests

    • 3.3 Error Probability in Hypothesis Testing

    • 3.4 Upper Bounds on the Bayes Error

    • 3.5 Sequential Hypothesis Testing

    • Computer Projects 3

    • Problems 3

    • References 3

  • Chapter 4: Parametric Classifiers

    • 4.1 The Bayes Linear Classifier

    • 4.2 Linear Classifier Design

    • 4.3 Quadratic Classifier Design

    • 4.4 Other Classifiers

    • Computer Projects 4

    • Problems 4

    • References 4

  • Chapter 5: Parameter Estimation

    • 5.1 Effect of Sample Size in Estimation

    • 5.2 Estimation of Classification Errors

    • 5.3 Holdout, Leave-One-Out, and Resubstitution Methods

    • 5.4 Bootstrap Methods

    • Computer Projects 5

    • Problems 5

    • References 5

  • Chapter 6: Nonparametric Density Estimation

    • 6.1 Parzen Density Estimate

    • 6.2 k Nearest Neighbor Density Estimate

    • 6.3 Expansion by Basis Functions

    • Computer Projects 6

    • Problems 6

    • References 6

  • Chapter 7: Nonparametric Classification and Error Estimation

    • 7.1 General Discussion

    • 7.2 Voting kNN Procedure - Asymptotic Analysis

    • 7.3 Voting kNN Procedure - Finite Sample Analysis

    • 7.4 Error Estimation

    • 7.5 Miscellaneous Topics in the kNN Approach

    • Computer Projects 7

    • Problems 7

    • References 7

  • Chapter 8: Successive Parameter Estimation

    • 8.1 Successive Adjustment of a Linear Classifier

    • 8.2 Stochastic Approximation

    • 8.3 Successive Bayes Estimation

    • Computer Projects 8

    • Problems 8

    • References 8

  • Chapter 9: Feature Extraction and Linear Mapping for Signal Representation

    • 9.1 The Discrete Karhunen-Loève Expansion

    • 9.2 The Karhunen-Loève Expansion for Random Processes

    • 9.3 Estimation of Eigenvalues and Eigenvectors

    • Computer Projects 9

    • Problems 9

    • References 9

  • Chapter 10: Feature Extraction and Linear Mapping for Classification

    • 10.1 General Problem Formulation

    • 10.2 Discriminant Analysis

    • 10.3 Generalized Criteria

    • 10.4 Nonparametric Discriminant Analysis

    • 10.5 Sequential Selection of Quadratic Features

    • 10.6 Feature Subset Selection

    • Computer Projects 10

    • Problems 10

    • References 10

  • Chapter 11: Clustering

    • 11.1 Parametric Clustering

    • 11.2 Nonparametric Clustering

    • 11.3 Selection of Representatives

    • Computer Projects 11

    • Problems 11

    • References 11

  • Backmatter

    • Appendix A: Derivatives of Matrices

    • Appendix B: Mathematical Formulas

    • Appendix C: Normal Error Table

    • Appendix D: Gamma Function Table

    • Index

    • About the Author

  • Back Cover
