Information Theory, Inference & Learning Algorithms
Copyright Cambridge University Press 2003. On-screen viewing permitted; printing not permitted. http://www.cambridge.org/0521642981. You can buy this book for 30 pounds or $50. See http://www.inference.phy.cam.ac.uk/mackay/itila/ for links.

Information Theory, Inference, and Learning Algorithms
David J.C. MacKay
mackay@mrao.cam.ac.uk
© 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005
© Cambridge University Press 2003
Version 7.2 (fourth printing), March 28, 2005

Please send feedback on this book via http://www.inference.phy.cam.ac.uk/mackay/itila/

Version 6.0 of this book was published by C.U.P. in September 2003. It will remain viewable on-screen on the above website, in postscript, djvu, and pdf formats.

In the second printing (version 6.6) minor typos were corrected, and the book design was slightly altered to modify the placement of section numbers.

In the third printing (version 7.0) minor typos were corrected, and Chapter 8 was renamed 'Dependent Random Variables' (instead of 'Correlated').

In the fourth printing (version 7.2) minor typos were corrected. (C.U.P. replace this page with their own page ii.)

Contents

Preface
1   Introduction to Information Theory
2   Probability, Entropy, and Inference
3   More about Inference

I   Data Compression
4   The Source Coding Theorem
5   Symbol Codes
6   Stream Codes
7   Codes for Integers

II  Noisy-Channel Coding
8   Dependent Random Variables
9   Communication over a Noisy Channel
10  The Noisy-Channel Coding Theorem
11  Error-Correcting Codes and Real Channels

III Further Topics in Information Theory
12  Hash Codes: Codes for Efficient Information Retrieval
13  Binary Codes
14  Very Good Linear Codes Exist
15  Further Exercises on Information Theory
16  Message Passing
17  Communication over Constrained Noiseless Channels
18  Crosswords and Codebreaking
19  Why have Sex? Information Acquisition and Evolution

IV  Probabilities and Inference
20  An Example Inference Task: Clustering
21  Exact Inference by Complete Enumeration
22  Maximum Likelihood and Clustering
23  Useful Probability Distributions
24  Exact Marginalization
25  Exact Marginalization in Trellises
26  Exact Marginalization in Graphs
27  Laplace's Method
28  Model Comparison and Occam's Razor
29  Monte Carlo Methods
30  Efficient Monte Carlo Methods
31  Ising Models
32  Exact Monte Carlo Sampling
33  Variational Methods
34  Independent Component Analysis and Latent Variable Modelling
35  Random Inference Topics
36  Decision Theory
37  Bayesian Inference and Sampling Theory

V   Neural networks
38  Introduction to Neural Networks
39  The Single Neuron as a Classifier
40  Capacity of a Single Neuron
41  Learning as Inference
42  Hopfield Networks
43  Boltzmann Machines
44  Supervised Learning in Multilayer Networks
45  Gaussian Processes
46  Deconvolution

VI  Sparse Graph Codes
47  Low-Density Parity-Check Codes
48  Convolutional Codes and Turbo Codes
49  Repeat–Accumulate Codes
50  Digital Fountain Codes

VII Appendices
A   Notation
B   Some Physics
C   Some Mathematics

Bibliography
Index

Preface

This book is aimed at senior undergraduates and graduate students in Engineering, Science, Mathematics, and Computing. It expects familiarity with calculus, probability theory, and linear algebra as taught in a first- or second-year undergraduate course on mathematics for scientists and engineers.

Conventional courses on information theory cover not only the beautiful theoretical ideas of Shannon, but also practical solutions to communication problems.
This book goes further, bringing in Bayesian data modelling, Monte Carlo methods, variational methods, clustering algorithms, and neural networks.

Why unify information theory and machine learning? Because they are two sides of the same coin. In the 1960s, a single field, cybernetics, was populated by information theorists, computer scientists, and neuroscientists, all studying common problems. Information theory and machine learning still belong together. Brains are the ultimate compression and communication systems. And the state-of-the-art algorithms for both data compression and error-correcting codes use the same tools as machine learning.

How to use this book

The essential dependencies between chapters are indicated in the figure on the next page. An arrow from one chapter to another indicates that the second chapter requires some of the first.

Within Parts I, II, IV, and V of this book, chapters on advanced or optional topics are towards the end. All chapters of Part III are optional on a first reading, except perhaps for Chapter 16 (Message Passing).

The same system sometimes applies within a chapter: the final sections often deal with advanced topics that can be skipped on a first reading. For example, in two key chapters – Chapter 4 (The Source Coding Theorem) and Chapter 10 (The Noisy-Channel Coding Theorem) – the first-time reader should detour at section 4.5 and section 10.4 respectively.

Pages vii–x show a few ways to use this book. First, I give the roadmap for a course that I teach in Cambridge: 'Information theory, pattern recognition, and neural networks'. The book is also intended as a textbook for traditional courses in information theory. The second roadmap shows the chapters for an introductory information theory course and the third for a course aimed at an understanding of state-of-the-art error-correcting codes. The fourth roadmap shows how to use the text in a conventional course on machine learning.
[Dependency chart: a figure listing all fifty chapters, grouped into Parts I–VI, with arrows showing which chapters each chapter requires.]

My Cambridge course on Information Theory, Pattern Recognition, and Neural Networks: Chapters 1–6, 8–11, 20–22, 24, 27, 29–33, 38–42, and 47.

Short Course on Information Theory: Chapters 1, 2, 4–6, and 8–10.

Advanced Course on Information Theory and Coding: Chapters 11–17, 24–26, and 47–50.

A Course on Bayesian Inference and Machine Learning: Chapters 2, 3, 20–22, 24, 27–34, and 38–45.

[...] them all, I extend my sincere acknowledgments. I especially wish to thank all the students and colleagues at Cambridge University who have attended my lectures on information theory and machine learning over the last nine years. The members of the Inference research group have given immense support, and I thank them all for their generosity and patience over the last ten years: Mark Gibbs, Michelle Povinelli, [...]

1   Introduction to Information Theory

    The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point.
    (Claude Shannon, 1948)

In the first half of this book we study how to measure information content; we learn how to compress data; [...]

  • an analogue telephone line, over which two modems communicate digital information;
  • the radio communication link from Galileo, the Jupiter-orbiting spacecraft, to earth;
  • reproducing cells, in which the daughter cells' DNA contains information from the parent cells;
  • a disk drive.

The last example shows that communication doesn't have to involve information going from one place to another. When we write a file [...]
[...] transmitted message. We would prefer to have a communication channel for which this probability was zero – or so close to zero that for practical purposes [...]

  Received sequence r    Likelihood ratio P(r | s = 1) / P(r | s = 0)    Decoded sequence ŝ
  000                    γ^{-3}                                          0
  001                    γ^{-1}                                          0
  010                    γ^{-1}                                          0
  100                    γ^{-1}                                          0
  101                    γ^{1}                                           1
  110                    γ^{1}                                           1
  011                    γ^{1}                                           1
  111                    γ^{3}                                           1

Here γ ≡ (1 − f)/f, so for f < 1/2 the decoded bit is the majority vote of the three received bits.

[Figure: error probability pb versus rate for the repetition codes R1, R3, R5, and R61, shown on linear and logarithmic scales, with an arrow towards the more useful codes.]

The repetition code R3 has therefore reduced the probability of error, as desired. Yet we have lost something: our rate of information [...]

[...] where

    G = \begin{pmatrix}
          1 & 0 & 0 & 0 & 1 & 0 & 1 \\
          0 & 1 & 0 & 0 & 1 & 1 & 0 \\
          0 & 0 & 1 & 0 & 1 & 1 & 1 \\
          0 & 0 & 0 & 1 & 0 & 1 & 1
        \end{pmatrix}                               (1.28)

I find it easier to relate to the right-multiplication (1.25) [...]

[Diagram: source bits s → encoder → transmitted vector t (the source bits followed by the parity bits) → binary symmetric channel with flip probability f = 10% → received vector r → decoder → decoded bits.]

[...] given by reading out its first four bits. If the syndrome is non-zero, then the [...]

[Figure 1.18: error probability pb versus rate R for repetition codes, the Hamming code H(7,4), and BCH(511,76), with an arrow towards the more useful codes.]

Information theory addresses both the limitations and the possibilities of communication. The noisy-channel coding theorem, which we will prove in Chapter 10, [...]
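The excerpts above show the two coding schemes of Chapter 1 side by side: the repetition code R3 decoded by majority vote, and the (7,4) Hamming code defined by the generator matrix G of equation (1.28) and decoded via its syndrome. The Python sketch below is an illustrative aid, not code from the book: it encodes random 4-bit blocks with that G, passes them through a binary symmetric channel with f = 10%, corrects the single bit flip indicated by the syndrome, and compares the simulated bit-error rate with the exact R3 value 3f²(1 − f) + f³. The parity-check matrix H and the syndrome-to-bit lookup are standard constructions assumed here rather than taken from the excerpts, and all variable names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
f = 0.10   # flip probability of the binary symmetric channel, as in the 'f = 10%' excerpt

# Generator matrix G of equation (1.28); codewords are t = s G (mod 2) for 4-bit rows s.
G = np.array([[1, 0, 0, 0, 1, 0, 1],
              [0, 1, 0, 0, 1, 1, 0],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 0, 1, 1]], dtype=int)

# Assumed parity-check matrix H = [P^T | I_3] for G = [I_4 | P], so H t^T = 0 (mod 2) for
# every codeword t.  A single flipped bit produces a syndrome equal to the corresponding
# column of H, which gives a simple lookup-based corrector.
P = G[:, 4:]
H = np.concatenate([P.T, np.eye(3, dtype=int)], axis=1)
syndrome_to_bit = {tuple(H[:, j]): j for j in range(7)}

def decode_hamming74(r):
    """Correct at most one bit flip using the syndrome, then read out the first 4 bits."""
    z = tuple(H.dot(r) % 2)          # syndrome of the received vector
    if any(z):                       # non-zero syndrome: unflip the indicated bit
        r = r.copy()
        r[syndrome_to_bit[z]] ^= 1
    return r[:4]

# Monte Carlo estimate of the bit-error probability of H(7,4) over this channel.
n_blocks = 100_000
s = rng.integers(0, 2, size=(n_blocks, 4))
t = s.dot(G) % 2
r = (t + (rng.random((n_blocks, 7)) < f)) % 2
s_hat = np.array([decode_hamming74(block) for block in r])
pb_hamming = np.mean(s_hat != s)

# Repetition code R3 with majority-vote decoding: a bit is decoded wrongly exactly
# when two or three of its three transmitted copies are flipped.
pb_r3 = 3 * f**2 * (1 - f) + f**3

print(f"H(7,4): simulated bit-error probability {pb_hamming:.4f} at rate 4/7")
print(f"R3:     exact bit-error probability     {pb_r3:.4f} at rate 1/3")
```

The two printed numbers are the kind of (rate, pb) points that Figure 1.18 plots: R3 works at rate 1/3, while H(7,4) works at the higher rate 4/7.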