Scientific Data Mining and Knowledge Discovery: Principles and Foundations

Editor: Mohamed Medhat Gaber
Caulfield School of Information Technology, Monash University
900 Dandenong Rd, Caulfield East, VIC 3145, Australia
mohamed.m.gaber@gmail.com

Color images for this book can be found at www.springer.com/978-3-642-02787-1

ISBN 978-3-642-02787-1
e-ISBN 978-3-642-02788-8
DOI 10.1007/978-3-642-02788-8
Springer Heidelberg Dordrecht London New York
Library of Congress Control Number: 2009931328
ACM Computing Classification (1998): I.5, I.2, G.3, H.3

© Springer-Verlag Berlin Heidelberg 2010

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Cover design: KuenkelLopka GmbH. Printed on acid-free paper. Springer is part of Springer Science+Business Media (www.springer.com).

This book is dedicated to:
My parents: Dr. Medhat Gaber and Mrs. Mervat Hassan
My wife: Dr. Nesreen Hassaan
My children: Abdul-Rahman and Mariam

Contents

Introduction, Mohamed Medhat Gaber

Part I: Background
  Machine Learning, Achim Hoffmann and Ashesh Mahidadia
  Statistical Inference, Shahjahan Khan, 53
  The Philosophy of Science and its Relation to Machine Learning, Jon Williamson, 77
  Concept Formation in Scientific Knowledge Discovery from a Constructivist View, Wei Peng and John S. Gero, 91
  Knowledge Representation and Ontologies, Stephan Grimm, 111

Part II: Computational Science
  Spatial Techniques, Nafaa Jabeur and Nabil Sahli, 141
  Computational Chemistry, Hassan Safouhi and Ahmed Bouferguene, 173
  String Mining in Bioinformatics, Mohamed Abouelhoda and Moustafa Ghanem, 207

Part III: Data Mining and Knowledge Discovery
  Knowledge Discovery and Reasoning in Geospatial Applications, Nabil Sahli and Nafaa Jabeur, 251
  Data Mining and Discovery of Chemical Knowledge, Lu Wencong, 269
  Data Mining and Discovery of Astronomical Knowledge, Ghazi Al-Naymat, 319

Part IV: Future Trends
  On-board Data Mining, Steve Tanner, Cara Stein, and Sara J. Graves, 345
  Data Streams: An Overview and Scientific Applications, Charu C. Aggarwal, 377

Index, 399

Contributors

Mohamed Abouelhoda, Cairo University, Orman, Gamaa Street, 12613 Al Jizah, Giza, Egypt; Nile University, Cairo-Alex Desert Rd, Cairo 12677, Egypt
Charu C. Aggarwal, IBM T. J. Watson Research Center, NY, USA, charu@us.ibm.com
Ghazi Al-Naymat, School of Information Technologies, The University of Sydney, Sydney, NSW 2006, Australia, ghazi@it.usyd.edu.au
Ahmed Bouferguene, Campus Saint-Jean, University of Alberta, 8406, 91 Street, Edmonton, AB, Canada T6C 4G9
Mohamed Medhat Gaber, Centre for Distributed Systems and Software Engineering, Monash University, 900 Dandenong Rd, Caulfield East, VIC 3145, Australia, Mohamed.Gaber@infotech.monash.edu.au
John S. Gero, Krasnow Institute for Advanced Study and Volgenau School of Information Technology and Engineering, George Mason University, USA, john@johngero.com
Moustafa Ghanem, Imperial College, South Kensington Campus, London SW7 2AZ, UK
Sara J. Graves, University of Alabama in Huntsville, AL 35899, USA, sgraves@itsc.uah.edu
Stephan Grimm, FZI Research Center for Information Technologies, University of Karlsruhe, Baden-Württemberg, Germany, grimm@fzi.de
Achim Hoffmann, University of New South Wales, Sydney 2052, NSW, Australia
Nafaa Jabeur, Department of Computer Science, Dhofar University, Salalah, Sultanate of Oman, nafaa_jabeur@du.edu.om
Shahjahan Khan, Department of Mathematics and Computing, Australian Centre for Sustainable Catchments, University of Southern Queensland, Toowoomba, QLD, Australia, khans@usq.edu.au
Ashesh Mahidadia, University of New South Wales, Sydney 2052, NSW, Australia

Data Streams: An Overview and Scientific Applications (Charu C. Aggarwal)

Sampling Methods

Sampling methods can be applied to a wide array of tasks in data streams. A further advantage of sampling methods is that, unlike many other synopsis construction methods, they maintain the inter-attribute correlations across samples of the data. It is also often possible to use probabilistic inequalities to bound the effectiveness of a variety of applications of sampling methods.

A key problem in extending sampling methods to the data stream scenario, however, is that the total number of data points to be sampled is not known in advance. Rather, the sample must be maintained dynamically over the entire course of the computation. A method called reservoir sampling, which maintains such a sample dynamically, was first proposed in [31]. The technique was originally proposed in the context of one-pass access of data from magnetic-storage devices, but it extends naturally to the data stream scenario.

Let us consider the case where we wish to obtain an unbiased sample of size n from the data stream. To initialize the approach, we simply add the first n points from the stream to the reservoir. Subsequently, when the (t+1)th point is received, it is added to the reservoir with probability n/(t+1). When a data point is added to the reservoir, it replaces a point chosen at random from the reservoir. It can be shown that this simple approach maintains a uniform sampling distribution over the data stream.
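The reservoir update rule translates directly into a few lines of code. The following is a minimal sketch in Python; the function name and the stream representation are illustrative assumptions rather than anything prescribed by the chapter:

```python
import random

def reservoir_sample(stream, n):
    """Maintain a uniform random sample of size n over a stream
    whose total length is not known in advance."""
    reservoir = []
    for t, point in enumerate(stream):
        if t < n:
            # Initialization: the first n points fill the reservoir.
            reservoir.append(point)
        else:
            # The (t+1)th point is kept with probability n/(t+1):
            # a uniform index in [0, t] falls below n exactly that often.
            j = random.randrange(t + 1)
            if j < n:
                # The accepted point evicts a uniformly chosen resident.
                reservoir[j] = point
    return reservoir

# Example: a uniform sample of 10 points from a stream of 100,000 integers.
sample = reservoir_sample(range(100000), 10)
```

Drawing a single index j in [0, t] does double duty here: the test j < n implements the acceptance probability n/(t+1), and the accepted point then replaces slot j, which is a uniformly random victim.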
We note that the uniform sampling approach may not be very effective in cases where the data stream evolves significantly. In such cases, one may either generate the stream sample over a sliding window, or use a decay-based approach to bias the sample. An approach for sliding-window computation over data streams is discussed in [32]. A second approach [29] uses biased decay functions to construct a synopsis from the data stream. It has been shown in [29] that the problem is extremely difficult for arbitrary decay functions, and in such cases there is no known solution. However, it is possible to design very simple algorithms for some important classes of decay functions. One of these classes is the exponential decay function, which is extremely important because of its memoryless property: the future treatment of a data point is independent of the past data points which have arrived. An interesting result is that by making simple implementation modifications to the algorithm of [31], in terms of modifying the probabilities of insertion and deletion, it is possible to construct a robust algorithm for this problem. It has been shown in [29] that the approach is quite effective in practice, especially when there is significant evolution of the underlying data stream.

While sampling has several advantages in terms of simplicity and preservation of multi-dimensional correlations, it loses its effectiveness in the presence of data sparsity. For example, a query which matches very few data points is unlikely to be answered accurately with a sampling approach. This is a general problem with most techniques which are effective at counting frequent elements but much less effective at counting rare or distinct elements in the data stream.

Sketches

Sketches use some properties of random sampling in order to perform counting tasks in data streams. Sketches are most useful when the domain size of a data stream is very large. In such cases, the number of possible distinct elements becomes very large, and it is no longer possible to track them in space-constrained scenarios. There are two broad classes of sketches: projection based and hash based. We will discuss each of them in turn.

Projection-based sketches are constructed on the broad idea of random projection [33]. The best-known projection-based sketch is the AMS sketch [34, 35], which we discuss below. It has been shown in [33] that by randomly sampling subspaces from multi-dimensional data, it is possible to compute ε-accurate projections of the data with high probability. This broad idea can easily be extended to the massive-domain case by viewing each distinct item as a dimension and the counts on these items as the corresponding values. The main problem is that the vector for performing the projection cannot be maintained explicitly, since the length of such a vector would be of the same size as the number of distinct elements. In fact, since the sketch-based method is most relevant in the distinct-element scenario, such an approach would defeat the purpose of keeping a synopsis structure in the first place.

Let us assume that the random projection is performed using k sketch vectors, and that r_i^j represents the entry of the jth vector for the ith item in the domain being tracked. To achieve efficient synopsis construction, we store the random vectors implicitly in the form of a seed, which can be used to generate the vectors dynamically. The main idea discussed in [36] is that it is possible to generate such random vectors from a seed of size O(log N), provided that one is willing to work with the restriction that the entries r_i^j ∈ {−1, +1} are 4-wise independent. The sketch is computed by adding r_i^j to the jth component of the sketch for each occurrence of the ith item; in the event that the incoming item has frequency f, we add the value f · r_i^j. Let us assume that there are a total of k sketch components, denoted by (s_1 ... s_k). Some key properties of the pseudo-random number generator approach and the sketch representation are as follows:

• A given entry r_i^j can be generated in poly-logarithmic time from the seed, and the time for generating the seed is poly-logarithmic in the domain size of the underlying data.
• A variety of approximate aggregate functions on the original data can be computed using the sketches. Some examples of functions which can be computed from the sketch components are as follows:
  - Dot product of two streams. If (s_1 ... s_k) are the sketches from one stream and (t_1 ... t_k) are the sketches from the other stream, then s_j · t_j is a random variable whose expected value is the dot product.
  - Second moment. If (s_1 ... s_k) are the sketch components for a data stream, it can be shown that the expected value of s_j^2 is the second moment. Furthermore, by using Chernoff bounds, it can be shown that by selecting the median of O(log(1/δ)) averages of O(1/ε²) copies of s_j · t_j, it is possible to guarantee the accuracy of the approximation to within ε with probability at least 1 − δ.
  - Frequent items. The frequency of the ith item in the data stream is estimated by multiplying the sketch component s_j by r_i^j. However, this estimate is accurate only for frequent items, since the error of estimation is proportional to the overall frequency of the items in the data stream.

More details of the computations which one can perform with the AMS sketch are discussed in [34, 35].
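To make the AMS mechanics concrete, here is a minimal Python sketch of the update and the second-moment estimate. Two hedges apply: the 4-wise independent generator of [36] is replaced by a seeded cryptographic hash, and the median is taken over single components rather than over averages of groups, so this illustrates the idea rather than reproducing the exact (ε, δ) guarantee:

```python
import hashlib
import statistics

def rand_pm1(seed, j, item):
    # Stand-in for the 4-wise independent {-1, +1} generator of [36]:
    # derive a deterministic sign for this item in sketch component j.
    digest = hashlib.sha256(f"{seed}:{j}:{item}".encode()).digest()
    return 1 if digest[0] % 2 == 0 else -1

def ams_update(sketch, seed, item, freq=1):
    # Add freq * r_i^j to every sketch component s_j.
    for j in range(len(sketch)):
        sketch[j] += freq * rand_pm1(seed, j, item)

def ams_second_moment(sketch):
    # Each s_j^2 is an unbiased estimate of the second moment; the
    # chapter sharpens this by taking a median of averages of copies.
    return statistics.median(s * s for s in sketch)

sketch = [0] * 64
for item in ["a", "b", "a", "c", "a", "b"]:
    ams_update(sketch, seed=42, item=item)
# True second moment of the frequencies: 3^2 + 2^2 + 1^2 = 14.
print(ams_second_moment(sketch))
```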
The second kind of sketch used for counting is the count-min sketch [37]. The count-min sketch is based on the concept of hashing, and uses k = ⌈ln(1/δ)⌉ pairwise-independent hash functions, each of which hashes onto integers in the range 0 ... ⌈e/ε⌉. For each incoming item, the k hash functions are applied and the corresponding frequency counts are incremented by 1; in the event that the incoming item has frequency f, the corresponding frequency counts are incremented by f. Note that because each item is hashed into k cells together with other colliding items, each cell maintains an overestimate of the corresponding frequency. It can be shown that the minimum of these cells provides an ε-accurate estimate of the frequency with probability at least 1 − δ. It has been shown in [37] that the method can also be naturally extended to other problems, such as finding the dot product or the second-order moments. The count-min sketch is typically more effective than the projection-based AMS sketch for problems such as the frequency estimation of individual items; the AMS sketch, on the other hand, is more effective for problems such as second-moment estimation.
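The count-min update and query fit in a few lines. The following is a minimal Python sketch; the Carter-Wegman style universal hash used for the pairwise-independent hash functions is an illustrative assumption, since the chapter does not prescribe a particular family:

```python
import math
import random

class CountMinSketch:
    """Minimal count-min sketch: k = ceil(ln(1/delta)) rows of width
    w = ceil(e/epsilon), one pairwise-independent hash per row."""
    P = 2_147_483_647  # Mersenne prime for the universal hash

    def __init__(self, epsilon, delta):
        self.w = math.ceil(math.e / epsilon)
        self.k = math.ceil(math.log(1 / delta))
        self.counts = [[0] * self.w for _ in range(self.k)]
        self.hashes = [(random.randrange(1, self.P), random.randrange(self.P))
                       for _ in range(self.k)]

    def _cell(self, row, item):
        a, b = self.hashes[row]
        return ((a * hash(item) + b) % self.P) % self.w

    def update(self, item, freq=1):
        for row in range(self.k):
            self.counts[row][self._cell(row, item)] += freq

    def estimate(self, item):
        # Every cell overestimates; the minimum is the tightest bound.
        return min(self.counts[row][self._cell(row, item)]
                   for row in range(self.k))

cms = CountMinSketch(epsilon=0.01, delta=0.01)
for x in ["a", "b", "a"]:
    cms.update(x)
assert cms.estimate("a") >= 2  # never underestimates
```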
Wavelet Decomposition

Another widely known synopsis representation in data stream computation is the wavelet representation. One of the most widely used representations is the Haar wavelet, which we discuss in detail in this section. This technique is particularly simple to implement, and it is widely used in the literature for hierarchical decomposition and summarization. The basic idea of the wavelet technique is to decompose the data characteristics into a set of wavelet functions and basis functions. The property of the wavelet method is that the higher-order coefficients of the decomposition illustrate the broad trends in the data, whereas the more localized trends are captured by the lower-order coefficients.

We assume for ease of description that the length q of the series is a power of 2. This is without loss of generality, because it is always possible to decompose a series into segments, each of which has a length that is a power of two. The Haar wavelet decomposition defines 2^(k−1) coefficients of order k. Each of these 2^(k−1) coefficients corresponds to a contiguous portion of the time series of length q/2^(k−1); the ith of these coefficients corresponds to the segment starting at position (i−1) · q/2^(k−1) + 1 and ending at position i · q/2^(k−1). Let us denote this coefficient by ψ_k^i and the corresponding time series segment by S_k^i. At the same time, let us denote the average value of the first half of S_k^i by a_k^i and that of the second half by b_k^i. Then the value of ψ_k^i is given by (a_k^i − b_k^i)/2. More formally, if Φ_k^i denotes the average value of S_k^i, then the value of ψ_k^i can be defined recursively as follows:

    ψ_k^i = (Φ_(k+1)^(2i−1) − Φ_(k+1)^(2i)) / 2        (1)

The set of Haar coefficients is defined by the ψ_k^i coefficients of order 1 to log₂(q). In addition, the global average Φ_1^1 is required for the purpose of perfect reconstruction. We note that the coefficients of different orders provide an understanding of the major trends in the data at particular levels of granularity. For example, the coefficient ψ_k^i is half the quantity by which the first half of the segment S_k^i is larger than the second half of the same segment. Since larger values of k correspond to geometrically reducing segment sizes, one can obtain an understanding of the basic trends at different levels of granularity. This definition of the Haar wavelet makes the decomposition very easy to compute by a sequence of averaging and differencing operations. Table 1 shows how the wavelet coefficients are computed for the sequence (8, 6, 2, 3, 4, 6, 6, 5); the decomposition is illustrated in graphical form in Fig. 1. We also note that each value can be represented as a sum of log₂(8) = 3 linear decomposition components.

Table 1: An example of wavelet coefficient computation

  Granularity (order k)   Averages (Φ values)        DWT coefficients (ψ values)
  k = 4                   (8, 6, 2, 3, 4, 6, 6, 5)
  k = 3                   (7, 2.5, 5, 5.5)           (1, −0.5, −1, 0.5)
  k = 2                   (4.75, 5.25)               (2.25, −0.25)
  k = 1                   (5)                        (−0.25)

[Fig. 1: Illustration of the wavelet decomposition of the sequence (8, 6, 2, 3, 4, 6, 6, 5).]

In general, the entire decomposition may be represented as a tree of depth 3 which captures the hierarchical decomposition of the entire series. This is also referred to as the error tree. Figure 2 illustrates the error tree for the wavelet decomposition of Table 1. The nodes in the tree contain the values of the wavelet coefficients, except for a special super-root node which contains the series average. The super-root node is not necessary if we are only considering the relative values in the series, or if the series values have been normalized so that the average is already zero. We further note that the number of wavelet coefficients in this series is 8, which is also the length of the original series.

[Fig. 2: The error tree of the wavelet decomposition, showing each coefficient together with its relevant range of series positions, and the original series values reconstructed from the tree paths.]

The original series values can be reconstructed by adding or subtracting the values in the nodes along the path leading to each value: the coefficient in a node is added if the left branch below it is used to reach the series value, and subtracted if the right branch is used. This natural decomposition means that an entire contiguous range of the series can be reconstructed by using only the portion of the error tree which is relevant to it. Furthermore, we only need to retain those coefficients whose values are significantly large, since only these materially affect the values of the underlying series. In general, we would like to minimize the reconstruction error by retaining only a fixed number of coefficients, as defined by the space constraints.
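The averaging-and-differencing computation is easy to express in code. The following minimal Python sketch (names are illustrative) reproduces the values of Table 1:

```python
def haar_decompose(series):
    """One level at a time: replace the series by pairwise averages,
    collecting the pairwise half-differences as wavelet coefficients.
    Assumes len(series) is a power of two, as in the text."""
    averages = list(series)
    coeffs_by_order = {}                  # order k -> list of psi values
    k = len(series).bit_length() - 1      # log2(q), the highest order
    while len(averages) > 1:
        pairs = list(zip(averages[0::2], averages[1::2]))
        coeffs_by_order[k] = [(a - b) / 2 for a, b in pairs]
        averages = [(a + b) / 2 for a, b in pairs]
        k -= 1
    # Global average Phi_1^1 plus all psi coefficients.
    return averages[0], coeffs_by_order

avg, coeffs = haar_decompose([8, 6, 2, 3, 4, 6, 6, 5])
print(avg)     # 5.0
print(coeffs)  # {3: [1.0, -0.5, -1.0, 0.5], 2: [2.25, -0.25], 1: [-0.25]}
```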
While the wavelet decomposition is easy to perform for multi-dimensional data sets, it is much more challenging in the case of data streams, because data streams impose a one-pass constraint on the wavelet construction process. A variety of one-pass algorithms for wavelet construction are discussed in [30].

Histograms

The technique of histogram construction is closely related to that of wavelets. In histograms, the data is binned into a number of intervals along an attribute, and for any given query, the counts in the bins can be utilized for query resolution. A simple representation of the histogram method would simply partition the data into equi-depth or equi-width intervals. The main inaccuracy in the use of histograms is that the distribution of the data points within a bucket is not retained and is therefore assumed to be uniform; this causes inaccuracy because of extrapolation at the query boundaries. A natural choice is to use an equal number of counts in each bucket, which minimizes the variation of the error across different buckets. In the case of data streams, however, the boundaries to be used for equi-depth histogram construction are not known a priori. We further note that the design of equi-depth buckets is exactly the problem of quantile estimation, since the equi-depth partitions define the quantiles in the data. Another choice of histogram construction is to minimize the variance of the frequencies of the different values within each bucket. This ensures that the uniform-distribution assumption approximately holds when extrapolating the frequencies of the buckets at the two ends of a query. Such histograms are referred to as V-optimal histograms. Algorithms for V-optimal histogram construction are proposed in [38, 39], and a more detailed discussion of several algorithms for histogram construction may be found in [3].
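To make the connection to quantile estimation concrete, the following is a minimal offline sketch of equi-depth bucketing in Python. In a true stream, the boundaries below would have to come from a one-pass quantile-estimation algorithm; this sketch assumes the data fits in memory and is purely illustrative:

```python
def equi_depth_histogram(values, num_buckets):
    """Offline equi-depth bucketing: the boundaries are sample
    quantiles, so each bucket holds (roughly) the same count.
    In a stream these boundaries are not known a priori, which
    is exactly the quantile-estimation problem in the text."""
    data = sorted(values)
    n = len(data)
    boundaries = [data[(i * n) // num_buckets]
                  for i in range(1, num_buckets)]
    counts = [0] * num_buckets
    for x in values:
        bucket = sum(x >= b for b in boundaries)  # index of x's bucket
        counts[bucket] += 1
    return boundaries, counts

bounds, counts = equi_depth_histogram([5, 1, 9, 3, 7, 2, 8, 4, 6, 0], 5)
# bounds: [2, 4, 6, 8]; counts: [2, 2, 2, 2, 2]
```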
3.6 Dimensionality Reduction and Forecasting in Data Streams

Because of the inherent temporal nature of data streams, the problems of dimensionality reduction and forecasting are particularly important. When there are a large number of simultaneous data streams, the correlations between different streams can be used to make effective predictions [40, 41] about their future behavior. In particular, the well-known MUSCLES method [41] is useful for applying regression analysis to data streams; the regression analysis is helpful in predicting the future behavior of the data stream. A related technique is the SPIRIT algorithm, which explores the relationship between dimensionality reduction and forecasting in data streams. The primary idea is that a compact number of hidden variables can be used to describe the data stream comprehensively, and this compact representation can also be used for effective forecasting of the data streams. A discussion of different dimensionality reduction and forecasting methods (including SPIRIT) is provided in [3].

3.7 Distributed Mining of Data Streams

In many instances, streams are generated at multiple distributed computing nodes. An example of such a case would be sensor networks, in which the streams are generated at different sensor nodes. Analyzing and monitoring data in such environments requires data mining technology that optimizes a variety of criteria, such as the communication costs across different nodes, as well as the computational, memory, or storage requirements at each node. There are several management and mining challenges in such cases. When the streams are collected with the use of sensors, one must take into account the limited storage, computational power, and battery life of the sensor nodes. Furthermore, since the network may contain a very large number of sensor nodes, the effective aggregation of the streams becomes a considerable challenge. Distributed streams also pose several challenges to mining problems, since one must integrate the results of the mining algorithms across different nodes. A detailed discussion of several distributed mining algorithms is provided in [3].

Scientific Applications of Data Streams

Data streams have numerous applications in a variety of scientific scenarios. In this section, we discuss different applications of data streams and how they tie in to the techniques discussed earlier.

4.1 Network Monitoring

Many large telecommunication companies have massive streams of data containing information about phone calls between different nodes. In many cases, it is desirable to analyze the underlying data in order to determine the broad patterns in the data. This can be extremely difficult, especially if the number of source-destination combinations is very large. For example, if the company has over 10^6 possible nodes for both the source and the destination, the number of possible combinations is 10^12. Maintaining explicit information about such a large number of pairs is practically infeasible from both a space and a computational point of view. Many natural solutions devised for this problem rely on a sketch-based approach to compress and summarize the underlying data. Sketches are extremely efficient because they use an additive approach to summarize the underlying data stream. Sketches can be used to determine important patterns such as frequent call patterns, moments, or even joins across multiple data sources.

4.2 Intrusion Detection

In many network applications, intrusions appear as sudden bursts of patterns embedded in far larger streams of data over the world-wide web. This makes the problem extremely difficult, because one cannot scan the data twice, and we are looking for patterns which are embedded in a much greater volume of data. Stream clustering turns out to be quite useful for such problems, since we can isolate small clusters in the data from much larger volumes of data. The formation of new clusters often signifies an anomalous event which needs to be investigated. If desired, the problem can be combined with supervised mining of the underlying data. This can be done by creating supervised clusters, in which each cluster may belong only to a specific class. When known intrusions are received in the stream, they can be used to create class-specific clusters, and these class-specific clusters can then be used to determine the nature of new clusters which arise from unknown intrusion behavior.

4.3 Sensor Network Analysis

Sensors have played an increasingly important role in recent years in collecting a variety of scientific data from the environment. The challenges in processing sensor data are as follows:

• In many cases, sensor data may be uncertain in nature. The challenge is to clean the data in online fashion and then apply various application-specific algorithms to the problem. An example for the case of clustering uncertain data streams is discussed in [11].
• The number of streams which are processed together is typically very large, because of the large number of sensors at which the data may be collected. This leads to challenges in the effective storage and processing of such data.
• Often, the data from different sensors may only be available in aggregated form in order to save on storage space. This leads to challenges in the extraction of the underlying information.

Synopsis construction techniques are a natural approach for sensor problems because of the nature of the underlying aggregation. Sketch techniques, in particular, can be used to compress the underlying data and use it for a variety of data mining purposes.

4.4 Cosmological Applications

In recent years, cosmological applications have created large volumes of data. The installation of large space stations, space telescopes, and observatories results in large streams of data on different stars and clusters of galaxies. This data can be used to mine useful information about the behavior of different cosmological objects. Similarly, rovers and sensors on a planet or asteroid may send large amounts of image, video, or audio data. In many cases, it may not be possible to monitor such data manually and continuously, and it may instead be desirable to use stream mining techniques to detect the important underlying properties. The amount of data received in a single day in such applications can often exceed several terabytes. These data sources are especially challenging because the underlying applications may be spatial in nature, in which case an attempt to compress the data using standard synopsis techniques may lose the structure of the underlying data. Furthermore, the data may often contain imprecision in measurements. Such imprecision may call for techniques which leverage the uncertainty information in the data in order to improve the accuracy of the underlying results.

4.5 Mobile Applications

New technologies have recently emerged which can use the information gleaned from on-board sensors in a vehicle to monitor the diagnostic health of the vehicle as well as to characterize the driver. Two such applications are the VEDAS system [42] and the OnStar system designed by General Motors. Such systems require quick analysis of the underlying data in order to make diagnostic characterizations in real time, so effective event-detection algorithms are required. The stock market also creates large volumes of data streams which need to be analyzed in real time in order to make quick decisions about actions to be taken. An example of such an approach is the MobiMine system [43], which monitors the stock market with the use of a PDA.

4.6 Environmental and Weather Data

Many satellites and other scientific instruments collect environmental data such as cloud cover, wind speeds, humidity, and ocean currents. Such data can be used to make predictions about long- and short-term weather and climate changes, and it can be especially massive if the number of parameters measured is very large. The challenge is to combine these parameters in order to make timely and accurate predictions about weather-driven events. This is another application of event-detection techniques over massive streams of sensor data.

Conclusions and Research Directions

Data streams are a computational challenge to data mining problems because of the additional algorithmic constraints created by the large volume of data. In addition, the problem of temporal locality leads to a number of unique mining challenges in the data stream case. This chapter has provided an overview of the generic issues in processing data streams and of the specific issues which arise with different mining algorithms.
While considerable research has already been performed in the data stream area, numerous research directions remain to be explored. Most research in the stream area is focused on the one-pass constraint and generally does not deal effectively with the issue of temporal locality. In the stream case, temporal locality remains an extremely important issue, since the data may evolve considerably over time. Other important topics which need to be explored are as follows:

• Streams are often collected by devices such as sensors, in which the data is often noisy and error-prone. A key challenge is therefore to clean the data effectively. This may involve either imputing or modeling the underlying uncertain data, which can be a challenge in itself, since any modeling needs to be done in real time as large volumes of the data stream arrive.
• A related area of research is the use of the modeled data for data mining tasks. Since the underlying data is uncertain, the uncertainty should be used to improve the quality of the underlying results. Some recent research addresses the issue of clustering uncertain data streams [11].
• Many recent applications, such as privacy-preserving data mining, have not been studied effectively in the context of data streams. It is often a challenge to perform privacy transformations of continuously arriving data, since newly arriving data may compromise the integrity of earlier data. The data stream domain thus provides a number of unique challenges in the context of the privacy problem.

Acknowledgments. Research of the first author was sponsored in part by the US Army Research Laboratory and the UK Ministry of Defence under Agreement Number W911NF-06-3-0001. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies of the US Government, the US Army Research Laboratory, the UK Ministry of Defence, or the UK Government. The US and UK governments are authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notice hereon.

References

1. G. Cormode, M. Garofalakis, Sketching Streams Through the Net: Distributed Approximate Query Tracking, in VLDB Conference, 2005.
2. G. Kollios, J. Byers, J. Considine, M. Hadjieleftheriou, F. Li, Robust Aggregation in Sensor Networks, IEEE Data Engineering Bulletin, 2005.
3. C. Aggarwal, Data Streams: Models and Algorithms (Springer, Berlin, 2007).
4. Y. Chen, G. Dong, J. Han, B.W. Wah, J. Wang, Multi-Dimensional Regression Analysis of Time-Series Data Streams, in VLDB Conference, 2002.
5. G. Dong, J. Han, J. Lam, J. Pei, K. Wang, Mining Multi-Dimensional Constrained Gradients in Data Cubes, in VLDB Conference, 2001.
6. S. Guha, N. Mishra, R. Motwani, L. O'Callaghan, Clustering Data Streams, in IEEE FOCS Conference, 2000.
7. C. Aggarwal, J. Han, J. Wang, P. Yu, A Framework for Clustering Evolving Data Streams, in VLDB Conference, 2003.
8. T. Zhang, R. Ramakrishnan, M. Livny, BIRCH: An Efficient Data Clustering Method for Very Large Databases, in ACM SIGMOD Conference, 1996.
9. C. Aggarwal, C. Procopiuc, J. Wolf, P. Yu, J.-S. Park, Fast Algorithms for Projected Clustering, in ACM SIGMOD Conference, 1999.
10. C. Aggarwal, J. Han, J. Wang, P. Yu, A Framework for High Dimensional Projected Clustering of Data Streams, in VLDB Conference, 2004.
11. C. Aggarwal, P. Yu, A Framework for Clustering Uncertain Data Streams, in ICDE Conference, 2008.
12. C. Aggarwal, P. Yu, A Framework for Clustering Massive Text and Categorical Data Streams, in SIAM Data Mining Conference, 2006.
13. P. Domingos, G. Hulten, Mining High-Speed Data Streams, in ACM KDD Conference, 2000.
14. C. Aggarwal, J. Han, J. Wang, P. Yu, On-Demand Classification of Data Streams, in ACM KDD Conference, 2004.
15. G. Hulten, L. Spencer, P. Domingos, Mining Time-Changing Data Streams, in ACM KDD Conference, 2001.
16. R. Jin, G. Agrawal, Efficient Decision Tree Construction on Streaming Data, in ACM KDD Conference, 2003.
17. H. Wang, W. Fan, P. Yu, J. Han, Mining Concept-Drifting Data Streams Using Ensemble Classifiers, in ACM KDD Conference, 2003.
18. R. Agrawal, T. Imielinski, A. Swami, Mining Association Rules Between Sets of Items in Large Databases, in ACM SIGMOD Conference, 1993.
19. C. Giannella, J. Han, J. Pei, X. Yan, P. Yu, Mining Frequent Patterns in Data Streams at Multiple Time Granularities, in Proceedings of the NSF Workshop on Next Generation Data Mining, 2002.
20. R. Jin, G. Agrawal, An Algorithm for In-Core Frequent Itemset Mining on Streaming Data, in ICDM Conference, 2005.
21. G. Manku, R. Motwani, Approximate Frequency Counts over Data Streams, in VLDB Conference, 2002.
22. J. Xu Yu, Z. Chong, H. Lu, A. Zhou, False Positive or False Negative: Mining Frequent Itemsets from High-Speed Transaction Data Streams, in VLDB Conference, 2004.
23. Y. Chi, H. Wang, P. Yu, R. Muntz, Moment: Maintaining Closed Frequent Itemsets over a Stream Sliding Window, in ICDM Conference, 2004.
24. C. Giannella, J. Han, J. Pei, X. Yan, P. Yu, Mining Frequent Patterns in Data Streams at Multiple Time Granularities, NSF Workshop on Next Generation Data Mining, 2002.
25. J.H. Chang, W.S. Lee, Finding Recent Frequent Itemsets Adaptively over Online Data Streams, in ACM KDD Conference, 2003.
26. C. Aggarwal, A Framework for Diagnosing Changes in Evolving Data Streams, in ACM SIGMOD Conference, 2003.
27. T. Dasu, S. Krishnan, S. Venkatasubramanian, K. Yi, An Information-Theoretic Approach to Detecting Changes in Multi-Dimensional Data Streams, Duke University Technical Report CS-2005-06, 2005.
28. D. Kifer, S. Ben-David, J. Gehrke, Detecting Change in Data Streams, in VLDB Conference, 2004.
29. C. Aggarwal, On Biased Reservoir Sampling in the Presence of Stream Evolution, in VLDB Conference, 2006.
30. M. Garofalakis, J. Gehrke, R. Rastogi, Querying and Mining Data Streams: You Only Get One Look (a tutorial), in ACM SIGMOD Conference, 2002.
31. J.S. Vitter, Random Sampling with a Reservoir, ACM Transactions on Mathematical Software 11(1), 37, 1985.
32. B. Babcock, M. Datar, R. Motwani, Sampling from a Moving Window over Streaming Data, in SIAM Symposium on Discrete Algorithms (SODA), 2002.
33. W. Johnson, J. Lindenstrauss, Extensions of Lipschitz Mappings into a Hilbert Space, Contemporary Mathematics 26, 189, 1984.
34. N. Alon, P. Gibbons, Y. Matias, M. Szegedy, Tracking Joins and Self-Joins in Limited Storage, in ACM PODS Conference, 1999.
35. N. Alon, Y. Matias, M. Szegedy, The Space Complexity of Approximating the Frequency Moments (ACM, New York, 1996), pp. 20–29.
36. P. Indyk, Stable Distributions, Pseudorandom Generators, Embeddings and Data Stream Computation, in IEEE FOCS Conference, 2000.
37. G. Cormode, S. Muthukrishnan, An Improved Data Stream Summary: The Count-Min Sketch and its Applications, in LATIN, 2004, pp. 29–38.
38. Y. Ioannidis, V. Poosala, Balancing Histogram Optimality and Practicality for Query Result Size Estimation, in ACM SIGMOD Conference, 1995.
39. H. Jagadish, N. Koudas, S. Muthukrishnan, V. Poosala, K. Sevcik, T. Suel, Optimal Histograms with Quality Guarantees, in VLDB Conference, 1998.
40. Y. Sakurai, S. Papadimitriou, C. Faloutsos, BRAID: Stream Mining Through Group Lag Correlations, in ACM SIGMOD Conference, 2005.
41. B.-K. Yi, N.D. Sidiropoulos, T. Johnson, H.V. Jagadish, C. Faloutsos, A. Biliris, Online Data Mining for Co-Evolving Time Sequences, in ICDE Conference, 2000.
42. H. Kargupta et al., VEDAS: A Mobile and Distributed Data Stream Mining System for Vehicle Monitoring, in SDM Conference, 2004.
43. H. Kargupta et al., MobiMine: Monitoring the Stock Market Using a PDA, ACM SIGKDD Explorations, January 2002.

Index

Abduction, 79
Abductive inference, 79
absent words, 220
alphabet, 210
approximate repeats, 221
association rule mining, 239
automata, 212
Automated discovery, 77
Automated scientific discovery, 77
axiom, 129
axiomatisation, 126
Bayesian conditionalisation, 82
Bayesian epistemology, 82
Bayesian net, 84
branching tandem repeat, see repeat, 215, 219
Calibration, 82
Causal Bayesian net, 84
Causal Markov Condition, 84
Causal net, 84
chaining, 237
Concepts of the Sciences, 79
Confirmation, 78
Datalog, 134
Demarcation, 78
Discovery, 79
Dynamic interaction, 80
Equivocation, 82
Evidence integration, 85, 86
exact match, 225
  left maximal, 225
  maximal, 225
  rare MEMs, 228
  right maximal, 225, 227
Explanation, 78
Falsificationism, 80
Forecast aggregation, 85
FP-growth algorithm, 241
frequent itemsets, 239
Hypothesis choice, 79
Inductivism, 80
Influence relation, 86
k-mers, 226
Knowledge integration, 86
look-up table, 211
Machine learning, 77
maximal exact matches, see exact match, 227
Maximum entropy principle, 83
Model selection, 79
Modeling, 79
Objective Bayesian net, 87
Objective Bayesianism, 82
pattern matching, 223
  global, 223
  local exact matches, 225
  semi-global, 223
plagiarism, 243
prefix, 210
Probabilistic, 84
Probability, 82
Realism, 78
repeat
  branching tandem, 215, 219
  dispersed, 215
  fixed length, 215
  heuristics, 239
  left maximal, 214
  maximal, 214, 216
  repeated pair, 214
  right maximal, 214
  supermaximal, 214, 218
  tandem, 215, 219, 234
repeat, approximate
  tandem repeats, 234
repeated pair, see repeat
reverse complement, 211
Scientific machine learning, 77
Scientific Method, 78
seed-and-extend, 236
sequence alignment, 228
  global alignment, 229
  heuristics, 235
  local alignment, 231
  semi-global alignment, 232
  suffix-prefix alignment, 234
sequence comparison, 223
spam, 243
string, 210
string kernels, 242
Subjective Bayesianism, 82
substring, 210
suffix, 210
suffix array, 214
suffix tree, 213, 214
Systematising, 79
tandem repeat, see repeat, 215, 219, 234
text documents, 243
  semi-structured, 243
  unstructured, 243
  XML, 244
Theorising, 79
trie, 212
unique subsequences, 220
Unity, 78
