Hidden Markov Models


Document information

Lecture slides on the Hidden Markov Models technique in image processing (graduate-level course)

Hidden Markov Models
Ankur Jain, Y7073

What is Covered
• Observable Markov model
• Hidden Markov model
• Evaluation problem
• Decoding problem

Markov Models
• Set of states: {s_1, s_2, ..., s_N}
• The process moves from one state to another, generating a sequence of states s_{i_1}, s_{i_2}, ..., s_{i_k}, ...
• Markov chain property: the probability of each subsequent state depends only on the previous state:
  P(s_{i_k} | s_{i_1}, s_{i_2}, ..., s_{i_{k-1}}) = P(s_{i_k} | s_{i_{k-1}})
• To define a Markov model, the following probabilities have to be specified: the transition probabilities a_{ij} = P(s_i | s_j) and the initial probabilities π_i = P(s_i)
• The output of the process is the sequence of states at each instant of time

Calculation of sequence probability
• By the Markov chain property, the probability of a state sequence can be found by the formula:
  P(s_{i_1}, s_{i_2}, ..., s_{i_k})
    = P(s_{i_k} | s_{i_1}, ..., s_{i_{k-1}}) · P(s_{i_1}, ..., s_{i_{k-1}})
    = P(s_{i_k} | s_{i_{k-1}}) · P(s_{i_1}, ..., s_{i_{k-1}})
    = ...
    = P(s_{i_k} | s_{i_{k-1}}) · P(s_{i_{k-1}} | s_{i_{k-2}}) · ... · P(s_{i_2} | s_{i_1}) · P(s_{i_1})

Example of Markov Model
[Diagram: two states, 'Rain' and 'Dry', with transition probabilities 0.3, 0.7, 0.2, 0.8]
• Two states: 'Rain' and 'Dry'
• Initial probabilities: say P('Rain') = 0.4, P('Dry') = 0.6
• Suppose we want to calculate the probability of a sequence of states in our example, {'Dry', 'Dry', 'Rain', 'Rain'}:
  P({'Dry', 'Dry', 'Rain', 'Rain'}) = ??
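The chain-rule factorization above can be sketched in a few lines of Python. This is a minimal illustration, assuming the diagram's self-loop probabilities are 0.3 for 'Rain' and 0.8 for 'Dry' (the same pattern as the 'Low'/'High' example later in the deck); the names `initial`, `transition`, and `sequence_probability` are ours, not the slides'.

```python
# Assumed reading of the 'Rain'/'Dry' diagram: self-loops 0.3 and 0.8.
initial = {"Rain": 0.4, "Dry": 0.6}
transition = {  # transition[prev][cur] = P(cur | prev)
    "Rain": {"Rain": 0.3, "Dry": 0.7},
    "Dry":  {"Rain": 0.2, "Dry": 0.8},
}

def sequence_probability(states):
    """P(s_1, ..., s_k) = P(s_1) * product of P(s_t | s_{t-1})."""
    p = initial[states[0]]
    for prev, cur in zip(states, states[1:]):
        p *= transition[prev][cur]
    return p

# The slide's question: 0.6 * 0.8 * 0.2 * 0.3 ≈ 0.0288
print(sequence_probability(["Dry", "Dry", "Rain", "Rain"]))
```

Under these assumed transition values the answer to the slide's question works out to about 0.0288.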
Hidden Markov models
• The observation is a probabilistic function (discrete or continuous) of a state, instead of a one-to-one correspondence with a state
• Each state randomly generates one of M observations (or visible states) {v_1, v_2, ..., v_M}
• To define a hidden Markov model, the following probabilities have to be specified: the matrix of transition probabilities A = (a_{ij}), a_{ij} = P(s_i | s_j); the matrix of observation probabilities B = (b_i(v_m)), b_i(v_m) = P(v_m | s_i); and a vector of initial probabilities π = (π_i), π_i = P(s_i). The model is represented by M = (A, B, π)

HMM Assumptions
• Markov assumption: the state transition depends only on the origin and destination states
• Output-independence assumption: each observation frame depends only on the state that generated it, not on neighbouring observation frames

Example of Hidden Markov Model
[Diagram: hidden states 'Low' and 'High' with transition probabilities 0.3, 0.7, 0.2, 0.8; observations 'Rain' and 'Dry' with emission probabilities 0.6, 0.4, 0.4, 0.6]
• Two hidden states: 'Low' and 'High' atmospheric pressure
• Two observations: 'Rain' and 'Dry'
• Transition probabilities: P('Low'|'Low') = 0.3, P('High'|'Low') = 0.7, P('Low'|'High') = 0.2, P('High'|'High') = 0.8
• Observation probabilities: P('Rain'|'Low') = 0.6, P('Dry'|'Low') = 0.4, P('Rain'|'High') = 0.4, P('Dry'|'High') = 0.6
• Initial probabilities: say P('Low') = 0.4, P('High') = 0.6

Calculation of observation sequence probability
• Suppose we want to calculate the probability of a sequence of observations in our example, {'Dry', 'Rain'}
• Consider all possible hidden state sequences:
  P({'Dry', 'Rain'}) = P({'Dry', 'Rain'}, {'Low', 'Low'}) + P({'Dry', 'Rain'}, {'Low', 'High'}) + P({'Dry', 'Rain'}, {'High', 'Low'}) + P({'Dry', 'Rain'}, {'High', 'High'})
  where the first term is:
  P({'Dry', 'Rain'}, {'Low', 'Low'}) = P({'Dry', 'Rain'} | {'Low', 'Low'}) · P({'Low', 'Low'})
  = ??
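The sum over all hidden state sequences can be sketched directly. This is an illustrative brute-force version, not code from the slides, using the 'Low'/'High' example's probabilities (with P('Dry'|'High') = 0.6, so each state's emission probabilities sum to one); the function name `observation_probability` is ours.

```python
from itertools import product

# Brute force: P(O) = sum over all hidden state sequences S of P(O, S).
initial = {"Low": 0.4, "High": 0.6}
trans   = {"Low":  {"Low": 0.3, "High": 0.7},
           "High": {"Low": 0.2, "High": 0.8}}
emit    = {"Low":  {"Rain": 0.6, "Dry": 0.4},
           "High": {"Rain": 0.4, "Dry": 0.6}}

def observation_probability(obs):
    total = 0.0
    for states in product(initial, repeat=len(obs)):
        # P(O, S) = pi_{s1} b_{s1}(o1) * product of a_{s(t-1),st} b_{st}(ot)
        p = initial[states[0]] * emit[states[0]][obs[0]]
        for t in range(1, len(obs)):
            p *= trans[states[t - 1]][states[t]] * emit[states[t]][obs[t]]
        total += p
    return total

print(observation_probability(["Dry", "Rain"]))  # ≈ 0.232
```

For {'Dry', 'Rain'} the four terms are 0.0288 + 0.0448 + 0.0432 + 0.1152 = 0.232; the cost of this enumeration grows as N^T, which motivates the forward-backward algorithms below.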
Word recognition example (3)
• If a lexicon is given, we can construct a separate HMM model for each lexicon word
  [Diagram: character-level HMMs for the words 'Amherst' (a m h e r s t) and 'Buffalo' (b u f f a l o), with example transition probabilities 0.5, 0.03, 0.4, 0.6]
• Here recognition of a word image is equivalent to the problem of evaluating a few HMM models
• This is an application of the Evaluation problem

Word recognition example (4)
• We can construct a single HMM for all words
• Hidden states = all characters in the alphabet
• Transition probabilities and initial probabilities are calculated from a language model
• Observations and observation probabilities are as before
  [Diagram: a single HMM whose hidden states are the characters a, m, f, r, t, o, b, h, e, s, v]
• Here we have to determine the best sequence of hidden states, the one that most likely produced the word image
• This is an application of the Decoding problem

Evaluation Problem
• Evaluation problem: given the HMM M = (A, B, π) and the observation sequence O = o_1 o_2 ... o_T, calculate the probability that model M has generated sequence O
• Direct evaluation: find the probability of the observations O = o_1 o_2 ... o_T by considering all hidden state sequences:
  P(o_1 o_2 ... o_T) = Σ_S P(o_1 o_2 ... o_T, S)   {S is a hidden state sequence}
  P(o_1 o_2 ... o_T) = Σ_S P(o_1 o_2 ... o_T | S) · P(S)
  P(S) = P(s_{i_1}) · P(s_{i_2} | s_{i_1}) · ... · P(s_{i_T} | s_{i_{T-1}})   {Markov property}
  P(o_1 o_2 ... o_T | S) = b_{i_1}(o_1) · b_{i_2}(o_2) · ... · b_{i_T}(o_T)   {output-independence assumption}
• N^T hidden state sequences: exponential complexity
• Use the forward-backward HMM algorithms for efficient calculation
• Define the forward variable α_k(i) as the joint probability of the partial observation sequence o_1 o_2 ... o_k and of the hidden state at time k being s_i:
  α_k(i) = P(o_1 o_2 ... o_k, q_k = s_i)

Trellis representation of an HMM
[Diagram: trellis with states s_1, ..., s_N on the vertical axis and time steps k, k+1, ..., K on the horizontal axis; observations o_1, ..., o_K along the top; each edge from state s_i at time k to state s_j at time k+1 carries transition probability a_{ij}]

Forward recursion for HMM
• Initialization: α_1(i) = P(o_1, q_1 = s_i) = π_i b_i(o_1), 1 ≤ i ≤ N
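The forward recursion described above can be sketched on the 'Low'/'High' example. This is our illustrative implementation (the names `forward`, `states`, `trans`, and `emit` are not from the slides): the initialization matches the slide's α_1(i) = π_i b_i(o_1), and the recursion step α_{k+1}(j) = [Σ_i α_k(i) a_{ij}] b_j(o_{k+1}) replaces the N^T enumeration with work linear in T.

```python
# Forward algorithm on the assumed 'Low'/'High' pressure example.
states  = ["Low", "High"]
initial = {"Low": 0.4, "High": 0.6}
trans   = {"Low":  {"Low": 0.3, "High": 0.7},
           "High": {"Low": 0.2, "High": 0.8}}
emit    = {"Low":  {"Rain": 0.6, "Dry": 0.4},
           "High": {"Rain": 0.4, "Dry": 0.6}}

def forward(obs):
    # Initialization: alpha_1(i) = pi_i * b_i(o_1)
    alpha = {s: initial[s] * emit[s][obs[0]] for s in states}
    # Recursion: alpha_{k+1}(j) = [sum_i alpha_k(i) * a_{ij}] * b_j(o_{k+1})
    for o in obs[1:]:
        alpha = {j: sum(alpha[i] * trans[i][j] for i in states) * emit[j][o]
                 for j in states}
    # Termination: P(O) = sum_i alpha_T(i)
    return sum(alpha.values())

print(forward(["Dry", "Rain"]))  # ≈ 0.232, agreeing with the brute-force sum
```

Each time step costs O(N²) instead of branching over all N^T paths, which is the point of the forward-backward machinery.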

Posted: 14/03/2014, 23:47


Table of contents

  • Hidden Markov Models

  • What is Covered

  • Markov Models

  • Calculation of sequence probability

  • Example of Markov Model

  • Hidden Markov models

  • HMM Assumptions

  • Example of Hidden Markov Model

  • Slide 9

  • Calculation of observation sequence probability

  • Main issues using HMMs

  • Word recognition example (1)

  • Word recognition example (2)

  • Word recognition example (3)

  • Word recognition example (4)

  • Evaluation Problem

  • PowerPoint Presentation

  • Trellis representation of an HMM

  • Forward recursion for HMM

  • Backward recursion for HMM
