Techniques to incorporate the benefits of a Hierarchy in a modified hidden Markov model

Lin-Yi Chou
University of Waikato
Hamilton, New Zealand
lc55@cs.waikato.ac.nz

Abstract

This paper explores techniques to take advantage of the fundamental difference in structure between hidden Markov models (HMMs) and hierarchical hidden Markov models (HHMMs). The HHMM structure allows repeated parts of the model to be merged together. A merged model takes advantage of the recurring patterns within the hierarchy, and of the clusters that exist in some sequences of observations, in order to increase the extraction accuracy. This paper also presents a new technique for reconstructing grammar rules automatically. This work builds on the idea of combining a phrase extraction method with an HHMM to expose patterns within English text. The reconstruction is then used to simplify the complex structure of an HHMM. The models discussed here are evaluated by applying them to natural language tasks based on CoNLL-2004 (the 2004 Conference on Computational Natural Language Learning, http://cnts.uia.ac.be/conll2004) and a sub-corpus of the Lancaster Treebank (Lancaster/IBM Treebank, http://www.ilc.cnr.it/EAGLES96/synlex/node23.html).

Keywords: information extraction, natural language, hidden Markov models.

1 Introduction

Hidden Markov models (HMMs) were introduced in the late 1960s, and are widely used as a probabilistic tool for modeling sequences of observations (Rabiner and Juang, 1986). They have proven to be capable of assigning semantic labels to tokens over a wide variety of input types. This is useful for text-related tasks that involve some uncertainty, including part-of-speech tagging (Brill, 1995), text segmentation (Borkar et al., 2001), named entity recognition (Bikel et al., 1999) and information extraction tasks (McCallum et al., 1999). However, most natural language processing tasks depend on discovering a hierarchical structure hidden within the source information, an example being the prediction of semantic roles from English sentences. HMMs are less capable of reliably modeling these tasks. In contrast, hierarchical hidden Markov models (HHMMs) are better at capturing the underlying hierarchical structure. While there are several difficulties inherent in extracting information from the patterns hidden within natural language, more accurate models can be built by discovering that hierarchical structure.

HHMMs were first proposed by Fine et al. (1998) to resolve the complex multi-scale structures that pervade natural language, such as speech (Rabiner and Juang, 1986), handwriting (Nag et al., 1986), and text. Skounakis et al. (2003) described the HHMM as multiple "levels" of HMM states, where lower levels represent each individual output symbol and upper levels represent combinations of lower-level sequences. Any HHMM can be converted to an HMM by creating a state for every possible observation, a process called "flattening". Flattening is performed to simplify the model to a linear sequence of Markov states, thus decreasing processing time, but as a result of this process the model no longer contains any hierarchical structure. To reduce the model's complexity while maintaining some hierarchical structure, our algorithm uses a "partial flattening" process.
In recent years, artificial intelligence researchers have made strenuous efforts to reproduce the human interpretation of language, whereby patterns in grammar can be recognised and simplified automatically. Brill (1995) describes a simple rule-based approach for learning linguistic knowledge by rewriting the bracketing rule, a method for representing the structure of natural language text. Similarly, Krotov et al. (1999) put forward a method for eliminating redundant grammar rules by applying a compaction algorithm. This work draws upon the lessons learned from these sources by automatically detecting situations in which the grammar structure can be reconstructed. This is done by applying the phrase extraction method introduced by Pantel and Lin (2001) to rewrite the bracketing rule by calculating the dependency of each possible phrase. The outcome of this restructuring is to reduce the complexity of the hierarchical structure and the number of levels in the hierarchy.

This paper considers the tasks of identifying the syntactic structure of text chunking and grammar parsing with previously annotated text documents. It analyses the use of HHMMs, both before and after the application of the improvement techniques, for these tasks, then compares the results with HMMs.

This paper is organised as follows: Section 2 describes the method for training HHMMs. Section 3 describes the flattening process for reducing the depth of the hierarchical structure of HHMMs. Section 4 discusses the use of HHMMs for the text chunking task and the grammar parser. The evaluation results of the HMM, the plain HHMM, and the merged and partially flattened HHMMs are presented in Section 5. Finally, Section 6 discusses the results.

2 Hierarchical Hidden Markov Model

A HHMM is a structured multi-level stochastic process, and can be visualised as a tree-structured HMM (see Figure 1(b)). There are two types of states:

· Production state: a leaf node of the tree structure, which contains only observations (represented in Figure 1(b) as an empty circle).

· Internal state: contains several production states or other internal states (represented in Figure 1(b) as a circle with a cross inside).

The output of a HHMM is generated by a process of traversing some sequence of states within the model. At each internal state, the automaton traverses down the tree, possibly through further internal states, until it encounters a production state where an observation is contained. Thus, as it continues through the tree, the process generates a sequence of observations. The process ends when a final state is entered. The difference between a standard HMM and a hierarchical HMM is that individual states in the hierarchical model can traverse to a sequence of production states, whereas each state in the standard model is a production state that contains a single observation.

2.1 Merging

Figure 1: Example of a HHMM.

Figure 1(a) and Figure 1(b) illustrate the process of reconstructing a HMM as a HHMM. Figure 1(a) shows a HMM with 11 states. The two dashed boxes (A) indicate regions of the model that have a repeated structure. These regions are furthermore independent of the other states in the model. Figure 1(b) models the same structure as a hierarchical HMM, where each repeated structure is now grouped under an internal state. This HHMM uses a two-level hierarchical structure to expose more information about the transitions and probabilities within the internal states. These internal states, as discussed earlier, produce no observations of their own; instead, that is left to the child production states within them. Figure 1(b) shows that each internal state contains four production states. In some cases, different internal states of a HHMM correspond to exactly the same structure in the output sequence. This is modelled by making them share the same sub-models. Using a HHMM therefore allows for the merging of repeated parts of the structure, which results in fewer states needing to be identified, one of the three fundamental problems of HMM construction (Rabiner and Juang, 1986).
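As an illustration of the shared sub-model idea, the following minimal Python sketch (not the implementation evaluated in this paper; the class names, emission probabilities and transition matrices are purely illustrative) builds a two-level tree in which the repeated region A of Figure 1 is stored once and referenced twice by the root, which is precisely the saving that merging provides.

# Minimal sketch of a tree-structured HHMM with a shared (merged) sub-model.
class ProductionState:
    def __init__(self, emission):
        # emission: dict mapping observation symbol -> probability
        self.emission = emission

class InternalState:
    def __init__(self, name, children, transitions):
        # children: ProductionState or InternalState objects
        # transitions: transition probabilities between the children
        self.name = name
        self.children = children
        self.transitions = transitions

# One shared sub-model for the repeated region "A" of Figure 1(a).
shared_A = InternalState(
    "A",
    [ProductionState({"x": 0.7, "y": 0.3}) for _ in range(4)],
    [[0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0],
     [0.0, 0.0, 0.0, 0.0]],
)

# The root re-uses shared_A twice instead of duplicating its four states.
root = InternalState(
    "S",
    [shared_A, ProductionState({"z": 1.0}), shared_A],
    [[0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0],
     [0.0, 0.0, 0.0]],
)

print(root.children[0] is root.children[2])  # True: the repeated region is stored once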
2.2 Sub-model Calculation

Estimating the parameters for multi-level HHMMs is a complicated process. This section describes a probability estimation method for internal states, which transforms each internal state into three production states. Each internal state S_i in the HHMM is transformed by resolving each child production state S_{i,j} into one of three transformed states, S_i -> {s^{(i)}_{in}, s^{(i)}_{stay}, s^{(i)}_{out}}. The transformation requires re-calculating the new observation and transition probabilities for each of these transformed states. Figure 2 shows the internal states of S_2 transformed into s^{(2)}_{in}, s^{(2)}_{stay} and s^{(2)}_{out}.

Figure 2: Example of a transformed HHMM with the internal state S_2.

The procedure to transform internal states is: I) calculate the transformed observations (\bar{O}) for each internal state; II) apply the forward algorithm to estimate the state probabilities (\bar{b}) for the three transformed states; III) reform the transition matrix by including estimated values for the additional transformed internal states (\bar{A}).

I. Calculate the observation probabilities \bar{O}: every observation in each internal state S_i is re-calculated by summing up all the observation probabilities of its production states S_j as:

    \bar{O}_{i,t} = \sum_{j=1}^{N_i} O_{j,t},    (1)

where time t corresponds to a position in the sequence, O is an observation sequence over t, O_{j,t} is the observation probability for state S_j at time t, and N_i represents the number of production states for internal state S_i.

II. Apply the forward algorithm to estimate the transformed observation values \bar{b}: the transformed observation values are simplified to {\bar{b}^{(i)}_{in,t}, \bar{b}^{(i)}_{stay,t}, \bar{b}^{(i)}_{out,t}}, which are then given as the observation values for the three production states (s^{(i)}_{in}, s^{(i)}_{stay}, s^{(i)}_{out}). The observation probability of entering state S_i at time t, i.e. production state s^{(i)}_{in}, is given by:

    \bar{b}^{(i)}_{in,t} = \max_{j=1..N_i} \pi_j \times \bar{O}_{j,t},    (2)

where \pi_j represents the transition probability of entering child state S_j. The second probability, of staying in state S_i at time t, i.e. production state s^{(i)}_{stay}, is given by:

    \bar{b}^{(i)}_{stay,t} = \max_{j=1..N_i} A_{\hat{j},j} \times \bar{O}_{j,t}, \quad \hat{j} = \arg\max_{j=1..N_i} A_{\hat{j},j} \times \bar{O}_{j,t},    (3)

where \hat{j} is the index of the maximising child state calculated at the previous time t-1, and A_{\hat{j},j} represents the transition probability from state S_{\hat{j}} to state S_j. The third probability, of exiting state S_i at time t, i.e. production state s^{(i)}_{out}, is given by:

    \bar{b}^{(i)}_{out,t} = \max_{j=1..N_i} A_{\hat{j},j} \times \bar{O}_{j,t} \times \eta_j,    (4)

where \eta_j is the transition probability for leaving state S_j.

III. Reform the transition probabilities \bar{A}^{(i)}: each internal state S_i forms a new 3 x 3 transition probability matrix \bar{A}, which records the transition status for the transformed states. The formulas for the estimated cells in \bar{A} are:

    \bar{A}^{(i)}_{in,stay} = \sum_{j=1}^{N_i} \pi_j,    (5)

    \bar{A}^{(i)}_{in,out} = \frac{1}{2} \sum_{j=1}^{N_i} \pi_j,    (6)

    \bar{A}^{(i)}_{stay,stay} = \sum_{k=1}^{N_i} \sum_{j=1}^{N_i} A_{k,j},    (7)

    \bar{A}^{(i)}_{stay,out} = \sum_{j=1}^{N_i} \eta_j,    (8)

where N_i is the number of child states for state S_i, \bar{A}^{(i)}_{in,stay} is estimated by summing up all entry state probabilities for state S_i, \bar{A}^{(i)}_{in,out} is estimated from the observation that 50% of sequences transit from state s_{in} directly to state s_{out}, \bar{A}^{(i)}_{stay,stay} is the sum of all the internal transition probabilities within state S_i, and \bar{A}^{(i)}_{stay,out} is the sum of all exit state probabilities. The rest of the probabilities in the transition matrix \bar{A} are set to zero to prevent illegal transitions.

Each internal state is processed by a bottom-up algorithm using the values from equations (1)-(8), where lower levels of the hierarchy tree are calculated first to provide information for upper-level states. Once all the internal states have been calculated, the system then needs only the top level of the hierarchy tree to estimate the probability sequences. This means the model becomes a linear HMM for the final Viterbi search process (Viterbi, 1967).
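A sketch of how equations (1)-(8) can be applied to a single internal state is given below (Python with NumPy; it assumes the children are production states, so their transformed observations equal their plain observation probabilities, and it uses pi and eta for the entry and exit probabilities defined above; the function name and the toy numbers are illustrative, not the system's code).

import numpy as np

def transform_internal_state(O, A, pi, eta):
    """O: (N, T) child observation probabilities, A: (N, N) child transitions,
    pi: (N,) entry probabilities, eta: (N,) exit probabilities."""
    N, T = O.shape
    O_bar = O.sum(axis=0)                     # eq (1): summed observation per time step
    b_in = np.empty(T); b_stay = np.empty(T); b_out = np.empty(T)
    j_hat = int(np.argmax(pi * O[:, 0]))      # best entering child at the first step
    for t in range(T):
        b_in[t] = np.max(pi * O[:, t])        # eq (2): probability of entering S_i
        scores = A[j_hat, :] * O[:, t]
        b_stay[t] = scores.max()              # eq (3): probability of staying in S_i
        b_out[t] = np.max(scores * eta)       # eq (4): probability of leaving S_i
        j_hat = int(np.argmax(scores))        # carry the maximising child to time t+1
    # eqs (5)-(8): 3x3 matrix over (in, stay, out); all other cells remain zero
    A_bar = np.zeros((3, 3))
    A_bar[0, 1] = pi.sum()                    # in   -> stay
    A_bar[0, 2] = pi.sum() / 2.0              # in   -> out (the 50% assumption)
    A_bar[1, 1] = A.sum()                     # stay -> stay
    A_bar[1, 2] = eta.sum()                   # stay -> out
    return O_bar, (b_in, b_stay, b_out), A_bar

# Toy usage: three child states, two time steps.
O = np.array([[0.2, 0.5], [0.6, 0.1], [0.1, 0.3]])
A = np.array([[0.1, 0.6, 0.3], [0.2, 0.5, 0.3], [0.3, 0.3, 0.4]])
pi = np.array([0.5, 0.3, 0.2]); eta = np.array([0.1, 0.2, 0.3])
print(transform_internal_state(O, A, pi, eta))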
3 Partial flattening

Partial flattening is a process for reducing the depth of hierarchical structure trees. The process involves moving sub-trees from one node to another. This section presents an automatic partial flattening process that makes use of the term extractor method of Pantel and Lin (2001). The method discovers ways of more tightly coupling observation sequences within sub-models, thus eliminating rules within the HHMM. This results in a more accurate model.

The process involves calculating dependency values to measure the dependency between the elements in the state sequence (or observation sequence). This method uses mutual information and log-likelihood, which Dunning (1993) used to calculate the dependency value between words: where there is a higher dependency value between two words, they are more likely to be treated as a term. The process involves collecting bigram frequencies from a large dataset and identifying possible two-word candidates as terms. The first measurement used is mutual information, which is calculated using the formula:

    mi(x, y) = \frac{P(x, y)}{P(x) P(y)},    (9)

where x and y are words adjacent to each other in the training corpus, C(x, y) is the frequency of the two words, and * represents all the words in the entire training corpus. The log-likelihood ratio of x and y is defined as:

    logL(x, y) = ll(\frac{k_1}{n_1}, k_1, n_1) + ll(\frac{k_2}{n_2}, k_2, n_2) - ll(\frac{k_1 + k_2}{n_1 + n_2}, k_1, n_1) - ll(\frac{k_1 + k_2}{n_1 + n_2}, k_2, n_2),    (10)

where k_1 = C(x, y), n_1 = C(x, *), k_2 = C(\neg x, y), n_2 = C(\neg x, *), and

    ll(p, k, n) = k \log(p) + (n - k) \log(1 - p).    (11)

The system computes dependency values between states (tree nodes) or observations (tree leaves) in the same way. The mutual information and log-likelihood values are highest when the words are adjacent to each other throughout the entire corpus. By using these two values, the method is more robust against low frequency events.
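The two measures can be sketched as follows (an illustrative Python sketch following Dunning (1993); the bigram counting is deliberately simplified, and the clamping constant eps is an assumption added to keep log(0) out of the toy example).

import math
from collections import Counter

def dependency_scores(tokens):
    """Score each adjacent pair with eqs (9)-(11); counting is simplified."""
    eps = 1e-12
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n_tokens = len(tokens)
    n_bigrams = max(len(tokens) - 1, 1)

    def ll(p, k, n):                                   # eq (11)
        p = min(max(p, eps), 1.0 - eps)
        return k * math.log(p) + (n - k) * math.log(1.0 - p)

    scores = {}
    for (x, y), k1 in bigrams.items():
        # eq (9): mutual information of the adjacent pair (x, y)
        mi = (k1 / n_bigrams) / ((unigrams[x] / n_tokens) * (unigrams[y] / n_tokens))
        # eq (10): log-likelihood ratio, with n1 = C(x,*) and n2 = C(not-x,*)
        n1 = unigrams[x]
        k2 = unigrams[y] - k1
        n2 = n_tokens - n1
        logL = (ll(k1 / n1, k1, n1) + ll(k2 / max(n2, 1), k2, n2)
                - ll((k1 + k2) / (n1 + n2), k1, n1)
                - ll((k1 + k2) / (n1 + n2), k2, n2))
        scores[(x, y)] = (mi, logL)
    return scores

print(dependency_scores("AT1 JJ NN1 IO JJ CC JJ NN2".split()))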
Figure 3 is a tree representation of the HHMM; the figure illustrates the flattening process for the sentence:

    (S (N A_AT1 graphical_JJ zoo_NN1 (P of_IO (N ( strange_JJ and_CC peculiar_JJ ) attractors_NN2 ))))

where only the part-of-speech tags and grammar information are considered.

Figure 3: Partial flattening process for states N and P.

The left hand side of the figure shows the original structure of the sentence, and the right hand side shows the transformed structure. The model's hierarchy is reduced by one level: state P has become a sub-state of state S instead of N. The process is likely to be useful when state P is highly dependent on state N. The flattening process can be applied to the model based on two types of sequence dependency: observation dependency and state dependency.

· Observation dependency: the observation dependency value is based upon the observation sequence, which in Figure 3 would be the sequence of part-of-speech tags {AT1 JJ NN1 IO JJ CC JJ NN2}. Given that the observations NN1 and IO form a term with a high dependency value, the model re-constructs the sub-tree at IO's parent state P, moving it to the same level as state N, so that states P and N now share the same parent, state S.

· State dependency: the state dependency value is based upon the state sequence, which in Figure 3 would be {N, P, N}. The flattening process occurs when the current state has a high dependency value with the previous state, say N and P.

This paper determines the high dependency values by selecting the top n values from a list of all possible terms ranked by either observation or state dependency values, where n is a parameter that can be configured by the user for better performance. Table 1 shows the observation dependency values of terms of part-of-speech tags for Figure 3.

    term      dependency value
    NN1 IO    570.55
    IO JJ     570.55
    JJ CC     570.55
    CC JJ     570.55
    JJ NN2    295.24
    AT1 JJ    294.25
    JJ NN1    294.25

Table 1: Observation dependency values of part-of-speech tags.

The term NN1 IO has a higher dependency value than JJ NN1, therefore state P is joined as a sub-tree of state S. States P and N then remain unchanged, since state P has already been moved up a level of the tree. After the flattening process, state P is no longer a child of state N, and is instead joined as a sub-tree of state S, as shown in Figure 3.
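The re-parenting rule itself can be sketched as follows (an illustrative Python sketch assuming a state is encoded as a (label, children) pair and a leaf is a bare tag string; flatten_once, leaves and the term set are hypothetical names, not the system's code). Run on the Figure 3 sentence with the top term (NN1, IO), it moves state P up beside state N under S.

def leaves(node):
    """Left-to-right observation (tag) sequence under a node."""
    if isinstance(node, str):
        return [node]
    _, children = node
    return [tag for child in children for tag in leaves(child)]

def flatten_once(node, terms):
    """Re-parent any grandchild whose boundary bigram is a high-dependency term."""
    if isinstance(node, str):
        return node
    label, children = node
    new_children = []
    for child in children:
        child = flatten_once(child, terms)
        new_children.append(child)
        if isinstance(child, str):
            continue
        c_label, c_children = child
        promoted = []
        for sub in list(c_children):
            if isinstance(sub, str):
                continue
            before = leaves((c_label, c_children[:c_children.index(sub)]))
            if before and (before[-1], leaves(sub)[0]) in terms:
                c_children.remove(sub)      # detach the sub-state from its parent
                promoted.append(sub)        # ...and attach it to the grandparent
        new_children.extend(promoted)
    return (label, new_children)

# Top-n terms in the style of Table 1; ("NN1", "IO") triggers the move of P.
terms = {("NN1", "IO")}
tree = ("S", [("N", ["AT1", "JJ", "NN1",
                     ("P", ["IO", ("N", ["JJ", "CC", "JJ", "NN2"])])])])
print(flatten_once(tree, terms))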
4 Application

4.1 Text Chunking

Text chunking involves producing non-overlapping segments of low-level noun groups. The system uses the clause information to construct the hierarchical structure of text chunks, where the clauses represent the phrases within the sentence. The clauses can be embedded in other clauses but cannot overlap one another, and each clause contains one or more text chunks. Consider a sentence from the CoNLL-2004 corpus:

    (S (NP He_PRP) (VP reckons_VBZ) (S (NP the_DT current_JJ account_NN deficit_NN) (VP will_MD narrow_VB) (PP to_TO) (NP only_RB #_# 1.8_CD billion_D) (PP in_IN) (NP September_NNP)) (O ._.))

where the part-of-speech tag associated with each word is attached with an underscore, the clause information is identified by the S symbol and the chunk information is identified by the remaining symbols: NP (noun phrase), VP (verb phrase), PP (prepositional phrase) and O (null complementizer). The brackets are in Penn Treebank II style (The Penn Treebank Project, http://www.cis.upenn.edu/~treebank/home.html). The sentence can be re-expressed just as its part-of-speech tags: {PRP VBZ DT JJ NN NN MD VB TO RB # CD D IN NNP}, where only the part-of-speech tags and grammar information are considered for the extraction tasks. This is done so the system can minimise the computation cost inherent in learning a large number of unrequired observation symbols. Such an approach also maximises the efficiency of the training data by learning the pattern that is hidden within the words (syntax) rather than the words themselves (semantics).

Figure 4 represents an example of the tree representation of an HHMM for the text chunking task. This example involves a hierarchy with a depth of three. Note that state NP appears in two different levels of the hierarchy. In order to build an HHMM, the sentence shown above must be restructured as:

    (S (NP PRP) (VP VBZ) (S (NP DT JJ NN NN) (VP MD VB) (PP TO) (NP RB # CD D) (PP IN) (NP NNP)) (O .))

where the model makes no use of the word information contained in the sentence.

Figure 4: HHMM for syntax roles.

4.2 Grammar Parsing

Creation of a parse tree involves describing the language grammar in a tree representation, where each path of the tree represents a grammar rule. Consider a sentence from the Lancaster Treebank:

    (S (N A_AT1 graphical_JJ zoo_NN1 (P of_IO (N ( strange_JJ and_CC peculiar_JJ ) attractors_NN2 ))))

where the part-of-speech tag associated with each word is attached with an underscore, and the syntactic tag for each phrase occurs immediately after the opening bracket. In order to build the models from the parse tree, the system takes the part-of-speech tags as the observation sequences, and learns the structure of the model using the information expressed by the syntactic tags. During construction, phrases such as the noun phrase "( strange_JJ and_CC peculiar_JJ )" are grouped under a dummy state (N_d). Figure 5 illustrates the model in the tree representation, with the structure of the model based on the previous sentence from the Lancaster Treebank.

Figure 5: Parse tree for the HHMM.
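To show how such a bracketed sentence can be reduced to the observation sequence the models are trained on, here is a small Python sketch (a simplified reading of the format above, not the system's actual preprocessing; parse_bracketing and observations are illustrative names). It handles both the word_TAG form and the tag-only restructured form.

import re

def parse_bracketing(text):
    """Parse '(S (NP He_PRP) ...)' into (label, children) trees; leaves are POS tags."""
    tokens = re.findall(r"\(|\)|[^\s()]+", text)
    pos = 0
    def node():
        nonlocal pos
        assert tokens[pos] == "("
        pos += 1
        label = tokens[pos]
        pos += 1
        children = []
        while tokens[pos] != ")":
            if tokens[pos] == "(":
                children.append(node())
            else:
                word, _, tag = tokens[pos].rpartition("_")
                children.append(tag if word else tokens[pos])  # keep only the tag
                pos += 1
        pos += 1
        return (label, children)
    return node()

def observations(tree):
    """Flatten the state tree into the part-of-speech observation sequence."""
    if isinstance(tree, str):
        return [tree]
    return [t for child in tree[1] for t in observations(child)]

sent = "(S (NP He_PRP) (VP reckons_VBZ) (S (NP the_DT current_JJ account_NN deficit_NN)))"
tree = parse_bracketing(sent)
print(observations(tree))   # ['PRP', 'VBZ', 'DT', 'JJ', 'NN', 'NN']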
5 Evaluation

The first evaluation presents preliminary evidence that the merged hierarchical hidden Markov model (MHHMM) is able to produce more accurate results than either a plain HHMM or a HMM during the text chunking task. The results suggest that the partial flattening process is capable of improving model accuracy when the input data contains complex hierarchical structures. The evaluation involves analysing the results over two sets of data. The first is a selection of data from CoNLL-2004 and contains 8936 sentences. The second dataset is part of the Lancaster Treebank corpus and contains 1473 sentences. Each sentence contains hand-labeled syntactic roles for natural language text.

The first finding is that the size of the training data dramatically affects the prediction accuracy. A model with an insufficient number of observations typically has poor accuracy. In the text chunking task the number of observation symbols relies on the number of part-of-speech tags contained in the training data.

Figure 6: The graph of micro-average F-measure against the number of training sentences during text chunking (A: MHHMM, B: HHMM and C: HMM).

Figure 6 plots the micro-average F-measure for the three types of models (A: MHHMM, B: HHMM and C: HMM) under 10-fold cross validation, with the number of training sentences ranging from 200 to 1400. The result shows that the MHHMM has better accuracy than both the HHMM and the HMM, although the difference is less marked for the latter.

Figure 7: The average processing time for text chunking.

Figure 7 represents the average processing time for testing (in seconds) for the 10-fold cross validation. The tests were carried out on a dual P4-D computer running at 3GHz and with 1Gb RAM. The results indicate that the MHHMM gains efficiency, in terms of computation cost, by merging repeated sub-models, resulting in fewer states in the model. In contrast, the HMM has lower efficiency as it is required to identify every single path, leading to more states within the model and higher computation cost. The extra costs of constructing a HHMM, which will have the same number of production states as the HMM, make it the least efficient.

The second evaluation presents preliminary evidence that the partially flattened hierarchical hidden Markov model (PFHHMM) can assign propositions to language texts (grammar parsing) at least as accurately as the HMM. This assignment is a task that HHMMs are generally not well suited to. Table 2 shows the F1-measures of the identified semantic roles for each model on the Lancaster Treebank data set. The models used in this evaluation were trained with observation data from the Lancaster Treebank training set. The training set and testing set are sub-divided from the corpus in proportions of 2/3 and 1/3. The PFHHMMs had extra training conditions as follows: PFHHMM_obs_2000 made use of the partial flattening process, with the high dependency parameter determined by considering the highest 2000 dependency values from observation sequences in the corpus. PFHHMM_state_150 again uses partial flattening, however this time the highest 150 dependency values from state sequences were utilized in determining the high dependency threshold. The n values of 2000 and 150 were determined to be the optimal values when applied to the training set.

    State           Count   HMM    HHMM   PFHHMM_obs_2000   PFHHMM_state_150
    N               16387   0.874  0.811  0.882             0.874
    NULL             4670   0.794  0.035  0.744             0.743
    V                4134   0.768  0.755  0.804             0.791
    P                2099   0.944  0.936  0.928             0.926
    Fa                180   0.525  0.814  0.687             0.457
    Micro-Average           0.793  0.701  0.809             0.792

Table 2: F1-measure of the top 5 states during grammar parsing.

The results show that applying the partial flattening process to a model, using observation sequences to determine high dependency values, reduces the complexity of the model's hierarchy and consequently improves the model's accuracy. The state dependency method is shown to be less favorable for this particular task, but its micro-average result is still comparable with the HMM's performance. The results also show no significant relationship between the occurrence count of a state and the various models' prediction accuracy.
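For reference, the micro-averaged F-measure used above pools the counts over all states before computing precision and recall; a minimal sketch (with made-up counts, not the paper's figures) is:

def micro_f1(counts):
    """counts: dict state -> (true_positives, false_positives, false_negatives)."""
    tp = sum(c[0] for c in counts.values())
    fp = sum(c[1] for c in counts.values())
    fn = sum(c[2] for c in counts.values())
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(micro_f1({"N": (900, 80, 100), "V": (300, 40, 60), "P": (150, 10, 20)}))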
6 Discussion and Future Work

Due to the hierarchical structure of a HHMM, the model has the advantage of being able to reuse information for repeated sub-models. Thus the HHMM can perform more accurately, and require less computational time, than the HMM in certain situations. The merging and flattening techniques have been shown to be effective and could be applied to many kinds of data with hierarchical structures. The methods are especially appealing where the model involves complex structure or there is a shortage of training data. Furthermore, they address an important issue when dealing with small datasets: by using the hierarchical model to uncover less obvious structures, the model is able to increase its performance even over more limited source materials.

The experimental results have shown the potential of the merging and partial flattening techniques in building hierarchical models and providing better handling of states with lower observation counts. Further research in both experimental and theoretical aspects of this work is planned, specifically in the area of reconstructing hierarchies where recursive formations are present, and in the formal analysis and testing of the techniques.

References

D. M. Bikel, R. Schwartz and R. M. Weischedel. 1999. An Algorithm that Learns What's in a Name. Machine Learning, 34:211-231.

V. R. Borkar, K. Deshmukh and S. Sarawagi. 2001. Automatic Segmentation of Text into Structured Records. In Proceedings of SIGMOD.

E. Brill. 1995. Transformation-based error-driven learning and natural language processing: a case study in part-of-speech tagging. Computational Linguistics, 21(4):543-565.

T. Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19(1):61-74.

S. Fine, Y. Singer and N. Tishby. 1998. The Hierarchical Hidden Markov Model: Analysis and Applications. Machine Learning, 32:41-62.

A. Krotov, M. Hepple, R. Gaizauskas and Y. Wilks. 1999. Compacting the Penn Treebank Grammar. In Proceedings of COLING-98 (Montreal), pages 699-703.

A. McCallum, K. Nigam, J. Rennie and K. Seymore. 1999. Building Domain-Specific Search Engines with Machine Learning Techniques. In AAAI-99 Spring Symposium on Intelligent Agents in Cyberspace.

R. Nag, K. H. Wong, and F. Fallside. 1986. Script Recognition Using Hidden Markov Models. In Proceedings of ICASSP 86, pages 2071-2074, Tokyo.

P. Pantel and D. Lin. 2001. A Statistical Corpus-Based Term Extractor. In Stroulia, E. and Matwin, S. (Eds.), AI 2001, Lecture Notes in Artificial Intelligence, pages 36-46. Springer-Verlag.

L. R. Rabiner and B. H. Juang. 1986. An Introduction to Hidden Markov Models. IEEE ASSP Magazine, 3(1):4-16, January.

M. Skounakis, M. Craven and S. Ray. 2003. Hierarchical Hidden Markov Models for Information Extraction. In Proceedings of the 18th International Joint Conference on Artificial Intelligence, Acapulco, Mexico. Morgan Kaufmann.

A. J. Viterbi. 1967. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Transactions on Information Theory, IT-13:260-267.