Unsupervised Learning of Narrative Event Chains

Nathanael Chambers and Dan Jurafsky
Department of Computer Science, Stanford University, Stanford, CA 94305
{natec,jurafsky}@stanford.edu

Abstract

Hand-coded scripts were used in the 1970s and 1980s as knowledge backbones that enabled inference and other NLP tasks requiring deep semantic knowledge. We propose unsupervised induction of similar schemata called narrative event chains from raw newswire text. A narrative event chain is a partially ordered set of events related by a common protagonist. We describe a three-step process for learning narrative event chains. The first step uses unsupervised distributional methods to learn narrative relations between events sharing coreferring arguments. The second applies a temporal classifier to partially order the connected events. Finally, the third prunes and clusters self-contained chains from the space of events. We introduce two evaluations: the narrative cloze, to evaluate event relatedness, and an order coherence task, to evaluate narrative order. We show a 36% improvement over baseline for narrative prediction and 25% for temporal coherence.

1 Introduction

This paper induces a new representation of structured knowledge called narrative event chains (or narrative chains). Narrative chains are partially ordered sets of events centered around a common protagonist. They are related to structured sequences of participants and events that have been called scripts (Schank and Abelson, 1977) or Fillmorean frames. These participants and events can be filled in and instantiated in a particular text situation to draw inferences. Chains focus on a single actor to facilitate learning, and thus this paper addresses the three tasks of chain induction: narrative event induction, temporal ordering of events, and structured selection (pruning the event space into discrete sets).

Learning these prototypical schematic sequences of events is important for rich understanding of text. Scripts were central to natural language understanding research in the 1970s and 1980s for proposed tasks such as summarization, coreference resolution and question answering. For example, Schank and Abelson (1977) proposed that understanding text about restaurants required knowledge about the Restaurant Script, including the participants (Customer, Waiter, Cook, Tables, etc.), the events constituting the script (entering, sitting down, asking for menus, etc.), and the various preconditions, ordering, and results of each of the constituent actions.

Consider these two distinct narrative chains:

    accused X       W joined
    X claimed       W served
    X argued        W oversaw
    dismissed X     W resigned

It would be useful for question answering or textual entailment to know that 'X denied' is also a likely event in the left chain, while 'replaces W' temporally follows the right. Narrative chains (such as Firing of Employee or Executive Resigns) offer the structure and power to directly infer these new subevents by providing critical background knowledge. In part due to its complexity, automatic induction has not been addressed since the early non-statistical work of Mooney and DeJong (1985).

The first step to narrative induction uses an entity-based model for learning narrative relations by following a protagonist.
As a narrative progresses through a series of events, each event is characterized by the grammatical role played by the protagonist, and by the protagonist's shared connection to surrounding events. Our algorithm is an unsupervised distributional learning approach that uses coreferring arguments as evidence of a narrative relation. We show, using a new evaluation task called narrative cloze, that our protagonist-based method leads to better induction than a verb-only approach.

The next step is to order events in the same narrative chain. We apply work in the area of temporal classification to create partial orders of our learned events. We show, using a coherence-based evaluation of temporal ordering, that our partial orders lead to better coherence judgements of real narrative instances extracted from documents. Finally, the space of narrative events and temporal orders is clustered and pruned to create discrete sets of narrative chains.

2 Previous Work

While previous work has not focused specifically on learning narratives[1], our work draws from two lines of research in summarization and anaphora resolution. In summarization, topic signatures are sets of terms indicative of a topic (Lin and Hovy, 2000). They are extracted from hand-sorted (by topic) sets of documents using log-likelihood ratios. These terms can capture some narrative relations, but the model requires topic-sorted training data.

[1] We analyzed FrameNet (Baker et al., 1998) for insight, but found that very few of the frames are event sequences of the type characterizing narratives and scripts.

Bean and Riloff (2004) proposed the use of caseframe networks as a kind of contextual role knowledge for anaphora resolution. A caseframe is a verb/event and a semantic role (e.g., kidnapped). Caseframe networks are relations between caseframes that may represent synonymy (kidnapped and abducted) or related events (kidnapped and released). Bean and Riloff learn these networks from two topic-specific texts and apply them to the problem of anaphora resolution. Our work can be seen as an attempt to generalize the intuition of caseframes (finding an entire set of events rather than just pairs of related frames) and apply it to a different task (finding a coherent structured narrative in non-topic-specific text).

More recently, Brody (2007) proposed an approach similar to caseframes that discovers high-level relatedness between verbs by grouping verbs that share the same lexical items in subject/object positions. He calls these shared arguments anchors. Brody learns pairwise relations between clusters of related verbs, similar to the results with caseframes. A human evaluation of these pairs shows an improvement over baseline. This and previous caseframe work lend credence to learning relations from verbs with common arguments.

We also draw from lexical chains (Morris and Hirst, 1991), indicators of text coherence based on word overlap/similarity. We use a related notion of protagonist overlap to motivate narrative chain learning. Work on semantic similarity learning such as Chklovski and Pantel (2004) also automatically learns relations between verbs. We use similar distributional scoring metrics, but differ in our use of a protagonist as the indicator of relatedness. We also use typed dependencies and the entire space of events for similarity judgements, rather than only pairwise lexical decisions.

Finally, Fujiki et al. (2003) investigated script acquisition by extracting the 41 most frequent pairs of events from the first paragraph of newswire articles, using the assumption that the paragraph's textual order follows temporal order. Our model, by contrast, learns entire event chains, uses more sophisticated probabilistic measures, and uses temporal ordering models instead of relying on document order.
3 The Narrative Chain Model

3.1 Definition

Our model is inspired by Centering (Grosz et al., 1995) and other entity-based models of coherence (Barzilay and Lapata, 2005) in which an entity is in focus through a sequence of sentences. We propose to use this same intuition to induce narrative chains. We assume that although a narrative has several participants, there is a central actor who characterizes a narrative chain: the protagonist. Narrative chains are thus structured by the protagonist's grammatical roles in the events. In addition, narrative events are ordered by some theory of time. This paper describes a partial ordering with the before (no overlap) relation.

Our task, therefore, is to learn events that constitute narrative chains. Formally, a narrative chain is a partially ordered set of narrative events that share a common actor. A narrative event is a tuple of an event (most simply a verb) and its participants, represented as typed dependencies. Since we are focusing on a single actor in this study, a narrative event is thus a tuple of the event and the typed dependency of the protagonist: (event, dependency). A narrative chain is a set of narrative events {e_1, e_2, ..., e_n}, where n is the size of the chain, together with a relation B(e_i, e_j) that is true if narrative event e_i occurs strictly before e_j in time.

3.2 The Protagonist

The notion of a protagonist motivates our approach to narrative learning. We make the following assumption of narrative coherence: verbs sharing coreferring arguments are semantically connected by virtue of narrative discourse structure. A single document may contain more than one narrative (or topic), but the narrative assumption states that a series of argument-sharing verbs is more likely to participate in a narrative chain than verbs that do not share arguments. In addition, the narrative approach captures grammatical constraints on narrative coherence. Simple distributional learning might discover that the verb push is related to the verb fall, but narrative learning can capture additional facts about the participants, specifically, that the object or patient of the push is the subject or agent of the fall. Each focused protagonist chain offers one perspective on a narrative, similar to the multiple perspectives on a commercial transaction event offered by buy and sell.

3.3 Partial Ordering

A narrative chain, by definition, includes a partial ordering of events. Early work on scripts included ordering constraints with more complex preconditions and side effects on the sequence of events. This paper presents work toward a partial ordering and leaves logical constraints as future work. We focus on the before relation, but the model does not preclude advanced theories of temporal order.
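To make the formal definition concrete, the following is a minimal Python sketch of the representation just described. The class and field names are our own illustrative choices, not notation from the paper, and the example chain is the left-hand chain from the introduction.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class NarrativeEvent:
    """A narrative event: a verb plus the typed dependency linking it
    to the protagonist, e.g. (push, subj) or (fall, obj)."""
    verb: str
    dependency: str  # grammatical role of the protagonist: "subj" or "obj"

@dataclass
class NarrativeChain:
    """A partially ordered set of narrative events sharing one protagonist.
    `before` holds the pairs (e_i, e_j) for which B(e_i, e_j) is true."""
    events: set[NarrativeEvent] = field(default_factory=set)
    before: set[tuple[NarrativeEvent, NarrativeEvent]] = field(default_factory=set)

# The firing-of-employee chain: the protagonist X is the object/patient of
# the accusation events and the subject/agent of the speech events.
firing = NarrativeChain(events={
    NarrativeEvent("accuse", "obj"),
    NarrativeEvent("claim", "subj"),
    NarrativeEvent("argue", "subj"),
    NarrativeEvent("dismiss", "obj"),
})
firing.before.add((NarrativeEvent("accuse", "obj"), NarrativeEvent("dismiss", "obj")))
assert NarrativeEvent("claim", "subj") in firing.events
```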
4 Learning Narrative Relations

Our first model learns basic information about a narrative chain: the protagonist and the constituent subevents, although not their ordering. For this we need a metric for the relation between an event and a narrative chain.

Pairwise relations between events are first extracted unsupervised. A distributional score based on how often two events share grammatical arguments (using pointwise mutual information) is used to create this pairwise relation. Finally, a global narrative score is built such that all events in the chain provide feedback on the event in question (whether for inclusion or for decisions of inference).

Given a list of observed verb/dependency counts, we approximate the pointwise mutual information (PMI) by:

    pmi(e(w,d), e(v,g)) = \log \frac{P(e(w,d), e(v,g))}{P(e(w,d)) \, P(e(v,g))}        (1)

where e(w,d) is the verb/dependency pair w and d (e.g., e(push, subject)). The numerator is defined by:

    P(e(w,d), e(v,g)) = \frac{C(e(w,d), e(v,g))}{\sum_{x,d} \sum_{y,f} C(e(x,d), e(y,f))}        (2)

where C(e(x,d), e(y,f)) is the number of times the two events e(x,d) and e(y,f) had a coreferring entity filling the values of the dependencies d and f. We also adopt the 'discount score' to penalize low-occurring words (Pantel and Ravichandran, 2004).

Given the debate over appropriate metrics for distributional learning, we also experimented with the t-test. Our experiments found that PMI outperforms the t-test on this task, both on its own and when the two metrics are interpolated using various mixture weights.

Once pairwise relation scores are calculated, a global narrative score can then be built such that all events provide feedback on the event in question. For instance, given all narrative events in a document, we can find the next most likely event to occur by maximizing:

    \max_{j : 0 < j < m} \sum_{i=0}^{n} pmi(e_i, f_j)        (3)

where e_0, ..., e_n are the events in the chain and f_j ranges over the m candidate events observed in training.
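To make the scoring concrete, here is a minimal Python sketch of equations (1)-(3) over a toy count table. The counts and event names are invented for illustration, and a simple epsilon smoothing stands in for the discount score of Pantel and Ravichandran (2004), so this shows the shape of the method rather than the paper's implementation.

```python
import math
from collections import Counter

# C[(e1, e2)]: number of times events e1 and e2 (verb/dependency pairs)
# shared a coreferring argument. Toy counts; a real model would collect
# them from parsed, coreference-resolved newswire text.
C = Counter({
    (("accuse", "obj"), ("claim", "subj")): 8,
    (("accuse", "obj"), ("dismiss", "obj")): 5,
    (("claim", "subj"), ("argue", "subj")): 7,
    (("claim", "subj"), ("dismiss", "obj")): 6,
    (("push", "subj"), ("fall", "obj")): 12,
})
total = sum(C.values())
events = {e for pair in C for e in pair}
EPS = 1e-9  # stand-in smoothing instead of the discount score

def joint(e1, e2):
    """Equation (2): P(e1, e2), symmetric in its two arguments."""
    return (C[(e1, e2)] + C[(e2, e1)]) / total

def marginal(e):
    """P(e), obtained by summing the joint over all observed events."""
    return sum(joint(e, other) for other in events)

def pmi(e1, e2):
    """Equation (1): pointwise mutual information between two events."""
    return math.log((joint(e1, e2) + EPS) / (marginal(e1) * marginal(e2)))

def next_event(chain, candidates):
    """Equation (3): the candidate with maximal summed PMI to the chain."""
    return max(candidates, key=lambda f: sum(pmi(e, f) for e in chain))

chain = [("accuse", "obj"), ("claim", "subj")]
print(next_event(chain, [("argue", "subj"), ("dismiss", "obj"), ("fall", "obj")]))
```

On these toy counts, the chain {accused X, X claimed} selects (dismiss, obj) as the most likely next event, since it shares coreferring arguments with both chain events, mirroring the firing narrative from the introduction.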
5 Ordering Narrative Events

The connected events of a narrative chain are next ordered in time. We train a temporal classifier on the Timebank corpus (Pustejovsky et al., 2003) to label pairs of events with the before relation (or none), and apply it to the events of our chains in the Gigaword corpus (Graff, 2002). To evaluate the learned orders, we test whether they can distinguish a document's actual chain from reversed and randomly permuted versions of it.

[Figure 4: A narrative chain and its reverse order.]

Given the set E of a chain's event pairs and a candidate order, we define the score of the order as:

    \sum_{(x,y) \in E} \begin{cases}
      B(x,y)   & \text{if } x \prec y \text{ and } B(x,y) \geq B(y,x) \\
      -B(y,x)  & \text{if } x \prec y \text{ and } B(y,x) > B(x,y) \\
      -D(x,y)  & \text{if } x \not\prec y,\ y \not\prec x,\ \text{and } D(x,y) > 0 \\
      0        & \text{otherwise}
    \end{cases}

where E is the set of all event pairs, B(i,j) is how many times we classified events i and j as before in Gigaword, and D(i,j) = |B(i,j) - B(j,i)|. The relation i ≺ j indicates that i is temporally before j.

5.4 Results

Our approach gives higher scores to orders that coincide with the pairwise orderings classified in our Gigaword training data. The results are shown in Figure 5. Of the 69 chains, 6 did not have any ordered events and were removed from the evaluation. We generated (up to) 300 random orderings for each of the remaining 63. We report 75.2% accuracy overall, but 22 of the 63 chains had 5 or fewer pairs of ordered events. Figure 5 therefore also shows results for chains with at least 6 ordered pairs, and with at least 10. As we would hope, the accuracy improves the larger the ordered narrative chain: we achieve 89.0% accuracy on the 24 documents whose chains most progress through time, rather than chains that are difficult to order with just the before relation.

Training without none relations resulted in high recall for before decisions. Perhaps due to data sparsity, this produces our best results as reported above.

             correct        incorrect   tie
    All      8086 (75%)     1738        931
    >= 6     7603 (78%)     1493        627
    >= 10    6307 (89%)     619         160

Figure 5: Results for choosing the correct ordered chain. (>= 10) means there were at least 10 pairs of ordered events in the chain.

6 Discrete Narrative Event Chains

Up to this point, we have learned narrative relations across all possible events, including their temporal order. However, the discrete lists of events for which Schank scripts are most famous have not yet been constructed. We intentionally did not set out to reproduce explicit self-contained scripts in the sense that the 'restaurant script' is complete and cannot include other events. The name narrative was chosen to imply a likely order of events that is common in spoken and written retelling of world events. Discrete sets have the drawback of shutting out unseen and unlikely events from consideration. It is advantageous to consider a space of possible narrative events and the ordering within it, not a closed list.

However, it is worthwhile to construct discrete narrative chains, if only to see whether the combination of event learning and ordering produces script-like structures. This is easily achievable by using the PMI scores from section 4 in an agglomerative clustering algorithm, and then applying the ordering relations from section 5 to produce a directed graph; a sketch of this step appears at the end of this section.

[Figure 6: An automatically learned Prosecution Chain. Arrows indicate the before relation.]

[Figure 7: An Employment Chain. Dotted lines indicate incorrect before relations.]

Figures 6 and 7 show two learned chains after clustering and ordering. Each arrow indicates a before relation. Duplicate arrows implied by rules of transitivity are removed. Figure 6 is remarkably accurate, and Figure 7 addresses one of the chains from our introduction, the employment narrative. The core employment events are accurate, but clustering included life events (born, died, graduated) from obituaries, for which some temporal information is incorrect. The Timebank corpus does not include obituaries, so we suffer from sparsity in the training data.
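The following Python sketch illustrates how the pieces might combine: scoring a candidate order against the pairwise before counts B (a reconstruction of the section 5 score, simplified to total orders, where the unordered-pair case never fires), then single-link agglomerative clustering driven by PMI. The counts, threshold, linkage choice, and the stand-in PMI function are all invented for illustration; the paper's clustering parameters are not specified here.

```python
from itertools import combinations

# before[(i, j)]: how many times the temporal classifier labeled event i
# as "before" event j; toy counts invented for illustration.
before = {("join", "serve"): 9, ("serve", "oversee"): 7,
          ("oversee", "resign"): 8, ("join", "resign"): 6,
          ("resign", "join"): 1}

def B(i, j):
    return before.get((i, j), 0)

def coherence(order):
    """Reward each pair ordered with the majority 'before' direction and
    penalize each pair ordered against it."""
    score = 0
    for i, x in enumerate(order):
        for y in order[i + 1:]:  # x precedes y in this candidate order
            score += B(x, y) if B(x, y) >= B(y, x) else -B(y, x)
    return score

original = ["join", "serve", "oversee", "resign"]
assert coherence(original) > coherence(list(reversed(original)))

def agglomerative(events, pmi, threshold):
    """Single-link agglomerative clustering: repeatedly merge the two
    clusters whose best cross-cluster PMI exceeds the threshold."""
    clusters = [{e} for e in events]
    merged = True
    while merged and len(clusters) > 1:
        merged, best, pair = False, threshold, None
        for a, c in combinations(range(len(clusters)), 2):
            link = max(pmi(x, y) for x in clusters[a] for y in clusters[c])
            if link > best:
                best, pair = link, (a, c)
        if pair:
            a, c = pair
            clusters[a] |= clusters.pop(c)
            merged = True
    return clusters

# Demo with a stand-in PMI derived from the toy counts; a real run would
# use the PMI scores of section 4. Directed edges x -> y can then be drawn
# whenever B(x, y) > B(y, x), as in Figures 6 and 7.
toy_pmi = lambda x, y: 1.0 if (x, y) in before or (y, x) in before else -1.0
print(agglomerative({"join", "serve", "oversee", "resign"}, toy_pmi, 0.0))
```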
7 Discussion

We have shown that it is possible to learn narrative event chains unsupervised from raw text. Not only do our narrative relations show improvements over a baseline, but narrative chains offer hope for many other areas of NLP. Inference, coherence in summarization and generation, slot filling for question answering, and frame induction are all potential areas.

We learned a new measure of similarity, the narrative relation, using the protagonist as a hook to extract a list of related events from each document. The 37% improvement over a verb-only baseline shows that we may not need presorted topics of documents to learn inferences. In addition, we applied state-of-the-art temporal classification to show that sets of events can be partially ordered. Judgements of coherence can then be made over chains within documents. Further work in temporal classification may increase accuracy even further.

Finally, we showed how the event space of narrative relations can be clustered to create discrete sets. While it is unclear if these are better than an unconstrained distribution of events, they do offer insight into the quality of narratives.

An important area not discussed in this paper is the possibility of using narrative chains for semantic role learning. A narrative chain can be viewed as defining the semantic roles of an event, constraining it against the roles of the other events in the chain. An argument's class can then be defined as the set of narrative arguments in which it appears.

We believe our model provides an important first step toward learning the rich causal, temporal and inferential structure of scripts and frames.

Acknowledgments: This work is funded in part by DARPA through IBM and by the DTO Phase III Program for AQUAINT through Broad Agency Announcement (BAA) N61339-06-R-0034. Thanks to the reviewers for helpful comments and the suggestion for a non-full-coreference baseline.

References

Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Proceedings of ACL-98, pages 86-90, San Francisco, California. Morgan Kaufmann.

Regina Barzilay and Mirella Lapata. 2005. Modeling local coherence: An entity-based approach. In Proceedings of ACL-05, pages 141-148.

David Bean and Ellen Riloff. 2004. Unsupervised learning of contextual role knowledge for coreference resolution. In Proceedings of HLT/NAACL-04, pages 297-304.

Samuel Brody. 2007. Clustering clauses for high-level relation detection: An information-theoretic approach. In Proceedings of ACL-07, pages 448-455.

Nathanael Chambers, Shan Wang, and Dan Jurafsky. 2007. Classifying temporal relations between events. In Proceedings of ACL-07, Prague, Czech Republic.

Timothy Chklovski and Patrick Pantel. 2004. VerbOcean: Mining the web for fine-grained semantic verb relations. In Proceedings of EMNLP-04.

Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of LREC-06, pages 449-454.

Tony Deyes. 1984. Towards an authentic 'discourse cloze'. Applied Linguistics, 5(2).

Toshiaki Fujiki, Hidetsugu Nanba, and Manabu Okumura. 2003. Automatic acquisition of script knowledge from a text collection. In Proceedings of EACL-03, pages 91-94.

David Graff. 2002. English Gigaword. Linguistic Data Consortium.

Barbara J. Grosz, Aravind K. Joshi, and Scott Weinstein. 1995. Centering: A framework for modelling the local coherence of discourse. Computational Linguistics, 21(2).

Mirella Lapata and Alex Lascarides. 2006. Learning sentence-internal temporal relations. Journal of AI Research, 27:85-117.

Chin-Yew Lin and Eduard Hovy. 2000. The automated acquisition of topic signatures for text summarization. In Proceedings of COLING-00, pages 495-501.

Inderjeet Mani, Marc Verhagen, Ben Wellner, Chong Min Lee, and James Pustejovsky. 2006. Machine learning of temporal relations. In Proceedings of ACL-06.

Raymond Mooney and Gerald DeJong. 1985. Learning schemata for natural language processing. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence (IJCAI-85), pages 681-687.

Jane Morris and Graeme Hirst. 1991. Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 17:21-43.

Patrick Pantel and Deepak Ravichandran. 2004. Automatically labeling semantic classes. In Proceedings of HLT/NAACL-04, pages 321-328.

James Pustejovsky, Patrick Hanks, Roser Sauri, Andrew See, David Day, Lisa Ferro, Robert Gaizauskas, Marcia Lazo, Andrea Setzer, and Beth Sundheim. 2003. The TimeBank corpus. In Corpus Linguistics, pages 647-656.

Roger C. Schank and Robert P. Abelson. 1977. Scripts, Plans, Goals and Understanding. Lawrence Erlbaum.

Wilson L. Taylor. 1953. Cloze procedure: a new tool for measuring readability. Journalism Quarterly, 30:415-433.