Bayesian Query-Focused Summarization

Hal Daumé III and Daniel Marcu
Information Sciences Institute
4676 Admiralty Way, Suite 1001
Marina del Rey, CA 90292
me@hal3.name, marcu@isi.edu

Abstract

We present BayeSum (for "Bayesian summarization"), a model for sentence extraction in query-focused summarization. BayeSum leverages the common case in which multiple documents are relevant to a single query. Using these documents as reinforcement for query terms, BayeSum is not afflicted by the paucity of information in short queries. We show that approximate inference in BayeSum is possible on large data sets and results in a state-of-the-art summarization system. Furthermore, we show how BayeSum can be understood as a justified query expansion technique in the language modeling for IR framework.

1 Introduction

We describe BayeSum, an algorithm for performing query-focused summarization in the common case that there are many relevant documents for a given query. Given a query and a collection of relevant documents, our algorithm functions by asking itself the following question: what is it about these relevant documents that differentiates them from the non-relevant documents? BayeSum can be seen as providing a statistical formulation of this exact question.

The key requirement of BayeSum is that multiple relevant documents are known for the query in question. This is not a severe limitation. In two well-studied problems, it is the de facto standard. In standard multidocument summarization (with or without a query), we have access to known relevant documents for some user need. Similarly, in the case of a web-search application, an underlying IR engine will retrieve multiple (presumably) relevant documents for a given query. For both of these tasks, BayeSum performs well, even when the underlying retrieval model is noisy.

The idea of leveraging known relevant documents is known as query expansion in the information retrieval community, where it has been shown to be successful in ad hoc retrieval tasks. Viewed from the perspective of IR, our work can be interpreted in two ways. First, it can be seen as an application of query expansion to the summarization task (or, in IR terminology, passage retrieval); see (Liu and Croft, 2002; Murdock and Croft, 2005). Second, and more importantly, it can be seen as a method for query expansion in a non-ad-hoc manner. That is, BayeSum is a statistically justified query expansion method in the language modeling for IR framework (Ponte and Croft, 1998).

2 Bayesian Query-Focused Summarization

In this section, we describe our Bayesian query-focused summarization model (BayeSum). This task is very similar to the standard ad-hoc IR task, with the important distinction that we are comparing query models against sentence models, rather than against document models. The shortness of sentences means that one must do a good job of creating the query models.

To maintain generality, so that our model is applicable to any problem for which multiple relevant documents are known for a query, we formulate our model in terms of relevance judgments. For a collection of D documents and Q queries, we assume we have a D × Q binary matrix r, where r_{dq} = 1 if and only if document d is relevant to query q. In multidocument summarization, r_{dq} will be 1 exactly when d is in the document set corresponding to query q; in search-engine summarization, it will be 1 exactly when d is returned by the search engine for query q.
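Concretely, the relevance matrix r is nothing more than a binary table indexed by document and query. The minimal sketch below shows one way such a matrix might be assembled from TREC-style relevance judgments; the qrels input format, the function name and the use of numpy are illustrative assumptions on our part, not details fixed by the model.

```python
import numpy as np

def build_relevance_matrix(qrels, doc_ids, query_ids):
    """Build the D x Q binary matrix r with r[d, q] = 1 iff document d is
    relevant to query q. `qrels` is an iterable of (query_id, doc_id, label)
    triples, e.g. parsed from a TREC qrels file (an assumed input format)."""
    d_index = {d: i for i, d in enumerate(doc_ids)}
    q_index = {q: j for j, q in enumerate(query_ids)}
    r = np.zeros((len(doc_ids), len(query_ids)), dtype=np.int8)
    for query_id, doc_id, label in qrels:
        if label > 0 and doc_id in d_index and query_id in q_index:
            r[d_index[doc_id], q_index[query_id]] = 1
    return r

# Toy usage: two documents, two queries.
qrels = [("q1", "d1", 1), ("q1", "d2", 0), ("q2", "d2", 1)]
r = build_relevance_matrix(qrels, ["d1", "d2"], ["q1", "q2"])
print(r)  # d1 is relevant to q1, d2 is relevant to q2
```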
2.1 Language Modeling for IR

BayeSum is built on the concept of language models for information retrieval. The idea behind the language modeling techniques used in IR is to represent either queries or documents (or both) as probability distributions, and then use standard probabilistic techniques for comparing them. These probability distributions are almost always "bag of words" distributions that assign a probability to words from a fixed vocabulary V.

One approach is to build a probability distribution for a given document, p_d(·), and to look at the probability of a query under that distribution: p_d(q). Documents are ranked according to how likely they make the query (Ponte and Croft, 1998). Other researchers have built probability distributions over queries, p_q(·), and ranked documents according to how likely they look under the query model: p_q(d) (Lafferty and Zhai, 2001). A third approach builds a probability distribution p_q(·) for the query, a probability distribution p_d(·) for the document, and then measures the similarity between these two distributions using KL divergence (Lavrenko et al., 2002):

\[
\mathrm{KL}\bigl(p_q \,\|\, p_d\bigr) = \sum_{w \in V} p_q(w) \log \frac{p_q(w)}{p_d(w)} \tag{1}
\]

The KL divergence between two probability distributions is zero when they are identical and otherwise strictly positive. It implicitly assumes that both distributions p_q and p_d have the same support: they assign non-zero probability to exactly the same subset of V. In order to account for this, the distributions p_q and p_d are smoothed against a background general English model. This final model, the KL model, is the one on which BayeSum is based.
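To make the KL criterion in Eq (1) concrete, the following minimal sketch ranks sentences against a query using smoothed unigram models. The smoothing scheme (simple linear interpolation against a collection-wide background model), the interpolation weight and the function names are illustrative assumptions rather than the exact choices used in our experiments.

```python
import math
from collections import Counter

def smoothed_lm(tokens, background, lam=0.8):
    """Unigram model linearly interpolated with a background 'general English' model."""
    counts = Counter(tokens)
    total = max(sum(counts.values()), 1)
    return {w: lam * counts[w] / total + (1 - lam) * p for w, p in background.items()}

def kl_divergence(p, q):
    """KL(p || q) over a shared vocabulary (both models are smoothed, so q(w) > 0)."""
    return sum(pw * math.log(pw / q[w]) for w, pw in p.items() if pw > 0)

def rank_sentences_by_kl(query_tokens, sentences, collection_counts):
    """Rank sentences by KL(query model || sentence model), smallest first."""
    total = sum(collection_counts.values())
    background = {w: c / total for w, c in collection_counts.items()}
    p_q = smoothed_lm(query_tokens, background)
    scored = [(kl_divergence(p_q, smoothed_lm(s, background)), s) for s in sentences]
    return sorted(scored, key=lambda x: x[0])

# Toy usage: a two-sentence "document" and a one-word query.
collection = Counter("the market fell the rain fell".split())
sents = ["the market fell".split(), "the rain fell".split()]
print(rank_sentences_by_kl(["market"], sents, collection))
```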
2.2 Bayesian Statistical Model

In the language of information retrieval, the query-focused sentence extraction task boils down to estimating a good query model, p_q(·). Once we have such a model, we could estimate sentence models for each sentence in a relevant document, and rank the sentences according to Eq (1).

The BayeSum system is based on the following model: we hypothesize that a sentence appears in a document because it is relevant to some query, because it provides background information about the document (but is not relevant to a known query), or simply because it contains useless, general English filler. Similarly, we model each word as appearing for one of those purposes. More specifically, our model assumes that each word can be assigned a discrete, exact source, such as "this word is relevant to query q_1" or "this word is general English." At the sentence level, however, sentences are assigned degrees: "this sentence is 60% about query q_1, 30% background document information, and 10% general English."

To model this, we define a general English language model, p_G(·), to capture the English filler. Furthermore, for each document d_k, we define a background document language model, p_{d_k}(·); similarly, for each query q_j, we define a query-specific language model, p_{q_j}(·). Every word in a document d_k is modeled as being generated from a mixture of p_G, p_{d_k} and {p_{q_j} : query q_j is relevant to document d_k}. Supposing there are J total queries and K total documents, we say that the nth word from the sth sentence in document d, w_{dsn}, has a corresponding hidden variable, z_{dsn}, that specifies exactly which of these distributions is used to generate that one word. In particular, z_{dsn} is a vector of length 1 + J + K, where exactly one element is 1 and the rest are 0.

At the sentence level, we introduce a second layer of hidden variables. For the sth sentence in document d, we let π_{ds} be a vector, also of length 1 + J + K, that represents our degree of belief that this sentence came from any of the models. The π_{ds} lie in the (J + K)-dimensional simplex Δ^{J+K} = {θ = ⟨θ_1, ..., θ_{J+K+1}⟩ : (∀i) θ_i ≥ 0, Σ_i θ_i = 1}. The interpretation of the π variables is that if the "general English" component of π is 0.9, then 90% of the words in this sentence will be general English. The π and z variables are constrained so that a sentence cannot be generated by a document language model other than its own document and cannot be generated by a query language model for a query to which it is not relevant.

Since the πs are unknown, and it is unlikely that there is a "true" correct value, we place a corpus-level prior on them. Since π is a multinomial distribution over its corresponding zs, it is natural to use a Dirichlet distribution as a prior over π. A Dirichlet distribution is parameterized by a vector α of equal length to the corresponding multinomial parameter, again with the positivity restriction, but no longer required to sum to one. It has continuous density over a variable θ = ⟨θ_1, ..., θ_I⟩ given by:

\[
\mathrm{Dir}(\theta \mid \alpha) = \frac{\Gamma\bigl(\sum_i \alpha_i\bigr)}{\prod_i \Gamma(\alpha_i)} \prod_i \theta_i^{\alpha_i - 1}
\]

The first term is a normalization term that ensures that ∫_Δ dθ Dir(θ | α) = 1.

2.3 Generative Story

The generative story for our model defines a distribution over a corpus of queries, {q_j}_{1:J}, and documents, {d_k}_{1:K}, as follows:

1. For each query j = 1 ... J: generate each word q_{jn} in q_j by p_{q_j}(q_{jn}).
2. For each document k = 1 ... K and each sentence s in document k:
   (a) Select the current sentence degree π_{ks} by Dir(π_{ks} | α) r_k(π_{ks}).
   (b) For each word w_{ksn} in sentence s:
       - Select the word source z_{ksn} according to Mult(z | π_{ks}).
       - Generate the word w_{ksn} by p_G(w_{ksn}) if z_{ksn} = 0, by p_{d_k}(w_{ksn}) if z_{ksn} = k + 1, and by p_{q_j}(w_{ksn}) if z_{ksn} = j + K + 1.

We used r to denote relevance judgments: r_k(π) = 0 if any document component of π other than the one corresponding to k is non-zero, or if any query component of π for a query to which document k is not deemed relevant is non-zero (this prevents a document from using the "wrong" document or query components). We have further assumed that the z vector is laid out so that z_0 corresponds to general English, z_{k+1} corresponds to document d_k for 0 ≤ k < K, and z_{j+K+1} corresponds to query q_j for 0 ≤ j < J.
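The generative story above can be read directly as a sampling procedure. The sketch below draws one toy sentence from the model; the specific vocabularies, the symmetric Dirichlet parameter, and the way the constraint r_k(π) is realized (drawing the Dirichlet only over the allowed components) are our own illustrative assumptions rather than details fixed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sentence(k, relevant_queries, p_G, p_docs, p_queries, alpha=0.5, n_words=8):
    """Sample one sentence of document k from the mixture model.

    The sentence degree pi is drawn from a Dirichlet restricted to the allowed
    components (general English, document k's own model, and the models of the
    queries relevant to k), which is one way to realize the constraint r_k(pi).
    """
    J, K = len(p_queries), len(p_docs)
    allowed = [0, k + 1] + [j + K + 1 for j in relevant_queries]
    pi = np.zeros(1 + J + K)
    pi[allowed] = rng.dirichlet(alpha * np.ones(len(allowed)))
    words = []
    for _ in range(n_words):
        z = rng.choice(1 + J + K, p=pi)            # word source z_{ksn}
        if z == 0:
            dist = p_G                              # general English
        elif z <= K:
            dist = p_docs[z - 1]                    # document model d_{z-1}
        else:
            dist = p_queries[z - K - 1]             # query model q_{z-K-1}
        vocab, probs = zip(*dist.items())
        words.append(rng.choice(vocab, p=probs))
    return words

# Toy distributions over a tiny vocabulary (purely illustrative).
p_G = {"the": 0.5, "of": 0.5}
p_docs = [{"stocks": 0.7, "market": 0.3}, {"rain": 0.6, "storm": 0.4}]
p_queries = [{"finance": 1.0}, {"weather": 1.0}]
print(sample_sentence(k=0, relevant_queries=[0], p_G=p_G,
                      p_docs=p_docs, p_queries=p_queries))
```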
2.4 Graphical Model

The graphical model corresponding to this generative story is shown in Figure 1. This model depicts the four known parameters in square boxes (α, p_Q, p_D and p_G), the three observed random variables in shaded circles (the queries q, the relevance judgments r and the words w), and the two unobserved random variables in empty circles (the word-level indicator variables z and the sentence-level degrees π). The rounded plates denote replication: there are J queries and K documents, containing S sentences in a given document and N words in a given sentence.

[Figure 1: Graphical model for the Bayesian Query-Focused Summarization Model.]

The joint probability over the observed random variables is given in Eq (2):

\[
p(q_{1:J}, r, d_{1:K}) = \Bigl[\prod_{j}\prod_{n} p_{q_j}(q_{jn})\Bigr] \times \prod_{k}\prod_{s} \int \mathrm{d}\pi_{ks}\; p(\pi_{ks} \mid \alpha, r) \prod_{n} \sum_{z_{ksn}} p(z_{ksn} \mid \pi_{ks})\, p(w_{ksn} \mid z_{ksn}) \tag{2}
\]

This expression computes the probability of the data by integrating out the unknown variables. In the case of the π variables, this is accomplished by integrating over Δ, the multinomial simplex, according to the prior distribution given by α. In the case of the z variables, this is accomplished by summing over all possible (discrete) values. The final word probability is conditioned on the z value by selecting the appropriate distribution from p_G, p_D and p_Q. Computing this expression and finding optimal model parameters is intractable due to the coupling of the variables under the integral.

3 Statistical Inference in BayeSum

Bayesian inference problems often give rise to intractable integrals, and a large variety of techniques have been proposed to deal with this. The most popular are Markov Chain Monte Carlo (MCMC), the Laplace (or saddle-point) approximation and the variational approximation. Another, less common, but very effective technique, especially for dealing with mixture models, is expectation propagation (Minka, 2001). In this paper, we will focus on expectation propagation; experiments not reported here have shown variational EM to perform comparably but take roughly 50% longer to converge.

Expectation propagation (EP) is an inference technique introduced by Minka (2001) as a generalization of both belief propagation and assumed density filtering. In his thesis, Minka showed that EP is very effective in mixture modeling problems, and later demonstrated its superiority to variational techniques in the Generative Aspect Model (Minka and Lafferty, 2003). The key idea is to compute an integral of a product of terms by iteratively applying a sequence of "deletion/inclusion" steps. Given an integral of the form ∫ dθ p(θ) ∏_n t_n(θ), EP approximates each term t_n by a simpler term t̃_n, giving Eq (3):

\[
\int \mathrm{d}\theta\; q(\theta), \qquad q(\theta) = p(\theta) \prod_n \tilde{t}_n(\theta) \tag{3}
\]

In each deletion/inclusion step, one of the approximate terms is deleted from q(·), leaving q^{-n}(·) = q(·)/t̃_n(·). A new approximation for t_n(·) is computed so that t̃_n(·) q^{-n}(·) has the same integral, mean and variance as t_n(·) q^{-n}(·). This new approximation, t̃_n(·), is then included back into the full expression for q(·) and the process repeats. This algorithm always has a fixed point, and there are methods for ensuring that the approximation remains in a location where the integral is well-defined. Unlike variational EM, the approximation given by EP is global, and often leads to much more reliable estimates of the true integral.

In the case of our model, we follow Minka and Lafferty (2003), who adapt the latent Dirichlet allocation model of Blei et al. (2003) to EP. Due to space constraints, we omit the inference algorithms and instead direct the interested reader to the description given by Minka and Lafferty (2003).
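The moment-matching operation at the heart of each deletion/inclusion step can be illustrated on a one-dimensional toy case: an awkward factor is replaced by a Gaussian term whose product with the "cavity" distribution matches the exact product in integral, mean and variance. The sketch below shows only this single matching step on a grid; it is not the full EP loop and not the inference algorithm used for BayeSum, and all numerical choices are our own.

```python
import numpy as np

# Exact factor t_n(theta): a non-Gaussian mixture, and the "cavity"
# q^{-n}(theta): the Gaussian left after deleting t~_n from q.
grid = np.linspace(-10.0, 10.0, 4001)
dx = grid[1] - grid[0]

def gauss(x, m, v):
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2.0 * np.pi * v)

t_n = 0.7 * gauss(grid, -1.0, 0.5) + 0.3 * gauss(grid, 3.0, 2.0)
q_minus_n = gauss(grid, 0.0, 4.0)

# Moment matching: choose t~_n so that t~_n(.) * q^{-n}(.) has the same
# integral, mean and variance as the exact product t_n(.) * q^{-n}(.).
exact = t_n * q_minus_n
Z = np.sum(exact) * dx                              # integral (zeroth moment)
mean = np.sum(grid * exact) * dx / Z                # mean
var = np.sum((grid - mean) ** 2 * exact) * dx / Z   # variance

approx_product = Z * gauss(grid, mean, var)         # matched Gaussian product
t_tilde_n = approx_product / q_minus_n              # implied approximate term

print(f"integral={Z:.4f}  mean={mean:.4f}  variance={var:.4f}")
```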
4 Search-Engine Experiments

The first experiments we run are for query-focused single-document summarization, where relevant documents are returned from a search engine, and a short summary is desired of each document.

4.1 Data

The data we use to train and test BayeSum is drawn from the Text REtrieval Conference (TREC) competitions. This data set consists of queries, documents and relevance judgments, exactly as required by our model. The queries are typically broken down into four fields of increasing length: the title (3-4 words), the summary (1 sentence), the narrative (2-4 sentences) and the concepts (a list of keywords). Obviously, one would expect that the longer the query, the better a model would be able to do, and this is borne out experimentally (Section 4.5).

Of the TREC data, we have trained our model on 350 queries (queries numbered 51-350 and 401-450) and all corresponding relevant documents. This amounts to roughly 43k documents, 2.1m sentences and 65.8m words. The mean number of relevant documents per query is 137 and the median is 81 (the most prolific query has 968 relevant documents). On the other hand, each document is relevant to, on average, 1.11 queries (the median is 5.5 and the most generally relevant document is relevant to 20 different queries). In all cases, we apply stemming using the Porter stemmer; for all other models, we remove stop words.

In order to evaluate our model, we had seven human judges manually perform the query-focused sentence extraction task. The judges were supplied with the full TREC query and a single document relevant to that query, and were asked to select up to four sentences from the document that best met the needs given by the query. Each judge annotated 25 queries with some overlap to allow for an evaluation of inter-annotator agreement, yielding annotations for a total of 166 unique query/document pairs. On the doubly annotated data, we computed the inter-annotator agreement using the kappa measure. The kappa value found was 0.58, which is low, but not abysmal (also, keep in mind that this is computed over only 25 of the 166 examples).

4.2 Evaluation Criteria

Since there are differing numbers of sentences selected per document by the human judges, one cannot compute precision and recall; instead, we opt for other standard IR performance measures. We consider three related criteria: mean average precision (MAP), mean reciprocal rank (MRR) and precision at 2 (P@2). MAP is computed by calculating precision at every sentence, in the order given by the system, up until all relevant sentences have been selected, and averaging these values. MRR is the reciprocal of the rank of the first relevant sentence. P@2 is the precision computed at the first point that two relevant sentences have been selected (in the rare case that humans selected only one sentence, we use P@1).
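For reference, the per-query versions of these criteria can be written down in a few lines; MAP and MRR are the means of the first two quantities over queries. The sketch below uses the standard formulations, so the exact averaging conventions of the evaluation above are an assumption on our part.

```python
def average_precision(ranking, relevant):
    """AP: mean of precision@k at each rank k where a relevant item appears."""
    hits, precisions = 0, []
    for k, item in enumerate(ranking, start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

def reciprocal_rank(ranking, relevant):
    """RR: 1 / rank of the first relevant item (0 if none is retrieved)."""
    for k, item in enumerate(ranking, start=1):
        if item in relevant:
            return 1.0 / k
    return 0.0

def precision_at_two_relevant(ranking, relevant):
    """P@2 as used above: precision at the first point where two relevant
    sentences have been selected; falls back to P@1 when only one exists."""
    target = min(2, len(relevant))
    hits = 0
    for k, item in enumerate(ranking, start=1):
        if item in relevant:
            hits += 1
            if hits == target:
                return hits / k
    return 0.0

# Toy usage: sentences 2 and 5 are the relevant ones.
ranking, relevant = [3, 2, 1, 5, 4], {2, 5}
print(average_precision(ranking, relevant),
      reciprocal_rank(ranking, relevant),
      precision_at_two_relevant(ranking, relevant))
```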
4.3 Baseline Models

As baselines, we consider four strawman models and two state-of-the-art information retrieval models. The first strawman, Random, ranks sentences randomly. The second strawman, Position, ranks sentences according to their absolute position (in the context of non-query-focused summarization, this is an incredibly powerful baseline). The third and fourth models are based on the vector space interpretation of IR. The third model, Jaccard, uses the standard Jaccard score (intersection over union) between each sentence and the query to rank sentences. The fourth, Cosine, uses TF-IDF weighted cosine similarity.

The two state-of-the-art IR models used as comparative systems are based on the language modeling framework described in Section 2.1. These systems compute a language model for each query and for each sentence in a document. Sentences are then ranked according to the KL divergence between the query model and the sentence model, smoothed against a general model estimated from the entire collection, as described in the case of document retrieval by Lavrenko et al. (2002). This is the first system we compare against, called KL. The second state-of-the-art system, KL-Rel, is based on augmenting the KL system with blind relevance feedback (query expansion). Specifically, we first run each query against the document set returned by the relevance judgments and retrieve the top n sentences. We then expand the query by interpolating the original query model with a query model estimated on these sentences. This serves as a method of query expansion. We ran experiments ranging n over {5, 10, 25, 50, 100} and the interpolation parameter over {0.2, 0.4, 0.6, 0.8}, and used oracle selection (on MRR) to choose the values that performed best (the results are thus overly optimistic). These values were n = 25 and an interpolation parameter of 0.4.

Of all the systems compared, only BayeSum and the KL-Rel model use the relevance judgments; however, they both have access to exactly the same information. The other models only run on the subset of the data used for evaluation (the corpus language model for the KL system and the IDF values for the Cosine model are computed on the full data set). EP ran for 2.5 hours.
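As a concrete illustration of the blind relevance feedback used by KL-Rel, the sketch below expands a query model by interpolating it with a model estimated on the top n sentences of a first-pass ranking. The data structures and function names are our own, and since the text does not say which side the 0.4 weight attaches to, the choice below is an assumption.

```python
from collections import Counter

def unigram(tokens):
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def expand_query_model(query_tokens, ranked_sentences, n=25, lam=0.4):
    """Blind relevance feedback: interpolate the original query model with a
    model estimated on the top-n sentences of a first-pass ranking.
    n=25 and lam=0.4 mirror the oracle-selected values reported above."""
    p_q = unigram(query_tokens)
    feedback_tokens = [w for sent in ranked_sentences[:n] for w in sent]
    p_fb = unigram(feedback_tokens) if feedback_tokens else {}
    vocab = set(p_q) | set(p_fb)
    return {w: lam * p_q.get(w, 0.0) + (1 - lam) * p_fb.get(w, 0.0) for w in vocab}

# Toy usage: a two-word query and a tiny first-pass ranking.
expanded = expand_query_model(
    ["solar", "power"],
    [["solar", "panels", "cost"], ["wind", "power", "turbines"]],
)
print(sorted(expanded.items()))
```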
System     MAP   MRR   P@2
Random     19.9  37.3  16.6
Position   24.8  41.6  19.9
Jaccard    17.9  29.3  16.7
Cosine     29.6  50.3  23.7
KL         36.6  64.1  27.6
KL-Rel     36.3  62.9  29.2
BayeSum    44.1  70.8  33.6

Table 1: Empirical results for the baseline models as well as BayeSum, when all query fields are used.

4.4 Performance on all Query Fields

Our first evaluation compares results when all query fields are used (title, summary, description and concepts [1]). These results are shown in Table 1. As we can see from these results, the Jaccard system alone is not sufficient to beat the position-based baseline. The Cosine system does beat the position baseline by a modest margin (5 points better in MAP, 9 points in MRR and 4 points in P@2), and is in turn beaten by the KL system (which is 7 points, 14 points and 4 points better in MAP, MRR and P@2, respectively). Blind relevance feedback (the parameters of which were chosen by an oracle to maximize the P@2 metric) actually hurts MAP and MRR performance by 0.3 and 1.2, respectively, and increases P@2 by 1.5. Over the best performing baseline system (either KL or KL-Rel), BayeSum wins by a margin of 7.5 points in MAP, 6.7 for MRR and 4.4 for P@2.

[1] A reviewer pointed out that concepts were later removed from TREC because they were "too good." Section 4.5 considers the case without the concepts field.

4.5 Varying Query Fields

Our next experimental comparison has to do with reducing the amount of information given in the query. In Table 2, we show the performance of the KL, KL-Rel and BayeSum systems as we use different query fields.

Fields         System    MAP   MRR   P@2
(none)         Position  24.8  41.6  19.9
Title          KL        19.9  32.6  17.8
Title          KL-Rel    31.9  53.8  26.1
Title          BayeSum   41.1  65.7  31.6
+Description   KL        31.5  58.3  24.1
+Description   KL-Rel    32.6  55.0  26.2
+Description   BayeSum   40.9  66.9  31.0
+Summary       KL        31.6  56.9  23.8
+Summary       KL-Rel    34.2  48.5  27.0
+Summary       BayeSum   42.0  67.8  31.8
+Concepts      KL        36.7  64.2  27.6
+Concepts      KL-Rel    36.3  62.9  29.2
+Concepts      BayeSum   44.1  70.8  33.6
No Query       BayeSum   39.4  64.7  30.4

Table 2: Empirical results for the position-based model, the KL-based models and BayeSum, with different inputs.

There are several things to notice in these results. First, the standard KL model without blind relevance feedback performs worse than the position-based model when only the 3-4 word title is available. Second, BayeSum using only the title outperforms the KL model with relevance feedback using all fields. In fact, one can apply BayeSum without using any of the query fields; in this case, only the relevance judgments are available to make sense of what the query might be. Even in this circumstance, BayeSum achieves a MAP of 39.4, an MRR of 64.7 and a P@2 of 30.4, still better across the board than KL-Rel with all query fields. While initially this seems counterintuitive, it is actually not so unreasonable: there is significantly more information available in several hundred positive relevance judgments than in a few sentences. However, the simple blind relevance feedback mechanism so popular in IR is unable to adequately model this.

With the exception of the KL model without relevance feedback, adding the description on top of the title does not seem to make any difference for any of the models (and, in fact, occasionally hurts according to some metrics). Adding the summary improves performance in most cases, but not significantly. Adding concepts tends to improve results slightly more substantially than any other field.

4.6 Noisy Relevance Judgments

Our model hinges on the assumption that, for a given query, we have access to a collection of known relevant documents. In most real-world cases, this assumption is violated. Even in multidocument summarization as run in the DUC competitions, the assumption of access to a collection of documents all relevant to a user need is unrealistic. In the real world, we will have to deal with document collections that "accidentally" contain irrelevant documents. The experiments in this section show that BayeSum is comparatively robust.

For this experiment, we use the IR engine that performed best in the TREC 1 evaluation: Inquery (Callan et al., 1992). We used the official TREC results of Inquery on the subset of the TREC corpus we consider. The Inquery R-precision on this task is 0.39 using the title only, and 0.51 using all fields. In order to obtain curves as the IR engine improves, we have linearly interpolated the Inquery rankings with the true relevance judgments. By tweaking the interpolation parameter, we obtain an IR engine with improving performance, but with a reasonable bias. We have run both BayeSum and KL-Rel on the relevance judgments obtained by this method for six values of the interpolation parameter. The results are shown in Figure 2.

[Figure 2: Performance with noisy relevance judgments. The X-axis is the R-precision of the IR engine and the Y-axis is the summarization performance in MAP. Solid lines are BayeSum, dotted lines are KL-Rel. Blue/stars indicate title only, red/circles indicate title+description+summary and black/pluses indicate all fields.]
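One plausible reading of the interpolation described above is sketched below: a noisy engine's scores are mixed with the oracle 0/1 judgments, and the top of the resulting ranking is taken as the (noisier or cleaner) relevant set. The paper does not spell this procedure out, so the details here, including how the relevant set is cut off, are assumptions of ours.

```python
def interpolated_relevant_set(engine_scores, true_relevant, beta, set_size):
    """Mix an IR engine's scores with oracle 0/1 judgments and keep the top
    documents as a simulated set of relevance judgments.
    beta = 0 uses the raw engine ranking; beta = 1 recovers the true judgments."""
    mixed = {
        doc: (1 - beta) * score + beta * (1.0 if doc in true_relevant else 0.0)
        for doc, score in engine_scores.items()
    }
    ranked = sorted(mixed, key=mixed.get, reverse=True)
    return set(ranked[:set_size])

# Toy usage: the engine slightly prefers an irrelevant document.
scores = {"d1": 0.9, "d2": 0.8, "d3": 0.2}
print(interpolated_relevant_set(scores, true_relevant={"d2", "d3"},
                                beta=0.5, set_size=2))
```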
As we can observe from the figure, the solid lines (BayeSum) are always above the dotted lines (KL-Rel). Considering the KL-Rel results alone, we can see that for a non-perfect IR engine, it makes little difference which query fields we use for the summarization task: they all obtain roughly equal scores. This is because the performance of KL-Rel is dominated by the performance of the IR engine. Looking only at the BayeSum results, we can see a much stronger, and perhaps surprising, difference. For an imperfect IR system, it is better to use only the title than to use the title, description and summary for the summarization component. We believe this is because the title is more on topic than the other fields, which contain terms like "A relevant document should describe . . . ." Nevertheless, BayeSum has a more upward trend than KL-Rel, which indicates that improved IR will result in improved summarization for BayeSum but not for KL-Rel.

5 Multidocument Experiments

We present two results using BayeSum in the multidocument summarization setting, based on the official results from the Multilingual Summarization Evaluation (MSE) and Document Understanding Conference (DUC) competitions in 2005.

5.1 Performance at MSE 2005

We participated in the Multilingual Summarization Evaluation (MSE) workshop with a system based on BayeSum. The task for this competition was generic (no query) multidocument summarization. Fortunately, not having a query is not a hindrance to our model. To account for the redundancy present in document collections, we applied a greedy selection technique that selects sentences central to the document cluster but far from previously selected sentences (Daumé III and Marcu, 2005a); one such greedy criterion is sketched at the end of this subsection.

In MSE, our system performed very well. According to the human "pyramid" evaluation, our system came first with a score of 0.529; the next best score was 0.489. In the automatic "Basic Element" evaluation, our system scored 0.0704 (with a 95% confidence interval of [0.0429, 0.1057]), which was the third best score on a site basis (out of 10 sites), and was not statistically significantly different from the best system, which scored 0.0981.
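The greedy criterion referred to above, selecting sentences that are central to the cluster yet far from what has already been chosen, can be sketched as follows. The centrality and similarity measures, the trade-off weight and all names are illustrative choices of ours rather than the exact procedure of Daumé III and Marcu (2005a).

```python
def greedy_select(sentences, centrality, similarity, k=5, trade_off=0.7):
    """Greedily pick k sentences, balancing centrality against redundancy.

    `centrality[i]` scores how central sentence i is to the cluster;
    `similarity(i, j)` scores the overlap between sentences i and j.
    """
    selected = []
    candidates = list(range(len(sentences)))
    while candidates and len(selected) < k:
        def score(i):
            redundancy = max((similarity(i, j) for j in selected), default=0.0)
            return trade_off * centrality[i] - (1 - trade_off) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return [sentences[i] for i in selected]

# Toy usage with word-overlap (Jaccard) similarity.
sents = [["stocks", "fell"], ["stocks", "fell", "sharply"], ["rain", "expected"]]
central = [0.9, 0.85, 0.6]
sim = lambda i, j: len(set(sents[i]) & set(sents[j])) / len(set(sents[i]) | set(sents[j]))
print(greedy_select(sents, central, sim, k=2))
```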
5.2 Performance at DUC 2005

We also participated in the Document Understanding Conference (DUC) competition. The chosen task for DUC was query-focused multidocument summarization. We entered a nearly identical system to DUC as to MSE, with an additional rule-based sentence compression component (Daumé III and Marcu, 2005b). Human evaluators considered both responsiveness (how well did the summary answer the query) and linguistic quality. Our system achieved the highest responsiveness score in the competition. We scored more poorly on the linguistic quality evaluation (only 5 out of about 30 systems performed worse); this is likely due to the sentence compression we performed on top of BayeSum. On the automatic Rouge-based evaluations, our system performed between third and sixth (depending on the Rouge parameters), but was never statistically significantly worse than the best performing systems.

6 Discussion and Future Work

In this paper we have described a model for automatically generating a query-focused summary, when one has access to multiple relevance judgments. Our Bayesian Query-Focused Summarization model (BayeSum) consistently outperforms contending, state-of-the-art information retrieval models, even when it is forced to work with significantly less information (either in the complexity of the query terms or the quality of the relevance judgments). When we applied our system as a stand-alone summarization model in the 2005 MSE and DUC tasks, we achieved among the highest scores in the evaluation metrics. The primary weakness of the model is that it currently operates only in a purely extractive setting.

One question that arises is: why does BayeSum so strongly outperform KL-Rel, given that BayeSum can be seen as a Bayesian formalism for relevance feedback (query expansion)? Both models have access to exactly the same information: the queries and the true relevance judgments. This is especially interesting given that the two relevance feedback parameters for KL-Rel were chosen optimally in our experiments, yet BayeSum consistently won out. One explanation for this performance win is that BayeSum provides a separate weight for each word, for each query. This gives it significantly more flexibility. Doing something similar with ad-hoc query expansion techniques is difficult due to the enormous number of parameters; see, for instance, (Buckley and Salton, 1995).

One significant advantage of working in the Bayesian statistical framework is that it gives us a straightforward way to integrate other sources of knowledge into our model in a coherent manner. One could consider, for instance, extending this model to the multi-document setting, where one would need to explicitly model redundancy across documents. Alternatively, one could include user models to account for novelty or user preferences along the lines of Zhang et al. (2002).

Our model is similar in spirit to the random-walk summarization model (Otterbacher et al., 2005). However, our model has several advantages over this technique. First, our model has no tunable parameters: the random-walk method has many (graph connectivity, various thresholds, choice of similarity metrics, etc.). Moreover, since our model is properly Bayesian, it is straightforward to extend it to model other aspects of the problem, or to related problems. Doing so in a non-ad-hoc manner in the random-walk model would be nearly impossible.

Another interesting avenue of future work is to relax the bag-of-words assumption. Recent work has shown, in related models, how this can be done for moving from bag-of-words models to bag-of-ngram models (Wallach, 2006); more interesting than moving to ngrams would be to move to dependency parse trees, which could likely be accounted for in a similar fashion. One could also potentially relax the assumption that the relevance judgments are known, and attempt to integrate them out as well, essentially performing IR and summarization simultaneously.

Acknowledgments

We thank Dave Blei and Tom Minka for discussions related to topic models, and the anonymous reviewers, whose comments have been of great benefit. This work was partially supported by the National Science Foundation, Grant IIS-0326276.
References

David Blei, Andrew Ng, and Michael Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research (JMLR), 3:993-1022, January.

Chris Buckley and Gerard Salton. 1995. Optimization of relevance feedback weights. In Proceedings of the Conference on Research and Developments in Information Retrieval (SIGIR).

Jamie Callan, Bruce Croft, and Stephen Harding. 1992. The INQUERY retrieval system. In Proceedings of the 3rd International Conference on Database and Expert Systems Applications.

Hal Daumé III and Daniel Marcu. 2005a. Bayesian multi-document summarization at MSE. In ACL 2005 Workshop on Intrinsic and Extrinsic Evaluation Measures.

Hal Daumé III and Daniel Marcu. 2005b. Bayesian summarization at DUC and a suggestion for extrinsic evaluation. In Document Understanding Conference.

John Lafferty and ChengXiang Zhai. 2001. Document language models, query models, and risk minimization for information retrieval. In Proceedings of the Conference on Research and Developments in Information Retrieval (SIGIR).

Victor Lavrenko, M. Choquette, and Bruce Croft. 2002. Cross-lingual relevance models. In Proceedings of the Conference on Research and Developments in Information Retrieval (SIGIR).

Xiaoyong Liu and Bruce Croft. 2002. Passage retrieval based on language models. In Proceedings of the Conference on Information and Knowledge Management (CIKM).

Thomas Minka and John Lafferty. 2003. Expectation-propagation for the generative aspect model. In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI).

Thomas Minka. 2001. A family of algorithms for approximate Bayesian inference. Ph.D. thesis, Massachusetts Institute of Technology, Cambridge, MA.

Vanessa Murdock and Bruce Croft. 2005. A translation model for sentence retrieval. In Proceedings of the Joint Conference on Human Language Technology and Empirical Methods in Natural Language Processing (HLT/EMNLP), pages 684-691.

Jahna Otterbacher, Gunes Erkan, and Dragomir R. Radev. 2005. Using random walks for question-focused sentence retrieval. In Proceedings of the Joint Conference on Human Language Technology and Empirical Methods in Natural Language Processing (HLT/EMNLP).

Jay M. Ponte and Bruce Croft. 1998. A language modeling approach to information retrieval. In Proceedings of the Conference on Research and Developments in Information Retrieval (SIGIR).

Hanna Wallach. 2006. Topic modeling: beyond bag-of-words. In Proceedings of the International Conference on Machine Learning (ICML).

Yi Zhang, Jamie Callan, and Thomas Minka. 2002. Novelty and redundancy detection in adaptive filtering. In Proceedings of the Conference on Research and Developments in Information Retrieval (SIGIR).