SIGIR 2007 Proceedings, Session 9: Topic Detection and Tracking

Topic Segmentation with Shared Topic Detection and Alignment of Multiple Documents

Bingjun Sun, Prasenjit Mitra, Hongyuan Zha, C. Lee Giles, John Yen
Department of Computer Science and Engineering and College of Information Sciences and Technology, The Pennsylvania State University, University Park, PA 16802
College of Computing, The Georgia Institute of Technology, Atlanta, GA 30332
bsun@cse.psu.edu, {pmitra,giles,jyen}@ist.psu.edu, zha@cc.gatech.edu

ABSTRACT

Topic detection and tracking [26] and topic segmentation [15] play an important role in capturing the local and sequential information of documents. Previous work in this area usually focuses on single documents, although similar multiple documents are available in many domains. In this paper, we introduce a novel unsupervised method for shared topic detection and topic segmentation of multiple similar documents based on mutual information (MI) and weighted mutual information (WMI), a combination of MI and term weights. The basic idea is that the optimal segmentation maximizes MI (or WMI). Our approach can detect shared topics among documents. It can find the optimal boundaries in a document and align segments among documents at the same time. It can also handle single-document segmentation as a special case of multi-document segmentation and alignment. Our methods can identify and strengthen cue terms that can be used for segmentation, and can partially remove stop words by using term weights based on entropy learned from multiple documents. Our experimental results show that our algorithm works well for the tasks of single-document segmentation, shared topic detection, and multi-document segmentation. Utilizing information from multiple documents can greatly improve the performance of topic segmentation, and using WMI is even better than using MI for multi-document segmentation.

Categories and Subject Descriptors

H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval--Clustering; H.3.1 [Information Storage and Retrieval]: Content Analysis and Indexing--Linguistic processing; I.2.7 [Artificial Intelligence]: Natural Language Processing--Text analysis; I.5.3 [Pattern Recognition]: Clustering--Algorithms; Similarity measures

General Terms

Algorithms, Design, Experimentation

Keywords

Topic segmentation, shared topic detection, topic alignment, mutual information, multiple documents, term weight

SIGIR'07, July 23-27, 2007, Amsterdam, The Netherlands. Copyright 2007 ACM 978-1-59593-597-7/07/0007.

1. INTRODUCTION

Many researchers have worked on topic detection and tracking (TDT) [26] and topic segmentation during the past decade. Topic segmentation intends to identify the boundaries in a document with the goal of capturing its latent topical structure. Topic segmentation tasks usually fall into two categories [15]: text stream segmentation, where topic transitions are identified, and coherent document segmentation, in which documents are split into sub-topics.
The former category has applications in automatic speech recognition, while the latter has more applications, such as partial-text queries of long documents in information retrieval, text summarization, and quality measurement of multiple documents. Previous research in connection with TDT falls into the former category, targeted at topic tracking of broadcast speech data and newswire text, while the latter category has not been studied as well. Traditional approaches perform topic segmentation on documents one at a time [15, 25, 6]. Most of them perform poorly on subtle tasks like coherent document segmentation [15].

Often, end-users seek documents that have similar content. Search engines, such as Google, provide links to obtain similar pages. At a finer granularity, users may actually be looking to obtain sections of a document similar to a particular section that presumably discusses a topic of the user's interest. Thus, extending topic segmentation from single documents to identifying similar segments from multiple similar documents on the same topic is a natural and necessary direction, and multi-document topic segmentation is expected to perform better since more information is utilized.

Traditional approaches using similarity measures based on term frequency generally share the assumption that similar vocabulary tends to appear within a coherent topic segment [15, 25, 6]. However, they usually suffer from the problem of identifying stop words. For example, additional document-dependent stop words are removed together with the generic stop words in [15]. There are two reasons why we do not remove stop words directly. First, identifying stop words is a separate issue [12] that requires estimation in each domain. Removing common stop words may result in the loss of useful information in a specific domain. Second, even if stop words can be identified, a hard classification into stop words and non-stop words cannot represent the gradually changing amount of information content of each word. We instead employ a soft classification using term weights.

In this paper, we view topic segmentation as an optimization problem and use information-theoretic techniques to find the optimal boundaries of a document, given the number of text segments, so as to minimize the loss of mutual information (MI) (or a weighted mutual information (WMI)) after segmentation and alignment. This is equivalent to maximizing the MI (or WMI). The MI focuses on measuring the difference among segments, whereas previous research focused on finding the similarity (e.g., cosine distance) of segments [15, 25, 6]. Topic alignment of multiple similar documents can be achieved by clustering sentences on the same topic into the same cluster. Single-document topic segmentation is just a special case of the multi-document topic segmentation and alignment problem. Terms can be co-clustered as in [10] at the same time, given the number of clusters, but our experimental results show that this results in worse segmentation (see Tables 1, 4, and 6). Usually, human readers can identify topic transitions based on cue words and can ignore stop words. Inspired by this, we give each term (or term cluster) a weight based on entropy among different documents and different segments of documents. Not only can this approach increase the contribution of cue words, but it can also decrease the effect of common stop words, noisy words, and document-dependent stop words.
These words are common within a document. Many methods based on sentence similarity require that these words be removed before topic segmentation can be performed [15]. Our results in Figure 3 show that term weights are useful for multi-document topic segmentation and alignment.

The major contribution of this paper is that it introduces a novel method for topic segmentation using MI and shows that this method performs better than previously used criteria. Also, we address the problem of topic segmentation and alignment across multiple documents, whereas most existing research has focused on segmentation of single documents. Multi-document segmentation and alignment can utilize information from similar documents and greatly improves the performance of topic segmentation. Naturally, our approach can handle single documents as a special case when multiple documents are unavailable. It can also detect shared topics among documents to judge whether they are documents on the same topic. We further introduce the new criterion of WMI based on term weights learned from multiple similar documents, which can improve the performance of topic segmentation further. We propose an iterative greedy algorithm based on dynamic programming and show that it works well in practice. Some of our prior work is in [24].

The rest of this paper is organized as follows: In Section 2, we review related work. Section 3 contains a formulation of the problem of topic segmentation and alignment of multiple documents with term co-clustering, a review of the criterion of MI for clustering, and an introduction to WMI. In Section 4, we first propose the iterative greedy algorithm of topic segmentation and alignment with term co-clustering, and then describe how the algorithm can be optimized by using dynamic programming. In Section 5, experiments on single-document segmentation, shared topic detection, and multi-document segmentation are described, and results are presented and discussed to evaluate the performance of our algorithm. Conclusions and some future directions of the research work are discussed in Section 6.

Figure 1: Illustration of multi-document segmentation and alignment.

2. PREVIOUS WORK

Generally, existing approaches to text segmentation fall into two categories: supervised learning [19, 17, 23] and unsupervised learning [3, 27, 5, 6, 15, 25, 21]. Supervised learning usually has good performance, since it learns functions from labelled training sets. However, obtaining large training sets with manual labels on document sentences is often prohibitively expensive, so unsupervised approaches are desirable. Some models consider dependence between sentences and sections, such as the Hidden Markov Model [3, 27], the Maximum Entropy Markov Model [19], and Conditional Random Fields [17], while many other approaches are based on lexical cohesion or similarity of sentences [5, 6, 15, 25, 21]. Some approaches also focus on cue words as hints of topic transitions [11]. While some existing methods only consider information in single documents [6, 15], others utilize multiple documents [16, 14]. There is not much work in the latter category, even though the performance of segmentation is expected to be better when information from multiple documents is utilized. Previous research studied methods to find shared topics [16] and topic segmentation and summarization between just a pair of documents [14].
Text classification and clustering is a related research area, which categorizes documents into groups using supervised or unsupervised methods. Topical classification or clustering is an important direction in this area, especially co-clustering of documents and terms, such as LSA [9], PLSA [13], and approaches based on distances and bipartite graph partitioning [28], maximum MI [2, 10], or maximum entropy [1, 18]. The criteria of these approaches can be utilized for topic segmentation. Some of these methods have been extended into the area of topic segmentation, such as PLSA [5] and maximum entropy [7], but to the best of our knowledge, using MI for topic segmentation has not been studied.

3. PROBLEM FORMULATION

Our goal is to segment documents and align the segments across documents (Figure 1). Let $T$ be the set of terms $\{t_1, t_2, ..., t_l\}$ which appear in the unlabelled set of documents $D = \{d_1, d_2, ..., d_m\}$. Let $S_d$ be the set of sentences for document $d \in D$, i.e. $\{s_1, s_2, ..., s_{n_d}\}$. We have a 3D matrix of term frequencies, in which the three dimensions correspond to the random variables $D$, $S_d$, and $T$; $S_d$ is actually a random vector including a random variable for each $d \in D$. The term frequencies can be used to estimate the joint probability distribution $P(D, S_d, T)$, which is $p(t, d, s) = T(t, d, s)/N_D$, where $T(t, d, s)$ is the number of occurrences of $t$ in sentence $s$ of document $d$ and $N_D$ is the total number of terms in $D$. $\hat{S}$ represents the set of segments $\{\hat{s}_1, \hat{s}_2, ..., \hat{s}_p\}$ after segmentation and alignment among multiple documents, where the number of segments is $|\hat{S}| = p$. A segment $\hat{s}_i$ of document $d$ is a sequence of adjacent sentences in $d$. Since for different documents $\hat{s}_i$ may discuss different sub-topics, our goal is to cluster adjacent sentences in each document into segments and to align similar segments among documents, so that for different documents $\hat{s}_i$ is about the same sub-topic. The goal is thus to find the optimal topic segmentation and alignment mappings $Seg_d(s_i): \{s_1, s_2, ..., s_{n_d}\} \rightarrow \{\hat{s}_1, \hat{s}_2, ..., \hat{s}_p\}$ and $Ali_d(\hat{s}_i): \{\hat{s}_1, \hat{s}_2, ..., \hat{s}_p\} \rightarrow \{\hat{s}_1, \hat{s}_2, ..., \hat{s}_p\}$, for all $d \in D$, where $\hat{s}_i$ is the $i$th segment, with the constraint that only adjacent sentences can be mapped to the same segment, i.e. for $d$, $\{s_i, s_{i+1}, ..., s_j\} \rightarrow \{\hat{s}_q\}$, where $q \in \{1, ..., p\}$, $p$ is the number of segments, and if $i > j$, then $\hat{s}_q$ is missing for $d$. After segmentation and alignment, the random vector $S_d$ becomes an aligned random variable $\hat{S}$. Thus, $P(D, S_d, T)$ becomes $P(D, \hat{S}, T)$.

Term co-clustering is a technique that has been employed [10] to improve the accuracy of document clustering. We evaluate its effect on topic segmentation. A term $t$ is mapped to exactly one term cluster. Term co-clustering involves simultaneously finding the optimal term clustering mapping $Clu(t): \{t_1, t_2, ..., t_l\} \rightarrow \{\hat{t}_1, \hat{t}_2, ..., \hat{t}_k\}$, where $k \leq l$, $l$ is the total number of words in all the documents, and $k$ is the number of clusters.
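To make the representation concrete, the following is a minimal sketch (ours, not from the paper) of estimating the joint distribution $p(t, d, s) = T(t, d, s)/N_D$ from tokenized, stemmed documents; the dictionary-based layout and all names are illustrative assumptions.

```python
from collections import defaultdict

def estimate_joint_distribution(docs):
    """Estimate p(t, d, s) = T(t, d, s) / N_D from tokenized documents.

    `docs` is assumed to map a document id to a list of sentences,
    each sentence being a list of stemmed terms.
    """
    counts = defaultdict(float)   # (doc_id, sent_idx, term) -> raw count T(t, d, s)
    total = 0.0                   # N_D, total number of term occurrences in D
    for d, sentences in docs.items():
        for s, sentence in enumerate(sentences):
            for t in sentence:
                counts[(d, s, t)] += 1.0
                total += 1.0
    # Normalize raw counts into the joint probability p(t, d, s).
    return {key: c / total for key, c in counts.items()}

# Toy usage with two small documents.
docs = {"d1": [["topic", "model"], ["segment", "boundary"]],
        "d2": [["topic", "boundary"], ["term", "weight"]]}
p = estimate_joint_distribution(docs)
```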
4. METHODOLOGY

We now describe a novel algorithm which can handle single-document segmentation, shared topic detection, and multi-document segmentation and alignment based on MI or WMI.

4.1 Mutual Information

MI $I(X;Y)$ is a quantity that measures the amount of information contained in two or more random variables [8, 10]. For the case of two random variables, we have

$$I(X;Y) = \sum_{x \in X} \sum_{y \in Y} p(x,y) \log \frac{p(x,y)}{p(x)p(y)}. \qquad (1)$$

Obviously, when the random variables $X$ and $Y$ are independent, $I(X;Y) = 0$. Thus, intuitively, the value of MI depends on how dependent the random variables are on each other. The optimal co-clustering is the mapping $Clu_x: X \rightarrow \hat{X}$ and $Clu_y: Y \rightarrow \hat{Y}$ that minimizes the loss $I(X;Y) - I(\hat{X};\hat{Y})$, which is equivalent to maximizing $I(\hat{X};\hat{Y})$. This is the criterion of MI for clustering.

In the case of topic segmentation, the two random variables are the term variable $T$ and the segment variable $S$, and each sample is an occurrence of a term $T = t$ in a particular segment $S = s$. $I(T;S)$ is used to measure how dependent $T$ and $S$ are. However, $I(T;S)$ cannot be computed for documents before segmentation, since we do not have a set of $S$: the sentences of document $d$, $s_i \in S_d$, are not aligned with other documents. Thus, instead of minimizing the loss of MI, we can maximize the MI after topic segmentation, computed as

$$I(\hat{T};\hat{S}) = \sum_{\hat{t} \in \hat{T}} \sum_{\hat{s} \in \hat{S}} p(\hat{t},\hat{s}) \log \frac{p(\hat{t},\hat{s})}{p(\hat{t})p(\hat{s})}, \qquad (2)$$

where $p(\hat{t},\hat{s})$ is estimated by the term frequency of term cluster $\hat{t}$ and segment $\hat{s}$ in the training set $D$. Note that here a segment $\hat{s}$ includes sentences about the same topic among all documents. The optimal solution is the mapping $Clu_t: T \rightarrow \hat{T}$, $Seg_d: S_d \rightarrow \hat{S}$, and $Ali_d: \hat{S} \rightarrow \hat{S}$ which maximizes $I(\hat{T};\hat{S})$.

4.2 Weighted Mutual Information

In topic segmentation and alignment of multiple documents, if $P(D, \hat{S}, T)$ is known, then based on the marginal distributions $P(D|T)$ and $P(\hat{S}|T)$ for each term $t \in T$, we can categorize terms into four types in the data set:

· Common stop words are common both along the dimension of documents and along the dimension of segments.
· Document-dependent stop words, which depend on an author's personal writing style, are common only along the dimension of segments for some documents.
· Cue words are the most important elements for segmentation. They are common along the dimension of documents only for the same segment, and they are not common along the dimension of segments.
· Noisy words are the remaining words, which are not common along either dimension.

Entropy based on $P(D|T)$ and $P(\hat{S}|T)$ can be used to identify the different types of terms. To reinforce the contribution of cue words in the MI computation and simultaneously reduce the effect of the other three types of words, similar to the idea of the tf-idf weight [22], we use the entropies of each term along the dimension of documents $D$ and the dimension of segments $\hat{S}$, i.e. $E_D(\hat{t})$ and $E_{\hat{S}}(\hat{t})$, to compute the weight. A cue word usually has a large value of $E_D(\hat{t})$ but a small value of $E_{\hat{S}}(\hat{t})$. We introduce term weights (or term cluster weights)

$$w_{\hat{t}} = \left( \frac{E_D(\hat{t})}{\max_{\hat{t}' \in \hat{T}}(E_D(\hat{t}'))} \right)^a \left( 1 - \frac{E_{\hat{S}}(\hat{t})}{\max_{\hat{t}' \in \hat{T}}(E_{\hat{S}}(\hat{t}'))} \right)^b, \qquad (3)$$

where $E_D(\hat{t}) = \sum_{d \in D} p(d|\hat{t}) \log_{|D|} \frac{1}{p(d|\hat{t})}$, $E_{\hat{S}}(\hat{t}) = \sum_{\hat{s} \in \hat{S}} p(\hat{s}|\hat{t}) \log_{|\hat{S}|} \frac{1}{p(\hat{s}|\hat{t})}$, and $a > 0$ and $b > 0$ are powers to adjust the term weights. Usually $a = 1$ and $b = 1$ by default, and $\max_{\hat{t}' \in \hat{T}}(E_D(\hat{t}'))$ and $\max_{\hat{t}' \in \hat{T}}(E_{\hat{S}}(\hat{t}'))$ are used to normalize the entropy values. The term cluster weights are used to adjust $p(\hat{t},\hat{s})$:

$$p_w(\hat{t},\hat{s}) = \frac{w_{\hat{t}} \, p(\hat{t},\hat{s})}{\sum_{\hat{t} \in \hat{T}; \hat{s} \in \hat{S}} w_{\hat{t}} \, p(\hat{t},\hat{s})}, \qquad (4)$$

and

$$I_w(\hat{T};\hat{S}) = \sum_{\hat{t} \in \hat{T}} \sum_{\hat{s} \in \hat{S}} p_w(\hat{t},\hat{s}) \log \frac{p_w(\hat{t},\hat{s})}{p_w(\hat{t})p_w(\hat{s})}, \qquad (5)$$

where $p_w(\hat{t})$ and $p_w(\hat{s})$ are the marginal distributions of $p_w(\hat{t},\hat{s})$. However, since we know neither the term weights nor $P(D, \hat{S}, T)$, we need to estimate them; but $w_{\hat{t}}$ depends on $p(\hat{s}|\hat{t})$ and $\hat{S}$, while $\hat{S}$ and $p(\hat{s}|\hat{t})$ in turn depend on $w_{\hat{t}}$, which is still unknown.
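Assuming the aligned joint distribution is already available, the computation in Equations (3)-(5) can be sketched as follows. This is a minimal numpy sketch of our own; the [doc, segment, term] array layout, the variable names, and the defaults a = b = 1 are assumptions, natural logarithm is used for the MI term, and the entropies are normalized as in Equation (3).

```python
import numpy as np

def weighted_mutual_information(joint, a=1.0, b=1.0):
    """Sketch of Equations (3)-(5): entropy-based term weights and WMI.

    `joint` is assumed to be a 3D array indexed as [doc, segment, term]
    holding the aligned joint distribution P(D, S_hat, T); it sums to 1.
    """
    n_docs, n_segs, n_terms = joint.shape
    eps = 1e-12

    p_dt = joint.sum(axis=1)              # p(d, t), shape (n_docs, n_terms)
    p_st = joint.sum(axis=0)              # p(s_hat, t), shape (n_segs, n_terms)
    p_t = joint.sum(axis=(0, 1)) + eps    # p(t)

    p_d_given_t = p_dt / p_t              # p(d | t)
    p_s_given_t = p_st / p_t              # p(s_hat | t)

    # Entropies along documents and segments, normalized by log|D| and log|S_hat|.
    e_d = -(p_d_given_t * np.log(p_d_given_t + eps)).sum(axis=0) / np.log(n_docs)
    e_s = -(p_s_given_t * np.log(p_s_given_t + eps)).sum(axis=0) / np.log(n_segs)

    # Equation (3): cue words have high document entropy, low segment entropy.
    w = (e_d / e_d.max()) ** a * (1.0 - e_s / e_s.max()) ** b

    # Equation (4): reweight and renormalize the term-segment distribution.
    p_w = w[:, None] * p_st.T             # shape (n_terms, n_segs)
    p_w /= p_w.sum()

    # Equation (5): weighted mutual information I_w(T_hat; S_hat).
    pt = p_w.sum(axis=1, keepdims=True)
    ps = p_w.sum(axis=0, keepdims=True)
    ratio = p_w / (pt @ ps + eps)
    return float((p_w * np.log(ratio + eps)).sum())

# Toy usage: random joint distribution over 3 documents, 4 segments, 50 terms.
rng = np.random.default_rng(0)
joint = rng.random((3, 4, 50)); joint /= joint.sum()
print(weighted_mutual_information(joint))
```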
Thus, an iterative algorithm is required to estimate the term weights $w_{\hat{t}}$ and, concurrently, to find the best segmentation and alignment that optimize the objective function $I_w$. After a document is segmented into sentences and each sentence is segmented into words, each word is stemmed. Then the joint probability distribution $P(D, S_d, T)$ can be estimated. Finally, this distribution can be used to compute MI in our algorithm.

4.3 Iterative Greedy Algorithm

Our goal is to maximize the objective function, $I(\hat{T};\hat{S})$ or $I_w(\hat{T};\hat{S})$, which measures the dependence of term occurrences in different segments. Generally, we do not know the term weights in advance, since they depend on the optimal topic segmentation and alignment and on the term clusters. Moreover, this problem is NP-hard [10], even if we know the term weights. Thus, an iterative greedy algorithm is desired to find the best solution, even though probably only local maxima are reached. We present the iterative greedy algorithm in Figure 2 to find a local maximum of $I(\hat{T};\hat{S})$ or $I_w(\hat{T};\hat{S})$ with simultaneous term weight estimation. This algorithm is iterative and greedy for multi-document cases or for single-document cases with term weight estimation and/or term co-clustering. Otherwise, since it is just a one-step algorithm to solve the task of single-document segmentation [6, 15, 25], the global maximum of MI is guaranteed. We will show later that term co-clustering reduces the accuracy of the results and is not necessary, and that for single-document segmentation, term weights are also not required.

Figure 2: Algorithm: Topic segmentation and alignment based on MI or WMI.

Input: Joint probability distribution $P(D, S_d, T)$; number of text segments $p \in \{2, 3, ..., \max(s_d)\}$; number of term clusters $k \in \{2, 3, ..., l\}$ (if $k = l$, no term co-clustering is required); and weight type $w \in \{0, 1\}$, indicating whether to use $I$ or $I_w$, respectively.
Output: Mappings $Clu$, $Seg$, $Ali$, and term weights $w_{\hat{t}}$.
Initialization:
0. $i = 0$. Initialize $Clu_t^{(0)}$, $Seg_d^{(0)}$, and $Ali_d^{(0)}$; initialize $w_{\hat{t}}^{(0)}$ using Equation (6) if $w = 1$.
Stage 1:
1. If $|D| = 1$, $k = l$, and $w = 0$, check all sequential segmentations of $d$ into $p$ segments, find the best one, $Seg_d(s) = \arg\max_s I(\hat{T};\hat{S})$, and return $Seg_d$; otherwise, if $w = 1$ and $k = l$, go to 3.1.
Stage 2:
2.1 If $k < l$, for each term $t$, find the best cluster $\hat{t}$ as $Clu^{(i+1)}(t) = \arg\max_{\hat{t}} I(\hat{T};\hat{S}^{(i)})$, based on $Seg^{(i)}$ and $Ali^{(i)}$;
2.2 For each $d$, check all sequential segmentations of $d$ into $p$ segments with mapping $s \rightarrow \hat{s}$, and find the best one, $Ali_d^{(i+1)}(Seg_d^{(i+1)}(s)) = \arg\max_{\hat{s}} I(\hat{T}^{(i+1)};\hat{S})$, based on $Clu^{(i+1)}(t)$ if $k < l$ or $Clu^{(0)}(t)$ if $k = l$;
2.3 $i$++. If $Clu$, $Seg$, or $Ali$ changed, go to 2.1; otherwise, if $w = 0$, return $Clu^{(i)}$, $Seg^{(i)}$, and $Ali^{(i)}$; else set $j = 0$ and go to 3.1.
Stage 3:
3.1 Update $w_{\hat{t}}^{(i+j+1)}$ based on $Seg^{(i+j)}$, $Ali^{(i+j)}$, and $Clu^{(i)}$ using Equation (3);
3.2 For each $d$, check all sequential segmentations of $d$ into $p$ segments with mapping $s \rightarrow \hat{s}$, and find the best one, $Ali_d^{(i+j+1)}(Seg_d^{(i+j+1)}(s)) = \arg\max_{\hat{s}} I_w(\hat{T}^{(i)};\hat{S})$, based on $Clu^{(i)}$ and $w_{\hat{t}}^{(i+j+1)}$;
3.3 $j$++. If $I_w(\hat{T};\hat{S})$ changed, go to 3.1; otherwise, stop and return $Clu^{(i)}$, $Seg^{(i+j)}$, $Ali^{(i+j)}$, and $w_{\hat{t}}^{(i+j)}$.
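As a concrete illustration of Stage 1 (single-document segmentation with k = l and w = 0), the following self-contained sketch enumerates all sequential segmentations of one document into p segments and keeps the one that maximizes I(T;S). This brute-force illustration is our own; the paper performs this step efficiently with dynamic programming (Section 4.4), and the function and variable names are not from the paper.

```python
import itertools
import math

def mutual_information(segments):
    """I(T;S) for a list of segments, each a list of term tokens."""
    joint, total = {}, 0
    for s, seg in enumerate(segments):
        for t in seg:
            joint[(t, s)] = joint.get((t, s), 0) + 1
            total += 1
    pt, ps = {}, {}
    for (t, s), c in joint.items():
        pt[t] = pt.get(t, 0) + c
        ps[s] = ps.get(s, 0) + c
    mi = 0.0
    for (t, s), c in joint.items():
        # p(t,s) * log( p(t,s) / (p(t) p(s)) ), with counts normalized by `total`.
        mi += (c / total) * math.log(c * total / (pt[t] * ps[s]))
    return mi

def best_single_doc_segmentation(sentences, p):
    """Stage 1 by exhaustive search over boundary positions (brute force)."""
    n = len(sentences)
    best, best_mi = None, -float("inf")
    # Choose p-1 boundary positions between the n sentences.
    for cuts in itertools.combinations(range(1, n), p - 1):
        bounds = (0,) + cuts + (n,)
        segments = [sum(sentences[a:b], []) for a, b in zip(bounds, bounds[1:])]
        mi = mutual_information(segments)
        if mi > best_mi:
            best, best_mi = bounds, mi
    return best, best_mi

# Toy usage: 4 sentences (lists of stemmed terms) split into 2 segments.
sents = [["cat", "pet"], ["cat", "fur"], ["stock", "market"], ["stock", "price"]]
print(best_single_doc_segmentation(sents, 2))
```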
4.3.1 Initialization

In Step 0, the initial term clustering $Clu_t$ and the initial topic segmentation and alignment $Seg_d$ and $Ali_d$ are important to avoid local maxima and to reduce the number of iterations. First, a good guess of the term weights can be made by using the distributions of term frequency along the sentences of each document and averaging them to obtain the initial values of $w_t$:

$$w_t = \left( \frac{E_D(t)}{\max_{t' \in T}(E_D(t'))} \right) \left( 1 - \frac{E_S(t)}{\max_{t' \in T}(E_S(t'))} \right), \qquad (6)$$

where $E_S(t) = \frac{1}{|D_t|} \sum_{d \in D_t} \sum_{s \in S_d} p(s|t) \log_{|S_d|} \frac{1}{p(s|t)}$ and $D_t$ is the set of documents which contain term $t$. Then, for the initial segmentation $Seg^{(0)}$, we can simply segment documents equally by sentences. Alternatively, we can find the optimal segmentation for each document $d$ that maximizes the WMI, $Seg_d^{(0)} = \arg\max_s I_w(T;\hat{S})$, where $w = w_t^{(0)}$. For the initial alignment $Ali^{(0)}$, we can first assume that the order of segments is the same for each $d$. For the initial term clustering $Clu^{(0)}$, the cluster labels can be set randomly at first, and after the first iteration a good initial term clustering is obtained.

4.3.2 Different Cases

After initialization, there are three stages for the different cases. In total there are eight cases: $|D| = 1$ or $|D| > 1$, $k = l$ or $k < l$, and $w = 0$ or $w = 1$. Single-document segmentation without term clustering and term weight estimation ($|D| = 1$, $k = l$, $w = 0$) only requires Stage 1 (Step 1). If term clustering is required ($k < l$), Stage 2 (Steps 2.1, 2.2, and 2.3) is executed iteratively. If term weight estimation is required ($w = 1$), Stage 3 (Steps 3.1, 3.2, and 3.3) is executed iteratively. If both are required ($k < l$, $w = 1$), Stages 2 and 3 run one after the other. For multi-document segmentation without term clustering and term weight estimation ($|D| > 1$, $k = l$, $w = 0$), only the iteration of Steps 2.2 and 2.3 is required.

At Stage 1, the global maximum can be found based on $I(\hat{T};\hat{S})$ using the dynamic programming of Section 4.4. Simultaneously finding a good term clustering and estimating term weights is impossible, since when moving a term to a new term cluster to maximize $I_w(\hat{T};\hat{S})$, we do not know whether the weight of this term should be that of the new cluster or of the old cluster. Thus, we first do term clustering at Stage 2 and then estimate term weights at Stage 3. At Stage 2, Step 2.1 finds the best term clustering and Step 2.2 finds the best segmentation. This cycle is repeated to find a local maximum based on the MI $I$ until it converges. The two steps are: (1) based on the current term clustering $Clu_t$, for each document $d$, the algorithm segments all the sentences $S_d$ into $p$ segments sequentially (some segments may be empty) and puts them into the $p$ segments $\hat{S}$ of the whole training set $D$ (all possible cases of different segmentations $Seg_d$ and alignments $Ali_d$ are checked) to find the optimal case; and (2) based on the current segmentation and alignment, for each term $t$, the algorithm finds the best term cluster of $t$ given the current $Seg_d$ and $Ali_d$. After a good term clustering is found, term weights are estimated if $w = 1$. At Stage 3, similar to Stage 2, Step 3.1 re-estimates the term weights and Step 3.2 finds a better segmentation; they are repeated to find a local maximum based on the WMI $I_w$ until it converges. However, if the term clustering in Stage 2 is not accurate, then the term weight estimation at Stage 3 may give a bad result. Finally, at Step 3.3, the algorithm converges and returns the output. This algorithm can handle both single-document and multi-document segmentation.
It can also detect shared topics among documents by checking the proportion of overlapping sentences on the same topics, as described in Section 5.2.

4.4 Algorithm Optimization

In many previous works on segmentation, dynamic programming is a technique used to maximize the objective function. Similarly, at Steps 1, 2.2, and 3.2 of our algorithm, we can use dynamic programming. For Stage 1, dynamic programming can still find the global optimum, but for Stages 2 and 3, we can only find the optimum for each step of topic segmentation and alignment of a document. Here we only show the dynamic programming for Step 3.2 using WMI (Steps 1 and 2.2 are similar, but they can use either $I$ or $I_w$). There are two cases, which are not shown in the algorithm in Figure 2: (a) single-document segmentation, or multi-document segmentation with the same sequential order of segments, where alignment is not required; and (b) multi-document segmentation with different sequential orders of segments, where alignment is necessary. The alignment mapping function of the former case is simply $Ali_d(\hat{s}_i) = \hat{s}_i$, while for the latter case the alignment mapping function is $Ali_d(\hat{s}_i) = \hat{s}_j$, where $i$ and $j$ may be different. The computational steps for the two cases are listed below.

Case 1 (no alignment): For each document $d$:

(1) Compute $p_w(\hat{t})$, partial $p_w(\hat{t},\hat{s})$, and partial $p_w(\hat{s})$ without counting sentences from $d$. Then put sentences from $i$ to $j$ into segment $k$, and compute the partial WMI
$$PI_w(\hat{T}; \hat{s}_k(s_i, s_{i+1}, ..., s_j)) = \sum_{\hat{t} \in \hat{T}} p_w(\hat{t}, \hat{s}_k) \log \frac{p_w(\hat{t},\hat{s}_k)}{p_w(\hat{t}) p_w(\hat{s}_k)},$$
where $Ali_d(s_i, s_{i+1}, ..., s_j) = k$, $k \in \{1, 2, ..., p\}$, $1 \leq i \leq j \leq n_d$, and $Seg_d(s_q) = \hat{s}_k$ for all $i \leq q \leq j$.

(2) Let $M(s_m, 1) = PI_w(\hat{T}; \hat{s}_1(s_1, s_2, ..., s_m))$. Then
$$M(s_m, L) = \max_i [M(s_{i-1}, L-1) + PI_w(\hat{T}; \hat{s}_L(s_i, ..., s_m))],$$
where $0 \leq m \leq n_d$, $1 < L < p$, $1 \leq i \leq m+1$, and when $i > m$, no sentences are put into $\hat{s}_k$ when computing $PI_w$ (note that $PI_w(\hat{T}; \hat{s}(s_i, ..., s_m)) = 0$ for single-document segmentation).

(3) Finally, $M(s_{n_d}, p) = \max_i [M(s_{i-1}, p-1) + PI_w(\hat{T}; \hat{s}_p(s_i, ..., s_{n_d}))]$, where $1 \leq i \leq n_d + 1$. The optimal $I_w$ is found, and the corresponding segmentation is the best.

Case 2 (alignment required): For each document $d$:

(1) Compute $p_w(\hat{t})$, partial $p_w(\hat{t},\hat{s})$, and partial $p_w(\hat{s})$, and $PI_w(\hat{T}; \hat{s}_k(s_i, s_{i+1}, ..., s_j))$, similarly to Case 1.

(2) Let $M(s_m, 1, k) = PI_w(\hat{T}; \hat{s}_k(s_1, s_2, ..., s_m))$, where $k \in \{1, 2, ..., p\}$. Then
$$M(s_m, L, k_L) = \max_{i,j} [M(s_{i-1}, L-1, k_{L/j}) + PI_w(\hat{T}; \hat{s}_{Ali_d(\hat{s}_L)=j}(s_i, s_{i+1}, ..., s_m))],$$
where $0 \leq m \leq n_d$, $1 < L < p$, $1 \leq i \leq m+1$, $k_L \in Set(p, L)$, which is the set of all $\frac{p!}{L!(p-L)!}$ combinations of $L$ segments chosen from all $p$ segments, $j \in k_L$, the set of $L$ segments chosen from all $p$ segments, and $k_{L/j}$ is the combination of $L-1$ segments in $k_L$ excluding segment $j$.

(3) Finally, $M(s_{n_d}, p, k_p) = \max_{i,j} [M(s_{i-1}, p-1, k_{p/j}) + PI_w(\hat{T}; \hat{s}_{Ali_d(\hat{s}_L)=j}(s_i, s_{i+1}, ..., s_{n_d}))]$, where $k_p$ is the combination of all $p$ segments and $1 \leq i \leq n_d + 1$. This gives the optimal $I_w$, and the corresponding segmentation is the best.

The steps of Cases 1 and 2 are similar, except that in Case 2 alignment is considered in addition to segmentation. First, the basic probability quantities for computing $I_w$ are computed excluding document $d$, and then the partial WMI is computed by putting every possible sequential segment (including the empty segment) of $d$ into every segment of the set. Second, the optimal sum of $PI_w$ for $L$ segments and the leftmost $m$ sentences, $M(s_m, L)$, is found. Finally, the maximal WMI is found among the different sums of $M(s_m, p-1)$ and $PI_w$ for segment $p$.
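The Case 1 recursion can be written compactly. The sketch below is our simplified illustration, not the paper's implementation: it treats the partial objective PI_w as an arbitrary user-supplied scoring function score(i, j, k) and only demonstrates the dynamic program over sentence boundaries, with a toy scorer standing in for the WMI computation.

```python
def dp_segment(n, p, score):
    """Case 1 dynamic programming (no alignment), simplified.

    n     : number of sentences in the document
    p     : number of segments
    score : score(i, j, k) -> partial objective (e.g., PI_w) of putting
            sentences i..j (1-based, inclusive, empty when i > j) into segment k.
    Returns (best total score, list of p (start, end) sentence ranges).
    """
    NEG = float("-inf")
    # M[m][L] = best score of assigning the leftmost m sentences to the first L segments.
    M = [[NEG] * (p + 1) for _ in range(n + 1)]
    back = [[None] * (p + 1) for _ in range(n + 1)]
    M[0][0] = 0.0
    for L in range(1, p + 1):
        for m in range(0, n + 1):
            # Segment L covers sentences i..m; i = m + 1 means an empty segment.
            for i in range(1, m + 2):
                prev = M[i - 1][L - 1]
                if prev == NEG:
                    continue
                cand = prev + score(i, m, L)
                if cand > M[m][L]:
                    M[m][L] = cand
                    back[m][L] = i
    # Recover the boundaries by walking the backpointers.
    bounds, m = [], n
    for L in range(p, 0, -1):
        i = back[m][L]
        bounds.append((i, m))
        m = i - 1
    return M[n][p], list(reversed(bounds))

# Toy scorer: reward segments whose sentences share the same dominant label.
labels = ["a", "a", "b", "b", "b"]
def toy_score(i, j, k):
    if i > j:
        return 0.0
    seg = labels[i - 1:j]
    return seg.count(max(set(seg), key=seg.count)) - len(seg) * 0.5

print(dp_segment(len(labels), 2, toy_score))
```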
5. EXPERIMENTS

In this section, single-document segmentation, shared topic detection, and multi-document segmentation are tested, and different hyperparameters of our method are studied. For convenience, we refer to the method using $I$ as $MI_k$ when $w = 0$, and the method using $I_w$ as $WMI_k$ when $w = 1$ or $w = 2$, where $k$ is the number of term clusters; if $k = l$, where $l$ is the total number of terms, then no term co-clustering is required, i.e. $MI_l$ and $WMI_l$.

5.1 Single-document Segmentation

5.1.1 Test Data and Evaluation

The first data set we tested is a synthetic one used in previous research [6, 15, 25] and many other papers. It has 700 samples. Each is a concatenation of ten segments, and each segment consists of the first $n$ sentences of a document selected randomly from the Brown corpus, so that each segment is supposed to have a different topic from the others. Currently, the best results on this data set are achieved by Ji et al. [15]. To compare the performance of our methods, the criterion used widely in previous research is applied, instead of the unbiased criterion introduced in [20]. It chooses a pair of words randomly. If they are in different segments (diff) in the real segmentation (real) but are predicted (pred) as being in the same segment, it is a miss. If they are in the same segment (same) but are predicted as being in different segments, it is a false alarm. Thus, the error rate is computed using the following equation:

$$p(err|real, pred) = p(miss|real, pred, diff)\,p(diff|real) + p(false\ alarm|real, pred, same)\,p(same|real).$$
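The pair-based error rate above can be estimated by sampling, as in the following sketch (ours, not from [15] or [20]); the boundary representation and names are illustrative.

```python
import random

def pair_error_rate(ref_bounds, pred_bounds, n_words, n_pairs=10000, seed=0):
    """Estimate p(err) by sampling random word-position pairs.

    ref_bounds / pred_bounds: sorted segment start positions, e.g. [0, 12, 30].
    n_words: total number of word positions in the sample.
    """
    def seg_id(bounds, pos):
        # Index of the segment containing word position `pos`.
        k = 0
        while k + 1 < len(bounds) and pos >= bounds[k + 1]:
            k += 1
        return k

    rng = random.Random(seed)
    errors = 0
    for _ in range(n_pairs):
        i, j = rng.randrange(n_words), rng.randrange(n_words)
        same_ref = seg_id(ref_bounds, i) == seg_id(ref_bounds, j)
        same_pred = seg_id(pred_bounds, i) == seg_id(pred_bounds, j)
        # Miss: different in the reference but predicted same.
        # False alarm: same in the reference but predicted different.
        if same_ref != same_pred:
            errors += 1
    return errors / n_pairs

# Toy usage: 40 words, reference boundary at position 20, predicted boundary at 25.
print(pair_error_rate([0, 20], [0, 25], 40))
```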
5.1.2 Experiment Results

We tested the case where the number of segments is known. Table 1 shows the results of our methods with different hyperparameter values and of three previous approaches, C99 [25], U00 [6], and ADDP03 [15], on this data set when the segment number is known. In WMI for single-document segmentation, the term weights are computed as $w_t = 1 - E_S(t)/\max_{t' \in T}(E_S(t'))$. For this case, our methods $MI_l$ and $WMI_l$ both outperform all the previous approaches.

Table 1: Average Error Rates of Single-document Segmentation Given Segment Numbers Known

Range of n     3-11     3-5      6-8      9-11
Sample size    400      100      100      100
C99            12%      11%      10%      9%
U00            10%      9%       7%       5%
ADDP03         6.0%     6.8%     5.2%     4.3%
MI_l           4.68%    5.57%    2.59%    1.59%
WMI_l          4.94%    6.33%    2.76%    1.62%
MI_100         9.62%    12.92%   8.66%    6.67%

We compared our methods with ADDP03 using a one-sample one-sided t-test, and the p-values are shown in Table 2.

Table 2: Single-document Segmentation: P-values of T-test on Error Rates

Range of n          3-11     3-5      6-8      9-11
ADDP03, MI_l        0.000    0.000    0.000    0.000
ADDP03, WMI_l       0.000    0.099    0.000    0.000
MI_l, WMI_l         0.061    0.132    0.526    0.898

From the p-values, we can see that the differences are mostly very significant. We also compared the error rates of our two methods using a two-sample two-sided t-test to check the hypothesis that they are equal. We cannot reject the hypothesis that they are equal, so the differences are not significant, even though all the error rates for $MI_l$ are smaller than for $WMI_l$. However, we can conclude that term weights contribute little in single-document segmentation. The results also show that MI using term co-clustering ($k = 100$) decreases the performance. We tested different numbers of term clusters and found that the performance becomes better as the cluster number increases to reach $l$. $WMI_k$

5.2 Shared Topic Detection

$MI_l$ and $WMI_l$ check whether the proportion of overlapping sentences on the same topic is larger than an adjustable threshold. That is, in $MI_l$ and $WMI_l$, for a pair of documents $d$ and $d'$, if $[\sum_{s \in S_d, s' \in S_{d'}} 1(topic_s = topic_{s'}) / \min(|S_d|, |S_{d'}|)]$ is larger than the threshold, where $S_d$ is the set of sentences of $d$ and $|S_d|$ is the number of sentences of $d$, then $d$ and $d'$ have a shared topic. For a pair of documents selected randomly, the error rate is computed using the following equation:

$$p(err|real, pred) = p(miss|real, pred, same)\,p(same|real) + p(false\ alarm|real, pred, diff)\,p(diff|real),$$

where a miss means the two documents have the same topic (same) in the real case (real) but are predicted (pred) as being on different topics, and if they are on different topics (diff) but are predicted as being on the same topic, it is a false alarm.

Table 3: Shared Topic Detection: Average Error Rates for Different Numbers of Documents in Each Subset

#Doc                        10       20       40       80
LDA                         8.89%    16.33%   1.35%    0.60%
MI_l (threshold = 0.6)      4.17%    1.71%    1.47%    0.0%
WMI_l (threshold = 0.8)     18.6%    3.16%    1.92%    0.0%
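The shared-topic decision can be sketched as below. This is one plausible reading of the overlap criterion (counting the sentences of the shorter document whose aligned topic also appears in the other document); the default threshold value and all names are illustrative, not the paper's.

```python
def share_topic(topics_a, topics_b, threshold=0.6):
    """One reading of the shared-topic criterion in Section 5.2.

    topics_a / topics_b: aligned topic (segment) labels, one per sentence.
    Counts the sentences of the shorter document whose topic also appears
    in the other document and compares the proportion with the threshold.
    """
    if len(topics_a) > len(topics_b):
        topics_a, topics_b = topics_b, topics_a
    overlap = sum(1 for t in topics_a if t in set(topics_b))
    return overlap / len(topics_a) > threshold

# Toy usage: two documents whose sentences were aligned to segments 1..4.
print(share_topic([1, 1, 2, 3], [1, 2, 2, 2, 4]))   # True: 3 of 4 sentences overlap
```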