Using Bilingual Comparable Corpora and Semi-supervised Clustering for Topic Tracking

Fumiyo Fukumoto
Interdisciplinary Graduate School of Medicine and Engineering, Univ. of Yamanashi
fukumoto@yamanashi.ac.jp

Yoshimi Suzuki
Interdisciplinary Graduate School of Medicine and Engineering, Univ. of Yamanashi
ysuzuki@yamanashi.ac.jp

Abstract

We address the problem of dealing with skewed data and propose a method for estimating effective training stories for the topic tracking task. For the small number of labelled positive stories, we extract story pairs, each consisting of a positive story and its associated stories, from bilingual comparable corpora. To overcome the problem of the large number of labelled negative stories, we classify them into clusters. This is done by using k-means with EM. The results on the TDT corpora show the effectiveness of the method.

1 Introduction

With the exponential growth of information on the Internet, it is becoming increasingly difficult to find and organize relevant materials. Topic Tracking, as defined by the TDT project, is a research area that attacks this problem. It starts from a few sample stories and finds all subsequent stories that discuss the target topic. A topic in the TDT context is something that happens at a specific place and time and is associated with some specific actions. A wide range of statistical and machine learning (ML) techniques have been applied to topic tracking (Carbonell et al., 1999; Oard, 1999; Franz, 2001; Larkey, 2004). The main task of these techniques is to tune the parameters or the threshold to produce optimal results. However, parameter tuning is a tricky issue for tracking (Yang, 2000), because the number of initial positive training stories is very small (one to four) and topics are localized in space and time. For example, 'Taipei Mayoral Elections' and 'U.S. Mid-term Elections' are topics, but 'Elections' is not a topic. The system therefore needs to estimate whether or not a test story is on the same topic with very little information about that topic. Moreover, the training data is skewed, i.e. there is a large number of labelled negative stories compared to positive ones. The system thus needs to balance the amount of positive and negative training stories so as not to hamper the accuracy of estimation.

In this paper, we propose a method for estimating effective training stories for topic tracking. For the small number of labelled positive stories, we use bilingual comparable corpora (the TDT1-3 English corpora and the Japanese newspapers Mainichi and Yomiuri Shimbun). Our hypothesis in using bilingual corpora is that broadcasting stations in one country report local events more frequently and in more detail than overseas stations do, even for world-famous events. Consider a topic from the TDT corpora: 'Kobe Japan quake' from the TDT1 is world-famous, yet only 89 stories on it are included in the TDT1. The Mainichi and Yomiuri Japanese newspapers, however, have many more stories from the same period of time, namely 5,029 and 4,883 stories respectively. These observations show that it is crucial to investigate the use of bilingual comparable corpora with NLP techniques for collecting more information about specific topics. We extract Japanese stories which are relevant to the positive English stories using English-Japanese bilingual corpora together with the EDR bilingual dictionary. An associated story is the result of aligning a Japanese term association with an English term association.
For the large number of labelled negative stories, we classify them into clusters using the labelled positive stories. We use a semi-supervised clustering technique which combines labelled and unlabelled stories during clustering. Our goal for semi-supervised clustering is to classify negative stories into clusters such that each cluster is meaningful with respect to the class distribution provided by the single cluster of positive training stories. We introduce k-means clustering, which can be viewed as an instance of the EM algorithm, and classify negative stories into clusters. In general, the number of clusters k for the k-means algorithm is not given beforehand. We therefore use the Bayesian Information Criterion (BIC) as the splitting criterion and select the proper value of k.

2 Related Work

Most of the work addressing the small number of positive training stories applies statistical techniques based on word distributions and ML techniques. Allan et al. explored on-line adaptive filtering approaches based on a threshold strategy to tackle the problem (Allan et al., 1998). The basic idea behind their work is that stories closer together in the stream are more likely to discuss related topics than stories further apart. The method is based on unsupervised learning techniques except for its incremental nature. When a tracking query is first created from the Nt training stories, it is also given a threshold. During the tracking phase, if a story S scores over that threshold, S is regarded as relevant and the query is regenerated as if S were among the Nt training stories (a sketch of this adaptive loop is given at the end of this section). This method was tested on the TDT1 corpus, and the adaptive approach was found to be highly successful. However, adding more than four training stories provided only little help, even though in their approach up to 12 training stories were added. The method proposed in this paper is similar to Allan's method; however, our method collects relevant stories through story pairs extracted from bilingual comparable corpora.

Methods for finding bilingual story pairs are well studied in the cross-language IR task and in MT systems and bilingual lexicon construction (Dagan, 1997). Much of the previous work uses cosine similarity between story term vectors with some weighting technique (Allan et al., 1998) such as TF-IDF, or cross-language similarities of terms. However, most of these methods rely on only the two stories in question to estimate whether or not they are about the same topic. We instead use multiple links among stories to produce optimal results.

In the TDT tracking task, classifying negative stories into meaningful groups is also an important issue, since a large number of labelled negative stories is available in the TDT context. Basu et al. proposed a method using k-means clustering with the EM algorithm, where labelled data provides prior information about the conditional distribution of hidden category labels (Basu, 2002). They reported that the method outperformed standard random seeding and COP-k-means (Wagstaff, 2001). Our method shares its basic idea with Basu et al. An important difference from their method is that ours does not require the number of clusters k in advance, since k is determined during clustering. We use the BIC as the splitting criterion and estimate the proper value of k. This is an important feature because, in the tracking task, no knowledge of the number of topics in the negative training stories is available.
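Before moving on, the adaptive baseline above is worth making concrete, since Section 4.3 compares against it. The following is a minimal, hypothetical recreation of the threshold-based adaptation loop, not Allan et al.'s actual implementation; the vectorization, the default threshold, and the function names are all our own assumptions.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two term vectors."""
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v) / denom if denom else 0.0

def adaptive_track(query_vec, stream, threshold=0.5):
    """Threshold-based adaptive tracking: every story that scores over
    the threshold is declared on-topic and folded back into the query,
    as if it had been among the Nt training stories."""
    on_topic = []
    n = 1  # number of stories the query currently summarizes
    for story_vec in stream:  # stories arrive in time order
        if cosine(query_vec, story_vec) > threshold:
            on_topic.append(story_vec)
            # regenerate the query as the running mean of accepted stories
            query_vec = (query_vec * n + story_vec) / (n + 1)
            n += 1
    return on_topic
```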
3 System Description

The system consists of four procedures: extracting bilingual story pairs, extracting monolingual story pairs, clustering negative stories, and tracking.

3.1 Extracting Bilingual Story Pairs

We extract story pairs, each consisting of a positive English story and its associated Japanese stories, using the TDT English corpora and the Mainichi and Yomiuri Japanese corpora. To find the optimal pairs of positive English stories and their associated Japanese stories, we combine the outputs of several similarity links (multiple links). The idea comes from speech recognition, where two outputs are combined to yield a better result on average. Fig. 1 illustrates the multiple links.

[Figure 1: Multiple links among stories. Training stories (divided into positive and negative) and test stories in the TDT English corpus are linked along the timeline to Japanese stories J1, ..., Jm' in the Mainichi and Yomiuri corpora; arrows such as edge(E1, J1), edge(E1, J4), and edge(J2, J4) denote edges with similarity values between stories.]

In Fig. 1, for example, whether the story J2 discusses the target topic and is related to E1 is determined not only by the value of similarity between E1 and J2, but also by the similarities between J2 and J4, and between E1 and J4. Extracting story pairs is summarized as follows. Let the initial positive training stories E1, ..., Em be initial nodes, and each Japanese story J1, ..., Jm' be a node or terminal node in the graph G. We calculate cosine similarities between Ei (1 ≤ i ≤ m) and Jj (1 ≤ j ≤ m')¹. In a similar way, we calculate similarities between Jk and Jl (1 ≤ k, l ≤ m'). If the value of similarity between two nodes is larger than a certain threshold, we connect them by an edge (bold arrows in Fig. 1). Next, we delete every edge which is not a constituent of a maximal connected subgraph (dotted arrows in Fig. 1). After eliminating edges, we extract each remaining pair of an initial positive English story Ei and a Japanese story Jj as a linked story pair, and add the associated Japanese story Jj to the training stories. In Fig. 1, E1, J2, and J4 are extracted.

¹ m' is determined by the allowed difference of dates between English and Japanese story pairs.
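The graph construction above can be sketched compactly. The following is a minimal sketch of the multiple-links idea, assuming stories are already tf·idf vectors; the threshold value and function names are illustrative, and connected components stand in for the paper's maximal connected subgraphs.

```python
import numpy as np
import networkx as nx

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v) / denom if denom else 0.0

def extract_story_pairs(en_pos, ja, theta=0.65):
    """Connect English positive stories and Japanese stories whose
    similarity exceeds theta (including Japanese-Japanese links), then
    keep the Japanese stories that fall into a connected component
    containing at least one English story."""
    g = nx.Graph()
    g.add_nodes_from(f"E{i}" for i in range(len(en_pos)))
    g.add_nodes_from(f"J{j}" for j in range(len(ja)))
    for i, e in enumerate(en_pos):            # English-Japanese links
        for j, v in enumerate(ja):
            if cosine(e, v) > theta:
                g.add_edge(f"E{i}", f"J{j}")
    for k in range(len(ja)):                  # Japanese-Japanese links
        for l in range(k + 1, len(ja)):
            if cosine(ja[k], ja[l]) > theta:
                g.add_edge(f"J{k}", f"J{l}")
    linked = set()
    for comp in nx.connected_components(g):
        if any(node.startswith("E") for node in comp):
            linked |= {node for node in comp if node.startswith("J")}
    return sorted(linked)
```

The default threshold 0.65 is borrowed from the paper's bilingual story pair setting (footnote 5 in Section 4.2); any other value could be substituted.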
The procedure for calculating cosine similarities between Ei and Jj consists of two sub-steps: extracting terms and estimating bilingual term correspondences.

Extracting terms

The first step in calculating the similarity between Ei and Jj is to align a Japanese term with its associated English term using the bilingual dictionary, EDR. However, this naive method suffers from frequent failure due to the incompleteness of the bilingual dictionary. Consider the Mainichi Japanese newspaper stories: the total number of terms (words) from Oct. 1, 1998 to Dec. 31, 1998 was 528,726. Of these, 370,013 terms are not included in the EDR bilingual dictionary. For example, the Japanese term for 'Endeavour', which is a key term for the topic 'Shuttle Endeavour mission for space station' from the TDT3 corpus, is not included in the EDR bilingual dictionary. New terms which the morphological analyzer fails to segment correctly are also a problem when calculating similarities between stories in monolingual data. For example, the Japanese proper noun for 'Tokyo Metropolitan Univ.' is divided into the three terms 'Tokyo', 'Metropolitan', and 'Univ.'.

To tackle these problems, we conducted term extraction from a large collection of English and Japanese corpora. There are several techniques for term extraction (Chen, 1996). We used an n-gram model with Church-Gale smoothing, since Chen reported that it outperforms all existing methods on bigram models produced from large training data. The length of the extracted terms does not have a fixed range². We therefore applied the normalization shown in Eq. (1) to each length of term to bring the probability values into the range [0,1], and extracted terms whose normalized value is greater than a certain threshold. Words from the TDT English corpora (and the Japanese newspaper corpora) are identified if they match the extracted terms.

sim_new = (sim_old − sim_min) / (sim_max − sim_min)   (1)

² We set at most five noun words.

Bilingual term correspondences

The second step in calculating the similarity between Ei and Jj is to estimate bilingual term correspondences using χ² statistics, estimated from a large collection of English and Japanese data. More precisely, let Ei be an English story (1 ≤ i ≤ n), where n is the number of stories in the collection, and let S_J^i denote the set of Japanese stories whose cosine similarity with Ei is higher than a certain threshold value θ: S_J^i = {Jj | cos(Ei, Jj) ≥ θ}. We then concatenate the constituent Japanese stories of S_J^i into one story and construct a pseudo-parallel corpus PPC_EJ of English and Japanese stories: PPC_EJ = {(Ei, S_J^i) | S_J^i ≠ ∅}. Suppose that there are two criteria, a monolingual term tE in an English story and a term tJ in a Japanese story. We can determine whether or not a particular term belongs to a particular story. Consequently, term occurrences are divided into four classes, as shown in Table 1.

Table 1: tE and tJ contingency matrix

               tJ ∈ S_J^i   tJ ∉ S_J^i
  tE ∈ Ei          a            b
  tE ∉ Ei          c            d

Based on this contingency table of co-occurrence frequencies of tE and tJ, we estimate bilingual term correspondences according to the χ² statistic:

χ²(tE, tJ) = (ad − bc)² / ((a + b)(a + c)(b + d)(c + d))   (2)

We extract the term tJ as the pair of tE which maximizes the value of χ², i.e. max_{tJ ∈ TJ} χ²(tE, tJ), where TJ is the set of candidate Japanese terms. For the extracted English and Japanese term pairs, we conducted semi-automatic acquisition, i.e. we manually selected bilingual term pairs: since our source data is not a clean parallel corpus but an artificially generated, noisy pseudo-parallel corpus, it is difficult to compile bilingual terms fully automatically (Dagan, 1997). Finally, we align each Japanese term with its associated English term using the selected bilingual term correspondences, and again calculate cosine similarities between Japanese and English stories.
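Eq. (2) is cheap to compute once the four cell counts are available. Below is a minimal sketch of the correspondence step, assuming the pseudo-parallel corpus is given as a list of (English-term-set, Japanese-term-set) pairs; the function names and candidate-set construction are illustrative rather than the paper's actual code.

```python
from collections import Counter
from itertools import product

def chi2(a, b, c, d):
    """Eq. (2): chi-square score of a 2x2 contingency table."""
    denom = (a + b) * (a + c) * (b + d) * (c + d)
    return (a * d - b * c) ** 2 / denom if denom else 0.0

def term_correspondences(ppc):
    """ppc: list of (set_of_English_terms, set_of_Japanese_terms),
    one pair per pseudo-parallel 'story'. Returns, for each English
    term, the Japanese term maximizing the chi-square score."""
    n = len(ppc)
    e_freq, j_freq, pair_freq = Counter(), Counter(), Counter()
    for e_terms, j_terms in ppc:
        e_freq.update(e_terms)
        j_freq.update(j_terms)
        pair_freq.update(product(e_terms, j_terms))
    best = {}
    for (te, tj), a in pair_freq.items():
        b = e_freq[te] - a   # tE present, tJ absent
        c = j_freq[tj] - a   # tJ present, tE absent
        d = n - a - b - c    # both absent
        score = chi2(a, b, c, d)
        if score > best.get(te, (0.0, None))[0]:
            best[te] = (score, tj)
    return {te: tj for te, (score, tj) in best.items()}
```

In practice the output of such a step would still be filtered by hand, as the paper's semi-automatic acquisition does.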
3.2 Extracting Monolingual Story Pairs

We noted above that our source data is not a clean parallel corpus. The difference of dates between bilingual stories is therefore one of the key factors for improving the performance of story pair extraction: stories closer together in the timeline are more likely to discuss related subjects. We accordingly extract bilingual story pairs only from stories close together in the timelines. However, this often works against our basic motivation for using bilingual corpora, namely that they help to collect more information about the target topic. We therefore also extract monolingual (Japanese) story pairs and add them to the training stories. Extracting Japanese monolingual story pairs is quite simple. Let Jj (1 ≤ j ≤ m') be a Japanese story extracted by the bilingual story pair procedure. We calculate the cosine similarity between Jj and each story Jk (1 ≤ k ≤ n). If the value of similarity between them is larger than a certain threshold, we add Jk to the training stories.

3.3 Clustering Negative Stories

Our method for classifying negative stories into clusters is based on Basu et al.'s method (Basu, 2002), which uses k-means with the EM algorithm. K-means is a clustering algorithm based on iterative relocation that partitions a dataset into k clusters, locally minimizing the average squared distance between the data points and the cluster centers (centroids). Suppose we classify X = {x1, ..., xN}, xi ∈ R^d, into k clusters: one cluster consists of positive stories, and the other k−1 clusters consist of negative stories. To which clusters, then, does each negative story belong? EM is a method of finding the maximum-likelihood estimate (MLE) of the parameters of an underlying distribution from a set of observed data that has missing values. K-means is essentially EM on a mixture of k Gaussians under certain assumptions. In standard k-means without any initial supervision, the k means are chosen randomly in the initial M-step and the stories are assigned to the nearest means in the subsequent E-step. For positive training stories, the initial labels are kept unchanged throughout the algorithm, whereas the conditional distribution for the negative stories is re-estimated at every E-step. We select k initial stories: one is the cluster center of the positive stories, and the other k−1 are the negative stories with the top k−1 smallest similarity values with respect to the cluster center of the positive stories.

In Basu et al.'s method, the number k is given by the user. However, for negative training stories the number of clusters is not given beforehand. We therefore developed an algorithm for estimating k. It goes into action after each run of k-means³, making decisions about which sets of clusters should be chosen in order to better fit the data. The splitting decision is made by computing the Bayesian Information Criterion shown in Eq. (3):

BIC(k = l) = ll̂_l(X) − (p_l / 2) · log N   (3)

where ll̂_l(X) is the log-likelihood of X when the number of clusters is l, N is the total number of training stories, and p_l is the number of parameters for k = l. We set p_l to the sum of the k class probabilities, the n·k centroid coordinates, and the MLE for the variance, σ̂². Here, n is the number of dimensions. Under the identical spherical Gaussian assumption, σ̂² is:

σ̂² = (1 / (N − k)) Σ_i (x_i − μ_i)²   (4)

where μ_i denotes the center of the partition to which x_i belongs. The point probabilities are:

P̂(x_i) = (R_i / N) · (2πσ̂²)^(−n/2) · exp(−‖x_i − μ_i‖² / (2σ̂²))   (5)

where R_i is the number of stories that have μ_i as their closest centroid. The log-likelihood of X is log Π_i P̂(x_i). It is taken at the maximum-likelihood point; thus, focusing just on the set X_m ⊆ X which belongs to centroid m and plugging in the MLE yields:

ll̂(X_m) = −(R_m / 2) log(2π) − (R_m · n / 2) log(σ̂²) − (R_m − k) / 2 + R_m log R_m − R_m log N   (1 ≤ m ≤ k)   (6)

The total log-likelihood is then ll̂(X) = Σ_{m=1}^{k} ll̂(X_m). We choose the number k whose BIC value is highest.

³ We set the maximum number of k to 100 in the experiment.
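To make the estimation of k concrete, here is a compact sketch of the seeded k-means loop with BIC-based model selection, under the same identical spherical Gaussian assumption as Eqs. (3)-(6). It is a simplified illustration, not the authors' implementation: Euclidean distance is used throughout, and the seeding and tie-breaking details are our own assumptions.

```python
import numpy as np

def seeded_kmeans(X, pos_idx, k, iters=50):
    """k-means where cluster 0 is pinned to the labelled positive
    stories (their assignment never changes); the remaining clusters
    hold negative stories."""
    neg_idx = np.array([i for i in range(len(X)) if i not in set(pos_idx)])
    # seed: positive centroid plus the k-1 negatives farthest from it
    c0 = X[pos_idx].mean(axis=0)
    d = np.linalg.norm(X[neg_idx] - c0, axis=1)
    centroids = np.vstack([c0, X[neg_idx[np.argsort(-d)[:k - 1]]]])
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # E-step: reassign only the negative stories
        dist = np.linalg.norm(X[neg_idx, None] - centroids[None], axis=2)
        labels[neg_idx] = dist.argmin(axis=1)
        # M-step: recompute centroids (cluster 0 keeps the positives)
        for m in range(k):
            members = X[labels == m]
            if len(members):
                centroids[m] = members.mean(axis=0)
    return labels, centroids

def bic(X, labels, centroids):
    """Eqs. (3)-(6): BIC under identical spherical Gaussians."""
    N, n = X.shape
    k = len(centroids)
    var = ((X - centroids[labels]) ** 2).sum() / (N - k)   # Eq. (4)
    ll = 0.0
    for m in range(k):                                     # Eq. (6)
        Rm = (labels == m).sum()
        if Rm:
            ll += (-Rm / 2 * np.log(2 * np.pi) - Rm * n / 2 * np.log(var)
                   - (Rm - k) / 2 + Rm * np.log(Rm) - Rm * np.log(N))
    p = k * (n + 1) + 1  # class probabilities + centroids + variance
    return ll - p / 2 * np.log(N)                          # Eq. (3)
```

One would run this for each candidate k up to some maximum (the paper caps k at 100) and keep the clustering whose BIC is highest.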
3.4 Tracking

Each story is represented as a vector of terms with tf·idf weights in an n-dimensional space, where n is the number of terms in the collection. Whether or not each test story is positive is judged using the distance (measured by cosine similarity) between the vector representation of the test story and each centroid g of the clusters. Fig. 2 illustrates the clusters and a test story in the tracking procedure: the training stories are classified into a cluster of positive training data and clusters of negative training data, and the test story is compared with the center of gravity of each cluster.

[Figure 2: Each cluster and a test story. The test story is assigned according to the minimum distance between the test data and the centers of gravity of the positive cluster and the negative clusters.]

The centroid g for each cluster is calculated as follows:

g = (g_1, ..., g_n) = ((1/p) Σ_{i=1}^{p} x_i1, ..., (1/p) Σ_{i=1}^{p} x_in)   (7)

where x_ij (1 ≤ j ≤ n) is the tf·idf weighted value of term j in the story x_i, and p is the number of stories in the cluster. The test story is judged using these centroids: if the distance between the test story and the centroid of the positive stories is the smallest among all the centroids, the test story is declared to be positive. In Fig. 2, the test story is regarded as negative, since its distance to one of the negative centroids is the smallest. This procedure is repeated until the last test story is judged.
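The judgment step in Fig. 2 thus reduces to a nearest-centroid test. Below is a minimal sketch, using cosine distance (1 − cosine similarity) as a rendering of the distance the paper measures by cosine similarity; the function names are illustrative.

```python
import numpy as np

def centroid(cluster):
    """Eq. (7): per-term mean of the tf-idf vectors in a cluster."""
    return np.asarray(cluster).mean(axis=0)

def is_positive(test_vec, pos_centroid, neg_centroids):
    """Declare the test story positive iff the positive centroid is
    the nearest one under cosine distance."""
    def cos_dist(u, v):
        denom = np.linalg.norm(u) * np.linalg.norm(v)
        return 1.0 - (float(u @ v) / denom if denom else 0.0)
    d_pos = cos_dist(test_vec, pos_centroid)
    return all(d_pos < cos_dist(test_vec, g) for g in neg_centroids)
```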
4 Experiments

4.1 Creating Japanese Corpus

We chose the TDT3 English corpus as our gold standard. TDT3 consists of 34,600 stories with 60 manually identified topics. We then created Japanese corpora (Mainichi and Yomiuri newspapers) to evaluate the method. We annotated a total of 66,420 stories from Oct. 1 to Dec. 31, 1998, against the 60 topics. Each story was labelled according to whether or not it discussed a topic. Not all the topics were present in the Japanese corpora. We therefore collected 1 topic from the TDT1 and 2 topics from the TDT2, each of which occurred in Japan, and added them to the experiment. TDT1 is collected from a different period of dates than the TDT3, and the first story of the 'Kobe Japan Quake' topic starts from Jan. 16th; we annotated 174,384 stories of the Japanese corpora from the same period for this topic. Table 2 shows the 24 topics which are included in the Japanese corpora. 'TDT' refers to the evaluation data, TDT1, 2, or 3. 'ID' denotes the topic number defined by the TDT. 'OnT.' (On-Topic) refers to the number of stories discussing the topic; bold font in the original table marks a topic which happened in Japan. The annotation was evaluated by three humans, and a classification is determined to be correct if the majority of the three human judges agree.

Table 2: Topics included in the Japanese corpora

TDT  ID     Topic name                                    OnT.
1    15     Kobe Japan quake                              9,912
2    31015  Japan Apology to Korea                        28
3    30001  Cambodian government coalition                48
3    30006  NBA labor disputes                            44
3    30017  North Korean food shortages                   23
3    30022  Chinese dissidents sentenced                  21
3    30031  Shuttle Endeavour mission for space station   17
3    30034  Indonesia-East Timor conflict                 34
3    30041  Jiang's Historic Visit to Japan               111
3    30047  Space station module Zarya launched           30
3    30049  North Korean nuclear facility?                111
3    30053  Clinton's Gaza trip                           74
3    30057  India train derailment                        12
2    31023  Kyoto Energy Protocol                         40
3    30003  Pinochet trial                                165
3    30014  Nigerian gas line fire                        6
3    30018  Tony Blair visits China in Oct.               7
3    30030  Taipei Mayoral elections                      353
3    30033  Euro Introduced                               152
3    30038  Olympic bribery scandal                       35
3    30042  PanAm lockerbie bombing trial                 13
3    30048  IMF bailout of Brazil                         28
3    30050  U.S. Mid-term elections                       123
3    30055  D'Alema's new Italian government              37

4.2 Experimental Setup

The English data we used for extracting terms is the Reuters '96 corpus (806,791 stories), including the TDT1 and TDT3 corpora. The Japanese data was 1,874,947 stories: 14 years (1991 to 2004) of Mainichi newspapers (1,499,936 stories) and 3 years (1994, 1995, and 1998) of Yomiuri newspapers (375,011 stories). All Japanese stories were tagged by the morphological analyzer ChaSen (Matsumoto, 1997). English stories were tagged by a part-of-speech tagger (Schmid, 1995), and stop words were removed. We applied the n-gram model with Church-Gale smoothing to noun words and selected terms whose probabilities are higher than a certain threshold⁴. As a result, we obtained 338,554 Japanese and 130,397 English terms. We used the EDR bilingual dictionary and translated Japanese terms into English. Some of the terms had no translation; for these, we estimated term correspondences. Each story is represented as a vector of terms with tf·idf weights. We calculated story similarities and extracted story pairs between positive stories and their associated stories⁵; the difference of dates between bilingual stories was at most ±4 days. In the tracking, we used the extracted terms together with all verbs, adjectives, and numbers, and represented each story as a vector of these with tf·idf weights.

⁴ The threshold value for both English and Japanese was 0.800. It was empirically determined.
⁵ The threshold value for bilingual story pairs was 0.65, and that for monolingual pairs was 0.48.

We used the evaluation measures from the TDT benchmark evaluations. 'Miss' denotes the miss rate, the ratio of on-topic stories that the system failed to mark YES. 'F/A' denotes the false alarm rate, the ratio of off-topic stories that the system erroneously marked YES. The DET curve plots misses against false alarms; better performance is indicated by curves closer to the lower left of the graph. The detection cost function (C_Det) is defined by Eq. (8):

C_Det = C_Miss · P_Miss · P_Target + C_Fa · P_Fa · (1 − P_Target)
P_Miss = #Misses / #Targets
P_Fa = #FalseAlarms / #NonTargets   (8)

where C_Miss, C_Fa, and P_Target are the cost of a missed detection, the cost of a false alarm, and the a priori probability of finding a target, respectively. They are usually set to 10, 1, and 0.02. The normalized cost function is defined by Eq. (9), and lower cost scores indicate better performance:

(C_Det)_Norm = C_Det / MIN(C_Miss · P_Target, C_Fa · (1 − P_Target))   (9)
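Eqs. (8) and (9) translate directly into a few lines. The following is a straightforward rendering of the two formulas with the standard TDT constants as defaults; it is an illustration, not the official TDT evaluation code.

```python
def detection_cost(misses, targets, false_alarms, non_targets,
                   c_miss=10.0, c_fa=1.0, p_target=0.02):
    """Eq. (8): detection cost from raw error counts."""
    p_miss = misses / targets
    p_fa = false_alarms / non_targets
    return c_miss * p_miss * p_target + c_fa * p_fa * (1.0 - p_target)

def normalized_cost(c_det, c_miss=10.0, c_fa=1.0, p_target=0.02):
    """Eq. (9): normalize so lower scores mean better tracking."""
    return c_det / min(c_miss * p_target, c_fa * (1.0 - p_target))

# Example with hypothetical counts: 20 misses out of 100 targets,
# 13 false alarms out of 10,000 non-target stories.
c = detection_cost(misses=20, targets=100,
                   false_alarms=13, non_targets=10000)
print(round(normalized_cost(c), 3))
```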
4.3 Basic Results

Table 3 summarizes the tracking results. MIN denotes MIN(C_Det)_Norm, the value of (C_Det)_Norm at the best possible threshold. Nt is the number of initial positive training stories. Recall that we used only a subset of the topics defined by the TDT, so published results are not directly comparable; we therefore implemented Allan's method (Allan et al., 1998), which is similar to ours, and compared the results. It is based on a tracking query created from the 10 most commonly occurring features in the Nt stories, with weights equal to the number of times the term occurred in those stories multiplied by its incremental idf value. They used a shallow tagger and selected all nouns, verbs, adjectives, and numbers. We added our extracted terms to these part-of-speech words to make their results comparable with the results of our method. 'Baseline' in Table 3 shows the best result with their method among varying threshold values of similarity between queries and test stories.

Table 3: Basic results

TDT1 (Kobe Japan Quake)
     Baseline                                   Bilingual corpora & clustering
Nt   Miss  F/A    Recall  Prec.  F    MIN    |  Miss  F/A    Recall  Prec.  F    MIN
1    27%   .15%   73%     67%    .70  .055   |  10%   .42%   90%     74%    .81  .023
2    20%   .12%   80%     73%    .76  .042   |   6%   .27%   93%     76%    .83  .013
4     9%   .09%   91%     80%    .85  .039   |   5%   .18%   96%     81%    .88  .012

TDT2 & TDT3 (23 topics)
     Baseline                                   Bilingual corpora & clustering
Nt   Miss  F/A    Recall  Prec.  F    MIN    |  Miss  F/A    Recall  Prec.  F    MIN
1    41%   .17%   59%     60%    .60  .089   |  29%   .25%   71%     54%    .61  .059
2    40%   .16%   60%     62%    .61  .072   |  27%   .25%   73%     55%    .63  .054
4    29%   .12%   71%     72%    .71  .057   |  20%   .13%   80%     73%    .76  .041

We can see that the performance of our method is competitive with the baseline at every Nt value. Fig. 3 shows DET curves for both our method and Allan's method (baseline) for the 23 topics from the TDT2 and 3. Fig. 4 illustrates the results for the 3 topics from the TDT2 and 3 which occurred in Japan. To make comparison possible, only the Nt = 4 curves are given for each. Both figures show that we gain an advantage by using bilingual comparable corpora.

[Figure 3: Tracking result (23 topics). DET curves of the method with story pairs and the baseline, against random performance.]

[Figure 4: Tracking result for the 3 topics concerning Japan. DET curves of the method with story pairs and the baseline, against random performance.]

4.4 The Effect of Story Pairs

The contribution of the extracted story pairs, especially the use of the two types of story pairs, bilingual and monolingual, is best explained by looking at two results: (i) the tracking results with the two types of story pairs, with only the English and Japanese stories in question, and without story pairs; and (ii) the results of story pair extraction at varying values of Nt. Fig. 5 illustrates DET curves for the 23 topics at Nt = 4. As can clearly be seen from Fig. 5, using story pairs improves the overall performance, and the result with the two types of story pairs is better than that with only the English and Japanese stories in question.

[Figure 5: Tracking with two types of story pairs, with only J-E story pairs, and without story pairs (23 topics, Nt = 4), against random performance.]

Table 4 shows the performance of story pair extraction, where each pair consists of a positive story and its associated story. Each result denotes micro-averaged scores. 'Rec.' is the number of correct story pair assignments made by the system divided by the total number of correct assignments. 'Prec.' is the number of correct story pair assignments made by the system divided by the total number of the system's assignments.

Table 4: Performance of story pairs (24 topics)

     Two types of story pairs    J-E story pairs
Nt   Rec.   Prec.   F            Rec.   Prec.   F
1    30%    82%     .439         28%    80%     .415
2    36%    85%     .506         33%    82%     .471
4    45%    88%     .595         42%    79%     .548

Table 4 shows that the system with two types of story pairs correctly extracted stories related to the target topic even for a small number of positive training stories, since the precision at Nt = 1 is 0.82. However, each recall value in Table 4 is low. One solution is an incremental approach: by repeating story pair extraction, new story pairs that were not extracted previously may be found. This is a rich space for further exploration.

The effect of story pairs on the tracking task also depends on the performance of the bilingual term correspondences. We obtained 1,823 English-Japanese term pairs in all when the period of days was ±4. Fig. 6 illustrates the results using different periods of days (±1 to ±10). For example, '±1' means that the difference of dates between English and Japanese story pairs is less than ±1 day. The y-axis shows precision, the number of correct term pairs produced by the system divided by the total number of the system's assignments. Fig. 6 shows that the difference of dates between bilingual story pairs affects the overall performance.

[Figure 6: Precision of bilingual term pairs with different periods of days (±1 to ±10).]
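For reference, the micro-averaged scores reported in Table 4 follow the usual set-overlap definitions. A minimal sketch, assuming story pairs are represented as hashable (English-ID, Japanese-ID) tuples; the names are illustrative.

```python
def micro_prf(system_pairs, gold_pairs):
    """Micro-averaged recall/precision/F over extracted story pairs:
    Rec. = correct system pairs / all gold pairs,
    Prec. = correct system pairs / all system pairs."""
    sys_set, gold_set = set(system_pairs), set(gold_pairs)
    correct = len(sys_set & gold_set)
    rec = correct / len(gold_set) if gold_set else 0.0
    prec = correct / len(sys_set) if sys_set else 0.0
    f = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return rec, prec, f
```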
4.5 The Effect of k-means with EM

The contribution of k-means with EM for classifying negative stories is explained by looking at the result obtained without classifying negative stories: we calculated a single centroid from all negative training stories, and each test story is judged to be negative or positive by calculating the cosine similarities between the test story and the centroids of the negative and positive stories. Further, to examine the effect of using the BIC, we compared it with choosing a pre-defined k, i.e. k = 10, 50, and 100. Fig. 7 illustrates part of the result, for k = 100.

[Figure 7: BIC vs. fixed k for k-means with EM. DET curves for the BIC-based method (with classifying), k = 0, and k = 100, against random performance.]

We can see that the method without classifying negative stories (k = 0) does not perform as well and results in a high miss rate. This result is not surprising: the number of negative training stories is large compared with that of positive ones, and therefore test stories are erroneously judged as NO. Furthermore, the result indicates that we do need to run the BIC procedure, as its result was better than the results with any pre-defined number k, i.e. k = 10, 50, and 100. We also found that there was no correlation between the number of negative training stories for each of the 24 topics and the number of clusters k obtained by the BIC. The minimum number of clusters k was 44, and the maximum was 100.

5 Conclusion

In this paper, we addressed the issue of the difference in size between the positive and negative training stories for the tracking task, and investigated the use of bilingual comparable corpora and semi-supervised clustering. The empirical results were encouraging. Future work includes (i) extending the method to an incremental approach for extracting story pairs, (ii) comparing our clustering method with other existing methods such as X-means (Pelleg, 2000), and (iii) applying the method to the TDT4 for quantitative evaluation.

Acknowledgments

This work was supported by the Grant-in-aid for the JSPS, the Support Center for Advanced Telecommunications Technology Research, and the International Communications Foundation.
References

J. Allan, R. Papka, and V. Lavrenko. 1998. On-line new event detection and tracking. Proc. of the DARPA Workshop.

J. Allan, V. Lavrenko, and R. Nallapati. 2002. UMass at TDT 2002. Proc. of the TDT Workshop.

S. Basu, A. Banerjee, and R. Mooney. 2002. Semi-supervised clustering by seeding. Proc. of ICML'02.

J. Carbonell et al. 1999. CMU report on TDT-2: segmentation, detection and tracking. Proc. of the DARPA Workshop.

S. F. Chen and J. Goodman. 1996. An empirical study of smoothing techniques for language modeling. Proc. of ACL'96, pp. 310-318.

N. Collier, H. Hirakawa, and A. Kumano. 2002. Machine translation vs. dictionary term translation - a comparison for English-Japanese news article alignment. Proc. of COLING'02, pp. 263-267.

I. Dagan and K. Church. 1997. Termight: Coordinating humans and machines in bilingual terminology acquisition. Journal of MT, Vol. 20, No. 1, pp. 89-107.

M. Franz and J. S. McCarley. 2001. Unsupervised and supervised clustering for topic tracking. Proc. of SIGIR'01, pp. 310-317.

L. S. Larkey et al. 2004. Language-specific models in multilingual topic tracking. Proc. of SIGIR'04, pp. 402-409.

Y. Matsumoto et al. 1997. Japanese morphological analysis system ChaSen manual. NAIST Technical Report.

D. W. Oard. 1999. Topic tracking with the PRISE information retrieval system. Proc. of the DARPA Workshop, pp. 94-101.

D. Pelleg and A. Moore. 2000. X-means: Extending K-means with efficient estimation of the number of clusters. Proc. of ICML'00, pp. 727-734.

H. Schmid. 1995. Improvements in part-of-speech tagging with an application to German. Proc. of the EACL SIGDAT Workshop.

K. Wagstaff et al. 2001. Constrained K-means clustering with background knowledge. Proc. of ICML'01, pp. 577-584.

Y. Yang et al. 2000. Improving text categorization methods for event tracking. Proc. of SIGIR'00, pp. 65-72.