Novel Association Measures Using Web Search with Double Checking

Hsin-Hsi Chen, Ming-Shun Lin and Yu-Chuan Wei
Department of Computer Science and Information Engineering
National Taiwan University, Taipei, Taiwan
hhchen@csie.ntu.edu.tw; {mslin,ycwei}@nlg.csie.ntu.edu.tw

Abstract

A web search with double checking model is proposed to explore the web as a live corpus. Five association measures, including variants of Dice, Overlap Ratio, Jaccard, and Cosine, as well as Co-Occurrence Double Check (CODC), are presented. In the experiments on Rubenstein-Goodenough's benchmark data set, the CODC measure achieves a correlation coefficient of 0.8492, which competes with the performance (0.8914) of the model using WordNet. The experiments on link detection of named entities using the strategies of direct association, association matrix, and scalar association matrix verify that the double-check frequencies are reliable. A further study on named entity clustering shows that the five measures are quite useful. In particular, the CODC measure is very stable in both word-word and name-name experiments. Applying the CODC measure to expand community chains for personal name disambiguation achieves 9.65% and 14.22% increases compared to the system without community expansion. All the experiments illustrate that the novel model of web search with double checking is feasible for mining associations from the Web.

1 Introduction

In statistical natural language processing, the resources used to compute the statistics are indispensable. Different kinds of corpora have been made available, and many language models have been experimented with. One major issue behind corpus-based approaches is whether the adopted corpora reflect up-to-date usage. As we know, languages are alive: new terms and phrases enter daily use, and how to capture these new usages is an important research topic. The Web is a heterogeneous document collection whose huge scale and dynamic nature are its main characteristics.
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 1009-1016, Sydney, July 2006. © 2006 Association for Computational Linguistics

Regarding the Web as a live corpus has recently become an active research topic, and how to utilize the huge volume of web data to measure the association of information is an important issue. Resnik and Smith (2003) employ the Web as parallel corpora to provide bilingual sentences for translation models. Keller and Lapata (2003) show that web counts and corpus counts of English bigrams are highly correlated. Besides, obtaining the word counts and word-association counts without scanning the whole collection is indispensable, because directly managing the web pages is not an easy task when the Web grows very fast. Search engines provide a way to return useful information. The page count for a query roughly denotes how many web pages contain a specific word or word pair. A page count is different from a word frequency, which denotes how many times a word occurs. Lin and Chen (2004) explore the use of the page counts provided by different search engines to compute the statistics for Chinese segmentation. In addition to page counts, the snippets returned by a web search are another source of web data for training. A snippet consists of a title, a short summary of a web page, and a hyperlink to that web page. Because of the cost of retrieving full web pages, the short summaries are usually adopted instead (Lin, Chen, and Chen, 2005). Various measures have been proposed to compute the association of objects of different granularities, such as terms and documents. Rodríguez and Egenhofer (2003) compute semantic similarity from WordNet and the SDTS ontology by word matching, feature matching, and semantic neighborhood matching. Li et al.
(2003) investigate how information sources can be used effectively, and propose a new similarity measure that combines shortest path length, depth, and local density in WordNet. Matsuo et al. (2004) exploit the Jaccard coefficient to build a "Web of Trust" for an academic community.

This paper measures the association of terms using snippets returned by web search. A web search with double checking model is proposed to compute the statistics for various association measures in Section 2. Common words and personal names are used for the experiments in Sections 3 and 4, respectively. Section 5 demonstrates how to derive communities from the Web using association measures and how to employ them to disambiguate personal names. Finally, Section 6 concludes.

2 A Web Search with Double Checking Model

Instead of simple web page counts and complex web page collection, we propose a novel model, Web Search with Double Checking (WSDC), to analyze snippets. In the WSDC model, two objects X and Y are postulated to have an association if we can find Y from X (a forward process) and find X from Y (a backward process) by web search. The forward process counts the total occurrences of Y in the top N snippets of query X, denoted as f(Y@X). Similarly, the backward process counts the total occurrences of X in the top N snippets of query Y, denoted as f(X@Y). The forward and backward processes form a double-check operation. Under the WSDC model, the association scores between X and Y are defined by the following formulas:

  VariantDice(X, Y)    = 0 if f(Y@X) = 0 or f(X@Y) = 0;
                       = (f(Y@X) + f(X@Y)) / (f(X) + f(Y)) otherwise                 (1)

  VariantCosine(X, Y)  = min(f(Y@X), f(X@Y)) / sqrt(f(X) × f(Y))                     (2)

  VariantJaccard(X, Y) = min(f(Y@X), f(X@Y)) / (f(X) + f(Y) − max(f(Y@X), f(X@Y)))   (3)

  VariantOverlap(X, Y) = min(f(Y@X), f(X@Y)) / min(f(X), f(Y))                       (4)

  CODC(X, Y)           = 0 if f(Y@X) = 0 or f(X@Y) = 0;
                       = e^(log[(f(Y@X)/f(X)) × (f(X@Y)/f(Y))]^α) otherwise          (5)

where f(X) is the total occurrences of X in the top N snippets of query X and, similarly, f(Y) is the total occurrences of Y in the top N snippets of query Y. Formulas (1)-(4) are variants of the Dice, Cosine, Jaccard, and Overlap Ratio association measures. Formula (5) is the function CODC (Co-Occurrence Double-Check), which measures the association in the interval [0,1]. In the extreme cases, when f(Y@X)=0 or f(X@Y)=0, CODC(X,Y)=0; and when f(Y@X)=f(X) and f(X@Y)=f(Y), CODC(X,Y)=1. In the first case, X and Y have no association. In the second case, X and Y have the strongest association.

3 Association of Common Words

We employ Rubenstein-Goodenough's (1965) benchmark data set to compare the performance of the various association measures. The data set consists of 65 word pairs. The similarities between words, called Rubenstein and Goodenough ratings (RG ratings), were rated on a scale of 0.0 ("semantically unrelated") to 4.0 ("highly synonymous") by 51 human subjects. The Pearson product-moment correlation coefficient r_xy between the RG ratings X and the association scores Y computed by a model measures the performance of that model:

  r_xy = Σ_{i=1}^{n} (x_i − x̄)(y_i − ȳ) / ((n − 1) s_x s_y)                          (6)

where x̄ and ȳ are the sample means of x_i and y_i, s_x and s_y are the sample standard deviations of x_i and y_i, and n is the number of samples. Most approaches (Resnik, 1995; Lin, 1998; Li et al., 2003) used 28 word pairs only. Resnik (1995) obtained information content from WordNet and achieved a correlation coefficient of 0.745. Lin (1998) proposed an information-theoretic similarity measure and achieved a correlation coefficient of 0.8224. Li et al. (2003) combined semantic density, path length, and depth effects from WordNet and achieved a correlation coefficient of 0.8914.
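To make the double-check bookkeeping concrete, Formulas (1)-(5) can be sketched as a single function. This is our illustrative code, not the authors' implementation; in particular, the handling of the signed power in CODC's exponent (the log of a ratio in (0,1] is negative, so we preserve its sign before raising its magnitude to α, which keeps the score in (0,1]) is our reading of Formula (5).

```python
import math

def wsdc_scores(f_yx, f_xy, f_x, f_y, alpha=0.15):
    """Compute the five WSDC association measures from double-check counts.

    f_yx: occurrences of Y in the top N snippets of query X, i.e. f(Y@X)
    f_xy: occurrences of X in the top N snippets of query Y, i.e. f(X@Y)
    f_x, f_y: occurrences of X (resp. Y) in its own top N snippets
    """
    if f_yx == 0 or f_xy == 0:
        # No double-check evidence: every measure is defined to be 0.
        return {m: 0.0 for m in ("dice", "cosine", "jaccard", "overlap", "codc")}
    lo, hi = min(f_yx, f_xy), max(f_yx, f_xy)
    ratio = (f_yx / f_x) * (f_xy / f_y)        # product of conditional ratios
    log_r = math.log(ratio)                     # <= 0 when ratio <= 1
    return {
        "dice": (f_yx + f_xy) / (f_x + f_y),                    # Formula (1)
        "cosine": lo / math.sqrt(f_x * f_y),                    # Formula (2)
        "jaccard": lo / (f_x + f_y - hi),                       # Formula (3)
        "overlap": lo / min(f_x, f_y),                          # Formula (4)
        # Formula (5): sign-preserving power, our interpretation of log[.]^alpha
        "codc": math.exp(math.copysign(abs(log_r) ** alpha, log_r)),
    }
```

With f(Y@X)=f(X) and f(X@Y)=f(Y) the ratio is 1, its log is 0, and CODC is e^0 = 1, matching the "strongest association" case in the text.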
#Snippets        100     200     300     400     500     600     700     800     900
VariantDice      0.5332  0.5169  0.5352  0.5406  0.5306  0.5347  0.5286  0.5421  0.5250
VariantOverlap   0.5517  0.6516  0.6973  0.7173  0.6923  0.7259  0.7473  0.7556  0.7459
VariantJaccard   0.5533  0.6409  0.6993  0.7229  0.6989  0.7380  0.7613  0.7599  0.7486
VariantCosine    0.5552  0.6459  0.7063  0.7279  0.6987  0.7398  0.7624  0.7594  0.7501
CODC (α=0.15)    0.5629  0.6951  0.8051  0.8473  0.8438  0.8492  0.8222  0.8291  0.8182
Jaccard Coeff*   0.5847  0.5933  0.6099  0.5807  0.5463  0.5202  0.4855  0.4549  0.4622

Table 1. Correlation Coefficients of WSDC Model on Word-Word Experiments

Word Pair         RG Rating  Resnik (1995)  Lin (1998)  Li et al. (2003)  WSDC VariantCosine  WSDC CODC
                                                                          (#snippets=700)     (α=0.15, #snippets=600)
Correlation                  0.7450         0.8224      0.8914            0.7624              0.8492
chord-smile       0.02       1.1762         0.20        0                 0                   0
rooster-voyage    0.04       0              0           0                 0                   0
noon-string       0.04       0              0           0                 0                   0
glass-magician    0.44       1.0105         0.06        0                 0                   0
monk-slave        0.57       2.9683         0.18        0.350             0                   0
coast-forest      0.85       0              0.16        0.170             0.0019              0.1686
monk-oracle       0.91       2.9683         0.14        0.168             0                   0
lad-wizard        0.99       2.9683         0.20        0.355             0                   0
forest-graveyard  1.00       0              0           0.132             0                   0
food-rooster      1.09       1.0105         0.04        0                 0                   0
coast-hill        1.26       6.2344         0.58        0.366             0                   0
car-journey       1.55       0              0           0                 0.0014              0.2049
crane-implement   2.37       2.9683         0.39        0.366             0                   0
brother-lad       2.41       2.9355         0.20        0.355             0.0027              0.1811
bird-crane        2.63       9.3139         0.67        0.472             0                   0
bird-cock         2.63       9.3139         0.83        0.779             0.0058              0.2295
food-fruit        2.69       5.0076         0.24        0.170             0.0025              0.2355
brother-monk      2.74       2.9683         0.16        0.779             0.0027              0.1956
asylum-madhouse   3.04       15.666         0.97        0.779             0.0015              0.1845
furnace-stove     3.11       1.7135         0.18        0.585             0.0035              0.1982
magician-wizard   3.21       13.666         1           0.999             0.0031              0.2076
journey-voyage    3.58       6.0787         0.89        0.779             0.0086              0.2666
coast-shore       3.60       10.808         0.93        0.779             0.0139              0.2923
implement-tool    3.66       6.0787         0.80        0.778             0.0033              0.2506
boy-lad           3.82       8.424          0.85        0.778             0.0101              0.2828
automobile-car    3.92       8.0411         1           1                 0.0144              0.4229
midday-noon       3.94       12.393         1           1                 0.0097              0.2994
gem-jewel         3.94       14.929         1           1                 0.0107              0.3530

Table 2.
Comparisons of WSDC with Models in Previous Researches

In our experiments on the benchmark data set, we used information from the Web rather than WordNet. Table 1 summarizes the correlation coefficients between the RG ratings and the association scores computed by our WSDC model. We consider numbers of snippets from 100 to 900. The results show that CODC > VariantCosine > VariantJaccard > VariantOverlap > VariantDice. The CODC measure achieves the best performance, 0.8492, when α=0.15 and 600 snippets are analyzed. Matsuo et al. (2004) used the Jaccard coefficient to calculate the similarity between personal names using the Web. The coefficient is defined as follows:

  JaccardCoeff(X, Y) = f(X∧Y) / f(X∨Y)                                               (7)

where f(X∧Y) is the number of pages containing both X's and Y's homepages when the query "X and Y" is submitted to a search engine, and f(X∨Y) is the number of pages containing X's or Y's homepage when the query "X or Y" is submitted. We revised this formula as follows and evaluated it on the same Rubenstein-Goodenough benchmark:

  JaccardCoeff*(X, Y) = fs(X∧Y) / fs(X∨Y)                                            (8)

where fs(X∧Y) is the number of snippets in which X and Y co-occur in the top N snippets of the query "X and Y", and fs(X∨Y) is the number of snippets containing X or Y in the top N snippets of the query "X or Y". The last row of Table 1 shows that Jaccard Coeff* is worse than the other models when the number of snippets is larger than 100.

Table 2 lists the results of previous researches (Resnik, 1995; Lin, 1998; Li et al., 2003) and of our WSDC models using the VariantCosine and CODC measures. The 28 word pairs used in the experiments are shown. The CODC measure can compete with Li et al. (2003). The word pair "car-journey", whose similarity value is 0 in the previous papers (Resnik, 1995; Lin, 1998; Li et al., 2003), is captured by our model. In contrast, our model cannot deal with the two word pairs "crane-implement" and "bird-crane".

Figure 1.
Three Strategies for Link Detection

4 Association of Named Entities

Although the correlation coefficient of the WSDC model built on the Web is a little worse than that of the model built on WordNet, the Web provides a live vocabulary, in particular, named entities. We will demonstrate how to extend our WSDC method to mine the association of personal names, which is difficult to resolve with previous approaches. We design two experiments, link detection and named entity clustering, to evaluate the association of named entities. Given a named-entity set L, we define a link detection test to check whether any two named entities NEi and NEj (i≠j) in L have a relationship R, using the following three strategies.

• Direct Association: If the double-check frequency of NEi and NEj is larger than 0, i.e., f(NEj@NEi)>0 and f(NEi@NEj)>0, then the link detection test says "yes", i.e., NEi and NEj have a direct association. Otherwise, the test says "no". Figure 1(a) shows the direct association.

• Association Matrix: Compose an n×n binary matrix M=(mij), where mij=1 if f(NEj@NEi)>0 and f(NEi@NEj)>0; mij=0 if f(NEj@NEi)=0 or f(NEi@NEj)=0; and n is the total number of named entities in L. Let Mᵗ be the transpose of M. The matrix A=M×Mᵗ is an association matrix. The element aij of A means that in total aij common named entities are associated with both NEi and NEj directly. Figure 1(b) shows a one-layer indirect association; here, aij=3. We define NEi and NEj to have an indirect association if aij reaches a threshold θ; that is, NEi and NEj should associate with at least θ common named entities directly. The strategy of association matrix specifies: if aij ≥ θ, then the link detection test says "yes"; otherwise it says "no". In the example shown in Figure 1(b), NEi and NEj are indirectly associated when 0 < θ ≤ 3.

• Scalar Association Matrix: Compose a binary association matrix B from the association matrix A as: bij=1 if aij>0, and bij=0 if aij=0. The matrix S=B×Bᵗ is a scalar association matrix. NEi and NEj may each indirectly associate with a common named entity NEk. Figure 1(c) shows such a two-layer indirect association. The element sij = Σ_{k=1}^{n} bik × bkj denotes how many such NEk there are. In the example of Figure 1(c), two named entities indirectly associate with NEi and NEj at the same time. We define NEi and NEj to have an indirect association if sij reaches a threshold λ. In other words, if sij ≥ λ, then the link detection test says "yes"; otherwise it says "no".

To evaluate the performance of the above three strategies, we prepare a test set extracted from the dmoz web site (http://dmoz.org), the most comprehensive human-edited directory of the Web. The test data consist of three communities, actor, tennis player, and golfer, shown in Table 3. In total, 220 named entities are considered. The gold standard of the link detection test is: we compose 24,090 (=220×219/2) named entity pairs, and assign "yes" to those pairs belonging to the same community.

Category Path in dmoz.org                                         # of Person Names
Top: Sports: Golf: Golfers                                        10
Top: Sports: Tennis: Players: Female (+Male)                      90
Top: Arts: People: Image Galleries: Female (+Male): Individual    120

Table 3. Test Set for Association Evaluation of Named Entities

When collecting the values needed to compute the double-check frequencies for a named entity pair (NEi, NEj), i.e., f(NEj@NEi), f(NEi@NEj), f(NEi), and f(NEj), we consider the naming styles of persons. For example, "Alba, Jessica" has four possible writings: "Alba, Jessica", "Jessica Alba", "J. Alba" and "Alba, J."
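The three strategies above reduce to a few matrix products over the binary double-check matrix. The following sketch is ours, under stated assumptions: the input is a boolean matrix of double-check outcomes, and the threshold names θ (theta) and λ (lam) follow our notation rather than any fixed symbols in the text.

```python
def link_matrices(dc):
    """Build the matrices used by the three link-detection strategies.

    dc[i][j] is True iff the double check succeeds, i.e. f(NEj@NEi) > 0
    and f(NEi@NEj) > 0. Returns (M, A, S), where A = M x M^T counts named
    entities directly linked to both NEi and NEj, and S = B x B^T (with
    b_ij = 1 iff a_ij > 0) counts two-layer indirect links.
    """
    n = len(dc)
    m = [[1 if dc[i][j] else 0 for j in range(n)] for i in range(n)]
    a = [[sum(m[i][k] * m[j][k] for k in range(n)) for j in range(n)]
         for i in range(n)]
    b = [[1 if a[i][j] > 0 else 0 for j in range(n)] for i in range(n)]
    s = [[sum(b[i][k] * b[j][k] for k in range(n)) for j in range(n)]
         for i in range(n)]
    return m, a, s

def detect_link(m, a, s, i, j, strategy="direct", theta=1, lam=1):
    """Answer the link detection test for (NEi, NEj) under one strategy."""
    if strategy == "direct":
        return m[i][j] == 1
    if strategy == "matrix":      # one-layer indirect association
        return a[i][j] >= theta
    return s[i][j] >= lam         # scalar: two-layer indirect association
```

For instance, two entities that never pass the double check with each other can still be linked by the association matrix strategy if enough third entities pass it with both of them.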
We get the top N snippets for each naming style and filter out duplicate snippets, as well as snippets whose URLs include dmoz.org or google.com. Table 4 lists the experimental results of link detection on the test set. The precisions of the two baselines are: guessing all "yes" (46.45%) and guessing all "no" (53.55%). All three strategies are better than the two baselines, and performance improves as the number of snippets increases. The strategy of direct association shows that using double checks to measure the association of named entities works as well as it does for the association of common words. For the strategy of association matrix, the best performance, 90.14%, occurs with 900 snippets and θ=6. When a larger number of snippets is used, a larger threshold is necessary to achieve better performance. Figure 2(a) illustrates the relationship between precision and the threshold θ; the performance decreases when θ>6. The performance of the scalar association matrix strategy is better than that of the association matrix strategy for some θ and λ. Figure 2(b) shows the relationship between precision and the threshold λ for some numbers of snippets and values of θ.

#Snippets                  100        200        300        400         500         600         700         800          900
Direct Association         59.20%     62.86%     65.72%     67.88%      69.83%      71.35%      72.05%      72.46%       72.55%
Association Matrix         71.53%     79.95%     84.00%     86.08%      88.13%      89.67%      89.98%      90.09%       90.14%
                           (θ=1)      (θ=1)      (θ=2)      (θ=3)       (θ=4)       (θ=5)       (θ=5)       (θ=6)        (θ=6)
Scalar Association Matrix  73.93%     82.69%     86.70%     88.61%      90.90%      91.93%      91.90%      92.20%       92.35%
                           (θ=1,λ=6)  (θ=2,λ=9)  (θ=4,λ=9)  (θ=5,λ=10)  (θ=6,λ=12)  (θ=7,λ=12)  (θ=7,λ=18)  (θ=10,λ=16)  (θ=10,λ=18)

Table 4. Performance of Link Detection of Named Entities

Figure 2. (a) Performance of the association matrix strategy. (b) Performance of the scalar association matrix strategy (where θ is fixed, with values as given for the scalar association matrix in Table 4)

In the link detection test, we only consider the binary outcome of the double checks, i.e., f(NEj@NEi)>0 and f(NEi@NEj)>0, rather than utilizing the magnitudes of f(NEj@NEi) and f(NEi@NEj). Next, we employ the five formulas proposed in Section 2 to cluster named entities. The same data set as in the link detection test is adopted. An agglomerative average-link clustering algorithm is used to partition the given 220 named entities based on Formulas (1)-(5). Four-fold cross-validation is employed, and the B-CUBED metric (Bagga and Baldwin, 1998) is adopted to evaluate the clustering results. Table 5 summarizes the experimental results. CODC (Formula 5), which behaves the best in computing the association of common words, still achieves the better performance over different numbers of snippets in named entity clustering. The F-scores of the other formulas exceed 95% when more snippets are used to compute the double-check frequencies.

                     100      200      300      400      500      600      700      800      900
VariantDice      P   91.70%   88.71%   87.02%   87.49%   96.90%   100.00%  100.00%  100.00%  100.00%
                 R   55.80%   81.10%   87.70%   93.00%   89.67%   93.61%   94.42%   94.88%   94.88%
                 F   69.38%   84.73%   87.35%   90.16%   93.14%   96.69%   97.12%   97.37%   97.37%
VariantOverlap   P   99.13%   87.04%   85.35%   85.17%   88.16%   88.16%   88.16%   97.59%   98.33%
                 R   52.16%   81.10%   86.24%   93.45%   92.03%   93.64%   92.82%   90.82%   93.27%
                 F   68.35%   83.96%   85.79%   89.11%   90.05%   90.81%   90.43%   94.08%   95.73%
VariantJaccard   P   99.13%   97.59%   98.33%   95.42%   97.59%   88.16%   95.42%   100.00%  100.00%
                 R   55.80%   77.53%   84.91%   88.67%   87.18%   90.58%   88.67%   93.27%   91.64%
                 F   71.40%   86.41%   91.12%   91.92%   92.09%   89.35%   91.92%   96.51%   95.63%
VariantCosine    P   84.62%   97.59%   85.35%   85.17%   88.16%   88.16%   88.16%   98.33%   98.33%
                 R   56.22%   78.92%   86.48%   93.45%   92.03%   93.64%   93.64%   93.27%   93.27%
                 F   67.55%   87.26%   85.91%   89.11%   90.05%   90.81%   90.81%   95.73%   95.73%
CODC (α=0.15)    P   91.70%   87.04%   87.02%   95.93%   98.33%   95.93%   95.93%   94.25%   94.25%
                 R   55.80%   81.10%   90.73%   94.91%   94.91%   96.52%   98.24%   98.24%   98.24%
                 F   69.38%   83.96%   88.83%   95.41%   96.58%   96.22%   97.07%   96.20%   96.20%

Table 5.
Performance of Various Scoring Formulas on Named Entity Clustering

5 Disambiguation Using Association of Named Entities

This section demonstrates how to employ associations mined from the Web to resolve the ambiguities of named entities. Assume there are n named entities, NE1, NE2, ..., NEn, to be disambiguated. A named entity NEj has m accompanying names, called cue names hereafter: CNj1, CNj2, ..., CNjm. We have two alternatives for using the cue names. One is to use them directly, i.e., NEj is represented as a community of cue names, Community(NEj)={CNj1, CNj2, ..., CNjm}. The other is to expand the cue names CNj1, CNj2, ..., CNjm of NEj using web data as follows. Let CNj1 be an initial seed. Figure 3 sketches the concept of community expansion.

(1) Collection: We submit a seed to Google and select the top N returned snippets. Then we use suffix trees to extract possible patterns (Lin and Chen, 2006).

(2) Validation: We calculate the CODC score of each extracted pattern (denoted Bi) with the seed A. If CODC(A, Bi) is strong enough, i.e., larger than a threshold θ, we employ Bi as a new seed and repeat steps (1) and (2). This procedure stops when either the expected number of nodes has been collected or the maximum number of layers is reached.

(3) Union: The community initiated by the seed CNji is denoted by Community(CNji)={Bji^1, Bji^2, ..., Bji^k}, where each Bji^l is a new seed. The Cscore (community score) of Bji^l is the CODC score of Bji^l with its parent, divided by the layer at which it is located. We repeat the Collection and Validation steps until all the cue names CNji (1≤i≤m) of NEj are processed. Finally, we have Community(NEj) = ∪_{i=1}^{m} Community(CNji).
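The Collection/Validation/Union loop above can be sketched as a breadth-first expansion. Everything here is illustrative: the function names, the threshold θ, the node and layer limits, and the layer numbering are our assumptions; extract_patterns stands in for the suffix-tree pattern extraction over the seed's top-N snippets, and codc for Formula (5).

```python
def expand_community(seed, extract_patterns, codc, theta=0.3,
                     max_nodes=20, max_layers=3):
    """Sketch of the community expansion loop of Section 5.

    extract_patterns(name) -> candidate cue names from the top-N snippets
    codc(a, b)             -> CODC score between two names
    Returns a dict mapping each community member to its Cscore.
    """
    community = {seed: 1.0}      # name -> Cscore (the seed gets 1.0)
    frontier = [(seed, 1)]       # (name, layer); our layer numbering
    while frontier and len(community) < max_nodes:
        name, layer = frontier.pop(0)
        if layer > max_layers:
            continue
        for cand in extract_patterns(name):       # (1) Collection
            score = codc(name, cand)              # (2) Validation
            if score > theta and cand not in community:
                # Cscore: CODC with the parent divided by the layer.
                community[cand] = score / layer
                frontier.append((cand, layer + 1))
    return community             # (3) Union is taken over all cue names
```

A caller would union the dictionaries produced for all cue names CNj1, ..., CNjm of NEj to obtain Community(NEj).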
To evaluate personal name disambiguation, we prepare three corpora for the ambiguous name "王建民" (Chien-Ming Wang) from the United Daily News Knowledge Base (UDN), Google Taiwan (TW), and Google China (CN). Table 6 summarizes the statistics of the test data sets. In the UDN news data set, 37 different persons are mentioned. Of these, 13 different persons occur more than once. The most famous person is a pitcher of the New York Yankees, who accounts for 94.29% of the 2,205 documents. In the TW and CN web data sets, there are 24 and 107 different persons, respectively. The majority in the TW data set is still the New York Yankees' "Chien-Ming Wang": he appears in 331 web pages, occupying 88.03%. Comparatively, the majority in the CN data set is a research fellow of the Chinese Academy of Social Sciences, and he occupies only 18.29% of the 421 web pages; in total, 36 different "Chien-Ming Wang"s occur more than once. Thus, CN is an unbiased corpus.

Figure 3. A Community for the Seed "王建民" (Chien-Ming Wang)

In a cascaded personal name disambiguation system (Wei, 2006), the association of named entities is used together with other cues such as titles and common terms. Assume k clusters, c1, c2, ..., ck, have been formed using the title cue, and we try to place NE1, NE2, ..., NEl into a suitable cluster. The cluster ĉ is selected by the similarity measure defined below.

                               UDN     TW      CN
# of documents                 2,205   376     421
# of persons                   37      24      107
# of persons of occurrences>1  13      9       36
Majority                       94.29%  88.03%  18.29%

Table 6.
Statistics of Test Corpora

  score(NEj, cq) = (1/r) × Σ_{i=1}^{s} count(pni) × Cscore(pni)                       (9)

  ĉ = argmax_{cq, 1≤q≤k} score(NEj, cq)                                               (10)

where pn1, pn2, ..., pns are the names that appear in both Community(NEj) and Community(cq); count(pni) is the total occurrences of pni in Community(cq); r is the total occurrences of names in Community(NEj); and Cscore(pni) is the community score of pni. If score(NEj, ĉ) is larger than a threshold, then NEj is placed into cluster ĉ. In other words, NEj denotes the same person as the entities in ĉ, and we let the new Community(ĉ) be the old Community(ĉ) ∪ {CNj1, CNj2, ..., CNjm}. Otherwise, NEj is left undecided.

      UDN                              TW                               CN
      P        R        F              P        R         F             P         R        F
M1    0.9742   0.9800   0.9771        0.8760   0.6207    0.7266        0.4910    0.8049   0.6111
M2    0.9674   0.9677   0.9675        0.8786   0.7287    0.7967        0.5982    0.8378   0.6980
      (-0.70%) (-1.26%) (-0.98%)      (+0.07%) (+17.40%) (+9.65%)      (+21.83%) (+4.09%) (+14.22%)

Table 7. Disambiguation without/with Community Expansion (parentheses: change relative to M1)

Table 7 shows the performance of a personal name disambiguation system without (M1) and with (M2) community expansion. In the news data set (UDN), M1 is a little better than M2: compared to M1, M2 decreases the F-score by 0.98%. In contrast, in the two web data sets (TW and CN), M2 is much better than M1, with 9.65% and 14.22% increases in F-score over M1. This shows that mining the association of named entities from the Web is very useful for disambiguating ambiguous names. The application also indirectly confirms the effectiveness of the proposed association measures.

6 Concluding Remarks

This paper introduces five novel association measures based on a web search with double checking (WSDC) model. In the experiments on the association of common words, the Co-Occurrence Double-Check (CODC) measure competes with a model trained from WordNet. In the experiments on the association of named entities, which is hard to handle using WordNet, the WSDC model demonstrates its usefulness. The strategies of direct association, association matrix, and scalar association matrix detect the link between two named entities. The experiments verify that the double-check frequencies are reliable. A further study on named entity clustering shows that the five measures, VariantDice, VariantOverlap, VariantJaccard, VariantCosine, and CODC, are quite useful. In particular, CODC is very stable in both the word-word and name-name experiments. Finally, the WSDC model is used to expand community chains for a specific personal name, and CODC measures the association between a community member and the personal name. The application to personal name disambiguation shows 9.65% and 14.22% increases compared to the system without community expansion.

Acknowledgements

Research of this paper was partially supported by National Science Council, Taiwan, under the contracts 94-2752-E-001-001-PAE and 95-2752-E-001-001-PAE.

References

A. Bagga and B. Baldwin. 1998. Entity-Based Cross-Document Coreferencing Using the Vector Space Model. Proceedings of 36th COLING-ACL Conference, 79-85.

F. Keller and M. Lapata. 2003. Using the Web to Obtain Frequencies for Unseen Bigrams. Computational Linguistics, 29(3): 459-484.

Y. Li, Z.A. Bandar and D. McLean. 2003. An Approach for Measuring Semantic Similarity between Words Using Multiple Information Sources. IEEE Transactions on Knowledge and Data Engineering, 15(4): 871-882.

D. Lin. 1998. An Information-Theoretic Definition of Similarity. Proceedings of the Fifteenth International Conference on Machine Learning, 296-304.

H.C. Lin and H.H. Chen. 2004. Comparing Corpus-based Statistics and Web-based Statistics: Chinese Segmentation as an Example. Proceedings of 16th ROCLING Conference, 89-100.

M.S. Lin, C.P. Chen and H.H. Chen. 2005. An Approach of Using the Web as a Live Corpus for Spoken Transliteration Name Access. Proceedings of 17th ROCLING Conference, 361-370.

M.S. Lin and H.H. Chen. 2006. Constructing a Named Entity Ontology from Web Corpora. Proceedings of 5th International Conference on Language Resources and Evaluation.

Y. Matsuo, H. Tomobe, K. Hasida, and M. Ishizuka. 2004. Finding Social Network for Trust Calculation. Proceedings of 16th European Conference on Artificial Intelligence, 510-514.

P. Resnik. 1995. Using Information Content to Evaluate Semantic Similarity in a Taxonomy. Proceedings of the 14th International Joint Conference on Artificial Intelligence, 448-453.

P. Resnik and N.A. Smith. 2003. The Web as a Parallel Corpus. Computational Linguistics, 29(3): 349-380.

M.A. Rodríguez and M.J. Egenhofer. 2003. Determining Semantic Similarity among Entity Classes from Different Ontologies. IEEE Transactions on Knowledge and Data Engineering, 15(2): 442-456.

H. Rubenstein and J.B. Goodenough. 1965. Contextual Correlates of Synonymy. Communications of the ACM, 8(10): 627-633.

Y.C. Wei. 2006. A Study of Personal Name Disambiguation. Master Thesis, Department of Computer Science and Information Engineering, National Taiwan University, Taiwan.