Learning Phoneme Mappings for Transliteration without Parallel Data

Sujith Ravi and Kevin Knight
University of Southern California
Information Sciences Institute
Marina del Rey, California 90292
{sravi,knight}@isi.edu

Abstract

We present a method for performing machine transliteration without any parallel resources. We frame the transliteration task as a decipherment problem and show that it is possible to learn cross-language phoneme mapping tables using only monolingual resources. We compare various methods and evaluate their accuracies on a standard name transliteration task.

1 Introduction

Transliteration refers to the transport of names and terms between languages with different writing systems and phoneme inventories. Recently there has been a large amount of interesting work in this area, and the literature has outgrown being citable in its entirety. Much of this work focuses on back-transliteration, which tries to restore a name or term that has been transported into a foreign language. Here, there is often only one correct target spelling--for example, given jyon.kairu (the name of a U.S. Senator transported to Japanese), we must output "Jon Kyl", not "John Kyre" or any other variation.

There are many techniques for transliteration and back-transliteration, and they vary along a number of dimensions:

· phoneme substitution vs. character substitution
· heuristic vs. generative vs. discriminative models
· manual vs. automatic knowledge acquisition

We explore the third dimension, where we see several techniques in use:

· Manually-constructed transliteration models, e.g., (Hermjakob et al., 2008).
· Models constructed from bilingual dictionaries of terms and names, e.g., (Knight and Graehl, 1998; Huang et al., 2004; Haizhou et al., 2004; Zelenko and Aone, 2006; Yoon et al., 2007; Li et al., 2007; Karimi et al., 2007; Sherif and Kondrak, 2007b; Goldwasser and Roth, 2008b).
· Extraction of parallel examples from bilingual corpora, using bootstrap dictionaries, e.g., (Sherif and Kondrak, 2007a; Goldwasser and Roth, 2008a).
· Extraction of parallel examples from comparable corpora, using bootstrap dictionaries, and temporal and word co-occurrence, e.g., (Sproat et al., 2006; Klementiev and Roth, 2008).
· Extraction of parallel examples from web queries, using bootstrap dictionaries, e.g., (Nagata et al., 2001; Oh and Isahara, 2006; Kuo et al., 2006; Wu and Chang, 2007).
· Comparing terms from different languages in phonetic space, e.g., (Tao et al., 2006; Goldberg and Elhadad, 2008).

In this paper, we investigate methods to acquire transliteration mappings from non-parallel sources. We are inspired by previous work in unsupervised learning for natural language, e.g. (Yarowsky, 1995; Goldwater and Griffiths, 2007), and we are also inspired by cryptanalysis--we view a corpus of foreign terms as a code for English, and we attempt to break the code.

[Figure 1 omitted: a four-stage cascade. WFSA A produces an English word sequence, e.g. ( SPENCER ABRAHAM ); WFST B maps it to an English sound sequence ( S P EH N S ER EY B R AH HH AE M ); WFST C maps that to a Japanese sound sequence ( S U P E N S A A E E B U R A H A M U ); WFST D writes it out as a Japanese katakana sequence.]
Figure 1: Model used for back-transliteration of Japanese katakana names and terms into English. The model employs a four-stage cascade of weighted finite-state transducers (Knight and Graehl, 1998).
2 Background

We follow (Knight and Graehl, 1998) in tackling back-transliteration of Japanese katakana expressions into English. Knight and Graehl (1998) developed a four-stage cascade of finite-state transducers, shown in Figure 1.

· WFSA A - produces an English word sequence w with probability P(w) (based on a unigram word model).
· WFST B - generates an English phoneme sequence e corresponding to w with probability P(e|w).
· WFST C - transforms the English phoneme sequence into a Japanese phoneme sequence j according to a model P(j|e).
· WFST D - writes out the Japanese phoneme sequence into Japanese katakana characters according to a model P(k|j).

Using the cascade in the reverse (noisy-channel) direction, they are able to translate new katakana names and terms into English. They report 36% error in translating 100 U.S. Senators' names, and they report exceeding human transliteration performance in the presence of optical scanning noise.

The only transducer that requires parallel training data is WFST C. Knight and Graehl (1998) take several thousand phoneme string pairs, automatically align them with the EM algorithm (Dempster et al., 1977), and construct WFST C from the aligned phoneme pieces.

We re-implement their basic method by instantiating a densely-connected version of WFST C with all 1-to-1 and 1-to-2 phoneme connections between English and Japanese. Phoneme bigrams that occur fewer than 10 times in a Japanese corpus are omitted, and we omit 1-to-3 connections. This initial WFST C model has 15320 uniformly weighted parameters. We then train the model on 3343 phoneme string pairs from a bilingual dictionary, using the EM algorithm. EM immediately reduces the connections in the model to those actually observed in the parallel data, and after 14 iterations, there are only 188 connections left with P(j|e) ≥ 0.01. Figure 2 shows the phonemic substitution table learnt from parallel training.

We use this trained WFST C model and apply it to the U.S. Senator name transliteration task (which we update to the 2008 roster). We obtain 40% error, roughly matching the performance observed in (Knight and Graehl, 1998).
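To make the re-implementation concrete, here is a minimal sketch (ours, not the authors' code) of how such a densely connected initial WFST C can be instantiated as a substitution table: all 1-to-1 and 1-to-2 phoneme connections, uniformly weighted, with rare Japanese phoneme bigrams filtered out. The phoneme inventories below are illustrative subsets, not the full English and Japanese sets.

```python
# A minimal sketch (assumption: a dict-of-dicts substitution table rather
# than an actual transducer) of the initial, densely connected WFST C.
from collections import Counter

ENGLISH_PHONEMES = ["AA", "B", "D", "EH", "K", "N", "R", "S"]           # toy subset
JAPANESE_PHONEMES = ["a", "e", "o", "u", "b", "d", "k", "n", "r", "s"]  # toy subset

def initial_wfst_c(japanese_corpus, min_bigram_count=10):
    """Return uniform P(j|e) over all 1-to-1 and 1-to-2 connections.
    Japanese bigrams seen fewer than min_bigram_count times are omitted,
    as are all 1-to-3 connections, mirroring the setup in the text."""
    bigrams = Counter()
    for seq in japanese_corpus:                  # seq: list of phonemes
        bigrams.update(zip(seq, seq[1:]))
    targets = list(JAPANESE_PHONEMES)            # 1-to-1 targets
    targets += [" ".join(bg) for bg, c in bigrams.items() if c >= min_bigram_count]
    # Uniform weights; EM training later concentrates the probability mass.
    return {e: {j: 1.0 / len(targets) for j in targets} for e in ENGLISH_PHONEMES}
```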
[Figure 2 table omitted: for each English phoneme, its learnt Japanese substitutions and probabilities, e.g. AA → o (0.49), a (0.46), oo (0.02), aa (0.02); L → r (0.62), ru (0.37); N → n (0.96), nn (0.02); PAUSE → pause (1.0).]
Figure 2: Phonemic substitution table learnt from 3343 parallel English/Japanese phoneme string pairs. English phonemes are in uppercase, Japanese in lowercase. Mappings with P(j|e) > 0.01 are shown.

3 Task and Data

The task of this paper is to learn the mappings in Figure 2, but without parallel data, and to test those mappings in end-to-end transliteration. We imagine our problem as one faced by a monolingual English speaker wandering around Japan, reading a multitude of katakana signs, listening to people speak Japanese, and eventually deciphering those signs into English. To mis-quote Warren Weaver: "When I look at a corpus of Japanese katakana, I say to myself, this is really written in English, but it has been coded in some strange symbols. I will now proceed to decode."

Our larger motivation is to move toward easily-built transliteration systems for all language pairs, regardless of parallel resources. While Japanese/English transliteration has its own particular features, we believe it is a reasonable starting point.

Our monolingual resources are:

1. 43717 unique Japanese katakana sequences collected from web newspaper data. We split multi-word katakana phrases on the center-dot ("·") character, and select a final corpus of 9350 unique sequences. We add monolingual Japanese versions of the 2008 U.S. Senate roster.¹
2. The CMU pronunciation dictionary of English, with 112,151 entries.
3. The English gigaword corpus. Knight and Graehl (1998) already use frequently-occurring capitalized words to build the WFSA A component of their four-stage cascade.

¹ We use "open" EM testing, in which unlabeled test data is allowed to be part of unsupervised training. However, no parallel data is allowed.

We seek to use our English knowledge (derived from 2 and 3) to decipher the Japanese katakana corpus (1) into English. Figure 3 shows a portion of the Japanese corpus, which we transform into Japanese phoneme sequences using the monolingual resource of WFST D.

[Figure 3 examples omitted: e.g. ( CH E N J I ), ( N E B A D A ), ( P I I T A A ), ( Z E R O ), ( Z O N B I I Z U ).]
Figure 3: Some Japanese phoneme sequences generated from the monolingual katakana corpus using WFST D.
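As a toy illustration of this corpus preparation, the sketch below splits a katakana phrase on the center-dot and phonemizes it character by character. The tiny mapping table is a hypothetical stand-in for WFST D; real katakana (long-vowel marks, gemination, small-kana combinations) requires the full transducer.

```python
# Toy preprocessing sketch: split on the center-dot, then phonemize each
# katakana character via a hand-written table. KATAKANA_TO_PHONEMES is a
# hypothetical stand-in for WFST D and covers only these example characters.
KATAKANA_TO_PHONEMES = {
    "ネ": "n e", "バ": "b a", "ダ": "d a",   # NEBADA (Nevada)
    "ゼ": "z e", "ロ": "r o",               # ZERO
}

def katakana_to_phoneme_seqs(phrase):
    """Yield one Japanese phoneme sequence per center-dot-separated word."""
    for word in phrase.split("·"):
        yield " ".join(KATAKANA_TO_PHONEMES[ch] for ch in word)

print(list(katakana_to_phoneme_seqs("ネバダ·ゼロ")))
# -> ['n e b a d a', 'z e r o']
```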
We note that the Japanese phoneme inventory contains 39 unique ("ciphertext") symbols, compared to the 40 English ("plaintext") phonemes.

Our goal is to compare and evaluate the WFST C model learnt under two different scenarios--(a) using parallel data, and (b) using monolingual data. For each experiment, we train only the WFST C model and then apply it to the name transliteration task--decoding 100 U.S. Senator names from Japanese to English using the automata shown in Figure 1. For all experiments, we keep the rest of the models in the cascade (WFSA A, WFST B, and WFST D) unchanged. We evaluate on whole-name error-rate (maximum of 100/100) as well as normalized word edit distance, which gives partial credit for getting the first or last name correct.

4 Acquiring Phoneme Mappings from Non-Parallel Data

Our main data consists of 9350 unique Japanese phoneme sequences, which we can consider as a single long sequence j. As suggested by Knight et al. (2006), we explain the existence of j as the result of someone initially producing a long English phoneme sequence e, according to P(e), then transforming it into j, according to P(j|e). The probability of our observed data P(j) can be written as:

    P(j) = Σ_e P(e) · P(j|e)

We take P(e) to be some fixed model of monolingual English phoneme production, represented as a weighted finite-state acceptor (WFSA). P(j|e) is implemented as the initial, uniformly-weighted WFST C described in Section 2, with 15320 phonemic connections.

We next maximize P(j) by manipulating the substitution table P(j|e), aiming to produce a result such as shown in Figure 2. We accomplish this by composing the English phoneme model P(e) WFSA with the P(j|e) transducer. We then use the EM algorithm to train just the P(j|e) parameters (inside the composition that predicts j), and guess the values for the individual phonemic substitutions that maximize the likelihood of the observed data P(j). In our experiments, we use the Carmel finite-state transducer package (Graehl, 1997), a toolkit with an algorithm for EM training of weighted finite-state transducers.

We allow EM to run until the P(j) likelihood ratio between subsequent training iterations reaches 0.9999, and we terminate early if 200 iterations are reached. Finally, we decode our test set of U.S. Senator names. Following Knight et al. (2006), we stretch out the P(j|e) model probabilities after decipherment training and prior to decoding our test set, by cubing their values.

Decipherment under the conditions of transliteration is substantially more difficult than solving letter-substitution ciphers (Knight et al., 2006; Ravi and Knight, 2008; Ravi and Knight, 2009) or phoneme-substitution ciphers (Knight and Yamada, 1999). This is because the target table contains significant non-determinism, and because each symbol has multiple possible fertilities, which introduces uncertainty about the length of the target string.

4.1 Baseline P(e) Model

Clearly, we can design P(e) in a number of ways. We might expect that the more the system knows about English, the better it will be able to decipher the Japanese. Our baseline P(e) is a 2-gram phoneme model trained on phoneme sequences from the CMU dictionary. The second row (2a) in Figure 4 shows results when we decipher with this fixed P(e). This approach performs poorly and gets all the Senator names wrong.
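The sketch below illustrates the decipherment training loop under a strong simplification: a 1-to-1 substitution channel, whereas the actual WFST C also allows 1-to-2 fertilities (precisely the source of the extra difficulty noted above). With a fixed bigram P(e), training reduces to Baum-Welch re-estimation of the emission table; all names and data structures here are ours, and the paper itself uses Carmel rather than hand-rolled EM.

```python
# Decipherment EM sketch, assuming a 1-to-1 substitution channel (the
# paper's channel also allows 1-to-2 fertilities, which this omits).
# With transitions fixed to the bigram P(e), this is Baum-Welch
# re-estimation of the emission table P(j|e) only.
from collections import defaultdict

def em_decipher(j_seqs, e_phonemes, bigram, channel, iterations=20):
    """j_seqs: lists of Japanese phonemes; bigram[(e1, e2)] = P(e2|e1),
    with '<s>' marking sequence start; channel[e][j] = P(j|e)."""
    for _ in range(iterations):
        counts = defaultdict(lambda: defaultdict(float))
        for j in j_seqs:
            n = len(j)
            # Forward pass: alpha[t][e] = P(j[0..t], E_t = e)
            alpha = [{} for _ in range(n)]
            for e in e_phonemes:
                alpha[0][e] = bigram.get(("<s>", e), 0.0) * channel[e].get(j[0], 0.0)
            for t in range(1, n):
                for e in e_phonemes:
                    s = sum(alpha[t - 1][ep] * bigram.get((ep, e), 0.0)
                            for ep in e_phonemes)
                    alpha[t][e] = s * channel[e].get(j[t], 0.0)
            # Backward pass: beta[t][e] = P(j[t+1..] | E_t = e)
            beta = [{} for _ in range(n)]
            for e in e_phonemes:
                beta[n - 1][e] = 1.0
            for t in range(n - 2, -1, -1):
                for e in e_phonemes:
                    beta[t][e] = sum(bigram.get((e, en), 0.0) *
                                     channel[en].get(j[t + 1], 0.0) * beta[t + 1][en]
                                     for en in e_phonemes)
            z = sum(alpha[n - 1][e] for e in e_phonemes)  # likelihood of j
            if z == 0.0:
                continue
            # E-step: expected substitution counts (a real implementation
            # would rescale or work in log space to avoid underflow).
            for t in range(n):
                for e in e_phonemes:
                    counts[e][j[t]] += alpha[t][e] * beta[t][e] / z
        # M-step: renormalize the channel.
        for e in e_phonemes:
            total = sum(counts[e].values())
            if total > 0.0:
                channel[e] = {jp: c / total for jp, c in counts[e].items()}
    # Following the paper, stretch the learnt probabilities by cubing
    # them before decoding (the scores need not stay normalized).
    return {e: {jp: p ** 3 for jp, p in m.items()} for e, m in channel.items()}
```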
4.2 Consonant Parity

When training under non-parallel conditions, we find that we would like to keep our WFST C model small, rather than instantiating a fully-connected model. In the supervised case, parallel training allows the trained model to retain only those connections which were observed from the data, and this helps eliminate many bad connections from the model. In the unsupervised case, there is no parallel data available to help us make the right choices. We therefore use prior knowledge and place a consonant-parity constraint on the WFST C model. Prior to EM training, we throw out any mapping from the P(j|e) substitution model that does not have the same number of English and Japanese consonant phonemes. This is a pattern that we observe across a range of transliteration tasks. Here are examples of mappings where consonant parity is violated:

  K => a      EH => s a
  N => e e    EY => n

Modifying the WFST C in this way leads to better decipherment tables and slightly better results for the U.S. Senator task. Normalized edit distance drops from 100 to just under 90 (row 2b in Figure 4).

  Phonemic Substitution Model                                     whole-name error   norm. edit distance
  1   e→j = {1-to-1, 1-to-2} + EM aligned with parallel data             40                 25.9
  2a  e→j = {1-to-1, 1-to-2} + decipherment training with
      2-gram English P(e)                                               100                100.0
  2b  e→j = {1-to-1, 1-to-2} + decipherment training with
      2-gram English P(e) + consonant-parity                             98                 89.8
  2c  e→j = {1-to-1, 1-to-2} + decipherment training with
      3-gram English P(e) + consonant-parity                             94                 73.6
  2d  e→j = {1-to-1, 1-to-2} + decipherment training with
      a word-based English model + consonant-parity                      77                 57.2
  2e  e→j = {1-to-1, 1-to-2} + decipherment training with
      a word-based English model + consonant-parity + initialize
      mappings having consonant matches with higher probability
      weights                                                            73                 54.2

Figure 4: Results on name transliteration obtained when using the phonemic substitution model trained under different scenarios--(1) parallel training data, (2a-e) using only monolingual resources.

4.3 Better English Models

Row 2c in Figure 4 shows decipherment results when we move to a 3-gram English phoneme model for P(e). We notice considerable improvements in accuracy. On the U.S. Senator task, normalized edit distance drops from 89.8 to 73.6, and whole-name error decreases from 98 to 94.

When we analyze the results from deciphering with a 3-gram P(e) model, we find that many of the Japanese phoneme test sequences are decoded into English phoneme sequences (such as "IH K R IH N" and "AE G M AH N") that are not valid words. This happens because the models we used for decipherment so far have no knowledge of what constitutes a globally valid English sequence. To help the phonemic substitution model learn this information automatically, we build a word-based P(e) from English phoneme sequences in the CMU dictionary and use this model for decipherment training. The word-based model produces complete English phoneme sequences corresponding to 76,152 actual English words from the CMU dictionary. The English phoneme sequences are represented as paths through a WFSA, and all paths are weighted equally. We represent the word-based model in compact form, using determinization and minimization techniques applicable to weighted finite-state automata. This allows us to perform efficient EM training on the cascade of P(e) and P(j|e) models. Under this scheme, English phoneme sequences resulting from decipherment are always analyzable into actual words.
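As a rough picture of the word-based P(e), here is a toy acceptor over CMU-style phoneme sequences built as a prefix trie; each dictionary entry is one equally weighted accepting path. Prefix sharing corresponds to determinization, while suffix sharing (minimization) is omitted, and the two entries stand in for the dictionary's 76,152 sequences.

```python
# A toy word-based P(e) acceptor, assuming a prefix trie over phoneme
# sequences; only sequences that spell out actual words are accepted.
class PhonemeTrie:
    END = "<end>"

    def __init__(self):
        self.root = {}

    def add(self, phonemes):
        node = self.root
        for p in phonemes:
            node = node.setdefault(p, {})
        node[self.END] = True

    def accepts(self, phonemes):
        node = self.root
        for p in phonemes:
            if p not in node:
                return False
            node = node[p]
        return self.END in node

trie = PhonemeTrie()
for entry in ["S P EH N S ER", "EY B R AH HH AE M"]:  # stand-ins for CMU entries
    trie.add(entry.split())
print(trie.accepts("S P EH N S ER".split()))  # True: a real word's pronunciation
print(trie.accepts("IH K R IH N".split()))    # False: not analyzable into a word
```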
Row 2d in Figure 4 shows the results we obtain when training our WFST C with a word-based English phoneme model. Using the word-based model produces the best result so far on the phonemic substitution task with non-parallel data. On the U.S. Senator task, word-based decipherment outperforms the other methods by a large margin. It gets 23 out of 100 Senator names exactly right, with a much lower normalized edit distance (57.2). We have managed to achieve this performance using only monolingual data. This also puts us within reach of the parallel-trained system's performance (40% whole-name errors, and 25.9 word edit distance error) without using a single English/Japanese pair for training.

[Figure 5 table omitted: for each English phoneme, its learnt Japanese substitutions and probabilities, e.g. AA → a (0.37), o (0.25), i (0.15), u (0.08), e (0.07); EH → e (0.37), a (0.24), o (0.12), i (0.12).]
Figure 5: Phonemic substitution table learnt from non-parallel corpora. For each English phoneme, only the top ten mappings with P(j|e) > 0.01 are shown.
To summarize, the quality of the English phoneme model used in decipherment training has a large effect on the learnt P(j|e) phonemic substitution table (i.e., probabilities for the various phoneme mappings within the WFST C model), which in turn affects the quality of the back-transliterated English output produced when decoding Japanese.

Figure 5 shows the phonemic substitution table learnt using word-based decipherment. The mappings are reasonable, given the lack of parallel data. They are not entirely correct--for example, the mapping "S → s u" is there, but "S → s" is missing. Sample end-to-end transliterations are illustrated in Figure 6. The figure shows how the transliteration results from non-parallel training improve steadily as we use stronger decipherment techniques. We note that in one case (LAUTENBERG), the decipherment mapping table leads to a correct answer where the mapping table derived from parallel data does not. Because parallel data is limited, it may not contain all of the necessary mappings.

4.4 Size of Japanese Training Data

Monolingual corpora are more easily available than parallel corpora, so we can use increasing amounts of monolingual Japanese training data during decipherment training. The table below shows that using more Japanese training data produces better transliteration results when deciphering with the word-based English model.

  Japanese training data       Error on name transliteration task
  (# of phoneme sequences)     whole-name error   norm. edit distance
  4,674                              87                 69.7
  9,350                              77                 57.2

[Figure 6 omitted: for each of the 100 U.S. Senator names (e.g. SPENCER ABRAHAM, FRANK LAUTENBERG), the correct answer, the parallel-trained answer, and the answers produced by decipherment Methods 1-3.]
Figure 6: Results for end-to-end name transliteration. This figure shows the correct answer, the answer obtained by training mappings on parallel data (Knight and Graehl, 1998), and various answers obtained by deciphering non-parallel data. Method 1 uses a 2-gram P(e), Method 2 uses a 3-gram P(e), and Method 3 uses a word-based P(e).
4.5 P(j|e) Initialization

So far, the P(j|e) connections within the WFST C model were initialized with uniform weights prior to EM training. It is well known that the EM algorithm does not necessarily find a global optimum of the given objective function. If the search space is bumpy and non-convex, as is the case in our problem, EM can get stuck in any of the local optima, depending on what weights were used to initialize the search. Different sets of initialization weights can lead to different convergence points during EM training; in other words, depending on how the P(j|e) probabilities are initialized, the final P(j|e) substitution table learnt by EM can vary.

We can use some prior knowledge to initialize the probability weights in our WFST C model, so as to give EM a good starting point to work with. Instead of using uniform weights, in the P(j|e) model we set higher weights for the mappings where the English and Japanese sounds share common consonant phonemes. For example, mappings such as:

  N => n      D => d
  N => a n    D => d o

are weighted X (a constant) times higher than other mappings such as:

  N => b      D => b
  N => r      EY => a a

in the P(j|e) model. In our experiments, we set the value X to 100.

Initializing the WFST C in this way results in EM learning better substitution tables and yields slightly better results for the Senator task. Normalized edit distance drops from 57.2 to 54.2, and the whole-name error is also reduced from 77% to 73% (row 2e in Figure 4).
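A sketch of what such an initialization might look like, combining the Section 4.2 parity filter with the X = 100 boost. The consonant inventories and the shared-consonant test (matching first letters) are simplified stand-ins of our own, not the paper's exact definitions.

```python
# Sketch of biased initialization: drop mappings that violate consonant
# parity, then seed consonant-matching mappings with X = 100 times the
# weight of the rest.
ENG_CONSONANTS = {"B", "D", "K", "L", "M", "N", "P", "R", "S", "T"}
JAP_CONSONANTS = {"b", "d", "k", "m", "n", "p", "r", "s", "t"}

def consonant_parity_ok(e, j_tokens):
    n_e = 1 if e in ENG_CONSONANTS else 0
    n_j = sum(1 for t in j_tokens if t[0] in JAP_CONSONANTS)
    return n_e == n_j

def initialize_channel(english, japanese_targets, X=100.0):
    channel = {}
    for e in english:
        weights = {}
        for j in japanese_targets:
            toks = j.split()
            if not consonant_parity_ok(e, toks):
                continue  # violates consonant parity: thrown out before EM
            # Boost mappings sharing a consonant sound, e.g. N => n, D => d o.
            shared = e in ENG_CONSONANTS and any(t[0] == e[0].lower() for t in toks)
            weights[j] = X if shared else 1.0
        total = sum(weights.values())
        channel[e] = {j: w / total for j, w in weights.items()} if total else {}
    return channel

init = initialize_channel(["N", "D"], ["n", "a n", "b", "d o", "r", "a"])
# "N" => "n" and "D" => "d o" start 100x heavier than "N" => "b" etc.;
# "N" => "a" is dropped for violating consonant parity.
```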
4.6 Size of English Training Data

We saw earlier (in Section 4.4) that using more monolingual Japanese training data yields improvements in decipherment results. Similarly, we hypothesize that using more monolingual English data can drive the decipherment towards better transliteration results. On the English side, we build different word-based P(e) models, each trained on different amounts of data (English phoneme sequences from the CMU dictionary). The table below shows that deciphering with a word-based English model built from more data produces better transliteration results.

  English training data        Error on name transliteration task
  (# of phoneme sequences)     whole-name error   norm. edit distance
  76,152                             73                 54.2
  97,912                             66                 49.3

This yields the best transliteration results on the Senator task with non-parallel data, getting 34 out of 100 Senator names exactly right.

4.7 Re-ranking Results Using the Web

It is possible to improve our results on the U.S. Senator task further using external monolingual resources. Web counts are frequently used to automatically re-rank candidate lists for various NLP tasks (Al-Onaizan and Knight, 2002). We extract the top 10 English candidates produced by our word-based decipherment method for each Japanese test name. Using a search engine, we query the entire English name (first and last name) corresponding to each candidate, and collect search result counts. We then re-rank the candidates using the collected Web counts and pick the most frequent candidate as our choice. For example, France Murkowski gets only 1 hit on Google, whereas Frank Murkowski gets 135,000 hits.

Re-ranking the results in this manner lowers the whole-name error on the Senator task from 66% to 61%, and also lowers the normalized edit distance from 49.3 to 48.8. However, we do note that re-ranking using Web counts produces similar improvements in the case of parallel training as well, lowering the whole-name error from 40% to 24%. So the re-ranking idea, which is simple and requires only monolingual resources, is a useful strategy to apply at the end of transliteration experiments (during decoding), and can result in further gains in final transliteration performance.
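The re-ranking step itself is small; below is a sketch that assumes a hypothetical hit_count helper in place of a real search-engine API (the paper does not specify one), with toy counts mirroring the Murkowski example.

```python
# Web-count re-ranking sketch; hit_count is a hypothetical helper standing
# in for a search-engine query (no particular API is implied).
def rerank_by_web_counts(candidates, hit_count):
    """Re-rank the top-10 decipherment candidates by web frequency."""
    return sorted(candidates, key=hit_count, reverse=True)

# Toy counts mirroring the example above: the re-ranker promotes the
# correct "FRANK MURKOWSKI" over the decoded "FRANCE MURKOWSKI".
toy_counts = {"FRANCE MURKOWSKI": 1, "FRANK MURKOWSKI": 135000}
print(rerank_by_web_counts(list(toy_counts), toy_counts.get)[0])
# -> FRANK MURKOWSKI
```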
5 Comparable versus Non-Parallel Corpora

We also present decipherment results when using comparable corpora for training the WFST C model. We use English and Japanese phoneme sequences derived from a parallel corpus containing 2,683 phoneme sequence pairs to construct comparable corpora (such that for each Japanese phoneme sequence, the correct back-transliterated phoneme sequence is present somewhere in the English data) and apply the same decipherment strategy using a word-based English model. The table below compares the transliteration results for the U.S. Senator task when using comparable versus non-parallel data for decipherment training. While training on comparable corpora does have benefits and reduces the whole-name error to 59% on the Senator task, it is encouraging to see that our best decipherment results using only non-parallel data come close (66% error).

  English/Japanese Corpora                 Error on name transliteration task
  (# of phoneme sequences)                 whole-name error   norm. edit distance
  Comparable Corpora
  (English = 2,608, Japanese = 2,455)            59                 41.8
  Non-Parallel Corpora
  (English = 98,000, Japanese = 9,350)           66                 49.3

6 Conclusion

We have presented a method for attacking machine transliteration problems without parallel data. We developed phonemic substitution tables trained using only monolingual resources and demonstrated their performance in an end-to-end name transliteration task. We showed that consistent improvements in transliteration performance are possible with the use of strong decipherment techniques, and our best system achieves significant improvements over the baseline system. In future work, we would like to develop more powerful decipherment models and techniques, and we would like to harness the information available from a wide variety of monolingual resources and use it to further narrow the gap between parallel-trained and non-parallel-trained approaches.

7 Acknowledgements

This research was supported by the Defense Advanced Research Projects Agency under SRI International's prime Contract Number NBCHD040058.

References

Y. Al-Onaizan and K. Knight. 2002. Translating named entities using monolingual and bilingual resources. In Proc. of ACL.

A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1-38.

Y. Goldberg and M. Elhadad. 2008. Identification of transliterated foreign words in Hebrew script. In Proc. of CICLing.

D. Goldwasser and D. Roth. 2008a. Active sample selection for named entity transliteration. In Proc. of ACL/HLT Short Papers.

D. Goldwasser and D. Roth. 2008b. Transliteration as constrained optimization. In Proc. of EMNLP.

S. Goldwater and T. L. Griffiths. 2007. A fully Bayesian approach to unsupervised part-of-speech tagging. In Proc. of ACL.

J. Graehl. 1997. Carmel finite-state toolkit. http://www.isi.edu/licensed-sw/carmel.

L. Haizhou, Z. Min, and S. Jian. 2004. A joint source-channel model for machine transliteration. In Proc. of ACL.

U. Hermjakob, K. Knight, and H. Daume. 2008. Name translation in statistical machine translation--learning when to transliterate. In Proc. of ACL/HLT.

F. Huang, S. Vogel, and A. Waibel. 2004. Improving named entity translation combining phonetic and semantic similarities. In Proc. of HLT/NAACL.

S. Karimi, F. Scholer, and A. Turpin. 2007. Collapsed consonant and vowel models: New approaches for English-Persian transliteration and back-transliteration. In Proc. of ACL.

A. Klementiev and D. Roth. 2008. Named entity transliteration and discovery in multilingual corpora. In Learning Machine Translation. MIT Press.

K. Knight and J. Graehl. 1998. Machine transliteration. Computational Linguistics, 24(4):599-612.

K. Knight and K. Yamada. 1999. A computational approach to deciphering unknown scripts. In Proc. of the ACL Workshop on Unsupervised Learning in Natural Language Processing.

K. Knight, A. Nair, N. Rathod, and K. Yamada. 2006. Unsupervised analysis for decipherment problems. In Proc. of COLING/ACL.

J. Kuo, H. Li, and Y. Yang. 2006. Learning transliteration lexicons from the web. In Proc. of ACL/COLING.

H. Li, K. C. Sim, J. Kuo, and M. Dong. 2007. Semantic transliteration of personal names. In Proc. of ACL.

M. Nagata, T. Saito, and K. Suzuki. 2001. Using the web as a bilingual dictionary. In Proc. of the ACL Workshop on Data-driven Methods in Machine Translation.

J. Oh and H. Isahara. 2006. Mining the web for transliteration lexicons: Joint-validation approach. In Proc. of the IEEE/WIC/ACM International Conference on Web Intelligence.

S. Ravi and K. Knight. 2008.
Attacking decipherment problems optimally with low-order n-gram models. In Proc. of EMNLP.

S. Ravi and K. Knight. 2009. Probabilistic methods for a Japanese syllable cipher. In Proc. of the International Conference on the Computer Processing of Oriental Languages (ICCPOL).

T. Sherif and G. Kondrak. 2007a. Bootstrapping a stochastic transducer for Arabic-English transliteration extraction. In Proc. of ACL.

T. Sherif and G. Kondrak. 2007b. Substring-based transliteration. In Proc. of ACL.

R. Sproat, T. Tao, and C. Zhai. 2006. Named entity transliteration with comparable corpora. In Proc. of ACL.

T. Tao, S. Yoon, A. Fister, R. Sproat, and C. Zhai. 2006. Unsupervised named entity transliteration using temporal and phonetic correlation. In Proc. of EMNLP.

J. Wu and J. S. Chang. 2007. Learning to find English to Chinese transliterations on the web. In Proc. of EMNLP/CoNLL.

D. Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proc. of ACL.

S. Yoon, K. Kim, and R. Sproat. 2007. Multilingual transliteration using feature based phonetic method. In Proc. of ACL.

D. Zelenko and C. Aone. 2006. Discriminative methods for transliteration. In Proc. of EMNLP.