Domain Adaptation with Multiple Sources

Yishay Mansour (Google Research and Tel Aviv Univ., mansour@tau.ac.il), Mehryar Mohri (Courant Institute and Google Research, mohri@cims.nyu.edu), Afshin Rostamizadeh (Courant Institute, New York University, rostami@cs.nyu.edu)

Abstract

This paper presents a theoretical analysis of the problem of adaptation with multiple sources. For each source domain, the distribution over the input points as well as a hypothesis with error at most $\epsilon$ are given. The problem consists of combining these hypotheses to derive a hypothesis with small error with respect to the target domain. We present several theoretical results relating to this problem. In particular, we prove that standard convex combinations of the source hypotheses may in fact perform very poorly and that, instead, combinations weighted by the source distributions benefit from favorable theoretical guarantees. Our main result shows that, remarkably, for any fixed target function, there exists a distribution weighted combining rule that has a loss of at most $\epsilon$ with respect to any target mixture of the source distributions. We further generalize the setting from a single target function to multiple consistent target functions and show the existence of a combining rule with error at most $3\epsilon$. Finally, we report empirical results for a multiple source adaptation problem with a real-world dataset.

1 Introduction

A common assumption in theoretical models of learning such as the standard PAC model [16], as well as in the design of learning algorithms, is that training instances are drawn according to the same distribution as the unseen test examples. In practice, however, there are many cases where this assumption does not hold. There can be no hope for generalization, of course, when the training and test distributions vastly differ, but when they are less dissimilar, learning can be more successful.

A typical situation is that of domain adaptation, where little or no labeled data is available for the target domain, but large amounts of labeled data from a source domain somewhat similar to the target, or hypotheses derived from that source, are available instead. This problem arises in a variety of applications in natural language processing [4, 7, 10], speech processing [8, 9, 11, 13-15], computer vision [12], and many other areas.

This paper studies the problem of adaptation from multiple sources, which has also received considerable attention in areas such as natural language processing and speech processing. An example is sentiment analysis, which consists of classifying text samples such as movie reviews, restaurant ratings, discussion board posts, or other web pages. Information about a relatively small number of domains such as movies or books may be available, but little or none can be found for more difficult domains such as travel.

We will consider the following problem of multiple source adaptation. For each source $i \in [1, k]$, the learner receives the distribution $D_i$ of the input points corresponding to that source as well as a hypothesis $h_i$ with loss at most $\epsilon$ on that source. The learner's task consists of combining the $k$ hypotheses $h_i$, $i \in [1, k]$, to derive a hypothesis $h$ with small loss with respect to the target distribution. The target distribution is assumed to be a mixture of the distributions $D_i$. We will discuss both the case where the mixture is known to the learner and the case where it is unknown.
Note that the distribution $D_i$ is defined over the input points and bears no information about the labels. In practice, $D_i$ is estimated from large amounts of unlabeled points typically available from source $i$.

An alternative set-up for domain adaptation with multiple sources is one where the learner is not supplied with a good hypothesis $h_i$ for each source but instead has access to the labeled training data for each source domain. A natural solution then consists of combining the raw labeled data from each source domain to form a new sample more representative of the target distribution and using that sample to train a learning algorithm. This set-up and the type of solutions just described have in fact been explored extensively in applications [8, 9, 11, 13-15]. However, several empirical observations motivated our study of hypothesis combination, in addition to the theoretical simplicity and clarity of this framework. First, in some applications such as very large-vocabulary speech recognition, the original raw data used to derive each domain-dependent model is often no longer available [2, 9]. This is because such models are typically obtained as a result of training based on many hours of speech, with files occupying hundreds of gigabytes of disk space, while the derived models require orders of magnitude less space. Thus, combining raw labeled data sets is not possible in such cases. Secondly, a combined data set can be substantially larger than each domain-specific data set, which can significantly increase the computational cost of training and make it prohibitive for some algorithms. Thirdly, combining labeled data sets requires the mixture parameters of the target distribution to be known, but it is not clear how to produce a hypothesis with a low error rate with respect to any mixture distribution.

Few theoretical studies have been devoted to the problem of adaptation with multiple sources. Ben-David et al. [1] gave bounds for single source adaptation, then Blitzer et al. [3] extended that work to give a bound on the error rate of a hypothesis derived from a weighted combination of the source data sets for the specific case of empirical risk minimization. Crammer et al. [5, 6] also addressed a problem where multiple sources are present, but the nature of the problem differs from adaptation since the distribution of the input points is the same for all these sources; only the labels change due to varying amounts of noise. We are not aware of a prior theoretical study of the problem of adaptation with multiple sources analyzed here.

We present several theoretical results relating to this problem. We examine two types of hypothesis combination. The first type is simply based on convex combinations of the $k$ hypotheses $h_i$. We show that this natural and widely used hypothesis combination may in fact perform very poorly in our setting. Namely, we give a simple example of two distributions and two matching hypotheses, each with zero error for their respective distribution, but such that any convex combination has expected absolute loss of 1/2 for the equal mixture of the distributions. This points out a potentially significant weakness of a convex combination. The second type of hypothesis combination, which is the main one we will study in this work, takes into account the probabilities derived from the distributions. Namely, the weight of hypothesis $h_i$ on an input $x$ is proportional to $\lambda_i D_i(x)$, where $\lambda$ is the vector of mixture weights.
We will refer to this method as the distribution weighted hypothesis combination. Our main result shows that, remarkably, for any fixed target function, there exists a distribution weighted combining rule that has a loss of at most $\epsilon$ with respect to any mixture of the $k$ distributions. We also show that there exists a distribution weighted combining rule that has loss at most $3\epsilon$ with respect to any consistent target function (one for which each $h_i$ has loss $\epsilon$ on $D_i$) and any mixture of the $k$ distributions. In some sense, our results establish that the distribution weighted hypothesis combination is the "right" combination rule, and that it also benefits from a well-founded theoretical guarantee.

The remainder of this paper is organized as follows. Section 2 introduces our theoretical model for multiple source adaptation. In Section 3, we analyze the case where the mixture parameters of the target distribution are known and show that the distribution weighted hypothesis combination that uses these mixture coefficients as weights achieves a loss of at most $\epsilon$. In Section 4, we give a simple method that produces an error of $O(k\epsilon)$ and does not require prior knowledge of the mixture parameters of the target distribution. Our main results showing the existence of a combined hypothesis performing well regardless of the target mixture are given in Section 5 for the case of a fixed target function, and in Section 6 for the case of multiple target functions. Section 7 reports empirical results for a multiple source adaptation problem with a real-world dataset.

2 Problem Set-Up

Let $X$ be the input space, $f : X \to \mathbb{R}$ the target function to learn, and $L : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ a loss function penalizing errors with respect to $f$. The loss of a hypothesis $h$ with respect to a distribution $D$ is denoted by $\mathcal{L}(D, h, f)$ and defined as
$$\mathcal{L}(D, h, f) = \mathbb{E}_{x \sim D}[L(h(x), f(x))] = \sum_{x \in X} L(h(x), f(x))\, D(x).$$
We will denote by $\Delta$ the simplex $\Delta = \{\lambda : \lambda_i \ge 0, \sum_{i=1}^k \lambda_i = 1\}$ of $\mathbb{R}^k$.

We consider an adaptation problem with $k$ source domains and a single target domain. The input to the problem is the set of $k$ source distributions $D_1, \ldots, D_k$ and $k$ corresponding hypotheses $h_1, \ldots, h_k$ such that for all $i \in [1, k]$, $\mathcal{L}(D_i, h_i, f) \le \epsilon$, for a fixed $\epsilon \ge 0$.[1] The distribution of the target domain, $D_T$, is assumed to be a mixture of the $k$ source distributions $D_i$, that is, $D_T(x) = \sum_{i=1}^k \lambda_i D_i(x)$, for some unknown mixture weight vector $\lambda \in \Delta$. The adaptation problem consists of combining the hypotheses $h_i$ to derive a hypothesis with small loss on the target domain. Since the target distribution $D_T$ is assumed to be a mixture, we will refer to this problem as the mixture adaptation problem.

A combining rule for the hypotheses takes as input the $h_i$ and outputs a single hypothesis $h : X \to \mathbb{R}$. We define two combining rules of particular interest for our purpose: the linear combining rule, which is based on a parameter $z \in \Delta$ and sets the hypothesis to
$$h(x) = \sum_{i=1}^k z_i h_i(x);$$
and the distribution weighted combining rule, also based on a parameter $z \in \Delta$, which sets the hypothesis to
$$h(x) = \sum_{i=1}^k \frac{z_i D_i(x)}{\sum_{j=1}^k z_j D_j(x)}\, h_i(x) \quad \text{when } \sum_{j=1}^k z_j D_j(x) > 0.$$
This last condition always holds if $D_i(x) > 0$ for all $x \in X$ and $i \in [1, k]$. We define $H$ to be the set of all distribution weighted combining rules.

Given the input to the adaptation problem, we have implicit information about the target function $f$. We define the set of consistent target functions, $F$, as follows:
$$F = \{g : \forall i \in [1, k],\ \mathcal{L}(D_i, h_i, g) \le \epsilon\}.$$
By definition, the target function $f$ is an element of $F$.

[1] In practice, the source distributions $D_i$ can be accurately estimated from large amounts of unlabeled points available from each source $i$.
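To make the two combining rules concrete, here is a minimal sketch in Python, assuming a finite input space with distributions and hypothesis predictions stored as arrays; the function names and the optional smoothing parameter are ours, added for illustration (with smoothing set to zero the sketch reduces to the rules defined above, and with a positive value it mirrors the perturbed rule used later in Section 5).

```python
import numpy as np

def linear_rule(z, H):
    """Linear combining rule: h(x) = sum_i z_i h_i(x).

    z: (k,) mixture parameter in the simplex.
    H: (k, n) array with H[i, x] = h_i(x) on a finite input space of n points.
    Returns an (n,) vector of combined predictions.
    """
    z = np.asarray(z, dtype=float)
    return z @ H

def distribution_weighted_rule(z, D, H, eta=0.0):
    """Distribution weighted combining rule:
    h(x) = sum_i [z_i D_i(x) / sum_j z_j D_j(x)] h_i(x).

    D: (k, n) array of source distributions, each row summing to 1.
    eta: optional smoothing; eta = 0 recovers the unsmoothed rule (defined
    only where the denominator is positive), while eta > 0 uses weights
    (z_i D_i(x) + eta/k) / (sum_j z_j D_j(x) + eta).
    """
    z = np.asarray(z, dtype=float)
    k = len(z)
    num = z[:, None] * D + eta / k   # (k, n) per-point numerators
    den = num.sum(axis=0)            # (n,) equals sum_j z_j D_j(x) + eta
    weights = num / den              # per-point convex weights over hypotheses
    return (weights * H).sum(axis=0)

def expected_loss(D_target, h_pred, f_true, loss=lambda a, b: np.abs(a - b)):
    """L(D, h, f) = sum_x L(h(x), f(x)) D(x) on a finite input space."""
    return float(np.sum(loss(h_pred, f_true) * D_target))
```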
We will assume that the following properties hold for the loss function $L$: (i) $L$ is non-negative: $L(x, y) \ge 0$ for all $x, y \in \mathbb{R}$; (ii) $L$ is convex with respect to the first argument: $L(\sum_{i=1}^k \lambda_i x_i, y) \le \sum_{i=1}^k \lambda_i L(x_i, y)$ for all $x_1, \ldots, x_k, y \in \mathbb{R}$ and $\lambda \in \Delta$; (iii) $L$ is bounded: there exists $M \ge 0$ such that $L(x, y) \le M$ for all $x, y \in \mathbb{R}$; and (iv) $L(x, y)$ is continuous in both $x$ and $y$. The absolute loss defined by $L(x, y) = |x - y|$ will serve as our primary motivating example.

3 Known Target Mixture Distribution

In this section, we assume that the parameters of the target mixture distribution are known. Thus, the learning algorithm is given $\lambda$ such that $D_T(x) = \sum_{i=1}^k \lambda_i D_i(x)$. A good starting point is to study the performance of the linear combining rule, namely the classifier $h(x) = \sum_{i=1}^k \lambda_i h_i(x)$. While this seems like a very natural classifier, the following example highlights the problematic aspects of this approach.

Consider a discrete domain $X = \{a, b\}$ and two distributions, $D_a$ and $D_b$, such that $D_a(a) = 1$ and $D_b(b) = 1$. Namely, each distribution puts all its weight on a single element of $X$. Consider the target function $f$, where $f(a) = 1$ and $f(b) = 0$, and let the loss be the absolute loss. Let $h_0 \equiv 0$ be the function that outputs 0 for all $x \in X$, and similarly $h_1 \equiv 1$. The hypotheses $h_1$ and $h_0$ have zero expected absolute loss on the distributions $D_a$ and $D_b$, respectively, i.e., $\epsilon = 0$. Now consider the target distribution $D_T$ with $\lambda_a = \lambda_b = 1/2$, thus $D_T(a) = D_T(b) = 1/2$. The hypothesis $h(x) = (1/2)h_1(x) + (1/2)h_0(x)$ always outputs 1/2, and has an absolute loss of 1/2. Furthermore, for any other parameter $z$ of the linear combining rule, the expected absolute loss of $h(x) = z h_1(x) + (1 - z) h_0(x)$ with respect to $D_T$ is exactly 1/2. We have established the following theorem.

Theorem 1. There is a mixture adaptation problem with $\epsilon = 0$ for which any linear combining rule has expected absolute loss of 1/2.

Next we show that the distribution weighted combining rule produces a hypothesis with a low expected loss. Given a mixture $D_T(x) = \sum_{i=1}^k \lambda_i D_i(x)$, we consider the distribution weighted combining rule with parameter $\lambda$, which we denote by $h_\lambda$. Recall that
$$h_\lambda(x) = \sum_{i=1}^k \frac{\lambda_i D_i(x)}{\sum_{j=1}^k \lambda_j D_j(x)}\, h_i(x) = \sum_{i=1}^k \frac{\lambda_i D_i(x)}{D_T(x)}\, h_i(x).$$
Using the convexity of $L$ with respect to the first argument, the loss of $h_\lambda$ with respect to $D_T$ and a target $f \in F$ can be bounded as follows:
$$\mathcal{L}(D_T, h_\lambda, f) = \sum_{x \in X} L(h_\lambda(x), f(x))\, D_T(x) \le \sum_{x \in X} \sum_{i=1}^k \lambda_i D_i(x) L(h_i(x), f(x)) = \sum_{i=1}^k \lambda_i \epsilon_i \le \epsilon,$$
where $\epsilon_i = \mathcal{L}(D_i, h_i, f) \le \epsilon$. Thus, we have derived the following theorem.

Theorem 2. For any mixture adaptation problem with target distribution $D_\lambda(x) = \sum_{i=1}^k \lambda_i D_i(x)$, the expected loss of the hypothesis $h_\lambda$ is at most $\epsilon$ with respect to any target function $f \in F$: $\mathcal{L}(D_\lambda, h_\lambda, f) \le \epsilon$.

4 Simple Adaptation Algorithms

In this section, we show how to construct a simple distribution weighted hypothesis that has an expected loss guarantee with respect to any mixture. Our hypothesis $h_u$ is simply based on equal weights, i.e., $u_i = 1/k$ for all $i \in [1, k]$. Thus,
$$h_u(x) = \sum_{i=1}^k \frac{(1/k) D_i(x)}{\sum_{j=1}^k (1/k) D_j(x)}\, h_i(x) = \sum_{i=1}^k \frac{D_i(x)}{\sum_{j=1}^k D_j(x)}\, h_i(x).$$
We show for $h_u$ an expected loss bound of $k\epsilon$ with respect to any mixture distribution $D_T$ and target function $f \in F$. (Proof omitted.)

Theorem 3. For any mixture adaptation problem, the expected loss of $h_u$ is at most $k\epsilon$ for any mixture distribution $D_T$ and target function $f \in F$, i.e., $\mathcal{L}(D_T, h_u, f) \le k\epsilon$.
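As a quick numerical illustration of Theorem 1 and Theorem 2, the following sketch encodes the two-point example above and reuses the helper functions from the earlier snippet (the array encoding and variable names are ours): every linear combination incurs absolute loss exactly 1/2 on the equal mixture, while the distribution weighted rule with $z = \lambda$ recovers zero loss because, at each point, all the weight concentrates on the hypothesis matching the source that generates that point.

```python
import numpy as np

# Two-point domain X = {a, b}; columns index a and b.
D = np.array([[1.0, 0.0],     # D_a puts all mass on a
              [0.0, 1.0]])    # D_b puts all mass on b
f = np.array([1.0, 0.0])      # target: f(a) = 1, f(b) = 0
H = np.array([[1.0, 1.0],     # h_1 = 1 (zero loss on D_a)
              [0.0, 0.0]])    # h_0 = 0 (zero loss on D_b)
D_T = 0.5 * D[0] + 0.5 * D[1]  # equal mixture target

# Any linear combination z*h_1 + (1 - z)*h_0 has absolute loss exactly 1/2.
for z in np.linspace(0.0, 1.0, 5):
    h_lin = linear_rule(np.array([z, 1.0 - z]), H)
    print(f"linear z={z:.2f}: loss={expected_loss(D_T, h_lin, f):.2f}")

# The distribution weighted rule with z = lambda has zero loss here.
h_dw = distribution_weighted_rule(np.array([0.5, 0.5]), D, H)
print("distribution weighted loss:", expected_loss(D_T, h_dw, f))
```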
Unfortunately, the hypothesis $h_u$ can have an expected absolute loss as large as $\Omega(k\epsilon)$. (Proof omitted.)

Theorem 4. There is a mixture adaptation problem for which the expected absolute loss of $h_u$ is $\Omega(k\epsilon)$. Also, for $k = 2$ there is an input to the mixture adaptation problem for which the expected absolute loss of $h_u$ is $2\epsilon - \epsilon^2$.

5 Existence of a Good Hypothesis

In this section, we will show that for any target function $f \in F$ there is a distribution weighted combining rule $h_z$ that has a loss of at most $\epsilon$ with respect to any mixture $D_T$. We will construct the proof in two parts. In the first part, we will show, using a simple reduction to a zero-sum game, that one can obtain a mixture of $h_z$'s that guarantees a loss bounded by $\epsilon$. In the second part, which is the more interesting scenario, we will show that for any target function $f \in F$ there is a single distribution weighted combining rule $h_z$ that has loss at most $\epsilon$ with respect to any mixture $D_T$. This latter part will require the use of Brouwer's fixed point theorem to show the existence of such an $h_z$.

5.1 Zero-sum game

The adaptation problem can be viewed as a zero-sum game between two players, NATURE and LEARNER. Let the input to the mixture adaptation problem be $D_1, \ldots, D_k$, $h_1, \ldots, h_k$ and $\epsilon$, and fix a target function $f \in F$. The player NATURE picks a distribution $D_i$ while the player LEARNER selects a distribution weighted combining rule $h_z \in H$. The loss when NATURE plays $D_i$ and LEARNER plays $h_z$ is $\mathcal{L}(D_i, h_z, f)$. Let us emphasize that the target function $f \in F$ is fixed beforehand. The objective of NATURE is to maximize the loss and the objective of LEARNER is to minimize it. We start with the following lemma.

Lemma 1. For any mixed strategy of NATURE, i.e., a distribution $\mu$ over the $D_i$'s, the action $h_\mu \in H$ of LEARNER has expected loss at most $\epsilon$, i.e., $\mathcal{L}(D_\mu, h_\mu, f) \le \epsilon$.

The proof is identical to that of Theorem 2. This almost establishes that the value of the game is at most $\epsilon$. The technical issue we need to address is that the action space of LEARNER is infinite. However, by an appropriate discretization of $H$ we can derive the following theorem.

Theorem 5. For any target function $f \in F$ and any $\delta > 0$, there exists a function $h(x) = \sum_{j=1}^m \alpha_j h_{z_j}(x)$, where each $h_{z_j} \in H$, such that $\mathcal{L}(D_T, h, f) \le \epsilon + \delta$ for any mixture distribution $D_T(x) = \sum_{i=1}^k \lambda_i D_i(x)$.

Since we can fix $\delta > 0$ to be arbitrarily small, this implies that a linear mixture of distribution weighted combining rules can guarantee a loss of almost $\epsilon$ with respect to any mixture distribution.

5.2 Single distribution weighted combining rule

In the previous subsection, we showed that a mixture of hypotheses in $H$ can guarantee a loss of at most $\epsilon$. Here, we will considerably strengthen the result and show that there is a single hypothesis in $H$ for which this guarantee holds. Unfortunately, our loss is not convex with respect to the elements of $H$, so we need to resort to a more powerful technique, namely Brouwer's fixed point theorem.

For the proof, we will need the distribution weighted combining rule $h_z$ to be continuous in the parameter $z$. In general, this does not hold, due to the possible existence of points $x \in X$ for which $\sum_{j=1}^k z_j D_j(x) = 0$. To avoid this discontinuity, we modify the definition of $h_z$ to $h_z^\eta$ as follows.

Claim 1. For any $\eta > 0$ and $z \in \Delta$, let $h_z^\eta : X \to \mathbb{R}$ be the function defined by
$$h_z^\eta(x) = \sum_{i=1}^k \frac{z_i D_i(x) + \eta/k}{\sum_{j=1}^k z_j D_j(x) + \eta}\, h_i(x).$$
Then, for any distribution $D$, $\mathcal{L}(D, h_z^\eta, f)$ is continuous in $z$.[2]

Let us first state Brouwer's fixed point theorem.
Theorem 6 (Brouwer Fixed Point Theorem). For any compact and convex non-empty set $A \subset \mathbb{R}^n$ and any continuous function $f : A \to A$, there is a point $x \in A$ such that $f(x) = x$.

We first show that there exists a distribution weighted combining rule $h_z^\eta$ for which the losses $\mathcal{L}(D_i, h_z^\eta, f)$ are all nearly the same.

Lemma 2. For any target function $f \in F$ and any $\eta, \eta' > 0$, there exists $z \in \Delta$, with $z_i \neq 0$ for all $i \in [1, k]$, such that the following holds for the distribution weighted combining rule $h_z^\eta \in H$:
$$\mathcal{L}(D_i, h_z^\eta, f) = \gamma + \eta' - \frac{\eta'}{z_i k} \quad \text{for any } 1 \le i \le k, \quad \text{where } \gamma = \sum_{j=1}^k z_j \mathcal{L}(D_j, h_z^\eta, f).$$

Proof. Fix $\eta' > 0$ and let $L_i^z = \mathcal{L}(D_i, h_z^\eta, f)$ for all $z \in \Delta$ and $i \in [1, k]$. Consider the mapping $\phi : \Delta \to \Delta$ defined for all $z \in \Delta$ by $[\phi(z)]_i = (z_i L_i^z + \eta'/k) / (\sum_{j=1}^k z_j L_j^z + \eta')$, where $[\phi(z)]_i$ is the $i$th coordinate of $\phi(z)$, $i \in [1, k]$. By Claim 1, $\phi$ is continuous. Thus, by Brouwer's Fixed Point Theorem, there exists $z \in \Delta$ such that $\phi(z) = z$. This implies that $z_i = (z_i L_i^z + \eta'/k) / (\sum_{j=1}^k z_j L_j^z + \eta')$. Since $\eta' > 0$, we must have $z_i \neq 0$ for any $i \in [1, k]$. Thus, we can divide by $z_i$ and write $L_i^z + \eta'/(z_i k) = \sum_{j=1}^k z_j L_j^z + \eta'$. Therefore, $L_i^z = \gamma + \eta' - \eta'/(z_i k)$ with $\gamma = \sum_{j=1}^k z_j L_j^z$.

[2] In addition to continuity, the perturbation applied to $h_z$ to obtain $h_z^\eta$ also helps us ensure that none of the mixture weights $z_i$ is zero in the proof of Lemma 2.

Note that the lemma just presented does not use the structure of the distribution weighted combining rule, but only the fact that the loss is continuous in the parameter $z$. The lemma applies as well to the linear combining rule and provides the same guarantee. The real crux of the argument is, as shown in the next lemma, that $\gamma$ is small for a distribution weighted combining rule (while it can be very large for a linear combining rule).

Lemma 3. For any target function $f \in F$ and any $\eta, \eta' > 0$, there exists $z \in \Delta$ such that $\mathcal{L}(D_\lambda, h_z^\eta, f) \le \epsilon + \eta M + \eta'$ for any $\lambda \in \Delta$.

Proof. Let $z$ be the parameter guaranteed by Lemma 2. Then $\mathcal{L}(D_i, h_z^\eta, f) = \gamma + \eta' - \eta'/(z_i k) \le \gamma + \eta'$, for $1 \le i \le k$. Consider the mixture $D_z$, i.e., set the mixture parameter to be $z$, and consider the quantity $\mathcal{L}(D_z, h_z^\eta, f)$. On the one hand, by definition, $\mathcal{L}(D_z, h_z^\eta, f) = \sum_{i=1}^k z_i \mathcal{L}(D_i, h_z^\eta, f)$ and thus $\mathcal{L}(D_z, h_z^\eta, f) = \gamma$. On the other hand,
$$\mathcal{L}(D_z, h_z^\eta, f) = \sum_{x \in X} D_z(x) L(h_z^\eta(x), f(x)) \le \sum_{i=1}^k \sum_{x \in X} \left(z_i D_i(x) + \frac{\eta}{k}\right) L(h_i(x), f(x)) \le \sum_{i=1}^k z_i \mathcal{L}(D_i, h_i, f) + \eta M \le \epsilon + \eta M.$$
Therefore $\gamma \le \epsilon + \eta M$. To complete the proof, note that the following inequality holds for any mixture $D_\lambda$:
$$\mathcal{L}(D_\lambda, h_z^\eta, f) = \sum_{i=1}^k \lambda_i \mathcal{L}(D_i, h_z^\eta, f) \le \gamma + \eta',$$
which is at most $\epsilon + \eta M + \eta'$.

By setting $\eta = \delta/(2M)$ and $\eta' = \delta/2$, we can derive the following theorem.

Theorem 7. For any target function $f \in F$ and any $\delta > 0$, there exists $\eta > 0$ and $z \in \Delta$ such that $\mathcal{L}(D_\lambda, h_z^\eta, f) \le \epsilon + \delta$ for any mixture parameter $\lambda$.

6 Arbitrary target function

The results of the previous section show that for any fixed target function there is a good distribution weighted combining rule. In this section, we wish to extend these results to the case where the target function is not fixed in advance. Thus, we seek a single distribution weighted combining rule that can perform well for any $f \in F$ and any mixture $D_\lambda$. Unfortunately, we are not able to prove a bound of $\epsilon + o(\epsilon)$ but only a bound of $3\epsilon$. To show this bound, we will show that for any $f_1, f_2 \in F$ and any hypothesis $h$, the difference in loss is at most $2\epsilon$.

Lemma 4. Assume that the loss function $L$ obeys the triangle inequality, i.e., $\mathcal{L}(D, f, h) \le \mathcal{L}(D, f, g) + \mathcal{L}(D, g, h)$.
Then for any $f, f' \in F$ and any mixture $D_\lambda$, the inequality $\mathcal{L}(D_\lambda, h, f') \le \mathcal{L}(D_\lambda, h, f) + 2\epsilon$ holds for any hypothesis $h$.

Proof. Since our loss function obeys the triangle inequality, for any functions $f$, $g$, $h$, the following holds: $\mathcal{L}(D, f, h) \le \mathcal{L}(D, f, g) + \mathcal{L}(D, g, h)$. In our case, we observe that replacing $g$ with any $f \in F$ gives $\mathcal{L}(D_\lambda, f', h) \le \mathcal{L}(D_\lambda, f, h) + \mathcal{L}(D_\lambda, f', f)$. We can bound the term $\mathcal{L}(D_\lambda, f', f)$ with a similar inequality, $\mathcal{L}(D_\lambda, f', f) \le \mathcal{L}(D_\lambda, f', h_\lambda) + \mathcal{L}(D_\lambda, f, h_\lambda) \le 2\epsilon$, where $h_\lambda$ is the distribution weighted combining rule obtained by choosing $z = \lambda$ and applying Theorem 2. Therefore, for any $f, f' \in F$ we have $\mathcal{L}(D_\lambda, f', h) \le \mathcal{L}(D_\lambda, f, h) + 2\epsilon$, which completes the proof.

We derive the following corollary to Theorem 7.

Corollary 1. Assume that the loss function $L$ obeys the triangle inequality. Then, for any $\delta > 0$, there exists $\eta > 0$ and $z \in \Delta$ such that for any mixture parameter $\lambda$ and any $f \in F$, $\mathcal{L}(D_\lambda, h_z^\eta, f) \le 3\epsilon + \delta$.

[Figure 1: (a) MSE performance for a target mixture of four domains (1: books, 2: dvd, 3: electronics, 4: kitchen, 5: linear, 6: weighted), with in-domain and out-domain results shown. (b) MSE performance under various mixtures of two source domains, with mixture parameter $\alpha$; left plot: $\alpha$ book + $(1-\alpha)$ kitchen, right plot: $\alpha$ dvd + $(1-\alpha)$ electronics. Curves compare the weighted and linear combining rules with the two base hypotheses.]

7 Empirical results

This section reports the results of our experiments with a distribution weighted combining rule using real-world data. In our experiments, we fixed a mixture target distribution $D_\lambda$ and considered the distribution weighted combining rule $h_z$ with $z = \lambda$. Since we used real-world data, we did not have access to the domain distributions. Instead, we modeled each distribution and used large amounts of unlabeled data available for each source to estimate the model's parameters. One could thus have expected potentially significantly worse empirical results than the theoretical ones, but this turned out not to be an issue in our experiments.

We used the sentiment analysis dataset found in [4].[3] The data consists of review text and rating labels taken from amazon.com product reviews within four different categories (domains). These four domains consist of book, dvd, electronics and kitchen reviews, where each domain contains 2000 data points.[4]

In our first experiment, we considered mixtures of all four domains, where the test set was a uniform mixture of 600 points, that is, the union of 150 points taken uniformly at random from each domain. The remaining 1,850 points from each domain were used to train the base hypotheses.[5] We compared our proposed distribution weighted combining rule to the linear combining rule. The results are shown in Figure 1(a). They show that the base hypotheses perform poorly on the mixture test set, which justifies the need for adaptation. Furthermore, the distribution weighted combining rule is shown to perform at least as well as the worst in-domain performance of a base hypothesis, as expected from our bounds. Finally, we observe that this real-world data experiment gives an example in which a linear combining rule performs poorly compared to the distribution weighted combining rule.
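The following sketch illustrates how one might approximate this pipeline: unigram language models estimated from each domain's unlabeled text stand in for the distributions $D_i$, and pre-trained per-domain regressors stand in for the base hypotheses $h_i$. The class name, the add-one smoothing, and the log-space normalization are our own choices for illustration, not the paper's exact tooling (the paper uses libSVM and the GRM library, as described in the footnotes below).

```python
import numpy as np
from collections import Counter

def unigram_log_prob(counts, total, vocab_size, tokens):
    """Add-one smoothed unigram log-probability of a token sequence."""
    return sum(np.log((counts[t] + 1.0) / (total + vocab_size)) for t in tokens)

class DistributionWeightedRegressor:
    """Combine per-domain regressors h_i with weights proportional to z_i * D_i(x)."""

    def __init__(self, domain_texts, domain_models, z):
        # domain_texts: list of k lists of tokenized unlabeled documents.
        # domain_models: list of k fitted regressors exposing .predict()
        # (e.g., per-domain support vector regressors on unigram features).
        self.models = domain_models
        self.z = np.asarray(z, dtype=float)
        vocab = {t for docs in domain_texts for doc in docs for t in doc}
        self.V = len(vocab)
        self.counts = [Counter(t for doc in docs for t in doc) for docs in domain_texts]
        self.totals = [sum(c.values()) for c in self.counts]

    def predict_one(self, tokens, features):
        # D_i(x) is approximated by the i-th domain's unigram language model.
        log_p = np.array([
            unigram_log_prob(c, n, self.V, tokens)
            for c, n in zip(self.counts, self.totals)
        ])
        log_w = np.log(self.z + 1e-12) + log_p
        w = np.exp(log_w - log_w.max())   # stabilize before normalizing
        w /= w.sum()                      # weights z_i D_i(x) / sum_j z_j D_j(x)
        preds = np.array([m.predict(np.asarray(features).reshape(1, -1))[0]
                          for m in self.models])
        return float(w @ preds)
```

Because the unigram log-probability of a whole document is a very small number, the per-point weights are combined in log space and renormalized before mixing the regressors' predictions; for instance, `domain_models` could be a list of scikit-learn `SVR` regressors fitted on each domain's binary unigram features, roughly mirroring the libSVM setup described in footnote [4] below.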
In other experiments, we considered the mixture of two domains, where the mixture is varied according to the parameter $\alpha \in \{0.1, 0.2, \ldots, 1.0\}$. For each plot in Figure 1(b), the test set consists of $600\alpha$ points from the first domain and $600(1 - \alpha)$ points from the second domain, where the first and second domains are indicated in the figure. The remaining points that were not used for testing were used to train the base hypotheses. The results show the linear shift from one domain to the other, as is evident from the performance of the two base hypotheses. The distribution weighted combining rule outperforms the base hypotheses as well as the linear combining rule. Thus, our preliminary experiments suggest that the distribution weighted combining rule performs well in practice and clearly outperforms a simple linear combining rule. Furthermore, using statistical language models as approximations to the distribution oracles seems to be sufficient in practice and can help produce a good distribution weighted combining rule.

[3] http://www.seas.upenn.edu/~mdredze/datasets/sentiment/

[4] The rating label, an integer between 1 and 5, was used as a regression label, and the loss was measured by the mean squared error (MSE). All base hypotheses were generated using Support Vector Regression (SVR) [17] with the trade-off parameters C = 8, ε = 0.1, and a Gaussian kernel with parameter g = 0.00078. The SVR solutions were obtained using the libSVM software library (http://www.csie.ntu.edu.tw/~cjlin/libsvm/). Our features were defined as the set of unigrams appearing five times or more in all domains. This defined about 4000 unigrams. We used a binary feature vector encoding the presence or absence of these frequent unigrams to define our instances. To model the domain distributions, we used a unigram statistical language model trained on the same corpus as the one used to define the features. The language model was created using the GRM library (http://www.research.att.com/~fsmtools/grm/).

[5] Each experiment was repeated 20 times with random folds. The standard deviation found was far below what could be legibly displayed in the figures.

8 Conclusion

We presented a theoretical analysis of the problem of adaptation with multiple sources. Domain adaptation is an important problem that arises in a variety of modern applications where limited or no labeled data is available for a target application, and our analysis can be relevant in a variety of situations. The theoretical guarantees proven for the distribution weighted combining rule provide it with a strong foundation. Its empirical performance with a real-world data set further motivates its use in applications.

Acknowledgments

We thank Jennifer Wortman for helpful comments on an earlier draft of this paper. The work of M. Mohri and A. Rostamizadeh was partly supported by the New York State Office of Science, Technology and Academic Research (NYSTAR).

References

[1] Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. In Proceedings of NIPS 2006. MIT Press, 2007.

[2] Jacob Benesty, M. Mohan Sondhi, and Yiteng Huang, editors. Springer Handbook of Speech Processing. Springer, 2008.

[3] John Blitzer, Koby Crammer, A. Kulesza, Fernando Pereira, and Jennifer Wortman. Learning bounds for domain adaptation. In Proceedings of NIPS 2007. MIT Press, 2008.

[4] John Blitzer, Mark Dredze, and Fernando Pereira. Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification.
In Proceedings of ACL 2007, Prague, Czech Republic, 2007.

[5] Koby Crammer, Michael Kearns, and Jennifer Wortman. Learning from Data of Variable Quality. In Proceedings of NIPS 2005, 2006.

[6] Koby Crammer, Michael Kearns, and Jennifer Wortman. Learning from multiple sources. In Proceedings of NIPS 2006, 2007.

[7] Mark Dredze, John Blitzer, Partha Pratim Talukdar, Kuzman Ganchev, Joao Graca, and Fernando Pereira. Frustratingly Hard Domain Adaptation for Parsing. In Proceedings of CoNLL 2007, Prague, Czech Republic, 2007.

[8] Jean-Luc Gauvain and Chin-Hui Lee. Maximum a posteriori estimation for multivariate Gaussian mixture observations of Markov chains. IEEE Transactions on Speech and Audio Processing, 2(2):291-298, 1994.

[9] Frederick Jelinek. Statistical Methods for Speech Recognition. The MIT Press, 1998.

[10] Jing Jiang and ChengXiang Zhai. Instance Weighting for Domain Adaptation in NLP. In Proceedings of ACL 2007, pages 264-271, Prague, Czech Republic, 2007. Association for Computational Linguistics.

[11] C. J. Leggetter and Phil C. Woodland. Maximum likelihood linear regression for speaker adaptation of continuous density hidden Markov models. Computer Speech and Language, pages 171-185, 1995.

[12] Aleix M. Martínez. Recognizing imprecisely localized, partially occluded, and expression variant faces from a single sample per class. IEEE Trans. Pattern Anal. Mach. Intell., 24(6):748-763, 2002.

[13] S. Della Pietra, V. Della Pietra, R. L. Mercer, and S. Roukos. Adaptive language modeling using minimum discriminant estimation. In HLT '91: Proceedings of the Workshop on Speech and Natural Language, pages 103-106, Morristown, NJ, USA, 1992. Association for Computational Linguistics.

[14] Brian Roark and Michiel Bacchiani. Supervised and unsupervised PCFG adaptation to novel domains. In Proceedings of HLT-NAACL, 2003.

[15] Roni Rosenfeld. A Maximum Entropy Approach to Adaptive Statistical Language Modeling. Computer Speech and Language, 10:187-228, 1996.

[16] Leslie G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134-1142, 1984.

[17] Vladimir N. Vapnik. Statistical Learning Theory. Wiley-Interscience, New York, 1998.