Boosting the Area Under the ROC Curve

Philip M. Long (plong@google.com)    Rocco A. Servedio (rocco@cs.columbia.edu)

Abstract

We show that any weak ranker that can achieve an area under the ROC curve slightly better than 1/2 (which can be achieved by random guessing) can be efficiently boosted to achieve an area under the ROC curve arbitrarily close to 1. We further show that this boosting can be performed even in the presence of independent misclassification noise, given access to a noise-tolerant weak ranker.

1 Introduction

Background. Machine learning is often used to identify members of a given class from a list of candidates. This can be formulated as a ranking problem, where the algorithm takes as input a list of examples of members and non-members of the class, and outputs a function that can be used to rank candidates. The goal is to have the top of the list enriched for members of the class of interest. ROC curves [12, 3] are often used to evaluate the quality of a ranking function. A point on an ROC curve is obtained by cutting off the ranked list, and checking how many items above the cutoff are members of the target class ("true positives"), and how many are not ("false positives"). The AUC [1, 10, 3] (area under the ROC curve) is often used as a summary statistic. It is obtained by rescaling the axes so the true positives and false positives vary between 0 and 1, and, as the name implies, examining the area under the resulting curve.

The AUC measures the ability of a ranker to identify regions in feature space that are unusually densely populated with members of a given class. A ranker can succeed according to this criterion even if positive examples are less dense than negative examples everywhere, but, in order to succeed, it must identify where the positive examples tend to be. This is in contrast with classification, where, if Pr[y = 1 | x] is less than 1/2 everywhere, just predicting y = −1 everywhere would suffice.

Our Results. It is not hard to see that an AUC of 1/2 can be achieved by random guessing (see [3]), thus it is natural to define a "weak ranker" to be an algorithm that can achieve AUC slightly above 1/2. We show that any weak ranker can be boosted to a strong ranker that achieves AUC arbitrarily close to the best possible value of 1. We also consider the standard independent misclassification noise model, in which the label of each example is flipped with probability η. We show that in this setting, given a noise-tolerant weak ranker (one that achieves nontrivial AUC in the presence of noisy data as described above), we can boost to a strong ranker that achieves AUC at least 1 − ε, for any noise rate η < 1/2 and any ε > 0.

Related work. Freund, Iyer, Schapire and Singer [4] introduced RankBoost, which performs ranking with more fine-grained control over preferences between pairs of items than we consider here. They performed an analysis that implies a bound on the AUC of the boosted ranking function in terms of a different measure of the quality of weak rankers. Cortes and Mohri [2] theoretically analyzed the "typical" relationship between the error rate of a classifier based on thresholding a scoring function and the AUC obtained through the scoring function; they also pointed out the close relationship between the loss function optimized by RankBoost and the AUC. Rudin, Cortes, Mohri, and Schapire [11] showed that, when the two classes are equally likely, the loss function optimized by AdaBoost coincides with the loss function of RankBoost.
Noise-tolerant boosting has previously been studied for classification. Kalai and Servedio [7] showed that, if data is corrupted with noise at rate η, it is possible to boost the accuracy of any noise-tolerant weak learner arbitrarily close to 1 − η, and they showed that it is impossible to boost beyond 1 − η. In contrast, we show that, in the presence of noise at a rate arbitrarily close to 1/2, the AUC can be boosted arbitrarily close to 1. Our noise-tolerant boosting algorithm uses as a subroutine the "martingale booster" for classification of Long and Servedio [9].

Methods. The key observation is that a weak ranker can be used to find a "two-sided" weak classifier (Lemma 4), which achieves accuracy slightly better than random guessing on both positive and negative examples. Two-sided weak classifiers can be boosted to obtain accuracy arbitrarily close to 1, again on both the positive examples and the negative examples; a proof of this is implicit in the analysis of [9]. Such a two-sided strong classifier is easily seen to lead to AUC close to 1.

Why is it possible to boost the AUC past the noise rate, when this is provably not possible for classification? Known approaches to noise-tolerant boosting [7, 9] force the weak learner to provide a two-sided weak hypothesis by balancing the distributions that are constructed so that both classes are equally likely. However, this balancing skews the distributions so that it is no longer the case that the event that an example is corrupted with noise is independent of the instance; randomization was used to patch this up in [7, 9], and the necessary slack was only available if the desired accuracy was coarser than the noise rate. (We note that the lower bound from [7] is proved using a construction in which the class probability of positive examples is less than the noise rate; the essence of that proof is to show that in that situation it is impossible to balance the distribution given access to noisy examples.) In contrast, having a weak ranker provides enough leverage to yield a two-sided weak classifier without needing any rebalancing.

Outline. Section 2 gives some definitions. In Section 3, we analyze boosting the AUC when there is no noise, in an abstract model where the weak learner is given a distribution and returns a weak ranker, and sampling issues are abstracted away. In Section 4, we consider boosting in the presence of noise in a similarly abstract model. We address sampling issues in Section 5.

2 Preliminaries

Rankings and AUC. Throughout this work we let X be a domain, c : X → {−1, 1} be a classifier, and D be a probability distribution over labeled examples (x, c(x)). We say that D is nontrivial (for c) if D assigns nonzero probability to both positive and negative examples. We write D+ to denote the marginal distribution over positive examples and D− to denote the marginal distribution over negative examples, so D is a mixture of the distributions D+ and D−.

As has been previously pointed out, we may view any function h : X → R as a ranking of X. Note that if h(x1) = h(x2) then the ranking does not order x1 relative to x2. Given a ranking function h : X → R, for each value θ ∈ R there is a point (α, β) on the ROC curve of h, where α is the false positive rate and β is the true positive rate of the classifier obtained by thresholding h at θ: α = D−[h(x) ≥ θ] and β = D+[h(x) ≥ θ]. Every ROC curve contains the points (0, 0) and (1, 1), corresponding to θ = ∞ and θ = −∞ respectively.
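As an illustration of these definitions, the following minimal Python sketch (ours, not part of the paper; all function names are illustrative) computes the empirical ROC points (α, β) of a scoring function on finite samples drawn from D+ and D−, and the area under the resulting curve by the trapezoid rule.

```python
import numpy as np

def roc_points(scores_pos, scores_neg):
    """Empirical ROC points (alpha, beta), sweeping the threshold theta over
    all observed scores: alpha = D^-[h(x) >= theta], beta = D^+[h(x) >= theta]."""
    scores_pos = np.asarray(scores_pos, dtype=float)
    scores_neg = np.asarray(scores_neg, dtype=float)
    thresholds = np.r_[np.inf, np.unique(np.r_[scores_pos, scores_neg])[::-1], -np.inf]
    points = []
    for theta in thresholds:
        alpha = np.mean(scores_neg >= theta)   # false positive rate
        beta = np.mean(scores_pos >= theta)    # true positive rate
        points.append((alpha, beta))
    return points  # runs from (0, 0) at theta = +inf to (1, 1) at theta = -inf

def empirical_auc(scores_pos, scores_neg):
    """Area under the empirical ROC curve (trapezoid rule)."""
    pts = roc_points(scores_pos, scores_neg)
    return sum((a2 - a1) * (b1 + b2) / 2.0
               for (a1, b1), (a2, b2) in zip(pts, pts[1:]))
```

On finite samples, this trapezoid area coincides with the pairwise-comparison form of the AUC defined below.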
Given h : X → R and D, the AUC can be defined as

AUC(h; D) = Pr_{u∼D+, v∼D−}[h(u) > h(v)] + (1/2) · Pr_{u∼D+, v∼D−}[h(u) = h(v)].

It is well known (see e.g. [2, 6]) that the AUC as defined above is equal to the area under the ROC curve for h.

Weak Rankers. Fix any distribution D. It is easy to see that any constant function h achieves AUC(h; D) = 1/2, and also that for X finite and π a random permutation of X, the expected AUC of h(π(·)) is 1/2 for any function h. This motivates the following definition:

Definition 1 A weak ranker with advantage γ is an algorithm that, given any nontrivial distribution D, returns a function h : X → R that has AUC(h; D) ≥ 1/2 + γ.

In the rest of the paper we show how boosting algorithms originally designed for classification can be adapted to convert weak rankers into "strong" rankers (that achieve AUC at least 1 − ε) in a range of different settings.

3 From weak to strong AUC

The main result of this section is a simple proof that the AUC can be boosted. We achieve this in a relatively straightforward way by using the standard AdaBoost algorithm for boosting classifiers. As in previous work [9], to keep the focus on the main ideas we will use an abstract model in which the booster successively passes distributions D1, D2, ... to a weak ranker, which returns ranking functions h1, h2, .... When the original distribution D is uniform over a training set, as in the usual analysis of AdaBoost, this is easy to do. In this model we prove the following:

Theorem 2 There is an algorithm AUCBoost that, given access to a weak ranker with advantage γ as an oracle, for any nontrivial distribution D and any ε > 0, outputs a ranking function with AUC at least 1 − ε. The AUCBoost algorithm makes T = O(log(1/ε)/γ²) many calls to the weak ranker. If D has finite support of size m, AUCBoost takes O(mT log m) time.

As can be seen from the observation that it does not depend on the relative frequency of positive and negative examples, the AUC requires a learner to perform well on both positive and negative examples. When such a requirement is imposed on a base classifier, it has been called two-sided weak learning. The key to boosting the AUC is the observation (Lemma 4 below) that a weak ranker can be used to generate a two-sided weak learner.

Definition 3 A γ two-sided weak learner is an algorithm that, given a nontrivial distribution D, outputs a hypothesis h that satisfies both Pr_{x∼D+}[h(x) = 1] ≥ 1/2 + γ and Pr_{x∼D−}[h(x) = −1] ≥ 1/2 + γ. We say that such an h has two-sided advantage γ with respect to D.

Lemma 4 Let A be a weak ranking algorithm with advantage γ. Then there is a γ/4 two-sided weak learner A′ based on A that always returns classifiers with equal error rates on positive and negative examples.

Proof: Algorithm A′ first runs A to get a real-valued ranking function h : X → R. Consider the ROC curve corresponding to h. Since the AUC is at least 1/2 + γ, there must be some point (u, v) on the curve such that v ≥ u + γ. Recall that, by the definition of the ROC curve, this means that there is a threshold θ such that D+[h(x) ≥ θ] ≥ D−[h(x) ≥ θ] + γ. Thus, for the classifier obtained by thresholding h at θ, the class-conditional error rates p+ := D+[h(x) < θ] and p− := D−[h(x) ≥ θ] satisfy p+ + p− ≤ 1 − γ. This in turn means that either p+ ≤ 1/2 − γ/2 or p− ≤ 1/2 − γ/2. Suppose that p− ≤ p+, so that p− ≤ 1/2 − γ/2 (the other case can be handled symmetrically).
Consider the randomized classifier g that behaves as follows: given input x, (a) if h(x) < θ, it flips a biased coin, and with probability τ ≥ 0 predicts 1 and with probability 1 − τ predicts −1, and (b) if h(x) ≥ θ, it predicts 1. Let g(x, r) be the output of g on input x with randomization r, and let ε− := Pr_{x∼D−, r}[g(x, r) = 1] and ε+ := Pr_{x∼D+, r}[g(x, r) = −1]. We have ε+ = (1 − τ)p+ and ε− = p− + τ(1 − p−). Let us choose τ so that ε− = ε+; that is, we choose τ = (p+ − p−)/(1 + p+ − p−). This yields

ε− = ε+ = p+ / (1 + p+ − p−).   (1)

For any fixed value of p− the RHS of (1) increases with p+. Recalling that we have p+ + p− ≤ 1 − γ, the maximum of (1) is achieved at p+ = 1 − γ − p−, in which case we have (defining ε := ε− = ε+)

ε = (1 − γ − p−) / (1 + (1 − γ − p−) − p−) = (1 − γ − p−) / (2 − γ − 2p−).

The RHS of this expression is nonincreasing in p−, and is therefore maximized at p− = 0, where it takes the value (1 − γ)/(2 − γ) = 1/2 − γ/(2(2 − γ)) ≤ 1/2 − γ/4. This completes the proof.

Figure 1 gives an illustration of the proof of the previous lemma; since the y-coordinate of (a) is at least γ more than its x-coordinate, and (b) lies closer to (a) than to (1, 1), the y-coordinate of (b) is at least γ/2 more than its x-coordinate, which means that the advantage is at least γ/4.

Figure 1: [ROC plot; the axes are false positive rate (x) and true positive rate (y), each running from 0 to 1.] The curved line represents the ROC curve for the ranking function h. The lower black dot (a) corresponds to the value θ and is located at (p−, 1 − p+). The straight line connecting (0, 0) and (1, 1), which corresponds to a completely random ranking, is given for reference. The dashed line (covered by the solid line for 0 ≤ x ≤ .16) represents the ROC curve for a ranker h′ which agrees with h on those x for which h(x) ≥ θ but randomly ranks those x for which h(x) < θ. The upper black dot (b) is at the point of intersection between the ROC curve for h′ and the line y = 1 − x; its coordinates are (ε, 1 − ε). The randomized classifier g is equivalent to thresholding h′ with a value corresponding to this point.

We will also need the following simple lemma, which shows that a classifier that is good on both the positive and the negative examples, when viewed as a ranking function, achieves a good AUC.

Lemma 5 Let h : X → {−1, 1} and suppose that Pr_{x∼D+}[h(x) = 1] = 1 − ε+ and Pr_{x∼D−}[h(x) = −1] = 1 − ε−. Then we have AUC(h; D) = 1 − (ε+ + ε−)/2.

Proof: We have

AUC(h; D) = (1 − ε+)(1 − ε−) + [ε+(1 − ε−) + ε−(1 − ε+)]/2 = 1 − (ε+ + ε−)/2.

Proof of Theorem 2: AUCBoost works by running AdaBoost on (1/2)D+ + (1/2)D−. In round t, AdaBoost passes its reweighted distribution Dt to the weak ranker, and AUCBoost then uses the process of Lemma 4 to convert the resulting weak ranking function to a classifier ht with two-sided advantage γ/4. Since ht has two-sided advantage γ/4, no matter how Dt decomposes into a mixture of Dt+ and Dt−, it must be the case that Pr_{(x,y)∼Dt}[ht(x) ≠ y] ≤ 1/2 − γ/4. The analysis of AdaBoost (see [5]) shows that T = O(log(1/ε)/γ²) rounds are sufficient for the combined classifier H to have error rate at most ε under (1/2)D+ + (1/2)D−. Lemma 5 now gives that H is a ranking function with AUC at least 1 − ε. For the final assertion of the theorem, note that at each round, in order to find the value of θ that defines ht, the algorithm needs to minimize the sum of the error rates on the positive and negative examples. This can be done by sorting the examples using the weak ranking function (in O(m log m) time steps) and processing the examples in the resulting order, keeping running counts of the number of errors of each type.
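The conversion in the proof of Lemma 4, together with the sort-and-scan implementation note in the proof of Theorem 2, can be sketched as follows (Python; this is our illustration, not the authors' code, and the helper names are ours). The sketch searches for the threshold θ minimizing p+ + p− and computes the coin bias τ that equalizes the two class-conditional error rates.

```python
import numpy as np

def two_sided_from_ranker(h, pos_examples, neg_examples, rng=None):
    """Turn a ranking function h into a (randomized) classifier with equal
    empirical error rates on the two classes, following Lemma 4.
    Returns a predictor mapping x to +1 / -1."""
    rng = rng or np.random.default_rng()
    s_pos = np.array([h(x) for x in pos_examples], dtype=float)
    s_neg = np.array([h(x) for x in neg_examples], dtype=float)

    # Find theta minimizing p_plus + p_minus, where p_plus = D^+[h(x) < theta]
    # and p_minus = D^-[h(x) >= theta].  (For clarity this recomputes both
    # rates at every candidate threshold; the running-count scan described in
    # the text brings the cost down to O(m log m).)
    best = (np.inf, None, None, None)
    for theta in np.r_[np.unique(np.r_[s_pos, s_neg]), np.inf]:
        p_plus = np.mean(s_pos < theta)
        p_minus = np.mean(s_neg >= theta)
        if p_plus + p_minus < best[0]:
            best = (p_plus + p_minus, theta, p_plus, p_minus)
    _, theta, p_plus, p_minus = best

    # Coin bias from the proof: tau = (p+ - p-) / (1 + p+ - p-).
    # (If p_plus < p_minus, the symmetric construction, which randomizes on
    # the other side of theta, would be used; it is omitted in this sketch.)
    tau = max(p_plus - p_minus, 0.0) / (1.0 + p_plus - p_minus)

    def predict(x):
        if h(x) >= theta:
            return 1
        return 1 if rng.random() < tau else -1

    return predict
```

In AUCBoost, a conversion of this kind is applied to the weak ranker's output at each round before the usual AdaBoost reweighting.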
4 Boosting weak rankers in the presence of misclassification noise

The noise model: independent misclassification noise. The model of independent misclassification noise has been widely studied in computational learning theory. In this framework there is a noise rate η < 1/2, and each example (positive or negative) drawn from distribution D has its true label c(x) independently flipped with probability η before it is given to the learner. We write Dη to denote the resulting distribution over (noise-corrupted) labeled examples (x, y).

Boosting weak rankers in the presence of independent misclassification noise. We now show how the AUC can be boosted arbitrarily close to 1 even if the data given to the booster is corrupted with independent misclassification noise, using weak rankers that are able to tolerate independent misclassification noise. We note that this is in contrast with known results for boosting the accuracy of binary classifiers in the presence of noise; Kalai and Servedio [7] show that no "black-box" boosting algorithm can be guaranteed to boost the accuracy of an arbitrary noise-tolerant weak learner to accuracy 1 − η in the presence of independent misclassification noise at rate η.

Figure 2: The branching program produced by the boosting algorithm. Each node v_{i,t} is labeled with a weak classifier h_{i,t}; left edges correspond to −1 and right edges to 1. [The figure shows the layered nodes v_{0,1}; v_{0,2}, v_{1,2}; v_{0,3}, v_{1,3}, v_{2,3}; ...; v_{0,T+1}, ..., v_{T,T+1}, with the terminal nodes marked "output −1" or "output 1".]

As in the previous section, we begin by abstracting away sampling issues and using a model in which the booster passes a distribution to a weak ranker. Sampling issues will be treated in Section 5.

Definition 6 A noise-tolerant weak ranker with advantage γ is an algorithm with the following property: for any noise rate η < 1/2, given a noisy distribution Dη, the algorithm outputs a ranking function h : X → R such that AUC(h; D) ≥ 1/2 + γ.

Our algorithm for boosting the AUC in the presence of noise uses the Basic MartiBoost algorithm (see Section 4 of [9]). This algorithm boosts any two-sided weak learner to arbitrarily high accuracy and works in a series of rounds. Before round t the space of labeled examples is partitioned into a series of bins B_{0,t}, ..., B_{t−1,t}. (The original bin B_{0,1} consists of the entire space.) In the t-th round the algorithm first constructs distributions D_{0,t}, ..., D_{t−1,t} by conditioning the original distribution D on membership in B_{0,t}, ..., B_{t−1,t} respectively. It then calls a two-sided weak learner t times using each of D_{0,t}, ..., D_{t−1,t}, getting weak classifiers h_{0,t}, ..., h_{t−1,t} respectively. Having done this, it creates t + 1 bins for the next round by assigning each element (x, y) of B_{i,t} to B_{i,t+1} if h_{i,t}(x) = −1 and to B_{i+1,t+1} otherwise. Training proceeds in this way for a given number T of rounds, which is an input parameter of the algorithm.

The output of Basic MartiBoost is a layered branching program defined as follows. There is a node v_{i,t} for each round 1 ≤ t ≤ T + 1 and each index 0 ≤ i < t (that is, for each bin constructed during training).
An item x is routed through the branching program the same way a labeled example (x, y) would have been routed during the training phase: it starts at node v_{0,1}, and from each node v_{i,t} it goes to v_{i,t+1} if h_{i,t}(x) = −1, and to v_{i+1,t+1} otherwise. When the item x arrives at a terminal node of the branching program in layer T + 1, it is at some node v_{j,T+1}. The prediction is 1 if j ≥ T/2 and is −1 if j < T/2; in other words, the prediction is according to the majority vote of the weak classifiers that were encountered along the path through the branching program that the example followed. See Figure 2.

The following lemma is proved in [9]. (The crux of the proof is the observation that positive (respectively, negative) examples are routed through the branching program according to a random walk that is biased to the right (respectively, left); hence the name "martingale boosting.")

Lemma 7 ([9]) Suppose that Basic MartiBoost is provided with a hypothesis h_{i,t} with two-sided advantage γ w.r.t. D_{i,t} at each node v_{i,t}. Then for T = O(log(1/ε)/γ²), Basic MartiBoost constructs a branching program H such that D+[H(x) = −1] ≤ ε and D−[H(x) = 1] ≤ ε.

We now describe our noise-tolerant AUC boosting algorithm, which we call Basic MartiRank. Given access to a noise-tolerant weak ranker A with advantage γ, at each node v_{i,t} the Basic MartiRank algorithm runs A and proceeds as described in Lemma 4 to obtain a weak classifier h_{i,t}. Basic MartiRank runs Basic MartiBoost with T = O(log(1/ε)/γ²) and simply uses the resulting classifier H as its ranking function. The following theorem shows that Basic MartiRank is an effective AUC booster in the presence of independent misclassification noise:

Theorem 8 Fix any η < 1/2 and any ε > 0. Given access to Dη and a noise-tolerant weak ranker A with advantage γ, Basic MartiRank outputs a branching program H such that AUC(H; D) ≥ 1 − ε.

Proof: Fix any node v_{i,t} in the branching program. The crux of the proof is the following simple observation: for a labeled example (x, y), the route through the branching program that is taken by (x, y) is determined completely by the predictions of the base classifiers, i.e. only by x, and is unaffected by the value of y. Consequently, if D_{i,t} denotes the original noiseless distribution D conditioned on reaching v_{i,t}, then the noisy distribution conditioned on reaching v_{i,t}, i.e. (Dη)_{i,t}, is simply D_{i,t} corrupted with independent misclassification noise, i.e. (D_{i,t})η. So each time the noise-tolerant weak ranker A is invoked at a node v_{i,t}, it is indeed the case that the distribution it is given is an independent misclassification noise distribution. Consequently A does construct weak rankers with AUC at least 1/2 + γ, and the conversion of Lemma 4 yields weak classifiers that have advantage γ/4 with respect to the underlying distribution D_{i,t}. Given this, Lemma 7 implies that the final classifier H has error at most ε on both positive and negative examples drawn from the original distribution D, and Lemma 5 then implies that H, viewed as a ranker, achieves AUC at least 1 − ε.

In [9], a more complex variant of Basic MartiBoost, called Noise-Tolerant SMartiBoost, is presented and is shown to boost any noise-tolerant weak learning algorithm to any accuracy less than 1 − η in the presence of independent misclassification noise at rate η. In contrast, here we are using just the Basic MartiBoost algorithm itself, and can achieve any AUC value 1 − ε, even for ε < η.
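To make the branching-program construction concrete, here is a minimal Python sketch (ours, not the paper's code; names are illustrative) of how Basic MartiBoost bins the training examples round by round and how the resulting program routes an item and takes the majority vote. For Basic MartiRank, `weak_learner` would be the noise-tolerant weak ranker followed by the conversion of Lemma 4.

```python
def train_branching_program(examples, T, weak_learner):
    """examples: list of (x, y) pairs; weak_learner(bin_examples) returns a
    classifier h with h(x) in {-1, +1}.  Returns `program`, where program[t][i]
    is the weak classifier at node v_{i,t+1} (rounds 0-indexed here)."""
    bins = {0: list(examples)}            # round 1: everything is in bin B_{0,1}
    program = []
    for t in range(T):
        h_t = {}
        next_bins = {i: [] for i in range(t + 2)}
        for i, bin_examples in bins.items():
            if not bin_examples:          # empty bin: nothing to train here
                continue
            # D_{i,t} is D conditioned on reaching v_{i,t}; empirically, the
            # examples that landed in bin B_{i,t}.
            h_t[i] = weak_learner(bin_examples)
            for (x, y) in bin_examples:   # left edge on -1, right edge on +1
                j = i if h_t[i](x) == -1 else i + 1
                next_bins[j].append((x, y))
        program.append(h_t)
        bins = next_bins
    return program

def route_and_predict(program, x):
    """Route x through the branching program and take the majority vote:
    output 1 if it ends at node v_{j,T+1} with j >= T/2, else -1."""
    i, T = 0, len(program)
    for t in range(T):
        if i in program[t]:
            i = i if program[t][i](x) == -1 else i + 1
    return 1 if i >= T / 2 else -1
```

Lemma 7 corresponds to the guarantee that, when every `weak_learner` call returns a classifier with two-sided advantage γ, T = O(log(1/ε)/γ²) rounds suffice for both class-conditional error rates of this predictor to be at most ε.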
5 Implementing MartiRank with a distribution oracle

In this section we analyze learning from random examples. Formally, we assume that the weak ranker is given access to an oracle for the noisy distribution Dη. We thus now view a noise-tolerant weak ranker with advantage γ as an algorithm A with the following property: for any noise rate η < 1/2, given access to an oracle for Dη, the algorithm outputs a ranking function h : X → R such that AUC(h; D) ≥ 1/2 + γ. We let mA denote the number of examples from each class that suffice for A to construct a ranking function as described above. In other words, if A is provided with a sample of draws from Dη such that each class, positive and negative, has at least mA points in the sample with that true label, then algorithm A outputs a γ-advantage weak ranking function. (Note that for simplicity we are assuming here that the weak ranker always constructs a weak ranking function with the desired advantage, i.e. we gloss over the usual confidence parameter δ; this can be handled with an entirely standard analysis.)

In order to achieve a computationally efficient algorithm in this setting we must change the MartiRank algorithm somewhat; we call the new variant Sampling MartiRank, or SMartiRank. We prove that SMartiRank is computationally efficient, has moderate sample complexity, and efficiently generates a high-accuracy final ranking function with respect to the underlying distribution D.

Our approach follows the same general lines as [9], where an oracle implementation is presented for the MartiBoost algorithm. The main challenge in [9] is the following: for each node v_{i,t} in the branching program, the boosting algorithm considered there must simulate a balanced version of the induced distribution D_{i,t}, which puts equal weight on positive and negative examples. If only a tiny fraction of examples drawn from D are (say) positive and reach v_{i,t}, then it is very inefficient to simulate this balanced distribution (and in a noisy scenario, as discussed earlier, if the noise rate is high relative to the frequency of the desired class then it may in fact be impossible to simulate the balanced distribution). The solution in [9] is to "freeze" any such node and simply classify any example that reaches it as negative; the analysis argues that since only a tiny fraction of positive examples reach such nodes, this freezing only mildly degrades the accuracy of the final hypothesis.

In the ranking scenario that we now consider, we do not need to construct balanced distributions, but we do need to obtain a non-negligible number of examples from each class in order to run the weak learner at a given node. So as in [9] we still freeze some nodes, but with a twist: we now freeze nodes which have the property that for some class label (positive or negative), only a tiny fraction of examples from D with that class label reach the node. With this criterion for freezing we can prove that the final classifier constructed has high accuracy on both positive and negative examples, which is what we need to achieve good AUC.

We turn now to the details. Given a node v_{i,t} and a bit b ∈ {−1, 1}, let p^b_{i,t} denote D[x reaches v_{i,t} and c(x) = b]. The SMartiRank algorithm is like Basic MartiBoost but with the following difference: for each node v_{i,t} and each value b ∈ {−1, 1}, if

p^b_{i,t} < (ε / (T(T + 1))) · D[c(x) = b]   (2)

then the node v_{i,t} is "frozen," i.e. it is labeled with the opposite bit −b and is established as a terminal node with no outgoing edges. (If this condition holds for both values of b at a particular node v_{i,t}, then the node is frozen and either output value may be used as the label.)
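As a small illustration (our own helper, not from the paper), the freezing rule (2) is a one-line test; in the sampling-based implementation described below, the two probabilities are replaced by empirical estimates, and a more conservative factor of ε/(4T(T + 1)) is used to allow for estimation error.

```python
def should_freeze(p_reach_and_class_b, p_class_b, eps, T, conservative=False):
    """Freezing rule (2) for node v_{i,t} and label b.

    p_reach_and_class_b : p^b_{i,t} = D[x reaches v_{i,t} and c(x) = b]
    p_class_b           : D[c(x) = b]
    With conservative=True, the factor eps/(4*T*(T+1)) used with sample
    estimates replaces the exact-probability factor eps/(T*(T+1))."""
    factor = eps / ((4 if conservative else 1) * T * (T + 1))
    return p_reach_and_class_b < factor * p_class_b
```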
The following theorem establishes that if SMartiRank is given weak classifiers with two-sided advantage γ at each node that is not frozen, it will construct a hypothesis with small error rate on both positive and negative examples:

Theorem 9 Suppose that the SMartiRank algorithm as described above is provided with a hypothesis h_{i,t} that has two-sided advantage γ with respect to D_{i,t} at each node v_{i,t} that is not frozen. Then for T = O(log(1/ε)/γ²), the final branching program hypothesis H that SMartiRank constructs will have D+[H(x) = −1] ≤ ε and D−[H(x) = 1] ≤ ε.

Proof: We analyze D+[H(x) = −1]; the other case is symmetric. Given an unlabeled instance x ∈ X, we say that x freezes at node v_{i,t} if x's path through the branching program causes it to terminate at a node v_{i,t} with t < T + 1 (i.e. at a node v_{i,t} which was frozen by SMartiRank). We have

D[x freezes and c(x) = 1] = Σ_{i,t} D[x freezes at v_{i,t} and c(x) = 1] ≤ Σ_{i,t} (ε · D[c(x) = 1]) / (T(T + 1)) ≤ (ε/2) · D[c(x) = 1].

Consequently we have

D+[x freezes] = D[x freezes and c(x) = 1] / D[c(x) = 1] ≤ ε/2.   (3)

Naturally, D+[H(x) = −1] = D+[(H(x) = −1) & (x freezes)] + D+[(H(x) = −1) & (x does not freeze)]. By (3), this is at most ε/2 + D+[(H(x) = −1) & (x does not freeze)]. Arguments identical to those in the last two paragraphs of the proof of Theorem 3 in [9] show that D+[(H(x) = −1) & (x does not freeze)] ≤ ε/2, and we are done.

We now describe how SMartiRank can be run given oracle access to Dη and sketch the analysis of the required sample complexity (some details are omitted because of space limits). For simplicity of presentation we shall assume that the booster is given the value p := min{D[c(x) = −1], D[c(x) = 1]}; we note that if p is not given a priori, a standard "guess and halve" technique can be used to efficiently obtain a value that is within a multiplicative factor of two of p, which is easily seen to suffice. We also make the standard assumption (see [7, 9]) that the noise rate η is known; this assumption can similarly be removed by having the algorithm "guess and check" the value of η to sufficiently fine granularity. Also, the confidence δ can be analyzed using the standard appeal to the union bound; details are omitted.

SMartiRank will replace (2) with a comparison of sample estimates of the two quantities. To allow for the fact that they are just estimates, it will be more conservative, and freeze when the estimate of p^b_{i,t} is at most ε/(4T(T + 1)) times the estimate of D[c(x) = b].

We first observe that for any distribution D and any bit b, we have Pr_{(x,y)∼Dη}[y = b] = η + (1 − 2η) · Pr_{(x,c(x))∼D}[c(x) = b], which is equivalent to D[c(x) = b] = (Dη[y = b] − η) / (1 − 2η). Consequently, given an empirical estimate of Dη[y = b] that is accurate to within an additive ±p(1 − 2η)/10 (which can easily be obtained from O(1/(p²(1 − 2η)²)) draws from Dη), it is possible to estimate D[c(x) = b] to within an additive ±p/10, and thus to estimate the RHS of (2) to within an additive ±εp/(10T(T + 1)).

Now, in order to determine whether node v_{i,t} should be frozen, we must compare this estimate with a similarly accurate estimate of p^b_{i,t} (arguments similar to those of, e.g., Section 6.3 of [9] can be used to show that it suffices to run the algorithm using these estimated values). We have

p^b_{i,t} = D[x reaches v_{i,t}] · D[c(x) = b | x reaches v_{i,t}] = Dη[x reaches v_{i,t}] · D_{i,t}[c(x) = b] = Dη[x reaches v_{i,t}] · ((Dη)_{i,t}[y = b] − η) / (1 − 2η),

where we have used the fact (noted in the proof of Theorem 8) that the route taken by x does not depend on its label, so D[x reaches v_{i,t}] = Dη[x reaches v_{i,t}].
A standard analysis (see e.g. Chapter 5 of [8]) shows that this quantity can be estimated to additive accuracy ±τ using poly(1/τ, 1/(1 − 2η)) many calls to Dη. (Briefly, if Dη[x reaches v_{i,t}] is less than τ(1 − 2η), then an estimate of 0 is good enough, while if it is greater than τ(1 − 2η), then a τ-accurate estimate of the second multiplicand can be obtained using O(1/(τ³(1 − 2η)³)) draws from Dη, since at least a τ(1 − 2η) fraction of draws will reach v_{i,t}.) Thus for each v_{i,t}, we can determine whether to freeze it in the execution of SMartiRank using poly(T, 1/ε, 1/p, 1/(1 − 2η)) draws from Dη.

For each of the nodes that are not frozen, we must run the noise-tolerant weak ranker A using the distribution D_{i,t}. As discussed at the beginning of this section, this requires that we obtain a sample from D_{i,t} containing at least mA examples whose true label belongs to each class. The expected number of draws from Dη that must be made in order to receive an example from a given class is 1/p, and since v_{i,t} is not frozen, the expected number of draws from Dη belonging to a given class that must be made in order to simulate a draw from D_{i,t} belonging to that class is O(T²/ε). Thus, O(T² · mA/(εp)) many draws from Dη are required in order to run the weak learner A at any particular node. Since there are O(T²) many nodes overall, we have that all in all O(T⁴ · mA/(εp)) many draws from Dη are required, in addition to the poly(T, 1/ε, 1/p, 1/(1 − 2η)) draws required to identify which nodes to freeze. Recalling that T = O(log(1/ε)/γ²), all in all we have:

Theorem 10 Let D be a nontrivial distribution over X, let p = min{D[c(x) = −1], D[c(x) = 1]}, and let η < 1/2. Given access to an oracle for Dη and a noise-tolerant weak ranker A with advantage γ, the SMartiRank algorithm makes mA · poly(1/ε, 1/γ, 1/(1 − 2η), 1/p) calls to Dη, and with probability 1 − δ outputs a branching program H such that AUC(H; D) ≥ 1 − ε.

Acknowledgement

We are very grateful to Naoki Abe for suggesting the problem of boosting the AUC.

References

[1] A. P. Bradley. Use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recognition, 30:1145–1159, 1997.
[2] C. Cortes and M. Mohri. AUC optimization vs. error rate minimization. In NIPS 2003, 2003.
[3] T. Fawcett. ROC graphs: Notes and practical considerations for researchers. Technical Report HPL-2003-4, HP, 2003.
[4] Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4(6):933–970, 2004.
[5] Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997.
[6] J. Hanley and B. McNeil. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology, 143(1):29–36, 1982.
[7] A. Kalai and R. Servedio. Boosting in the presence of noise. Journal of Computer and System Sciences, 71(3):266–290, 2005. Preliminary version in Proc. STOC'03.
[8] M. Kearns and U. Vazirani. An Introduction to Computational Learning Theory. MIT Press, Cambridge, MA, 1994.
[9] P. Long and R. Servedio. Martingale boosting. In Proceedings of the Eighteenth Annual Conference on Computational Learning Theory (COLT), pages 79–94, 2005.
[10] F. Provost, T. Fawcett, and R. Kohavi. The case against accuracy estimation for comparing induction algorithms. In ICML, 1998.
[11] C. Rudin, C. Cortes, M. Mohri, and R. E. Schapire. Margin-based ranking meets boosting in the middle. In COLT, 2005.
[12] J. A. Swets. Signal Detection Theory and ROC Analysis in Psychology and Diagnostics: Collected Papers. Lawrence Erlbaum Associates, 1995.