Random Classification Noise Defeats All Convex Potential Boosters

Philip M. Long, Google, 1600 Amphitheatre Parkway, Mountain View, CA 94043 (plong@google.com)
Rocco A. Servedio, Computer Science Department, Columbia University, New York, NY 10027 (rocco@cs.columbia.edu)

Appearing in Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, 2008. Copyright 2008 by the author(s)/owner(s).

Abstract

A broad class of boosting algorithms can be interpreted as performing coordinate-wise gradient descent to minimize some potential function of the margins of a data set. This class includes AdaBoost, LogitBoost, and other widely used and well-studied boosters. In this paper we show that for a broad class of convex potential functions, any such boosting algorithm is highly susceptible to random classification noise. We do this by showing that for any such booster and any nonzero random classification noise rate η, there is a simple data set of examples which is efficiently learnable by such a booster if there is no noise, but which cannot be learned to accuracy better than 1/2 if there is random classification noise at rate η. This negative result is in contrast with known branching program based boosters which do not fall into the convex potential function framework and which can provably learn to high accuracy in the presence of random classification noise.

1. Introduction

1.1. Background

Much work has been done on viewing boosting algorithms as greedy iterative algorithms that perform a coordinate-wise gradient descent to minimize a potential function of the margins of the examples, see e.g. [3, 12, 19, 7, 18, 2]. In this framework every potential function φ defines an algorithm that may possibly be a boosting algorithm; we denote the algorithm corresponding to φ by B_φ. For example, AdaBoost [11] and its confidence-rated generalization [20] may be viewed as the algorithm B_φ corresponding to the potential function φ(z) = e^{-z}. The MadaBoost algorithm of Domingo and Watanabe [5] may be viewed as the algorithm B_φ corresponding to

    φ(z) = 1 − z    if z ≤ 0,
    φ(z) = e^{-z}   if z > 0.        (1)

(We give a more detailed description of exactly what the algorithm B_φ is for a given potential function φ in Section 2.2.)
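For concreteness, the following small Python sketch (illustrative only; the function names are ours, not the paper's) writes out these potential functions, together with the logistic potential used by LogitBoost that is discussed in Sections 1.2 and 5. Each is convex, nonincreasing, differentiable, has negative derivative at 0, and tends to 0 as its argument grows.

```python
import numpy as np

def phi_adaboost(z):
    """AdaBoost and its confidence-rated generalization: phi(z) = exp(-z)."""
    return np.exp(-z)

def phi_madaboost(z):
    """MadaBoost, equation (1): phi(z) = 1 - z for z <= 0 and exp(-z) for z > 0."""
    z = np.asarray(z, dtype=float)
    return np.where(z <= 0, 1.0 - z, np.exp(-z))

def phi_logitboost(z):
    """Logistic potential used by LogitBoost (see Section 5): ln(1 + exp(-z))."""
    return np.log1p(np.exp(-z))
```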
1.2. Motivation: noise-tolerant boosters?

It has been widely observed that AdaBoost can suffer poor performance when run on noisy data, see e.g. [10, 17, 4]. The most commonly given explanation for this is that the exponential reweighting of examples which it performs (a consequence of the exponential potential function) can cause the algorithm to invest too much "effort" on correctly classifying noisy examples. Boosting algorithms such as MadaBoost [5] and LogitBoost [12] based on a range of other potential functions have subsequently been provided, sometimes with an explicitly stated motivation of rectifying AdaBoost's poor noise tolerance. However, we are not aware of rigorous results establishing provable noise tolerance for any boosting algorithms that fit into the potential function framework, even for mild forms of noise such as random classification noise (henceforth abbreviated RCN) at low noise rates. This motivates the following question: are AdaBoost's difficulties in dealing with noise due solely to its exponential weighting scheme, or are these difficulties inherent in the potential function approach to boosting?

1.3. Our results: convex potential boosters cannot withstand random classification noise

This paper shows that the potential function boosting approach provably cannot yield learning algorithms that tolerate even low levels of random classification noise when convex potential functions are used. More precisely, we exhibit a fixed natural set of base classifiers h_1, ..., h_n and show that for every convex function φ satisfying some very mild conditions and every noise rate η > 0, there is a multiset S of labeled examples such that the following holds:

· There is a linear separator sgn(α_1 h_1 + ··· + α_n h_n) over the base classifiers h_1, ..., h_n that correctly labels every example in S with margin γ > 0 (and hence it is easy for a boosting algorithm trained on S to efficiently construct a final hypothesis that correctly classifies all examples in S). However,

· When the algorithm B_φ is run on the distribution D_{η,S}, it constructs a classifier that has error rate 1/2 on the examples in S. Here D_{η,S} is the uniform distribution over S but where examples are corrupted with random classification noise at rate η, i.e. labels are independently flipped with probability η.

This result shows that random classification noise can cause convex potential function boosters to fail in a rather strong sense. We note that, as discussed in Section 7, there do exist known boosting algorithms [13, 16] that can tolerate random classification noise, and in particular can efficiently achieve perfect accuracy on S, after at most poly(1/γ) stages of boosting, when run on D_{η,S} in the scenario described above.

Recently Bartlett and Traskin proved that the AdaBoost algorithm is consistent if it is stopped after a suitable number of iterations, given certain conditions on a random source generating the data [1]. Our analysis does not contradict theirs because the source in our construction does not satisfy Condition 1 of their paper. To see why this is the case it is useful, as has become customary, to think of the contribution that a given example makes to the potential as a "loss" paid by the learning algorithm. Informally, Condition 1 from [1] requires linear combinations of base classifier predictions to have total loss arbitrarily close to the best possible loss for any measurable function. Our analysis takes advantage of the fact that, for linear combinations of base classifiers with a convex loss function, large-margin errors are especially egregious: we present the learner with a choice between a lot of cheap errors and relatively few expensive errors. If optimization were to be performed over all measurable functions, roughly speaking, it would be possible to make all errors cheap.

Though the analysis required to establish our main result is somewhat delicate, the actual construction is quite simple and admits an intuitive explanation (see Section 4.2). For every convex potential function we use the same set of only n = 2 base classifiers (these are confidence-rated base classifiers which output real values in the range [-1, 1]), and the multiset S contains only three distinct labeled examples; one of these occurs twice in S, for a total multiset size of four. We expect that many other constructions which similarly show the brittleness of convex potential boosters to random classification noise can be given. We describe experiments with one such construction that uses Boolean-valued weak classifiers rather than confidence-rated ones in Section 6.
2. Background and Notation

Throughout the paper X will denote the instance space. H = {h_1, ..., h_n} will denote a fixed finite collection of base classifiers over X, where each base classifier is a function h_i : X → [-1, 1]; i.e. we shall work with confidence-rated base classifiers. S = ⟨(x^1, y^1), ..., (x^m, y^m)⟩ ∈ (X × {-1, 1})^m will denote a multiset of m examples with binary labels.

2.1. Convex potential functions

We adopt the following natural definition which, as we discuss in Section 5, captures a broad range of different potential functions that have been studied.

Definition 1 We say that φ : R → R is a convex potential function if φ satisfies the following properties:

1. φ is convex and nonincreasing and C^1 (i.e. φ is differentiable and φ′ is continuous);

2. φ′(0) < 0 and lim_{x→+∞} φ(x) = 0.

2.2. Convex potential boosters

Let φ be a convex potential function, H = {h_1, ..., h_n} a fixed set of base classifiers, and S = ⟨(x^1, y^1), ..., (x^m, y^m)⟩ a multiset of labeled examples. Similarly to Duffy and Helmbold [6, 7], we consider an iterative algorithm which we denote B_φ. The algorithm performs a coordinate-wise gradient descent through the space of all possible coefficient vectors for the weak hypotheses, in an attempt to minimize the convex potential function of the margins of the examples. We now give a more precise description of how B_φ works when run with H on S.

Algorithm B_φ maintains a vector (α_1, ..., α_n) of voting weights for the base classifiers h_1, ..., h_n. The weights are initialized to 0. In a given round T, the algorithm chooses an index i_T of a base classifier, and modifies the value of α_{i_T}. If α_{i_T} had previously been zero, this can be thought of as adding base classifier number i_T to a pool of voters, and choosing a voting weight.

Let F(x; α_1, ..., α_n) = Σ_{i=1}^n α_i h_i(x) be the master hypothesis that the algorithm has constructed prior to stage T (so at stage T = 1 the hypothesis F is identically zero). We write P_{φ,S} to denote the "global" potential function over S,

    P_{φ,S}(α_1, ..., α_n) = Σ_{i=1}^m φ(y^i F(x^i; α_1, ..., α_n)),        (2)

which represents the overall potential of a hypothesis vector (α_1, ..., α_n) on the sample S. It is easy to check that this is a convex function from R^n (the space of all possible (α_1, ..., α_n) coefficient vectors for F) to R.

In stage T the algorithm B_φ first chooses a base classifier by choosing i_T to be the index i ∈ [n] which maximizes −∂P_{φ,S}(α_1, ..., α_n)/∂α_i, and then choosing a new value of α_{i_T} in order to minimize P_{φ,S}(α_1, ..., α_n) for the resulting α_1, ..., α_n. Thus, in the terminology of [6] we consider "un-normalized" algorithms which preserve the original weighting factors α_1, α_2, etc. The AdaBoost algorithm is an example of an algorithm that falls into this framework, as are the other algorithms we discuss in Section 5. Note that the fact that B_φ can determine the exactly optimal weak classifier to add in each round errs on the side of pessimism in our analysis.

In our analysis, we will consider the case in which B_φ is run on a distribution D_{η,S} obtained by starting with a finite multiset of examples and adding independent misclassification noise. One can naturally extend the definition of B_φ to apply to probability distributions over X × {-1, 1} by extending the definition of potential in (2) as follows:

    P_{φ,D}(α_1, ..., α_n) = E_{(x,y)∼D}[φ(y F(x; α_1, ..., α_n))].        (3)

For rational values of η, running B_φ on (3) for D = D_{η,S} is equivalent to running B_φ over a finite multiset in which each element of S occurs a number of times proportional to its weight under D.
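To make this description concrete, here is a minimal Python sketch of the booster B_φ run on a finite weighted sample. It is our own sketch, not code from the paper: the helper names, the tolerance, and the use of scipy.optimize.minimize_scalar for the one-dimensional step are our choices; the coordinate-selection rule follows the "most negative partial derivative" criterion described above.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def potential(alpha, Hx, y, w, phi):
    """P_phi(alpha) = sum_j w_j * phi(y_j * F(x_j)), where F(x) = sum_i alpha_i h_i(x).
    Hx[j, i] holds h_i(x_j); the weights w_j are uniform for a plain multiset."""
    margins = y * (Hx @ alpha)
    return np.dot(w, phi(margins))

def convex_potential_booster(Hx, y, w, phi, dphi, rounds=100):
    """Coordinate-wise gradient descent on the potential, as described above.
    dphi is the derivative phi'.  On noisy data each 1-D minimization is well
    posed; on separable noiseless data the infimum may lie at infinity."""
    n = Hx.shape[1]
    alpha = np.zeros(n)
    for _ in range(rounds):
        margins = y * (Hx @ alpha)
        grad = Hx.T @ (w * dphi(margins) * y)   # grad[i] = dP/dalpha_i
        i = int(np.argmin(grad))                # coordinate with the steepest descent
        if grad[i] > -1e-12:
            break                               # no descending coordinate: (numerically) optimal
        def along(t, i=i):
            a = alpha.copy()
            a[i] = t
            return potential(a, Hx, y, w, phi)  # potential as a function of alpha_i alone
        alpha[i] = minimize_scalar(along).x     # convex 1-D minimization over alpha_i
    return alpha
```

Run with a weighted representation of D_{η,S} (a sketch of which appears after Section 2.4), the same loop also serves as the distribution version (3).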
2.3. Boosting

Fix a classifier c : X → {-1, 1} and a multiset S = ⟨(x^1, y^1), ..., (x^m, y^m)⟩ of labeled examples. We say that a set of base classifiers H = {h_1, ..., h_n} is boostable with respect to c and S if there is a vector α ∈ R^n such that for all i = 1, ..., m, we have sgn[α_1 h_1(x^i) + ··· + α_n h_n(x^i)] = y^i. If γ > 0 is such that

    y^i · (α_1 h_1(x^i) + ··· + α_n h_n(x^i)) ≥ γ · √(α_1² + ··· + α_n²)

for all i, we say that H is boostable w.r.t. c and S with margin γ.

It is well known that if H is boostable w.r.t. c and S with margin γ, then a range of different boosting algorithms (such as AdaBoost) can be run on the noise-free data set S to efficiently construct a final classifier that correctly labels every example in S. As one concrete example, after O((log m)/γ²) stages of boosting AdaBoost will construct a linear combination F(x) = Σ_{i=1}^n α_i h_i(x) of the base classifiers such that sgn(F(x^i)) = y^i for all i = 1, ..., m; see [11, 20] for details.

2.4. Random classification noise and noise-tolerant boosting

Random classification noise is a simple, natural, and well-studied model of how benign (nonadversarial) noise can affect data. Given a multiset S of labeled examples and a value 0 < η < 1/2, we write D_{η,S} to denote the distribution corresponding to S corrupted with random classification noise at rate η. A draw from D_{η,S} is obtained by drawing (x, y) uniformly at random from S and independently flipping the binary label y with probability η.

We say that an algorithm B is a boosting algorithm which tolerates RCN at rate η if B has the following property. Let c be a target classifier, S be a multiset of m examples, and H be a set of base classifiers such that H is boostable w.r.t. c and S. Then for any ε > 0, if B is run with H as the set of base classifiers on D_{η,S}, at some stage of boosting B constructs a classifier g which has accuracy

    |{(x^i, y^i) ∈ S : g(x^i) = y^i}| / m ≥ 1 − η − ε.

The accuracy rate above is in some sense optimal, since known results [13] show that no "black-box" boosting algorithm can be guaranteed to construct a classifier g whose accuracy exceeds 1 − η in the presence of RCN at rate η. As we discuss in Section 7, there are known boosting algorithms [13, 16] which can tolerate RCN at rate η for any 0 < η < 1/2. These algorithms, which do not follow the convex potential function approach but instead build a branching program over the base classifiers, use poly(1/γ, log(1/ε)) stages to achieve accuracy 1 − η − ε in the presence of RCN at rate η if H is boostable w.r.t. c and S with margin γ.
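As an illustration of the noise model (ours, not the paper's), the sketch below represents D_{η,S} either by sampling or exactly as a weighted multiset; the exact representation matches the remark about rational η at the end of Section 2.2.

```python
import numpy as np

def draw_from_D_eta_S(S_x, S_y, eta, num_draws, rng=None):
    """Sample from D_{eta,S}: draw an example of S uniformly, then flip its label
    independently with probability eta."""
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.integers(0, len(S_y), size=num_draws)
    flips = rng.random(num_draws) < eta
    return S_x[idx], np.where(flips, -S_y[idx], S_y[idx])

def D_eta_S_as_weighted_multiset(S_x, S_y, eta):
    """Exact finite representation of D_{eta,S}: each example of S appears once with
    its true label and weight (1-eta)/|S|, and once flipped with weight eta/|S|."""
    m = len(S_y)
    X = np.concatenate([S_x, S_x])
    y = np.concatenate([S_y, -S_y])
    w = np.concatenate([np.full(m, (1 - eta) / m), np.full(m, eta / m)])
    return X, y, w
```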
3. Main Result

As was just noted, there do exist boosting algorithms (based on branching programs) that can tolerate RCN. Our main result is that no convex potential function booster can have this property:

Theorem 2 Fix any convex potential function φ. For any noise rate 0 < η < 1/2, the algorithm B_φ does not tolerate RCN at rate η.

We obtain Theorem 2 as a direct consequence of the following stronger result, which shows that there is a simple RCN learning problem for which B_φ will in fact misclassify half the examples in S.

Theorem 3 Fix the instance space X = [-1, 1]² ⊂ R² and the set H = {h_1(x) = x_1, h_2(x) = x_2} of confidence-rated base classifiers over X. For any noise rate 0 < η < 1/2 and any convex potential function φ, there is a target classifier c, a value γ > 0, and a multiset S of four labeled examples (three of which are distinct) such that (a) H is boostable w.r.t. c and S with margin γ, but (b) when B_φ is run on the distribution D_{η,S}, it constructs a classifier which misclassifies two of the four examples in S.

4. Proof of Theorem 3

We are given an RCN noise rate 0 < η < 1/2 and a convex potential function φ.

4.1. The basic idea

Before specifying the sample S we explain the high-level structure of our argument. Recall from (3) that P_{φ,D} is defined as

    P_{φ,D}(α_1, α_2) = Σ_{(x,y)} D_{η,S}(x, y) φ(y(α_1 x_1 + α_2 x_2)).        (4)

As noted in Section 2.2 the function P_{φ,D}(α_1, α_2) is convex. It follows immediately from the definition of a convex potential function that P_{φ,D}(α_1, α_2) ≥ 0 for all (α_1, α_2) ∈ R².

The high-level idea of our proof is as follows. We shall construct a multiset S of four labeled examples in [-1, 1]² (actually in the unit disc {x : ||x|| ≤ 1} ⊂ R²) such that there is a global minimum (α_1*, α_2*) of the corresponding P_{φ,D}(α_1, α_2) which has the following two properties:

1. ("high error") The corresponding classifier g(x) = sgn(α_1* x_1 + α_2* x_2) misclassifies two of the points in S (and thus has error rate 1/2); and

2. ("steep slope") At the point (0, 0), the directional derivative of P_{φ,D}(α_1, α_2) in any direction orthogonal to (α_1*, α_2*) is not as steep as the directional derivative toward (α_1*, α_2*).

We now show that it suffices to establish these two properties to prove part (b) of Theorem 3. Suppose we have such an S. Since P_{φ,D}(α_1, α_2) depends only on the inner product between (α_1, α_2) and the (normalized) example vectors (y x_1, y x_2), it follows that rotating the set S around the origin by any fixed angle induces a corresponding rotation of the function P_{φ,D}, and in particular of its minima. (Note that we have used here the fact that every example point in S lies within the unit disc; this ensures that for any rotation of S each weak hypothesis x_i will always give outputs in [-1, 1] as required.) Consequently a suitable rotation of S to S′ will result in the corresponding rotated function having a global minimum at a vector which lies on one of the two coordinate axes (say a vector of the form (0, α)). If this is the case, then the "steep slope" property (2) ensures that the directional derivative at (0, 0) in this direction will be steepest, so the convex potential booster B_φ will pick a base classifier corresponding to this direction (in this case h_2). Since a globally optimal weight vector is available in this direction (the vector of length √((α_1*)² + (α_2*)²) is such a vector), B_φ will select such a vector. Once it has achieved such a global optimum it will not change its hypothesis in any subsequent stage, and thus B_φ's hypothesis will have error rate 1/2 on the points in the rotated set S′ by the "high error" property (1).

4.2. The sample S

Now let us define the multiset S of examples. S consists of three distinct examples, one of which is repeated twice. (We shall specify the value of γ later and show that 0 < γ < 1/6.)

· S contains one copy of the example x = (1, 0) with label y = +1. (We call this the "large margin" example.)

· S contains two copies of the example x = (γ, −γ) with label y = +1. (We call these examples the "penalizers" since they are the points that B_φ will misclassify.)

· S contains one copy of the example x = (γ, 5γ) with label y = +1. (We call this example the "puller" for reasons described below.)

To prove part (a) we need to show that H is boostable w.r.t. some classifier c and S with margin γ, but as we shall see this is easy to achieve.
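As a numerical illustration of this construction (our own experiment; the choice of φ(z) = e^{-z}, the noise rate η = 0.25, the γ grid, and the use of scipy are not from the paper), the sketch below builds S, forms the noisy potential of (4) exactly, minimizes it over (α_1, α_2), and reports for which γ the minimizer misclassifies the two penalizers, i.e. has α_1* ≤ α_2*. The theorem guarantees that some γ in (0, 1/6) works for every convex potential and noise rate; the code simply scans a grid and reports what it finds.

```python
import numpy as np
from scipy.optimize import minimize

phi = lambda z: np.exp(-z)   # the AdaBoost potential, used here only as an example
eta = 0.25                   # an arbitrary noise rate in (0, 1/2)

def sample_S(gamma):
    # one "large margin" example, two "penalizers", one "puller", all labeled +1
    X = np.array([[1.0, 0.0], [gamma, -gamma], [gamma, -gamma], [gamma, 5 * gamma]])
    return X, np.ones(4)

def noisy_potential(alpha, X, y):
    # each example of S is drawn with probability 1/4; its label is kept with
    # probability 1 - eta and flipped with probability eta (this is equation (4))
    margins = y * (X @ alpha)
    return np.mean((1 - eta) * phi(margins) + eta * phi(-margins))

for gamma in np.linspace(0.02, 0.16, 15):
    X, y = sample_S(gamma)
    res = minimize(noisy_potential, x0=np.zeros(2), args=(X, y))
    a1, a2 = res.x
    print(f"gamma={gamma:.2f}  minimizer=({a1:.2f}, {a2:.2f})  penalizers misclassified: {a1 <= a2}")
```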
Thus all examples in S are positive. It is immediately clear that the classifier c(x) = sgn(x_1) correctly classifies all examples in S with some margin γ > 0, so the set H = {h_1(x) = x_1, h_2(x) = x_2} of base classifiers is boostable w.r.t. c and S with margin γ. We further note that since γ < 1/6, each example in S does indeed lie in the unit disc {x : ||x|| ≤ 1}.

Let us give some intuition for why this set S has the "high error" property. The halfspace whose normal vector is (1, 0) classifies all examples correctly, but the noisy (negative labeled) version of the "large margin" example causes a convex potential function to incur a very large cost for this hypothesis vector. Consequently a lower cost hypothesis can be obtained with a vector that points rather far away from (1, 0). The "puller" example (whose y-coordinate is 5γ) outweighs the two "penalizer" examples (whose y-coordinates are −γ), so it "pulls" the minimum cost hypothesis vector to point up into the first quadrant; in fact, it points so far up that the two "penalizer" examples are misclassified by the optimal hypothesis vector for the potential function φ.

In Section 4.3 below we make this intuition precise and show that there is a global minimum (α_1*, α_2*) of P_{φ,D} for which α_1* < α_2*. This immediately implies that the corresponding classifier g(x) = sgn(α_1* x_1 + α_2* x_2) misclassifies the two copies of (γ, −γ) in S and gives us the "high error" property (1). In Section 4.4 we show that this (α_1*, α_2*) moreover has the "steep slope" property (2).

4.3. The "high error" property: analyzing a global minimum of P_{φ,D}

Let 1 < N < ∞ be such that η = 1/(N + 1), so 1 − η = N/(N + 1). We have that

    P_{φ,D}(α_1, α_2) = Σ_{(x,y)} D_{η,S}(x, y) φ(y(α_1 x_1 + α_2 x_2))
                      = (1/4) Σ_{(x,y)∈S} [(1 − η) φ(α_1 x_1 + α_2 x_2) + η φ(−α_1 x_1 − α_2 x_2)].

It is clear that minimizing 4(N + 1)P_{φ,D} is the same as minimizing P_{φ,D}, so we shall henceforth work with 4(N + 1)P_{φ,D} since it gives rise to cleaner expressions. We have that 4(N + 1)P_{φ,D}(α_1, α_2) equals

    Σ_{(x,y)∈S} [N φ(α_1 x_1 + α_2 x_2) + φ(−α_1 x_1 − α_2 x_2)]
      = N φ(α_1) + φ(−α_1) + 2N φ(γ(α_1 − α_2)) + 2φ(γ(α_2 − α_1)) + N φ(γ(α_1 + 5α_2)) + φ(−γ(α_1 + 5α_2)).        (5)

Let L_1(α_1, α_2) and L_2(α_1, α_2) be defined as follows:

    L_1(α_1, α_2) := ∂[4(N + 1)P_{φ,D}(α_1, α_2)]/∂α_1    and    L_2(α_1, α_2) := ∂[4(N + 1)P_{φ,D}(α_1, α_2)]/∂α_2.

For B > 1 to be fixed later, let us write L_1(α) to denote L_1(α, Bα) and similarly write L_2(α) to denote L_2(α, Bα). It is easy to verify that we have

    L_1(α) = N φ′(α) − φ′(−α) + 2γN φ′(−(B − 1)γα) − 2γ φ′((B − 1)γα) + γN φ′((5B + 1)γα) − γ φ′(−(5B + 1)γα)

and

    L_2(α) = −2γN φ′(−(B − 1)γα) + 2γ φ′((B − 1)γα) + 5γN φ′((5B + 1)γα) − 5γ φ′(−(5B + 1)γα).

We introduce the following function to help in the analysis of L_1(α) and L_2(α): for α ∈ R,

    Z(α) := N φ′(α) − φ′(−α).

Let us establish some basic properties of this function. Since φ is differentiable and convex, we have that φ′ is a non-decreasing function. This is easily seen to imply that Z(·) is a non-decreasing function. We moreover have Z(0) = φ′(0)(N − 1) < 0. The definition of a convex potential function implies that as α → +∞ we have φ′(α) → 0⁻, and consequently we have

    lim_{α→+∞} Z(α) = 0 + lim_{α→+∞} −φ′(−α) ≥ −φ′(0) > 0,

where the inequality holds since φ′ is a nondecreasing function and φ′(0) < 0. Since φ′ and hence Z is continuous, we have that over the interval [0, +∞) the function Z(α) assumes every value in the range [φ′(0)(N − 1), −φ′(0)).

Next observe that we may rewrite L_1(α) and L_2(α) as

    L_1(α) = Z(α) + 2γ Z(−(B − 1)γα) + γ Z((5B + 1)γα)        (6)

and

    L_2(α) = −2γ Z(−(B − 1)γα) + 5γ Z((5B + 1)γα).        (7)
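As a quick sanity check of (5), (6), and (7) (our own numerics, not the paper's), the following snippet compares the closed forms for L_1 and L_2 with central-difference partial derivatives of 4(N + 1)P_{φ,D} along the line (α, Bα), using the exponential potential and arbitrary test values of η, γ, B.

```python
import numpy as np

phi  = lambda z: np.exp(-z)      # example potential (AdaBoost); any C^1 convex potential works
dphi = lambda z: -np.exp(-z)

eta, gamma, B = 0.25, 0.1, 1.3   # arbitrary test values (B > 1)
N = 1.0 / eta - 1.0              # so that eta = 1/(N+1)

Z = lambda a: N * dphi(a) - dphi(-a)

def scaled_potential(a1, a2):
    # 4(N+1) P_{phi,D}(a1, a2), i.e. the right-hand side of equation (5)
    return (N * phi(a1) + phi(-a1)
            + 2 * N * phi(gamma * (a1 - a2)) + 2 * phi(gamma * (a2 - a1))
            + N * phi(gamma * (a1 + 5 * a2)) + phi(-gamma * (a1 + 5 * a2)))

def L1(a):  # equation (6)
    return Z(a) + 2 * gamma * Z(-(B - 1) * gamma * a) + gamma * Z((5 * B + 1) * gamma * a)

def L2(a):  # equation (7)
    return -2 * gamma * Z(-(B - 1) * gamma * a) + 5 * gamma * Z((5 * B + 1) * gamma * a)

a, h = 0.7, 1e-6
num_L1 = (scaled_potential(a + h, B * a) - scaled_potential(a - h, B * a)) / (2 * h)
num_L2 = (scaled_potential(a, B * a + h) - scaled_potential(a, B * a - h)) / (2 * h)
print(abs(num_L1 - L1(a)), abs(num_L2 - L2(a)))  # both differences should be tiny
```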
In the rest of this section we shall show that there are values α > 0, 0 < γ < 1/6, B > 1 such that L_1(α) = L_2(α) = 0. Since P_{φ,D} is convex, this will imply that (α_1*, α_2*) = (α, Bα) is a global minimum for the dataset constructed using this γ, as required.

Let us begin with the following claim which will be useful in establishing L_2(α) = 0.

Claim 4 For any B ≥ 1 there is a finite value α(B) > 0 such that

    2Z(−(B − 1)α(B)) = 5Z((5B + 1)α(B)) < 0.        (8)

Proof: Fix any value B ≥ 1. Recalling that Z(0) = φ′(0)(N − 1) < 0, at α = 0 the quantity 2Z(−(B − 1)α) equals 2φ′(0)(N − 1) < 0, and as α increases this quantity does not increase. On the other hand, at α = 0 the quantity 5Z((5B + 1)α) equals 5φ′(0)(N − 1) < 2φ′(0)(N − 1), and as α increases this quantity increases to a limit, as α → +∞, which is at least 5(−φ′(0)). Since Z is continuous, there must be some α > 0 at which the two quantities are equal and are each at most 2φ′(0)(N − 1) < 0.

Observation 5 The function α(B) is a continuous and nonincreasing function of B for B ∈ [1, ∞).

Proof: The larger B ≥ 1 is, the faster −(B − 1)α decreases as a function of α and the faster (5B + 1)α increases as a function of α. Continuity of α(·) follows from continuity of Z(·).

We now fix the value of B to be B = 1 + γ, where the parameter γ will be fixed later. We shall only consider settings of α, γ > 0 such that γα = α(B) = α(1 + γ); i.e. given a setting of γ, we shall take α = α(1 + γ)/γ. For any such α, we have

    L_2(α) = (7) = γ [−2Z(−(B − 1)α(1 + γ)) + 5Z((5B + 1)α(1 + γ))] = 0,

where the last equality is by Claim 4.

Now let us consider (6); our goal is to show that for some γ > 0 it is also 0. For any (α, γ) pair with γα = α(1 + γ), we have by Claim 4 that

    2γ Z(−(B − 1)γα) + γ Z((5B + 1)γα) = 2γ Z(−(B − 1)α(1 + γ)) + γ Z((5B + 1)α(1 + γ)) = 6γ Z((5B + 1)α(1 + γ)),

where the second equality is by Claim 4. Plugging this into (6), we have that for α = α(1 + γ)/γ, the quantity L_1(α) equals 0 if and only if

    Z(α(1 + γ)/γ) = −6γ Z((5B + 1)α(1 + γ)) = 6γ · (−Z((6 + 5γ) · α(1 + γ))).        (9)

Let us analyze (9). We first note that Observation 5 implies that α(1 + γ) is a nonincreasing function of γ for γ ∈ [0, ∞). Consequently α(1 + γ)/γ is a decreasing function of γ, and since Z is a nondecreasing function, the LHS is a nonincreasing function of γ. Recall that at γ = 0 we have α(1 + γ) = α(1), which is some fixed finite positive value by Claim 4. So we have

    lim_{γ→0+} LHS = lim_{x→+∞} Z(x) ≥ −φ′(0).

On the other extreme, since α(·) is nonincreasing, we have

    lim_{γ→+∞} LHS ≤ lim_{γ→+∞} Z(α(1)/γ) = Z(0) = φ′(0)(N − 1) < 0.

So as γ varies through (0, ∞), the LHS decreases through all values between −φ′(0) and 0.

On the other hand, at γ = 0 the RHS of (9) is clearly 0. Moreover the RHS is always positive for γ > 0 by Claim 4. Since the RHS is continuous (by continuity of Z(·) and α(·)), this together with the previous paragraph implies that there must be some γ > 0 for which the LHS and RHS of (9) are the same positive value. So we have shown that there are values α > 0, γ > 0, B = 1 + γ such that L_1(α) = L_2(α) = 0. This concludes the proof of the "high error" property (1).

We close this section by showing that the value of γ > 0 obtained above is indeed at most 1/6 (and hence every example in S lies in the unit disc as required). To see this, note that we have shown that for this γ, we have Z((6 + 5γ) · α(1 + γ)) < 0 and Z(α(1 + γ)/γ) > 0. Since Z is a nondecreasing function this implies (6 + 5γ) · α(1 + γ) < α(1 + γ)/γ, i.e. 6 + 5γ < 1/γ, which clearly implies γ < 1/6 as desired.
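The existence argument in Claim 4 is constructive enough to check numerically. Here is a small bisection sketch (our own, with the exponential potential and arbitrary values of η and B) that finds α(B) by solving f(x) = 2Z(−(B − 1)x) − 5Z((5B + 1)x) = 0.

```python
import numpy as np

eta = 0.25
N = 1.0 / eta - 1.0
dphi = lambda z: -np.exp(-z)          # phi(z) = exp(-z), used as an example
Z = lambda a: N * dphi(a) - dphi(-a)

def alpha_of_B(B, iters=100):
    """Find alpha(B) > 0 with 2*Z(-(B-1)*x) = 5*Z((5B+1)*x), as in Claim 4."""
    f = lambda x: 2 * Z(-(B - 1) * x) - 5 * Z((5 * B + 1) * x)
    lo, hi = 0.0, 1.0
    while f(hi) > 0:                  # f(0) = -3*Z(0) > 0 and f(x) -> -inf, so a sign change exists
        hi *= 2.0
    for _ in range(iters):            # plain bisection; f is nonincreasing
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

B = 1.1
x = alpha_of_B(B)
print(x, 2 * Z(-(B - 1) * x), 5 * Z((5 * B + 1) * x))  # the last two values agree and are negative
```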
4.4. The "steep slope" property: analyzing directional derivatives

Now we turn to proving that the directional derivative in the orthogonal direction is less steep than in the direction of the global minimum (α_1*, α_2*). We have just established that (α, Bα) = (α, (1 + γ)α) is a global minimum for the data set as constructed above. The directional derivative at (0, 0) in the direction of this optimum is

    (L_1(0) + B·L_2(0)) / √(1 + B²).

Since φ′(0) < 0, by (6) and (7) we have

    L_1(0) = (1 + 3γ) φ′(0)(N − 1) < 0    and    L_2(0) = 3γ φ′(0)(N − 1) < 0.

This implies that L_1(0) < L_2(0) < 0, which, since B > 1, implies B·L_1(0) − L_2(0) < 0. This means that (B, −1) rather than (−B, 1) is the direction orthogonal to the optimal (1, B) which has negative slope. Recalling that B = 1 + γ, we have the following inequalities:

    B < 1 + 6γ = ((1 + 3γ) + 3γ) / ((1 + 3γ) − 3γ) = (−L_1(0) − L_2(0)) / (−L_1(0) + L_2(0)),        (10)

    B·(−L_1(0) + L_2(0)) < −L_1(0) − L_2(0),        (11)

    L_1(0) + B·L_2(0) < B·L_1(0) − L_2(0) < 0,        (12)

where (11) follows from (10) using L_1(0) < L_2(0) < 0. So the directional derivative in the optimal direction (1, B) is steeper than in (B, −1), and the proof of the "steep slope" property, and with it Theorem 3, is complete.

5. Consequences for Known Boosting Algorithms

A wide range of well-studied boosting algorithms are based on potential functions that satisfy our Definition 1. Theorem 2 thus implies that each of the corresponding convex potential function boosters as defined in Section 2.2 cannot tolerate random classification noise at any noise rate 0 < η < 1/2. (In some cases the original versions of the algorithms discussed below are not exactly the same as the B_φ algorithm as described in Section 2.2 because of small differences such as the way the step size is chosen at each update. Thus we do not claim that Theorem 2 applies directly to each of the original boosting algorithms; however we feel that our analysis strongly suggests that the original boosters may, like the corresponding B_φ algorithms, be highly susceptible to random classification noise.)

AdaBoost and MadaBoost. As discussed in the Introduction and in [6, 18], the AdaBoost algorithm [11] is the algorithm B_φ obtained by taking the convex potential function to be φ(x) = exp(−x). Similarly the MadaBoost algorithm [5] is based on the potential function φ(x) defined in Equation (1). Each of these functions clearly satisfies Definition 1.

LogitBoost and FilterBoost. As described in [6, 18, 2], the LogitBoost algorithm of [12] is based on the logistic potential function ln(1 + exp(−x)), which is easily seen to fit our Definition 1. Roughly, FilterBoost [2] combines a variation on the rejection sampling of MadaBoost with the reweighting scheme, and therefore the potential function, of LogitBoost.

6. Experiments with Binary-valued Weak Learners

The analysis of this paper leaves open the possibility that a convex potential booster could still tolerate noise if the base classifiers were restricted to be binary-valued. In this section we describe empirical evidence that this is not the case. We generated 100 datasets, applied three convex potential boosters to each, and calculated the training error.

Data. Each dataset consisted of 4000 examples, divided into three groups: 1000 large margin examples, 1000 pullers, and 2000 penalizers. The large margin examples correspond to the example (1, 0) in Section 4.2, the pullers play the role of (γ, 5γ), and the penalizers collectively play the role of (γ, −γ). Each labeled example (x, y) in our dataset is generated as follows. First the label y is chosen randomly from {−1, 1}. There are 21 features x_1, ..., x_21 that take values in {−1, 1}. Each large margin example sets x_1 = ··· = x_21 = y. Each puller assigns x_1 = ··· = x_11 = y and x_12 = ··· = x_21 = −y. Each penalizer is chosen at random in three stages: (1) the values of a random subset of five of the first eleven features x_1, ..., x_11 are set equal to y, (2) the values of a random subset of six of the last ten features x_12, ..., x_21 are set equal to y, and (3) the remaining ten features are set to −y. At this stage, if we associate a base classifier with each feature x_i, then each of the 4000 examples is classified correctly by a majority vote over these 21 base classifiers. Intuitively, when an algorithm responds to the pressure exerted by the noisy large margin examples and the pullers to move toward a hypothesis that is a majority vote over the first 11 features only, then it tends to incorrectly classify the penalizers, because in the penalizers only 5 of those first 11 features agree with the class. Finally, each class designation y is corrupted with classification noise with probability 0.1.
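The data-generating procedure just described can be written down directly; the sketch below is our reading of it (the paper gives no code, and the helper name and RNG seed are ours).

```python
import numpy as np

def make_dataset(rng, n_large=1000, n_pull=1000, n_pen=2000, noise=0.1):
    """Generate one dataset as described above: 21 features in {-1,+1}, three example
    types, and labels flipped with probability `noise`."""
    rows, labels = [], []
    for kind, count in (("large", n_large), ("puller", n_pull), ("penalizer", n_pen)):
        for _ in range(count):
            y = rng.choice([-1, 1])
            x = np.full(21, -y)                                     # features default to disagreeing
            if kind == "large":
                x[:] = y                                            # all 21 features agree with y
            elif kind == "puller":
                x[:11] = y                                          # first 11 agree, last 10 disagree
            else:
                x[rng.choice(11, size=5, replace=False)] = y        # 5 of the first 11 agree
                x[11 + rng.choice(10, size=6, replace=False)] = y   # 6 of the last 10 agree
            rows.append(x)
            labels.append(-y if rng.random() < noise else y)        # random classification noise
    return np.array(rows), np.array(labels)

X, y = make_dataset(np.random.default_rng(0))
```

Running standard implementations of the boosters below on data generated this way should exhibit elevated training error of the kind reported in the Results paragraph, though exact figures will vary with the random draws.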
Boosters. We experimented with three boosters: AdaBoost, MadaBoost (which is arguably, loosely speaking, the least convex of the convex potential boosters), and LogitBoost. Each booster was run for 100 rounds.

Results. The average training error of AdaBoost over the 100 datasets was 33%. The average for LogitBoost was 30%, and for MadaBoost, 27%.

7. Discussion

We have shown that any boosting algorithm based on coordinate-wise gradient descent to optimize a convex potential function satisfying mild conditions cannot tolerate random classification noise. While our results imply strong limits on the noise-tolerance of algorithms that fit this framework, they do not apply to other boosting algorithms such as Freund's Boost-By-Majority algorithm [8] and BrownBoost [9] for which the corresponding potential function is non-convex. An interesting direction for future work is to extend our negative results to a broader class of potential functions, or to other types of boosters such as "regularized" boosters [19, 14].

We close by observing that there do exist efficient boosting algorithms (which do not follow the potential function approach) that can provably tolerate random classification noise [13, 16]. These noise-tolerant boosters work by constructing a branching program over the weak classifiers; the original algorithms of [13, 16] were presented only for binary-valued weak classifiers, but recent work [15] extends the algorithm from [16] to work with confidence-rated base classifiers. A standard analysis (omitted because of space constraints) shows that this boosting algorithm for confidence-rated base classifiers can tolerate random classification noise at any rate 0 < η < 1/2 according to our definition from Section 2.4.
In particular, for any noise rate η bounded below 1/4, if this booster is run on the data sets considered in this paper, it can construct a final classifier with accuracy 1 − η − ε > 3/4 after O(log(1/ε)/γ²) stages of boosting. Since our set of examples S is of size four, though, this means that the booster's final hypothesis will in fact have perfect accuracy on these data sets which thwart convex potential boosters.

References

[1] P. L. Bartlett and M. Traskin. AdaBoost is consistent. JMLR, 8:2347-2368, 2007.

[2] J. Bradley and R. Schapire. FilterBoost: Regression and classification on large datasets. In NIPS, 2007.

[3] L. Breiman. Arcing the edge. Technical Report 486, Department of Statistics, University of California, Berkeley, 1997.

[4] T. G. Dietterich. An experimental comparison of three methods for constructing ensembles of decision trees: bagging, boosting, and randomization. Machine Learning, 40(2):139-158, 2000.

[5] C. Domingo and O. Watanabe. MadaBoost: a modified version of AdaBoost. In COLT, pages 180-189, 2000.

[6] N. Duffy and D. Helmbold. Potential boosters? In NIPS, pages 258-264, 1999.

[7] N. Duffy and D. Helmbold. A geometric approach to leveraging weak learners. TCS, 284:67-108, 2002.

[8] Y. Freund. Boosting a weak learning algorithm by majority. Information and Computation, 121(2):256-285, 1995.

[9] Y. Freund. An adaptive version of the boost-by-majority algorithm. Machine Learning, 43(3):293-318, 2001.

[10] Y. Freund and R. Schapire. Experiments with a new boosting algorithm. In ICML, pages 148-156, 1996.

[11] Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. JCSS, 55(1):119-139, 1997.

[12] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. Annals of Statistics, 28(2):337-407, 1998.

[13] A. Kalai and R. Servedio. Boosting in the presence of noise. JCSS, 71(3):266-290, 2005. Preliminary version in Proc. STOC'03.

[14] G. Lebanon and J. Lafferty. Boosting and maximum likelihood for exponential models. In NIPS, pages 447-454, 2002.

[15] P. Long and R. Servedio. Adaptive martingale boosting. Unpublished manuscript.

[16] P. Long and R. Servedio. Martingale boosting. In COLT, pages 79-94, 2005.

[17] R. Maclin and D. Opitz. An empirical evaluation of bagging and boosting. In AAAI/IAAI, pages 546-551, 1997.

[18] L. Mason, J. Baxter, P. Bartlett, and M. Frean. Boosting algorithms as gradient descent. In NIPS, pages 512-518, 1999.

[19] G. Rätsch, T. Onoda, and K.-R. Müller. Soft margins for AdaBoost. Machine Learning, 42(3):287-320, 2001.

[20] R. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37:297-336, 1999.