One-sided Support Vector Regression for Multiclass Cost-sensitive Classification

Han-Hsing Tu (r96139@csie.ntu.edu.tw)
Hsuan-Tien Lin (htlin@csie.ntu.edu.tw)
Department of Computer Science and Information Engineering, National Taiwan University, Taipei 106, Taiwan

Appearing in Proceedings of the 27th International Conference on Machine Learning, Haifa, Israel, 2010. Copyright 2010 by the author(s)/owner(s).

Abstract

We propose a novel approach that reduces cost-sensitive classification to one-sided regression. The approach stores the cost information in the regression labels and encodes the minimum-cost prediction with the one-sided loss. The simple approach is accompanied by a solid theoretical guarantee of error transformation, and can be used to cast any one-sided regression method as a cost-sensitive classification algorithm. To validate the proposed reduction approach, we design a new cost-sensitive classification algorithm by coupling the approach with a variant of the support vector machine (SVM) for one-sided regression. The proposed algorithm can be viewed as a theoretically justified extension of the popular one-versus-all SVM. Experimental results demonstrate that the algorithm is not only superior to the traditional one-versus-all SVM for cost-sensitive classification, but also better than many existing SVM-based cost-sensitive classification algorithms.

1. Introduction

Regular classification, which is a traditional and primary problem in machine learning, comes with the goal of minimizing the rate of misclassification errors during prediction.
Many real-world applications, however, need different costs for different types of misclassification errors. For instance, let us look at a three-class classification problem of predicting a patient as one of {healthy, cold-infected, H1N1-infected}. Consider three different types of misclassification errors out of the six possibilities: (A) predicting a healthy patient as cold-infected; (B) predicting a healthy patient as H1N1-infected; (C) predicting an H1N1-infected patient as healthy. We see that (C) >> (B) > (A) in terms of the cost that society pays. Many other applications in medical decision making and target marketing share similar needs, which can be formalized as the cost-sensitive classification problem. The problem is able to express any finite-choice and bounded-loss supervised learning problem (Beygelzimer et al., 2005), and thus has been attracting much research attention (Abe et al., 2004; Langford & Beygelzimer, 2005; Zhou & Liu, 2006; Beygelzimer et al., 2007). While cost-sensitive classification is well understood for the binary case (Zadrozny et al., 2003), the counterpart for the multiclass case is more difficult to analyze (Abe et al., 2004; Zhou & Liu, 2006) and will be the main focus of this paper.

Many existing approaches for multiclass cost-sensitive classification are designed by reducing (heuristically or theoretically) the problem to other well-known problems in machine learning. For instance, the early MetaCost approach (Domingos, 1999) solved cost-sensitive classification by reducing it to a conditional probability estimation problem. Abe et al. (2004) proposed several approaches that reduce cost-sensitive classification to regular multiclass classification. There are also many approaches that reduce cost-sensitive classification to regular binary classification (Beygelzimer et al., 2005; Langford & Beygelzimer, 2005; Beygelzimer et al., 2007). Reduction-based approaches not only allow us to easily extend existing methods to solve cost-sensitive classification problems, but also broaden our understanding of the connections between cost-sensitive classification and other learning problems (Beygelzimer et al., 2005).

In this paper, we propose a novel reduction-based approach for cost-sensitive classification. Unlike existing approaches, however, we reduce cost-sensitive classification to a less-encountered problem: one-sided regression. Such a reduction is very simple but comes with solid theoretical properties. In particular, the reduction allows the total one-sided loss of a regressor to upper-bound the cost of the associated classifier. In other words, if a regressor achieves a small one-sided loss on the reduced problem, the associated classifier would not suffer much cost on the original cost-sensitive classification problem.

Although one-sided regression is not often seen in machine learning, we find that its regularized (and hyperlinear) form can be easily cast as a variant of the popular support vector machine (SVM, Vapnik, 1998). The variant will be named one-sided support vector regression (OSSVR). Similar to the usual SVM, OSSVR can solve both linear and non-linear one-sided regression via the kernel trick. By coupling OSSVR with our proposed reduction approach, we obtain a novel algorithm for cost-sensitive classification. Interestingly, the algorithm takes the common one-versus-all SVM (OVA-SVM, Hsu & Lin, 2002) as a special case, and is only a few lines different from OVA-SVM. That is, our proposed algorithm can be viewed as a simple and direct extension of OVA-SVM towards cost-sensitive classification. Experimental results demonstrate that the proposed algorithm is indeed useful for general cost-sensitive settings, and outperforms OVA-SVM on many data sets. In addition, when compared with other SVM-based algorithms, the proposed algorithm can often achieve the smallest average test cost, which makes it the leading SVM-based cost-sensitive classification algorithm.

The paper is organized as follows. In Section 2, we give a formal setup of the cost-sensitive classification problem. Then, in Section 3, we reduce cost-sensitive classification to one-sided regression and demonstrate its theoretical guarantees. OSSVR and its use for cost-sensitive classification are introduced in Section 4. Finally, we present the experimental results in Section 5 and conclude in Section 6.
2. Problem Statement

We start by introducing the regular classification problem before we move to the cost-sensitive one. In the regular classification problem, we seek a classifier that maps the input vector x to some discrete label y, where x is within an input space X ⊆ R^D, and y is within a label space Y = {1, 2, ..., K}. We assume that there is an unknown distribution D that generates examples (x, y) ∈ X × Y. Consider a training set S = {(x_n, y_n)}_{n=1}^N, where each training example (x_n, y_n) is drawn i.i.d. from D. Regular classification aims at using S to find a classifier g: X → Y that comes with a small E(g), where

    E(h) ≡ E_{(x,y)~D} [[ y ≠ h(x) ]]

is the (expected) test error of a classifier h with respect to the distribution D. (The boolean operation [[·]] is 1 when the inner condition is true, and 0 otherwise.)

The cost-sensitive classification problem extends regular classification by coupling a cost vector c ∈ R^K with every example (x, y). The k-th component c[k] of the cost vector denotes the price to be paid when predicting x as class k. With the additional cost information, we now assume an unknown distribution D_c that generates cost-sensitive examples (x, y, c) ∈ X × Y × R^K. Consider a cost-sensitive training set S_c = {(x_n, y_n, c_n)}_{n=1}^N, where each cost-sensitive training example (x_n, y_n, c_n) is drawn i.i.d. from D_c. Cost-sensitive classification aims at using S_c to find a classifier g: X → Y that comes with a small E_c(g), where

    E_c(h) ≡ E_{(x,y,c)~D_c} c[h(x)]

is the (expected) test cost of h with respect to D_c. Note that the label y is actually not needed for calculating E_c(h). We keep the label in our setup to help illustrate the connection between cost-sensitive classification and regular classification. Naturally, we assume that c[y] = c_min ≡ min_{1≤ℓ≤K} c[ℓ].

We will often consider the calibrated cost vector c̄, where c̄[k] ≡ c[k] − c_min for every k ∈ Y. Thus, c̄[y] = 0. Define the calibrated test cost of h as

    Ē_c(h) ≡ E_{(x,y,c)~D_c} c̄[h(x)] = E_c(h) − E_{(x,y,c)~D_c} c_min.

Because the second term on the right-hand side is a constant that does not depend on h, finding a classifier g that comes with a small E_c(g) is equivalent to finding a classifier that comes with a small Ē_c(g).

We put two remarks on the setup above. First, the setup is based on example-dependent cost vectors c: Y → R rather than a class-dependent cost matrix C: Y × Y → R, where each entry C(y, k) denotes the price to be paid when predicting a class-y example as class k. The class-dependent setup allows one to use a complete picture of the cost information in algorithm design (Domingos, 1999; Zhou & Liu, 2006), but is not applicable when the cost varies in a stochastic environment. On the other hand, the example-dependent setup is more general (Abe et al., 2004), and includes the class-dependent setup as a special case by defining the cost vector in (x, y, c) as c[k] ≡ C(y, k) for every k ∈ Y. In view of generality, we focus on the example-dependent setup in this paper.

Secondly, regular classification can be viewed as a special case of cost-sensitive classification by replacing the cost information in c with a naïve (insensitive) cost matrix of C_e(y, k) ≡ [[ y ≠ k ]]. Thus, when applying a regular classification algorithm directly to general cost-sensitive classification problems, it is as if we are feeding the algorithm with inaccurate cost information, which intuitively may lead to unsatisfactory performance. We will see such results in Section 5.
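As a concrete illustration of the first remark, the following minimal sketch builds example-dependent cost vectors from a class-dependent cost matrix in the spirit of the H1N1 example from the introduction. The specific prices and the 0-based class indices are made-up assumptions for illustration only, not values from the paper.

```python
import numpy as np

# Hypothetical class-dependent cost matrix C(y, k) for the three-class
# medical example (0 = healthy, 1 = cold-infected, 2 = H1N1-infected);
# the prices below are invented for illustration.
C = np.array([
    [  0.0,   1.0,  10.0],   # true class: healthy
    [  2.0,   0.0,  10.0],   # true class: cold-infected
    [500.0, 200.0,   0.0],   # true class: H1N1-infected
])

def example_dependent_cost(y, C):
    """The cost vector of an example (x, y) is the y-th row of C,
    i.e., c[k] = C(y, k) for every class k."""
    return C[y].copy()

c = example_dependent_cost(2, C)   # an H1N1-infected patient
c_bar = c - c.min()                # calibrated cost vector; c_bar[y] = 0
print(c, c_bar)
```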
3. One-sided Regression for Cost-sensitive Classification

From the setup above, the value of each cost component c[k] carries an important piece of information. Recent approaches that reduce cost-sensitive classification to regular classification encode the cost information in the weights (importance) of the transformed classification examples. Some of the approaches lead to more promising theoretical results, such as Sensitive Error Correcting Output Codes (SECOC, Langford & Beygelzimer, 2005), Filter Tree (FT, Beygelzimer et al., 2007) and Weighted All-Pairs (WAP, Beygelzimer et al., 2005). Nevertheless, it has been shown that a large number of weighted classification examples are often required to store the cost information accurately (Abe et al., 2004; Langford & Beygelzimer, 2005), or the encoding structure and procedure can be quite complicated (Langford & Beygelzimer, 2005; Beygelzimer et al., 2005; 2007). Because of those caveats, the practical use of those algorithms has not been fully investigated.

To avoid the caveats of encoding the cost information in the weights, we place the cost information in the labels of regression examples instead. Such an approach emerged in the derivation steps of SECOC (Langford & Beygelzimer, 2005), but its direct use has not been thoroughly studied. Regression, like regular classification, is a widely-studied problem in machine learning. Rather than predicting the discrete label y ∈ Y with a classifier g, regression aims at using a regressor r to estimate real-valued labels Y ⊆ R. We propose to train a joint regressor r(x, k) that estimates the cost values c[k] directly. Intuitively, if we can obtain a regressor r(x, k) that estimates each c[k] perfectly for any cost-sensitive example (x, y, c), we can use the estimates to choose the best prediction

    g_r(x) ≡ argmin_{1≤k≤K} r(x, k).                                                    (1)

What if the estimate r(x, k) cannot match the desired value c[k] perfectly? In the real world, it is indeed the common case that r(x, k) would be somewhat different from c[k], and the inexact r(x, k) may lead to misclassification (i.e., a more costly prediction) in (1).

[Figure 1. intuition behind one-sided regression: the estimates r(x, 1), r(x, 2), ..., r(x, K) plotted against the costs c[1], c[2], ..., c[K], with the deviations δ_1 and δ_2 marked for three cases.]

We illustrate such misclassification cases by Figure 1. Without loss of generality, assume that c is ordered such that c[1] ≤ c[2] ≤ ... ≤ c[K]. We shall further assume that c[1] < c[2], and thus the correct prediction y = 1 is unique. Now, if g_r(x) = 2, then r(x, 2) ≤ r(x, k) for every k ∈ Y; more specifically, r(x, 2) ≤ r(x, 1). Define δ_1 ≡ r(x, 1) − c[1] and δ_2 ≡ c[2] − r(x, 2). Because c[1] < c[2] and r(x, 2) ≤ r(x, 1), the terms δ_1 and δ_2 cannot both be negative. Then, there are three possible cases.

1. At the top of Figure 1, δ_1 ≥ 0 and δ_2 ≥ 0. Then, c̄[2] = c[2] − c[1] ≤ δ_1 + δ_2.
2. At the middle of Figure 1, δ_1 ≤ 0 and δ_2 ≥ 0. Then, c̄[2] ≤ δ_2.
3. At the bottom of Figure 1, δ_1 ≥ 0 and δ_2 ≤ 0. Then, c̄[2] ≤ δ_1.

In all the above cases in which a misclassification g_r(x) = 2 happens, the calibrated cost c̄[2] is no larger than max(δ_1, 0) + max(δ_2, 0). This finding holds true even when we replace the number 2 with any k between 2 and K, and will be proved in Theorem 1.

A conceptual explanation is as follows. There are two different kinds of cost components c[k]. If the component c[k] is the smallest within c (i.e., c[k] = c_min), it is acceptable and demanded to have an r(x, k) that is no more than c[k], because a smaller r(x, k) can only lead to a better prediction g_r(x). On the other hand, if c[k] > c_min, it is acceptable and demanded to have an r(x, k) that is no less than c[k]. If all the demands are satisfied, no cost is incurred by g_r; otherwise the calibrated cost is upper-bounded by the total deviations on the wrong "side." Thus, for any cost-sensitive example (x, y, c), we can define a special regression loss

    ξ_k(r) ≡ max(δ_k(r), 0),  where  δ_k(r) ≡ (2 [[ c[k] = c_min ]] − 1) · (r(x, k) − c[k]).      (2)

When r(x, k) is on the correct side, ξ_k(r) = 0. Otherwise ξ_k(r) represents the deviation between the estimate r(x, k) and the desired c[k].
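The following minimal sketch, with a made-up cost vector and 0-based class indices, implements the one-sided loss of (2) and the prediction rule of (1), and numerically checks the per-example relation that Theorem 1 makes precise next.

```python
import numpy as np

def one_sided_loss(r_x, c):
    """Linear one-sided loss xi_k(r) of (2) for a single example.
    r_x[k] is the estimate r(x, k); c[k] is the cost of predicting class k.
    The direction is +1 when c[k] is the smallest cost (over-estimates are
    penalized) and -1 otherwise (under-estimates are penalized)."""
    Z = np.where(c == c.min(), 1.0, -1.0)
    delta = Z * (r_x - c)
    return np.maximum(delta, 0.0)

def predict(r_x):
    """Prediction rule (1): the class with the smallest estimated cost."""
    return int(np.argmin(r_x))

# made-up example: c_min = c[0], so class 0 is the correct prediction
c   = np.array([0.0, 2.0, 5.0])
r_x = np.array([1.5, 1.0, 6.0])          # imperfect estimates; (1) picks class 1
xi  = one_sided_loss(r_x, c)
assert (c - c.min())[predict(r_x)] <= xi.sum()   # calibrated cost <= total loss
```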
We shall use the definitions to prove a formal statement that connects the cost paid by g_r with the loss of r.

Theorem 1 (per-example loss bound). For any cost-sensitive example (x, y, c),

    c̄[g_r(x)] ≤ Σ_{k=1}^K ξ_k(r).                                                       (3)

Proof. Let ℓ = g_r(x). When c[ℓ] = c_min, the left-hand side is 0 while the right-hand side is non-negative, because all ξ_k(r) ≥ 0 by definition. On the other hand, when c[ℓ] > c_min = c[y], by the definition in (2),

    ξ_ℓ(r) ≥ c[ℓ] − r(x, ℓ),                                                            (4)
    ξ_y(r) ≥ r(x, y) − c[y].                                                            (5)

Because ℓ = g_r(x),

    r(x, y) − r(x, ℓ) ≥ 0.                                                              (6)

Combining (4), (5), and (6), we get

    c̄[ℓ] = c[ℓ] − c[y] ≤ ξ_ℓ(r) + ξ_y(r) ≤ Σ_{k=1}^K ξ_k(r),

where the last inequality holds because ξ_k(r) ≥ 0.

Theorem 1 says that for any given cost-sensitive example (x, y, c), if a regressor r(x, k) closely estimates c[k] under the specially designed linear one-sided loss ξ_k(r), the associated classifier g_r(x) only pays a small calibrated cost c̄[g_r(x)]. We can prove a similar theorem for the quadratic one-sided loss ξ_k²(r), but the details are omitted because of page limits.

Based on Theorem 1, we could achieve the goal of finding a low-cost classifier by learning a low-one-sided-loss regressor first. We formalize the learning problem as one-sided regression, which seeks a regressor that maps an input vector X ∈ X̂ to some real label Y ∈ R, with the loss evaluated according to some direction Z ∈ {−1, +1}. We use Z = −1 to indicate that there is no loss when r(X) ≥ Y (only under-estimates are penalized), and Z = +1 to indicate that there is no loss when r(X) ≤ Y (only over-estimates are penalized). Assume that there is an unknown distribution D_o that generates one-sided examples (X, Y, Z) ∈ X̂ × R × {−1, +1}. We consider a training set S_o = {(X_n, Y_n, Z_n)}_{n=1}^N, where each training example (X_n, Y_n, Z_n) is drawn i.i.d. from D_o. Linear one-sided regression aims at using S_o to find a regressor r: X̂ → R that comes with a small E_o(r), where

    E_o(q) ≡ E_{(X,Y,Z)~D_o} max( Z · (q(X) − Y), 0 )

is the expected linear one-sided loss of the regressor q with respect to D_o.

With the definition above, we are ready to solve the cost-sensitive classification problem by reducing it to one-sided regression, as shown in Algorithm 1.

Algorithm 1: reduction to one-sided regression
1. Construct S̃_o = {(X_{n,k}, Y_{n,k}, Z_{n,k})} from S_c, where
       X_{n,k} = (x_n, k);  Y_{n,k} = c_n[k];  Z_{n,k} = 2 [[ c_n[k] = c_n[y_n] ]] − 1.
2. Train a regressor r(x, k): X × Y → R from S̃_o with a one-sided regression algorithm.
3. Return the classifier g_r in (1).

Note that we can define a distribution D_o that draws an example (x, y, c) from D_c, chooses k uniformly at random, and then generates (X, Y, Z) by X = (x, k), Y = c[k], Z = 2 [[ c[k] = c_min ]] − 1. We see that S̃_o consists of (dependent) examples from this D_o and contains (many) subsets S_o ~ D_o^N. Thus, a reasonable one-sided regression algorithm should be able to use S̃_o to find a regressor r that comes with a small E_o(r). By integrating both sides of (3) with respect to D_c, we get the following theorem.

Theorem 2 (error guarantee of Algorithm 1). Consider any D_c and its associated D_o. For any regressor r,

    Ē_c(g_r) ≤ K · E_o(r).

Thus, if we can design a good one-sided regression algorithm that learns from S̃_o and returns a regressor r with a small E_o(r), the algorithm can be cast as a cost-sensitive classification algorithm that returns a classifier g_r with a small Ē_c(g_r). That is, we have reduced the cost-sensitive classification problem to a one-sided regression problem with a solid theoretical guarantee. The remaining question is, how can we design a good one-sided regression algorithm? Next, we propose a novel, simple and useful one-sided regression algorithm that is rooted in the support vector machine (SVM, Vapnik, 1998).
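Step 1 of Algorithm 1 can be sketched as follows. The function name and the data layout (NumPy arrays, and a Python list of (x_n, k) pairs) are our own choices; the paper does not prescribe how the pair (x_n, k) is encoded for the underlying one-sided regression method.

```python
import numpy as np

def reduce_to_one_sided(X, costs):
    """Step 1 of Algorithm 1: turn N cost-sensitive examples into N*K
    one-sided regression examples.

    X:     (N, D) array of input vectors x_n
    costs: (N, K) array of cost vectors c_n
    Returns X_{n,k} = (x_n, k), Y_{n,k} = c_n[k], and Z_{n,k} = +1 if
    c_n[k] is the smallest cost of example n, else -1.
    """
    N, K = costs.shape
    X_o, Y_o, Z_o = [], [], []
    for n in range(N):
        c_min = costs[n].min()          # equals c_n[y_n] by assumption
        for k in range(K):
            X_o.append((X[n], k))       # X_{n,k} = (x_n, k)
            Y_o.append(costs[n, k])     # Y_{n,k} = c_n[k]
            Z_o.append(1.0 if costs[n, k] == c_min else -1.0)
    return X_o, np.array(Y_o), np.array(Z_o)
```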
4. One-sided Support Vector Regression

From Theorem 2, we intend to find a decent regressor r with respect to D_o. Nevertheless, D_o is defined from an unknown distribution D_c and hence is also unknown. We thus can only rely on the training set S̃_o on hand. Consider an empirical risk minimization paradigm (Vapnik, 1998) that finds r by minimizing an in-sample version of E_o(r). That is,

    r = argmin_q Σ_{k=1}^K Σ_{n=1}^N ξ_{n,k},

where ξ_{n,k} denotes ξ_k(q) on the training example (x_n, y_n, c_n). We can decompose the problem of finding a joint regressor r(x, k) into K sub-problems of finding individual regressors r_k(x) ≡ r(x, k). In other words, for every given k, we can separately solve

    r_k = argmin_{q_k} Σ_{n=1}^N ξ_{n,k}.                                               (7)

Let us look at linear regressors q_k(x) = ⟨w_k, φ(x)⟩ + b_k in a Hilbert space H, where the transform φ: X → H, the weight w_k ∈ H, and the bias b_k ∈ R. Adding a regularization term ⟨w_k, w_k⟩/2 to the objective function, each sub-problem (7) becomes

    min_{w_k, b_k, ξ_{n,k}}  (1/2) ⟨w_k, w_k⟩ + κ Σ_{n=1}^N ξ_{n,k}
    s.t.  Z_{n,k} · ( ⟨w_k, φ(x_n)⟩ + b_k − c_n[k] ) ≤ ξ_{n,k},  ξ_{n,k} ≥ 0,  for all n,  (8)

where Z_{n,k} is defined in Algorithm 1 and the parameter κ trades off the regularization term against the total one-sided loss.
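For illustration only, the sketch below approximately solves one linear, non-kernelized sub-problem of the form (8) by subgradient descent on the regularized one-sided loss. This is our own stand-in for quick experimentation, not the paper's solver, which handles (8) or its dual (9) exactly with an SVM package.

```python
import numpy as np

def train_linear_ossvr(X, y_cost, Z, kappa=1.0, lr=0.01, epochs=500):
    """Approximately solve one sub-problem of the form (8) for a fixed class k,
    restricted to a linear model r_k(x) = w.x + b (no kernel):
        minimize  0.5*||w||^2 + kappa * sum_n max(Z_n * (w.x_n + b - c_n[k]), 0)
    by plain subgradient descent.
    X: (N, D) inputs; y_cost: (N,) targets c_n[k]; Z: (N,) directions Z_{n,k}."""
    N, D = X.shape
    w, b = np.zeros(D), 0.0
    for _ in range(epochs):
        margin = Z * (X @ w + b - y_cost)     # Z_{n,k} * (r_k(x_n) - c_n[k])
        active = margin > 0                   # examples currently paying a loss
        grad_w = w + kappa * (Z[active, None] * X[active]).sum(axis=0)
        grad_b = kappa * Z[active].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

Training one such (w_k, b_k) per class on the reduced examples that carry index k, and predicting with argmin_k r_k(x), then mirrors the two steps of Algorithm 2 below.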
Note that (8) is a simplified variant of the common support vector regression (SVR) algorithm. Thus, we will call the variant one-sided support vector regression (OSSVR). Similar to the original SVR, we can solve (8) easily in the dual domain with the kernel trick K(x_n, x_m) ≡ ⟨φ(x_n), φ(x_m)⟩. The dual problem of (8) is

    min_α  (1/2) Σ_{n=1}^N Σ_{m=1}^N α_n α_m Z_{n,k} Z_{m,k} K(x_n, x_m) + Σ_{n=1}^N α_n Z_{n,k} c_n[k]
    s.t.  Σ_{m=1}^N Z_{m,k} α_m = 0;  0 ≤ α_n ≤ κ,  for all n.                           (9)

Coupling Algorithm 1 with OSSVR, we obtain the following novel algorithm for cost-sensitive classification: RED-OSSVR, as shown in Algorithm 2.

Algorithm 2: reduction to OSSVR
1. Training: For k = 1, 2, ..., K, solve the primal problem in (8) or the dual problem in (9). Then, obtain a regressor r_k(x) = ⟨w_k, φ(x)⟩ + b_k.
2. Prediction: Return g_r(x) = argmin_{1≤k≤K} r_k(x).

Note that the common OVA-SVM (Hsu & Lin, 2002) algorithm has exactly the same steps, except that the (Z_{n,k} c_n[k]) terms in (8) and (9) are all replaced by −1. In other words, OVA-SVM can be viewed as a special case of RED-OSSVR by considering the cost vectors c_n[k] = 2 [[ y_n ≠ k ]] − 1. Using those cost vectors is the same as considering the insensitive cost matrix C_e (see Section 2) after scaling and shifting. That is, OVA-SVM equivalently "wipes out" the original cost information and replaces it with the insensitive costs.

5. Experiments

In this section, we conduct experiments to validate our proposed RED-OSSVR algorithm. In all the experiments, we use LIBSVM (Chang & Lin, 2001) as our SVM solver, adopt the perceptron kernel (Lin & Li, 2008), and choose the regularization parameter within {2^17, 2^15, ..., 2^-3} by a 5-fold cross-validation procedure on only the training set (Hsu et al., 2003). Then, we report the results using a separate test set (see below). Following a common practice in regression, the labels Y_{n,k} (that is, c_n[k]) are linearly scaled to [0, 1] using the training set.

5.1. Comparison with Artificial Data Set

We first demonstrate the usefulness of RED-OSSVR using an artificial data set in R^2 with K = 3. Each class is generated from a Gaussian distribution of variance 1/4, with centers at (−1, 0), (1/2, √3/2) and (1/2, −√3/2), respectively. The training set consists of 500 points of each class, as shown in Figure 2. We make the data set cost-sensitive by considering a fixed cost matrix

    C_rot = [   0   100     1 ]
            [   1     0   100 ]
            [ 100     1     0 ].

Figure 2(a) shows the Bayes optimal boundary with respect to C_rot. Because there is a big cost in C_rot(1, 2), C_rot(2, 3), and C_rot(3, 1), the optimal boundary rotates in the counter-clockwise direction to avoid the huge costs.

[Figure 2. boundaries learned from a 2D artificial data set: (a) Bayes optimal, (b) OVA-SVM, (c) RED-OSSVR.]

Figure 2(b) depicts the decision boundary obtained from OVA-SVM. The boundary separates the adjacent Gaussians almost evenly. Although such a boundary achieves a small misclassification error rate, it pays a big overall cost. On the other hand, the boundary obtained from RED-OSSVR in Figure 2(c) is more similar to the Bayes optimal one. Thus, for cost-sensitive classification problems, it is important to respect the cost information (like RED-OSSVR) instead of dropping it (like OVA-SVM), and decent performance can be obtained by using the cost information appropriately.

5.2. Comparison with Benchmark Data Sets

Next, we compare RED-OSSVR with four existing algorithms, namely, FT-SVM (Beygelzimer et al., 2007), SECOC-SVM (Langford & Beygelzimer, 2005), WAP-SVM (Beygelzimer et al., 2005) and (cost-insensitive) OVA-SVM (Hsu & Lin, 2002). As discussed in Section 3, the first three algorithms reduce cost-sensitive classification to binary classification while carrying a strong theoretical guarantee. The algorithms not only represent the state-of-the-art cost-sensitive classification algorithms, but also cover four major multiclass-to-binary decompositions (Beygelzimer et al., 2005) that are commonly used in SVM: one-versus-all (RED-OSSVR, OVA), tournament (FT), error-correcting (SECOC) and one-versus-one (WAP).

Ten benchmark data sets (iris, wine, glass, vehicle, vowel, segment, dna, satimage, usps, letter) are used for comparison. All data sets come from the UCI Machine Learning Repository (Hettich et al., 1998) except usps (Hull, 1994). We randomly separate each data set with 75% of the examples for training and the remaining 25% for testing. All the input vectors in the training set are linearly scaled to [0, 1], and the input vectors in the test set are then scaled accordingly.

The ten benchmark data sets were originally gathered for regular classification and do not contain any cost information. To make the data sets cost-sensitive, we adopt the randomized proportional setup that was used by Beygelzimer et al. (2005). In particular, we consider a cost matrix C(y, k), where the diagonal entries C(y, y) are 0, and the other entries C(y, k) are uniformly sampled from [0, 2000 · |{n: y_n = k}| / |{n: y_n = y}|]. Then, for each example (x, y), the cost vector c comes from the y-th row of C (see Section 2). Although such a setup has a long history, we acknowledge that it does not fully reflect realistic needs. The setup is taken here solely for a general comparison of the algorithms.

To test the validity of our proposed algorithm on more realistic cost-sensitive classification tasks, we take a random 40% of the huge 10%-training set of KDDCup 1999 (Hettich et al., 1998) as another data set (kdd99). We do not use the accompanying test set because of the known mismatch between the training and test distributions, but we do take its original cost matrix for evaluation. The 40% then goes through similar 75%-25% splits and scaling, as done with the other data sets.
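The randomized proportional cost setup described above can be sketched as follows. The function name and the use of NumPy's random generator are our own choices; the sampling range follows the description above, with 0-based class indices.

```python
import numpy as np

def randomized_proportional_costs(y_train, K, rng=None):
    """Randomized proportional setup: C(y, y) = 0 and, for k != y,
    C(y, k) ~ Uniform[0, 2000 * |{n: y_n = k}| / |{n: y_n = y}|].
    y_train: array of training labels in {0, ..., K-1}."""
    rng = np.random.default_rng() if rng is None else rng
    counts = np.array([(y_train == k).sum() for k in range(K)], dtype=float)
    C = np.zeros((K, K))
    for y in range(K):
        for k in range(K):
            if k != y:
                C[y, k] = rng.uniform(0.0, 2000.0 * counts[k] / counts[y])
    return C   # the cost vector of example (x, y) is row y of C
```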
We compare the test costs of RED-OSSVR and each individual algorithm over 20 runs using a pairwise one-tailed t-test of 0.1 significance level, as shown in Table 1. kdd99 takes longer to train and hence we only show its results over 5 runs. We then show the average test costs and their standard errors for all algorithms in Table 2. Furthermore, we list the average test error rates in Table 3.

[Table 1. comparing the test costs of RED-OSSVR and each algorithm (FT, SECOC, WAP, OVA) on each data set (iris, wine, glass, vehicle, vowel, segment, dna, satimage, usps, letter, kdd99) using a pairwise one-tailed t-test of 0.1 significance level; each entry marks whether RED-OSSVR is significantly better, significantly worse (×), or neither.]

Table 2. average test cost of SVM-based algorithms
data set   N       K    RED-OSSVR        FT-SVM           SECOC-SVM        WAP-SVM          OVA-SVM
iris       150     3    23.82±5.52       31.75±5.53       29.30±5.53       28.13±6.20       35.58±7.16
wine       178     3    19.43±4.65       20.72±5.27       19.66±4.54       19.99±5.82       28.00±5.16
glass      214     6    228.54±15.61     264.52±15.66     251.91±14.18     253.29±16.76     283.86±18.17
vehicle    846     4    178.63±20.37     190.68±21.62     193.52±21.38     187.25±22.69     216.94±17.83
vowel      990     11   23.07±2.89       25.96±2.76       88.57±7.25       17.63±2.01       13.43±1.84
segment    2310    7    26.85±1.83       31.01±2.24       61.48±9.73       27.22±2.14       27.07±2.20
dna        3186    3    42.57±2.97       56.92±3.98       54.86±5.67       50.27±3.38       42.94±2.61
satimage   6435    6    68.62±4.20       78.34±4.54       93.26±5.01       73.64±4.24       79.39±4.06
usps       9298    10   24.22±0.94       30.64±1.08       75.94±7.29       23.71±0.78       23.76±0.96
letter     20000   26   27.34±0.57       44.04±0.90       207.89±5.64      26.61±0.51       26.62±0.72
kdd99      197608  5    0.0015±0.0001    0.0015±0.0001    0.7976±0.0000    0.0015±0.0001    0.0016±0.0001
(those with the lowest mean are marked with *; those within one standard error of the lowest one are in bold)

Table 3. average test error (%) of SVM-based algorithms
data set   RED-OSSVR      FT-SVM         SECOC-SVM      WAP-SVM        OVA-SVM
iris       6.84±1.15      11.71±2.34     19.47±3.58     6.97±0.82      4.34±0.75
wine       3.56±0.55      3.00±0.74      7.00±2.10      2.78±0.78      2.78±0.47
glass      31.48±1.37     49.54±2.24     45.65±3.15     39.81±2.48     29.91±0.63
vehicle    26.18±2.46     29.13±2.90     29.46±2.82     29.13±2.94     20.83±0.61
vowel      5.26±0.49      4.09±0.52      42.64±2.89     6.92±0.79      1.27±0.17
segment    3.66±0.27      4.78±0.48      25.35±3.83     4.28±0.37      2.59±0.15
dna        7.00±0.62      10.79±1.66     13.24±3.31     7.74±0.70      4.19±0.14
satimage   9.50±0.30      13.05±0.97     29.94±3.85     11.06±0.60     7.26±0.10
usps       3.45±0.20      4.60±0.39      32.74±2.37     4.96±0.63      2.19±0.06
letter     3.84±0.22      9.65±0.61      76.66±1.72     7.73±0.21      2.66±0.07
kdd99      0.075±0.004    0.074±0.004    0.796±0.000    0.078±0.004    0.077±0.004
(those with the lowest mean are marked with *; those within one standard error of the lowest one are in bold)

OVA-SVM versus RED-OSSVR. We see that RED-OSSVR can often achieve lower test costs than OVA-SVM (Table 2), at the expense of higher error rates (Table 3). In particular, Table 1 shows that RED-OSSVR is significantly better on 5 data sets and significantly worse on only 2: vowel and letter. We can take a closer look at vowel. Table 3 suggests that OVA-SVM does not misclassify much on vowel. Hence, the resulting test cost is readily small, and it is hard for RED-OSSVR to make improvements using arbitrary cost information. On the other hand, for data sets like glass or vehicle, on which OVA-SVM suffers from large error and cost, RED-OSSVR can use the cost information appropriately to perform much better.

SECOC-SVM versus RED-OSSVR. SECOC-SVM is usually the worst among the five algorithms. Note that SECOC can be viewed as a reduction from cost-sensitive classification to regression, coupled with a reduction from regression to binary classification (Langford & Beygelzimer, 2005). Nevertheless, the latter part of the reduction requires a thresholding step (for which we used the grid-based thresholding in the original paper). Theoretically, an infinite number of thresholds is needed, and hence any finite-sized threshold choice inevitably leads to a loss of information. From the results, SECOC-SVM can suffer much from this loss of information. RED-OSSVR, on the other hand, only goes through the first part of the reduction, and hence can preserve the cost information accurately; it achieves significantly better performance on 9 out of the 11 data sets, as shown in Table 1.

WAP-SVM versus RED-OSSVR. WAP-SVM and RED-OSSVR perform similarly well on 6 out of the 11 data sets. Nevertheless, note that WAP-SVM does pairwise comparisons, and hence needs K(K−1)/2 underlying binary SVMs. Thus, it takes much longer to train and does not scale well with the number of classes. For instance, on letter, training WAP-SVM would take about 13 times longer than training RED-OSSVR. With similar performance, RED-OSSVR can be a preferred choice.

FT-SVM versus RED-OSSVR. FT-SVM and RED-OSSVR both need only O(K) underlying SVM classifiers/regressors and hence scale well with K. Nevertheless, from Table 1, we see that RED-OSSVR performs significantly better than FT-SVM on 7 out of the 11 data sets. Note that FT-SVM is based on letting the labels compete in a tournament, and thus the design of the tournament can affect the resulting performance. From the results we see that the simple random tournament design, as Beygelzimer et al. (2007) originally used, is not as good as RED-OSSVR. The difference makes RED-OSSVR a better choice unless there is a strong demand for the O(log2 K) prediction complexity of FT-SVM.

In summary, RED-OSSVR enjoys three advantages: using the cost information accurately and appropriately, O(K) training time, and strong empirical performance. The advantages suggest that it shall be the leading SVM-based algorithm for cost-sensitive classification nowadays. Note that with the kernel trick in RED-OSSVR, we can readily obtain a wide range of classifiers of different complexity and thus achieve lower test costs than existing methods that focused mainly on decision trees (Abe et al., 2004; Beygelzimer et al., 2005; Zhou & Liu, 2006). The results from those comparisons are not included here because of page limits.
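The evaluation protocol at the beginning of this subsection, a pairwise one-tailed t-test over repeated runs, can be sketched as follows. This uses SciPy's paired t-test on hypothetical per-run cost arrays and is our own illustration rather than the paper's evaluation code.

```python
import numpy as np
from scipy import stats

def compare_test_costs(costs_red, costs_other, alpha=0.1):
    """Pairwise (paired) one-tailed t-tests at significance level alpha.
    costs_red, costs_other: per-run test costs of RED-OSSVR and a baseline
    computed on the same random training/test splits."""
    t, p_two_sided = stats.ttest_rel(costs_red, costs_other)
    # one-tailed p-values derived from the two-sided test statistic
    p_lower = p_two_sided / 2 if t < 0 else 1 - p_two_sided / 2   # H1: RED-OSSVR has lower cost
    p_higher = p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2  # H1: RED-OSSVR has higher cost
    if p_lower < alpha:
        return "RED-OSSVR significantly better"
    if p_higher < alpha:
        return "RED-OSSVR significantly worse"
    return "no significant difference"

# hypothetical per-run costs over 20 splits, for illustration only
rng = np.random.default_rng(0)
print(compare_test_costs(rng.normal(25, 5, 20), rng.normal(30, 5, 20)))
```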
6. Conclusion

We proposed a novel reduction approach from cost-sensitive classification to one-sided regression. The approach is based on estimating the components of the cost vectors directly via regression, and uses a specifically designed regression loss that is tightly connected to the cost of interest. The approach is simple, yet enjoys strong theoretical guarantees in terms of error transformation. In particular, our approach allows any decent one-sided regression method to be cast as a decent cost-sensitive classification algorithm.

We modified the popular SVR algorithm to derive a new OSSVR method that solves one-sided regression problems. Then, we coupled the reduction approach with OSSVR for cost-sensitive classification. Our novel RED-OSSVR algorithm is a theoretically justified extension of the commonly used OVA-SVM algorithm. Experimental results demonstrated that RED-OSSVR is superior to OVA-SVM for cost-sensitive classification. Furthermore, RED-OSSVR can enjoy some advantages over three major SVM-based cost-sensitive classification algorithms. The findings suggest that RED-OSSVR is the best SVM-based algorithm for cost-sensitive classification nowadays.

Acknowledgments

We thank Chih-Jen Lin, Yuh-Jye Lee, Shou-De Lin and the anonymous reviewers for valuable suggestions. The project was partially supported by the National Science Council of Taiwan via NSC 98-2221-E-002-192 and 98-2218-E-002-019. We are grateful to the NTU Computer and Information Networking Center for the support of high-performance computing facilities.

References

Abe, N., Zadrozny, B., and Langford, J. An iterative method for multi-class cost-sensitive learning. In Proceedings of the 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 3-11. ACM, 2004.
Beygelzimer, A., Dani, V., Hayes, T., Langford, J., and Zadrozny, B. Error limiting reductions between classification tasks. In Machine Learning: Proceedings of the 22nd International Conference, pp. 49-56. ACM, 2005.

Beygelzimer, A., Langford, J., and Ravikumar, P. Multiclass classification with filter trees. Downloaded from http://hunch.net/~jl, 2007.

Chang, C.-C. and Lin, C.-J. LIBSVM: A Library for Support Vector Machines. National Taiwan University, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.

Domingos, P. MetaCost: A general method for making classifiers cost-sensitive. In Proceedings of the 5th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 155-164. ACM, 1999.

Hettich, S., Blake, C. L., and Merz, C. J. UCI repository of machine learning databases, 1998.

Hsu, C.-W. and Lin, C.-J. A comparison of methods for multiclass support vector machines. IEEE Transactions on Neural Networks, 13(2):415-425, 2002.

Hsu, C.-W., Chang, C.-C., and Lin, C.-J. A practical guide to support vector classification. Technical report, National Taiwan University, 2003.

Hull, J. J. A database for handwritten text recognition research. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(5):550-554, 1994.

Langford, J. and Beygelzimer, A. Sensitive error correcting output codes. In Learning Theory: 18th Annual Conference on Learning Theory, pp. 158-172. Springer-Verlag, 2005.

Lin, H.-T. and Li, L. Support vector machinery for infinite ensemble learning. Journal of Machine Learning Research, 9:285-312, 2008.

Vapnik, V. N. Statistical Learning Theory. Wiley, New York, NY, 1998.
Zadrozny, B., Langford, J., and Abe, N. Cost sensitive learning by cost-proportionate example weighting. In Proceedings of the 3rd IEEE International Conference on Data Mining, pp. 435, 2003.

Zhou, Z.-H. and Liu, X.-Y. On multi-class cost-sensitive learning. In Proceedings of the 21st National Conference on Artificial Intelligence, pp. 567-572. AAAI Press, 2006.