SIGIR 2007 Proceedings Session 12: Learning to Rank I A Regression Framework for Learning Ranking Functions Using Relative Relevance Judgments Zhaohui Zheng Yahoo! Inc. 701 First Avenue Sunnyvale, CA 94089 Hongyuan Zha College of Computing Georgia Institute of Technology Atlanta, GA 30032 Keke Chen, Gordon Sun Yahoo! Inc. 701 First Avenue Sunnyvale, CA 94089 zhaohui@yahoo-inc.com zha@cc.gatech.edu kchen,gzsun@yahooinc.com ABSTRACT Effective ranking functions are an essential part of commercial search engines. We focus on developing a regression framework for learning ranking functions for improving relevance of search engines serving diverse streams of user queries. We explore supervised learning methodology from machine learning, and we distinguish two types of relevance judgments used as the training data: 1) absolute relevance judgments arising from explicit labeling of search results; and 2) relative relevance judgments extracted from user clickthroughs of search results or converted from the absolute relevance judgments. We propose a novel optimization framework emphasizing the use of relative relevance judgments. The main contribution is the development of an algorithm based on regression that can be applied to ob jective functions involving preference data, i.e., data indicating that a document is more relevant than another with respect to a query. Experimental results are carried out using data sets obtained from a commercial search engine. Our results show significant improvements of our proposed methods over some existing methods. 1. INTRODUCTION Research and experiments in information retrieval have produced many fundamental methodologies and algorithms enabling the technological advances in the current commercial search engines. Ranking functions are at the core of search engines and they directly influence the relevance of the search results and users' search experience. In the past, many models and methods for designing ranking functions have been proposed, including vector space models, probabilistic models and the more recently developed language modeling-based methodology [17, 16, 2]. In particular, learning ranking functions within the framework of machine learning have attracted much interest long before the recent advances of Web search [10, 6, 5, 11, 22, 19]. The trend continues to this day and several methods have been proposed incorporating many of the recent advances in machine learning such as SVM and gradient boosting [7, 4, 19]. Machine learning approaches for learning ranking functions, in particular, the supervised learning approaches, entails the generation of training data, in the form of labeled data explicitly constructed from relevance assessment by human editors. As an example, labels or grades such as perfect, good, or bad, can be assigned to documents with respect to a query indicating the degree of relevance of documents. With labels associated with query-document pairs, we are using the absolute relevance framework where judgments are made with respect to whether a document is or is not relevant to a query. Acquiring large quantities of absolute relevance judgments, however, can be very costly because it is necessary to cover a diverse set of queries in the context of Web search. An additional issue is the reliability and variability of absolute relevance judgments. One possibility to alleviate this problem is to make use of the vast amount of data recording user interactions with the search results, in particular, user clickthroughs data [1]. Each individual user click may not be very reliable, but the aggregation of a great number of user clicks can provide a very powerful indicator of relevance preference. In this regard, Joachims and his coworkers have developed methods for extracting relative relevance judgments from user clickthroughs data [13, 14, 20, 15, 21]. Particularly, the relative relevance judgments are in the form of whether a document is more relevant than other documents with respect to a query. The benefit for using relative relevance judgments are the potential unlimited supplies of user click- Categories and Subject Descriptors H.3.3 [Information Systems]: Information Search and Retrieval--Retrieval functions ; H.4.m [Information Systems]: Miscellaneous--Machine learning General Terms Algorithms, Experimentation, Theory Keywords ranking function, machine learning, absolute relevance judgment, relative relevance judgment, preferences, clickthroughs, functional gradient descent, regression, gradient boosting Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. SIGIR'07, July 23­27, 2007, Amsterdam, The Netherlands. Copyright 2007 ACM 978-1-59593-597-7/07/0007 ...$5.00. 287 SIGIR 2007 Proceedings Session 12: Learning to Rank I throughs data and the timeliness of user clickthroughs data for capturing user searching behaviors and preferences. The drawback for using relative relevance judgments are that user clickthroughs data tend to be quite noisy, especially we also need to deal with fraudulent clicks. Although there have been some research on how to extract relative relevance judgments from user clickthroughs data, much research is still needed to make the extraction process more effective. Once relative relevance judgments are extracted from user clickthroughs data, the next question is how to use them for the purpose of learning a ranking function. This falls under the general framework of learning ranking functions from preference data and several algorithms have been proposed in the past: Joachims and his coworkers used RankSVM based on linear SVM for learning ranking functions. To incorporate nonlinear interactions of features in RankSVM, either more complicated features need to be devised or some kind of kernels used [13, 14, 15]. RankNet, developed by a group from Microsoft Research, proposed an optimization approach using an ob jective function based on BradleyTerry models for paired comparisons and explored neural networks for learning the ranking functions [4]. The closest to our proposed method is RankBoost discussed in [8], using ideas of Adaboost for learning ranking functions for preference data. The choice for selecting weak learners for RankBoost as discussed in [8] is very limited and is less flexible to deal with the complicated features used in Web search context. The main contribution of our work is the development of a learning framework for preference data using regression as the basic ingredient. For example, the ranking functions are represented as a combination of regression trees when we use gradient boosting for regression [9]. More interestingly, our experimental results also show that even with absolute relevant judgments, it is more advantageous to first convert them into preference data and apply our proposed methods than to treat the ranking problem with absolute relevant judgments as a regression problem. The rest of the paper is organized as follows: section 2 develops the main algorithmic contribution of the paper; we start with a brief review of the basic idea of gradient descent in function spaces [9]. We then propose an ob jective function, the optimization of which will lead to the construction of the ranking function. We apply functional gradient descent methodology to the ob jective function and transform the problem of learning ranking functions as a sequence of problems of learning regression functions. For concreteness, we use gradient boosting regression as an illustration of the general methodology. In section 3, we present a detailed experimental study using data from a commercial search engine. In the last section, we make some concluding remarks and also point out several directions for future research. using gradient descent, and we start with a brief introduction of gradient descent in function spaces [9]. We then propose a new ob jective function for learning ranking functions using preference data and develop an algorithm that adapts functional gradient descent for optimizing the proposed ob jective function. 2.1 F unctional gradient descent We first give a brief discussion of gradient descent for unconstrained optimization of a multivariate function [3]. To this end, suppose we want to solve minxRd F (x), where F (x) is a d-variable function. The idea of gradient descent is to start with an initial guess x0 of a minimizer, and at each step compute the gradient of the ob jective function F at the current iterate xk , say F (xk ), and use the negative gradient as the search direction to obtain the next iterate xk+1 = xk - k F (xk ), where k is the step size which can be chosen, for example, by a line-search. In the context of regression, we are given a training set {(xi , gi )}N 1 , and we seek to find a function h such that i= gi h(xi ), i = 1, . . . , N . For simplicity we use the square loss function, i.e., we measure the discrepancy between gi and h(xi ) by (gi - h(xi ))2 . Then we need to find a function h(x) to solve the following minimization problem, min L(h) min 1 2 hH hH i N (gi - h(xi ))2 , =1 where H is a pre-defined function class such as the class of polynomials not to exceed certain degree. We can apply gradient descent in function space to minimize the functional L(h), i.e., compute the gradient of L(h) with respect to h at the current iterate hk (x) and form the next iterate as hk+1 (x) = hk (x) - k L(hk (x)). The problem is that, we can not compute L(hk (x)) at all x, but rather we can only compute it at a finite sample, L(hk (xi )) = -(gi - hk (xi )), i = 1, . . . , N . The crucial idea of functional gradient descent is to find a function that interpolates/approximates the above sample values and therefore obtain an approximation of the negative gradient -L(hk (x)) to form the next iterate. As an illustration, we explain the details of the algorithm when the interpolation/approximation is done by fitting a regression tree to the sample values, as is easily seen other regression methods can also be used here [9]. We summarize the above in the following and label it as GBT (Gradient Boosting Trees). Algorithm. (Gradient Boosting Trees [9]) 1. Initialize h0 (x) = N 1 gi /N . i= 2. For k = 1, . . . , M : (number of trees in gradient boosting) (a) For i = 1, . . . , N , compute the negative gradient rik = gi - hk-1 (xi ) (b) Fit a regression tree to {rik }i=1,...,N giving terminal regions Rj k , j = 1, . . . , Jk . 2. A REGRESSION FRAMEWORK FOR LEARNING FROM PREFERENCE DATA Our basic premise is that ranking and regression are fundamentally different problems, but regression can be used to solve ranking problems using preference data. Our main contribution is a framework for solving ranking problems with regression using relative judgments and the regression methods used can be be chosen to tailor to the specific applications. For concreteness, we will discuss the framework È 288 SIGIR 2007 Proceedings Session 12: Learning to Rank I (c) For j = 1, . . . , Jk , compute j k = (gi - hk-1 (xi ))/|{i : xi Rj k }|, i R j k x as the unknowns, and compute the gradient of R(h) with respect to those unknowns. The components of the negative gradient corresponding to h(xi ) and h(yi ), respectively, are max{0, h(yi ) - h(xi )}, - max{0, h(yi ) - h(xi )}. average of the residual in each terminal region. (d) Update hk (x) = hk-1 (x) + ( j Jk j k I (x Rj k )), =1 Both of the above equal to zero when h matches the pair xi , yi , and therefore, in this case no modification is needed for the components corresponding to h(xi ) or h(yi ). On the other hand, if h does not match the pair xi , yi , the components of the gradient are h(yi ) - h(xi ), h(xi ) - h(yi ). where is the shrinkage factor, and I (·) is the indicator function. There are two parameters M , the number of regression trees and , the shrinkage factor that need to be chosen by the user. In general, we use cross-validation for choosing the two parameters. 2.2 Ranking with relative judgments As we mentioned before, the relative relevance judgments are in the form of whether a document is more relevant than other documents with respect to a query. We encode this information as follows: given the feature vectors for two query-document pairs x and y (see Section 3 for details on extraction of query-document features), we use x y to mean that x is preferred over y , i.e., x should be ranked higher than y . Simply put, this means that the document represented by x is considered more relevant than that represented by y with respect to the query in question. We denote the set of available preferences based on the relative relevance judgments as S = { xi , yi | xi y i , i = 1, . . . , N }. The above tells us how to modify the difference of function values, to know how to modify the function itself we need to translate those gradient components into modification to h. We adopt the following simple approach: we set the target value for xi as h(yi ) + and that for yi as h(xi ) - for some fixed . Then, we obtain the following set of data that need to be fitted at each iteration, {(xi , h(yi ) + ), (yi , h(xi ) - )}, (2) where h does not match pair xi , yi . When some feature vectors xi or yi can appear more than once in S , there will be several components of the negative gradient of R(h) that will involve xi or yi . When translating the gradient components to modification of h, we may end up with inconsistent requirements. One approach will be to compute an average taking into account of all the requirements. This is a local approach using information in the training data related to the feature vectors in question. A better alternative approach is to add all the different and potentially inconsistent requirements in the training set, and let the regression methods such as GBT to handle the inconsistency using more global information based on all the training data. We summarize the algorithm, again using GBT for regression as an illustration, as follows which we label as GBrank. A l g o r i t h m . (GBrank) 1 Start with an initial guess h0 , for k = 1, 2, . . . , 1) using hk-1 as the current approximation of h, we separate S into two disjoint sets, S + = { xi , yi S | hk-1 (xi ) hk-1 (yi ) + } and S - = { xi , yi S | hk-1 (xi ) < hk-1 (yi ) + }; 2) fitting a regression function gk (x) using GBT and the following training data {(xi , hk-1 (yi ) + ), (yi , hk-1 (xi ) - ) | (xi , yi ) S - }; 3) forming (with normalization of the range of hk ) khk-1 (x) + gk (x) , k+1 where is a shrinkage factor. hk (x) = Remark. We want to point out that the above framework is generic in the sense that it can use any application-specific regression method for learning the regression function gk (x). If the preferences xi , yi are converted from absolute relevance judgments, we multiple by the absolute value of grade difference between xi and yi 1 We formulate the problem of learning ranking functions as computing a ranking function h H, H a given function class, such that h match the set of preferences, i.e., h(xi ) h(yi ), if xi yi , i = 1, . . . , N , as much as possible. We propose to use the following ob jective function to measure the risk of a ranking function h, R(h) = 1 2 i N (max{0, h(yi ) - h(xi )})2 , (1) =1 the motivation is that if for the pair xi , yi , h matches the given preference, i.e., h(xi ) h(yi ), then h incurs no cost on the pair, otherwise the cost is given by (h(yi ) - h(xi ))2 . Direct optimization of the above can be difficult, the basic idea of our regression framework is to fix either one of the values h(xi ) or h(yi ), e.g., replace either one of the function values by its current predicted value, and solve the problem by way of regression. Remark. To avoid obtaining an optimal h which is constant, we actually need to optimize, for 0 < 1, R(h, ) = 1 2 i N (max{0, h(yi ) - h(xi ) + })2 - 2 , =1 Our implementation in the sequel corresponds to setting to be a fixed constant. To this end, we use the idea of functional gradient descent as reviewed in the previous section. We consider h(xi ), h(yi ), i = 1, . . . , N , 289 SIGIR 2007 Proceedings Session 12: Learning to Rank I 3. EXPERIMENTAL RESULTS We carried out several experiments illustrating the properties and effectiveness of GBrank. We also compared its performance with some existing algorithms such as GBT and RankSVM. 3.1.3 Preference data from clickthroughs data 3.1 Data Collection We first describe how the data used in the experiments are collected. 3.1.1 Feature vectors We also examined a certain amount of clickthroughs data and extracted a set of preference data as follows. For a query q , we consider two documents d1 and d2 among the top 10 results from Yahoo! web search. Assume that in the clickthroughs data, d1 has c1 clicks out of n1 impressions, and d2 has c2 clicks out of n2 impressions. We want to consider document pairs d1 and d2 for which either d1 or d2 is significantly better than the other in terms of clickthroughs rate. To this end, we assume that clicks in user sessions obey binomial distribution. Denote the binomial distribution by B (k; n, p) = As we mentioned before, each query-document pair is represented by a feature vector. For query-document pair (q , d), a feature vector x = [xQ , xD , xQD ] is generated and the features generally fall into the following three categories: · Query-feature vector x which comprises features dependent on the query q only and have constant values across all the documents d D, for example, the number of terms in the query, whether or not the query is a person name, etc. Q np k k (1 - p)n-k We apply likelihood ratio test (LRT) and compute, B (c1 + c2 ; n1 + n2 , (c1 + c2 )/(n1 + n2 )) -2 . B (c1 ; n1 , c1 /n1 )B (c2 ; n2 , c2 /n2 ) · Document-feature vector xD which comprises features dependent on the document d only and have constant values across all the queries q Q, for example, the number of inbound links pointing to the document, the amount of anchor-texts in bytes for the document, and the language identity of the document, etc. · Query-document feature vector xQD which comprises features dependent on the relation of the query q with respect to the document d, for example, the number of times each term in the query q appears in the document d, the number of times each term in the query q appears in the anchor-texts of the document d, etc. The preference data for training are extracted from the follow two sources: absolute relevance judgments arising from editorial labeling, and relative relevance judgments extracted from user clickthroughs data. We consider a pair d1 and d2 when the above is greater than a threshold, and we say the pair is significant. Among the significant pairs, we apply rules similar to those in [15] such as Skip-Above to extract preference data. In total we extracted 20,948 preferences. 3.2 Evaluation Metrics The output of GBrank is a ranking function h which is used to rank the documents x according to h(x). Therefore, document x is ranked higher than y by the ranking function h if h(x) > h(y ), and we call this the predicted preference. We propose the following three metrics to evaluate the performance of a ranking function with respect to a given set of preferences which we considered as the true preferences. · Number of contradicting pairs: for a pair of documents if the predicted preference is different from the true preference, the pair is a contradicting pair. · Precision at K %: for two documents x and y (with respect to the same query), it is reasonable to assume that it is easy to compare x and y if |h(x) - h(y )| is large, and x and y should have about the same rank if h(x) is close to h(y ). Base on this, we sort all the document pairs x, y according to |h(x) - h(y )|. We call precision at K %, the fraction of non-contradicting pairs in the top K % of the sorted list.2 · Discounted Cumulative Gain (DCG): DCG has been widely used to assess relevance in the context of search engines [12]. For a ranked list of N documents (N is set to be 5 in our experiments), we use the following variation of DCG, DCGN = 3.1.2 Preference data from labeled data A set of queries are sampled from query logs, and a certain number of query-document pairs are labeled according to their relevance judged by human editors. A 0-4 grade is assigned to each query-document pair based on the degree of relevance (perfect match, excellent match, etc), and the numerical grades are also used as the target values for GBT regression. We use a data set from a commercial search engine which contains 4,372 queries and 115,278 query-document pairs. We use the above labeled data to generate a set of preference data as follows: given a query q and two documents dx and dy . Let the feature vectors for (q , dx ) and (q , dy ) be x and y , respectively. If dx has a higher grade than dy , we include the preference x y while if dy has a higher grade than dx , we include the preference y x. For each query, we consider all pairs of documents within the search results except those with equal grades. This way, we generate around 1.2 million preferences in total. As we will see later, the labeled data not only provide us with preference data, they also allow us to compare GBrank based on converted preference data and GBT regression using the labeled data. i N =1 Gi , l o g 2 (i + 1) where Gi represents the weights assigned to the label of the document at position i, e.g. 10 for Perfect match, 7 for Excellent match, 3 for Good match, etc. Higher degree of relevance corresponds to higher value of the weight. We will use the symbol dcg to indicate the average of this value over a set of testing queries in our 2 Notice that precision at 100% corresponds to the percentage of contradicting pairs. 290 SIGIR 2007 Proceedings Session 12: Learning to Rank I Number of contradicting pairs in training data v. iterations 350000 133000 132000 num ber of contradicting pairs Number of Contradicting Pairs in test data v. iterations 7 6.9 6.8 6.7 6.6 6.5 6.4 124000 DCG v. Iterations 300000 num ber of contradicting pairs 131000 130000 129000 128000 127000 126000 125000 DCG-5 250000 200000 150000 0 10 20 30 40 ite ra tions 50 60 70 80 123000 0 10 20 30 40 ite ra tions 50 60 70 80 6.3 0 10 20 30 40 Num of iterations 50 60 70 80 Figure 1: Numb er of contradicting pairs in training set against GBrank iterations (left), numb er of contradicting pairs in testing set against GBrank iterations (middle), DCG on testing data againt GBrank iterations (right) # of Contradicting Test Pairs v. Training Data Size # of contradicting test pairs D CG-5 vs. Training Set Size 7 6. 95 6. 9 6. 85 6. 8 6. 75 6. 7 6. 65 6. 6 6. 55 6. 5 10% 20% 30% 40% 50% 60% 70% 80% 90% 100 % % of training data used 142000 140000 138000 136000 134000 132000 130000 128000 126000 124000 122000 10% 20% 30% 40% 50% 60% 70% 80% 90% 100 % % of training data used GBrank GBT dcg-5 GBrank GBT Figure 2: Numb er of contradicting pairs v. training data size (left), DCG v. Training Data Size (right) experiments. In our experiments, dcg will be reported only when absolute relevance judgments are available. tuned for GBT: we use 100 trees each with 15 leaf nodes, and the shrinkage factor is set to be 0.05. All the experiments were conducted on a 2.4Ghz 4-cpu AMD server with 4G RAM. GBT training will take about 15 minutes while the training time for GBrank would be a few hours depending on the number of iterations and number of preferences. The testing time for GBT is only a few minutes and that for GBrank would be the number of iterations multiplied by the above time for GBT. Both training and testing time for GBrank could be significantly reduced by using less number of trees for each iteration, e.g. single tree instead of GBT. Our more recent experiments showed that GBrank using single tree with a few hundred iterations achieves comparable dcg-5 while reducing the time complexity a great deal. 3.3 Experiment Design and Results The following questions guide the design of the experiments we carry out: 1. What is the convergence behavior of GBrank, will the number of contradicting pairs decrease in the training data? i n t he test data? 2. Quantitatively, what is the effect of the training data size on the p erformance of GBrank in terms o f t he three metrics we proposed? 3. When we have absolute relevance judgments, we can also use a regression method such as GBT to learn the ranking function based on the assigned numerical grades [7, 19]. How does GBT learned with absolute relevance judgments compare to GBrank learned with relative relevance judgments converted from the same set of absolute relevance judgments? 4. RankSVM based on linear SVM is a popular method for training ranking functions using preference data. How does GBrank compare with RankSVM? For GBT and RankSVM, we tuned the parameters to get the best performance. For GBrank, we just used the parameters 3.3.1 Experiments with labeled data For the first two questions, we generate the training and testing data as follows: we randomly split the labeled data described in section 3.1.1 by query into training set (60% of labeled queries, 71,338 query-documents and 753,976 preferences) and testing set (the remaining 40% queries, 43,940 query-document pairs and 465,893 preferences). From the left panel of Figure 1, we can see that the number of contradicting pairs monotonically decreases iteration by iteration during training, which indicates the convergent trend of GBrank. As for now, we always use a validation set 291 SIGIR 2007 Proceedings Session 12: Learning to Rank I Table 1: Numb er of contradicting pairs (CP) and precision (Prec) at K % for GBrank and GBT %K 10% 20% 30% 40% 50% 60% 70% 80% 90% 100% num pairs 46590 93179 139768 186358 232947 279536 326126 372715 419304 465893 GBrank CP Prec 225 0.9952 1095 0.9883 5695 0.9593 15324 0.9178 28117 0.8793 46334 0.8434 61189 0.8124 80726 0.7834 101601 0.7577 123939 0.7340 GBT CP Prec 949 0.9796 3313 0.9644 8718 0.9376 18340 0.9016 30865 0.8675 43776 0.8342 63711 0.8046 82976 0.7774 103530 0.7531 126188 0.7291 the documents very accurately. Let us illustrate this using a simple example [19]. Consider two queries q1 : "harvard university" and q2 : "college of san mateo". q1 is more popular than q2 generating about 13 million search results while results for q2 are two orders of magnitude less. For simplicity we consider a ranking function h(x) using a single feature x which counts the number of inbound links to a document. Now let us examine the top three results di1 , di2 , di3 , i = 1, 2, for each of the query. Assume di1 is ranked perfect, di2 is ranked excellent, and di3 is ranked good, and we convert the labels to numerical values as follows, 0 perfect, 1 excellent, 2 good. Since q1 is very popular, each of the top three results generate high feature values, say, x = 100000, 80000, 50000 while for q2 the corresponding feature values are x = 1000, 800, 500. Assume x is negatively correlated with the label, i.e., small x values tend to indicate better relevance, then we will need to find a monotonically decreasing function h such that for q1 h(100000) 0, h(80000) 1, h(50000) 2 and for q 2 , h(1000) 0, h(800) 1, h(500) 2. The ma jor issue comes from the difference of popularity of the queries. This could partially explain why ranking functions could be better learned using GBranking with preference data than GBT regression with absolute relevance judgments. In our experiments, we exclude all tied data (pairs of documents with the same grade) when converting preference data from the absolute relevance judgments, which is a significant information loss especially when the training data are small. Adding those tied data will certainly increase the overlaps among the training preferences. Including those tied data in GBrank learning will be part of our future work. We now turn to the precision at K % metric: Table 1 presents the number of contradicting pairs and precision at K % for GBT learned with all of training data and GBrank learned with the corresponding preference data. This again shows that GBrank outperforms GBT with respect to the precision at K % metric. To further explore the third and the last questions, we conduct an experimental comparison among GBrank, GBT, and RankSVM in a 5-fold cross-validation setting. Again, the 5fold splitting is on queries using the data described in section 3.1.1. Figure 3 and 4 show the results using the two metrics, dcg-5 and number of contradicting pairs, for GBrank, GBT, and RankSVM, from which we can see GBrank is the best performer and RankSVM is worse than both GBrank and GBT. Average over the 5-folds, dcg-5 for GBrank is 1.2% better than GBT with p-value equal to 0.0005, and 5.7% better than RankSVM with p-value close to zero. As a baseline comparison, the dcg-5 difference among the top search engines on this data set is about 2-3%. to decide when to stop the iteration. The middle panel of Figure 1 shows the number of contradicting pairs on the testing data first decreases and then gradually increases after a certain number of iterations. Since we also have available the absolute relevance judgments, we plot the dcgs against each iteration in the right panel of Figure 1. The dcg plot is almost a mirror image along the horizontal axis of the contradicting pairs plot on testing data. We mention that we have observed similar trends on other data sets we have tested. In order to demonstrate the effect of training data size, we randomly sample increasing percentages of training data and generate the corresponding pairwise preference data as described in section 3.1.1. The experimental results on the same testing data are reported as follows: The left panel of Figure 2 shows the number of contradicting pairs in the testing data decreases with the increasing size of training data. The dcg-5 for different training data size was shown in the right panel of Figure 2, which indicates a strongly positive correlation between the dcg gain and t he increase of training data size. Although our ma jor concern is about GBrank using relative relevance judgments, it is actually rather instructive to see how it compares with GBT when trained on the same absolute relevance judgments, of course for GBrank, we will need to convert the absolute relevance judgments to relative relevance judgments. This experiment is aimed at the third question. One interesting observation is that GBrank underperforms GBT when the training data size is small. One plausible explanation for this is that to rank ob jects according to a set of preferences, there needs to be enough overlaps among the preferences, for small amount of data, the overlaps are weak and hence the poorer performance. Why would GBrank outperform GBT when there are plenty of training data? We suspect the deeper reason lies at the fundamental difference between a ranking problem and a regression problem. GBT is designed for regression problems and is therefore not necessarily the optimal choice for ranking problems. We elaborate on this point a bit more now: ranking web search results is fundamentally a preference learning problem rather than a regression problem, i.e., we just want to rank more relevant results higher than those less relevance ones, we do not need to care about predicting the grades of 3.3.2 Experiments with clickthroughs data We also use the preference data extracted from user clickthroughs data as described in section 3.1.2. The comparison is for RankSVM and GBrank in a 5-fold cross validation setting. For this data set, we can no longer use GBT since we do not have the absolute relevance judgments. Tables 292 SIGIR 2007 Proceedings Session 12: Learning to Rank I 2 and 3 present the results with respect to the number of contradicting pairs metric as well as the precision at K % metric. Both tables again show that GBrank outperforms RankSVM. D C G for GBRank, GBT, and RankSVM in 5-fold cross validation 7. 8 7. 6 7. 4 7. 2 dcg-5 7 6. 8 6. 6 6. 4 6. 2 6 1 2 3 fol d number 4 5 GB Rank GB T Rank S VM G + log(d2 )1 . The DCG difference caused by the wrong 2 i+ 1 1 ordering is therefore |G(d1 ) - G(d2 )|[ log2 i+1 - log2 j +1 ]. During training, we can weigh each error term according to that difference. When the absolute relevance judgments are not availabe, we can just remove |G(d1 ) - G(d2 )|. 3) We menw tioned that we can also include the tied data, pairs xi , yi ith the same grade. One way to do that is to add the following to the set in Equation (2) to construct the training set for computing the regression function at each iteration, G ( d1 ) log2 j +1 ( xi , hk-1 (xi ) + hk-1 (yi ) hk-1 (xi ) + hk-1 (yi ) ), (y i , ) 2 2 ; Figure 3: DCG for GBrank, RankSVM in 5-fold cv GBT and N umber of contradicting pairs for GBRank, GBT, and RankSVM in 5fold cross validation 76000 74000 num ber of contradicting pairs 72000 70000 68000 66000 64000 62000 60000 58000 56000 54000 1 2 3 fold number 4 5 GB Rank GB T Rank S VM 4) as we mentioned before, our framework including GBrank is very flexible for combining relative and absolute relevance judgments. With any query-document feature vector xi and its grade gi , we just need to add (xi , gi ) to the set in Equation (2), and there is no need to modify the ob jective function. Such flexibility is desirable considering there are many queries having a single document with absolute relevance judgment(or documents with same absolute relevance judgment), and we could not extract any preference data from that. Acknowledgment. We thank Tong Zhang for suggesting a modification of the ob jective function (1) and Alex Simma for the use of LRT. 5. REFERENCES [1] R. Atterer, M. Wunk, and A. Schmidt. Knowing the user's every move: user activity tracking for website usability evaluation and implicit interaction. Proceedings of the 15th International Conference on World Wide Web, 203-212, 2006. [2] A. Berger. Statistical machine learning for information retrieval. Ph.D. Thesis, School of Computer Science, Carnegie Mellon University, 2001. [3] D. Bertsekas. Nonlinear programming. Athena Scientific, second edition, 1999. [4] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. Proceedings of international conference on Machine learning, 89­96, 2005. [5] H. Chen. Machine Learning for information retrieval: Neural networks, symbolic learning and genetic algorithms. JASIS, 46:194-216, 1995. [6] W. Cooper, F. Gey and A. Chen. Probabilistic retrieval in the TIPSTER collections: an application of staged logistic regression. Proceedings of TREC, 73-88, 1992. [7] D. Cossock and T. Zhang. Subset ranking using regression. COLT, 2006. [8] Y. Freund, R. Iyer, R. Schapire and Y. Singer. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4:933­969, 2003. [9] J. Friedman. Greedy function approximation: a gradient boosting machine. Ann. Statist., 29:1189-1232, 2001. [10] N. Fuhr. Optimum polynomial retrieval functions based on probability ranking principle. ACM Transactions on Information Systems, 7:183-204, 1989. Figure 4: Numb er of contradicting pairs for GBrank, GBT, and RankSVM in 5-fold cv 4. CONCLUSIONS AND FUTURE WORK In this paper we proposed a general regression framework for learning ranking functions from preference data. In particular, we developed GBrank, a specialization of our framework using gradient boosting trees as the regression method. When only preference data are available, GBrank provides a more flexible and effective solution to the problem of learning ranking functions. Even when absolute labels are available, our experiments suggest that it is preferable to first convert them into preference data and apply GBrank over them than directly apply G BT to the o riginal a b o slute l abels. There are several directions we can pursue to further enhance our approaches: 1) when converting absolute relevance data, we can overweigh the document pairs with larger grade difference. 2) weigh each error term in the loss function defined in Equation (1) with the DCG difference. Specifically, assume we have two documents: d1 and d2 . At the current iteration, d1 and d2 were ranked at position i and j respectively, where i < j . Suppose the resulted predicted preference contradicts with the true preference. Their dcg contribution with respect to the wrong ordering would be G ( d1 ) G + log(d2 )1 while that for the correct ordering should be log2 i+1 2 j+ 293 SIGIR 2007 Proceedings Session 12: Learning to Rank I Table 2: Numb er of contradicting pairs (CP) and precision (Prec) at K % for RankSVM on clickthroughs data %K 10% 20% 30% 40% 50% 60% 70% 80% 90% 100% fold 1 CP Prec 18 0.9143 41 0.9021 79 0.8744 141 0.8317 206 0.8034 265 0.7892 344 0.7653 425 0.7464 509 0.73 606 0.7106 fold 2 CP Prec 14 0.9333 30 0.9284 73 0.8839 118 0.8592 187 0.8216 255 0.7971 318 0.7831 397 0.7631 487 0.7416 579 0.7235 fold 3 CP Prec 11 0.9476 32 0.9236 73 0.8839 124 0.8521 186 0.8225 259 0.794 340 0.7681 424 0.747 507 0.731 597 0.7149 fold 4 CP Prec 16 0.9238 36 0.9141 67 0.8935 115 0.8628 175 0.833 242 0.8075 313 0.7865 394 0.7649 488 0.7411 582 0.7221 fold 5 CP Prec 18 0.9143 50 0.881 90 0.8571 146 0.8262 203 0.8067 257 0.7959 342 0.7672 420 0.7499 516 0.7268 627 0.7011 Table 3: Numb er of contradicting pairs (CP) and precision (Prec) at K % for GBrank on clickthroughs data %K 10% 20% 30% 40% 50% 60% 70% 80% 90% 100% fold 1 CP Prec 6 0.9714 24 0.9427 47 0.9253 95 0.8866 155 0.8521 218 0.8266 294 0.7995 373 0.7774 466 0.7528 542 0.7412 fold 2 CP Prec 6 0.9714 14 0.9666 48 0.9237 77 0.9081 129 0.8769 193 0.8465 261 0.822 342 0.7959 420 0.7772 505 0.7588 fold 3 CP Prec 6 0.9714 20 0.9523 52 0.9173 103 0.8771 164 0.8435 207 0.8353 281 0.8083 367 0.781 464 0.7538 555 0.735 fold 4 CP Prec 8 0.9619 23 0.9451 51 0.9189 86 0.8974 124 0.8817 192 0.8473 262 0.8213 333 0.8013 411 0.782 509 0.7569 fold 5 CP Prec 9 0.9571 30 0.9286 72 0.8857 112 0.8667 156 0.8514 211 0.8324 283 0.8074 365 0.7826 459 0.757 542 0.7417 [11] F. Gey, A. Chen, J. He and J. Meggs. Logistic regression at TREC4: probabilistic retrieval from full text document collections. Proceedings of TREC, 65-72, 1995. [12] K. J¨rvelin and J. Kek¨l¨inen. Cumulated gain-based a aa evaluation of IR techniques. ACM Transactions on Information Systems, 20:422-446, 2002. [13] T. Joachims. Optimizing search engines using clickthrough data. Proceedings of the ACM Conference on Know ledge Discovery and Data Mining, 2002. [14] T. Joachims. Evaluating retrieval performance using clickthrough data. Proceedings of the SIGIR Workshop on Mathematical/Formal Methods in Information Retrieval, 2002. [15] T. Joachims, L. Granka, B. Pang, H. Hembrooke, and G. Gay. Accurately Interpreting Clickthrough Data as Implicit Feedback. Proceedings of the Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2005. [16] J. Ponte and W. Croft. A language modeling approach to information retrieval. In Proceedings of the ACM Conference on Research and Development in Information Retrieval, 1998. [17] G. Salton. Automatic Text Processing. Addison Wesley, Reading, MA, 1989. [18] H. Turtle and W. B. Croft. Inference networks for document retrieval. In Proceedings of the Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 1-24, 1990. [19] H. Zha, Z. Zheng, H. Fu and G. Sun. Incorporating query difference for learning retrieval functions in world wide web search. Proceedings of the 15th ACM Conference on Information and Know ledge Management, 2006. [20] Diane Kelly and Jaime Teevan. Implicit Feedback for Inferring User Preference: A Bibliography. SIGIR Forum, 32:2, 2003. [21] F. Radlinski and T. Joachims. Query chains: Learning to rank from implicit feedback. Proceedings of the ACM Conference on Know ledge Discovery and Data Mining (KDD) , 2 0 0 5 . [22] C. Zhai and J. Lafferty. A risk minimization framework for information retrieval , Information Processing and Management, 42:31­55, 2006. 294