Invited Applications Paper

Web-Scale Bayesian Click-Through Rate Prediction for Sponsored Search Advertising in Microsoft's Bing Search Engine

Thore Graepel THOREG@MICROSOFT.COM
Joaquin Quiñonero Candela JOAQUINC@MICROSOFT.COM
Thomas Borchert TBORCHER@MICROSOFT.COM
Ralf Herbrich RHERB@MICROSOFT.COM
Microsoft Research Ltd., 7 J J Thomson Avenue, Cambridge CB3 0FB, UK

Abstract

We describe a new Bayesian click-through rate (CTR) prediction algorithm used for Sponsored Search in Microsoft's Bing search engine. The algorithm is based on a probit regression model that maps discrete or real-valued input features to probabilities. It maintains Gaussian beliefs over the weights of the model and performs Gaussian online updates derived from approximate message passing. Scalability of the algorithm is ensured through a principled weight pruning procedure and an approximate parallel implementation. We discuss the challenges arising from evaluating and tuning the predictor as part of the complex system of sponsored search, where the predictions made by the algorithm decide about future training sample composition. Finally, we show experimental results from the production system and compare to a calibrated Naïve Bayes algorithm.

1. Introduction

Sponsored search remains one of the most profitable business models on the web today. It accounts for the overwhelming majority of income for the three major search engines Google, Yahoo and Bing, and generates revenue of at least 25 billion dollars [1] per year and rising. All three major players use keyword auctions to allocate display space alongside the algorithmic search results based on a pay-per-click model in which advertisers are charged only if their advertisements are clicked by a user. In this mechanism it is necessary for the search engine to estimate the click-through rate (CTR) of available ads for a given search query to determine the best allocation of display space and appropriate payments (Edelman, Ostrovsky, & Schwarz, 2007). As a consequence, the task of CTR prediction is absolutely crucial to Sponsored Search advertising because it impacts user experience, profitability of advertising and search engine revenue.

Recognising the importance of CTR estimation for online advertising, management at Bing/adCenter decided to run a competition to entice people across the company to develop the most accurate and scalable CTR predictor. The algorithm described in this publication tied for first place in the first competition and won the subsequent competition based on prediction accuracy. As a consequence, it was chosen to replace Bing's previous CTR prediction algorithm, a transition that was completed in the summer of 2009.

The paper makes three major contributions. First, it describes the Sponsored Search application scenario, the key role of CTR prediction in general, and the particular constraints derived from the task, including accuracy, calibration, scalability, dynamics, and exploration. Second, it describes a new Bayesian online learning algorithm for binary prediction, subsequently referred to as adPredictor. The algorithm is based on a generalised linear model with a probit (cumulative Gaussian) link function and a factorising Gaussian belief distribution on the feature weights, and it calculates the approximate posterior using message passing, providing simple, closed-form update equations with automatic feature-wise learning rate adaptation. Third, we discuss the techniques we employed to make adPredictor work in Bing's production environment, now driving 100% of Sponsored Search traffic with approximately $10^{10}$ to $10^{11}$ ad impressions per year.

The paper is structured as follows. In Section 2 we describe in detail how the task of CTR prediction fits into the framework of keyword auctions and which constraints and challenges arise from the application domain of Sponsored Search. In Section 3 we describe the online Bayesian Probit Regression algorithm (adPredictor) in detail and provide a derivation of the update equations based on approximate message passing in a factor graph. In Section 4 we discuss how the algorithm operates at web scale, using accuracy-controlled pruning and an implementation of parallel training. In Section 5 we discuss how predictions affect the composition of future training data, and the problem of trading off exploration and exploitation. Before we conclude in Section 7, we provide experimental results from the live system comparing adPredictor's prediction accuracy with that of a calibrated Naïve Bayes classifier.

Appearing in Proceedings of the 27th International Conference on Machine Learning, Haifa, Israel, 2010. Copyright 2010 by T. Graepel, J. Quiñonero Candela, T. Borchert and R. Herbrich.

[1] Source: eMarketer, April 2009.

2. Sponsored Search and CTR Prediction

The Sponsored Search advertising model exploits two key aspects of web search. First, the query users enter into a search engine partly reveals their intent and can help identify appropriate ads to be displayed to the users. Second, by clicking on ads users can proceed directly to the advertisers' web pages, and the business value thus generated can easily be attributed to the web search engine. The lecture notes for the Introduction to Computational Advertising at Stanford (Broder & Josifovski, 2009) provide an excellent introduction.

2.1. Keyword Auction

In practice, the keyword auctions work as follows (Edelman, Ostrovsky, & Schwarz, 2007). For a given product or service, advertisers identify suitable keywords likely to be typed by users interested in their offering. For each of those keywords the advertisers provide a bid indicating the amount of money they would be willing to pay for a click. When a user types a query, the search engine matches the keywords of all the advertisers against the query and decides which advertisers are eligible to participate in an auction for having their ad displayed. The search engine needs to allocate the available ad positions to the ads in the auction and needs to determine appropriate payments. This is achieved by a mechanism referred to as a Generalized Second Price (GSP) auction. Let us refer to the bid of advertiser $i$ as $b_i$ and to the probability of click (CTR) of advertiser $i$ at the top display position as $p_i$. The allocation of ads to display positions is determined by their so-called rank score $p_i b_i$, which can be interpreted as the expected revenue for ad $i$ if displayed in the top position [2]. The indices $i$ are chosen according to that ranking, such that for all ads $i$ we have $p_i b_i \geq p_{i+1} b_{i+1}$. The payments $c_i$ in a GSP auction are designed to avoid dynamic bidding behaviour because the charge per impression for ad $i$ depends on the value per impression of ad $i+1$, such that $c_i = b_{i+1} p_{i+1} / p_i$. It can be seen that the estimated click-through rate $p_i$ plays a crucial role in determining both allocation and payments, and that it will have a crucial effect on the user experience, the advertiser value, and the general health and income of the ad marketplace.

[2] The calculation of the rank score may also involve other criteria such as the relevance of the ad landing page etc.
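To make the allocation and pricing rules above concrete, here is a minimal Python sketch (not part of the original paper) that ranks candidate ads by their rank score $p_i b_i$ and computes the GSP charge $c_i = b_{i+1} p_{i+1} / p_i$. The Ad record, the candidate list and the zero fallback charge are illustrative assumptions; a production system would also apply reserve prices and position-dependent click probabilities.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Ad:
    name: str   # hypothetical identifier, for illustration only
    bid: float  # advertiser's bid b_i (price offered per click)
    ctr: float  # estimated probability of click p_i at the top position


def gsp_auction(ads: List[Ad], num_slots: int) -> List[Tuple[Ad, float]]:
    """Rank ads by rank score p_i * b_i and compute GSP charges per click.

    The charge for the ad in position i is c_i = b_{i+1} * p_{i+1} / p_i,
    i.e. it depends on the rank score of the next-ranked ad.
    """
    ranked = sorted(ads, key=lambda ad: ad.ctr * ad.bid, reverse=True)
    allocation = []
    for i, ad in enumerate(ranked[:num_slots]):
        if i + 1 < len(ranked):
            runner_up = ranked[i + 1]
            charge = runner_up.bid * runner_up.ctr / ad.ctr
        else:
            charge = 0.0  # no competing ad below; a real system would apply a reserve price
        allocation.append((ad, charge))
    return allocation


# Toy candidates with made-up bids and estimated CTRs.
candidates = [Ad("A", bid=2.00, ctr=0.05), Ad("B", bid=1.50, ctr=0.08), Ad("C", bid=3.00, ctr=0.02)]
for ad, charge in gsp_auction(candidates, num_slots=2):
    print(f"{ad.name}: rank score = {ad.ctr * ad.bid:.3f}, charge per click = {charge:.2f}")
```

In this toy run, ad B wins the top slot and is charged 1.25 per click, the smallest price at which its expected revenue per impression still matches ad A's rank score.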
2.2. Input Features

We refer to an ad shown to a particular user in a particular page view as an ad impression. One of the key questions is the availability of suitable input features or predictor variables that allow accurate CTR prediction for a given impression (Richardson, Dominowska, & Ragno, 2007). These can generally be grouped into three categories: Ad features include bid phrases, ad title, ad text, landing page URL, the landing page itself [3], and a hierarchy of advertiser, account, campaign, ad group and ad. Query features include the search keywords, possible algorithmic query expansion, cleaning and stemming. Context features include display location, geographic location, time, user data and search history. Of course, these are only the base features which serve as the building blocks for more complex features modelling the interaction between ad, query and context. These more complex features can, e.g., be constructed by taking the Cartesian product of base features. As in most machine learning problems, constructing and selecting good features is one of the core challenges. For the learning algorithm one of the resulting challenges is the requirement to be able to handle discrete features of very different cardinalities, e.g., a two-valued feature such as gender and a billion-valued feature such as user ID.

[3] The user only gets to see the landing page once the click has been made. Over time, however, its quality can impact the perception of the advertiser and hence CTR.
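As a small illustration of how such composite features can be built from base features, the sketch below forms conjunction (Cartesian product) features by pairing the values of base features; the feature names and values are invented for the example and are not the production feature set.

```python
from typing import Dict, List, Tuple


def cross_features(base: Dict[str, str], pairs: List[Tuple[str, str]]) -> Dict[str, str]:
    """Add conjunction (Cartesian product) features to the base features of one impression.

    `base` maps a base feature name to the value it takes for this impression;
    every name and value here is made up for illustration.
    """
    crossed = dict(base)
    for a, b in pairs:
        crossed[f"{a}x{b}"] = f"{base[a]}_{base[b]}"
    return crossed


# One hypothetical impression described by a few base features.
impression = {"Gender": "F", "Position": "ML-1", "AdId": "123", "QueryLength": "3"}
print(cross_features(impression, [("Gender", "AdId"), ("Position", "QueryLength")]))
```

Note that crossing a two-valued feature such as gender with a billion-valued feature such as user ID yields a composite feature with up to two billion possible values, which illustrates why the learning algorithm has to cope with very different and very large cardinalities.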
2.3. Domain-Specific Challenges

2.3.1. EVALUATION

An important question is how to evaluate a predictor within the context of a given application domain. Broadly speaking, the performance of a predictor can be evaluated in isolation or as part of the larger system. To evaluate a predictor in isolation, the machine learning community has developed a number of reasonable measures such as the log-likelihood of test data under the model or the area under the receiver operating characteristic (ROC) curve (AUC). In the experimental section we will use these measures to evaluate adPredictor in comparison to calibrated Naïve Bayes. However, it is clear that these measures can only act as a proxy for the performance of the predictor in the larger system.

Ultimately, the predictor is part of a larger system that serves a purpose different from predicting user behaviour, namely the selection of ads. The ad selection system must be designed to balance the utilities of the different players participating in the transaction: advertisers, users, and the search engine. These three types of players have different, even contradictory, objectives. Advertisers are interested in maximising their return on investment at high volume. Users would like to see maximally relevant ads that help them pursue their intent. The search engine would like to maximise revenue and growth. Internally, these conflicting goals are mapped to different key performance indicators (KPIs) that are used to tune the ad selection system. However, these KPIs are influenced by a large number of other subsystems such as fraud detection, query expansion, keyword-query matching, etc. Furthermore, there are a large number of parameters influencing the KPIs, including reserve prices and rank-score parameters. So, while the ultimate test of a CTR predictor lies in its performance as part of the ad selection system, in a modular architecture it is often best to identify isolated performance measures as proxies for in-system performance.
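The two isolated measures mentioned above can be computed in a few lines. The following sketch uses NumPy and scikit-learn's roc_auc_score; the click labels and predicted CTRs are made-up placeholder data, not results from the system described in this paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def average_log_likelihood(y: np.ndarray, p: np.ndarray, eps: float = 1e-12) -> float:
    """Average per-impression log-likelihood of click labels y in {0, 1} under predicted CTRs p."""
    p = np.clip(p, eps, 1.0 - eps)  # guard against log(0)
    return float(np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))


# Placeholder data: 1 = click, 0 = no click, with made-up predicted CTRs.
y = np.array([0, 0, 1, 0, 1, 0])
p = np.array([0.02, 0.10, 0.30, 0.05, 0.45, 0.20])

print("average log-likelihood:", average_log_likelihood(y, p))
print("AUC:", roc_auc_score(y, p))
```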
2.3.2. DYNAMICS AND EXPLORATION

A further challenge is that the predictions made by the algorithm determine which ads are shown and hence the composition of future training data, giving rise to a trade-off between exploration and exploitation; this aspect is discussed in more detail in Section 5. In addition, the prediction algorithm must be able to handle features with potentially billions of different values, and it must be able to handle highly correlated input features as might be present in the nodes of the ad hierarchy (advertiser, account, campaign, etc.). Furthermore, the prediction algorithm itself needs to have a bounded memory footprint in RAM to be able to run continuously in the production system.

3. Online Bayesian Probit Regression

The new algorithm presented here is a general Bayesian online learning algorithm for the prediction of binary outcomes. However, in the context of this paper, we will use terminology related to the task of CTR prediction.

3.1. Task and Notation

We aim to learn a mapping $X \to [0,1]$, where $X$ denotes the set of ad impressions as represented by their feature descriptions, and the interval $[0,1]$ represents the set of possible CTRs (probabilities of click). In this application, we consider the case of impressions that are described by $N$ discrete multi-valued features. The prediction is obtained from a generalised linear model whose probit (cumulative Gaussian) inverse link function maps the output of the linear model in $(-\infty, \infty)$ to $[0,1]$; the parameter $\beta$ scales the steepness of the inverse link function. In order to arrive at a Bayesian online learning algorithm we postulate a factorising Gaussian prior distribution over the weights of the model: