Competing in the Dark: An Efficient Algorithm for Bandit Linear Optimization

Jacob Abernethy, Computer Science Division, UC Berkeley, jake@cs.berkeley.edu
Elad Hazan, IBM Almaden, hazan@us.ibm.com
Alexander Rakhlin, Computer Science Division, UC Berkeley, rakhlin@cs.berkeley.edu

Abstract

We introduce an efficient algorithm for the problem of online linear optimization in the bandit setting which achieves the optimal $O(\sqrt{T})$ regret. The setting is a natural generalization of the nonstochastic multi-armed bandit problem, and the existence of an efficient optimal algorithm has been posed as an open problem in a number of recent papers. We show how the difficulties encountered by previous approaches are overcome by the use of a self-concordant potential function. Our approach presents a novel connection between online learning and interior point methods.

1 Introduction

One's ability to learn and make decisions rests heavily on the availability of feedback. Indeed, an agent may only improve itself when it can reflect on the outcomes of its own taken actions. In many environments feedback is readily available: a gambler, for example, can observe entirely the outcome of a horse race regardless of where he placed his bet. But such perspective is not always available in hindsight. When the same gambler chooses his route to travel to the race track, perhaps at a busy hour, he will likely never learn the outcome of possible alternatives. When betting on horses, the gambler thus has the benefit (or perhaps the detriment) to muse "I should have done...", yet when betting on traffic he can only think "the result was...".

This problem of sequential decision making was stated by Robbins [18] in 1952 and was later termed "the multi-armed bandit problem". The name derives from the model whereby, on each of a sequence of rounds, a gambler must pull the arm on one of several slot machines ("one-armed bandits") that each returns a reward chosen stochastically from a fixed distribution. Of course, an ideal strategy would simply be to pull the arm of the machine with the greatest rewards. However, as the gambler does not know the best arm a priori, his goal is then to maximize the reward of his strategy relative to the reward he would receive had he known the optimal arm. This problem has gained much interest over the past 20 years in a number of fields, as it presents a very natural model of an agent seeking to simultaneously explore the world while exploiting high-reward actions.

As early as 1990 [8, 13] the sequential decision problem was studied under adversarial assumptions, where we assume the environment may even try to hurt the learner. The multi-armed bandit problem was brought into the adversarial learning model in 2002 by Auer et al [1], who showed that one may obtain nontrivial guarantees on the gambler's performance relative to the best arm even when the arm values are chosen by an adversary! In particular, Auer et al [1] showed that the gambler's regret, i.e. the difference between the gain of the best arm and the gain of the gambler, can be bounded by $O(\sqrt{NT})$, where $N$ is the number of bandit arms and $T$ is the length of the game. In comparison, for the game where the gambler is given full information about alternative arms (such as the horse racing example mentioned above), it is possible to obtain $O(\sqrt{T\log N})$, which scales better in $N$ but identically in $T$.

One natural and well-studied problem which escapes the Auer et al result is that of "online shortest path", considered in [11, 20] among others.
In this problem the decision set is exponentially large (i.e., the set of all paths in a given graph), and the straightforward reduction of modeling each path as an arm for the multi-armed bandit problem suffers from both efficiency issues and regret exponential in the description length of the graph. To cope with these issues, several authors [2, 9, 14] have recently proposed a very natural generalization of the multi-armed bandit problem to the field of convex optimization, which we will call "bandit linear optimization". In this setting we imagine that, on each round $t$, an adversary chooses some linear function $f_t(\cdot)$ which is not revealed to the player. The player then chooses a point $x_t$ within some given convex set $K \subseteq \mathbb{R}^n$. (In the case of online shortest path, the convex set can be represented as a set of vectors in $\mathbb{R}^{|E|}$; hence, the dependence on the number of paths in the graph can be circumvented.) The player then suffers $f_t(x_t)$ and this quantity is revealed to him. This process continues for $T$ rounds, and at the end the learner's payoff is his regret:
$$R_T = \sum_{t=1}^T f_t(x_t) - \min_{x^* \in K}\sum_{t=1}^T f_t(x^*).$$

Online linear optimization has often been considered, yet primarily in the full-information setting where the learner sees all of $f_t(\cdot)$ rather than just $f_t(x_t)$. In the full-information model, it has been known for some time that the optimal regret bound is $O(\sqrt{T})$, and it had been conjectured that the same should hold for the bandit setting as well. Nevertheless, several initially proposed algorithms were shown only to obtain bounds of $O(T^{3/4})$ (e.g. [14, 9]) or $O(T^{2/3})$ (e.g. [2, 7]). Only recently was this conjecture proven to be true by Dani et al. [6], who provided an algorithm with $O(\mathrm{poly}(n)\sqrt{T})$ regret. However, their proposed method, which deploys a clever reduction to the multi-armed bandit algorithm of Auer et al [1], is not efficient.

We propose an algorithm for online linear bandit optimization that is the first, we believe, to be both computationally efficient and to achieve an $O(\mathrm{poly}(n)\sqrt{T})$ regret bound. Moreover, with a thorough analysis we aim to shed light on the difficulties in obtaining such an algorithm. Our technique provides a curious link between the notion of Bregman divergences, which have often been used for constructing and analyzing online learning algorithms, and self-concordant barriers, which are of great importance in the study of interior point methods in convex optimization. A rather surprising consequence is that divergence functions, which are widely used as a regularization tool in online learning, provide the right perspective for the problem of managing uncertainty given limited feedback. To our knowledge, this is the first time such connections have been made.

2 Notation and Motivation

Let $K \subset \mathbb{R}^n$ be a compact closed convex set. For two vectors $x, y \in \mathbb{R}^n$, we denote their dot product by $x^{\top}y$. We write $A \succeq B$ if $(A - B)$ is positive semi-definite. Let
$$D_R(x, y) := R(x) - R(y) - \nabla R(y)^{\top}(x - y)$$
be the Bregman divergence between $x$ and $y$ with respect to a convex differentiable $R$. Define the Minkowski function (see page 34 of [16] for details) on $K$, parametrized by a pole $y$, as
$$\pi_{y}(x) = \inf\{t \geq 0 : y + t^{-1}(x - y) \in K\}.$$
We define a scaled version of $K$ by $K_\delta = \{u : \pi_{x_1}(u) \leq (1+\delta)^{-1}\}$ for $\delta > 0$. Here $x_1$ is a "center" of $K$ defined in the later sections. We assume that $K$ is not "flat", so that $x_1$ is a constant distance away from the boundary.

In the rest of the section we describe the rich body of previous work which led to our result. The reader familiar with online optimization in the full and partial information settings can skip directly to the next section.

The online linear optimization problem is defined as the following repeated game between the learner (player) and the environment (adversary). At each time step $t = 1$ to $T$:
· Player chooses $x_t \in K$
· Adversary independently chooses $f_t \in \mathbb{R}^n$
· Player suffers loss $f_t^{\top}x_t$ and observes feedback

The goal of the Player is not simply to minimize his total loss $\sum_{t=1}^T f_t^{\top}x_t$, for an adversary could simply choose $f_t$ to be as large as possible at every point in $K$. Rather, the Player's goal is to minimize his regret $R_T$ defined as
$$R_T := \sum_{t=1}^T f_t^{\top}x_t - \min_{x^* \in K}\sum_{t=1}^T f_t^{\top}x^*.$$
When the objective is his regret, the Player is not competing against arbitrary strategies; he need only perform well relative to the total loss of the single best fixed point in $K$.

We distinguish the full-information and bandit versions of the above problem. In the full-information version, the Player may observe the entire function $f_t$ as his feedback and can exploit this in making his decisions. In this paper we study the more challenging bandit setting, where the feedback provided to the player on round $t$ is only the scalar value $f_t^{\top}x_t$. This is significantly less information for the Player: instead of observing the entire function $f_t$, he may only witness the value of $f_t$ at a single point.

2.1 Algorithms Based on Full Information

All previous work on bandit online learning, including the present one, relies heavily on techniques developed in the full-information setting, and we now give a brief overview of some well-known approaches.

Follow The Leader (FTL) is perhaps the simplest online learning strategy one might think of: the player simply uses the heuristic "select the best choice thus far". For the online optimization task we study, this can be written as
$$x_{t+1} := \arg\min_{x \in K}\sum_{s=1}^{t} f_s^{\top}x. \qquad (1)$$
For certain types of problems, applying FTL does guarantee low regret. Unfortunately, when the loss functions $f_t$ are linear on the input space, it can be shown that FTL will suffer regret that grows linearly in $T$.

A natural approach, and one more well known within statistical learning, is to regularize the optimization problem (1). That is, an appropriate regularization function $R(x)$ and a trade-off parameter $\lambda$ are selected, and the prediction is obtained as
$$x_{t+1} := \arg\min_{x \in K}\Big[\sum_{s=1}^{t} f_s^{\top}x + \lambda R(x)\Big]. \qquad (2)$$
We call the above approach Follow The Regularized Leader (FTRL). An alternative way to view this exact algorithm is by sequential updates, which capture the difference between consecutive solutions for FTRL. Given that $R$ is convex and differentiable, the general form of this update is
$$\tilde{x}_{t+1} = \nabla R^{*}\big(\nabla R(\tilde{x}_t) - \eta f_t\big), \qquad (3)$$
followed by a projection onto $K$ with respect to the divergence $D_R$:
$$x_{t+1} = \arg\min_{u \in K} D_R(u, \tilde{x}_{t+1}).$$
Here $R^{*}$ is the Fenchel dual function and $\eta$ is a parameter. This procedure is known as mirror descent (e.g. [5]). In the context of classification, this approach has been formulated and analyzed by Shalev-Shwartz and Singer [19]. Applying the above rule, we see that the well-known Online Gradient Descent algorithm [21, 10] is derived by choosing the regularizer to be the squared Euclidean norm (strictly speaking, this equivalence holds if the updates are applied to unprojected versions of $x_t$). Similarly, the Exponentiated Gradient [12] algorithm is obtained with the entropy function as the regularizer.
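To make the regularization viewpoint concrete, the following small sketch (ours, not from the paper) runs the FTRL update (2) with the squared Euclidean norm as the regularizer on a toy feasible set, the unit ball; for linear losses this coincides, up to projections, with Online Gradient Descent. The set, losses, and parameter values are illustrative assumptions.

```python
import numpy as np

def project_to_unit_ball(x):
    """Euclidean projection onto the toy feasible set K = {x : ||x|| <= 1}."""
    norm = np.linalg.norm(x)
    return x if norm <= 1.0 else x / norm

def ftrl_linear(losses, eta):
    """FTRL, update (2), with regularizer R(x) = ||x||^2 / (2*eta) on the unit ball.

    For linear losses, minimizing eta*<sum_s f_s, x> + ||x||^2/2 over the ball is
    exactly the projection of -eta * (cumulative loss vector) onto the ball, so the
    method coincides (up to projections) with Online Gradient Descent."""
    played = []
    cumulative = np.zeros_like(losses[0])
    x = np.zeros_like(losses[0])            # x_1 = arg min R(x)
    for f in losses:
        played.append(x)                    # predict, then observe f
        cumulative = cumulative + f
        x = project_to_unit_ball(-eta * cumulative)
    return played

# toy run against random linear losses
rng = np.random.default_rng(0)
fs = [rng.normal(size=3) for _ in range(200)]
xs = ftrl_linear(fs, eta=0.1)
F = sum(fs)
u_star = -F / np.linalg.norm(F)             # best fixed point in the ball in hindsight
regret = sum(f @ x for f, x in zip(fs, xs)) - F @ u_star
print(f"regret against the best fixed point: {regret:.2f}")
```

Swapping the squared norm for the entropy function (and the ball for the simplex) would give the Exponentiated Gradient update instead.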
This unified view of various well-known algorithms as solutions to regularization problems gives us an important degree of freedom: the choice of the regularizer. Indeed, we will choose a regularizer for our problem that possesses key properties needed for the regret to scale as $O(\sqrt{T})$. In Section 4 we give a bound on the regret for (2) with any regularizer $R$, and in Section 5 we discuss the specific $R$ used in this paper.

2.2 The Dilemma of Bandit Optimization

Effectively all previous algorithms for the bandit setting have utilized a reduction to the full-information setting in one way or another. This is reasonable: any algorithm that aims for low regret in the bandit setting would necessarily have to achieve low regret given full information. Furthermore, as the full-information online learning setting is relatively well understood, it is natural to exploit such techniques for this more challenging problem.

The crucial reduction that has been utilized by several authors [1, 2, 6, 7, 14] is the following. First choose some full-information online learning algorithm A. A will receive input vectors $\tilde{f}_1, \ldots, \tilde{f}_t$, corresponding to previously observed functions, and will return some point $x_{t+1} \in K$ to predict. On every round $t$, do one or both of the following:
· Query A for its prediction $x_t$ and either predict $x_t$ exactly or in expectation.
· Construct some random estimate $\tilde{f}_t$ in such a way that $\mathbb{E}\tilde{f}_t = f_t$, and input $\tilde{f}_t$ into A as though it had been observed on this round.

The key idea here is simple: so long as we are roughly predicting $x_t$ per the advice of A, and so long as we are "guessing" $f_t$ (i.e., the estimate $\tilde{f}_t$ is correct in expectation), then we can guarantee low regret. This approach is validated in Lemma 3, which shows that, as long as A performs well against the random estimates $\tilde{f}_t$ in expectation, then we will also do well against the true functions $f_1, \ldots, f_T$.

This observation is quite reassuring, yet unfortunately does not address a significant obstacle: how can we simultaneously estimate $\tilde{f}_t$ and predict $x_t$ when only one query is allowed? The algorithm faces an inherent dilemma: whether to follow the advice of A and predict $x_t$, or to try to estimate $f_t$ by sampling in a wide region around $K$, possibly hurting its performance on the given round. This exploration-exploitation trade-off is the primary source of difficulty in obtaining $O(\sqrt{T})$ guarantees on the regret. Roughly two categories of approaches have been suggested to perform both exploration and exploitation:

1. Alternating Explore/Exploit: Flip an $\epsilon$-biased coin to determine whether to explore or exploit. On explore rounds, sample uniformly on some wide region around $K$, estimate $f_t$ accordingly, and input this into A. On exploit rounds, query A for $x_t$ and predict this.
2. Simultaneous Explore/Exploit: Query A for $x_t$ and construct a random vector $X_t$ such that $\mathbb{E}X_t = x_t$. Construct $\tilde{f}_t$ randomly based on the outcome of $X_t$ and the learned value $f_t^{\top}X_t$.

The methods of [14, 2, 7] fit within the first category but, unfortunately, fail to obtain the desired $O(\mathrm{poly}(n)\sqrt{T})$ regret. This is not surprising: it has been suggested by [7] that $\Omega(T^{2/3})$ regret is unavoidable by any algorithm in which the observation $f_t^{\top}x_t$ is ignored on rounds pledged for exploitation. Algorithms falling into the second category, such as those of [1, 6, 9], are more sophisticated and help to motivate our results.
We review these methods below.

2.3 Methods For Simultaneous Exploration and Exploitation

On first glance, it is rather surprising that one can perform the task of predicting some $x_t$ (in expectation) while, simultaneously, finding an unbiased estimate of $f_t$. To get a feel for how this can be done, we briefly review the methods of [1] and [9] below.

The work of Auer et al [1] is not, strictly speaking, concerned with a general bandit optimization problem but rather with the simpler "multi-armed bandit" problem. The authors consider the problem of sequentially choosing one of $N$ "arms", each of which contains a hidden loss, where the learner may only see the loss of his chosen arm. The regret, in this case, is the learner's loss minus the smallest cumulative loss over all arms. This multi-armed bandit problem can indeed be cast as a bandit optimization problem: let $K$ be the $N$-simplex (the convex hull of $\{e_1, \ldots, e_N\}$), let $f_t$ be identically the vector of hidden losses on the set of arms, and note that $\min_{x \in K}\sum_s f_s^{\top}x = \min_i\sum_s f_s[i]$.

The algorithm of [1], EXP3, utilizes EG (mentioned earlier) as its black-box full-information algorithm A. First, a point $x_t \in K$ is returned by A. The hypothesis $x_t$ is then biased slightly:
$$x_t \leftarrow (1-\gamma)\,x_t + \gamma\left(\tfrac{1}{n}, \ldots, \tfrac{1}{n}\right).$$
We describe the need for this bias in Section 2.4. EXP3 then randomly chooses one of the corners of $K$ according to the distribution $x_t$ and uses this as its prediction. More precisely, a basis vector $e_i$ is sampled with probability $x_t[i]$, and clearly $\mathbb{E}_{I \sim x_t}\,e_I = x_t$. Once we observe $f_t^{\top}e_i = f_t[i]$, the estimate is constructed as follows:
$$\tilde{f}_t := \frac{f_t[i]}{x_t[i]}\,e_i.$$
It is very easy to check that $\mathbb{E}\tilde{f}_t = f_t$.

Flaxman et al [9] developed a bandit optimization algorithm that uses OGD as the full-information subroutine A. Their approach uses a quite different method of performing exploration and exploitation. On each round, the algorithm queries A for a hypothesis $x_t$ and, as in [1], this hypothesis is biased slightly:
$$x_t \leftarrow (1-\gamma)\,x_t + \gamma u,$$
where $u$ is some "center" vector of the set $K$. Similarly to EXP3, the algorithm does not actually predict $x_t$. The algorithm determines the distance $r$ to the boundary of the set, and a vector $rv$ is sampled uniformly at random from the sphere of radius $r$. The prediction is $y_t := x_t + rv$, and indeed $\mathbb{E}y_t = x_t + r\,\mathbb{E}v = x_t$ as desired. The algorithm predicts $y_t$, receives feedback $f_t^{\top}y_t$, and the function $f_t$ is estimated as
$$\tilde{f}_t := \frac{n\,(f_t^{\top}y_t)}{r}\,v.$$
It is, again, easy to check that this provides an unbiased estimate of $f_t$.
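As a concrete illustration of these two estimators, the sketch below (ours) draws many one-point samples and checks numerically that both constructions are unbiased for a hidden loss vector; the particular points, radius, and scaling constants are our own toy choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
f = rng.normal(size=n)                    # hidden loss vector, unknown to the player

# EXP3-style estimator on the simplex: play a corner, reweight by its probability
x = np.full(n, 1.0 / n)                   # current point in the simplex
def exp3_estimate():
    i = rng.choice(n, p=x)                # e_i is played with probability x[i]
    f_tilde = np.zeros(n)
    f_tilde[i] = f[i] / x[i]              # observed f^T e_i = f[i], scaled by 1/x[i]
    return f_tilde

# sphere-sampling estimator in the style of Flaxman et al
x0 = np.zeros(n)                          # current point, center of the sampling sphere
r = 0.5                                   # distance to the boundary (toy value)
def sphere_estimate():
    v = rng.normal(size=n)
    v /= np.linalg.norm(v)                # uniform direction on the unit sphere
    y = x0 + r * v                        # prediction, E[y] = x0
    return (n / r) * (f @ y) * v          # n/r scaling makes the estimate unbiased

print("true f              :", np.round(f, 3))
print("mean EXP3 estimate  :", np.round(np.mean([exp3_estimate() for _ in range(100000)], axis=0), 3))
print("mean sphere estimate:", np.round(np.mean([sphere_estimate() for _ in range(100000)], axis=0), 3))
```

Note in both cases how the estimate is scaled by the inverse of a probability or of a distance to the boundary, which is exactly the source of the high variance discussed next.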
2.4 The Curse of High Variance and the Blessing of Regularization

Upon inspecting the definitions of $\tilde{f}_t$ in the methods of Auer et al and Flaxman et al, it becomes apparent that the estimates are inversely proportional to the distance of $x_t$ to the boundary. This implies high variance of the estimated functions. At first glance, this seems to be a disaster. Indeed, most full-information algorithms scale linearly with the magnitude of the functions played by the environment. Let us take a closer look at how exactly this leads to the suboptimality of the algorithm of Flaxman et al.

The bound on the expected regret of OGD on the $\tilde{f}_t$'s involves the terms $\mathbb{E}\|\tilde{f}_t\|^2$ (see the proof of Lemma 2), which scale as the inverse of the squared distance to the boundary. Biasing $x_t$ away from the boundary leads to an upper bound on this quantity of the order $\gamma^{-2}$. Unfortunately, $\gamma$ cannot be taken to be large. Indeed, the optimal point $x^*$, chosen in hindsight, lies on the boundary of the set, as the cost functions are linear. Thus, stepping away from the boundary comes at a cost of potentially losing $O(\gamma T)$ over the course of the game. Since the goal is to obtain an $O(\sqrt{T})$ bound on the regret, $\gamma = O(T^{-1/2})$ is the most that can be tolerated. Biasing away from the boundary does reduce the variance of the estimates somewhat; unfortunately, it is not a panacea.

To conclude the discussion of the method of Flaxman et al, we state the dependence of the regret bound on the learning rate $\eta$ and the biasing parameter $\gamma$:
$$R_T = O\!\left(\eta^{-1} + \eta\gamma^{-2}T + \gamma T\right).$$
The first term is due to the distance between the initial choice and the comparator; the second is the problematic $\mathbb{E}\|\tilde{f}_t\|^2$ term summed over time; and the last term is due to stepping away from the boundary. The best choice of the parameters leads to the unsatisfying $O(T^{3/4})$ bound.

From the above discussion it is clear that the problematic term is $\mathbb{E}\|\tilde{f}_t\|^2 = O(1/r^2)$, owing its high magnitude to its inverse dependence on the squared distance to the boundary. A similar dependence occurs in the estimate of Auer et al, though the non-uniform sampling from the basis implies an $O(1/x_t[i])$ magnitude. One can ask whether this inverse dependence on the distance is an artifact of these algorithms and can be avoided. In fact, it is possible to prove that it is intrinsic to the problem if we require that $\tilde{f}_t$ be unbiased and $x_t$ be the center of the sampling distribution.

Does this result imply that no $O(\sqrt{T})$ bound on the regret is possible? Fortunately, no. If we restrict our search to a regularization algorithm of the type (2), the expected regret can be shown to equal an expression involving $\mathbb{E}\,D_R(x_t, x_{t+1})$ terms. For $R(x) \propto \|x\|^2$ we indeed recover (modulo projections) the method of Flaxman et al with its insurmountable hurdle of $\mathbb{E}\|\tilde{f}_t\|^2$. Fortunately, other choices of $R$ have better behavior.

Here, the formulation of the regularized minimization (2) as a dual-space mirror descent comes to the rescue. In the space of gradients (the dual space), the step-wise updates (3) for Follow The Regularized Leader are steps of $\eta\tilde{f}_t$ no matter what $R$ we choose. It is a known fact (e.g. [5]) that the divergence in the original space between $x_t$ and $x_{t+1}$ is equal to the divergence between the corresponding gradients with respect to the dual potential $R^*$. It is, therefore, not surprising that the dual divergence can be tuned to be small even if $\tilde{f}_t$ is very large. Having a small divergence corresponds to the requirement that $R^*$ be "flat" whenever $\tilde{f}_t$ is large, i.e. when $x_t$ is close to the boundary. Flatness in the dual space corresponds to large curvature in the primal. This motivates the use of a potential function $R$ which becomes more and more curved at the boundary of the set $K$. In a nutshell, this is the Blessing of Regularization which allows us to obtain an efficient optimal algorithm that had escaped all previous attempts.

Recall that the method of Auer et al attains the optimal $O(\sqrt{T})$ rate but only when $K$ is the simplex. If our intuition about the importance of regularization is sound, we should find that the method uses a potential which curves at the edges of the simplex. One can see that the exponential weights (more generally, EG) used by Auer et al corresponds to regularization with $R$ being the entropy function $R(x) = \sum_{i=1}^n x[i]\log x[i]$. Taking the second derivative, we see that, indeed, the curvature increases as $1/x[i]$ as $x$ gets closer to the boundary. For the present paper, we will actually choose a regularizer that curves as the inverse squared distance to the boundary.
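The curvature blow-up near the boundary is easy to see numerically. The following sketch (ours) compares the second derivative of the one-dimensional entropy regularizer, which grows like $1/x$, with that of the log-barrier for the interval $[0,1]$, which grows like $1/x^2$; the latter is the kind of behavior the regularizer chosen in this paper will have.

```python
import numpy as np

def entropy_curvature(x):
    # R(x) = x log x  =>  R''(x) = 1 / x
    return 1.0 / x

def log_barrier_curvature(x):
    # R(x) = -log(x) - log(1 - x), a barrier for [0, 1]  =>  R''(x) = 1/x^2 + 1/(1-x)^2
    return 1.0 / x**2 + 1.0 / (1.0 - x) ** 2

for x in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(f"x = {x:7.4f}   entropy R'' = {entropy_curvature(x):10.1f}   "
          f"barrier R'' = {log_barrier_curvature(x):16.1f}")
```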
The reader can probably guess that such a regularizer should be defined, roughly, as the log-distance to the boundary. While for simple convex bodies, such as the sphere, the existence of a function behaving like the log-distance to the boundary seems plausible, a similar statement for general convex sets $K$ seems very complex. Luckily, this very question has been studied in the theory of interior point methods, and the existence and construction of such functions, called self-concordant barriers, is well established.

3 Main Result

We first state our main result: an algorithm for online linear optimization in the bandit setting for an arbitrary compact convex set $K$. The analysis of this algorithm has a number of facets and we discuss these individually throughout the remainder of this paper. In Section 4 we describe the regularization framework in detail and show how the regret can be computed in terms of Bregman divergences. In Section 5 we review the theory of self-concordant functions and state two important properties of such functions. In Section 6 we highlight several key elements of the proof of our regret bound. In Section 7 we show how this algorithm can be used for one interesting case, namely the bandit version of the Online Shortest Path problem. The precise analysis of our algorithm is given in Section 8. Finally, in Section 9 we spell out how to implement the algorithm with only one iteration of the Damped Newton method per time step.

The following theorem is the main result of this paper (see Section 5 for the definition of a $\vartheta$-self-concordant barrier).

Theorem 1 Let $K$ be a convex set and $R$ be a $\vartheta$-self-concordant barrier on $K$. Let $u$ be any vector in $K' = K_{1/\sqrt{T}}$. Suppose we have the property that $|f_t^{\top}x| \leq 1$ for any $x \in K$. Setting $\eta = \frac{\sqrt{\vartheta\log T}}{4n\sqrt{T}}$, the regret of Algorithm 1 is bounded as
$$\mathbb{E}\sum_{t=1}^T f_t^{\top}y_t \;\leq\; \min_{u \in K'}\mathbb{E}\sum_{t=1}^T f_t^{\top}u \;+\; 16\,n\sqrt{\vartheta\,T\log T}$$
whenever $T > 8\vartheta\log T$.

The expected regret over the original set $K$ is within an additive $O(\sqrt{nT})$ term of the above guarantee, as implied by Lemma 8 in the Appendix.

Algorithm 1 Bandit Online Linear Optimization
1: Input: $\eta > 0$, $\vartheta$-self-concordant $R$.
2: Let $x_1 = \arg\min_{x \in K} R(x)$.
3: for $t = 1$ to $T$ do
4: Let $\{e_1, \ldots, e_n\}$ and $\{\lambda_1, \ldots, \lambda_n\}$ be the eigenvectors and eigenvalues of $\nabla^2R(x_t)$.
5: Choose $i_t$ uniformly at random from $\{1, \ldots, n\}$ and $\varepsilon_t = \pm 1$ with probability $1/2$.
6: Predict $y_t = x_t + \varepsilon_t\lambda_{i_t}^{-1/2}e_{i_t}$.
7: Observe the gain $f_t^{\top}y_t \in \mathbb{R}$.
8: Define $\tilde{f}_t := n\,(f_t^{\top}y_t)\,\varepsilon_t\,\lambda_{i_t}^{1/2}\,e_{i_t}$.
9: Update $x_{t+1} = \arg\min_{x \in K}\big[\eta\sum_{s=1}^{t}\tilde{f}_s^{\top}x + R(x)\big]$.
10: end for

4 Regularization Algorithms and Bregman Divergences

As our algorithm is clearly based on a regularization framework, we now state a general result for the performance of any algorithm minimizing the regularized empirical loss. We call this method Follow The Regularized Leader, and we defer the proof of the regret bound to the Appendix. A similar analysis for convex loss functions can be found in [5], Chapter 11. We remark that the use of Bregman divergences in the context of online learning goes back at least to Kivinen and Warmuth [12].

Let $\tilde{f}_1, \ldots, \tilde{f}_T \in \mathbb{R}^n$ be any sequence of vectors. Suppose $x_{t+1}$ is obtained as
$$x_{t+1} = \arg\min_{x \in K}\underbrace{\Big[\eta\sum_{s=1}^{t}\tilde{f}_s^{\top}x + R(x)\Big]}_{\Phi_t(x)} \qquad (4)$$
for some strictly convex differentiable function $R$. We denote $\Phi_0(x) = R(x)$ and $\Phi_t(x) = \Phi_{t-1}(x) + \eta\tilde{f}_t^{\top}x$. We will assume that $R$ approaches infinity at the boundary of $K$, so that the unconstrained minimization problem has a unique solution within $K$. We have the following bound on the performance of such an algorithm.

Lemma 2 For any $u \in K$, the algorithm defined by (4) enjoys the following regret guarantee:
$$\sum_{t=1}^T\tilde{f}_t^{\top}(x_t - u) \;\leq\; \eta^{-1}\Big(D_R(u, x_1) + \sum_{t=1}^T D_R(x_t, x_{t+1})\Big) \;\leq\; \eta^{-1}D_R(u, x_1) + \sum_{t=1}^T\tilde{f}_t^{\top}(x_t - x_{t+1})$$
for any sequence $\{\tilde{f}_t\}_{t=1}^T$.

In addition, we state a useful result that bounds the true regret based on the regret against the estimated functions $\tilde{f}_t$.

Lemma 3 Suppose that, for $t = 1, \ldots, T$, $\tilde{f}_t$ is such that $\mathbb{E}\tilde{f}_t = f_t$ and $y_t$ is such that $\mathbb{E}y_t = x_t$. Suppose that we have the following regret bound:
$$\sum_{t=1}^T\tilde{f}_t^{\top}x_t \;\leq\; \min_{u \in K'}\sum_{t=1}^T\tilde{f}_t^{\top}u + C_T.$$
Then the expected regret satisfies
$$\mathbb{E}\sum_{t=1}^T f_t^{\top}y_t \;\leq\; \min_{u \in K'}\mathbb{E}\sum_{t=1}^T f_t^{\top}u + C_T.$$
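For intuition, here is a self-contained sketch (ours, not the authors' code) of Algorithm 1 on a toy two-dimensional box with the standard log-barrier as the $\vartheta$-self-concordant $R$; the FTRL step (line 9) is solved here by a short damped Newton loop purely for illustration, while Section 9 shows that a single damped Newton step per round suffices. All constants and the feasible set are illustrative assumptions.

```python
import numpy as np

# Feasible set K = {x : A x <= b}; here a toy box [-1, 1]^2 (illustrative choice).
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.ones(4)

def barrier_grad_hess(x):
    """Gradient and Hessian of the log-barrier R(x) = -sum_i log(b_i - a_i^T x)."""
    s = b - A @ x
    g = A.T @ (1.0 / s)
    H = (A.T * (1.0 / s**2)) @ A            # sum_i a_i a_i^T / s_i^2
    return g, H

def ftrl_step(cum_f, eta, x0, iters=20):
    """Minimize eta*<cum_f, x> + R(x) by a damped Newton loop (full minimization,
    used here only for illustration)."""
    x = x0.copy()
    for _ in range(iters):
        g, H = barrier_grad_hess(x)
        g = eta * cum_f + g
        step = np.linalg.solve(H, g)
        lam = np.sqrt(max(g @ step, 0.0))   # Newton decrement
        x = x - step / (1.0 + lam)          # damped step keeps x inside K
    return x

def bandit_round(x, cum_f, f_true, eta, rng):
    lams, vecs = np.linalg.eigh(barrier_grad_hess(x)[1])
    i = rng.integers(len(lams))
    eps = rng.choice([-1.0, 1.0])
    y = x + eps * vecs[:, i] / np.sqrt(lams[i])        # sample on the Dikin ellipsoid
    loss = f_true @ y                                  # the only feedback we receive
    f_tilde = len(x) * loss * eps * np.sqrt(lams[i]) * vecs[:, i]
    cum_f = cum_f + f_tilde
    return ftrl_step(cum_f, eta, x), cum_f, loss

rng = np.random.default_rng(0)
eta, T = 0.05, 200
f_true = np.array([0.3, -0.2])
x, cum_f, total = np.zeros(2), np.zeros(2), 0.0        # x_1 = arg min R = center of box
for _ in range(T):
    x, cum_f, loss = bandit_round(x, cum_f, f_true, eta, rng)
    total += loss
best = min(f_true @ np.array(c) for c in [(1, 1), (1, -1), (-1, 1), (-1, -1)])
print(f"average loss {total / T:.3f}  vs  best corner {best:.3f}")
```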
5 Self-concordant Functions and the Dikin ellipsoid

Interior-point methods are arguably one of the greatest achievements in the field of convex optimization in the past two decades. These iterative polynomial-time algorithms for convex optimization find the solution by adding a barrier function to the objective and solving the unconstrained minimization problem. The rough idea is to gradually reduce the weight of the barrier function as one approaches the solution. The construction of barrier functions for general convex sets has been studied extensively, and we refer the reader to [16, 4] for a thorough treatment of the subject. To be more precise, most of the results of this section can be found in [15], pages 22-23, as well as in the aforementioned texts.

5.1 Definitions and Properties

Definition 4 A self-concordant function $R : \operatorname{int}K \to \mathbb{R}$ is a $C^3$ convex function such that
$$|D^3R(x)[h, h, h]| \leq 2\big(D^2R(x)[h, h]\big)^{3/2}.$$
Here, the third-order differential is defined as
$$D^3R(x)[h_1, h_2, h_3] := \frac{\partial^3}{\partial t_1\,\partial t_2\,\partial t_3}\Big|_{t_1 = t_2 = t_3 = 0} R(x + t_1h_1 + t_2h_2 + t_3h_3).$$
We will further assume that the function approaches infinity for any sequence of points approaching the boundary of $K$. An additional requirement leads to the notion of a self-concordant barrier.

Definition 5 A $\vartheta$-self-concordant barrier $R$ is a self-concordant function with
$$|DR(x)[h]| \leq \vartheta^{1/2}\big(D^2R(x)[h, h]\big)^{1/2}.$$

The generality of interior-point methods comes from the fact that any arbitrary $n$-dimensional closed convex set admits an $O(n)$-self-concordant barrier [16]. Hence, throughout this paper $\vartheta = O(n)$, but $\vartheta$ can even be independent of the dimension, as for the sphere. We note that some of the results of this paper, such as those concerning the Dikin ellipsoid, rely only on $R$ being a self-concordant function, while others necessarily require the barrier property. We therefore assume from the outset that $R$ is a self-concordant barrier. Since $K$ is compact, we can assume that $R$ is non-degenerate.

For a given $x \in K$, define $\langle g, h\rangle_x = g^{\top}\nabla^2R(x)\,h$ and $\|h\|_x = \langle h, h\rangle_x^{1/2}$. This inner product defines the local Euclidean structure at $x$. Non-degeneracy of $R$ implies that the above norm is indeed a norm, not a seminorm. It is natural to talk about a ball with respect to the above norm. Define the open Dikin ellipsoid of radius $r$ centered at $x$ as the set
$$W_r(x) = \{y \in K : \|y - x\|_x < r\}.$$
The following facts about the Dikin ellipsoid are central to the results of this paper (we refer to [15], page 23, for proofs). The first non-trivial fact is that $W_1(x) \subseteq K$ for any $x \in K$. In other words, the inverse Hessian of the self-concordant function $R$ stretches the space in such a way that the (scaled) eigenvectors fall in the set $K$. This is crucial for our sampling procedure. Indeed, our method (Algorithm 1) samples $y_t$ from the Dikin ellipsoid centered at $x_t$. Since $W_1(x_t)$ is contained in $K$, the sampling procedure is legal.
The second fact is that within the Dikin ellipsoid, that is, for $\|h\|_x < 1$, the Hessians of $R$ are "almost proportional" to the Hessian of $R$ at the center of the ellipsoid:
$$(1 - \|h\|_x)^2\,\nabla^2R(x) \;\preceq\; \nabla^2R(x + h) \;\preceq\; (1 - \|h\|_x)^{-2}\,\nabla^2R(x). \qquad (5)$$
This gives us the crucial control of the Hessians for second-order approximations. Finally, if $\|h\|_x < 1$ (i.e. $x + h$ is in the unit Dikin ellipsoid), then for any $z$,
$$|z^{\top}(\nabla R(x + h) - \nabla R(x))| \;\leq\; \frac{\|h\|_x}{1 - \|h\|_x}\,\|z\|_x. \qquad (6)$$
Assuming that $R$ is a $\vartheta$-self-concordant barrier, we have (see page 34 of [16])
$$R(u) - R(x_1) \;\leq\; \vartheta\ln\frac{1}{1 - \pi_{x_1}(u)}.$$
For any $u \in K_\delta$, $\pi_{x_1}(u) \leq (1+\delta)^{-1}$ by definition, implying that $(1 - \pi_{x_1}(u))^{-1} \leq 1 + \delta^{-1}$. We conclude that
$$R(u) - R(x_1) \;\leq\; \vartheta\ln(\sqrt{T} + 1) \;\leq\; 2\vartheta\log T \qquad (7)$$
for $u \in K_{1/\sqrt{T}}$.

5.2 Examples of Self-Concordant Functions

A nice fact about self-concordant barriers is that $R_1 + R_2$ is $(\vartheta_1 + \vartheta_2)$-self-concordant for a $\vartheta_1$-self-concordant $R_1$ and a $\vartheta_2$-self-concordant $R_2$. For a linear constraint $a^{\top}x \leq b$, the barrier $-\ln(b - a^{\top}x)$ is 1-self-concordant. Hence, for a polyhedron defined by $m$ constraints, the corresponding barrier is $m$-self-concordant. Thus, for the $n$-dimensional simplex or cube, $\vartheta = n$, leading to an $n^{3/2}$ dependence on the dimension in the main result. For the $n$-dimensional ball, $B_n = \{x \in \mathbb{R}^n : \sum_i x_i^2 \leq 1\}$, the barrier function $R(x) = -\log(1 - \|x\|^2)$ is 1-self-concordant. This, somewhat surprisingly, leads to a linear dependence of the regret bound on the dimension $n$, as $\vartheta = 1$.
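As a quick numerical check (ours) of the Dikin-ellipsoid containment used by the sampling step, the sketch below takes the 1-self-concordant ball barrier $R(x) = -\log(1 - \|x\|^2)$ from the example above and verifies that the axis points $x \pm \lambda_i^{-1/2}e_i$ remain inside the unit ball even when $x$ is close to the boundary.

```python
import numpy as np

def ball_barrier_hessian(x):
    """Hessian of R(x) = -log(1 - ||x||^2), a 1-self-concordant barrier for the unit ball."""
    s = 1.0 - x @ x
    n = len(x)
    return (2.0 / s) * np.eye(n) + (4.0 / s**2) * np.outer(x, x)

rng = np.random.default_rng(2)
n = 3
for _ in range(5):
    x = rng.normal(size=n)
    x *= (0.5 + 0.49 * rng.random()) / np.linalg.norm(x)   # point inside the ball, possibly near the boundary
    lams, vecs = np.linalg.eigh(ball_barrier_hessian(x))
    # axis points of the Dikin ellipsoid: x +/- lambda_i^{-1/2} e_i must stay in the ball
    pts = [x + sign * vecs[:, i] / np.sqrt(lams[i]) for i in range(n) for sign in (-1.0, 1.0)]
    print(f"dist to boundary {1 - np.linalg.norm(x):.3f}   "
          f"max ||x +/- lam^-1/2 e|| = {max(np.linalg.norm(p) for p in pts):.6f}")
```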
6 Sketch of Proof

We have now presented all necessary tools to prove Theorem 1: regret in terms of Bregman divergences, self-concordant barriers, and the Dikin ellipsoid. While we provide a complete proof in Section 8, here we sketch the key elements of the analysis of our algorithm.

As we tried to motivate at the end of Section 2, any method that can simultaneously (a) predict $x_t$ in expectation and (b) obtain an unbiased one-sample estimate of $f_t$ will necessarily suffer from high variance when $x_t$ is close to the boundary of the set $K$. As we have hinted previously, we would like our regularizer $R$ to control the variance. Yet the problem is even more subtle than this: $x_t$ may be close to the boundary in one dimension while having plenty of space in another, which in turn suggests that $\tilde{f}_t$ need only have high variance in certain directions.

Quite amazingly, the self-concordant function $R$ gives us a handle on both key issues. The Dikin ellipsoid, defined in terms of $\nabla^2R(x_t)$, gives us exactly a rough approximation to the available "space" around $x_t$. At the same time, $\nabla^2R(x_t)^{-1}$ annihilates $\tilde{f}_t$ in exactly the directions in which it is large. This is absolutely necessary for bounding the regret, as we discuss next.

Lemma 2 implies that the regret scales with the cumulative divergence $\eta^{-1}\sum_t D_R(x_t, x_{t+1})$, and thus we must have $\mathbb{E}\,D_R(x_t, x_{t+1}) = O(\eta^2)$ on average to obtain a regret bound of $O(\sqrt{T})$. Analyzing the divergence requires some care, and so we provide only a rough sketch here (with more in Section 8). If $R$ were exactly quadratic, then the divergence would be
$$D_R(x_t, x_{t+1}) = \frac{\eta^2}{2}\,\tilde{f}_t^{\top}\big(\nabla^2R(x_t)\big)^{-1}\tilde{f}_t. \qquad (8)$$
Even when $R$ is not quadratic, however, (8) still provides a decent approximation to the divergence and, given certain regularity conditions on $R$, it is enough to bound the quadratic form $\tilde{f}_t^{\top}(\nabla^2R(x_t))^{-1}\tilde{f}_t$.

The precise interaction between the Dikin ellipsoid, the estimates $\tilde{f}_t$, and the divergence $D_R(x_t, x_{t+1})$ is as follows. Assume we are at the point $x_t$ and we have computed the unit eigenvectors $e_1, \ldots, e_n$ and corresponding eigenvalues $\lambda_1, \ldots, \lambda_n$ of $\nabla^2R(x_t)$. Properties of self-concordant functions ensure that the Dikin ellipsoid around $x_t$ is contained within $K$ and thus, in particular, so are the points $x_t \pm \lambda_i^{-1/2}e_i$ for each $i$. Assuming the point $y_t := x_t + \lambda_j^{-1/2}e_j$ was sampled and we received the value $f_t^{\top}y_t$, we then construct the estimate $\tilde{f}_t := n\,\lambda_j^{1/2}(f_t^{\top}y_t)\,e_j$. Notice it is crucial that we scale by $\lambda_j^{1/2}$, the inverse of the $\ell_2$ distance between $x_t$ and $y_t$, to ensure that $\tilde{f}_t$ is unbiased. On the other hand, we see that the divergence is approximately computed as
$$D_R(x_t, x_{t+1}) \;\approx\; \frac{\eta^2}{2}\,\tilde{f}_t^{\top}\nabla^2R^{-1}\tilde{f}_t \;=\; \frac{\eta^2}{2}\,n^2(f_t^{\top}y_t)^2\,\lambda_j\,(e_j^{\top}\nabla^2R^{-1}e_j) \;=\; \frac{\eta^2}{2}\,n^2(f_t^{\top}y_t)^2.$$

As an interesting and important aside, a necessary requirement of the above analysis is that we construct our estimates $\tilde{f}_t$ from the eigendirections $e_j$. To see this, imagine that one eigenvalue $\lambda_1$ is very large, while another, $\lambda_2$, is small. This corresponds to a thin and long Dikin ellipsoid, which would occur near a flat boundary. Suppose that instead of eigendirections, we sample at an angle between them. With the thin ellipsoid the sampled points are still close in $\ell_2$ distance, implying that $\tilde{f}_t$ will be large in both eigendirections. However, the inverse Hessian will only annihilate one of these directions.
7 Application to the online shortest path problem

Because of its appealing structure, the online shortest path problem is one of the best studied problems in online optimization. Takimoto and Warmuth [20], and later Kalai and Vempala [11], gave efficient algorithms for the full-information setting. Awerbuch and Kleinberg [2] were the first to give an efficient algorithm with $O(T^{2/3})$ regret in the partial information (bandit) setting. The recent work of Dani et al [6] implies an $O(m^{3/2}\sqrt{T})$-regret algorithm, where $m = |E|$ is the number of edges in the graph. Turning to Algorithm 1, we notice that whenever $K$ is defined by linear constraints, $R$ is defined in a straightforward way (see Section 5.2). As we show below, online shortest path is an optimization problem on such a set, and we obtain an efficient $O(m^{3/2}\sqrt{T})$-regret algorithm.

Formally, the bandit shortest path problem is defined as the following repeated game. Given a directed graph $G = (V, E)$ and a source-sink pair $s, t \in V$, at each time step $t = 1$ to $T$:
· Player chooses a path $p_t \in \mathcal{P}_{s,t}$, where $\mathcal{P}_{s,t} \subseteq \{E\}^{|V|}$ is the set of all $s$-$t$ paths in the graph
· Adversary independently chooses weights on the edges of the graph, $f_t \in \mathbb{R}^m$
· Player suffers and observes the loss, which is the weighted length $\sum_{e \in p_t} f_t(e)$ of the chosen path $p_t$

The problem is transformed into an instance of bandit linear optimization by associating each path with a vector $x \in \{0, 1\}^{|E|}$, where $x(i)$ indicates the presence of the $i$-th edge. The loss is then defined through the dot product $f^{\top}x$.

Define the set $K$ as the convex hull of the set of paths. It is well known that this set is the set of flows in the graph and can be defined using $O(m)$ constraints: positivity constraints and conservation of in-flow and out-flow for every vertex other than the source and sink (which have unit out-flow and in-flow, respectively). Theorem 1 implies that Algorithm 1 attains $O(m^{3/2}\sqrt{T})$ regret for the bandit linear optimization problem over this set $K$.

However, an astute reader would notice that with this definition of $K$, the algorithm produces a flow $y_t \in K$, not necessarily a path, at each round. The loss suffered by the online player is $f_t^{\top}y_t$ and the game is specified differently from the bandit shortest path. However, it is easy to convert this flow algorithm into a randomized online shortest path algorithm: according to the standard flow decomposition theorem (see e.g. [17]), a given flow in the graph can be decomposed in polynomial time into a distribution over at most $m + 1$ paths. Hence, given a flow $y_t \in K$, one can obtain an unbiased estimator of $f_t^{\top}y_t$ by choosing a path according to the distribution of the decomposition and estimating $f_t^{\top}y_t$ by the length of this path. In fact, we have the following general statement.

Proposition 1 Suppose that, having computed $y_t$ in step 6 of Algorithm 1, we predict a random $\bar{y}_t \in K$ such that $\mathbb{E}\bar{y}_t = y_t$, and in step 7 observe $f_t^{\top}\bar{y}_t$. If we use this observed value instead of $f_t^{\top}y_t$ in step 8, the expected regret of the modified algorithm is the same as that of Algorithm 1.

The proposition implies that the modified algorithm attains low regret for games defined over discrete sets of possible predictions for the player. This is achieved by working with the convex hull of the discrete set while predicting in the original set. In particular, the modification allows us to predict a legal path while the algorithm works with the set of flows. The proof of Proposition 1 is straightforward: following closely the proof of Theorem 1, we observe that the value $f_t^{\top}y_t$ is used in only two places. The first is in Equation (9), where it is upper-bounded by 1, and the second is in the proof of the fact that $\tilde{f}_t$ is unbiased.
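To make the conversion from flows back to paths concrete, the following sketch (ours) performs the greedy flow-decomposition step on a toy DAG: it peels off $s$-$t$ paths with their bottleneck weights, yielding a distribution over paths whose expected length equals $f^{\top}y$ for the flow, so sampling a path from it gives the unbiased estimate used in Proposition 1. The graph and edge lengths are made-up toy data.

```python
import numpy as np

def decompose_flow(flow, source, sink):
    """Greedy flow decomposition on a DAG: repeatedly follow positive-flow edges
    from source to sink and peel off the path with its bottleneck weight.
    Returns a list of (path, weight) pairs whose weights sum to the flow value."""
    flow = dict(flow)
    paths = []
    while True:
        path, node = [], source
        while node != sink:
            nxt = next((v for (u, v), f in flow.items() if u == node and f > 1e-12), None)
            if nxt is None:
                break
            path.append((node, nxt))
            node = nxt
        if node != sink:
            break
        w = min(flow[e] for e in path)        # bottleneck weight of this path
        for e in path:
            flow[e] -= w
        paths.append((path, w))
    return paths

# toy DAG: two parallel routes from s to t, mixed 70/30 by the flow
flow = {("s", "a"): 0.7, ("a", "t"): 0.7, ("s", "b"): 0.3, ("b", "t"): 0.3}
lengths = {("s", "a"): 1.0, ("a", "t"): 2.0, ("s", "b"): 4.0, ("b", "t"): 1.0}

paths = decompose_flow(flow, "s", "t")
flow_length = sum(flow[e] * lengths[e] for e in flow)
expected = sum(w * sum(lengths[e] for e in p) for p, w in paths)
print(paths)
print(f"f^T y for the flow: {flow_length:.2f}   expected sampled path length: {expected:.2f}")

# drawing one path according to the decomposition gives an unbiased estimate of f^T y
rng = np.random.default_rng(3)
idx = rng.choice(len(paths), p=[w for _, w in paths])
print("sampled path:", paths[idx][0])
```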
8 Proof of the regret bound

8.1 Unbiasedness

First, we show that $\mathbb{E}\tilde{f}_t = f_t$. Condition on the choice $i_t$ and average over the choice of $\varepsilon_t$:
$$\mathbb{E}_{\varepsilon_t}\tilde{f}_t \;=\; \tfrac12\,n\big(f_t^{\top}(x_t + \lambda_{i_t}^{-1/2}e_{i_t})\big)\lambda_{i_t}^{1/2}e_{i_t} \;-\; \tfrac12\,n\big(f_t^{\top}(x_t - \lambda_{i_t}^{-1/2}e_{i_t})\big)\lambda_{i_t}^{1/2}e_{i_t} \;=\; n\,(f_t^{\top}e_{i_t})\,e_{i_t}.$$
Hence, $\mathbb{E}\tilde{f}_t = n\,\mathbb{E}\big[e_{i_t}e_{i_t}^{\top}\big]f_t = f_t$. Furthermore, $\mathbb{E}\,y_t = x_t$.

8.2 Closeness of the next minimum

We now use the properties of the Dikin ellipsoids mentioned in the previous section.

Lemma 6 The next minimizer $x_{t+1}$ is "close" to $x_t$: $x_{t+1} \in W_{4\eta n}(x_t)$.

Proof: Recall that $x_{t+1} = \arg\min_{x \in K}\Phi_t(x)$ and $x_t = \arg\min_{x \in K}\Phi_{t-1}(x)$, where $\Phi_t(x) = \eta\sum_{s=1}^{t}\tilde{f}_s^{\top}x + R(x)$. Since $\nabla\Phi_{t-1}(x_t) = 0$, we conclude that $\nabla\Phi_t(x_t) = \eta\tilde{f}_t$. Consider any point $z \in W_{1/2}(x_t)$. It can be written as $z = x_t + \alpha u$ for some vector $u$ such that $\|u\|_{x_t} = 1$ and $\alpha \in (-\tfrac12, \tfrac12)$. Expanding,
$$\Phi_t(z) = \Phi_t(x_t + \alpha u) = \Phi_t(x_t) + \alpha\,\nabla\Phi_t(x_t)^{\top}u + \tfrac12\alpha^2u^{\top}\nabla^2\Phi_t(\xi)u = \Phi_t(x_t) + \alpha\eta\tilde{f}_t^{\top}u + \tfrac12\alpha^2u^{\top}\nabla^2\Phi_t(\xi)u \qquad (10)$$
for some $\xi$ on the path between $x_t$ and $x_t + \alpha u$. Let us check where the optimum of the right-hand side is obtained. Setting the derivative with respect to $\alpha$ to zero, we obtain
$$|\alpha| = \frac{\eta|\tilde{f}_t^{\top}u|}{u^{\top}\nabla^2\Phi_t(\xi)u} = \frac{\eta|\tilde{f}_t^{\top}u|}{u^{\top}\nabla^2R(\xi)u}.$$
The fact that $\xi$ is on the line from $x_t$ to $x_t + \alpha u$ implies that $\|\xi - x_t\|_{x_t} \leq \|\alpha u\|_{x_t} < \tfrac12$. Hence, by Eq (5),
$$\nabla^2R(\xi) \succeq (1 - \|\xi - x_t\|_{x_t})^2\,\nabla^2R(x_t) \succeq \tfrac14\nabla^2R(x_t).$$
Thus $u^{\top}\nabla^2R(\xi)u > \tfrac14\|u\|_{x_t}^2 = \tfrac14$, and hence $|\alpha| < 4\eta|\tilde{f}_t^{\top}u|$. Recall that $\tilde{f}_t = n\,(f_t^{\top}y_t)\,\varepsilon_t\,\lambda_{i_t}^{1/2}e_{i_t}$, and so $\tilde{f}_t^{\top}u$ is maximized/minimized when $u$ is a unit vector (with respect to $\|\cdot\|_{x_t}$) in the direction of $e_{i_t}$, i.e. $u = \pm\lambda_{i_t}^{-1/2}e_{i_t}$. We conclude that
$$|\tilde{f}_t^{\top}u| \leq n\,|f_t^{\top}y_t| \leq n \qquad (9)$$
and $|\alpha| < 4\eta n \leq \tfrac12$ by our choice of $\eta$ and $T$. We conclude that the local optimum $\arg\min_{z \in W_{1/2}(x_t)}\Phi_t(z)$ is strictly inside $W_{4\eta n}(x_t)$, and since $\Phi_t$ is convex, the global optimum is $x_{t+1} = \arg\min_{z \in K}\Phi_t(z) \in W_{4\eta n}(x_t)$.

Figure 1: The Dikin ellipsoid $W_1(x_t)$ at $x_t$. The next minimum is guaranteed to lie in its scaled version $W_{4\eta n}(x_t)$.

8.3 Proof of Theorem 1

We are now ready to prove the regret bound for Algorithm 1. Since $x_{t+1} \in W_{4\eta n}(x_t)$, we invoke Eq (6) at $x = x_t$ and $z = h = x_{t+1} - x_t$:
$$|h^{\top}(\nabla R(x_{t+1}) - \nabla R(x_t))| \leq \frac{\|h\|_{x_t}^2}{1 - \|h\|_{x_t}}.$$
Observe that $x_{t+1} \in W_{4\eta n}(x_t)$ implies $\|h\|_{x_t} < 4\eta n$. The proof of Lemma 2 (Equation (12) in the Appendix) reveals that $\nabla R(x_t) - \nabla R(x_{t+1}) = \eta\tilde{f}_t$. We have
$$\tilde{f}_t^{\top}(x_t - x_{t+1}) = \eta^{-1}h^{\top}(\nabla R(x_{t+1}) - \nabla R(x_t)) \leq \eta^{-1}\frac{\|h\|_{x_t}^2}{1 - \|h\|_{x_t}} \leq \eta^{-1}\frac{16\eta^2n^2}{1 - 4\eta n} \leq 32\eta n^2.$$
By Lemma 2, for any $u \in K_{1/\sqrt{T}}$,
$$\sum_{t=1}^T\tilde{f}_t^{\top}(x_t - u) \leq \eta^{-1}D_R(u, x_1) + \sum_{t=1}^T\tilde{f}_t^{\top}(x_t - x_{t+1}) \leq \eta^{-1}D_R(u, x_1) + 32\eta n^2T = \eta^{-1}(R(u) - R(x_1)) + 32\eta n^2T \leq \eta^{-1}(2\vartheta\log T) + 32\eta n^2T,$$
where the first equality follows since $\nabla R(x_1) = 0$ by the choice of $x_1$, and the last inequality follows from Equation (7). Balancing with $\eta = \frac{\sqrt{\vartheta\log T}}{4n\sqrt{T}}$, we get
$$\sum_{t=1}^T\tilde{f}_t^{\top}(x_t - u) \leq 16\,n\sqrt{\vartheta\,T\log T}$$
for any $u$ in the scaled set $K'$. Using Lemma 3, which we prove below, we obtain the statement of Theorem 1.

8.4 Expected Regret

Note that it is not $\tilde{f}_t^{\top}x_t$ that the algorithm is incurring, but rather $f_t^{\top}y_t$. However, it is easy to see that these are equal in expectation.

Proof: [Lemma 3] Let $\mathbb{E}_t[\cdot] = \mathbb{E}[\cdot\,|\,i_1, \ldots, i_{t-1}, \varepsilon_1, \ldots, \varepsilon_{t-1}]$ denote the conditional expectation. Note that $\mathbb{E}_t\tilde{f}_t^{\top}x_t = f_t^{\top}x_t = \mathbb{E}_tf_t^{\top}y_t$. Taking expectations on both sides of the bound for the $\tilde{f}_t$'s,
$$\mathbb{E}\sum_{t=1}^T\tilde{f}_t^{\top}x_t \;\leq\; \mathbb{E}\Big[\min_{u \in K'}\sum_{t=1}^T\tilde{f}_t^{\top}u + C_T\Big] \;\leq\; \min_{u \in K'}\mathbb{E}\Big[\sum_{t=1}^T\tilde{f}_t^{\top}u\Big] + C_T \;=\; \min_{u \in K'}\mathbb{E}\Big[\sum_{t=1}^T f_t^{\top}u\Big] + C_T.$$
Since $\mathbb{E}\sum_t f_t^{\top}y_t = \mathbb{E}\sum_t\tilde{f}_t^{\top}x_t$, the claim follows.

In the case of an oblivious adversary,
$$\min_{u \in K'}\mathbb{E}\sum_{t=1}^T f_t^{\top}u = \min_{u \in K'}\sum_{t=1}^T f_t^{\top}u.$$
However, if the adversary is not oblivious, $f_t$ depends on the random choices at time steps $1, \ldots, t-1$. Of course, it is desirable to obtain a stronger bound on the regret,
$$\mathbb{E}\Big[\sum_{t=1}^T f_t^{\top}y_t - \min_{u \in K'}\sum_{t=1}^T f_t^{\top}u\Big] = O(\sqrt{T}),$$
which allows the optimal $u$ to depend on the randomness of the player (it is known that the optimal strategy for the adversary does not need any randomization beyond the player's choices). Obtaining guarantees for adaptive adversaries is another dimension of the bandit optimization problem and is beyond the scope of the present paper. Auer et al [1] provide a clever modification of their EXP3 algorithm which leads to high-probability bounds on the regret, thus guaranteeing low regret against an adaptive adversary. The modification is based on the idea of adding confidence intervals to the losses. The same idea has been employed in the work of [3] (note that [3] was submitted concurrently with this paper) for bandit optimization over arbitrary convex sets.
While the work of [3] does succeed in obtaining a high-probability bound, the algorithm is based on the inefficient method of Dani et al [6], which is a reduction to the algorithm of Auer et al.

9 Efficient Implementation

In this section we describe how to efficiently implement Algorithm 1. Recall that in each iteration our algorithm requires the eigen-decomposition of the Hessian in order to derive the unbiased estimator, which takes $O(n^3)$ time. This is coupled with a convex minimization problem in order to compute $x_t$, which seems to be the most time-consuming operation in the entire algorithm. The message of this section is that the computation of $x_t$ given the previous iterate $x_{t-1}$ takes essentially only one iteration of the Damped Newton method. More precisely, instead of using $x_t$ as defined in Algorithm 1, it suffices to maintain a sequence of points $\{z_t\}$ such that $z_t$ is obtained from $z_{t-1}$ by only one iteration of the Damped Newton method. The sequence $\{z_t\}$ is shown to be sufficiently close to $\{\hat{x}_t\}$, which enjoys the same guarantee as the sequence $\{x_t\}$ defined by Algorithm 1. A single iteration of the Damped Newton method requires matrix inversion. However, since we have the eigen-decomposition ready made, as it was required for the unbiased estimator, we can produce the inverse and the Newton direction in $O(n^2)$ time. Thus, the most time-consuming part of the algorithm is the eigen-decomposition of the Hessian, and the total running time is $O(n^3)$ per iteration.

Before we begin, we require a few more facts from the theory of interior point methods, taken from [15]. Let $\Phi$ be a non-degenerate self-concordant barrier on a domain $K$. For any $x \in K$ define the Newton direction as
$$e(\Phi, x) = [\nabla^2\Phi(x)]^{-1}\nabla\Phi(x)$$
and let the Newton decrement be
$$\lambda(\Phi, x) = \big(\nabla\Phi(x)^{\top}[\nabla^2\Phi(x)]^{-1}\nabla\Phi(x)\big)^{1/2}.$$
The Damped Newton iteration for a given $x \in K$ is
$$DN(\Phi, x) = x - \frac{1}{1 + \lambda(\Phi, x)}\,e(\Phi, x).$$
The following facts can be found in [15] (here $x^* = \arg\min_{x \in K}\Phi(x)$):
A: $DN(\Phi, x) \in K$ (this follows easily since the Newton increment lies in the Dikin ellipsoid, $\frac{1}{1+\lambda(\Phi,x)}e(\Phi, x) \in W_1(x)$).
B: $\lambda(\Phi, DN(\Phi, x)) \leq 2\lambda(\Phi, x)^2$.
C: $\|x - x^*\|_x \leq \frac{\lambda(\Phi, x)}{1 - \lambda(\Phi, x)}$.
D: $\|x - x^*\|_{x^*} \leq \frac{\lambda(\Phi, x)}{1 - 2\lambda(\Phi, x)}$.

Algorithm 2 Efficient Implementation
1: Input: $\eta > 0$, $\vartheta$-self-concordant $R$.
2: Let $z_1 = \arg\min_{x \in K} R(x)$.
3: for $t = 1$ to $T$ do
4: Let $\{e_1, \ldots, e_n\}$ and $\{\lambda_1, \ldots, \lambda_n\}$ be the eigenvectors and eigenvalues of $\nabla^2R(z_t)$.
5: Choose $i_t$ uniformly at random from $\{1, \ldots, n\}$ and $\varepsilon_t = \pm 1$ with probability $1/2$.
6: Predict $y_t = z_t + \varepsilon_t\lambda_{i_t}^{-1/2}e_{i_t}$.
7: Observe the gain $f_t^{\top}y_t \in \mathbb{R}$.
8: Define $\hat{f}_t := n\,(f_t^{\top}y_t)\,\varepsilon_t\,\lambda_{i_t}^{1/2}e_{i_t}$.
9: Update $z_{t+1} = z_t - \frac{1}{1 + \lambda(\Phi_t, z_t)}\,e(\Phi_t, z_t)$, where $\Phi_t(z) = \eta\sum_{s=1}^{t}\hat{f}_s^{\top}z + R(z)$.
10: end for

The functions $\hat{f}_t$ computed by the above algorithm are unbiased estimates of $f_t$, constructed by sampling eigenvectors of $\nabla^2R(z_t)$. Define the Follow The Regularized Leader solutions
$$\hat{x}_{t+1} = \arg\min_{x \in K}\Phi_t(x)$$
on the new functions $\hat{f}_t$. The sequence $\{\hat{x}_t, \hat{f}_t\}$ is different from the sequence $\{x_t, \tilde{f}_t\}$ generated by Algorithm 1. However, the same regret bound can be proved for the new algorithm. The only difference from the proof for Algorithm 1 is that the $\hat{f}_t$'s are estimated using the Hessian at $z_t$, not $\hat{x}_t$. However, as we show next, $z_t$ is very close to $\hat{x}_t$, and therefore the Hessians are within a factor of 2 by Equation (5), leading to a slightly worse constant for the regret.
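The damped Newton update in line 9 of Algorithm 2 is simple to implement. The sketch below (ours) codes one iteration, using the Newton direction and decrement defined above, and runs it on a toy self-concordant objective to show that the iterates remain inside the domain (fact A) and that the decrement shrinks (fact B); the objective and constants are illustrative assumptions.

```python
import numpy as np

def damped_newton_step(grad, hess, z):
    """One damped Newton iteration for a self-concordant objective Phi:
    z_new = z - e(Phi, z) / (1 + lambda(Phi, z))."""
    g = grad(z)
    e = np.linalg.solve(hess(z), g)          # Newton direction e(Phi, z)
    lam = np.sqrt(max(g @ e, 0.0))           # Newton decrement lambda(Phi, z)
    return z - e / (1.0 + lam), lam

# toy self-concordant objective: Phi(z) = c^T z - sum_i log(1 - z_i^2) on (-1, 1)^3
c = np.array([0.4, -0.1, 0.25])
grad = lambda z: c + 2 * z / (1 - z**2)
hess = lambda z: np.diag(2 * (1 + z**2) / (1 - z**2) ** 2)

z = np.zeros(3)
for t in range(6):
    z, lam = damped_newton_step(grad, hess, z)
    inside = bool(np.all(np.abs(z) < 1))
    print(f"iter {t}: Newton decrement {lam:.2e}, z inside (-1,1)^3: {inside}")
```

In Algorithm 2 the Hessian has already been eigen-decomposed for the estimator, so the linear solve above costs only $O(n^2)$ per round, as noted in the text.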
Lemma 7 It holds that for all $t$, $\lambda^2(\Phi_t, z_t) \leq 4n^2\eta^2$.

Proof: The proof is by induction on $t$. For $t = 1$ the result is true because $z_1 = x_1$ is chosen to minimize $R$. Suppose the statement holds for $t - 1$. By definition,
$$\lambda^2(\Phi_t, z_t) = \nabla\Phi_t(z_t)^{\top}[\nabla^2\Phi_t(z_t)]^{-1}\nabla\Phi_t(z_t) = \nabla\Phi_t(z_t)^{\top}[\nabla^2R(z_t)]^{-1}\nabla\Phi_t(z_t).$$
Note that $\nabla\Phi_t(z_t) = \nabla\Phi_{t-1}(z_t) + \eta\hat{f}_t$. Using $(x+y)^{\top}A(x+y) \leq 2x^{\top}Ax + 2y^{\top}Ay$, we obtain
$$\lambda^2(\Phi_t, z_t) \leq 2\,\nabla\Phi_{t-1}(z_t)^{\top}[\nabla^2R(z_t)]^{-1}\nabla\Phi_{t-1}(z_t) + 2\eta^2\hat{f}_t^{\top}[\nabla^2R(z_t)]^{-1}\hat{f}_t = 2\lambda^2(\Phi_{t-1}, z_t) + 2\eta^2\hat{f}_t^{\top}[\nabla^2R(z_t)]^{-1}\hat{f}_t.$$
The first term can be bounded by fact (B) and the induction hypothesis: $\lambda^2(\Phi_{t-1}, z_t) \leq 4\lambda^4(\Phi_{t-1}, z_{t-1}) \leq 64n^4\eta^4$. As for the second term, $\hat{f}_t^{\top}[\nabla^2R(z_t)]^{-1}\hat{f}_t \leq n^2$ because of the way $\hat{f}_t$ is defined and since $|f_t^{\top}y_t| \leq 1$ by assumption. Combining the results,
$$\lambda^2(\Phi_t, z_t) \leq 128n^4\eta^4 + 2n^2\eta^2 \leq 4n^2\eta^2 \qquad (11)$$
using the definition of $\eta$ from Theorem 1 and large enough $T$. This proves the induction step.

Note that Equation (11), with the choice of $\eta$ and large enough $T$, implies $\lambda^2(\Phi_{t-1}, z_t) \ll \tfrac12$. Using this together with the above Lemma and facts (B) and (C), we conclude that
$$\|z_t - \hat{x}_t\|_{\hat{x}_t} \leq 2\lambda(\Phi_{t-1}, z_t) \leq 4\lambda(\Phi_{t-1}, z_{t-1})^2 \leq 16n^2\eta^2.$$
We observe that $\hat{x}_t$ and $z_t$ are very close in the local distance. This implies closeness in $\ell_2$ distance as well. Indeed, the square roots of the inverse eigenvalues, $\lambda_i^{-1/2}$, being the distances from $\hat{x}_t$ to the corresponding radii of the Dikin ellipsoid, can be at most the diameter $D$ of $K$. Thus $\nabla^2R \succeq D^{-2}I$, and therefore
$$\|z_t - \hat{x}_t\|_2 \leq D\,\|z_t - \hat{x}_t\|_{\hat{x}_t} \leq 16Dn^2\eta^2.$$
As we proved, it requires only one Damped Newton update to maintain the sequence $z_t$, which is $O(1/T)$-close to $\hat{x}_t$. Hence,
$$\sum_{t=1}^T|f_t^{\top}(z_t - \hat{x}_t)| \leq \sum_{t=1}^T\|f_t\|\cdot\|z_t - \hat{x}_t\| = O(1).$$
Therefore, for any $u \in K'$,
$$\mathbb{E}\sum_{t=1}^T f_t^{\top}(y_t - u) = \mathbb{E}\sum_{t=1}^T\hat{f}_t^{\top}(z_t - u) = \mathbb{E}\sum_{t=1}^T\hat{f}_t^{\top}(\hat{x}_t - u) + \mathbb{E}\sum_{t=1}^T\hat{f}_t^{\top}(z_t - \hat{x}_t) = \mathbb{E}\sum_{t=1}^T\hat{f}_t^{\top}(\hat{x}_t - u) + \mathbb{E}\sum_{t=1}^T f_t^{\top}(z_t - \hat{x}_t) = \mathbb{E}\sum_{t=1}^T\hat{f}_t^{\top}(\hat{x}_t - u) + O(1).$$
A slight modification of the proofs of Section 8 leads to an $O(\sqrt{T})$ bound on the expected regret of the sequence $\{\hat{x}_t\}$.

Acknowledgments. We would like to thank Peter Bartlett for numerous illuminating discussions. We gratefully acknowledge the support of DARPA under grant FA8750-05-2-0249 and NSF under grant DMS-0707060.

References

[1] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32(1):48-77, 2003.
[2] Baruch Awerbuch and Robert D. Kleinberg. Adaptive routing with end-to-end feedback: distributed learning and geometric approaches. In STOC '04: Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, pages 45-53, New York, NY, USA, 2004. ACM.
[3] P. Bartlett, V. Dani, T. Hayes, S. Kakade, A. Rakhlin, and A. Tewari. High-probability bounds for the regret of bandit online linear optimization, 2008. In submission to COLT 2008.
[4] A. Ben-Tal and A. Nemirovski. Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications, volume 2 of MPS/SIAM Series on Optimization. SIAM, Philadelphia, 2001.
[5] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[6] Varsha Dani, Thomas Hayes, and Sham Kakade. The price of bandit information for online optimization. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20. MIT Press, Cambridge, MA, 2008.
[7] Varsha Dani and Thomas P. Hayes. Robbing the bandit: less regret in online geometric optimization against an adaptive adversary. In SODA '06: Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithms, pages 937-943, New York, NY, USA, 2006. ACM.
[8] Meir Feder, Neri Merhav, and Michael Gutman. Correction to "Universal prediction of individual sequences" (Jul 92, 1258-1270). IEEE Transactions on Information Theory, 40(1):285, 1994.
[9] Abraham D. Flaxman, Adam Tauman Kalai, and H. Brendan McMahan. Online convex optimization in the bandit setting: gradient descent without a gradient. In SODA '05: Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms, pages 385-394, Philadelphia, PA, USA, 2005. Society for Industrial and Applied Mathematics.
[10] D. P. Helmbold, J. Kivinen, and M. K. Warmuth. Relative loss bounds for single neurons. IEEE Transactions on Neural Networks, 10(6):1291-1304, November 1999.
[11] Adam Kalai and Santosh Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291-307, 2005.
[12] Jyrki Kivinen and Manfred K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Inf. Comput., 132(1):1-63, 1997.
[13] Nick Littlestone and Manfred K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212-261, 1994.
[14] H. Brendan McMahan and Avrim Blum. Online geometric optimization in the bandit setting against an adaptive adversary. In COLT, pages 109-123, 2004.
[15] A. S. Nemirovskii. Interior point polynomial time methods in convex programming, 2004. Lecture notes.
[16] Y. E. Nesterov and A. S. Nemirovskii. Interior Point Polynomial Algorithms in Convex Programming. SIAM, Philadelphia, 1994.
[17] Satish Rao. Lecture notes: CS 270, graduate algorithms. 2006.
[18] Herbert Robbins. Some aspects of the sequential design of experiments. Bull. Amer. Math. Soc., 58(5):527-535, 1952.
[19] Shai Shalev-Shwartz and Yoram Singer. A primal-dual perspective of online learning algorithms. Mach. Learn., 69(2-3):115-142, 2007.
[20] Eiji Takimoto and Manfred K. Warmuth. Path kernels and multiplicative updates. J. Mach. Learn. Res., 4:773-818, 2003.
[21] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, pages 928-936, 2003.

A Proofs

Proof: [Lemma 2] Since the argmin is in the set, $\nabla\Phi_{t-1}(x_t) = 0$ and $D_{\Phi_{t-1}}(u, x_t) = \Phi_{t-1}(u) - \Phi_{t-1}(x_t)$. Moreover, $\Phi_t(u) = \Phi_{t-1}(u) + \eta\tilde{f}_t^{\top}u$. Combining the above,
$$\eta\tilde{f}_t^{\top}u = D_{\Phi_t}(u, x_{t+1}) + \Phi_t(x_{t+1}) - \Phi_{t-1}(u)$$
and
$$\eta\tilde{f}_t^{\top}x_t = D_{\Phi_t}(x_t, x_{t+1}) + \Phi_t(x_{t+1}) - \Phi_{t-1}(x_t).$$
Thus,
$$\eta\tilde{f}_t^{\top}(x_t - u) = D_{\Phi_t}(x_t, x_{t+1}) + D_{\Phi_{t-1}}(u, x_t) - D_{\Phi_t}(u, x_{t+1}).$$
Summing over $t = 1, \ldots, T$,
$$\eta\sum_{t=1}^T\tilde{f}_t^{\top}(x_t - u) = D_{\Phi_0}(u, x_1) - D_{\Phi_T}(u, x_{T+1}) + \sum_{t=1}^T D_{\Phi_t}(x_t, x_{t+1}) \leq D_{\Phi_0}(u, x_1) + \sum_{t=1}^T D_{\Phi_t}(x_t, x_{t+1}).$$
Since $\Phi_t$ and $R$ differ only by a linear function, $D_{\Phi_t} = D_R$, which gives the first inequality of the lemma. By definition, $x_t$ satisfies $\eta\sum_{s=1}^{t-1}\tilde{f}_s + \nabla R(x_t) = 0$ and $x_{t+1}$ satisfies $\eta\sum_{s=1}^{t}\tilde{f}_s + \nabla R(x_{t+1}) = 0$. Subtracting,
$$\nabla R(x_t) - \nabla R(x_{t+1}) = \eta\tilde{f}_t. \qquad (12)$$
Now we realize that
$$D_R(x_t, x_{t+1}) \leq D_R(x_t, x_{t+1}) + D_R(x_{t+1}, x_t) = -\nabla R(x_{t+1})^{\top}(x_t - x_{t+1}) - \nabla R(x_t)^{\top}(x_{t+1} - x_t) = \eta\tilde{f}_t^{\top}(x_t - x_{t+1}),$$
which yields the second inequality.

Lemma 8 For any point $x \in K$, it holds that $\min_{y \in K_\delta}\|x - y\| \leq \delta$.

Proof: Consider the point at which the segment $[x_1, x]$ intersects the boundary of $K_\delta$; denote it $z$. By definition, we have
$$\|z - x_1\| \geq \frac{\|x - x_1\|}{1 + \delta}.$$
As $x$, $x_1$, $z$ are on the same line,
$$\|z - x\| = \|x - x_1\| - \|z - x_1\| \leq \|x - x_1\|\Big(1 - \frac{1}{1+\delta}\Big) \leq \delta.$$
The last inequality holds by our assumption that the diameter of $K$ is bounded by one. The lemma follows.