Adaptive Online Gradient Descent

Peter L. Bartlett
Division of Computer Science, Department of Statistics
UC Berkeley, Berkeley, CA 94709
bartlett@cs.berkeley.edu

Elad Hazan
IBM Almaden Research Center
650 Harry Road, San Jose, CA 95120
hazan@us.ibm.com

Alexander Rakhlin
Division of Computer Science
UC Berkeley, Berkeley, CA 94709
rakhlin@cs.berkeley.edu

Abstract

We study the rates of growth of the regret in online convex optimization. First, we show that a simple extension of the algorithm of Hazan et al. eliminates the need for a priori knowledge of the lower bound on the second derivatives of the observed functions. We then provide an algorithm, Adaptive Online Gradient Descent, which interpolates between the results of Zinkevich for linear functions and of Hazan et al. for strongly convex functions, achieving intermediate rates between $\sqrt{T}$ and $\log T$. Furthermore, we show strong optimality of the algorithm. Finally, we provide an extension of our results to general norms.

1 Introduction

The problem of online convex optimization can be formulated as a repeated game between a player and an adversary. At round $t$, the player chooses an action $x_t$ from some convex subset $K$ of $\mathbb{R}^n$, and then the adversary chooses a convex loss function $f_t$. The player aims to ensure that the total loss, $\sum_{t=1}^T f_t(x_t)$, is not much larger than the smallest total loss $\sum_{t=1}^T f_t(x)$ of any fixed action $x$. The difference between the total loss and its optimal value for a fixed action is known as the regret, which we denote
$$R_T = \sum_{t=1}^T f_t(x_t) - \min_{x \in K} \sum_{t=1}^T f_t(x).$$
Many problems of online prediction of individual sequences can be viewed as special cases of online convex optimization, including prediction with expert advice, sequential probability assignment, and sequential investment [1]. A central question in all these cases is how the regret grows with the number of rounds of the game.

Zinkevich [2] considered the following gradient descent algorithm, with step size $\eta_t = \Theta(1/\sqrt{t})$. (Here, $\Pi_K(v)$ denotes the Euclidean projection of $v$ onto the convex set $K$.)

Algorithm 1 Online Gradient Descent (OGD)
1: Initialize $x_1$ arbitrarily.
2: for $t = 1$ to $T$ do
3:   Predict $x_t$, observe $f_t$.
4:   Update $x_{t+1} = \Pi_K(x_t - \eta_{t+1} \nabla f_t(x_t))$.
5: end for

Zinkevich showed that the regret of this algorithm grows as $\sqrt{T}$, where $T$ is the number of rounds of the game. This rate cannot be improved in general for arbitrary convex loss functions. However, this is not the case if the loss functions are uniformly convex, for instance, if all $f_t$ have second derivative at least $H > 0$. Recently, Hazan et al. [3] showed that in this case it is possible for the regret to grow only logarithmically with $T$, using the same algorithm but with the smaller step size $\eta_t = 1/(Ht)$. Increasing convexity makes online convex optimization easier.
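To make the update in Algorithm 1 concrete, the following is a minimal Python sketch for the special case in which $K$ is a Euclidean ball of radius $D$, so that the projection $\Pi_K$ is a simple rescaling. The loss oracle, the ball-shaped domain, and the particular step-size schedules are illustrative assumptions for this sketch, not prescriptions from the paper.

```python
import numpy as np

def project_ball(v, D):
    """Euclidean projection onto K = {x : ||x|| <= D} (an assumed ball-shaped domain)."""
    norm = np.linalg.norm(v)
    return v if norm <= D else (D / norm) * v

def ogd(grad_oracle, T, dim, D, step_size):
    """Algorithm 1 (OGD): play x_t, observe nabla f_t(x_t), take a projected gradient step."""
    x = np.zeros(dim)                 # x_1, initialized arbitrarily (here: the origin)
    iterates = []
    for t in range(1, T + 1):
        iterates.append(x.copy())
        g = grad_oracle(t, x)         # gradient of the adversary's f_t at the played point
        x = project_ball(x - step_size(t) * g, D)
    return iterates

# A schedule of order 1/sqrt(t) gives the O(sqrt(T)) regret of Zinkevich [2];
# 1/(H*t) gives the O(log T) bound of [3] when every f_t is H-strongly convex.
zinkevich_step = lambda t: 1.0 / np.sqrt(t)
strongly_convex_step = lambda t, H=1.0: 1.0 / (H * t)
```

Here `grad_oracle` stands in for the adversary: only the gradient of $f_t$ at the played point is needed by the update.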
The algorithm that achieves logarithmic regret must know in advance a lower bound on the convexity of the loss functions, since this bound is used to determine the step size. It is natural to ask if this is essential: is there an algorithm that can adapt to the convexity of the loss functions and achieve the same regret rates in both cases, $O(\log T)$ for uniformly convex functions and $O(\sqrt{T})$ for arbitrary convex functions? In this paper, we present an adaptive algorithm of this kind. The key technique is regularization: we consider the online gradient descent (OGD) algorithm, but we add a uniformly convex function, the quadratic $\frac{\lambda_t}{2}\|x\|^2$, to each loss function $f_t(x)$. This corresponds to shrinking the algorithm's actions $x_t$ towards the origin. It leads to a regret bound of the form
$$R_T \le c \sum_{t=1}^T \lambda_t + p(\lambda_1, \ldots, \lambda_T).$$
The first term on the right hand side can be viewed as a bias term; it increases with the $\lambda_t$ because the presence of the regularization might lead the algorithm away from the optimum. The second term is a penalty for the flatness of the loss functions that becomes smaller as the regularization increases. We show that choosing the regularization coefficient $\lambda_t$ so as to balance these two terms in the bound on the regret up to round $t$ is nearly optimal in a strong sense. Not only does this choice give the $\sqrt{T}$ and $\log T$ regret rates in the linear and uniformly convex cases, it leads to a kind of oracle inequality: the regret is no more than a constant factor times the bound on regret that would have been suffered if an oracle had provided in advance the sequence of regularization coefficients $\lambda_1, \ldots, \lambda_T$ that minimizes the final regret bound.

To state this result precisely, we introduce the following definitions. Let $K$ be a convex subset of $\mathbb{R}^n$ and suppose that $\sup_{x \in K} \|x\| \le D$. For simplicity, throughout the paper we assume that $K$ is centered around $0$ and, hence, $2D$ is the diameter of $K$. Define the shorthand $\nabla_t = \nabla f_t(x_t)$. Let $H_t$ be the largest value such that for any $x^* \in K$,
$$f_t(x^*) \ge f_t(x_t) + \nabla_t^\top (x^* - x_t) + \frac{H_t}{2}\|x^* - x_t\|^2. \qquad (1)$$
In particular, if $\nabla^2 f_t - H_t \cdot I \succeq 0$, then the above inequality is satisfied. Furthermore, suppose $\|\nabla_t\| \le G_t$. Define $\lambda_{1:t} := \sum_{s=1}^t \lambda_s$ and $H_{1:t} := \sum_{s=1}^t H_s$, and let $H_{1:0} = 0$. Let us now state the Adaptive Online Gradient Descent algorithm as well as the theoretical guarantee for its performance.

Algorithm 2 Adaptive Online Gradient Descent
1: Initialize $x_1$ arbitrarily.
2: for $t = 1$ to $T$ do
3:   Predict $x_t$, observe $f_t$.
4:   Compute $\lambda_t = \frac{1}{2}\left(\sqrt{(H_{1:t} + \lambda_{1:t-1})^2 + 8G_t^2/(3D^2)} - (H_{1:t} + \lambda_{1:t-1})\right)$.
5:   Compute $\eta_{t+1} = (H_{1:t} + \lambda_{1:t})^{-1}$.
6:   Update $x_{t+1} = \Pi_K(x_t - \eta_{t+1}(\nabla f_t(x_t) + \lambda_t x_t))$.
7: end for

Theorem 1.1. The regret of Algorithm 2 is bounded by
$$R_T \le 3 \inf_{\lambda^*_1, \ldots, \lambda^*_T \ge 0} \left( D^2 \lambda^*_{1:T} + \sum_{t=1}^T \frac{(G_t + \lambda^*_t D)^2}{H_{1:t} + \lambda^*_{1:t}} \right).$$

While Algorithm 2 is stated with the squared Euclidean norm as a regularizer, we show that it is straightforward to generalize our technique to other regularization functions that are uniformly convex with respect to other norms. This leads to adaptive versions of the mirror descent algorithm analyzed recently in [4, 5].

2 Preliminary results

The following theorem gives a regret bound for the OGD algorithm with a particular choice of step size. The virtue of the theorem is that the step size can be set without knowledge of the uniform lower bound on $H_t$, which is required in the original algorithm of [3]. The proof is provided in Section 4 (Theorem 4.1), where the result is extended to arbitrary norms.

Theorem 2.1. Suppose we set $\eta_{t+1} = \frac{1}{H_{1:t}}$. Then the regret of OGD is bounded as
$$R_T \le \frac{1}{2} \sum_{t=1}^T \frac{G_t^2}{H_{1:t}}.$$
In particular, loosening the bound,
$$R_T \le \frac{\max_t G_t^2}{2 \min_t \frac{1}{t}\sum_{s=1}^t H_s} (\log T + 1).$$

Note that nothing prevents $H_t$ from being negative or zero, implying that the same algorithm gives logarithmic regret even when some of the functions are linear or concave, as long as the partial averages $\frac{1}{t}\sum_{s=1}^t H_s$ are positive and not too small. The above result already provides an important extension to the log-regret algorithm of [3]: no prior knowledge of the uniform convexity of the functions is needed, and the bound is in terms of the observed sequence $\{H_t\}$.
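As a sketch of the step-size rule of Theorem 2.1, the skeleton above can be adapted so that $\eta_{t+1} = 1/H_{1:t}$ is accumulated from the observed curvatures. The oracle below, which returns $H_t$ alongside the gradient, is an assumption of this sketch; the theorem only requires that the bound be expressible in terms of the observed $\{H_t\}$.

```python
import numpy as np

def ogd_observed_curvature(grad_and_curv_oracle, T, dim, D):
    """OGD with eta_{t+1} = 1/H_{1:t} (Theorem 2.1): the step size is built from the
    curvature observed so far, with no a priori uniform lower bound on H_t."""
    x = np.zeros(dim)
    H_cum = 0.0                                  # running sum H_{1:t}
    for t in range(1, T + 1):
        g, H_t = grad_and_curv_oracle(t, x)      # nabla f_t(x_t) and the H_t of Eq. (1)
        H_cum += H_t
        assert H_cum > 0, "the rule needs positive partial sums H_{1:t}"
        y = x - g / H_cum                        # gradient step with eta_{t+1} = 1/H_{1:t}
        n = np.linalg.norm(y)
        x = y if n <= D else (D / n) * y         # projection onto the assumed D-ball
    return x
```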
Yet, there is still a problem with the algorithm. If $H_1 > 0$ and $H_t = 0$ for all $t > 1$, then $\sum_{s=1}^t H_s = H_1$, resulting in a linear regret bound. However, we know from [2] that an $O(\sqrt{T})$ bound can be obtained. In the next section we provide an algorithm which interpolates between an $O(\log T)$ and an $O(\sqrt{T})$ bound on the regret, depending on the curvature of the observed functions.

3 Adaptive Regularization

Suppose the environment plays a sequence of $f_t$'s with curvature $H_t \ge 0$. Instead of performing gradient descent on these functions, we step in the direction of the gradient of $\tilde f_t(x) = f_t(x) + \frac{1}{2}\lambda_t \|x\|^2$, where the regularization parameter $\lambda_t \ge 0$ is chosen appropriately at each step as a function of the curvature of the previous functions. We remind the reader that $K$ is assumed to be centered around the origin, for otherwise we would instead use $\|x - x_0\|^2$ to shrink the actions $x_t$ towards the point $x_0$. Applying Theorem 2.1, we obtain the following result.

Theorem 3.1. If the Online Gradient Descent algorithm is performed on the functions $\tilde f_t(x) = f_t(x) + \frac{1}{2}\lambda_t \|x\|^2$ with
$$\eta_{t+1} = \frac{1}{H_{1:t} + \lambda_{1:t}}$$
for any sequence of non-negative $\lambda_1, \ldots, \lambda_T$, then
$$R_T \le \frac{1}{2} D^2 \lambda_{1:T} + \frac{1}{2} \sum_{t=1}^T \frac{(G_t + \lambda_t D)^2}{H_{1:t} + \lambda_{1:t}}.$$

Proof. By Theorem 2.1 applied to the functions $\tilde f_t$,
$$\sum_{t=1}^T \left( f_t(x_t) + \frac{1}{2}\lambda_t \|x_t\|^2 \right) - \min_{x \in K} \sum_{t=1}^T \left( f_t(x) + \frac{1}{2}\lambda_t \|x\|^2 \right) \le \frac{1}{2} \sum_{t=1}^T \frac{(G_t + \lambda_t D)^2}{H_{1:t} + \lambda_{1:t}}.$$
Indeed, it is easy to verify that condition (1) for $f_t$ implies the corresponding statement for $\tilde f_t$ with $\tilde H_t = H_t + \lambda_t$. Furthermore, by linearity, the bound on the gradient of $\tilde f_t$ is $\|\nabla f_t(x_t) + \lambda_t x_t\| \le G_t + \lambda_t D$. Define $x^* = \arg\min_{x \in K} \sum_{t=1}^T f_t(x)$. Then, dropping the $\|x_t\|^2$ terms and bounding $\|x^*\|^2 \le D^2$,
$$\sum_{t=1}^T f_t(x_t) \le \sum_{t=1}^T f_t(x^*) + \frac{1}{2} D^2 \lambda_{1:T} + \frac{1}{2} \sum_{t=1}^T \frac{(G_t + \lambda_t D)^2}{H_{1:t} + \lambda_{1:t}},$$
which proves the theorem.

The following inequality is important in the rest of the analysis, as it allows us to remove the dependence on $\lambda_t$ from the numerator of the second sum at the expense of increased constants. We have
$$\frac{1}{2} D^2 \lambda_{1:T} + \frac{1}{2} \sum_{t=1}^T \frac{(G_t + \lambda_t D)^2}{H_{1:t} + \lambda_{1:t}} \;\le\; \frac{1}{2} D^2 \lambda_{1:T} + \sum_{t=1}^T \left( \frac{G_t^2}{H_{1:t} + \lambda_{1:t}} + \frac{\lambda_t^2 D^2}{H_{1:t} + \lambda_{1:t-1} + \lambda_t} \right) \;\le\; \frac{3}{2} D^2 \lambda_{1:T} + \sum_{t=1}^T \frac{G_t^2}{H_{1:t} + \lambda_{1:t}}, \qquad (2)$$
where the first inequality holds because $(a+b)^2 \le 2a^2 + 2b^2$ for any $a, b \in \mathbb{R}$, and the second because $H_{1:t} + \lambda_{1:t-1} + \lambda_t \ge \lambda_t$.

It turns out that for appropriate choices of $\{\lambda_t\}$, the above theorem recovers the $O(\sqrt{T})$ bound on the regret for linear functions [2] and the $O(\log T)$ bound for strongly convex functions [3]. Moreover, under specific assumptions on the sequence $\{H_t\}$, we can define a sequence $\{\lambda_t\}$ which produces intermediate rates between $\log T$ and $\sqrt{T}$. These results are exhibited in corollaries at the end of this section. Of course, it would be nice to be able to choose $\{\lambda_t\}$ adaptively without any restrictive assumptions on $\{H_t\}$. Somewhat surprisingly, such a choice can be made near-optimally by simple local balancing. Observe that the upper bound of Eq. (2) consists of two sums: $D^2 \sum_{t=1}^T \lambda_t$ and $\sum_{t=1}^T G_t^2/(H_{1:t} + \lambda_{1:t})$. The first sum increases in any particular $\lambda_t$ and the other decreases. While the influence of the regularization parameters $\lambda_t$ on the first sum is trivial, the influence on the second sum is more involved, as all terms for $t' \ge t$ depend on $\lambda_t$. Nevertheless, it turns out that a simple choice of $\lambda_t$ is optimal to within a multiplicative factor of 2. This is exhibited by the next lemma.

Lemma 3.1. Define
$$\mathcal{H}_T(\{\lambda_t\}) = \mathcal{H}_T(\lambda_1, \ldots, \lambda_T) = \lambda_{1:T} + \sum_{t=1}^T \frac{C_t}{H_{1:t} + \lambda_{1:t}},$$
where $C_t \ge 0$ does not depend on the $\lambda_t$'s. If $\lambda_t$ satisfies $\lambda_t = C_t/(H_{1:t} + \lambda_{1:t})$ for $t = 1, \ldots, T$, then
$$\mathcal{H}_T(\{\lambda_t\}) \le 2 \inf_{\{\lambda^*_t\} \ge 0} \mathcal{H}_T(\{\lambda^*_t\}).$$
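Before the proof, a small numerical sketch may help build intuition for the factor-2 claim. The curvatures $H_t$ and constants $C_t$ below are made-up values, and a crude random search stands in for the infimum, so the printed ratio is an upper bound on the ratio against the true optimum; none of this is from the paper.

```python
import numpy as np

def objective(lam, H, C):
    """H_T({lambda_t}) = lambda_{1:T} + sum_t C_t / (H_{1:t} + lambda_{1:t})."""
    return lam.sum() + np.sum(C / (np.cumsum(H) + np.cumsum(lam)))

def balanced_lambdas(H, C):
    """lambda_t satisfying lambda_t = C_t / (H_{1:t} + lambda_{1:t}), i.e. the non-negative
    root of lambda^2 + (H_{1:t} + lambda_{1:t-1}) * lambda - C_t = 0."""
    lam, lam_sum, H_sum = [], 0.0, 0.0
    for H_t, C_t in zip(H, C):
        H_sum += H_t
        a = H_sum + lam_sum
        lam_t = 0.5 * (np.sqrt(a * a + 4.0 * C_t) - a)
        lam.append(lam_t)
        lam_sum += lam_t
    return np.array(lam)

rng = np.random.default_rng(0)
T = 20
H = rng.uniform(0.0, 0.5, T)          # made-up curvatures H_t >= 0
C = rng.uniform(0.5, 2.0, T)          # made-up constants C_t >= 0
balanced = objective(balanced_lambdas(H, C), H, C)
searched = min(objective(rng.uniform(0.0, 5.0, T), H, C) for _ in range(20000))
print(balanced / searched)            # never exceeds 2 if Lemma 3.1 holds
```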
Proof. We prove this by induction. Let $\{\lambda^*_t\}$ be the optimal sequence of non-negative regularization coefficients. The base of the induction is proved by considering two possibilities: either $\lambda_1 < \lambda^*_1$ or not. In the first case,
$$\lambda_1 + \frac{C_1}{H_1 + \lambda_1} = 2\lambda_1 \le 2\lambda^*_1 \le 2\left(\lambda^*_1 + \frac{C_1}{H_1 + \lambda^*_1}\right).$$
The other case is proved similarly. Now, suppose $\mathcal{H}_{T-1}(\{\lambda_t\}) \le 2\mathcal{H}_{T-1}(\{\lambda^*_t\})$ and consider two possibilities. If $\lambda_{1:T} < \lambda^*_{1:T}$, then
$$\mathcal{H}_T(\{\lambda_t\}) = \lambda_{1:T} + \sum_{t=1}^T \frac{C_t}{H_{1:t} + \lambda_{1:t}} = 2\lambda_{1:T} \le 2\lambda^*_{1:T} \le 2\mathcal{H}_T(\{\lambda^*_t\}).$$
If, on the other hand, $\lambda_{1:T} \ge \lambda^*_{1:T}$, then
$$\lambda_T + \frac{C_T}{H_{1:T} + \lambda_{1:T}} = \frac{2C_T}{H_{1:T} + \lambda_{1:T}} \le \frac{2C_T}{H_{1:T} + \lambda^*_{1:T}} \le 2\left(\lambda^*_T + \frac{C_T}{H_{1:T} + \lambda^*_{1:T}}\right).$$
Using the inductive assumption, we obtain $\mathcal{H}_T(\{\lambda_t\}) \le 2\mathcal{H}_T(\{\lambda^*_t\})$.

The lemma above is the key to the proof of the near-optimal bounds for Algorithm 2.¹

¹ Lemma 3.1 effectively describes an algorithm for an online problem with competitive ratio of 2. In the full version of this paper we give a lower bound, strictly larger than one, on the competitive ratio achievable by any online algorithm for this problem.

Proof (of Theorem 1.1). By Eq. (2) and Lemma 3.1,
$$R_T \le \frac{3}{2} D^2 \lambda_{1:T} + \sum_{t=1}^T \frac{G_t^2}{H_{1:t} + \lambda_{1:t}} \le 3 \inf_{\lambda^*_1, \ldots, \lambda^*_T} \left( D^2 \lambda^*_{1:T} + \sum_{t=1}^T \frac{G_t^2}{H_{1:t} + \lambda^*_{1:t}} \right) \le 6 \inf_{\lambda^*_1, \ldots, \lambda^*_T} \left( \frac{1}{2} D^2 \lambda^*_{1:T} + \frac{1}{2} \sum_{t=1}^T \frac{(G_t + \lambda^*_t D)^2}{H_{1:t} + \lambda^*_{1:t}} \right),$$
provided the $\lambda_t$ are chosen as solutions to
$$\frac{3}{2} D^2 \lambda_t = \frac{G_t^2}{H_{1:t} + \lambda_{1:t-1} + \lambda_t}. \qquad (3)$$
It is easy to verify that
$$\lambda_t = \frac{1}{2}\left(\sqrt{(H_{1:t} + \lambda_{1:t-1})^2 + 8G_t^2/(3D^2)} - (H_{1:t} + \lambda_{1:t-1})\right)$$
is the non-negative root of the above quadratic equation. We note that division by zero in Algorithm 2 occurs only if $\lambda_1 = H_1 = G_1 = 0$. Without loss of generality, $G_1 \ne 0$, for otherwise $x_1$ is minimizing $f_1(x)$ and the regret on that round is non-positive.

Hence, the algorithm has a bound on its performance which is 6 times the bound obtained by the best offline adaptive choice of regularization coefficients. While the constant 6 might not be optimal, it can be shown that a constant strictly larger than one is unavoidable (see the previous footnote). We also remark that if the diameter $D$ is unknown, the regularization coefficients $\lambda_t$ can still be chosen by balancing as in Eq. (3), except without the $D^2$ term. This choice of $\lambda_t$, however, increases the bound on the regret suffered by Algorithm 2 by a factor of $O(D^2)$.

Let us now consider some special cases and show that Theorem 1.1 not only recovers the rates of increase of regret of [3] and [2], but also provides intermediate rates. For each of these special cases, we provide a sequence $\{\lambda_t\}$ which achieves the desired rates. Since Theorem 1.1 guarantees that Algorithm 2 is competitive with the best choice of the parameters, we conclude that Algorithm 2 achieves the same rates.

Corollary 3.1. Suppose $G_t \le G$ for all $1 \le t \le T$. Then for any sequence of convex functions $\{f_t\}$, the bound on the regret of Algorithm 2 is $O(\sqrt{T})$.

Proof. Let $\lambda_1 = \sqrt{T}$ and $\lambda_t = 0$ for $1 < t \le T$. By Eq. (2), and since $H_{1:t} + \lambda_{1:t} \ge \lambda_1 = \sqrt{T}$,
$$\frac{1}{2} D^2 \lambda_{1:T} + \frac{1}{2} \sum_{t=1}^T \frac{(G_t + \lambda_t D)^2}{H_{1:t} + \lambda_{1:t}} \le \frac{3}{2} D^2 \lambda_{1:T} + \sum_{t=1}^T \frac{G_t^2}{H_{1:t} + \lambda_{1:t}} \le \frac{3}{2} D^2 \sqrt{T} + \sum_{t=1}^T \frac{G^2}{\sqrt{T}} = \frac{3}{2} D^2 \sqrt{T} + G^2 \sqrt{T}.$$
Hence, the regret of Algorithm 2 can never increase faster than $\sqrt{T}$.

We now consider the assumptions of [3].

Corollary 3.2. Suppose $H_t \ge H > 0$ and $G_t^2 \le G$ for all $1 \le t \le T$. Then the bound on the regret of Algorithm 2 is $O(\log T)$.

Proof. Set $\lambda_t = 0$ for all $t$. It holds that
$$R_T \le \frac{1}{2} \sum_{t=1}^T \frac{G_t^2}{H_{1:t}} \le \frac{1}{2} \sum_{t=1}^T \frac{G}{tH} \le \frac{G}{2H}(\log T + 1).$$

The above proof also recovers the result of Theorem 2.1. The following corollary shows a spectrum of rates under assumptions on the curvature of the functions.

Corollary 3.3. Suppose $H_t = t^{-\alpha}$ and $G_t \le G$ for all $1 \le t \le T$.
1. If $\alpha = 0$, then $R_T = O(\log T)$.
2. If $\alpha > 1/2$, then $R_T = O(\sqrt{T})$.
3. If $0 < \alpha \le 1/2$, then $R_T = O(T^\alpha)$.

Proof. The first two cases follow immediately from Corollaries 3.1 and 3.2. For the third case, let $\lambda_1 = T^\alpha$ and $\lambda_t = 0$ for $1 < t \le T$. Note that
$$\sum_{s=1}^t H_s \ge \int_0^{t-1} (x+1)^{-\alpha}\, dx = (1-\alpha)^{-1} t^{1-\alpha} - (1-\alpha)^{-1}.$$
Hence,
$$\frac{1}{2} D^2 \lambda_{1:T} + \frac{1}{2} \sum_{t=1}^T \frac{(G_t + \lambda_t D)^2}{H_{1:t} + \lambda_{1:t}} \le \frac{3}{2} D^2 \lambda_{1:T} + \sum_{t=1}^T \frac{G_t^2}{H_{1:t} + \lambda_{1:t}} \le 2 D^2 T^\alpha + \frac{G^2}{T^\alpha} + G^2(1-\alpha) \sum_{t=2}^T \frac{1}{t^{1-\alpha} - 1} = O(T^\alpha),$$
where the $t = 1$ term is bounded using $H_1 + \lambda_1 \ge T^\alpha$, the remaining terms use the integral bound above, and the constants hidden in the $O(\cdot)$ notation depend on $\alpha$ and $G$ but not on $T$.
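To close the section, here is a minimal Python sketch of Algorithm 2 itself, again under the illustrative assumption that $K$ is a Euclidean ball of radius $D$ and that an oracle supplies, for each round, the gradient together with the quantities $H_t$ and $G_t$ appearing in the pseudocode.

```python
import numpy as np

def adaptive_ogd(oracle, T, dim, D):
    """Algorithm 2 (Adaptive OGD): add the regularizer (lambda_t/2)*||x||^2 with lambda_t
    chosen by the balancing rule, and step with eta_{t+1} = 1/(H_{1:t} + lambda_{1:t})."""
    x = np.zeros(dim)                        # x_1
    H_cum, lam_cum = 0.0, 0.0                # H_{1:t}, lambda_{1:t}
    for t in range(1, T + 1):
        g, H_t, G_t = oracle(t, x)           # nabla f_t(x_t), H_t of Eq. (1), and G_t >= ||nabla f_t(x_t)||
        H_cum += H_t
        a = H_cum + lam_cum                  # H_{1:t} + lambda_{1:t-1}
        lam_t = 0.5 * (np.sqrt(a * a + 8.0 * G_t**2 / (3.0 * D**2)) - a)   # line 4
        lam_cum += lam_t
        eta = 1.0 / (H_cum + lam_cum)        # line 5 (zero denominator only if lambda_1 = H_1 = G_1 = 0)
        y = x - eta * (g + lam_t * x)        # line 6: gradient of the regularized loss
        n = np.linalg.norm(y)
        x = y if n <= D else (D / n) * y     # Euclidean projection onto the D-ball
    return x
```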
4 Generalization to different norms

The original online gradient descent (OGD) algorithm as analyzed by Zinkevich [2] used the Euclidean distance of the current point from the optimum as a potential function. The logarithmic regret bounds of [3] for strongly convex functions were also stated for the Euclidean norm, and such was the presentation above. However, as observed by Shalev-Shwartz and Singer in [5], the proof technique of [3] extends to arbitrary norms. As such, our results above for adaptive regularization carry over to the general setting, as we state below. Our notation follows that of Gentile and Warmuth [6].

Definition 4.1. A function $g$ over a convex set $K$ is called $H$-strongly convex with respect to a convex function $h$ if for all $x, y \in K$,
$$g(x) \ge g(y) + \nabla g(y)^\top (x - y) + \frac{H}{2} B_h(x, y).$$
Here $B_h(x, y)$ is the Bregman divergence with respect to the function $h$, defined as
$$B_h(x, y) = h(x) - h(y) - \nabla h(y)^\top (x - y).$$

This notion of strong convexity generalizes the Euclidean notion: the function $g(x) = \|x\|_2^2$ is 2-strongly convex with respect to $h(x) = \|x\|_2^2$ (in this case $B_h(x, y) = \|x - y\|_2^2$). More generally, the Bregman divergence can be thought of as a squared norm, not necessarily Euclidean, i.e., $B_h(x, y) = \|x - y\|^2$. Henceforth we also refer to the dual norm of a given norm, defined by $\|y\|_* = \sup_{\|x\| \le 1} \{y^\top x\}$. For the case of $\ell_p$ norms, we have $\|y\|_* = \|y\|_q$, where $q$ satisfies $\frac{1}{p} + \frac{1}{q} = 1$, and by Hölder's inequality $x^\top y \le \|x\|\,\|y\|_* \le \frac{1}{2}\|x\|^2 + \frac{1}{2}\|y\|_*^2$ (this holds for norms other than $\ell_p$ as well). For simplicity, the reader may think of the functions $g, h$ as convex and differentiable.²

² Since the set of points of nondifferentiability of convex functions has measure zero, convexity is the only property that we require. Indeed, for nondifferentiable functions, the algorithm would choose a point $\tilde x_t$, which is $x_t$ with the addition of a small random perturbation. With probability one, the functions would be smooth at the perturbed point, and the perturbation could be made arbitrarily small so that the regret rate would not be affected.

The following algorithm is a generalization of the OGD algorithm to general strongly convex functions (see the derivation in [6]). In this extended abstract we state the update rule implicitly, leaving the issues of efficient computation for the full version (these issues are orthogonal to our discussion, and were addressed in [6] for a variety of functions $h$).

Algorithm 3 General-Norm Online Gradient Descent
1: Input: convex function $h$
2: Initialize $x_1$ arbitrarily.
3: for $t = 1$ to $T$ do
4:   Predict $x_t$, observe $f_t$.
5:   Compute $\eta_{t+1}$ and let $y_{t+1}$ be such that $\nabla h(y_{t+1}) = \nabla h(x_t) - 2\eta_{t+1} \nabla f_t(x_t)$.
6:   Let $x_{t+1} = \arg\min_{x \in K} B_h(x, y_{t+1})$ be the projection of $y_{t+1}$ onto $K$.
7: end for
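As one concrete instantiation of lines 5 and 6 (an illustrative choice, not one prescribed by the paper), take $K$ to be the probability simplex and $h$ the negative entropy $h(x) = \sum_i x_i \log x_i$, so that $\nabla h(x) = 1 + \log x$: the implicit update becomes a multiplicative update, and the Bregman (KL) projection onto the simplex is a renormalization.

```python
import numpy as np

def entropic_mirror_step(x_t, grad_t, eta):
    """One round of Algorithm 3 with h(x) = sum_i x_i*log(x_i) on the simplex.
    Line 5: grad h(y_{t+1}) = grad h(x_t) - 2*eta*grad_t, i.e. log y = log x_t - 2*eta*grad_t.
    Line 6: the KL projection of a positive vector onto the simplex is a renormalization."""
    y = x_t * np.exp(-2.0 * eta * grad_t)
    return y / y.sum()

# e.g. one step from the uniform distribution, with an arbitrary gradient vector:
x = np.ones(5) / 5
x = entropic_mirror_step(x, grad_t=np.array([0.1, -0.2, 0.3, 0.0, 0.05]), eta=0.5)
```

Other choices of $h$ lead to different implicit updates; efficient computation for a variety of functions $h$ is discussed in [6], as noted above.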
The methods of the previous sections can now be used to derive similar, dynamically optimal, bounds on the regret. As a first step, let us generalize the bound of [3], as well as Theorem 2.1, to general norms.

Theorem 4.1. Suppose that, for each $t$, $f_t$ is an $H_t$-strongly convex function with respect to $h$, and let $h$ be such that $B_h(x, y) \ge \|x - y\|^2$ for some norm $\|\cdot\|$. Let $\|\nabla f_t(x_t)\|_* \le G_t$ for all $t$. Applying the General-Norm Online Gradient Descent algorithm with $\eta_{t+1} = \frac{1}{H_{1:t}}$, we have
$$R_T \le \frac{1}{2} \sum_{t=1}^T \frac{G_t^2}{H_{1:t}}.$$

Proof. The proof follows [3], with the Bregman divergence replacing the Euclidean distance as a potential function. By the assumption on the functions $f_t$, for any $x^* \in K$,
$$f_t(x_t) - f_t(x^*) \le \nabla f_t(x_t)^\top (x_t - x^*) - \frac{H_t}{2} B_h(x^*, x_t).$$
By a well-known property of Bregman divergences (see [6]), it holds that for any vectors $x, y, z$,
$$(x - y)^\top (\nabla h(z) - \nabla h(y)) = B_h(x, y) - B_h(x, z) + B_h(y, z).$$
Combining both observations,
$$2(f_t(x_t) - f_t(x^*)) \le 2\nabla f_t(x_t)^\top (x_t - x^*) - H_t B_h(x^*, x_t)
= \frac{1}{\eta_{t+1}} (\nabla h(y_{t+1}) - \nabla h(x_t))^\top (x^* - x_t) - H_t B_h(x^*, x_t)
= \frac{1}{\eta_{t+1}} \left[ B_h(x^*, x_t) - B_h(x^*, y_{t+1}) + B_h(x_t, y_{t+1}) \right] - H_t B_h(x^*, x_t)
\le \frac{1}{\eta_{t+1}} \left[ B_h(x^*, x_t) - B_h(x^*, x_{t+1}) + B_h(x_t, y_{t+1}) \right] - H_t B_h(x^*, x_t),$$
where the last inequality follows from the Pythagorean theorem for Bregman divergences [6], as $x_{t+1}$ is the projection of $y_{t+1}$ with respect to the Bregman divergence and $x^* \in K$ lies in the convex set. Summing over all iterations and recalling that $\eta_{t+1} = \frac{1}{H_{1:t}}$,
$$2R_T \le \sum_{t=2}^T B_h(x^*, x_t)\left( \frac{1}{\eta_{t+1}} - \frac{1}{\eta_t} - H_t \right) + B_h(x^*, x_1)\left( \frac{1}{\eta_2} - H_1 \right) + \sum_{t=1}^T \frac{1}{\eta_{t+1}} B_h(x_t, y_{t+1}) = \sum_{t=1}^T \frac{1}{\eta_{t+1}} B_h(x_t, y_{t+1}). \qquad (4)$$
We proceed to bound $B_h(x_t, y_{t+1})$. By the definition of the Bregman divergence and the dual-norm inequality stated before,
$$B_h(x_t, y_{t+1}) + B_h(y_{t+1}, x_t) = (\nabla h(x_t) - \nabla h(y_{t+1}))^\top (x_t - y_{t+1}) = 2\eta_{t+1} \nabla f_t(x_t)^\top (x_t - y_{t+1}) \le \eta_{t+1}^2 \|\nabla_t\|_*^2 + \|x_t - y_{t+1}\|^2.$$
Thus, by our assumption $B_h(x, y) \ge \|x - y\|^2$, we have
$$B_h(x_t, y_{t+1}) \le \eta_{t+1}^2 \|\nabla_t\|_*^2 + \|x_t - y_{t+1}\|^2 - B_h(y_{t+1}, x_t) \le \eta_{t+1}^2 \|\nabla_t\|_*^2.$$
Plugging this back into Eq. (4), we get
$$R_T \le \frac{1}{2} \sum_{t=1}^T \eta_{t+1} G_t^2 = \frac{1}{2} \sum_{t=1}^T \frac{G_t^2}{H_{1:t}}.$$

The generalization of our technique is now straightforward. Let $A^2 = \sup_{x \in K} g(x)$ and $2B = \sup_{x \in K} \|\nabla g(x)\|_*$. The following algorithm is the analogue of Algorithm 2, and Theorem 4.2 is the analogue of Theorem 1.1 for general norms.

Algorithm 4 Adaptive General-Norm Online Gradient Descent
1: Initialize $x_1$ arbitrarily. Let $g(x)$ be 1-strongly convex with respect to the convex function $h$.
2: for $t = 1$ to $T$ do
3:   Predict $x_t$, observe $f_t$.
4:   Compute $\lambda_t = \frac{1}{2}\left(\sqrt{(H_{1:t} + \lambda_{1:t-1})^2 + 8G_t^2/(A^2 + 2B^2)} - (H_{1:t} + \lambda_{1:t-1})\right)$.
5:   Compute $\eta_{t+1} = (H_{1:t} + \lambda_{1:t})^{-1}$.
6:   Let $y_{t+1}$ be such that $\nabla h(y_{t+1}) = \nabla h(x_t) - 2\eta_{t+1}\left(\nabla f_t(x_t) + \frac{\lambda_t}{2}\nabla g(x_t)\right)$.
7:   Let $x_{t+1} = \arg\min_{x \in K} B_h(x, y_{t+1})$ be the projection of $y_{t+1}$ onto $K$.
8: end for

Theorem 4.2. Suppose that each $f_t$ is an $H_t$-strongly convex function with respect to $h$, and let $g$ be 1-strongly convex with respect to $h$. Let $h$ be such that $B_h(x, y) \ge \|x - y\|^2$ for some norm $\|\cdot\|$, and let $\|\nabla f_t(x_t)\|_* \le G_t$. The regret of Algorithm 4 is bounded by
$$R_T \le 3 \inf_{\lambda^*_1, \ldots, \lambda^*_T} \left( (A^2 + 2B^2)\lambda^*_{1:T} + \sum_{t=1}^T \frac{(G_t + \lambda^*_t B)^2}{H_{1:t} + \lambda^*_{1:t}} \right).$$

If the norm in the above theorem is the Euclidean norm and $g(x) = \|x\|^2$, we find that $D = \sup_{x \in K}\|x\| = A = B$ and recover the results of Theorem 1.1.
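For completeness, the adaptive choice in line 4 of Algorithm 4 is the same quadratic-root balancing as in Algorithm 2, with $3D^2$ replaced by $A^2 + 2B^2$. A one-function sketch follows; the constants $A$ and $B$ must be supplied for the chosen $g$ and norm, and the helper name is ours, not the paper's.

```python
import numpy as np

def lambda_general_norm(H_1t, lam_prev, G_t, A, B):
    """Line 4 of Algorithm 4: the non-negative root of
    lambda^2 + (H_{1:t} + lambda_{1:t-1})*lambda - 2*G_t^2/(A^2 + 2*B^2) = 0."""
    a = H_1t + lam_prev
    return 0.5 * (np.sqrt(a * a + 8.0 * G_t**2 / (A**2 + 2.0 * B**2)) - a)
```

Note that for the Euclidean case with $g(x) = \|x\|^2$ one has $A^2 + 2B^2 = 3D^2$, so this reduces to line 4 of Algorithm 2.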
References

[1] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[2] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, pages 928-936, 2003.
[3] Elad Hazan, Adam Kalai, Satyen Kale, and Amit Agarwal. Logarithmic regret algorithms for online convex optimization. In COLT, pages 499-513, 2006.
[4] Shai Shalev-Shwartz and Yoram Singer. Convex repeated games and Fenchel duality. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19. MIT Press, Cambridge, MA, 2007.
[5] Shai Shalev-Shwartz and Yoram Singer. Logarithmic regret algorithms for strongly convex repeated games. Technical Report 2007-42, The Hebrew University, 2007.
[6] C. Gentile and M. K. Warmuth. Proving relative loss bounds for on-line learning algorithms using Bregman divergences. In COLT (tutorial), 2000.