No-Regret Learning in Convex Games

Geoffrey J. Gordon (ggordon@cs.cmu.edu), Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213
Amy Greenwald (amy@cs.brown.edu) and Casey Marks (casey@cs.brown.edu), Department of Computer Science, Brown University, Providence, RI 02912

Appearing in Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, 2008. Copyright 2008 by the author(s)/owner(s).

Abstract

Quite a bit is known about minimizing different kinds of regret in experts problems, and how these regret types relate to types of equilibria in the multiagent setting of repeated matrix games. Much less is known about the possible kinds of regret in online convex programming problems (OCPs), or about equilibria in the analogous multiagent setting of repeated convex games. This gap is unfortunate, since convex games are much more expressive than matrix games, and since many important machine learning problems can be expressed as OCPs. In this paper, we work to close this gap: we analyze a spectrum of regret types which lie between external and swap regret, along with their corresponding equilibria, which lie between coarse correlated and correlated equilibrium. We also analyze algorithms for minimizing these regret types. As examples of our framework, we derive algorithms for learning correlated equilibria in polyhedral convex games and extensive-form correlated equilibria in extensive-form games. The former is exponentially more efficient than previous algorithms, and the latter is the first of its type.

1. Introduction

We wish to build agents that can learn to act effectively in multiagent decision problems. We represent such problems as general-sum games: each agent i is given a feasible region A_i from which to choose an action a_i. The payoff to agent i depends not only on i's choice, but also on the actions a_{¬i} chosen by other agents. Since we are modeling learning, we assume that each agent knows only its own feasible region and observes only its own payoff structure. So, an agent cannot simply compute an equilibrium of the game and play it (even leaving aside the complexity of such a computation and the problem of coordinating with other agents on an equilibrium). All an agent can do is learn a preferred course of action by playing the game repeatedly and observing its own payoffs.

What, then, is an appropriate goal for a learning agent? Unlike zero-sum games, general-sum games do not have a well-defined value: even if we had complete knowledge of the game and all players were completely rational, we would not be able to predict how much payoff we should receive. Instead, researchers have defined other goals for learning agents. One popular goal is regret minimization. For example, a number of previous algorithms have been designed to minimize external regret (defined in Sec. 2) in convex games, including Generalized Gradient Descent (Gordon, 1999b), GIGA (Zinkevich, 2003), Follow the Perturbed Leader (Kalai & Vempala, 2003), Lagrangian Hedging (Gordon, 2006), and algorithms based on Fenchel duality (Shalev-Shwartz & Singer, 2006).

However, no external regret may not be a sufficient goal: a set of agents can all achieve no external regret (which guarantees that the empirical distribution of joint play converges to the set of coarse correlated equilibria, defined in Sec. 4) and still have an incentive to change their play. For example, a no-external-regret learner can consistently observe that its average payoff per trial would have been higher if it had chosen action a′ every time that it actually played a, and yet never switch to playing action a′ in these situations. To avoid such behavior, we seek algorithms that provide guarantees stronger than no external regret. In a seminal paper, Foster and Vohra (1997) present an algorithm that exhibits no internal regret (defined in Sec. 2) in matrix games, and further, show that if all players achieve no internal regret, the empirical distribution of joint play converges to the set of correlated equilibria (see Sec. 4). This guarantee rules out precisely the anomalous behavior described above.

Stoltz and Lugosi (2007) generalize these results to convex games. Extending the framework of Greenwald and Jafari (2003) for matrix games, they define a continuum of regret measures called Φ-regret, as well as corresponding Φ-equilibria, for convex games. Given a feasible region A, Φ is a collection of action transformations; that is, each φ ∈ Φ is a function from A to itself. An agent calculates its Φ-regret by comparing the losses it obtained during its past history of play to the losses it would have obtained had it transformed each action it played according to some φ ∈ Φ. Different choices of Φ lead to different types of regret and corresponding equilibria.

In matrix games, the only two regret types known to be of interest are the above-mentioned external and internal regret. No internal regret is equivalent to no swap regret, in which Φ is the set of all transformations from A to itself. In convex games, by contrast, there is a much richer variety of regret concepts. We identify and analyze two novel regret types, which we call extensive-form and finite-element regret. We also analyze linear regret. Each of these regret types is distinct from the others and from external and swap regret. In fact, they form a progression: no swap regret (the strongest property) implies no finite-element regret, which implies no linear regret, which implies no extensive-form regret, which implies no external regret (the weakest property).

Different regret types require different regret-minimization algorithms. For convex games, until recently, most algorithms minimized only external regret. More recently, Stoltz and Lugosi (2007) proved the existence of a no-swap-regret algorithm, and Hazan and Kale (2007) derived an algorithm that exhibits no Φ-regret for any set Φ which is the convex hull of a finite set of transformations. Simultaneously and independently, we developed an algorithm similar to Hazan and Kale's: our algorithm handled more-general representations of transformation sets, but required exact fixed-point calculations (Gordon et al., 2007).

Unfortunately, constructing an algorithm according to Stoltz and Lugosi's proof would be prohibitively expensive: both the time and space requirements would grow exponentially with the number of rounds. And, Hazan and Kale's algorithm, which runs in time polynomial in the number of corners of Φ, can also be prohibitively expensive: for example, if A is the unit cube in R^d and Φ is the set of linear transformations that map A to itself, then Φ, which is the Cartesian product of d copies of the unit L1 ball, has (2d)^d corners.

In this work, we extend our earlier algorithms and proofs, unifying them with Hazan and Kale's. The result is an algorithm which accommodates more-efficient representations of Φ. In the example above, the natural representation of Φ is as a set of d × d matrices satisfying certain linear constraints.
Using this representation, our algorithm runs in time polynomial in d, an exponential speedup. In general, we can efficiently achieve no linear regret so long as we can efficiently optimize over the set of linear mappings from A to itself. We also instantiate our algorithm for extensive-form and finite-element regret. These regret types are important in practice: extensive-form regret corresponds to extensive-form correlated equilibrium (Forges & von Stengel, 2002), arguably the most natural notion of equilibrium in extensive-form games. And, our no-finite-element-regret algorithm, with a simple modification described below, guarantees that the empirical distribution of joint play converges to a correlated equilibrium. For extensive-form regret, our algorithm is polynomial in the dimension of the action set A; we are not aware of any prior no-extensive-form-regret algorithms. For finite-element regret, our algorithm is polynomial in the dimension of the action set and in the size of a finite-element mesh that covers A. Although the necessary mesh for some choices of A is quite large, our algorithm is still by far the most efficient known that guarantees convergence to correlated equilibrium.

2. The General Algorithm

When playing a repeated convex game, a single agent's learning problem is called an online convex program (OCP): in each round t, the agent chooses an action a_t ∈ A. At the same time, forces external to the agent choose a convex loss function l_t ∈ L. (A loss is just a negative payoff.) The agent observes l_t and pays l_t(a_t). The action space A is assumed to be a convex and compact subset of R^d. The set L includes convex loss functions with bounded subgradients. The commonly studied experts problem is a special case of an OCP in which the feasible region is the probability simplex in R^d.

A learning algorithm takes as input a sequence of loss functions l_t and produces as output a sequence of actions a_t. Action a_t may depend on l_1, ..., l_{t-1}, but not on l_t or later loss functions. The learner's objective is to minimize its cumulative loss, $L_T = \sum_{t=1}^T l_t(a_t)$. The minimum achievable loss depends on the specific sequence l_t. To measure how well a learning algorithm performs against a given sequence, we calculate its regret. The simplest type of regret is called external regret, and is defined as follows:

$$R^{\mathrm{EXT}} = \sup_{a \in A} \sum_{t=1}^T \left( l_t(a_t) - l_t(a) \right)$$

That is, the external regret is the difference between the actual loss achieved and the smallest possible loss that could have been achieved on the sequence l_t by playing a fixed a ∈ A. We say that an algorithm exhibits no external regret for feasible region A and set L if we can guarantee that its average external regret per trial eventually falls below any ε > 0, regardless of the particular sequence l_t. In other words, the algorithm exhibits no external regret if there is a function f(T, A, L) which is o(T) for any fixed A and L, such that for all a ∈ A,

$$\sum_{t=1}^T l_t(a_t) \le \sum_{t=1}^T l_t(a) + f(T, A, L) \qquad (1)$$

The function f can depend on A and L in complicated ways, but usually depends on properties like the diameter of A under some norm, or the length of ∇l(a) under some norm for a ∈ A and l ∈ L.

More generally, an agent can consider replacing its sequence a_1, ..., a_T with φ(a_1), ..., φ(a_T), where φ is some action transformation, that is, a measurable function that maps A into itself. If Φ is a set of such action transformations, we define an algorithm's Φ-regret as

$$R^{\Phi} = \sup_{\phi \in \Phi} \sum_{t=1}^T \left( l_t(a_t) - l_t(\phi(a_t)) \right)$$

and we say that it exhibits no Φ-regret if it satisfies the following analogue of Eq. 1: for all φ ∈ Φ,

$$\sum_{t=1}^T l_t(a_t) \le \sum_{t=1}^T l_t(\phi(a_t)) + g(T, A, L, \Phi) \qquad (2)$$

where g(T, A, L, Φ) is o(T) for any fixed A, L, and Φ. Note that external regret is just Φ-regret with Φ equal to the set of constant transformations: i.e., Φ_EXT = {φ_x | x ∈ A}, where φ_x(a) = x. By setting Φ to larger, more flexible transformation sets, we can define stronger varieties of regret. However, before studying any specific regret types in detail, we next discuss how to achieve no Φ-regret for general Φ.
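The regret definitions in Eqs. (1) and (2) are easy to evaluate empirically once a history of play has been recorded. The following sketch is not from the paper: it assumes linear losses on the probability simplex and a finite list of candidate transformations given as matrices, and all names are illustrative.

```python
import numpy as np

def external_regret(actions, losses):
    """R^EXT = sum_t l_t . a_t  -  min_{a in A} sum_t l_t . a.

    With linear losses on the probability simplex, the best fixed action is a
    corner, so the minimum is the smallest coordinate of sum_t l_t.
    """
    actions, losses = np.asarray(actions), np.asarray(losses)
    incurred = np.einsum("ti,ti->", losses, actions)
    return incurred - np.sum(losses, axis=0).min()

def phi_regret(actions, losses, transforms):
    """R^Phi = max over the listed phi of sum_t [ l_t . a_t - l_t . phi(a_t) ]."""
    actions, losses = np.asarray(actions), np.asarray(losses)
    incurred = np.einsum("ti,ti->", losses, actions)
    best = -np.inf
    for phi in transforms:                    # each phi: a d x d matrix mapping A into A
        transformed = actions @ phi.T         # phi(a_t) for every round t
        best = max(best, incurred - np.einsum("ti,ti->", losses, transformed))
    return best

# Tiny usage example on the 3-simplex.
rng = np.random.default_rng(0)
acts = rng.dirichlet(np.ones(3), size=50)     # played actions a_t
ls = rng.uniform(size=(50, 3))                # linear loss vectors l_t
swap_12 = np.array([[0.0, 1, 0], [1, 0, 0], [0, 0, 1]])  # one candidate transformation
print(external_regret(acts, ls), phi_regret(acts, ls, [swap_12]))
```

Taking the transformation list to be the constant maps onto each corner recovers external regret, matching the remark that Φ_EXT gives Φ-regret equal to external regret.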
2.1. General Φ

In this section, we develop an algorithm A that exhibits no Φ-regret for any suitable Φ ⊆ A^A. The algorithm itself is fairly simple, and embodies essentially the same idea that was proposed earlier by Gordon et al. (2007) and Hazan and Kale (2007). However, we develop the idea here so that it applies to a more general class of transformation sets than considered previously, and provide a proof that it achieves no Φ-regret under more general conditions. Our extra generality is crucial for developing efficient implementations for important choices of Φ, including linear, extensive-form, and finite-element transformations. [Footnote 1: Hazan and Kale's algorithm is efficient in the special case of external transformations. Indeed, this section's algorithm specializes to their algorithm in this case.]

Our Φ-regret minimizing algorithm A is described in Fig. 1. It takes as input a sequence of loss functions l_t ∈ L and outputs a sequence of actions a_t ∈ A, which, we will show, satisfies Eq. 2. In designing A, we assume that we have access to subroutines A_FIX and A_EXT. The subroutine A_FIX computes approximate fixed points of transformations φ ∈ Φ. That is, given any φ ∈ Φ and any ε > 0, A_FIX returns some a ∈ A such that ‖a − φ(a)‖_A ≤ ε. Here, ‖·‖_A is an arbitrary norm on R^d. The subroutine A_EXT is an external-regret minimizing algorithm whose feasible region is Φ; we assume that its regret bound is o(T) whenever we can provide a bound (in an appropriate norm) on the subgradients of the loss functions it encounters.

Since algorithm A accesses the transformation set Φ only through the subroutines A_FIX and A_EXT, it does not depend on any special properties of Φ beyond the existence of these subroutines. To state our theorem, though, we will embed Φ in a vector space, as follows. Since A ⊆ R^d, we can write φ as a d-tuple of "coordinate" functions (φ_1, φ_2, ..., φ_d), with φ_i : A → R. For all φ ∈ Φ and i = 1, ..., d, we assume φ_i is a member of some reproducing-kernel Hilbert space (RKHS) H of real-valued functions on A. [Footnote 2: A Hilbert space is a (possibly infinite-dimensional) vector space that has an inner product. A reproducing-kernel Hilbert space is a Hilbert space of real- or complex-valued functions in which evaluation at the point a is a continuous linear functional for any a.] Finally, we assume that Φ is a convex and compact subset of H^d.

To make these assumptions concrete, suppose for example that Φ is the convex hull of a finite set of transformations {φ^1, ..., φ^p}: i.e.,

$$\Phi = \left\{ \sum_{j=1}^p \lambda_j \phi^j \;\middle|\; \lambda_j \ge 0,\; \sum_{j=1}^p \lambda_j = 1 \right\}$$

(This is the case treated by Hazan and Kale.) If we take H to be the span of all of the coordinate functions φ^j_i, then Φ is a simplex in H^d with corners φ^j, for j = 1, ..., p. (In general, Φ's shape may be much more complicated than a simplex, as we will see for example in the definition of Φ_FE below.)

Figure 1. Algorithm A. Given feasible region A, transformation set Φ, initial transformation φ_1, and subroutines A_FIX and A_EXT. For t = 1, ..., T:
1. Send transformation φ_t to the fixed-point algorithm A_FIX, along with accuracy parameter ε_t = 1/√t. Receive action a_t satisfying ‖φ_t(a_t) − a_t‖_A ≤ ε_t.
2. Play a_t; observe loss function l_t and incur loss l_t(a_t).
3. Define m_t : Φ → R by m_t(φ) = l_t(φ(a_t)).
4. Send m_t to the no-external-regret algorithm A_EXT; receive transformation φ_{t+1}.

To bound the Φ-regret of algorithm A, we will need bounds on the actions a and the loss-function subgradients ∇l(a), for all l ∈ L and a ∈ A. In particular, we will suppose that ‖a‖_A ≤ C_1 and ‖∇l(a)‖_{A*} ≤ C_2, for any a ∈ A, any l ∈ L, and some constants C_1, C_2 > 0. Here ‖·‖_{A*} is the norm that is dual to ‖·‖_A.

Theorem 1 Fix a convex and compact feasible region A and a set of loss functions L satisfying the above norm bounds, as well as a set of transformations Φ ⊆ H^d, where H is a RKHS of real-valued functions on A. Assume we are given an algorithm A_EXT which, for any set of possible loss functions M with bounded subgradients, achieves no external regret on Φ. Also assume we are given an algorithm A_FIX which can compute an approximate fixed point of any φ ∈ Φ. Then algorithm A, using subroutines A_EXT and A_FIX, achieves no Φ-regret.

Proof: Define the set of functions M ⊆ (Φ → R) as M = {φ ↦ l(φ(a)) | l ∈ L, a ∈ A}. Note that each m ∈ M is convex because each l ∈ L is convex and φ(a) is linear in φ. Moreover, the norm of the subgradient of any m ∈ M at any point is bounded by C_1 C_2. (A proof of this fact, as well as a definition of the appropriate norm, is given by Gordon et al. (2008).) Because A_EXT exhibits no external regret on Φ with the bounded-subgradient set of potential loss functions M,

$$\sum_{t=1}^T m_t(\phi_t) \le \sum_{t=1}^T m_t(\phi) + f(T, \Phi, M)$$

for any φ ∈ Φ, where f is sublinear in T. So, by the definition of m_t,

$$\sum_{t=1}^T l_t(\phi_t(a_t)) \le \sum_{t=1}^T l_t(\phi(a_t)) + f(T, \Phi, M)$$

But, since ‖φ_t(a_t) − a_t‖_A ≤ ε_t and ‖∇l_t(a_t)‖_{A*} ≤ C_2, we have by Hölder's inequality that l_t(a_t) ≤ l_t(φ_t(a_t)) + ε_t C_2. So,

$$\sum_{t=1}^T l_t(a_t) \le \sum_{t=1}^T \left( l_t(\phi(a_t)) + \epsilon_t C_2 \right) + f(T, \Phi, M)$$

Since $C_2 \sum_{t=1}^T \epsilon_t = O(\sqrt{T})$, this is exactly the desired Φ-regret guarantee. ∎

Clearly, the run-time of A depends on the run-times of its subroutines. In particular, since A requires that A_FIX's accuracy parameter approach 0 as T increases, it is important that A_FIX run efficiently even for small ε. We will discuss run-times in more detail in the context of specific examples below. For now, we note the following trivial generalization of a result due to Hazan and Kale: if the fixed-point algorithm A_FIX is an FPTAS, and if the no-external-regret algorithm A_EXT runs in polynomial time, then A can process T actions and loss functions in time polynomial in T. Hazan and Kale allow run-times to be polynomial in the number of corners of Φ (among other parameters); this renders their efficiency guarantees meaningless when Φ has many corners. With our more-efficient representations of Φ, we can replace the dependence on the number of corners with parameters like the dimension of Φ and the norm bounds for a ∈ A and ∇l for l ∈ L; since these latter parameters can be much smaller, the result will be a much faster run-time.

As described so far, the algorithm A is deterministic if its subroutines A_FIX and A_EXT are. Below, we will also define a randomized variant of A, to strengthen the connection to game-theoretic equilibria.

2.2. Finite-dimensional Φ

We defined algorithm A in terms of a generic set of transformations Φ ⊆ H^d, where H is a RKHS, and each element of H is a real-valued function on A. (So, each φ ∈ Φ is a d-tuple of real-valued functions on A, which we interpret as a function from A to R^d.) Because of the reproducing-kernel property, computing component φ_i(a) of some φ ∈ H^d for a ∈ A is the same as computing the inner product ⟨φ_i, K(a)⟩. In other words, each φ_i is the composition of a fixed, possibly-nonlinear function K(·) with a linear mapping ⟨φ_i, ·⟩. This is the so-called "kernel trick" (Cortes & Vapnik, 1995): first, K computes a vector of features of the action a. The inner product with φ_i then combines all of these features to produce the final output φ_i(a). To evaluate φ(a) in its entirety, we can compute K(a) once, and then evaluate the d inner products ⟨φ_1, K(a)⟩, ..., ⟨φ_d, K(a)⟩.

In this paper, we are chiefly interested in cases where the dimension of H is manageable, so that we can directly write down and work with the transformations φ ∈ H^d. So, for the remainder of the paper, we will assume that H is isomorphic to R^p for some finite p. We will also restrict our interest to linear loss functions l_t(a) = a · l_t. This is without loss of generality, since we can achieve no regret for a sequence of convex loss functions l_t by working with appropriately-chosen linear lower bounds on each l_t (Gordon, 1999a).

With these additional assumptions, the steps of A can be simplified: each derived loss function m_t is linear, and can be described by its subgradient as follows:

$$\nabla m_t(\phi) = \nabla_\phi\, l_t(\phi(a_t)) = \nabla_\phi\, (\phi(a_t) \cdot l_t) = l_t K(a_t)^T$$

The subgradient ∇m_t is a d × p matrix, since l_t is a d-vector and K(a_t) is a p-vector. Each transformation φ also corresponds to a d × p matrix (a d-tuple of p-vectors). To evaluate the loss function m_t on a transformation φ, we take the dot product ∇m_t · φ, which is defined to be tr(∇m_t^T φ) = tr(K(a_t) l_t^T φ) = tr(l_t^T φ K(a_t)) = l_t^T φ K(a_t). As we will see in the next section, a number of interesting transformation sets can be represented as d × p matrices. Representing transformations and subgradients in this way means we can manipulate them efficiently, and, in turn, design efficient no-regret algorithms.
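Under the finite-dimensional, linear-loss assumptions above, Algorithm A (Fig. 1) reduces to a short loop. The skeleton below is a schematic sketch rather than the paper's implementation: the fixed-point routine, the feature map K, and the Euclidean projection onto Φ are supplied by the caller, and the no-external-regret subroutine is stood in for by projected gradient descent with a 1/√t step size.

```python
import numpy as np

def run_algorithm_A(T, losses, feature_map, fixed_point, project, phi0):
    """Schematic version of Algorithm A for Phi represented as d x p matrices
    and linear losses l_t(a) = a . l_t.

    losses(t, a)          -> loss vector l_t in R^d, revealed after a_t is played
    feature_map(a)        -> K(a) in R^p
    fixed_point(phi, eps) -> a in A with ||phi K(a) - a|| <= eps   (subroutine A_FIX)
    project(M)            -> Euclidean projection of a d x p matrix M onto Phi
    """
    phi, played, cumulative = phi0.copy(), [], 0.0
    for t in range(1, T + 1):
        a = fixed_point(phi, 1.0 / np.sqrt(t))   # step 1: approximate fixed point of phi_t
        l = losses(t, a)                         # step 2: play a_t, observe and incur l_t
        cumulative += float(l @ a)
        played.append(a)
        grad = np.outer(l, feature_map(a))       # step 3: subgradient of m_t is l_t K(a_t)^T
        phi = project(phi - grad / np.sqrt(t))   # step 4: no-external-regret update on Phi
    return played, cumulative
```

Any other no-external-regret learner on Φ (for example, lazy projection or Lagrangian Hedging) could replace the gradient step without changing the structure of the loop.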
3. Specific Algorithms

We now instantiate our algorithm with various transformation sets Φ. We define each Φ as a set of d × p matrices, together with a kernel function K : A → R^p, with the guarantee that φK(a) ∈ A for all a ∈ A and φ ∈ Φ. To minimize each ensuing regret type, we go on to identify efficient subroutines A_FIX and A_EXT for finding fixed points and achieving no external regret. (All other calculations in our algorithm are O(pd), so these subroutines will usually be what limits our efficiency.)

For completeness, we also mention Φ_EXT, the set of constant transformations on A, and Φ_SWAP, the set of all measurable transformations on A. Φ_EXT is the weakest form of regret of interest here, and Φ_SWAP the strongest. These are the only two regret types known to be of interest in matrix games (no swap regret and no internal regret are equivalent in this setting). In convex games, however, there is a much richer variety of interesting regret concepts. Below, we analyze linear, finite-element, and extensive-form regret, corresponding to transformation sets Φ_LIN, Φ_FE, and Φ_EF. As we will see, in general, Φ_EXT ⊆ Φ_EF ⊆ Φ_LIN ⊆ Φ_FE ⊆ Φ_SWAP. So, no swap regret implies no finite-element regret, which implies no linear regret, which implies no extensive-form regret, which implies no external regret. We show in the long version of this paper (Gordon et al., 2008) that these five regret varieties are in fact distinct: it is possible to have, e.g., no Φ_LIN-regret while still having positive Φ_FE-regret.

Linear Regret. The set Φ_LIN includes all linear transformations that map A into itself. To achieve no linear regret, we can take K to be the identity. Φ will then be a set of square d × d matrices. To find a fixed point of φ, we choose an appropriate element of the null space of φ − I, which takes time polynomial in d. The more expensive task is to achieve no external regret on Φ: depending on the form of A, Φ may or may not lend itself to a description in terms of a small number of simple constraints. If A is a probability simplex, then Φ is the set of stochastic matrices, which can be expressed with O(d^2) linear constraints on the entries of φ (this setting yields an algorithm very similar to that of Blum and Mansour (2005)). If A is a unit Euclidean ball, then Φ consists of those matrices whose largest singular value is at most 1; this set can be represented using a single semidefinite constraint. For general (convex compact) A, the best choice may be to use either GIGA or lazy projection (Zinkevich, 2003): the difficult step in these algorithms is a Euclidean projection onto Φ, which can be achieved via the ellipsoid algorithm.
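For the simplex case just described, both subroutines are straightforward. The sketch below is illustrative rather than the paper's code: it finds a fixed point of a column-stochastic φ from the eigenspace of φ − I (equivalently, its null space) and projects an arbitrary matrix onto the column-stochastic matrices one column at a time.

```python
import numpy as np

def simplex_projection(v):
    """Euclidean projection of a vector onto the probability simplex (sort-based method)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def project_onto_stochastic(M):
    """Project a d x d matrix onto column-stochastic matrices; the constraint set
    factors over columns, so each column can be projected independently."""
    return np.column_stack([simplex_projection(M[:, j]) for j in range(M.shape[1])])

def fixed_point_stochastic(phi):
    """A fixed point of a column-stochastic phi: a probability vector a with phi a = a,
    taken from the null space of phi - I (the eigenvector for eigenvalue 1)."""
    w, V = np.linalg.eig(phi)
    a = np.abs(np.real(V[:, np.argmin(np.abs(w - 1.0))]))
    return a / a.sum()

phi = np.array([[0.5, 0.2, 0.0],
                [0.5, 0.8, 1.0],
                [0.0, 0.0, 0.0]])           # a column-stochastic transformation
a = fixed_point_stochastic(phi)
print(a, np.allclose(phi @ a, a))            # stationary action, True
```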
Finite-Element Regret. The finite-element transformations only apply to polyhedral feasible regions A. For finite-element regret, we will define K as a mapping from a polyhedral feasible set A to a high-dimensional space K(A) called the barycentric coordinate space. To construct K(a), we first associate each of the p corners of A with one dimension of R^p. We then triangulate A by dividing it into mutually exclusive and exhaustive d-simplices, so that each corner of A is a corner of one or more simplices. Now, to calculate K(a), we first determine the simplex in which a lies (or choose one arbitrarily if it is on a boundary) and calculate the weights of a with respect to the d + 1 corners of that simplex. That is, if j(1), ..., j(d+1) are the indices of the corners of the simplex containing a, and if c_{j(1)}, ..., c_{j(d+1)} are their coordinates, we find the weights b_1, ..., b_{d+1} by solving a = Σ_i b_i c_{j(i)} with Σ_i b_i = 1. We then set entry [K(a)]_{j(i)} = b_i for each corner j(i), and set all other entries of K(a) to 0.

For example, if A = [0, 1]^2, we can divide A into two triangles, one with corners (0, 0), (0, 1), and (1, 1), and the other with corners (0, 0), (1, 0), and (1, 1). To calculate K(1/3, 2/3), note that (1/3, 2/3) is in the first triangle. If we associate corners of A with dimensions of K(A) in the order (0, 0), (0, 1), (1, 0), (1, 1), then K(1/3, 2/3) = (1/3, 1/3, 0, 1/3), since these weights express (1/3, 2/3) as a convex combination of corners 1, 2, and 4.

Given this definition of K, Φ_FE is the set of matrices φ that map K(A) into A. If A is a simplex, then K will be a linear mapping and Φ_FE = Φ_LIN. (In general, Φ_FE ⊇ Φ_LIN.) For another example, see Fig. 2.

Figure 2. Illustration of barycentric coordinates and Φ_FE. A is the outer pentagon, triangulated into three simplices. K(A) is a subset of the simplex in R^5 (not shown). φK(A) is the distorted pentagon. The × marks a fixed point of φ.

We note that Φ_FE can be factored: it is the Cartesian product of p copies of A, since it just needs to map each corner of A to a point inside A. So, to achieve no external regret in Φ, we can separately run p copies of any no-external-regret algorithm for A. A typical cost for doing so might be O(pd^3). [Footnote 3: The precise cost will depend heavily on the shape of A. For general A, most no-external-regret algorithms have a step like solving an LP with feasible region A or projecting onto A by minimum Euclidean distance. These computations cost O(d^3) if we assume that an appropriate measure of the complexity of A is held constant.] To find a fixed point of φ, we just need to check each of its linear pieces separately. Each individual check costs O(d^3), and there is one for each simplex in our mesh.
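The barycentric map K is simple to compute for the two-triangle mesh in the running example. The sketch below is illustrative (the names and the mesh encoding are not from the paper) and reproduces K(1/3, 2/3) = (1/3, 1/3, 0, 1/3).

```python
import numpy as np

# Corners of A = [0,1]^2, in the order used in the text: (0,0), (0,1), (1,0), (1,1).
CORNERS = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
# Two-triangle mesh; each triangle is a tuple of indices into CORNERS.
TRIANGLES = [(0, 1, 3), (0, 2, 3)]

def barycentric_K(a, corners=CORNERS, triangles=TRIANGLES):
    """Map a point a in A to its barycentric coordinate vector K(a) in R^p."""
    a = np.asarray(a, dtype=float)
    for tri in triangles:
        c = corners[list(tri)]                   # (d+1) x d corner coordinates
        # Solve sum_i b_i c_i = a together with sum_i b_i = 1 as one linear system.
        M = np.vstack([c.T, np.ones(len(tri))])
        b = np.linalg.solve(M, np.append(a, 1.0))
        if np.all(b >= -1e-12):                  # a lies in this simplex
            K = np.zeros(len(corners))
            K[list(tri)] = b
            return K
    raise ValueError("point is not inside any simplex of the mesh")

print(barycentric_K([1/3, 2/3]))   # -> [0.3333 0.3333 0.     0.3333]
```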
Extensive-Form Regret. Let T be a player's sequence tree, describing all possible sequences of choices and observations in an extensive-form game (e.g., Fig. 3 (left)). Suppose that each element of the feasible region A is a sequence weight vector on T (Forges & von Stengel, 2002), specifying a behavior strategy for the game. Define an extensive-form transformation as follows: fix a set D of choice nodes in T, along with pure-strategy sequence weight vectors w_b for each b ∈ D. If the original strategy is ever about to play b ∈ D, the transformed strategy deviates, and instead follows w_b. We assume that there are no b, b′ ∈ D with b an ancestor of b′ (so that all b ∈ D are reachable), and that each b ∈ D has a sibling a with w_b(a) = 1.

Extensive-form transformations are interesting since they correspond to the incentive constraints in extensive-form correlated equilibrium (Forges & von Stengel, 2002). We show (Gordon et al., 2008) that each extensive-form transformation can be represented by a matrix φ, whose rows and columns are indexed by choices, so that any action w ∈ A is transformed into another action φw ∈ A. The entries of φ are as follows:

$$\phi_{ab} = \begin{cases} w_b(a) & \text{if } b \sqsubseteq a \text{ and } b \in D \\ 1 & \text{if } b = a \text{ and } b \notin T_{b'} \text{ for all } b' \in D \\ 0 & \text{otherwise} \end{cases}$$

(T_{b′} is the subtree of T rooted at b′, so that b ∉ T_{b′} means b is not a descendant of b′; b ⊑ a means b is an ancestor or a sibling of an ancestor of a in T.) This equation says that column b of φ is either: a copy of w_b with entries w_b(a) replaced by 0s for b ⋢ a (if b ∈ D; cases 1, 3); a single 1 on the diagonal (if neither b nor any of its ancestors is in D; cases 2, 3); or all 0s (if b ∉ D, but one of b's (strict) ancestors is in D; case 3).

Now, if we take Φ_EF to be the convex hull of all such φs, then Φ_EF ⊆ Φ_LIN, and no Φ_EF-regret immediately implies no regret vs. any extensive-form transformation. (So, no Φ_EF-regret is related to extensive-form correlated equilibrium; see Sec. 4.) For example, if T is as shown in Fig. 3 (left), elements of A are vectors of 4 sequence weights, one each for a_1, ..., a_4. The weight for, e.g., a_3 is P(a_2 | root) P(a_3 | o_2), the product of the conditional probabilities of all choice nodes along the path from the root to a_3. So, strategy a_1, a_3 yields weights w = (1, 0, 0, 0)^T, while a_2, a_3 yields w′ = (0, 1, 1, 0)^T.

The set Φ_EF for this game is shown in Fig. 3 (right). The parameters a, d, e, and f determine the probability that each choice node is included in D: a ≥ 0 is P(a_1 ∈ D), d ≥ 0 is P(a_2 ∈ D), e ≥ 0 is P(a_3 ∈ D), and f ≥ 0 is P(a_4 ∈ D). If a_1 ∈ D, parameters b and c specify a strategy for the subtree rooted at a_2. (If a_1 ∉ D, the game ends right after we reach D, and so we need not specify further choices.) The inequalities listed in Fig. 3 are consistency constraints: e.g., the events a_2 ∈ D and a_3 ∈ D are mutually exclusive, so we must have d + e ≤ 1.

Figure 3. Φ_EF example. Left: the player's sequence tree T, with choices a_1, ..., a_4 and observations o_1 and o_2. Right: the matrices φ in Φ_EF,

$$\phi = \begin{pmatrix} 1-a & d & 0 & 0 \\ a & 1-d & 0 & 0 \\ b & 0 & 1-d-e & f \\ c & 0 & e & 1-d-f \end{pmatrix}$$

where b + c = a, d + e ≤ 1, d + f ≤ 1, and 0 ≤ a, b, c, d, e, f ≤ 1.

To represent the transformation "play a_2, a_3 instead of a_1," we construct a matrix φ by setting a, b = 1 and c, d, e, f = 0. It is easy to verify that φw = w′ as expected. On the other hand, the transformation "play a_1 instead of a_2" corresponds to φ with d = 1 and a, b, c, e, f = 0; again, it is easy to check φw′ = w.
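The two deviations worked out above are easy to check numerically. The sketch below is illustrative: the explicit matrix entries follow the Fig. 3 parameterization as reconstructed above, and the function and variable names are not from the paper.

```python
import numpy as np

def phi_EF(a, b, c, d, e, f):
    """Extensive-form transformation for the Fig. 3 example, with rows and columns
    indexed by the choices a1..a4 (b + c = a and the Fig. 3 inequalities assumed)."""
    return np.array([
        [1 - a, d,         0.0,       0.0      ],
        [a,     1 - d,     0.0,       0.0      ],
        [b,     0.0,       1 - d - e, f        ],
        [c,     0.0,       e,         1 - d - f],
    ])

w  = np.array([1.0, 0.0, 0.0, 0.0])    # sequence weights of strategy a1, a3
wp = np.array([0.0, 1.0, 1.0, 0.0])    # sequence weights of strategy a2, a3

play_a2_a3 = phi_EF(a=1, b=1, c=0, d=0, e=0, f=0)   # "play a2, a3 instead of a1"
play_a1    = phi_EF(a=0, b=0, c=0, d=1, e=0, f=0)   # "play a1 instead of a2"

print(np.allclose(play_a2_a3 @ w, wp))   # True: phi w  = w'
print(np.allclose(play_a1 @ wp, w))      # True: phi w' = w
```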
4. Regret and Equilibria

Algorithm A achieves no Φ-regret in an online convex program, for any suitable Φ. In this section, we relate this guarantee back to equilibria in convex games. A game consists of a set of players N, a set of actions A_i for each player i ∈ N, and a payoff function r_i : ∏_i A_i → R for each player i ∈ N. A matrix game is one in which each action set is finite. A variant on a matrix game is an experts game in which each action set is a probability simplex. Generalizing experts games, a convex game is one in which each action set is a convex and compact subset of Euclidean space and each payoff function is multi-linear. In experts games and convex games, players can play interior points; but, assuming polyhedral action sets (PAS), we can generate a corresponding corner game by restricting each player's actions to the corners of its action sets.

Following Stoltz and Lugosi (2007), who generalize the definition for matrix games given in Greenwald and Jafari (2003), we define equilibria in convex games in terms of transformation sets.

Definition 2 Given a game and a collection of transformation sets, {Φ_i}_{i∈N}, with each Φ_i ⊆ Φ_SWAP, a probability distribution q over ∏_i A_i is a {Φ_i}_{i∈N}-equilibrium iff the expectation over a ∼ q satisfies

$$E\left[ r_i(\phi(a_i), a_{\neg i}) - r_i(a) \right] \le 0 \qquad \forall\, i \in N,\ \phi \in \Phi_i \qquad (3)$$

Intuitively, an equilibrium is a distribution from which no player prefers to deviate using any transformation in its set. Taking each Φ_i to be the set of swap transformations defines correlated equilibria; taking each Φ_i to be the set of external (i.e., constant) transformations defines coarse correlated equilibria. These definitions lead to the following propositions, proved by Marks (2008) and Gordon et al. (2007).

Proposition 3 A correlated equilibrium of the corner game generated from a PAS convex game is also a correlated equilibrium of the convex game itself.

Proposition 4 For every correlated equilibrium in a PAS convex game, the corresponding corner game has a payoff-equivalent correlated equilibrium.

4.1. Repeated Games

As described above, we assume the agents play some game repeatedly and learn by observing the relationship between their actions and their payoffs. In repeated matrix games, Greenwald and Jafari (2003) have shown that if each agent plays according to a no-Φ_i-regret algorithm, then the empirical distribution of joint play converges to the set of {Φ_i}_{i∈N}-equilibria with probability 1. The empirical distribution of joint play at step t is the following distribution over the joint action set, where a^τ ∈ ∏_i A_i is the joint action played at time step τ: $z^t(\alpha) = |\{\tau \le t : a^\tau = \alpha\}| / t$.

The analogous result holds for {Φ_i}_{i∈N}-equilibrium in repeated convex games (e.g., Stoltz and Lugosi (2007)). Because extensive-form games are one class of convex games (Forges & von Stengel, 2002), this result implies that, if the agents all play extensive-form regret-minimization algorithms, their play will converge to the set of extensive-form correlated equilibria. (Marks (2008) also provides algorithms with this property, using the less-efficient normal-form representation of extensive-form games.)

We can also say something about convergence to full-fledged correlated equilibria in repeated convex games: define a randomized variant of A as follows. On a trial where the deterministic algorithm would have played a_t, the randomized algorithm samples its play ā_t from any distribution D such that

$$E_D(\bar{a}_t) = a_t, \qquad E_D(K(\bar{a}_t)) = K(a_t) \qquad (4)$$

(We still use a_t, rather than ā_t, in constructing m_t.) With such a D, if loss functions are linear, our Φ-regret on A and external regret on Φ differ by a zero-mean random variable; so, we can use standard stochastic convergence results to prove:

Corollary 5 Under the conditions of Thm. 1, the additional assumption (4), and restricting L to include only linear loss functions, the randomized variant of A achieves no Φ-regret with probability 1.
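Definition 2 can be checked directly for a distribution with finite support. The sketch below is not from the paper: it tests condition (3) for a single player given that player's payoff function and a finite list of candidate transformations, with the toy game and all names purely illustrative.

```python
import numpy as np

def is_phi_equilibrium_for_player(support, probs, payoff_i, transforms_i, tol=1e-9):
    """Check condition (3) of Definition 2 for one player i.

    support      : list of joint actions (a_i, a_others), each a_i a numpy vector
    probs        : probability of each joint action under q
    payoff_i     : payoff_i(a_i, a_others) -> float
    transforms_i : candidate transformations phi mapping a_i to another feasible a_i
    Returns True if no listed phi gives player i a positive expected gain.
    """
    for phi in transforms_i:
        gain = sum(p * (payoff_i(phi(a_i), a_others) - payoff_i(a_i, a_others))
                   for (a_i, a_others), p in zip(support, probs))
        if gain > tol:
            return False
    return True

# Toy usage: a 2x2 coordination game viewed as an experts game for player 0,
# with q placing half its mass on each coordinated corner.
payoff = lambda a0, a1: float(a0 @ np.eye(2) @ a1)
support = [(np.array([1.0, 0.0]), np.array([1.0, 0.0])),
           (np.array([0.0, 1.0]), np.array([0.0, 1.0]))]
probs = [0.5, 0.5]
constants = [lambda a, x=np.eye(2)[k]: x for k in range(2)]   # external (constant) transformations
print(is_phi_equilibrium_for_player(support, probs, payoff, constants))   # True
```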
For Φ_FE-regret, we can always find a D that satisfies Equation (4); so (Gordon et al., 2007):

Corollary 6 If, in a repeated PAS convex game, each agent plays only corner points and uses an algorithm that achieves no internal regret for the corner game (such as the randomized version of A with Φ = Φ_FE), then the empirical distribution of joint play converges to the set of correlated equilibria of the convex game with probability 1.

To our knowledge, ours is the most efficient algorithm which can make this claim, by a factor which is exponential in the dimension d.
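One distribution D satisfying Eq. (4) for Φ_FE plays a corner of the simplex containing a_t with probability equal to that corner's barycentric weight: since K maps each corner to a basis vector, both expectations in (4) follow. The minimal sketch below is illustrative and reuses barycentric_K and the two-triangle mesh from the finite-element sketch in Sec. 3.

```python
import numpy as np

def sample_corner(a, corners, K, rng):
    """Sample a corner of A with probability given by the barycentric weights K(a).

    Because K(corner_j) is the j-th basis vector, E[a_bar] = sum_j K(a)_j corner_j = a
    and E[K(a_bar)] = sum_j K(a)_j e_j = K(a), so Eq. (4) is satisfied.
    """
    weights = np.clip(K(a), 0.0, None)
    weights = weights / weights.sum()
    j = rng.choice(len(corners), p=weights)
    return corners[j]

rng = np.random.default_rng(0)
corners = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
samples = np.array([sample_corner([1/3, 2/3], corners, barycentric_K, rng)
                    for _ in range(20000)])
print(samples.mean(axis=0))   # approximately (1/3, 2/3)
```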
5. Discussion

We have presented several new forms of regret for online convex programs, analyzed their relationships to one another and to known regret types, and given the first efficient algorithms that directly minimize some of these forms of regret. These algorithms are by far the most efficient known for several purposes, including guaranteeing convergence to a correlated equilibrium in a repeated convex game, and to an extensive-form correlated equilibrium in an extensive-form game. By contrast, most previous OCP algorithms only guarantee convergence to coarse correlated equilibrium, an outcome which may yield much lower payoffs and may leave incentives for rational agents to deviate.

In the process of designing our algorithms, we derived efficient representations of the transformation sets for each of our regret types except Φ_SWAP: we wrote each as a nonlinear kernel mapping followed by a linear transformation chosen from an appropriate set of matrices. These representations may be of separate interest for designing future algorithms. In this paper, we were chiefly interested in cases where the dimension of the kernel mapping was manageable, so that we could directly work with the transformation matrices. However, it would be very interesting to try to design "kernelized" no-Φ-regret algorithms. In such algorithms we would never explicitly write down a transformation φ, but instead represent it in terms of observed actions and loss functions, thereby making it feasible to use very high-dimensional sets of transformations.

Important application areas for OCPs and convex games include multi-agent planning (in which the feasible region for each player is a set of plans, and interactions include contending for resources) and learning in extensive-form games such as poker. We are particularly interested in extensive-form games; this application requires further developments such as learning efficiently from bandit feedback and abstracting large games into smaller representations which we can work with in real time.

Acknowledgments

The authors would like to thank Martin Zinkevich for very helpful discussions during an early phase of this work. This work was supported in part by a grant from DARPA's Computer Science Study Panel program and in part by a grant from the Sloan Foundation.

References

Blum, A., & Mansour, Y. (2005). From external to internal regret. Proceedings of the Conference on Computational Learning Theory (COLT).
Cortes, C., & Vapnik, V. N. (1995). Support-vector networks. Machine Learning Journal, 20, 273-297.
Forges, F., & von Stengel, B. (2002). Computationally efficient coordination in game trees (Technical Report LSE-CDAM-2002-02). London School of Economics and Political Science, Centre for Discrete and Applicable Mathematics.
Foster, D., & Vohra, R. (1997). Regret in the on-line decision problem. Games and Economic Behavior, 21, 40-55.
Gordon, G., Greenwald, A., & Marks, C. (2008). No-regret learning in convex games (Technical Report CS-08-03). Brown University, Department of Computer Science.
Gordon, G., Greenwald, A., Marks, C., & Zinkevich, M. (2007). No-regret learning in convex games (Technical Report CS-07-10). Brown University, Department of Computer Science.
Gordon, G. J. (1999a). Approximate solutions to Markov decision processes. Doctoral dissertation, Carnegie Mellon University.
Gordon, G. J. (1999b). Regret bounds for prediction problems. Proceedings of the ACM Conference on Computational Learning Theory.
Gordon, G. J. (2006). No-regret algorithms for online convex programs. Advances in Neural Information Processing Systems (NIPS), 19.
Greenwald, A., & Jafari, A. (2003). A general class of no-regret algorithms and game-theoretic equilibria. Proceedings of the 2003 Computational Learning Theory Conference (pp. 1-11).
Hazan, E., & Kale, S. (2007). Computational equivalence of fixed points and no regret algorithms, and convergence to equilibria. Advances in Neural Information Processing Systems (NIPS), 20.
Kalai, A., & Vempala, S. (2003). Efficient algorithms for online decision problems. Proceedings of the 16th Annual Conference on Learning Theory.
Marks, C. (2008). No-regret learning and game-theoretic equilibria. Ph.D. dissertation, Department of Computer Science, Brown University, Providence, RI.
Shalev-Shwartz, S., & Singer, Y. (2006). Convex repeated games and Fenchel duality. Advances in Neural Information Processing Systems (NIPS), 19.
Stoltz, G., & Lugosi, G. (2007). Learning correlated equilibria in games with compact sets of strategies. Games and Economic Behavior, 59, 187-208.
Zinkevich, M. (2003). Online convex programming and generalized infinitesimal gradient ascent. Proceedings of the 20th International Conference on Machine Learning.