Computing Equilibria for Congestion Games with (Im)perfect Information René Beier Artur Czumaj Piotr Krysta Berthold Vöcking Abstract We study algorithmic questions concerning a basic microeconomic congestion game in which there is a single provider that offers a service to a set of potential customers. Each customer has a particular demand of service, and the behavior of the customers is determined by utility functions that are non-increasing in the congestion. Customers decide whether to join or leave the service based on the experienced congestion and the offered prices. Following standard game theory, we assume each customer behaves in the most rational way. If the prices of service are fixed, then such customer behavior leads to a pure, not necessarily unique Nash equilibrium among the customers. In order to evaluate marketing strategies, the service provider is interested in estimating its revenue under the best and worst customer equilibria. We study the complexity of this problem under different models of information available to the provider. We obtain an FPRAS for the equilibria problem in the model with imperfect information, although the problem with perfect information is inapproximable under worst case measures. In particular, the worst case complexity of the considered stochastic equilibria problems increases with the precision of the available knowledge. 1 Introduction We investigate computational aspects of a classical economic game in a market in which a single provider offers a service to a set of potential customers. We consider a selfish provider whose goal is to maximize its revenue. Each customer is assumed to have a particular demand of service, and the quality of service decreases with the congestion, i.e., the sum of the served demands. We consider a model in which the customers do not cooperate with each other and the customer behavior is determined by utility functions.
The utility functions are assumed to be non-increasing in the congestion, and they specify whether a customer joins or leaves the service based on the offered prices and the experienced congestion. If all prices are fixed, then such customer behavior leads to an equilibrium among the customers, commonly known as a Nash equilibrium. Therefore, from the perspective of the provider it is important to understand the behavior of the customers in Nash equilibria. In general, the provider may be interested in many possible scenarios. For example, if she is optimistic then she might aim at setting prices such that the revenue under the best customer equilibrium is maximized. A more pessimistic provider could be interested in maximizing the profit under worst equilibria, or in minimizing the gap between best and worst equilibria to minimize economic risk. The model. In this paper we consider the following game, which corresponds to the scenario described above. Suppose a provider wants to sell an on-line service to a set of $n$ potential customers. Every customer $i$ is assumed to have a demand of service $d_i$. For the time being, let us assume that the provider has committed to a particular price vector. Let $p_i$ denote the price offered to customer $i$. Furthermore, let $x_i$ denote an indicator variable that is $1$ if customer $i$ joins the service and $0$ otherwise. Then, the profit (revenue) of the provider is defined as $\sum_i p_i x_i$. Due to congestion effects, the quality of service is assumed to be non-increasing with the load, which is the allocated demand $\ell = \sum_i d_i x_i$. Therefore, the utility of a customer is a non-increasing function of the load. For every customer $i$, let $u_i$ denote the utility function of customer $i$. We first consider the classical model in which the provider has perfect knowledge of the behavior of all customers. We present a complete characterization of the complexity of computing optimal pricing strategies and of computing best and worst equilibria.
Basically, we show that most of these problems are inapproximable in the worst case but admit an "average-case FPAS." Our average case analysis covers general distributions for customer demands and utility thresholds. We generalize our analysis to robust equilibria, in which players change their strategies only when this promises a significant utility improvement. Copyright © 2004 by the Association for Computing Machinery, Inc. and the Society for Industrial and Applied Mathematics. All rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the publisher. For information, write to the Association for Computing Machinery, 1515 Broadway, New York, NY 10036 and the Society for Industrial and Applied Mathematics, 3600 University City Science Center, Philadelphia, PA 19104-2688. Research partially supported by NSF grant CCR-0105701, and by DFG grant Vo889/1-1. Max-Planck-Institut für Informatik, Stuhlsatzenhausweg 85, 66123 Saarbrücken, Germany. rbeier@mpi-sb.mpg.de. Department of Computer Science, New Jersey Institute of Technology, Newark, NJ 07102, USA. czumaj@cis.njit.edu. Dept. of Computer Science, Dortmund University, Baroper Str. 301, 44221 Dortmund, Germany. piotr.krysta@cs.uni-dortmund.de. Dept. of Computer Science, Dortmund University, Baroper Str. 301, 44221 Dortmund, Germany. berthold.voecking@cs.uni-dortmund.de. We extend our analysis to a more realistic model in which the provider has incomplete information. Following the game theoretic framework of Bayesian games introduced by Harsanyi, we assume that the provider is aware of probability distributions describing the behavior of the customers and aims at estimating its expected revenue under best and worst equilibria.
Somewhat counterintuitively, we obtain an FPRAS for the equilibria problem in the model with imperfect information. In addition, we study more robust equilibria concepts that avoid thrashing effects. We show that even under these robust equilibria, most of the considered problems remain inapproximable. Only some special, somewhat restrictive cases, e.g., if all customers have the same utility threshold, admit efficient approximation algorithms. Average case analysis. Despite the pseudopolynomial algorithm, most of the problems are inapproximable under worst case measures. In the economic literature one typically does not consider worst case scenarios (which almost never appear in real-life applications) but rather makes stochastic assumptions about the input. (FPAS or FPTAS stands for a fully polynomial time approximation scheme; FPRAS means a fully polynomial time randomized approximation scheme. Throughout this paper, the term "inapproximable" means that for any polynomial time computable function $f$, the considered problem cannot be approximated within a factor of $f(n)$ in polynomial time, unless P = NP. If a utility function is not strictly decreasing, we extend the definition of the threshold in the natural, obvious way.) The region of allocations in an equilibrium is described by the constraints

(1.1) $x_i = 0 \Rightarrow \ell + d_i > t_i$ for all $i$,
(1.2) $x_i = 1 \Rightarrow \ell \le t_i$ for all $i$,
(1.3) $x_i \in \{0, 1\}$ for all $i$,

where $\ell = \sum_j d_j x_j$ and $t_i$ denotes the utility threshold of customer $i$, i.e., the largest load at which customer $i$ is still willing to pay price $p_i$. Condition (1.1) is called a maximality constraint and (1.2) a threshold constraint. Observe that the problem of maximizing $\sum_i p_i x_i$ subject to the constraints above (the best equilibria problem) generalizes the maximum knapsack problem. The only difference is that in the knapsack problem all utility thresholds are identical. Of course, in the typical knapsack formulation the maximality constraint is not specified explicitly, as it is fulfilled implicitly by profit maximization. However, if customers have different thresholds then the maximality constraint is crucial. The problem of minimizing $\sum_i p_i x_i$ subject to (1.1)-(1.3) is called the worst equilibria problem. Congestion games. Assuming fixed prices, our game is called a congestion game in the literature. This class of games has been introduced by Rosenthal [27] and since then studied extensively (see, e.g., [14, 21, 22, 23, 31]).
In Rosenthal's model players choose a subset of available facilities, and each facility has an associated utility function. A payoff to a player from a facility is a function depending on the number of all players that choose this facility. Rosenthal shows that there always exists a pure strategy Nash equilibrium in his model. When prices are functions of the demands, we show the existence of a pseudopolynomial time algorithm for computing best and worst equilibria. We also show that this pseudopolynomial time algorithm cannot be transformed into an FPAS: in fact, we prove that even under uniform pricing functions (i.e., prices are equal to demands) the problem is inapproximable. Even for flat rate pricing (i.e., a fixed price, the same for all customers), the best possible approximation ratio is bounded away from one. Recall that the utility function of a customer maps a load value to the maximum price that the customer is willing to pay for the service under this load. In this way, the payoff of a customer is $u_i(\ell) - p_i$ in case he joins the service (i.e., $x_i = 1$), and is zero otherwise. We assume a classical game theoretic setting in which a customer is willing to join the service if the payoff is nonnegative, and he refuses the service otherwise. If the payoff is zero then the customer is ambivalent. An allocation of customers in which it is impossible for any single customer to improve his payoff is called a customer Nash equilibrium. In this paper we investigate two critical properties of customer Nash equilibria: which customers join the service, and what profit is obtained by the provider. The region of allocations in an equilibrium is described by the constraints (1.1)-(1.3). A model that is closer to ours has been studied by Milchtaich [21, 22]. In a so-called unweighted crowding (congestion) game, every player has a player-specific payoff function that depends on the strategy he plays and on the number of players that choose the same strategy.
In a weighted crowding (congestion) game each player has a weight, and the payoff function of a player depends on the sum of the weights of all players choosing the same strategy. In this terminology, our game is a weighted congestion game with player-specific payoff functions in which each player has two strategies. Milchtaich [21] shows that such games always admit pure Nash equilibria, i.e., there exists a vector $x \in \{0,1\}^n$ (usually many such vectors) that fulfills the maximality and the threshold constraints defined above. The equilibria defined by these constraints are called pure equilibria. If we relax the integrality and assume $x_i \in [0,1]$, then we obtain so-called mixed equilibria. In this study, we focus on pure equilibria. Assuming that the prices are fixed, we can simplify the description of pure customer equilibria as follows. Since the utility functions are non-increasing, we can define (utility) thresholds $t_i = \max\{\ell : u_i(\ell) \ge p_i\}$ and obtain the constraints (1.1)-(1.3) stated above. 1.1 Summary of new results The main theme of this paper is to understand the complexity of determining three important properties of the game described above: what is the best pricing scheme for the provider, which customers will join the service in customer best/worst Nash equilibria, and what is the provider's profit in best/worst Nash equilibria. Basic model. We begin with the classical model in which the provider has perfect knowledge about the behavior of all customers. We provide a complete characterization of the complexity of computing best and worst equilibria, and of computing optimal pricing strategies in this model. We give an FPAS to compute revenue maximizing prices, assuming that the provider can offer an individual price to every customer, independent of the customer's demand. Individual pricing is often considered unrealistic.
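Returning to the threshold formulation: constraints (1.1)-(1.2) can be checked mechanically, which makes the definition of a pure customer equilibrium concrete. The following Python sketch is ours, not the paper's; it verifies whether a given 0/1 allocation is a pure customer equilibrium.

```python
def is_pure_equilibrium(demands, thresholds, x):
    """Check whether allocation x (a 0/1 list) is a pure customer equilibrium.

    t_i is the largest load at which customer i still wants the service:
    (1.1)  x_i = 0  =>  load + d_i > t_i   (no outsider wants to join)
    (1.2)  x_i = 1  =>  load <= t_i        (no insider wants to leave)
    """
    load = sum(d for d, xi in zip(demands, x) if xi)
    for d, t, xi in zip(demands, thresholds, x):
        if xi == 1 and load > t:        # insider would improve by leaving
            return False
        if xi == 0 and load + d <= t:   # outsider would (weakly) improve by joining
            return False
    return True
```

Ties follow the maximality convention of constraint (1.1): a customer who would be ambivalent after joining is required to join.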
Economic literature usually assumes that prices are functions of the demand. (In our considerations related to knapsacks we will use the term weight instead of demand.) We now turn our attention to average case analysis, and our goal is to consider input distributions that are as general as possible. The pseudopolynomial algorithms for the equilibria problems are based on a reduction to the interval knapsack problem, i.e., the standard 0/1 knapsack problem with an additional lower bound on the weight of the knapsack packing. This problem cannot be approximated in its general form, as setting the interval length to some small number allows one to decide the hard subset-sum problem based on approximate solutions for the interval knapsack problem. We can prove, however, that there is an adaptive approximation scheme whose running time depends linearly on the reciprocal of the length of the interval. This adaptive algorithm is the basis for our average case analysis of the equilibria problem. In our average case analysis, we consider a very general model, where the demands and utility thresholds may have arbitrary, continuous probability distributions with bounded mean and density. (All our results can easily be generalized to discrete distributions as well.) Different demands and different thresholds can have different distributions. Our main contribution here is the design and analysis of an approximation scheme whose expected running time depends on the maximum density and the maximum expectation over all probability distributions.
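As background for the reduction just described, the interval knapsack problem (0/1 knapsack with both a lower and an upper bound on the packed weight) admits the standard weight-indexed dynamic program. This is a sketch under our own naming, assuming positive integer weights:

```python
def interval_knapsack(weights, profits, lo, hi):
    """Max total profit over subsets whose total weight lies in [lo, hi].

    Pseudopolynomial: O(n * hi) time and O(hi) space.
    Returns None if no subset has weight inside the interval.
    """
    NEG = float("-inf")
    dp = [NEG] * (hi + 1)   # dp[w] = best profit with total weight exactly w
    dp[0] = 0
    for w, p in zip(weights, profits):
        # Iterate weights downwards so each item is used at most once.
        for tot in range(hi, w - 1, -1):
            if dp[tot - w] != NEG and dp[tot - w] + p > dp[tot]:
                dp[tot] = dp[tot - w] + p
    best = max(dp[lo:hi + 1])
    return None if best == NEG else best
```

The worst equilibria problem corresponds to minimization rather than maximization, which is handled analogously with a min-table; note this sketch does not model the maximality constraint, which the paper's reduction encodes separately.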
To give a simple example of the performance of this scheme: if all demands and all utility thresholds are sampled uniformly at random, then we obtain an "average-case FPAS." Observe that under such a distribution, the ratio between the smallest and largest input numbers is only polynomial. This, however, by no means implies that the pseudopolynomial time algorithm can be used to obtain an approximate solution. The crucial property for obtaining the average-case FPAS is that there are no small intervals in the interval knapsack problem to which we reduce the equilibria problems. Models with incomplete information. Finally, following the economic literature, we turn our attention to a more realistic model in which the provider has incomplete information about the customer behavior. Following the game theoretic framework of Harsanyi [12, 13], we assume that the provider is aware of probability distributions describing the customer behavior and aims at estimating its expected revenue under best and worst equilibria. At first view, this Bayesian variant of the equilibria problem might seem even harder than the problem under perfect information. However, we can prove the existence of an FPRAS (fully polynomial time randomized approximation scheme) under some mild assumptions about the imperfect information. (Notice that the running time bound does not refer to the random input anymore but to the coin flips made by the algorithm.) In principle, our analysis covers general probability distributions with bounded domain. If the distributions are "well dispersing" then we achieve polynomial running time. The more "concentrated" the distributions are, the larger the running time becomes. Thus, somewhat surprisingly, the complexity of the equilibria problems increases with the precision of the available knowledge.
1.2 Previous and related work Previous work in (economic) game theory that considered models the same as or similar to ours has already been discussed in the introduction under "congestion games." From the computational viewpoint there are results on the complexity of computing equilibria in a parallel link game [6, 7] and in normal form games and Bayesian games [3] (see also the references therein). These papers mostly prove that computing equilibria with certain properties is hard. The complexity of equilibria in a market exchange game was studied in [4, 5]. For example, Feldmann et al. [7] have given an FPAS for computing the best equilibria in a simple parallel link game. We now briefly discuss some previous work on algorithmic aspects of games with incomplete information. A classical model of auction design is given by a seller who wants to sell a single (or multiple) item(s) to potential customers. Each customer has a private valuation of the item. The seller only knows a probability distribution over the possible valuations, and her goal is to design a strategy (auction) determining which customer gets the item and for what price, in order to maximize her expected revenue. This simple model does not consider congestion issues. Ronen [25] has given an approximate auction in this model. These results also hold in a Bayesian context. No deterministic polynomial time auction from a natural class of auctions can do better than this approximation factor; see Ronen and Saberi [26]. Several studies, e.g., [1, 2, 10], deal with average case analysis for the knapsack problem. In contrast to our work, these studies assume that profits are independent of the weights (i.e., demands in our terminology), and they study exact algorithms. In our analysis, profits are functions of weights.
In this case, one cannot hope to get exact algorithms; this is related to breaking so-called knapsack crypto-systems, whose hardness is based on the hardness of random subset-sum instances (i.e., knapsack with profits equal to weights) [15]. In the stochastic knapsack problem we are given items with random weights and associated profits, together with the knapsack capacity. The objective is to find a knapsack packing that maximizes the expected profit, or that maximizes the profit under an additionally given overflow probability. Kleinberg et al. [16] have given an approximation algorithm for general independent distributions. Goel and Indyk [9] improve on this result by giving approximation schemes for particular distributions, e.g., exponential or Bernoulli. Translating our result into this setting, we do not compute a particular knapsack packing but the expected profit of optimal knapsack packings. Other differences are that we assume that profits depend on weights (demands), and that we consider harder problems which, in contrast to the standard knapsack problem, cannot be approximated within any factor if the weights are adversarial. More detailed references, especially to the related work in economics and game theory, were either discussed before in the introduction or will appear further in the text. Outline. Sections 2 through 7 describe our results in more detail, and Section 8 contains the most interesting proofs. 2 An FPAS for individual pricing We first consider the simplest (but also the least realistic) model in which the provider can offer an individual price to every customer. Prices need not depend on the demand of the customer; that is, the provider is allowed to use an arbitrary price vector $p = (p_1, \dots, p_n)$. Under such assumptions, the maximality constraint in the description of the customer equilibria can be avoided by assigning very high prices to those customers that are unwanted.
As a consequence, for the optimal individual price vector there is only one equilibrium, and the provider does not have to distinguish between the best and worst equilibria. We assume that the utility functions are known and can be evaluated in constant time. In such a model we can provide a strong algorithmic result, whose proof is omitted in this extended abstract.

THEOREM 2.1. Under individual pricing there is a price vector that maximizes the provider's profit and uniquely determines the behavior of each customer. Furthermore, there is an FPAS for approximating such an optimal price vector and for approximating the optimal equilibrium.

Individual pricing, however, is usually considered to be unrealistic for an on-line service provider. Most studies in economics deal with pricing schemes where prices are monotone functions of the demand; see, e.g., [11, 18, 24, 28, 29, 30].

3 Pseudopolynomial algorithms for more realistic pricing Economic literature focuses mostly on the following classes of pricing schemes: uniform pricing, a function of the form $p(d) = \alpha d$, with $\alpha > 0$ and $d$ being the demand of a customer; linear pricing, $p(d) = \alpha d + \beta$ with constants $\alpha$ and $\beta$; block pricing, where $p$ is a positive, piece-wise linear function; and flat rate, some constant function. In the context of an on-line service provider, block pricing seems to be the most relevant pricing scheme. In practice, customers are offered a small collection of so-called pricing plans (i.e., linear price functions) from which each customer can choose the best offer w.r.t. his demand. Previous work in economics (see, e.g., [11, 17, 18, 20, 24, 28, 29, 30]) focuses mostly on explicit analytical descriptions of equilibria. With this knowledge, pricing schemes are then computed basically by enumerating all possible pricing schemes from a given class. However, the computation of the equilibria values of our congestion game is NP-hard and hence does not have such a closed form representation. Consequently, the provider needs more advanced algorithmic solutions in order to compute the value of worst and best equilibria. In fact, we can prove that there is a pseudopolynomial time algorithm for computing best and worst customer equilibria under arbitrary price vectors. Recall that prices translate into thresholds; that is, the input consists of the thresholds $t_1, \dots, t_n$, demands $d_1, \dots, d_n$, and prices $p_1, \dots, p_n$. All these values are assumed to be positive integers.

THEOREM 3.1. There is a pseudopolynomial time algorithm for computing the best (worst) customer equilibria for any given price vector.

This algorithm is based on a similarity between the equilibria problem and the knapsack problem. In fact, we reduce the equilibria problem to a variant of the knapsack problem that we call interval knapsack, and we then show that the pseudopolynomial time algorithm for knapsack also works for the interval knapsack problem. The theorem is proved in Section 8.2. The theorem has immediate consequences for the calculation of optimal pricing schemes for best or worst customer equilibria. For example, consider block pricing with a constant number of pricing plans. There is only a pseudopolynomial number of ways to choose a constant number of different piecewise linear functions. Thus, all block pricing schemes can be checked in pseudopolynomial time to determine the best one.

4 Inapproximability of equilibria and of pricing Given a pseudopolynomial time algorithm for a problem, it is often possible to transform it into an FPAS. Unfortunately, this does not work for computing best and worst equilibria. We show that these problems are essentially inapproximable, even under uniform pricing, which basically means that prices are equal to demands.

THEOREM 4.1. The best and worst customer equilibria problems are inapproximable under uniform pricing.

The proof of Theorem 4.1 can be found in Section 8.1. This negative result implies that not only is the best (worst) equilibria problem inapproximable for an arbitrary uniform pricing function, but so is the problem of computing best (worst) equilibria when the optimal uniform pricing function is given. To see this, suppose the input of the uniform pricing problem consists of integers $a_1, \dots, a_n$ and demands $d_i = a_i$. A uniform pricing is a pricing function $p(d) = \alpha d$ for some $\alpha > 0$. Let the utility functions be such that a customer accepts the service exactly up to his threshold; then the optimal uniform pricing scheme is immediate. An approximation of the revenue that can be obtained by these prices under best (worst) equilibria, however, would also give an approximation of the value of the underlying best (worst) equilibria itself and, hence, is not possible.

COROLLARY 4.1. The best and worst customer equilibria problems are inapproximable even when the optimal uniform pricing function is given.

Still, this inapproximability result leaves the hope that one might manage to compute the optimal pricing function without being able to approximate the revenue it yields. In fact, in the above example the computation of the best pricing scheme itself was trivial. The following theorem, whose proof is omitted, shows that not only computing the optimal revenue but also computing the prices that lead to this revenue is difficult in itself. We restrict our attention to the simplest possible pricing scheme, flat rates, and the best equilibria problem.

THEOREM 4.2. For a flat rate $c$, let $P(c)$ be the profit of the best equilibrium under this flat rate, and let $P^* = \max_c P(c)$. A flat rate $c$ with $P(c) = P^*$ cannot be computed in polynomial time, unless P = NP.

5 (In)Approximability of robust equilibria All inapproximability results above rely on a very careful and adversarial choice of the utility thresholds, such that even a slight change of thresholds leads to thrashing effects.
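As a baseline for these hardness results, tiny instances can still be solved exactly by exhaustive search over all allocations. The sketch below (our own naming, exponential in the number of customers, for intuition only) returns the best and worst equilibrium profits:

```python
from itertools import product

def best_and_worst_equilibria(demands, thresholds, prices):
    """Scan all 0/1 allocations, keep those satisfying the equilibrium
    constraints (1.1)-(1.2), and return (max profit, min profit) over them.
    Returns (None, None) if no equilibrium exists. Exponential in n."""
    n = len(demands)
    best, worst = None, None
    for x in product((0, 1), repeat=n):
        load = sum(d for d, xi in zip(demands, x) if xi)
        is_eq = all((xi == 1 and load <= t) or (xi == 0 and load + d > t)
                    for d, t, xi in zip(demands, thresholds, x))
        if is_eq:
            profit = sum(p for p, xi in zip(prices, x) if xi)
            best = profit if best is None else max(best, profit)
            worst = profit if worst is None else min(worst, profit)
    return best, worst
```

For example, with demands (1, 2), thresholds (1, 2), and uniform prices equal to demands, the two single-customer allocations are exactly the equilibria, so the best profit is 2 and the worst is 1, illustrating the gap the provider cares about.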
This raises the question of how the complexity of the problem changes if we consider more "robust" equilibria. It is very reasonable to assume that customers move to a different strategy only if this promises a significant improvement. This leads to $\varepsilon$-robust equilibria, in which, for some $\varepsilon \ge 0$, a customer changes his strategy only if this improves his payoff significantly, as quantified by $\varepsilon$. What are the effects of such a relaxation on the approximability of the equilibria problems? It turns out that the problems basically remain inapproximable even under robust equilibria. Only some special, somewhat artificial cases, in which all customers have the same threshold, become approximable.

THEOREM 5.1. For any $\varepsilon \ge 0$, best and worst $\varepsilon$-robust equilibria are inapproximable (even under uniform pricing).

THEOREM 5.2. If all customers have the same utility threshold, then the worst $\varepsilon$-robust equilibria problem admits an FPAS if $\varepsilon > 0$ and is inapproximable if $\varepsilon = 0$ (assuming arbitrary choices of prices and demands).

The proof of Theorem 5.1 (omitted here) can be seen as a "robust" variant of the proof of Theorem 4.1. The proof of Theorem 5.2 relies on an analysis of the min-max knapsack problem, i.e., the problem of packing a knapsack with minimum profit such that the packing is maximal, i.e., no item can be added to the knapsack without exceeding its capacity. We give a complete characterization of the complexity of this problem, which might be of independent interest. Before, it was only known that the min-max knapsack problem is NP-hard [19]. Despite the similarity of the min-max knapsack problem to the standard knapsack problem, we prove that the min-max knapsack problem is inapproximable, although the latter problem admits an FPAS. More precisely, our results are the following.

THEOREM 5.3. Consider the min-max knapsack problem. (a) If weights and profits are arbitrary, then the problem is inapproximable. If weights and profits are equal, then the problem admits an FPAS. (b) For any $\varepsilon > 0$, the $\varepsilon$-robust min-max knapsack problem admits an FPAS.

6 Adaptive algorithms and average case analysis We continue our analysis of the interval knapsack problem. In its general form the interval knapsack problem cannot be approximated, because any algorithm that approximates profits in polynomial time for arbitrarily small interval lengths could be used to decide subset-sum instances. If the specified interval is large, however, then there is no direct relation to the hardness of subset-sum. In fact, we can give an approximation algorithm for interval knapsack whose running time adapts to the length of the interval. The proof of the next theorem is in Section 8.3.

THEOREM 6.1. For any $\varepsilon > 0$, there is a $(1+\varepsilon)$-approximation algorithm for the interval knapsack problem with interval boundaries $B_1 < B_2$ whose running time is polynomial in the input size, $1/\varepsilon$, and the reciprocal of the relative interval length.

This result is a key to our average case analysis of the equilibrium problem for general probability distributions. We consider the scenario in which demands and thresholds have independent continuous probability distributions with density functions $f_i$ and distribution functions $F_i$, respectively. Prices are assumed to be a function of the demands. We assume that the pricing function is non-negative, non-decreasing, and concave. This is a very natural assumption, since rational customers can always achieve concave pricing functions by splitting their demands. For this reason, basically all pricing schemes considered in the economic literature (see Section 3) are concave. Furthermore, we assume that the pricing function can be evaluated in constant time. The running time of any efficient average case algorithm for the equilibrium problem must depend on the input probability distributions; otherwise, one could bypass the inapproximability results presented in the previous sections by encoding worst case instances into distributions with very high density. In fact, the running time of our algorithm increases with the maximum density of the underlying probability distributions. Let $\phi$ denote the maximum density over all distributions, i.e., $\phi = \max_i \sup_x f_i(x)$, and let $\mu$ denote the maximum expectation over all these distributions. The proof of the next result can be found in Section 8.4.

THEOREM 6.2. Assume that the pricing function is a non-negative, non-decreasing, and concave function of the demands, and that $\phi$ and $\mu$ are as defined above. For any $\varepsilon > 0$, there is a $(1+\varepsilon)$-approximation algorithm for the best (worst) equilibria problem with expected running time polynomial in $n$, $1/\varepsilon$, $\phi$, and $\mu$.

The term $\phi\mu$ can be seen as a measure of the amount of randomness available. (Observe that one can always scale the distributions so that $\mu = 1$; this scaling does not change the value of $\phi\mu$.) If all input variables have identical uniform distributions then $\phi\mu = 1/2$, which is the smallest possible value. Next, suppose an adversary specifies possibly different Gaussian distributions (conditioned on positive values) for the utility thresholds and demands.

7 Algorithms for imperfect information Let us briefly summarize the results discussed so far. On the one hand, our analyses show complete inapproximability for the worst case instances.
On the other hand, we have a very general average case analysis that yields efficient algorithms for various input distributions. In this section, we show that the latter result is the key to obtaining efficient algorithms for estimating equilibria with imperfect information. We follow the framework of Harsanyi [12, 13], who described an elegant model for studying games in which the players have incomplete information. The so-called Harsanyi transformation, one of the ideas for which he was awarded the Nobel Prize together with Nash and Selten, converts games with incomplete information into games with complete but imperfect information. Harsanyi considers players who have different utilities as being of different types. He proposed that such games be modeled by having Nature move first and choose each player's type according to a probability distribution. Players initially know only their own type and the distributions for the other players, but not the outcome of the random choices by Nature. The players aim at maximizing their expected payoff. In our setting, the Harsanyi framework means that there is only one player that needs to compute its strategy based on imperfect information. Indeed, in realistic scenarios it is reasonable to assume that the provider does not know exactly how the customers will behave under a given price vector. In contrast, the customers can be assumed to converge to a Nash equilibrium by best response strategies without knowing the other players' utilities (this follows from [21, 22]). If prices are fixed, then the customer behavior is determined by demands and utility thresholds; these parameters define the Harsanyi types. The on-line provider does not know the types of the customers, but she knows probability distributions over these types. In order to maximize her expected profit, she is interested in estimating the expected revenue under best and worst equilibria.
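The convergence of customers to an equilibrium by best responses can be simulated directly in our two-strategy game. A sketch under the threshold formulation follows; the naming and the round cap are our own safeguard, not part of the cited convergence results for weighted congestion games with player-specific payoffs [21, 22]:

```python
def best_response_dynamics(demands, thresholds, max_rounds=10_000):
    """Repeatedly let some customer switch to a better response until no
    one wants to move. Returns the 0/1 allocation reached."""
    n = len(demands)
    x = [0] * n
    for _ in range(max_rounds):
        load = sum(d for d, xi in zip(demands, x) if xi)
        for i in range(n):
            if x[i] == 0 and load + demands[i] <= thresholds[i]:
                x[i] = 1    # joining is a (weakly) improving move
                break
            if x[i] == 1 and load > thresholds[i]:
                x[i] = 0    # load exceeds the threshold: leave
                break
        else:
            return x        # no customer wants to move: equilibrium
    raise RuntimeError("no equilibrium reached within max_rounds")
```

Note that only one customer moves per round; customers observe the current load and react to it, without needing to know the other customers' thresholds.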
Somewhat counter-intuitively, we prove that, if the information available to the provider is not too "precise," then the stochastic equilibria problems obtained by the Harsanyi transformation have smaller worst case complexity than their deterministic counterparts. Let the demands and utility thresholds have independent continuous probability distributions with given distribution functions. Prices are defined by a non-negative, non-decreasing, concave function of the demands. We need two further technical assumptions: (1) the domains of the probability distributions are bounded, and (2) the market for the offered service is non-empty with probability bounded away from zero.

THEOREM 7.1. Suppose assumptions (1) and (2) are satisfied and the degree of precision is bounded polynomially. Then there is an FPRAS for the best (worst) equilibria problem with imperfect information.

Observe that, by standard techniques, we can achieve a deterministic polynomial running time instead of time bounds that hold only in expectation. Then only the approximation guarantee depends on the random coin flips made by the algorithm.

8 Algorithms and proofs

8.1 Inapproximability We prove here Theorem 4.1. Assume uniform pricing with some fixed price. We show that for every choice of the price there exist demands and utility functions such that the value of the maximum (minimum) profit customer equilibria cannot be approximated. Instead of utility functions we directly specify thresholds; the utility functions can then be chosen accordingly. We use a reduction from the NP-hard subset-sum problem [8]: given a set of numbers and a target value, decide whether some subset sums exactly to the target. W.l.o.g., we assume that all input numbers are even (otherwise we multiply all numbers and the target by 2).

Maximization: Consider the following max profit customer equilibria problem with profit equal to demand. There are regular customers and two special customers: a "large" customer and a "giant" customer. The demands and thresholds are given in Table 1. The remaining parameters can be chosen to satisfy the required constraints.
The relevant term corresponds to the maximum expectation of any of these values, together with the minimum standard deviation over all these distributions; the running time is polynomial in these quantities. Hence, the more dispersed the distributions are, the smaller the running time is. We notice that these results are not directly related to the average case analyses of the knapsack problem in [1, 2, 10]; for details, see our discussion in Section 1.2.

The first assumption means that the domains of the probability distributions of the demands and utility thresholds are bounded. This property holds for uniform distributions. For other distributions, one might need to cut the tails at a position where the probability becomes negligibly small. The second assumption is mild and seems natural as well. It says that the market for the offered service is non-empty with at least some fixed probability. Now, to obtain an estimate of the expected revenue, we sample from the distributions and use the adaptive algorithm from Theorem 6.2 to calculate the value of the best (worst) equilibria. Applying assumptions (1) and (2) in a Hoeffding bound yields that, for every desired accuracy, polynomially many samples suffice to approximate the expected revenue with high probability. Now, the key argument is that sampling generates average case, not worst case, instances. This way, we obtain an efficient randomized approximation algorithm with expected polynomial running time. In this scenario, the term should not be interpreted as an indicator of how much randomization is available, but rather as a measure of the degree of precision of the information available to the provider. If the term is constant, then the provider essentially knows nothing.
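The sampling scheme can be sketched as follows. This is a minimal illustration under our own simplifications, not the paper's algorithm: `sample_instance`, `equilibrium_revenue`, and the bound `revenue_bound` are hypothetical names, and we use Hoeffding's inequality for an additive error, whereas the paper obtains a relative guarantee via assumption (2).

```python
import math

def estimate_expected_revenue(sample_instance, equilibrium_revenue,
                              eps, delta, revenue_bound):
    """Monte Carlo estimator: draw enough random instances that the
    empirical mean revenue is within additive eps of the true expectation
    with probability at least 1 - delta.

    Hoeffding's bound for i.i.d. samples in [0, revenue_bound] gives
    m >= revenue_bound**2 * ln(2/delta) / (2 * eps**2).
    """
    m = math.ceil(revenue_bound ** 2 * math.log(2 / delta) / (2 * eps ** 2))
    total = sum(equilibrium_revenue(*sample_instance()) for _ in range(m))
    return total / m
```

The key point from the text survives in this sketch: each call to `equilibrium_revenue` sees a freshly sampled, hence average-case, instance, which is exactly where the expected-polynomial-time algorithm of Theorem 6.2 can be plugged in.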
Full knowledge corresponds to the opposite extreme. If the degree of precision is bounded by a polynomial, e.g., if all variables have possibly different uniform distributions, each of which covers a polynomial fraction of the domain, then we obtain an FPRAS (fully polynomial time randomized approximation scheme).

Proof. Let us recall the formulation of the equilibria problem: our goal is to maximize (minimize) the profit under constraints (1.1), (1.2), and (1.3). A solution is feasible iff the following two conditions hold for every customer: (i) Maximality condition: if the customer has not joined, then joining would push the load beyond her threshold; and (ii) Threshold condition: if the customer has joined, then the load does not exceed her threshold. The Maximality and Threshold conditions partition the non-negative numbers, for each customer, into 3 disjoint intervals, and any feasible solution must fix the customer's decision in one of three ways, depending on which interval its load falls into. We overlay these intervals for all customers to partition the range of possible loads into so-called elementary intervals (each additional customer can divide at most 2 existing elementary intervals). Note that elementary intervals can consist of just one number, and they can be open or closed on either side independently. Each elementary interval partitions the customers into three sets: IN, FREE, and OUT. If the load of a solution in equilibrium falls into the elementary interval, then no customer in OUT has joined the service, all customers in IN must have joined the service, and there are no restrictions for the customers in FREE. Therefore, we partition the solution space for the equilibria problem according to the load of potential solutions.

Table 1: Best equilibria instance and solution.

Minimization: We construct a worst equilibria instance with regular customers whose demands and thresholds are as in the maximization construction, and a single special customer with suitably chosen demand and threshold.
If there exists a solution to the subset-sum problem, then there is a solution to the worst equilibria problem with the corresponding load. If there is no such solution, then any solution containing only regular customers has too small a load, violating the maximality constraint of the special customer. Thus any feasible solution must contain the special customer and has large load. Since the demand of the special customer can be chosen arbitrarily large, any approximation algorithm for the worst equilibria problem can be used to decide the subset-sum problem.

8.2 Reduction to the interval knapsack problem In this section we present a reduction from the customer equilibria problems to the interval knapsack problem. It will serve as the basis for solving the equilibria problem in pseudo-polynomial time and for approximating it. We define the interval knapsack problem to be the standard 0/1 knapsack problem with an additional lower bound on the weight of the knapsack. (A knapsack is feasible if its weight falls into the given interval.) The minimization version asks for the minimum profit knapsack. For each elementary interval we solve an interval knapsack problem over the FREE customers, with the IN customers packed obligatorily. The optimal solution to the equilibria problem is the maximum (minimum) over the solutions of the corresponding interval knapsack problems. For elementary intervals that are open we can use the corresponding closed interval, because at an open end the restriction fixing a variable to one value ends and the variable becomes free; therefore the solution stays feasible for the equilibria problem. The first and the last of the elementary intervals are trivial, since they correspond to the complete and the empty knapsack, which can be tested separately. Notice that the interval knapsack problem is inherently inapproximable, because for an interval of length zero it requires deciding the subset-sum problem.
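The interval knapsack problem just defined admits a straightforward pseudopolynomial dynamic program over exact reachable weights. The sketch below uses our own naming and is only a baseline; the adaptive algorithm of Section 8.3 is the refined version the paper actually analyzes.

```python
def interval_knapsack(items, lo, hi, maximize=True):
    """Solve the 0/1 knapsack variant whose feasible packings have total
    weight in [lo, hi]; return the best (max or min) profit, or None if
    no packing is feasible.

    items: list of (weight, profit) pairs with positive integer weights.
    States with weight above hi are pruned: since weights are positive,
    such a partial packing can never shrink back into the interval.
    """
    pick = max if maximize else min
    best = {0: 0}  # exact total weight -> best profit achieving it
    for w, p in items:
        nxt = dict(best)
        for tw, tp in best.items():
            if tw + w <= hi:
                cand = tp + p
                nxt[tw + w] = cand if tw + w not in nxt else pick(nxt[tw + w], cand)
        best = nxt
    feasible = [p for tw, p in best.items() if lo <= tw <= hi]
    return pick(feasible) if feasible else None
```

Note that calling this with `lo == hi` decides exact reachability of a weight, which is precisely the subset-sum connection behind the inapproximability remark above.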
On the other hand, the problem can be solved in pseudopolynomial time by a straightforward adaptation of the well-known dynamic programming approach for the standard knapsack problem. Thus, the following lemma directly implies Theorem 3.1.

LEMMA 8.1. A solution to the best (worst) customer equilibria problem can be obtained by solving at most linearly many interval knapsack problems.

8.3 Adaptive algorithms for the interval knapsack problem In this section, we present an algorithm for the interval knapsack problem that adapts to the length of the specified interval. Its running time increases linearly with the reciprocal of the interval length. In particular, we prove Theorem 6.1. We begin with the algorithmic ideas. We use a two-phase approach. In each phase, we solve a relaxed interval knapsack problem. The solution to the first problem might miss solutions close to the right boundary, and the solution to the second problem might miss solutions close to the left boundary. The union of both solutions covers all feasible solutions. The two relaxed problems are solved following the standard dynamic programming algorithm, with one additional trick.

We use one of the special customers as an indicator for deciding the subset-sum problem. If there exists a solution to the subset-sum problem, then the assignment given in Table 1 is a feasible solution to the best equilibria problem with the corresponding load. If there is no solution, then no feasible equilibrium can contain the indicator customer. First notice that there is no feasible solution containing both special customers, because their cumulative demand is strictly larger than the relevant threshold. Now assume some solution contains the indicator customer but not the other special customer.
Since the demands of the regular customers cannot sum up to exactly the target value, and moreover all of them are even, the solution has a load that violates the maximality condition of the other special customer. Hence any feasible equilibrium excludes the indicator customer and has bounded load. Since the special demands can be chosen arbitrarily large, any approximation algorithm for the best equilibria problem can be used to decide the subset-sum problem.

We now describe the dynamic program in more detail. We focus on the first subproblem; the other one can be solved analogously. The algorithm works with rounded weights and profits, where the values of the scaling factors will be set momentarily. A recursive formula, analogous to the one for the standard knapsack problem, defines the table entries for all non-trivial values, and the resulting entry is the approximation that we are looking for. If no sufficiently good solution is found, then we increase the accuracy by decreasing the scaling factor by a constant factor until we find an adequate solution. The maximum number of iterations we need to execute is bounded, which bounds the overall running time. This proves Theorem 6.1.

For the average case analysis in the next section, we need this result in a slightly different form, stated in terms of the maximum and minimum weight. If profits are defined by a non-decreasing, non-negative, concave function of the weights, then this yields the following.

COROLLARY 8.1. Suppose profits in the interval knapsack problem are defined by a non-negative, non-decreasing, concave function of weights. Then the algorithm above solves the interval knapsack problem within the claimed time bound.
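The rounding step used by the adaptive algorithm can be illustrated numerically. The helper below is our own sketch (with `delta` playing the role of the scaling factor): rounding each weight to a multiple of `delta`, up in phase 1 and down in phase 2, perturbs each item by strictly less than `delta`, so a packing of `n` items shifts by strictly less than `n * delta` along the weight dimension.

```python
import math

def round_weights(weights, delta, up=True):
    """Round each weight to a multiple of delta (up in phase 1, down in
    phase 2).  Each item contributes an absolute error strictly below
    delta, so total packings shift by less than len(weights) * delta."""
    rnd = math.ceil if up else math.floor
    return [rnd(w / delta) * delta for w in weights]
```

Choosing `delta` so that `n * delta` is at most the slack between the two halves of the interval guarantees that phase-1 rounding never pushes a packing from the left half past the right boundary, which is exactly the accuracy trade-off the analysis quantifies.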
8.4 Average-case analysis of the equilibria problem We now prove Theorem 6.2. We first use the reduction to interval knapsack problems described in Section 8.2; each interval knapsack problem is solved using the algorithm described in Section 8.3. Consider the set of boundary points of the elementary intervals. For analytical purposes, we consider the intervals spanned by pairs of boundary points for which there exists a corresponding evaluation point. Note that each boundary point is determined by non-negative random variables with given density functions, with an upper bound on all the expectations and an upper bound on the values of all density functions. We need to modify the algorithm as follows. If some sampled value is negative, then our algorithm takes into account only intervals of the appropriate restricted form. Observe that infeasible subsets have no influence on the outcome. Subsets exceeding the upper boundary are mapped to an evaluation point with maximal second coordinate; subsets below the lower boundary might be mapped to evaluation points in the valid range, but they are explicitly filtered out by an explicit check.

Rounding weights to multiples of the scaling factor introduces an absolute error strictly less than the scaling factor per item, and choosing the scaling factor appropriately keeps the total error within the required bound. The other scaling factor is chosen sufficiently large. Following a dynamic programming approach, we compute the values for all evaluation points of the three-dimensional table.
The trick is to expand the dynamic programming table by one dimension, used to represent an additional vector of artificial profits corresponding to the rounded weights. The original weights allow us to strictly enforce one of the two constraints, while the rounded weights (= additional profits) enable us to take care of the other constraint as well. In more detail, we divide the interval into two equally sized parts. Rounding the weights virtually shifts knapsack packings along the weight dimension. In phase 1 we round up, shifting packings from the left part to the right, while in phase 2 we round down. When calculating with sufficient accuracy, packings from the left part will not be shifted beyond the right boundary, and packings from the right part will not be shifted below the left boundary. Thus, the required accuracy depends on the length of the interval. Let Opt denote the value of an optimal solution to the interval knapsack problem with the given interval.
An approximate solution to the interval knapsack problem can be found by solving the two relaxed subproblems described above, each of which yields the optimal profit within its part of the interval. The size of the dynamic programming table is polynomial in the relevant parameters. The profit bound needs to be chosen sufficiently large so that all solutions are covered; a suitable multiple of Opt is sufficient. Furthermore, if the computed solution satisfies the accuracy condition, then it approximates Opt, which follows analogously to the calculation for the approximation factor of the standard knapsack problem. The problem that remains to be solved is how to determine the right choice of the profit bound. We do this in the form of a binary search. We start with an initial guess. If we do not find any feasible solution, then we immediately stop and report infeasibility. Otherwise, we take the solution whose profit is maximal among all computed solutions and test whether it meets the accuracy condition.

We assume thus that the set of evaluation intervals does not contain degenerate intervals, replacing them by suitable closed ones. Moreover, if we are given an interval, then the algorithm solves the interval knapsack problem only on two slightly enlarged subintervals; that is, the algorithm does not consider the given interval itself. Let us fix such an interval, and let a random variable denote the running time of our algorithm for the interval knapsack problem on this interval. If some sampled parameter is extremely small, then we use a brute force exact algorithm; this part contributes only a negligible amount to the expected running time, since it occurs with correspondingly small probability. In the same way the algorithm also handles the symmetric degenerate case. Otherwise, we call the adaptive algorithm from Section 8.3, whose running time is bounded by Corollary 8.1. Our algorithm considers only intervals of this canonical form.
Thus, for the analysis of the expected running time of this part, we can assume that the possible values that may appear are worst-case fixed numbers.

Proof. The expectation can be upper-bounded in the following way. Observe that the random variables involved have a discontinuity at one value, and that within the relevant range of values we can use the given functions as density functions. We distinguish two cases according to whether a certain probability is large or small: in the first case we bound the expectation directly, and in the second case the contribution is negligible.
This can only decrease the expectations involved, so the stated quantity is still an upper bound on these expectations, and the upper bound is not affected. This means that we can treat the two parts of the running time independently. The expected running time over all interval knapsack problems is, by linearity of expectation, bounded by the sum of the individual expectations.

LEMMA 8.2. Let two continuous random variables with given density functions be given. Then the expectation of the relevant derived random variable can be bounded as stated.

LEMMA 8.3. Let two non-negative, continuous random variables with given density functions be given, together with two fixed worst-case numbers. Define new random variables by truncating accordingly. Then the stated bound on the expectation holds.
After applying these lemmas we obtain the stated upper bound on the expected running time. Lemmas 8.2 and 8.3 above supply the required bounds on the individual expectations.

Proof. Using the linearity of expectation, and observing that for the relevant range of values the given functions serve as density functions, we change variables in the resulting integral to obtain the desired bound on the expectation.

Proof. Using the linearity of expectation we write the expectation as a sum of two terms. Since the relevant function is concave, we can apply the well-known Jensen inequality to the first term; applying linearity of expectation and Jensen's inequality once more to the second term, it suffices to bound the remaining expression, where we use the fact stated above. Putting the two bounds together, we obtain the claim.

References

[1] R. Beier and B. Vöcking. Random knapsack in expected polynomial time. 35th STOC, 2003.
[2] R. Beier and B. Vöcking. Probabilistic analysis of knapsack core algorithms. To appear in 15th SODA, 2004.
[3] V. Conitzer and T. Sandholm. Complexity results about Nash equilibria. TR CMU-CS-02-135, Carnegie Mellon University, 2002.
[4] X. Deng, C. H. Papadimitriou, and S. Safra. On the complexity of equilibria. 34th STOC, 67–71, 2002.
[5] N. R. Devanur, C. H. Papadimitriou, A. Saberi, and V. V. Vazirani. Market equilibrium via a primal-dual-type algorithm. 43rd FOCS, 2002.
[6] D. Fotakis, S. Kontogiannis, E. Koutsoupias, M. Mavronicolas, and P. Spirakis. The structure and complexity of Nash equilibria for a selfish routing game. 29th ICALP, 123–134, 2002.
[7] R. Feldmann, M. Gairing, T. Lücking, B. Monien, and M. Rode. Nashification and the coordination ratio for a selfish routing game. 30th ICALP, 2003.
[8] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, 1979.
[9] A. Goel and P. Indyk. Stochastic load balancing and related problems. 40th FOCS, 1999.
[10] A. Goldberg and A. Marchetti-Spaccamela. On finding the exact solution to a zero-one knapsack problem. 16th STOC, 359–368, 1984.
[11] M. Harris and A. Raviv. A theory of monopoly pricing schemes with demand uncertainty. The American Economic Review, 71(3): 347–365, 1981.
[12] J. C. Harsanyi. Games with incomplete information played by Bayesian players, I, II, III. Management Science, 14: 159–182, 320–332, 468–502, 1968.
[13] J. C. Harsanyi. Games with randomly disturbed payoffs. International Journal of Game Theory, 2: 1–23, 1973.
[14] R. Holzman and N. Law-Yone. Strong equilibrium in congestion games. Games and Economic Behavior, 21: 85–101, 1997.
[15] R. Impagliazzo and M. Naor. Efficient cryptographic schemes provably as secure as subset sum. J. Cryptology, 9(4): 199–216, 1996.
[16] J. Kleinberg, Y. Rabani, and É. Tardos. Allocating bandwidth for bursty connections. 29th STOC, 664–673, 1997.
[17] A. S. Kyle. Informed speculation with imperfect competition. The Review of Economic Studies, 56(3): 317–355, 1989.
[18] H. E. Leland and A. Meyer. Monopoly pricing structures with imperfect discrimination. The Bell J. Economics, 7(2): 449–462, 1976.
[19] D. Manlove.
Minimaximal and maximinimal optimization problems: a partial order-based approach. PhD thesis, University of Glasgow, Dept. of Computing Science, June 1998. http://www.dcs.gla.ac.uk/ davidm/publications.html.
[20] E. Maskin and J. Riley. Monopoly with incomplete information. The RAND Journal of Economics, 15(2): 171–196, 1984.
[21] I. Milchtaich. Congestion games with player-specific payoff functions. Games and Economic Behavior, 13: 111–124, 1996.
[22] I. Milchtaich. Crowding games are sequentially solvable. International Journal of Game Theory, 27: 501–509, 1998.
[23] D. Monderer and L. S. Shapley. Potential games. Games and Economic Behavior, 14: 124–143, 1996.
[24] W. Y. Oi. A Disneyland dilemma: Two-part tariffs for a Mickey Mouse monopoly. The Quarterly Journal of Economics, 85(1): 77–96, 1971.
[25] A. Ronen. On approximating optimal auctions. 3rd ACM Conference on Electronic Commerce (EC '01), 2001.
[26] A. Ronen and A. Saberi. On the hardness of optimal auctions. 43rd FOCS, 396–405, 2002.
[27] R. W. Rosenthal. A class of games possessing pure-strategy Nash equilibria. Int. Journal of Game Theory, 2: 65–67, 1973.
[28] R. Schmalensee. Monopolistic two-part pricing arrangements. The Bell J. of Economics, 12(2): 445–466, 1981.
[29] S. Scotchmer. Two-tier pricing of shared facilities in a free-entry equilibrium. The RAND J. Economics, 16: 456–472, 1985.
[30] R. W. Staiger and F. A. Wolak. Collusive pricing with capacity constraints in the presence of demand uncertainty. The RAND Journal of Economics, 23(2): 203–220, 1992.
[31] T. Ui. A Shapley value representation of potential games. Games and Economic Behavior, 31: 121–135, 2000.