Knows What It Knows: A Framework For Self-Aware Learning

Lihong Li lihong@cs.rutgers.edu
Michael L. Littman mlittman@cs.rutgers.edu
Thomas J. Walsh thomaswa@cs.rutgers.edu
Department of Computer Science, Rutgers University, Piscataway, NJ 08854 USA

Abstract

We introduce a learning framework that combines elements of the well-known PAC and mistake-bound models. The KWIK (knows what it knows) framework was designed particularly for its utility in learning settings where active exploration can impact the training examples the learner is exposed to, as is true in reinforcement-learning and active-learning problems. We catalog several KWIK-learnable classes and open problems.

Figure 1. A cost-vector navigation graph. (Edges are labeled with the binary cost vectors [1,1,1], [1,1,1], [1,1,1], [1,1,1], [0,0,1], [1,0,0], [0,0,1], [0,1,1], and [0,0,0].)

1. Motivation

At the core of recent reinforcement-learning algorithms that have polynomial sample-complexity guarantees (Kearns & Singh, 2002; Kearns & Koller, 1999; Kakade et al., 2003; Strehl et al., 2007) lies the idea of distinguishing between instances that have been learned with sufficient accuracy and those whose outputs are still unknown. The Rmax algorithm (Brafman & Tennenholtz, 2002), for example, estimates transition probabilities for each state–action–next-state triple of a Markov decision process (MDP). The estimates are made separately, as licensed by the Markov property, and the accuracy of each estimate is bounded using Hoeffding bounds. The algorithm explicitly distinguishes between probabilities that have been estimated accurately (known) and those for which more experience will be needed (unknown). By encouraging the agent to gather more experience in the unknown states, Rmax can guarantee a polynomial bound on the number of timesteps in which it has a non-near-optimal policy (Kakade, 2003).

In this paper, we make explicit the properties that are sufficient for a learning algorithm to be used in efficient exploration algorithms like Rmax. Roughly, the learning algorithm needs to make only accurate predictions, although it can opt out of predictions by saying "I don't know" (⊥). However, there must be a (polynomial) bound on the number of times the algorithm can respond ⊥. We call such a learning algorithm KWIK ("knows what it knows").

Section 2 provides a motivating example and sketches possible uses for KWIK algorithms. Section 3 defines the KWIK conditions more precisely and relates them to established models from learning theory. Sections 4 and 5 survey a set of hypothesis classes for which KWIK algorithms can be created.

2. A KWIK Example

Consider the simple navigation task in Figure 1. There is a set of nodes connected by edges, with the node on the left as the source and the dark one on the right as the sink. Each edge in the graph is associated with a binary cost vector of dimension d = 3, indicated in the figure. The cost of traversing an edge is the dot product of its cost vector with a fixed weight vector w = [1, 2, 0]. Assume that w is not known to the agent, but the graph topology and all cost vectors are. In each episode, the agent starts from the source and moves along some path to the sink. Each time it crosses an edge, the agent observes its true cost. Once the sink is reached, the next episode begins. The learning task is to take non-cheapest paths in as few episodes as possible; that is, the agent should learn to take a cheapest path in all but a small number of episodes. There are 3 distinct paths in this example. Given the w above, the top path has a total cost of 12, the middle 13, and the bottom 15.
A simple approach for this task is for the agent to assume edge costs are uniform and walk the shortest (middle) path to collect data. It would gather four examples of [1, 1, 1] → 3 and one of [1, 0, 0] → 1. Standard regression algorithms could use this dataset to find a ŵ that fits the data; here, ŵ = [1, 1, 1] is a natural choice. The learned weight vector could then be used to estimate costs for the three paths: 14 for the top, 13 for the middle, and 14 for the bottom. Using these estimates, the agent would continue to take the middle path forever, never realizing that it is not optimal.

In contrast, consider a learning algorithm that "knows what it knows". Instead of creating an approximate weight vector ŵ, it reasons about whether the cost of each edge can be deduced from the available data. The middle path, since we have seen all its edge costs, is definitely 13. The last edge of the bottom path has cost vector [0, 0, 0], so its cost must be zero. The penultimate edge of this path has cost vector [0, 1, 1], which is a linear combination of the two observed cost vectors, so, regardless of w,

w·[0, 1, 1] = w·([1, 1, 1] - [1, 0, 0]) = w·[1, 1, 1] - w·[1, 0, 0] = 3 - 1 = 2.

Thus, we know the bottom path's cost is 14, worse than the middle path. The vector [0, 0, 1] on the top path, however, is linearly independent of the cost vectors we have seen, so its cost is unconstrained. We know we don't know. A safe provisional assumption is that its cost is zero, which encourages the agent to try the top path in the second episode. It then observes [0, 0, 1] → 0, allowing it to solve for w and accurately predict the cost of any vector (the training data now spans R^d). It knows that it knows all the costs, and it can confidently take the optimal (top) path from then on.

In general, any algorithm that simply guesses a weight vector may never find the optimal path. An algorithm that uses linear algebra to distinguish known from unknown costs will, on each episode, either take an optimal route or discover the cost of a linearly independent cost vector. Thus, it can choose suboptimal paths at most d times.
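The reasoning in this example can be made concrete with a few lines of linear algebra. The sketch below (the class and method names are ours, not from the paper) predicts w·x whenever the query vector lies in the span of the previously observed cost vectors, and returns None, standing in for ⊥, otherwise:

```python
import numpy as np

class DeterministicLinearKWIK:
    """KWIK learner for noise-free linear functions f(x) = w . x.

    Answers w . x when x lies in the span of the observed inputs and
    returns None ("I don't know") otherwise. Each None yields a
    linearly independent example, so None is returned at most d times.
    """

    def __init__(self, d):
        self.X = np.zeros((0, d))  # observed inputs, one per row
        self.y = np.zeros(0)       # their observed outputs

    def predict(self, x):
        if len(self.y) > 0:
            # Express x as a combination of observed inputs, if possible:
            # solve X^T a = x; x is in their span iff the residual is zero.
            a, *_ = np.linalg.lstsq(self.X.T, x, rcond=None)
            if np.allclose(self.X.T @ a, x):
                # w . x = sum_i a_i (w . x_i), computable without knowing w
                return float(a @ self.y)
        return None

    def observe(self, x, y):
        self.X = np.vstack([self.X, x])
        self.y = np.append(self.y, y)
```

On the graph above, after observing [1, 1, 1] → 3 and [1, 0, 0] → 1, this learner predicts 2 for [0, 1, 1] but returns None for the linearly independent [0, 0, 1], exactly the distinction drawn in the text.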
The motivation for studying KWIK learning grew out of its use in multi-state sequential decision-making problems like this one. However, other machine-learning problems could benefit from this perspective and from the development of efficient algorithms. For instance, action selection in bandit problems (Fong, 1995) and associative bandit problems (Strehl et al., 2006), bandit problems with inputs, can both be addressed in the KWIK setting by choosing the better arm when both payoffs are known and an unknown arm otherwise. KWIK could also be a useful framework for studying active learning (Cohn et al., 1994) and anomaly detection (Lane & Brodley, 2003), both of which require some degree of reasoning about whether a recently presented input is predictable from previous examples. When mistakes are costly, as in utility-based data mining (Weiss & Tian, 2006) or learning robust control (Bagnell et al., 2001), having explicit predictions of certainty can be very useful for decision making.

3. Formal Definition

This section provides a formal definition of KWIK learning and its relationship to existing frameworks.

3.1. KWIK Definition

KWIK is an objective for supervised-learning algorithms. In particular, we begin with an input set X and an output set Y. The hypothesis class H consists of a set of functions from X to Y: H ⊆ (X → Y). The target function h* ∈ H is the source of training examples and is unknown to the learner. Note that this setting is "realizable": we assume the target function lies in the hypothesis class.

The protocol for a "run" is:

• The hypothesis class H and accuracy parameters ε and δ are known to both the learner and the environment.
• The environment selects a target function h* ∈ H adversarially.
• Repeat:
  – The environment selects an input x ∈ X adversarially and informs the learner.
  – The learner predicts an output ŷ ∈ Y ∪ {⊥}.
  – If ŷ ≠ ⊥, it should be accurate: |ŷ - y| ≤ ε, where y = h*(x). Otherwise, the entire run is considered a failure. The probability of a failed run must be bounded by δ.
  – Over a run, the total number of steps on which ŷ = ⊥ must be bounded by B(ε, δ), ideally polynomial in 1/ε, 1/δ, and parameters defining H. Note that this bound should hold even if h* ∉ H, although, obviously, outputs need not be accurate in this case.
  – If ŷ = ⊥, the learner makes an observation z ∈ Z of the output, where z = y in the deterministic case; z = 1 with probability y and z = 0 with probability 1 - y in the Bernoulli case; or z = y + η for a zero-mean random variable η in the additive-noise case.
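As a schematic summary (our pseudocode, not the paper's; None stands in for ⊥ and the environment interface is hypothetical), a run looks like the following:

```python
def run_kwik(learner, environment, num_steps, bound):
    """One KWIK run: the learner either makes an eps-accurate prediction
    or says "I don't know" (None) at most `bound` = B(eps, delta) times;
    the accuracy requirement must hold with probability 1 - delta."""
    num_unknown = 0
    for _ in range(num_steps):
        x = environment.next_input()     # chosen adversarially
        y_hat = learner.predict(x)
        if y_hat is None:                # "I don't know"
            num_unknown += 1
            assert num_unknown <= bound  # the B(eps, delta) budget
            z = environment.observe(x)   # exact, Bernoulli, or additive-noise label
            learner.observe(x, z)
        # otherwise |y_hat - h*(x)| <= eps must hold, or the run has failed
    return num_unknown
```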
3.2. Connection to PAC and MB

Figure 2. Relationship of KWIK to the existing PAC and MB (mistake-bound) frameworks in terms of how labels are provided for inputs.

Figure 2 illustrates the relationship of KWIK to the similar PAC (probably approximately correct) (Valiant, 1984) and MB (mistake-bound) (Littlestone, 1987) frameworks. In all three cases, a series of inputs (instances) is presented to the learner. Each input is depicted in the figure by a rectangular box. In the PAC model, the learner is provided with labels (correct outputs) for an initial sequence of inputs, depicted by cross-hatched boxes. After that point, the learner is responsible for producing accurate outputs (empty boxes) for all new inputs. Inputs are drawn from a fixed distribution.

In the MB model, the learner is expected to produce an output for every input. Labels are provided to the learner whenever it makes a mistake (filled boxes). Inputs are selected adversarially, so there is no bound on when the last mistake might be made. However, MB algorithms guarantee that the total number of mistakes is small, so the ratio of incorrect to correct outputs must go to zero asymptotically. Any MB algorithm for a hypothesis class can be used to provide a PAC algorithm for the same class, but not necessarily vice versa (Blum, 1994).

The KWIK model has elements of both PAC and MB. Like PAC, a KWIK algorithm is not allowed to make mistakes. Like MB, inputs to a KWIK algorithm are selected adversarially. Instead of bounding mistakes, a KWIK algorithm must have a bound on the number of label requests (⊥) it can make. By requiring performance to be independent of the distribution, a KWIK algorithm can be used in cases in which the input distribution depends in complex ways on the KWIK algorithm's behavior, as can happen in online or active-learning settings. And, like PAC and MB, the definition of KWIK algorithms can be naturally extended to enforce low computational complexity.

Note that any KWIK algorithm can be turned into an MB algorithm with the same bound by simply having the algorithm guess an output each time it is not certain. However, some hypothesis classes are exponentially harder to learn in the KWIK setting than in the MB setting. An example is conjunctions of n Boolean variables: MB algorithms can guess "false" when uncertain and learn with n + 1 mistakes, but a KWIK algorithm may need Ω(2^(n/2)) ⊥s to acquire the negative examples required to capture the target hypothesis.

3.3. Other Online Learning Models

The notion of allowing the learner to opt out of some inputs by returning ⊥ is not unique to KWIK. Several other authors have considered related models. For instance, sleeping experts (Freund et al., 1997) can respond ⊥ for some inputs, although they need not learn from these experiences. Learners in the settings of selective sampling (SS) (Cesa-Bianchi et al., 2006) and label-efficient prediction (Cesa-Bianchi et al., 2005) request labels randomly with a changing probability and achieve bounds on the expected number of mistakes and the expected number of label requests over a finite number of interactions. These algorithms cannot be used unmodified in the KWIK setting because KWIK algorithms must, with high probability, make no mistakes at any time. In the MB-like apple-tasting setting (Helmbold et al., 2000), the learner receives feedback asymmetrically, only when it predicts a particular label (a positive example, say), which conflates the request for a sample with the prediction of a particular outcome.

Open Problem 1 Is there a way of modifying SS algorithms to satisfy the KWIK criteria?

4. Some KWIK Learnable Classes

This section describes some hypothesis classes for which KWIK algorithms are available. It is not meant to be an exhaustive survey, but simply to provide a flavor for the properties of hypothesis classes that KWIK algorithms can exploit. The complexity of many learning problems has been characterized by defining the dimensionality of hypothesis classes (Angluin, 2004). No such definition has been found for the KWIK model, so we resort to enumerating examples of learnable classes.

Open Problem 2 Is there a way of characterizing the "dimension" of a hypothesis class in a way that can be used to derive KWIK bounds?

4.1. Memorization and Enumeration

We begin by describing the simplest and most general KWIK algorithms.

Algorithm 1 The memorization algorithm can learn any hypothesis class with input space X with a KWIK bound of |X|. This algorithm can be used when the input space X is finite and observations are noise free.

To achieve this bound, the algorithm simply keeps a mapping ĥ initialized to ĥ(x) = ⊥ for all x ∈ X. When the environment chooses an input x, the algorithm reports ĥ(x). If ĥ(x) = ⊥, the environment will provide a label y and the algorithm will assign ĥ(x) := y. It will only report ⊥ once for each input, so the KWIK bound is |X|.

Algorithm 2 The enumeration algorithm can learn any hypothesis class H with a KWIK bound of |H| - 1. This algorithm can be used when the hypothesis class H is finite and observations are noise free.

The algorithm keeps track of Ĥ, the version space; initially, Ĥ = H. Each time the environment provides an input x ∈ X, the algorithm computes L̂ = {h(x) | h ∈ Ĥ}. That is, it builds the set of all outputs for x over all hypotheses that have not yet been ruled out. If |L̂| = 0, the version space has been exhausted and the target hypothesis is not in the hypothesis class (h* ∉ H). If |L̂| = 1, all hypotheses left in Ĥ agree on the output for this input, and therefore the algorithm knows what the proper output must be: it returns ŷ ∈ L̂. On the other hand, if |L̂| > 1, two hypotheses in the version space disagree. In this case, the algorithm returns ⊥ and receives the true label y. It then computes an updated version space Ĥ' = {h | h ∈ Ĥ ∧ h(x) = y}. Because |L̂| > 1, there must be some h ∈ Ĥ such that h(x) ≠ y. Therefore, the new version space is smaller: |Ĥ'| ≤ |Ĥ| - 1. Before the next input is received, the version space is updated, Ĥ := Ĥ'. If |Ĥ| = 1 at any point, |L̂| = 1 from then on, and the algorithm will no longer return ⊥. Therefore, |H| - 1 is the maximum number of ⊥s the algorithm can return.
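Both algorithms are short enough to state in code. Below is a sketch of the enumeration algorithm (our naming; None stands for ⊥); the memorization algorithm is the same loop with a finite table in place of the hypothesis set. Example 1 below instantiates it with one hypothesis per instigator–peacemaker pair.

```python
class EnumerationKWIK:
    """KWIK learner for a finite hypothesis class with noise-free labels.

    Assumes the realizable setting (h* in H); returns None at most
    |H| - 1 times.
    """

    def __init__(self, hypotheses):
        self.H = list(hypotheses)   # version space: callables X -> Y

    def predict(self, x):
        outputs = {h(x) for h in self.H}
        if len(outputs) == 1:       # every surviving hypothesis agrees
            return outputs.pop()
        return None                 # disagreement: ask for the label

    def observe(self, x, y):
        # A label is only requested after a disagreement, so this step
        # removes at least one hypothesis, giving the |H| - 1 bound.
        self.H = [h for h in self.H if h(x) == y]
```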
Example 1 You own a bar that is frequented by a group of n patrons P. There is one patron f ∈ P who is an instigator: whenever a group of patrons G ⊆ P is in the bar, if f ∈ G, a fight will break out. However, there is another patron p ∈ P who is a peacemaker. If p is in the group, it will prevent a fight, even if f is present. You want to predict whether a fight will break out among a subset of patrons, initially without knowing the identities of f and p.

The input space is X = 2^P and the output space is Y = {fight, no fight}. The memorization algorithm achieves a KWIK bound of 2^n for this problem, since it may have to see each possible subset of patrons. However, the enumeration algorithm can KWIK-learn this hypothesis class with a bound of n(n - 1), since there is one hypothesis for each possible assignment of distinct patrons to f and p. Each time it reports ⊥, it is able to rule out at least one possible instigator–peacemaker combination.

4.2. Real-valued Functions

The previous two examples exploited the finiteness of the hypothesis class and the input space. KWIK bounds can also be achieved when these sets are infinite.

Figure 3. Schematic of the behavior of the planar-distance algorithm after the first (a), second (b), and third (c) time it returns ⊥.

Algorithm 3 Define X = R^2, Y = R, and H = {f | f: X → Y, c ∈ R^2, f(x) = ||x - c||_2}. That is, there is an unknown point c, and the target function maps each input point to its distance from c. The planar-distance algorithm can learn in this hypothesis class with a KWIK bound of 3.

The algorithm proceeds as follows, illustrated in Figure 3. Given the first input x, the algorithm says ⊥ and receives output y. Since y is the distance between x and some unknown point c, we know c must lie on the circle illustrated in Figure 3(a). (If y = 0, then c = x.) Call this input–output pair x_1, y_1. The algorithm will return y_1 for any future input that matches x_1. Otherwise, it will need to return ⊥ and will obtain a new input–output pair x, y, as shown in Figure 3(b); call them x_2 and y_2. Now, the algorithm can narrow the location of c down to the two hatch-marked points where the circles intersect. (The two circles must intersect, assuming the target hypothesis is in H. They can also intersect at a single point, if the circles are tangent, in which case the algorithm can identify c unambiguously.) In spite of this ambiguity, for any input on the dark diagonal line the algorithm can still return the correct distance: all of these points are equidistant from the two possibilities. Once an input x is received that is not collinear with x_1 and x_2, the algorithm returns ⊥ and obtains another x, y pair, as illustrated in Figure 3(c). Finally, since the three circles have at most one common intersection point, the algorithm can identify the location of c and use it to correctly answer any future query. Thus, three ⊥s suffice for KWIK learning in this setting. The algorithm generalizes to d-dimensional versions of the setting with a KWIK bound of d + 1.
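A sketch of the planar case follows (the implementation choices, names, and tolerance handling are ours; the paper gives only the geometric argument). It stores the observed pairs and answers a query only when every center consistent with the data yields the same distance:

```python
import numpy as np

def circle_intersections(c1, r1, c2, r2):
    """Intersection points of two circles in the plane (0, 1, or 2 points)."""
    d = np.linalg.norm(c2 - c1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []
    a = (r1**2 - r2**2 + d**2) / (2 * d)      # distance from c1 to the chord
    h = np.sqrt(max(r1**2 - a**2, 0.0))       # half-length of the chord
    mid = c1 + a * (c2 - c1) / d
    perp = np.array([c1[1] - c2[1], c2[0] - c1[0]]) / d
    return [mid] if h == 0 else [mid + h * perp, mid - h * perp]

class PlanarDistanceKWIK:
    """KWIK learner for f(x) = ||x - c|| with c unknown; at most 3 Nones."""

    def __init__(self):
        self.obs = []                            # list of (x_i, y_i) pairs

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        for xi, yi in self.obs:                  # exact repeat of a seen input
            if np.allclose(x, xi):
                return yi
        if len(self.obs) >= 2:
            (x1, y1), (x2, y2) = self.obs[0], self.obs[1]
            centers = [c for c in circle_intersections(x1, y1, x2, y2)
                       if all(np.isclose(np.linalg.norm(xj - c), yj)
                              for xj, yj in self.obs)]
            # answer only if all consistent centers give the same distance
            preds = {round(float(np.linalg.norm(x - c)), 9) for c in centers}
            if len(preds) == 1:
                return preds.pop()
        return None                              # "I don't know"

    def observe(self, x, y):
        self.obs.append((np.asarray(x, dtype=float), y))
```

After two observations, queries on the perpendicular bisector of the two candidate centers are answered; after a third, non-collinear observation, a single center survives the consistency filter and every query is answered.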
Algorithm 3 illustrates a number of important points. First, since learners have no control over their inputs in the KWIK setting, they must be robust to degenerate inputs, such as inputs that lie precisely on a line. Second, they can often return valid answers for some inputs even before they have learned the target function over the entire input space.

4.3. Noisy Observations

Up to this point, observations have been noise free. Next, we consider the simplest noisy KWIK-learning problem, in the Bernoulli case.

Algorithm 4 The coin-learning algorithm can accurately predict the probability that a biased coin will come up heads, given Bernoulli observations, with a KWIK bound of B(ε, δ) = (1/(2ε^2)) ln(2/δ) = O((1/ε^2) ln(1/δ)).

We have a biased coin whose unknown probability of heads is p. In the notation of this paper, |X| = 1, Y = [0, 1], and Z = {0, 1}. We want to learn an estimate p̂ that is accurate (|p̂ - p| ≤ ε) with high probability (at least 1 - δ).

If we could observe p directly, this problem would be trivial: say ⊥ once, observe p, and let p̂ = p. The KWIK bound is thus 1. Now, however, observations are noisy. Instead of observing p, we see either 1 (with probability p) or 0 (with probability 1 - p). Each time the algorithm says ⊥, it gets an independent trial that it can use to compute p̂ = (1/T) Σ_{t=1}^{T} z_t, where z_t ∈ Z is the t-th observation in T trials. The number of trials needed before we are 1 - δ certain our estimate is within ε can be computed using a Hoeffding bound:

T ≥ (1/(2ε^2)) ln(2/δ) = O((1/ε^2) ln(1/δ)).
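In code, the coin-learning algorithm is little more than a counter (a sketch with our naming; the constant is the Hoeffding threshold just derived):

```python
import math

class CoinLearningKWIK:
    """KWIK learner for a single Bernoulli parameter p.

    Says "I don't know" (None) for its first T trials, where
    T = ceil((1 / (2 eps^2)) ln(2 / delta)), then predicts the
    empirical mean, which is eps-accurate with probability >= 1 - delta.
    """

    def __init__(self, eps, delta):
        self.T = math.ceil(math.log(2.0 / delta) / (2.0 * eps**2))
        self.heads = 0.0
        self.count = 0

    def predict(self, x=None):        # |X| = 1, so the input is ignored
        if self.count < self.T:
            return None               # still collecting trials
        return self.heads / self.count

    def observe(self, x, z):          # z in {0, 1}; x is ignored
        self.heads += z
        self.count += 1
```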
Algorithm 5 Define X = R^d, Y = R, and H = {f | f: X → Y, w ∈ R^d, f(x) = w·x}. That is, H is the set of linear functions on d variables. Given additive noise, the noisy linear-regression algorithm can learn in H with a KWIK bound of B(ε, δ) = Õ(d^3/ε^4), where Õ(·) suppresses log factors.

The deterministic case was described in Section 2, with a bound of d. Here, the algorithm must be cautious, averaging over the noisy samples to make predictions accurately. This problem was solved by Strehl and Littman (2008). The algorithm uses the least-squares estimate of the weight vector for inputs with high certainty. Certainty is measured by two terms representing (1) the number and proximity of previous samples to the current point and (2) the appropriateness of the previous samples for making a least-squares estimate. When certainty is low by either measure, the algorithm reports ⊥ and observes a noisy sample of the linear function.

Here, solving a noisy version of a problem resulted in an increased KWIK bound (from d to essentially d^3). Note that the deterministic Algorithm 3 also has a bound of d, but no bound has been found for its stochastic case.

Open Problem 3 Is there a general scheme for taking a KWIK algorithm for a deterministic class and updating it to work in the presence of noise?

5. Combining KWIK Learners

This section provides examples of how KWIK learners can be combined to provide learning guarantees for more complex hypothesis classes.

Algorithm 6 Let F = (X → Y) be the set of functions mapping input set X to output set Y. Let H_1, ..., H_k be a set of KWIK-learnable hypothesis classes with bounds of B_1(ε, δ), ..., B_k(ε, δ), where H_i ⊆ F for all 1 ≤ i ≤ k. That is, all the hypothesis classes share the same input/output sets. The union algorithm can learn the joint hypothesis class H = ∪_i H_i with a KWIK bound of B(ε, δ) = (k - 1) + Σ_i B_i(ε, δ).

The union algorithm is like a higher-level version of the enumeration algorithm (Algorithm 2) and applies in the deterministic setting. It maintains a set of active algorithms Â, one for each hypothesis class; initially, Â = {1, ..., k}. Given an input x, the union algorithm queries each algorithm i ∈ Â to obtain a prediction ŷ_i from each active algorithm. Let L̂ = {ŷ_i | i ∈ Â}. If ⊥ ∈ L̂, the union algorithm reports ⊥ and obtains the correct output y. Any algorithm i for which ŷ_i = ⊥ is then sent the correct output y to allow it to learn. If |L̂| > 1, then there is disagreement among the subalgorithms. The union algorithm reports ⊥ in this case because at least one of the algorithms has learned the wrong hypothesis, and it needs to know which. Any algorithm that made a prediction other than y or ⊥ is "killed", that is, removed from consideration:

Â' = {i | i ∈ Â ∧ (ŷ_i = y ∨ ŷ_i = ⊥)}.

On each input for which the union algorithm reports ⊥, either one of the subalgorithms reported ⊥ (which happens at most Σ_i B_i(ε, δ) times in total) or two algorithms disagreed and at least one was removed from Â (at most k - 1 times). The KWIK bound follows from these facts.
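As a sketch (our naming; None again plays the role of ⊥), the union algorithm is a thin wrapper around any collection of deterministic KWIK learners exposing predict/observe, such as those given earlier. Example 2 below traces its behavior for k = 2.

```python
class UnionKWIK:
    """Deterministic union of KWIK learners for H_1, ..., H_k over
    shared X and Y; at most (k - 1) + sum_i B_i(eps, delta) Nones."""

    def __init__(self, learners):
        self.active = list(learners)  # the set A-hat of surviving learners
        self.pending = None           # predictions made at the last None

    def predict(self, x):
        preds = [(l, l.predict(x)) for l in self.active]
        outputs = {y for _, y in preds}
        if None not in outputs and len(outputs) == 1:
            return outputs.pop()      # unanimous and confident
        self.pending = preds          # someone said None, or they disagree
        return None

    def observe(self, x, y):
        for learner, y_hat in self.pending:
            if y_hat is None:
                learner.observe(x, y)        # pass the label down to learn
            elif y_hat != y:
                self.active.remove(learner)  # wrong hypothesis: "killed"
```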
Example 2 Let X = Y = R. Define H_1 = {f | f(x) = |x - c|, c ∈ R}. That is, each function in H_1 maps x to its distance from some unknown point c. We can learn H_1 with a KWIK bound of 2 using a one-dimensional version of Algorithm 3. Next, define H_2 = {f | f(x) = mx + b, m ∈ R, b ∈ R}. That is, H_2 is the set of lines. We can learn H_2 with a KWIK bound of 2 using the deterministic regression algorithm of Section 2. Finally, define H = H_1 ∪ H_2, the union of these two classes. We can use Algorithm 6 to KWIK-learn H.

Assume the first input is x_1 = 2. The union algorithm asks the learners for H_1 and H_2 for the output for x_1; neither has any idea, so it returns ⊥ and receives the feedback y_1 = 2, which it passes to the subalgorithms. The next input is x_2 = 8. The learners for H_1 and H_2 still don't have enough information, so the union algorithm returns ⊥ and sees y_2 = 4, which it again passes on. Next, x_3 = 1. Now, the learner for H_1 unambiguously computes c = 4, because that's the only interpretation consistent with the first two examples (|2 - 4| = 2, |8 - 4| = 4), so it returns |1 - 4| = 3. The learner for H_2, on the other hand, unambiguously computes m = 1/3 and b = 4/3, because that's the only interpretation consistent with the first two examples (2 × 1/3 + 4/3 = 2, 8 × 1/3 + 4/3 = 4), so it returns 1 × 1/3 + 4/3 = 5/3. Since the two subalgorithms disagree, the union algorithm returns ⊥ one last time and finds out that y_3 = 3. It makes all future predictions (accurately) using the algorithm for H_1.

Next, we consider a variant of Algorithm 1 that combines learners across disjoint input spaces.

Algorithm 7 Let X_1, ..., X_k be a set of disjoint input spaces (X_i ∩ X_j = ∅ if i ≠ j) and let Y be an output space. Let H_1, ..., H_k be a set of KWIK-learnable hypothesis classes with bounds of B_1(ε, δ), ..., B_k(ε, δ), where H_i ⊆ (X_i → Y). The input-partition algorithm can learn the hypothesis class H ⊆ (X_1 ∪ ··· ∪ X_k → Y) with a KWIK bound of B(ε, δ) = Σ_i B_i(ε, δ/k).

The input-partition algorithm runs the learning algorithm for each subclass H_i. When it receives an input x ∈ X_i, it queries the learning algorithm for class H_i and returns its response, which is ε-accurate by request. To achieve 1 - δ certainty, it insists on 1 - δ/k certainty from each of the subalgorithms; by the union bound, the overall failure probability must be less than the sum of the failure probabilities of the subalgorithms.

Example 3 An MDP consists of n states and m actions. For each combination of state, action, and next state, the transition function returns a probability. As the reinforcement-learning agent moves around in the state space, it observes state–action–state transitions and must predict the probabilities for transitions it has not yet observed. In the model-based setting, an algorithm learns a mapping from the size-n^2 m input space of state–action–state combinations to probabilities via Bernoulli observations. Thus, the problem can be solved via the input-partition algorithm (Algorithm 7) over a set of individual probabilities learned via Algorithm 4. The resulting KWIK bound is B(ε, δ) = O((n^2 m/ε^2) ln(nm/δ)).

Note that this approach is precisely what is found in most efficient RL algorithms in the literature (Kearns & Singh, 2002; Brafman & Tennenholtz, 2002).
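A sketch of the input-partition construction (our naming): route each input to the sublearner that owns its part of the input space, running every sublearner at confidence δ/k.

```python
class InputPartitionKWIK:
    """KWIK learner over disjoint input spaces X_1, ..., X_k.

    Each input is routed to its part's sublearner; running each
    sublearner with confidence delta / k gives overall confidence
    delta by the union bound, and the None counts simply add up.
    """

    def __init__(self, part_of, learners):
        self.part_of = part_of    # function mapping an input to its part's key
        self.learners = learners  # dict: part key -> KWIK sublearner

    def predict(self, x):
        return self.learners[self.part_of(x)].predict(x)

    def observe(self, x, z):
        self.learners[self.part_of(x)].observe(x, z)
```

For Example 3, part_of could map a transition query (s, a, s') to itself, with one CoinLearningKWIK(eps, delta / (n * n * m)) per triple: the n^2 m-way partition the example describes.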
Algorithm 7 combines hypotheses by partitioning the input space. In contrast, the next algorithm concerns combinations in input and output space.

Algorithm 8 Let X_1, ..., X_k and Y_1, ..., Y_k be a set of input and output spaces, and let H_1, ..., H_k be a set of KWIK-learnable hypothesis classes on these spaces with bounds of B_1(ε, δ), ..., B_k(ε, δ). That is, H_i ⊆ (X_i → Y_i). The cross-product algorithm can learn the hypothesis class H ⊆ (X_1 × ··· × X_k → Y_1 × ··· × Y_k) with a KWIK bound of B(ε, δ) = Σ_i B_i(ε, δ/k).

Here, each input consists of a vector of inputs from the spaces X_1, ..., X_k, and outputs are vectors of outputs from Y_1, ..., Y_k. Like Algorithm 7, each component of this vector can be learned independently via the corresponding algorithm, to within an accuracy of ε and confidence 1 - δ/k. Any time any component returns ⊥, the cross-product algorithm returns ⊥. Since each returned ⊥ can be traced to one of the subalgorithms, the total number of ⊥s is bounded as described above. By the union bound, the total failure probability is no more than k × δ/k = δ.

Example 4 Transitions in a factored-state MDP can be thought of as mappings from vectors to vectors. Given known dependencies, the cross-product algorithm can be used to learn each component of the transition function. Each component is, itself, an instance of Algorithm 7 applied to the coin-learning algorithm. This three-level KWIK algorithm provides an approach to learning the transition function of a factored-state MDP with a polynomial KWIK bound. This insight can be used to derive the factored-state-MDP learning algorithm used by Kearns and Koller (1999).

The previous two algorithms apply to both deterministic and noisy observations. We next provide a powerful algorithm that generalizes the union algorithm (Algorithm 6) to work with noisy observations as well.

Algorithm 9 Let F = (X → Y) be the set of functions mapping input set X to output set Y = [0, 1]. Let Z = {0, 1} be a binary observation set. Let H_1, ..., H_k be a set of KWIK-learnable hypothesis classes with bounds of B_1(ε, δ), ..., B_k(ε, δ), where H_i ⊆ F for all 1 ≤ i ≤ k. That is, all the hypothesis classes share the same input/output sets. The noisy union algorithm can learn the joint hypothesis class H = ∪_i H_i with a KWIK bound of

B(ε, δ) = O((k/ε^2) ln(k/δ)) + Σ_{i=1}^{k} B_i(ε/4, δ/(k+1)).

For simplicity, we sketch the special case of k = 2; the general case is discussed briefly at the end. The noisy union algorithm is similar to the union algorithm (Algorithm 6), except that it has to deal with noisy observations. The algorithm proceeds by running the KWIK algorithms, using parameters (ε_0, δ_0), as subalgorithms for each of the H_i hypothesis classes, where ε_0 = ε/4 and δ_0 = δ/3. Given an input x_t in trial t, it queries each subalgorithm i to obtain a prediction ŷ_{ti}. Let L̂_t be the set of responses. If ⊥ ∈ L̂_t, the noisy union algorithm reports ⊥, obtains an observation z_t ∈ Z, and sends it to all subalgorithms i with ŷ_{ti} = ⊥ to allow them to learn. In the following, we focus on the other case, where ⊥ ∉ L̂_t.

If |ŷ_{t1} - ŷ_{t2}| ≤ 4ε_0, then the two predictions are sufficiently consistent, and we claim that, with high probability, the prediction p̂_t = (ŷ_{t1} + ŷ_{t2})/2 is ε-close to y_t = Pr(z_t = 1). This claim follows because, by assumption, one of the predictions, say ŷ_{t1}, deviates from y_t by at most ε_0 with probability at least 1 - δ/3, and hence

|p̂_t - y_t| ≤ |p̂_t - ŷ_{t1}| + |ŷ_{t1} - y_t| = |ŷ_{t1} - ŷ_{t2}|/2 + |ŷ_{t1} - y_t| ≤ 2ε_0 + ε_0 < ε.

If |ŷ_{t1} - ŷ_{t2}| > 4ε_0, then the individual predictions are not consistent enough for the noisy union algorithm to make an ε-accurate prediction. Thus, it reports ⊥ and needs to know which subalgorithm provided an inaccurate response. But, since the observations are noisy in this problem, it cannot eliminate an h_i on the basis of a single observation. Instead, it maintains the total squared prediction error of every subalgorithm i, ℓ_i = Σ_{t∈I} (ŷ_{ti} - z_t)^2, where I = {t : |ŷ_{t1} - ŷ_{t2}| > 4ε_0} is the set of trials in which the subalgorithms gave inconsistent predictions. We observe that |I| is the number of ⊥s returned by the noisy union algorithm alone (not counting those returned by the subalgorithms). The last step is to show that ℓ_i provides a robust measure for eliminating invalid predictors when |I| is sufficiently large. Applying the Hoeffding bound and some algebra, we find that, if subalgorithm 1 is the accurate one,

Pr(ℓ_1 > ℓ_2) ≤ exp(-(Σ_{t∈I} |ŷ_{t1} - ŷ_{t2}|^2)/8) ≤ exp(-2 ε_0^2 |I|).

Setting the right-hand side to δ/3 and solving for |I|, we have |I| = (1/(2ε_0^2)) ln(3/δ) = O((1/ε^2) ln(1/δ)). Since each h_i succeeds with probability 1 - δ/3, and the comparison of ℓ_1 and ℓ_2 also succeeds with probability 1 - δ/3, a union bound implies that the noisy union algorithm succeeds with probability at least 1 - δ. All ⊥s are either from a subalgorithm (at most Σ_i B_i(ε_0, δ_0)) or from the noisy union algorithm itself (O((1/ε^2) ln(1/δ))).

The general case of k > 2 can be reduced to the k = 2 case by pairing the k learners and running the noisy union algorithm described above on each pair; each subalgorithm is then run with parameters ε/4 and δ/(k+1). Although there are C(k, 2) = O(k^2) pairs, a slightly improved reduction and analysis can reduce the dependence of |I| on k from quadratic to linearithmic, leading to the bound given in the statement.

Example 5 Without known dependencies, learning a factored-state MDP is more challenging. Strehl et al. (2007) showed that each possible dependence structure can be viewed as a separate hypothesis and provided an algorithm for learning the dependencies in a factored-state MDP while learning the transition probabilities. The algorithm can be viewed as a four-level KWIK algorithm with a noisy union algorithm at the top (to discover the dependence structure), a cross-product algorithm beneath it (to decompose the transitions for the separate components of the factored-state representation), an input-partition algorithm below that (to handle the different combinations of state component and action), and a coin-learning algorithm at the very bottom (to learn the transition probabilities themselves). Note that Algorithm 9 is conceptually simpler, significantly more efficient (k log k vs. k^2 dependence on k), and more generally applicable than the one due to Strehl et al. (2007).
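For k = 2, the decision rule above translates into the following sketch (our naming and elimination bookkeeping; None stands for ⊥):

```python
import math

class NoisyUnionKWIK2:
    """Noisy union of two KWIK sublearners over Y = [0, 1] with Bernoulli
    observations: the k = 2 case sketched above. The sublearners are
    assumed to be run with accuracy eps0 = eps / 4 and confidence
    delta0 = delta / 3."""

    def __init__(self, learner1, learner2, eps, delta):
        self.eps0 = eps / 4.0          # sublearner accuracy parameter
        self.A = [learner1, learner2]
        self.sq_err = [0.0, 0.0]       # squared error on disagreement trials
        self.disagreements = 0         # |I| in the analysis above
        self.max_disagreements = math.log(3.0 / delta) / (2.0 * self.eps0**2)
        self.last = None               # sublearner predictions at the last None

    def predict(self, x):
        y1, y2 = self.A[0].predict(x), self.A[1].predict(x)
        self.last = (y1, y2)
        if y1 is None or y2 is None:
            return None                      # a sublearner still needs data
        if abs(y1 - y2) <= 4 * self.eps0:
            return (y1 + y2) / 2.0           # consistent: midpoint is eps-accurate
        return None                          # inconsistent: observe and keep score

    def observe(self, x, z):
        y1, y2 = self.last
        for yi, learner in zip((y1, y2), self.A):
            if yi is None:
                learner.observe(x, z)
        if y1 is not None and y2 is not None:    # a disagreement trial (t in I)
            self.sq_err[0] += (y1 - z) ** 2
            self.sq_err[1] += (y2 - z) ** 2
            self.disagreements += 1
            if self.disagreements >= self.max_disagreements:
                # eliminate the sublearner with larger total squared error
                keep = self.A[0] if self.sq_err[0] <= self.sq_err[1] else self.A[1]
                self.A = [keep, keep]  # the survivor answers alone from now on
```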
6. Conclusion and Future Work

We described the KWIK ("knows what it knows") model of supervised learning, which identifies and generalizes a key step common to a class of algorithms for efficient exploration. We provided algorithms for a set of basic hypothesis classes given deterministic and noisy observations, as well as methods for composing hypothesis classes to create more complex algorithms. One example algorithm consisted of a four-level decomposition of an existing learning algorithm from the reinforcement-learning literature. By providing a set of example algorithms and composition rules, we hope to encourage the use of KWIK algorithms as a component in machine-learning applications, as well as to spur the development of novel algorithms. One concern of particular interest in applying the KWIK framework to real-life data is left as a final open problem.

Open Problem 4 How can KWIK be adapted to apply in the unrealizable setting, in which the target hypothesis can be chosen from outside the hypothesis class H?

Acknowledgments

Support provided by NSF IIS-0325281 and DARPA TL.

References

Angluin, D. (2004). Queries revisited. Theoretical Computer Science, 313, 175–194.
Bagnell, J., Ng, A. Y., & Schneider, J. (2001). Solving uncertain Markov decision problems (Technical Report CMU-RI-TR-01-25). Robotics Institute, Carnegie Mellon University, Pittsburgh, PA.

Blum, A. (1994). Separating distribution-free and mistake-bound learning models over the Boolean domain. SIAM Journal on Computing, 23, 990–1000.

Brafman, R. I., & Tennenholtz, M. (2002). R-MAX: A general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3, 213–231.

Cesa-Bianchi, N., Gentile, C., & Zaniboni, L. (2006). Worst-case analysis of selective sampling for linear classification. Journal of Machine Learning Research, 7, 1205–1230.

Cesa-Bianchi, N., Lugosi, G., & Stoltz, G. (2005). Minimizing regret with label efficient prediction. IEEE Transactions on Information Theory, 51, 2152–2162.

Cohn, D. A., Atlas, L., & Ladner, R. E. (1994). Improving generalization with active learning. Machine Learning, 15, 201–221.

Fong, P. W. L. (1995). A quantitative study of hypothesis selection. Proceedings of the Twelfth International Conference on Machine Learning (ICML-95) (pp. 226–234).

Freund, Y., Schapire, R. E., Singer, Y., & Warmuth, M. K. (1997). Using and combining predictors that specialize. STOC '97: Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing (pp. 334–343).

Helmbold, D. P., Littlestone, N., & Long, P. M. (2000). Apple tasting. Information and Computation, 161, 85–139.

Kakade, S., Kearns, M., & Langford, J. (2003). Exploration in metric state spaces. Proceedings of the 20th International Conference on Machine Learning (ICML-03).

Kakade, S. M. (2003). On the sample complexity of reinforcement learning. Doctoral dissertation, Gatsby Computational Neuroscience Unit, University College London.

Kearns, M. J., & Koller, D. (1999). Efficient reinforcement learning in factored MDPs. Proceedings of the 16th International Joint Conference on Artificial Intelligence (IJCAI) (pp. 740–747).

Kearns, M. J., & Singh, S. P. (2002). Near-optimal reinforcement learning in polynomial time. Machine Learning, 49, 209–232.

Lane, T., & Brodley, C. E. (2003). An empirical study of two approaches to sequence learning for anomaly detection. Machine Learning, 51, 73–107.

Littlestone, N. (1987). Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2, 285–318.

Strehl, A. L., Diuk, C., & Littman, M. L. (2007). Efficient structure learning in factored-state MDPs. Proceedings of the Twenty-Second National Conference on Artificial Intelligence (AAAI-07).

Strehl, A. L., & Littman, M. L. (2008). Online linear regression and its application to model-based reinforcement learning. Advances in Neural Information Processing Systems 20.

Strehl, A. L., Mesterharm, C., Littman, M. L., & Hirsh, H. (2006). Experience-efficient learning in associative bandit problems. Proceedings of the Twenty-Third International Conference on Machine Learning (ICML-06).

Valiant, L. G. (1984). A theory of the learnable. Communications of the ACM, 27, 1134–1142.

Weiss, G. M., & Tian, Y. (2006). Maximizing classifier utility when training data is costly. SIGKDD Explorations, 8, 31–38.