An Analysis of Linear Models, Linear Value-Function Approximation, and Feature Selection for Reinforcement Learning

Ronald Parr, Duke University (parr@cs.duke.edu)
Lihong Li, Rutgers University (lihong@cs.rutgers.edu)
Gavin Taylor, Duke University (gvtaylor@cs.duke.edu)
Christopher Painter-Wakefield, Duke University (paint007@cs.duke.edu)
Michael L. Littman, Rutgers University (mlittman@cs.rutgers.edu)

Department of Computer Science, Duke University, Durham, NC 27708 USA
Department of Computer Science, Rutgers University, Piscataway, NJ 08854 USA

Appearing in Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, 2008. Copyright 2008 by the author(s)/owner(s).

Abstract

We show that linear value-function approximation is equivalent to a form of linear model approximation. We then derive a relationship between the model-approximation error and the Bellman error, and show how this relationship can guide feature selection for model improvement and/or value-function improvement. We also show how these results give insight into the behavior of existing feature-selection algorithms.

1. Introduction

Broadly speaking, there are two types of reinforcement-learning (RL) algorithms: model-free and model-based. Model-free approaches typically use samples to learn a value function, from which a policy is implicitly derived. In contrast, model-based approaches build a model of system behavior from samples, and the model is used to compute a value function or policy. Both approaches have advantages and disadvantages, and function approximation can be applied to either, to represent a value function or a model. Examples of function approximators include decision trees, neural networks, and linear functions.

The first contribution of this paper shows that, when linear value-function approximation is used for policy evaluation, as in nominally model-free approaches such as linear TD learning (Sutton, 1988) or LSTD (Bradtke & Barto, 1996), the resulting value function is precisely the same as the value function that results from an exact solution to a corresponding approximate, linear model, where the value function and the linear model are defined over the same set of features. This insight results in a novel view of the Bellman error and a deeper understanding of the problem of feature selection when linear function approximation is used. Specifically, we show that the Bellman error can be decomposed into two types of errors in the learned linear model: the reward error and the feature error. This decomposition gives insight into the behavior of existing approximation techniques, suggests new views of feature selection, and explains the behavior of existing feature-selection algorithms.

2. Formal Framework and Notation

We are interested in both controlled and uncontrolled Markov processes with a set $S$ of states $s$ and, when applicable, a set $A$ of actions $a$. Our main results and experiments consider the uncontrolled or policy-evaluation case, but many of the ideas can be applied to the controlled case, as discussed in more detail in Section 3.2. We refer to the uncontrolled case as a Markov reward process (MRP), $M = (S, P, R, \gamma)$, and the controlled case as a Markov decision process (MDP), $M = (S, A, P, R, \gamma)$. Given a state $s_i$, the probability of a transition to a state $s_j$ under action $a$ is $P^a_{ij}$, and it results in an expected reward of $R^a_i$. In the uncontrolled case, we use $P_{ij}$ and $R_i$ for the transitions and rewards.

We are concerned with finding value functions $V$ that map each state $s_i$ to the expected total $\gamma$-discounted reward for the process. In particular, we would like to find the solution to the Bellman equation

    V[s_i] = \max_a \Big( R^a_i + \gamma \sum_j P^a_{ij} V[s_j] \Big)

in the controlled case (the max and the $a$'s are eliminated from the equation in the uncontrolled case). For any matrix $A$, we use $A^T$ to indicate the transpose of $A$ and $\mathrm{span}(A)$ to indicate the column space of $A$.
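To make the uncontrolled notation concrete, the following is a minimal numpy sketch (ours, not from the paper) that solves the Bellman equation exactly for a small, hypothetical MRP by treating it as the linear system $(I - \gamma P)V = R$; the transition matrix, rewards, and discount are made-up illustrative values.

```python
import numpy as np

# Hypothetical 3-state MRP (values chosen only for illustration).
P = np.array([[0.9, 0.1, 0.0],   # P[i, j] = probability of moving from state i to state j
              [0.0, 0.8, 0.2],
              [0.1, 0.0, 0.9]])
R = np.array([0.0, 0.0, 1.0])    # expected immediate reward in each state
gamma = 0.95                     # discount factor

# The uncontrolled Bellman equation V = R + gamma * P V is a linear system,
# so the exact value function is V = (I - gamma * P)^{-1} R.
V = np.linalg.solve(np.eye(len(R)) - gamma * P, R)
print(V)
```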
2.1. Linear Value Functions

In cases where the value function cannot be represented exactly, it is common to use some form of parametric value-function approximation, such as a linear combination of features or basis functions:

    \hat{V} = \sum_{i=1}^{k} w_i \phi_i,

where $\Phi = \{\phi_1, \ldots, \phi_k\}$ is a set of linearly independent basis functions of the state, with $\phi_i(s)$ defined as the value of feature $i$ in state $s$. (Permitting linearly dependent basis functions would not change our results, but would complicate the exposition, since the resulting weight vectors would no longer be unique. In practice, one may use SVD to enforce the selection of a unique solution when features are linearly dependent.) The vector $w = \{w_1, \ldots, w_k\}$ is a set of scalar weights. We can think of $\Phi$ as a design matrix with $\Phi[i, j] = \phi_j(s_i)$; that is, the basis functions span the columns of $\Phi$ and the states span the rows. Expressing the weights $w$ as a column vector, we have $\hat{V} = \Phi w$.

Methods for finding a reasonable $w$ given $\Phi$ and a set of samples include linear TD (Sutton, 1988), LSTD (Bradtke & Barto, 1996), and LSPE (Yu & Bertsekas, 2006). If the model can be expressed as a factored MDP, the weights can be found directly (Koller & Parr, 1999). We refer to this family of methods as linear fixed-point methods because they all solve for the fixed point

    \hat{V} = \Phi w_\Phi = \Pi (R + \gamma P \Phi w_\Phi),        (1)

where $\Pi$ is the $\rho$-weighted $L_2$ projection into $\mathrm{span}(\Phi)$ and $\rho$ is a state-weighting distribution, typically the stationary distribution of $P$. If $D_\rho = \mathrm{diag}(\rho)$, then $\Pi_\rho = \Phi (\Phi^T D_\rho \Phi)^{-1} \Phi^T D_\rho$. In some cases, an unweighted projection (uniform $\rho$) or some other $\rho$ is used. Most of our results do not depend upon the projection weights, so we shall assume uniform $\rho$ unless otherwise stated. Solving for $w_\Phi$ yields:

    w_\Phi = (I - \gamma (\Phi^T \Phi)^{-1} \Phi^T P \Phi)^{-1} (\Phi^T \Phi)^{-1} \Phi^T R        (2)
           = (\Phi^T \Phi - \gamma \Phi^T P \Phi)^{-1} \Phi^T R.        (3)

2.2. Linear Models

As in the case of linear value functions, we assume the existence of a set of linearly independent features $\phi_1, \ldots, \phi_k$ for representing transition and reward models, with $\phi_i(s)$ defined as the value of feature $i$ in state $s$. While value-function approximation typically uses features to predict values, we will consider the use of these features to predict next features. For feature vector $\phi(s) = [\phi_1(s), \ldots, \phi_k(s)]^T$, we define $\phi(s'|s)$ as the random vector of next features: $\phi(s'|s) = [\phi_1(s'), \ldots, \phi_k(s')]^T$, where $s' \sim P(s'|s)$. Our objective is to produce a $k \times k$ matrix $P_\Phi$ that predicts expected next feature vectors, $P_\Phi^T \phi(s) \approx E_{s' \sim P(s'|s)}\{\phi(s'|s)\}$, and minimizes the expected feature-prediction error:

    P_\Phi = \arg\min_{P_k} \sum_s \| P_k^T \phi(s) - E\{\phi(s'|s)\} \|_2^2.        (4)

(For brevity, we shall henceforth leave $s' \sim P(s'|s)$ implicit.) One way to solve the minimization problem in Eq. (4) is to compute the expected next features explicitly as the $n \times k$ matrix $P\Phi$ and then find the least-squares solution to the over-constrained system $\Phi P_\Phi \approx P\Phi$, since the $i$th row of $\Phi P_\Phi$ is $P_\Phi$'s prediction of the next feature values for state $i$ and the $i$th row of $P\Phi$ is the expected value of these features. The least-squares solution is

    P_\Phi = (\Phi^T \Phi)^{-1} \Phi^T P \Phi,        (5)

with approximate next feature values $\Phi P_\Phi \approx P\Phi$. To predict the reward model using the same features, we can perform a standard least-squares projection into $\mathrm{span}(\Phi)$ to compute an approximate reward predictor:

    r_\Phi = (\Phi^T \Phi)^{-1} \Phi^T R,        (6)

with corresponding approximate reward $\hat{R} = \Phi r_\Phi$. As in the value-function approximation case, it is possible to use a weighted $L_2$ projection, a straightforward generalization that we omit for conciseness of presentation.
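The following numpy sketch (our illustration, not code from the paper) builds the approximate linear model of Eqs. (5) and (6) for an assumed feature matrix `Phi`, transition matrix `P`, and reward vector `R`; `lstsq` plays the role of the unweighted projection.

```python
import numpy as np

def linear_model(Phi, P, R):
    """Least-squares linear model over features (unweighted projection).

    Phi : (n, k) design matrix, Phi[i, j] = phi_j(s_i)
    P   : (n, n) transition matrix
    R   : (n,)   expected reward vector
    Returns P_Phi (k, k) and r_Phi (k,), as in Eqs. (5) and (6).
    """
    # Expected next feature values for every state: row i is E[phi(s') | s_i].
    expected_next_features = P @ Phi            # the n x k matrix "P Phi"
    # Solve the over-constrained systems Phi P_Phi ~ P Phi and Phi r_Phi ~ R.
    P_Phi, *_ = np.linalg.lstsq(Phi, expected_next_features, rcond=None)
    r_Phi, *_ = np.linalg.lstsq(Phi, R, rcond=None)
    return P_Phi, r_Phi
```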
Classically, an advantage of learning a model and deriving values from the model (indirect learning), over using samples to estimate the values directly (direct learning), is that the indirect approach can be very data efficient. On the other hand, learning an accurate model can require a great deal of experience. Surprisingly, we find that the two approaches are the same, at least in the linear approximation setting.

In this paper, we assume that $P$ is known and that $\Phi$ can be constructed exactly, whereas in all but the factored-model case these would be sampled. This assumption allows us to characterize the representational power of the features as an issue separate from the variance introduced by sampling.

3. Linear Fixed-Point Solution = Linear-Model Solution

The notion that linear fixed-point methods are implicitly computing some sort of model has been recognized in varying degrees for several years. For example, Boyan (1999) considered the intermediate calculations performed by LSTD in some special cases, and interpreted parts of the LSTD algorithm as computing a compressed model. In this section, we show that the linear fixed-point solution for features $\Phi$ is exactly the solution to the linear model described by $P_\Phi$ and $r_\Phi$. We first prove this for the uncontrolled case, and then generalize the result to the controlled case. Our results concern unweighted projections, but generalize readily to weighted projections.

3.1. The Uncontrolled Case

Recall that the approximate model transforms feature vectors to feature vectors, so any $k$-vector is a state in the approximate model. If $x$ is such a state, then, in the approximate model, $r_\Phi^T x$ is the reward for this state and $P_\Phi^T x$ is the next-state vector. The Bellman equation for state $x$ is:

    V[x] = r_\Phi^T x + \gamma V[P_\Phi^T x] = \sum_{i=0}^{\infty} \gamma^i r_\Phi^T (P_\Phi^i)^T x.

Expressed with respect to the original state space, the value function becomes

    V = \sum_{i=0}^{\infty} \gamma^i \Phi P_\Phi^i r_\Phi,

which is a linear combination of the columns of $\Phi$. Since $V = \Phi w$ for some $w$, the fixed-point equation becomes:

    \hat{V} = \hat{R} + \gamma \Phi P_\Phi w        (7)
        w = r_\Phi + \gamma P_\Phi w        (8)
        w = (I - \gamma P_\Phi)^{-1} r_\Phi.        (9)

We call the solution to this system the linear-model solution. A solution exists when $P_\Phi$ has a spectral radius less than $1/\gamma$. This condition is not guaranteed because $P_\Phi$ is not necessarily a stochastic matrix; it is simply a matrix that predicts expected next feature values. The cases where the spectral radius of $P_\Phi$ exceeds $1/\gamma$ correspond to cases where the value function defined by $P_\Phi$ and $r_\Phi$ assigns unbounded value to some states.
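As a sketch (ours, with assumed inputs), the linear-model solution of Eq. (9) can be computed directly from $P_\Phi$ and $r_\Phi$ after checking the spectral-radius condition described above:

```python
import numpy as np

def linear_model_solution(P_Phi, r_Phi, gamma):
    """Exact value-function weights of the approximate linear model, Eq. (9)."""
    # The solution is well defined only if the spectral radius of P_Phi is below
    # 1/gamma; otherwise the approximate model assigns unbounded value to some states.
    spectral_radius = max(abs(np.linalg.eigvals(P_Phi)))
    if gamma * spectral_radius >= 1.0:
        raise ValueError("gamma * spectral_radius(P_Phi) >= 1: model value diverges")
    k = P_Phi.shape[0]
    return np.linalg.solve(np.eye(k) - gamma * P_Phi, r_Phi)
```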
Theorem 3.1 For any MRP M and set of features $\Phi$, the linear-model solution and the linear fixed-point solution are identical.

Proof We begin with the expression for the linear-model solution from Eq. (9) and substitute the definitions of $P_\Phi$ and $r_\Phi$ from Eq. (5) and Eq. (6), yielding:

    w = (I - \gamma P_\Phi)^{-1} r_\Phi
      = (I - \gamma (\Phi^T \Phi)^{-1} \Phi^T P \Phi)^{-1} (\Phi^T \Phi)^{-1} \Phi^T R
      = w_\Phi,

where the final equality is Eq. (2).

This result demonstrates that, for a given set of features $\Phi$, there is no difference between using the exact model to find an approximate linear fixed-point value function in terms of $\Phi$, and first constructing an approximate linear model in terms of $\Phi$ and then solving for the exact value function of the approximate model using Eq. (9). Although the model-based view produces exactly the same value function as the value-function-based view, it can give a new perspective on error analysis and feature selection, as shown in later sections.
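A quick numerical sanity check of Theorem 3.1 (our illustration; the random MRP and features are made up) compares the fixed-point solution of Eq. (3) with the model-based solution of Eqs. (5), (6), and (9):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, gamma = 20, 4, 0.9

# A random MRP and random linearly independent features (illustrative only).
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)   # row-stochastic
R = rng.random(n)
Phi = rng.random((n, k))

# Linear fixed-point solution, Eq. (3).
w_fixed_point = np.linalg.solve(Phi.T @ Phi - gamma * Phi.T @ P @ Phi, Phi.T @ R)

# Model-based solution: build P_Phi and r_Phi, then solve Eq. (9).
P_Phi, *_ = np.linalg.lstsq(Phi, P @ Phi, rcond=None)
r_Phi, *_ = np.linalg.lstsq(Phi, R, rcond=None)
w_model = np.linalg.solve(np.eye(k) - gamma * P_Phi, r_Phi)

assert np.allclose(w_fixed_point, w_model)   # Theorem 3.1
```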
3.2. The Controlled Case: LSPI

For the controlled case, we denote a policy as $\pi: S \to A$. Since rewards and transitions are action dependent, the value function is defined over state-action pairs and is called a Q-function. For a fixed policy $\pi$, $Q^\pi$ is the unique fixed-point solution to the Bellman equation

    Q^\pi[s_i, a] = R^a_i + \gamma \sum_j P^a_{ij} Q^\pi[s_j, \pi(s_j)].

As in Section 2.1, $Q^\pi$ can be approximated by functions in $\mathrm{span}(\Phi)$: $\hat{Q} = \sum_{i=1}^{k} w_i \phi_i$, but now the basis functions $\phi_i$ are defined over state-action pairs rather than states.

In the controlled case, the policy can be refined iteratively, as in the Least-Squares Policy Iteration (LSPI) algorithm (Lagoudakis & Parr, 2003). Starting with an arbitrary policy $\pi_1$, LSPI performs two steps iteratively until certain termination conditions are satisfied. In iteration $i$, it first computes an approximate linear value function $\hat{Q}_i$ for the current policy $\pi_i$ (the policy-evaluation step), and then computes a new policy $\pi_{i+1}$ that is greedy with respect to $\hat{Q}_i$ (the policy-improvement step).

In the policy-evaluation step, an algorithm called LSTDQ, the Q-version of the LSTD algorithm, is used to compute $\hat{Q}_i$. Since a Markov decision process controlled by a fixed policy is equivalent to an induced Markov reward process whose state space is $S \times A$, LSTDQ can be viewed as LSTD running on this induced MRP. By Theorem 3.1, LSTDQ effectively builds a least-squares linear model approximation and then finds the exact solution to this model. Therefore, the intermediate value functions $\hat{Q}_i$ found by LSPI are the exact value functions of the respective approximate linear models with the smallest (weighted) $L_2$ error.

4. Analysis of Error: Uncontrolled Case

Value-function-based methods often analyze the error of a value function $\hat{V}$ in terms of the one-step lookahead error, or Bellman error:

    BE(\hat{V}) = R + \gamma P \hat{V} - \hat{V}.

In the context of linear value functions and linear models, we define the Bellman error for a set of features $\Phi$ as the error in the linear fixed-point value function for $\Phi$:

    BE(\Phi) = BE(\Phi w_\Phi) = R + \gamma P \Phi w_\Phi - \Phi w_\Phi.

To understand the relationship between the error in the linear model and the Bellman error, we define two components of the model error, the reward error:

    \Delta_R = R - \hat{R},

and the per-feature error:

    \Delta_\Phi = P\Phi - \Phi P_\Phi.

The per-feature error is the error in the prediction of the expected next feature values, so both terms can be thought of as the residual error of the linear model. The next theorem relates the Bellman error to these model errors.

Theorem 4.1 For any MRP M and features $\Phi$,

    BE(\Phi) = \Delta_R + \gamma \Delta_\Phi w_\Phi.        (10)

Proof Using the definitions of $BE(\Phi)$, $\Delta_R$, and $\Delta_\Phi$:

    BE(\Phi) = R + \gamma P \Phi w_\Phi - \Phi w_\Phi
             = (\Delta_R + \hat{R}) + \gamma (\Delta_\Phi + \Phi P_\Phi) w_\Phi - \Phi w_\Phi
             = (\Delta_R + \gamma \Delta_\Phi w_\Phi) + \hat{R} + (\gamma \Phi P_\Phi - \Phi) w_\Phi
             = (\Delta_R + \gamma \Delta_\Phi w_\Phi) + \hat{R} - \Phi (I - \gamma P_\Phi) w_\Phi
             = (\Delta_R + \gamma \Delta_\Phi w_\Phi) + \hat{R} - \Phi r_\Phi
             = \Delta_R + \gamma \Delta_\Phi w_\Phi.

The final step follows from the definition of $\hat{R}$, and the penultimate step follows from Eq. (9) and Theorem 3.1.

This decomposition lets us think of the Bellman error as composed of two separate sources of error: the reward error and the per-feature error. In the next section, we show that this view can give insight into the problem of feature selection, but we also caution that there can be interactions between $\Delta_R$ and $\Delta_\Phi$. For example, consider the basis composed of the single basis function $\phi = V^*$, the true value function. Clearly, $BE(\phi) = 0$, but for any non-trivial problem and approximate model, $\Delta_R$ and $\gamma \Delta_\Phi w_\phi$ will be nonzero and will cancel each other out in Eq. (10).

A similar result may be possible for the controlled case, but there are some subtleties. For example, there is not a clean notion of a fixed point for the outer loop of the LSPI algorithm, since the algorithm is not guaranteed to converge to a single policy or $w$.
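Before turning to feature selection, here is a small sketch (ours; the helper name and inputs are our own) that computes both error terms and checks the decomposition of Theorem 4.1 numerically:

```python
import numpy as np

def bellman_error_decomposition(Phi, P, R, gamma):
    """Return (Bellman error, reward error Delta_R, feature-error term gamma*Delta_Phi*w)."""
    # Approximate linear model, Eqs. (5)-(6), and its weights, Eq. (9).
    P_Phi, *_ = np.linalg.lstsq(Phi, P @ Phi, rcond=None)
    r_Phi, *_ = np.linalg.lstsq(Phi, R, rcond=None)
    w_Phi = np.linalg.solve(np.eye(Phi.shape[1]) - gamma * P_Phi, r_Phi)

    Delta_R = R - Phi @ r_Phi              # reward error
    Delta_Phi = P @ Phi - Phi @ P_Phi      # per-feature error (one column per feature)

    V_hat = Phi @ w_Phi
    bellman_error = R + gamma * P @ V_hat - V_hat
    # Theorem 4.1: the two model-error terms account for the entire Bellman error.
    assert np.allclose(bellman_error, Delta_R + gamma * Delta_Phi @ w_Phi)
    return bellman_error, Delta_R, gamma * Delta_Phi @ w_Phi
```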
5. Feature Selection

We present several insights on the problem of feature selection that follow from the results presented above.

5.1. General Observations about $\Delta_R$ and $\Delta_\Phi$

The condition $\Delta_R = \Delta_\Phi = 0$ is sufficient (but not necessary) to achieve zero Bellman error and a perfect value function. Specifically, it requires that the features of the approximate model capture the structure of the reward function, and that the features of the approximate model are sufficient to predict expected next features. In the case where $\Phi$ is a set of indicator functions over disjoint partitions of $S$, these conditions are similar to those specified for model minimization (Dean & Givan, 1997) in MDPs.

Features that are insufficient to represent the immediate reward are likely to be problematic, since any error in the prediction of the immediate reward from the features ($\Delta_R$) appears directly in the Bellman error through the first summand of Eq. (10). This finding is consistent with the observation of Petrik (2007) of the problems that arise when the reward is orthogonal to the features.

For $\Delta_\Phi = 0$, the Bellman error is determined entirely by $\Delta_R$, with no dependence on $w_\Phi$. This observation has some interesting implications for feature selection and for the analysis of the resulting approximate value function, topics we address further in Section 5.3.

5.2. Incremental Feature Generation

This section presents two existing methods for incrementally building a basis, the Krylov basis and Bellman Error Basis Functions (BEBFs). We also propose a new method based upon the model error, Model Error Basis Functions (MEBFs), and then show that all three methods are equivalent given the same initial conditions.

5.2.1. The Krylov Basis

The Krylov basis is defined in terms of powers of the transition matrix multiplied by $R$. We refer to the Krylov basis with $k$ basis functions, starting from $X$, as $\mathrm{Krylov}_k(X)$, with

    \mathrm{Krylov}_k(X) = \{ P^{i-1} X : 1 \le i \le k \}.

For an MRP, typically $X = R$. The Krylov basis, and Krylov methods in general, are standard techniques for the iterative solution of systems of linear equations. Their relevance to feature selection for RL was demonstrated by Petrik (2007).

5.2.2. BEBFs

Many researchers have proposed using features based upon the residual error in the current feature set (Wu & Givan, 2004; Sanner & Boutilier, 2005; Keller et al., 2006). Parr et al. (2007) describe this family of techniques as Bellman Error Basis Functions (BEBFs) and analyze some of the properties of this approach. More formally, if $\Phi_k w_k$ is the current value function, BEBF adds $\phi_{k+1} = BE(\Phi_k)$ as the next basis function. We refer to the basis resulting from $k - 1$ iterations of BEBF, starting from $X$, as $\mathrm{BEBF}_k(X)$.

Theorem 5.1 (Petrik; personal communication, 2007) For any $k \ge 1$, $\mathrm{span}(\mathrm{Krylov}_k(R)) = \mathrm{span}(\mathrm{BEBF}_k(R))$.

Proof The proof is by induction on $k$. For the base case: $\mathrm{Krylov}_1(R) = \mathrm{BEBF}_1(R) = R$. For the inductive step, we assume equality up to $k$, so for both methods the value function can be expressed as

    \Phi_k w_k = \sum_{i=1}^{k} w_i P^{i-1} R.

Now, observe that:

    BE(\Phi_k) = R + \gamma P \Big( \sum_{i=1}^{k} w_i P^{i-1} R \Big) - \sum_{i=1}^{k} w_i P^{i-1} R.

The only part of the above that is not already in the basis is the contribution from $P^k R$, which is precisely what is added in $\mathrm{Krylov}_{k+1}(R)$.
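As an illustration (ours, not the paper's code), the two constructions can be generated side by side; for exact $P$ and $R$, small $k$, and exact arithmetic, the resulting spans coincide, as Theorem 5.1 states. The function names are our own.

```python
import numpy as np

def krylov_basis(P, R, k):
    """Krylov_k(R): columns R, PR, ..., P^{k-1} R."""
    cols, x = [], R.copy()
    for _ in range(k):
        cols.append(x)
        x = P @ x
    return np.column_stack(cols)

def bebf_basis(P, R, gamma, k):
    """BEBF_k(R): start from R and repeatedly add the Bellman error of the current basis."""
    Phi = R.reshape(-1, 1)
    for _ in range(k - 1):
        # Linear fixed-point weights for the current basis, Eq. (3).
        w = np.linalg.solve(Phi.T @ Phi - gamma * Phi.T @ P @ Phi, Phi.T @ R)
        V_hat = Phi @ w
        bellman_error = R + gamma * P @ V_hat - V_hat
        Phi = np.column_stack([Phi, bellman_error])
    return Phi
```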
5.2.3. MEBFs

A natural generalization of BEBFs to the model-based view is to add features that capture the residual error in the model. Starting from $\Phi_k$, this technique (MEBF) adds $\Delta_R$ and $\Delta_\Phi$ (or the linearly independent components thereof) to the basis at each iteration to create $\Phi_{k+1}$. In contrast to BEBFs, this method can add a large number of basis functions at each iteration, since $\Delta_\Phi$ has as many columns as $\Phi$. One might imagine that this process could result in an exponential growth in the number of basis functions. In fact, however, the number of new basis functions added at each iteration will not grow, since each new set of basis functions that is added will drive the error in the previous basis functions to 0. We refer to the basis resulting from $k - 1$ iterations of MEBF, starting from $X$, as $\mathrm{MEBF}_k(X)$.

For an initial basis $\Phi$, the MEBF basis expansion is guaranteed to contain the BEBF basis expansion.

Theorem 5.2 $\mathrm{span}(\mathrm{BEBF}_2(\Phi)) \subseteq \mathrm{span}(\mathrm{MEBF}_2(\Phi))$.

Proof Follows immediately from Eq. (10).

Theorem 5.3 For $k \ge 1$: $\mathrm{span}(\mathrm{Krylov}_k(R)) = \mathrm{span}(\mathrm{MEBF}_k(R))$.

Proof The proof is by induction on $k$. For the base case: $\mathrm{Krylov}_1(R) = \mathrm{MEBF}_1(R) = R$. For the inductive step, we assume equality up to $k$ and consider the behavior of MEBF. For $k \ge 1$, $\Delta_R = 0$, since $R$ is the first basis function added. The basis $\Phi_k$ is equivalent to a collection of basis functions of the form $\phi_i = P^{i-1} R$ for $1 \le i \le k$. As a result, $P\phi_i$ is already in the basis for all $1 \le i < k$. Thus, the only nonzero column of $\Delta_\Phi$ will correspond to feature $\phi_k$ and will be $P^k R - \Pi P P^{k-1} R$. Since $\Pi P P^{k-1} R$ is necessarily in $\mathrm{span}(\Phi_k)$, the only new contribution to the basis made by MEBF will be from $P^k R$, which is precisely what is added by $\mathrm{Krylov}_{k+1}(R)$.

These results show that, starting from $R$, all three methods will produce the same basis. An advantage of BEBF is that it produces orthogonal basis vectors. An advantage of MEBF is that it can add multiple new basis vectors at each iteration if it is initialized with a set of basis functions.

5.3. Invariant Subspaces of P

The form of the Bellman error in Eq. (10) suggests that features for which $\Delta_\Phi = 0$ are particularly interesting. If a dictionary of such features were readily available, then the feature-selection problem would reduce to the problem of predicting the immediate reward using this dictionary. The condition $\Delta_\Phi = 0$ means that, collectively, the features are a basis for a perfect linear predictor of their own expected next values. More formally, features $\Phi$ are subspace invariant with respect to $P$ if $P\Phi$ is in $\mathrm{span}(\Phi)$, which means that there exists a $\Lambda$ such that $P\Phi = \Phi\Lambda$.

At first, it may seem like subspace invariance is an extraordinary requirement that could hold only for a complete basis for $P$. It turns out, however, that there are many ways to describe invariant subspaces of $P$. Any set of eigenvectors of $P$ forms an invariant subspace: for eigenvectors $X_1, \ldots, X_k$ with eigenvalues $\lambda_1, \ldots, \lambda_k$, $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_k)$. The set of generalized eigenvectors corresponding to a particular eigenvalue of $P$ is subspace invariant with respect to $P$: for an eigenvalue $\lambda$ with multiplicity $i$, there will be $i$ generalized eigenvectors $X_1, \ldots, X_i$ satisfying $(P - \lambda I) X_j = X_{j-1}$ for $1 \le j \le i$ (with $X_0 = 0$), or equivalently $(P - \lambda I)^j X_j = 0$ for $1 \le j \le i$. More generally, if $\Phi_1$ and $\Phi_2$ are subspace invariant with respect to $P$, then so is their union. Finally, the Schur decomposition of a matrix $P$ provides a set of nested invariant subspaces of $P$.

In fairness, we point out that these methods all require knowledge of $P$ and superlinear computation time in the dimension of $P$. We defer discussion of the practicality of implementing these methods to Section 6 and Section 7.

Theorem 5.4 For any MRP M and subspace invariant feature set $\Phi$, $\Delta_\Phi = 0$.

Proof First, we observe that $P_\Phi$ has a particularly simple form as a consequence of subspace invariance:

    P_\Phi = (\Phi^T \Phi)^{-1} \Phi^T P \Phi = (\Phi^T \Phi)^{-1} \Phi^T \Phi \Lambda = \Lambda.

Substituting into the definition of $\Delta_\Phi$:

    \Delta_\Phi = P\Phi - \Phi P_\Phi = P\Phi - \Phi\Lambda = \Phi\Lambda - \Phi\Lambda = 0.

Subspace invariant features have additional intriguing properties. The resulting value function always exists and can be interpreted as the result of using the true transition model with the approximate reward function $\hat{R}$.

Theorem 5.5 For any MRP M and subspace invariant feature set $\Phi$, $w_\Phi$ always exists and

    \Phi w_\Phi = (I - \gamma P)^{-1} \hat{R}.

Proof Starting with the form of $w_\Phi$ from Eq. (7) and the fact that $\Delta_\Phi = 0$:

    \Phi w_\Phi = \hat{R} + \gamma \Phi P_\Phi w_\Phi
    \Phi w_\Phi = \hat{R} + \gamma P \Phi w_\Phi
    \Phi w_\Phi - \gamma P \Phi w_\Phi = \hat{R}
    \Phi w_\Phi = (I - \gamma P)^{-1} \hat{R}.

To confirm that such a $w_\Phi$ actually exists, we note that $\hat{R} \in \mathrm{span}(\Phi)$ by construction, and that $(I - \gamma P)^{-1}$ must exist for the actual $P$ and $0 \le \gamma < 1$, allowing us to rewrite:

    \Phi w_\Phi = \sum_{i=0}^{\infty} \gamma^i P^i \hat{R},

which remains in $\mathrm{span}(\Phi)$ because of $\Phi$'s subspace invariance with respect to $P$.

Our analysis has some similarities with that of Petrik (2007). Petrik considered the eigenvalue decomposition of $P$ as a basis and considered the error in the projection of $V$ into this basis. Petrik also suggested the use of the Jordan form, which would provide generalized eigenvectors for matrices that are not diagonalizable. Our analysis focuses on the Bellman error of the linear fixed-point solution. Insights from the model-based view of linear approximation architectures allow us to decompose the error into distinct components corresponding to the reward and transition models, making the role of invariant subspaces particularly salient.
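To make Section 5.3 concrete, here is a small sketch (ours; the aggregable MRP is made up) of a subspace-invariant feature set: indicator features over blocks of a state-aggregable chain, for which Theorem 5.4 predicts $\Delta_\Phi = 0$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Build an MRP whose 12 states fall into 3 blocks and whose transition probabilities
# depend only on the blocks of the current and next state (an aggregable chain).
blocks = np.repeat([0, 1, 2], 4)
P_block = rng.random((3, 3)); P_block /= P_block.sum(axis=1, keepdims=True)
n = len(blocks)
P = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        # Spread the block-to-block probability uniformly over the 4 states of block j.
        P[i, j] = P_block[blocks[i], blocks[j]] / 4.0

# Indicator features of the blocks: P Phi = Phi Lambda for some Lambda,
# so Phi is subspace invariant with respect to P.
Phi = np.column_stack([(blocks == b).astype(float) for b in range(3)])

# Per-feature error Delta_Phi = P Phi - Phi P_Phi should vanish (Theorem 5.4).
P_Phi, *_ = np.linalg.lstsq(Phi, P @ Phi, rcond=None)
Delta_Phi = P @ Phi - Phi @ P_Phi
assert np.allclose(Delta_Phi, 0.0)
```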
6. Experimental Results

We present policy-evaluation results on three different problems. Our objective is to demonstrate how our theoretical results can inform the feature-selection process and explain the behavior of known feature-selection algorithms. We consider four algorithms:

PVF: This is the proto-value function (PVF) framework described by Mahadevan and Maggioni (2007). PVFs use eigenvectors of the Laplacian derived from an empirically constructed adjacency matrix (from random-walk trajectories), enumerated in increasing order of eigenvalue. We reproduced their method as closely as possible, including adding links to the adjacency matrix for all policies, not just the policy under evaluation. Curiously, removing the off-policy links seemed to produce worse performance. We avoided using samples to eliminate the confounding (for our purposes) issue of variance between experiments. We used the combinatorial Laplacian for the 50-state and blackjack problems, but used the normalized Laplacian in the two-room problem to match Mahadevan and Maggioni (2007).

PVF-MP: This algorithm selects basis functions from the set of PVFs, but selects them incrementally based upon the Bellman error. Specifically, basis function $k + 1$ is the PVF that has the highest dot product with the Bellman error resulting from the previous $k$ basis functions. It can be interpreted as a form of matching pursuit (Mallat & Zhang, 1993) on the Bellman error with a dictionary of PVFs.

Eig-MP: This algorithm is similar to PVF-MP, but selects from a dictionary of the eigenvectors of $P$. Both Eig-MP and PVF-MP are similar in spirit to Petrik's WL algorithm.

BEBF: This is the BEBF algorithm starting from $R$, as described in Section 5.2.

Our experiments performed unweighted $L_2$ projection and report unweighted $L_2$ error. We also considered $L_\infty$ error and $L_2$ projections weighted by stationary distributions, but the results were not qualitatively different. We report the Bellman error, the reward error, and the feature error, which is the contribution of the per-feature errors to the Bellman error: $\gamma \Delta_\Phi w_\Phi$. These metrics are presented as a function of the number of basis functions.
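The greedy selection rule used by PVF-MP and Eig-MP can be sketched as follows (our illustration, not the authors' code; `dictionary` is any $n \times m$ matrix of candidate basis vectors, such as Laplacian eigenvectors or eigenvectors of $P$, and scoring by the magnitude of the dot product is our reading of "highest dot product with the Bellman error"):

```python
import numpy as np

def matching_pursuit_basis(dictionary, P, R, gamma, k):
    """Greedily pick k dictionary columns, each maximizing |<candidate, Bellman error>|."""
    n = len(R)
    Phi = np.empty((n, 0))
    chosen = []
    for _ in range(k):
        if Phi.shape[1] == 0:
            bellman_error = R                      # Bellman error of the empty basis (V_hat = 0)
        else:
            w = np.linalg.solve(Phi.T @ Phi - gamma * Phi.T @ P @ Phi, Phi.T @ R)
            V_hat = Phi @ w
            bellman_error = R + gamma * P @ V_hat - V_hat
        scores = np.abs(dictionary.T @ bellman_error)
        scores[chosen] = -np.inf                   # do not reselect a column
        best = int(np.argmax(scores))
        chosen.append(best)
        Phi = np.column_stack([Phi, dictionary[:, best]])
    return Phi, chosen
```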
6.1. 50-state Chain

We applied all four algorithms to the 50-state chain problem from Lagoudakis and Parr (2003), with the results shown in Figure 1(a-c). As demanded by theory, Eig-MP has zero feature error, which means that the entirety of the Bellman error is expressed in $\Delta_R$. BEBFs represent the other extreme, since $\Delta_R = 0$ after the first basis function is added and the entirety of the Bellman error is expressed through $\Delta_\Phi$. For this problem, PVFs appear to be approximately subspace invariant, resulting in low $\Delta_\Phi$. However, both Eig-MP and the PVF methods do poorly because the reward is not easily expressed as a linear combination of a small number of PVFs. PVF-MP does better than plain PVFs because it actively tries to reduce the error, while plain PVFs choose basis functions in an order that ignores the reward.

6.2. Two-room Problem

We tried all four algorithms on an optimal policy for the two-room navigation problem from Mahadevan and Maggioni (2007). The transition matrix for this problem is not diagonalizable, and typical methods for extracting generalized eigenvectors proved unreliable, so we do not show results for the Eig-MP method. Figure 1(d-f) shows the breakdown of error for the remaining algorithms. In this case, the Laplacian approach produces features that behave less like an invariant subspace, resulting in high $\Delta_R$ and $\Delta_\Phi$. However, there is some cancellation between them.

6.3. Blackjack

We tested a version of the bottomless-deck blackjack problem from Sutton and Barto (1998), evaluating the policy they propose. For the model described in the book, all methods except BEBF performed extremely poorly. To make the problem more amenable to eigenvector-based methods, we implemented an ergodic version that resets to an initial distribution over hands with a value of 12 or larger, and used a discount of 0.999. The breakdown of error for the different algorithms is shown in Figure 1(g-i), where we again omit Eig-MP. As expected, BEBFs exhibit $\Delta_R = 0$ and drive the Bellman error down fairly rapidly. PVFs exhibit some interesting behaviors: when the PVFs are enumerated in order of increasing eigenvalue, they form an invariant subspace. As a result, the feature error for PVFs hugs the abscissa in Figure 1(i). However, this ordering completely fails to match $\Delta_R$ until the very last eigenvectors are added, resulting in very poor performance overall. In contrast, PVF-MP adds eigenvectors in an order that does not yield subspace-invariant feature sets, but that does match $\Delta_R$ earlier, resulting in a more consistent reduction of error.

7. Discussion and Future Work

A significant finding in our work is the close relationship between value-function approximation and model-based learning. Our experimental results illustrate the relationship between the power of the features to represent an approximate model and the Bellman error. While features that represent feature transitions accurately have highly desirable properties, both components of the model, the reward function and the transition function, should be respected by the features. Both a strength and a weakness of the BEBF/MEBF/Krylov methods is their connection to specific policies and reward structures. Our results are consonant with those of Petrik (2007), which showed good performance for the Krylov basis and some surprisingly weak performance for eigenvector-based methods despite their appealing properties.

To focus on the expressive power of the features, our results in this paper do not directly consider sampled data, the regime in which linear fixed-point methods are most often employed. Some initial results on the effects of noise in feature generation for BEBF/MEBF/Krylov methods can be found in Parr et al. (2007), but further analysis would still be helpful. For eigenvector-based methods, there are questions about the cost of estimating eigenvectors of $P$, or of an approximation to $P$ via the Laplacian. Computing eigenvectors can be computationally intensive and, for general $P$, prone to numerical instabilities. An important direction for future work is a deeper understanding of the interaction between feature selection and policy-improvement algorithms such as LSPI.

8. Conclusion

This paper demonstrated a fundamental equivalence between linear value-function approximation and linear model approximation for RL. This equivalence led to a novel view of the Bellman error, which in turn gave insight into the problem of feature selection. These insights were used to explain the behavior of existing feature-selection algorithms on some sample problems. While this research has not yet led to a novel algorithmic approach, we believe that it helps address fundamental questions of representation and feature selection encountered by anyone wishing to solve real RL problems.

Acknowledgment

We thank Carlo Tomasi and Xiaobai Sun for helpful discussions, Sridhar Mahadevan and Jeff Johns for pointing out some discrepancies between our interpretation of the two-room problem in an earlier version of this paper and the version in Mahadevan and Maggioni (2007), and Marek Petrik for Theorem 5.1. This work was supported in part by DARPA CSSG HR0011-06-1-0027 and by NSF IIS-0713435. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the sponsors.
[Figure 1: nine panels of error curves (plot data omitted), showing Bellman error, reward error, and feature error as a function of the number of basis functions for the PVF, PVF-MP, Eig-MP, and BEBF methods.]

Figure 1. Decomposition of the Bellman error for three different problems. First row: 50-state chain; second row: two-room problem; third row: ergodic blackjack. First column: Bellman error; second column: reward error; third column: feature error.

References

Boyan, J. A. (1999). Least-squares temporal difference learning. ICML-99.

Bradtke, S., & Barto, A. (1996). Linear least-squares algorithms for temporal difference learning. Machine Learning, 22.

Dean, T., & Givan, R. (1997). Model minimization in Markov decision processes. AAAI-97.

Keller, P., Mannor, S., & Precup, D. (2006). Automatic basis function construction for approximate dynamic programming and reinforcement learning. ICML-06.

Koller, D., & Parr, R. (1999). Computing factored value functions for policies in structured MDPs. IJCAI-99.

Lagoudakis, M., & Parr, R. (2003). Least-squares policy iteration. JMLR, 4.

Mahadevan, S., & Maggioni, M. (2007). Proto-value functions: A Laplacian framework for learning representation and control in Markov decision processes. JMLR, 8.

Mallat, S. G., & Zhang, Z. (1993). Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41.

Parr, R., Painter-Wakefield, C., Li, L., & Littman, M. (2007). Analyzing feature generation for value-function approximation. ICML-07.

Petrik, M. (2007). An analysis of Laplacian methods for value function approximation in MDPs. IJCAI-07.

Sanner, S., & Boutilier, C. (2005). Approximate linear programming for first-order MDPs. UAI-05.

Sutton, R. S. (1988). Learning to predict by the methods of temporal differences. Machine Learning, 3.

Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction. The MIT Press.

Wu, J.-H., & Givan, R. (2004). Feature-discovering approximate value iteration methods (Technical Report TR-ECE-04-06). Purdue University.

Yu, H., & Bertsekas, D. (2006). Convergence results for some temporal difference methods based on least squares (Technical Report LIDS-2697). MIT.