Temporal Difference Updating without a Learning Rate

Marcus Hutter
RSISE@ANU and SML@NICTA
Canberra, ACT, 0200, Australia
marcus@hutter1.net   www.hutter1.net

Shane Legg
IDSIA, Galleria 2, Manno-Lugano CH-6928, Switzerland
shane@vetta.org   www.vetta.org/shane

Abstract

We derive an equation for temporal difference learning from statistical principles. Specifically, we start with the variational principle and then bootstrap to produce an updating rule for discounted state value estimates. The resulting equation is similar to the standard equation for temporal difference learning with eligibility traces, so-called TD(λ), however it lacks the parameter α that specifies the learning rate. In place of this free parameter there is now an equation for the learning rate that is specific to each state transition. We experimentally test this new learning rule against TD(λ) and find that it offers superior performance in various settings. Finally, we make some preliminary investigations into how to extend our new temporal difference algorithm to reinforcement learning. To do this we combine our update equation with both Watkins' Q(λ) and Sarsa(λ) and find that it again offers superior performance without a learning rate parameter.

1 Introduction

In the field of reinforcement learning, perhaps the most popular way to estimate the future discounted reward of states is the method of temporal difference learning. It is unclear who exactly introduced this first, however the first explicit version of temporal difference as a learning rule appears to be Witten [9]. The idea is as follows: the expected future discounted reward of a state s is,

V_s := \mathbf{E}\left( r_k + \gamma r_{k+1} + \gamma^2 r_{k+2} + \cdots \mid s_k = s \right),

where the rewards r_k, r_{k+1}, \ldots are geometrically discounted into the future by γ < 1. From this definition it follows that,

V_s = \mathbf{E}\left( r_k + \gamma V_{s_{k+1}} \mid s_k = s \right).   (1)

Our task, at time t, is to compute an estimate V_s^t of V_s for each state s. The only information we have to base this estimate on is the current history of state transitions, s_1, s_2, \ldots, s_t, and the current history of observed rewards, r_1, r_2, \ldots, r_t. Equation (1) suggests that at time t+1 the value of r_t + \gamma V_{s_{t+1}}^t provides us with information on what V_{s_t}^t should be: if it is higher than V_{s_t}^t then perhaps this estimate should be increased, and vice versa. This intuition gives us the following estimation heuristic for state s_t,

V_{s_t}^{t+1} := V_{s_t}^t + \alpha \left( r_t + \gamma V_{s_{t+1}}^t - V_{s_t}^t \right),

where α is a parameter that controls the rate of learning. This type of temporal difference learning is known as TD(0).

One shortcoming of this method is that at each time step only the value of the last state s_t is updated. States visited before the last state are also affected by changes in the last state's value and thus could be updated too. This is what happens with so-called temporal difference learning with eligibility traces, where a history, or trace, is kept of which states have been recently visited. Under this method, when we update the value of a state we also go back through the trace updating the earlier states as well. Formally, for any state s its eligibility trace is computed by,

E_s^t := \begin{cases} \gamma\lambda E_s^{t-1} & \text{if } s \neq s_t, \\ \gamma\lambda E_s^{t-1} + 1 & \text{if } s = s_t, \end{cases}

where λ is used to control the rate at which the eligibility trace is discounted. The temporal difference update is then, for all states s,

V_s^{t+1} := V_s^t + \alpha E_s^t \left( r_t + \gamma V_{s_{t+1}}^t - V_{s_t}^t \right).   (2)

This more powerful version of temporal difference learning is known as TD(λ) [7].
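To make Equation (2) concrete, the following is a minimal tabular TD(λ) step in Python with accumulating eligibility traces; the function name, array layout and parameter values are our own illustrative choices, not anything specified in the paper.

```python
import numpy as np

def td_lambda_step(V, E, s, s_next, r, alpha=0.1, gamma=0.99, lam=0.9):
    """One TD(lambda) update, Equation (2), for the observed transition
    s -> s_next with reward r. V and E are arrays indexed by state."""
    E *= gamma * lam                      # decay all eligibility traces
    E[s] += 1.0                           # accumulate the trace of the visited state
    delta = r + gamma * V[s_next] - V[s]  # temporal difference error
    V += alpha * E * delta                # update every state in proportion to its trace
    return V, E
```

Here V and E would both be initialised to zero vectors whose length equals the number of states.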
The main idea of this paper is to derive a temporal difference rule from statistical principles and compare it to the standard heuristic described above. Superficially, our work has some similarities to LSTD(λ) ([2] and references therein). However, LSTD is concerned with finding a least-squares linear function approximation, it has not yet been developed for general λ and γ, and its update time is quadratic in the number of features/states. On the other hand, our algorithm "exactly" coincides with TD/Q/Sarsa(λ) for finite state spaces, but with a novel learning rate derived from statistical principles. We therefore focus our comparison on TD/Q/Sarsa. For a recent survey of methods to set the learning rate see [1].

In Section 2 we derive a least squares estimate for the value function. By expressing the estimate as an incremental update rule we obtain a new form of TD(λ), which we call HL(λ). In Section 3 we compare HL(λ) to TD(λ) on a simple Markov chain. We then test it on a random Markov chain in Section 4 and a non-stationary environment in Section 5. In Section 6 we derive two new methods for policy learning based on HL(λ), and compare them to Sarsa(λ) and Watkins' Q(λ) on a simple reinforcement learning problem. Section 7 ends the paper with a summary and some thoughts on future research directions.

2 Derivation

The empirical future discounted reward of a state s_k is the sum of actual rewards following from state s_k in time steps k, k+1, \ldots, where the rewards are discounted as they go into the future. Formally, the empirical value of state s_k at time k, for k = 1, \ldots, t, is

v_k := \sum_{u=k}^{\infty} \gamma^{u-k} r_u,   (3)

where the future rewards r_u are geometrically discounted by γ < 1. In practice the exact value of v_k is always unknown to us as it depends not only on rewards that have already been observed, but also on unknown future rewards. Note that if s_m = s_n for m \neq n, that is, we have visited the same state twice at different times m and n, this does not imply that v_n = v_m, as the observed rewards following each visit may be different.

Our goal is that for each state s the estimate V_s^t should be as close as possible to the true expected future discounted reward V_s. Thus, for each state s we would like V_s^t to be close to v_k for all k such that s = s_k. Furthermore, in non-stationary environments we would like to discount old evidence by some parameter λ ∈ (0, 1]. Formally, we want to minimise the loss function,

L := \frac{1}{2} \sum_{k=1}^{t} \lambda^{t-k} \left( v_k - V_{s_k}^t \right)^2.   (4)

For stationary environments we may simply set λ = 1 a priori. As we wish to minimise this loss, we take the partial derivative with respect to the value estimate of each state and set it to zero,

\frac{\partial L}{\partial V_s^t} = -\sum_{k=1}^{t} \lambda^{t-k} \left( v_k - V_{s_k}^t \right) \delta_{s_k s} = V_s^t \sum_{k=1}^{t} \lambda^{t-k} \delta_{s_k s} - \sum_{k=1}^{t} \lambda^{t-k} \delta_{s_k s} v_k = 0,

where we could change V_{s_k}^t into V_s^t due to the presence of the Kronecker delta δ_{s_k s}, defined \delta_{xy} := 1 if x = y, and 0 otherwise. By defining a discounted state visit counter N_s^t := \sum_{k=1}^{t} \lambda^{t-k} \delta_{s_k s} we get

V_s^t N_s^t = \sum_{k=1}^{t} \lambda^{t-k} \delta_{s_k s} v_k.   (5)

Since v_k depends on future rewards, Equation (5) cannot be used in its current form. Next we note that v_k has a self-consistency property with respect to the rewards. Specifically, the tail of the future discounted reward sum for each state depends on the empirical value at time t in the following way,

v_k = \sum_{u=k}^{t-1} \gamma^{u-k} r_u + \gamma^{t-k} v_t.
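As a quick sanity check on Equations (3)-(5), the sketch below builds a toy random trajectory, computes (truncated) empirical returns v_k, and forms the λ-weighted average of the v_k at each state, which is exactly the minimiser given by Equation (5). The toy trajectory and the truncation of v_k at the end of the data are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_states, gamma, lam = 200, 3, 0.9, 0.95
states = rng.integers(0, n_states, size=T)   # an arbitrary toy trajectory
rewards = rng.normal(size=T)

# empirical returns v_k of Equation (3), truncated at the end of the data
v = np.zeros(T)
acc = 0.0
for k in reversed(range(T)):
    acc = rewards[k] + gamma * acc
    v[k] = acc

# minimiser of the loss (4): a lambda-discounted average of v_k over visits to s,
# i.e. V_s N_s = sum_k lambda^(t-k) delta_{s_k s} v_k, as in Equation (5)
V = np.zeros(n_states)
for s in range(n_states):
    idx = np.where(states == s)[0]
    w = lam ** (T - 1 - idx)
    V[s] = np.dot(w, v[idx]) / w.sum()
print(V)
```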
Substituting this into Equation (5) and exchanging the order of the double sum,

V_s^t N_s^t = \sum_{k=1}^{t} \lambda^{t-k} \delta_{s_k s} \sum_{u=k}^{t-1} \gamma^{u-k} r_u + \sum_{k=1}^{t} (\lambda\gamma)^{t-k} \delta_{s_k s} v_t
            = \sum_{u=1}^{t-1} \lambda^{t-u} \left( \sum_{k=1}^{u} (\lambda\gamma)^{u-k} \delta_{s_k s} \right) r_u + E_s^t v_t
            = R_s^t + E_s^t v_t,

where E_s^t := \sum_{k=1}^{t} (\lambda\gamma)^{t-k} \delta_{s_k s} is the eligibility trace of state s, and R_s^t := \sum_{u=1}^{t-1} \lambda^{t-u} E_s^u r_u is the discounted reward with eligibility.

E_s^t and R_s^t depend only on quantities known at time t. The only unknown quantity is v_t, which we have to replace with our current estimate of this value at time t, which is V_{s_t}^t. In other words, we bootstrap our estimates. This gives us,

V_s^t N_s^t = R_s^t + E_s^t V_{s_t}^t.   (6)

For state s = s_t, this simplifies to V_{s_t}^t = R_{s_t}^t / (N_{s_t}^t - E_{s_t}^t). Substituting this back into Equation (6) we obtain,

V_s^t N_s^t = R_s^t + E_s^t \frac{R_{s_t}^t}{N_{s_t}^t - E_{s_t}^t}.   (7)

This gives us an explicit expression for our V estimates. However, from an algorithmic perspective an incremental update rule is more convenient. To derive this we make use of the relations,

N_s^{t+1} = \lambda N_s^t + \delta_{s_{t+1} s},   E_s^{t+1} = \lambda\gamma E_s^t + \delta_{s_{t+1} s},   R_s^{t+1} = \lambda R_s^t + \lambda E_s^t r_t,

with N_s^0 = E_s^0 = R_s^0 = 0. Inserting these into Equation (7) with t replaced by t+1,

V_s^{t+1} N_s^{t+1} = R_s^{t+1} + E_s^{t+1} \frac{R_{s_{t+1}}^{t+1}}{N_{s_{t+1}}^{t+1} - E_{s_{t+1}}^{t+1}} = \lambda\left( R_s^t + E_s^t r_t \right) + E_s^{t+1} \frac{R_{s_{t+1}}^t + E_{s_{t+1}}^t r_t}{N_{s_{t+1}}^t - \gamma E_{s_{t+1}}^t}.

By solving Equation (6) for R_s^t and substituting back in,

V_s^{t+1} N_s^{t+1} = \lambda\left( V_s^t N_s^t - E_s^t V_{s_t}^t + E_s^t r_t \right) + E_s^{t+1} \frac{N_{s_{t+1}}^t V_{s_{t+1}}^t - E_{s_{t+1}}^t V_{s_t}^t + E_{s_{t+1}}^t r_t}{N_{s_{t+1}}^t - \gamma E_{s_{t+1}}^t}.

Dividing through by N_s^{t+1} (= \lambda N_s^t + \delta_{s_{t+1} s}),

V_s^{t+1} = V_s^t + \frac{-\delta_{s_{t+1} s} V_s^t - \lambda E_s^t V_{s_t}^t + \lambda E_s^t r_t}{\lambda N_s^t + \delta_{s_{t+1} s}} + \frac{E_s^{t+1} \left( N_{s_{t+1}}^t V_{s_{t+1}}^t - E_{s_{t+1}}^t V_{s_t}^t + E_{s_{t+1}}^t r_t \right)}{\left( N_{s_{t+1}}^t - \gamma E_{s_{t+1}}^t \right)\left( \lambda N_s^t + \delta_{s_{t+1} s} \right)}.

Making the first denominator the same as the second and expanding the numerator, then cancelling equal terms (keeping in mind that in every term with a Kronecker factor \delta_{xy} we may assume that x = y, as the term is always zero otherwise) and factoring out E_s^t, the numerator collapses to E_s^t \left( \lambda N_{s_{t+1}}^t + \delta_{s_{t+1} s} \right)\left( r_t + \gamma V_{s_{t+1}}^t - V_{s_t}^t \right). Finally, by factoring out \lambda N_{s_{t+1}}^t + \delta_{s_{t+1} s}, and using the Kronecker delta once more to cancel the remaining λ and δ factors against those in the denominator \lambda N_s^t + \delta_{s_{t+1} s}, we obtain our update rule,

V_s^{t+1} = V_s^t + E_s^t \, \beta_t(s, s_{t+1}) \left( r_t + \gamma V_{s_{t+1}}^t - V_{s_t}^t \right),   (8)

where the learning rate is given by,

\beta_t(s, s_{t+1}) := \frac{N_{s_{t+1}}^t}{N_{s_{t+1}}^t - \gamma E_{s_{t+1}}^t} \cdot \frac{1}{N_s^t}.   (9)

Examining Equation (8), we find the usual update equation for temporal difference learning with eligibility traces (see Equation (2)), however the learning rate α has now been replaced by β_t(s, s_{t+1}). This learning rate was derived from statistical principles by minimising the squared loss between the estimated and true state value. In the derivation we exploited the fact that the latter must be self-consistent and then bootstrapped to get Equation (6).
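For concreteness, here is a minimal tabular sketch of the HL(λ) value update, Equations (8) and (9), in Python. Mirroring the HLS(λ) listing given later in the paper, we assume the visit counters N are initialised to 1 (which keeps the denominators positive for γ < 1); the function name and array layout are illustrative assumptions.

```python
import numpy as np

def hl_lambda_step(V, N, E, s, s_next, r, gamma=0.99, lam=1.0):
    """One HL(lambda) update for the transition s -> s_next with reward r.
    V, N, E are arrays over states: value estimates, discounted visit
    counters and eligibility traces respectively."""
    E[s] += 1.0                           # count the visit to the current state
    N[s] += 1.0
    delta = r + gamma * V[s_next] - V[s]
    # transition-specific learning rates, Equation (9), for every state at once
    beta = N[s_next] / ((N[s_next] - gamma * E[s_next]) * N)
    V += beta * E * delta                 # Equation (8) applied to all states
    E *= gamma * lam                      # decay traces and counters for the next step
    N *= lam
    return V, N, E

# illustrative initialisation for a chain with 51 states
n = 51
V, N, E = np.zeros(n), np.ones(n), np.zeros(n)
```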
This gives us an equation for the learning rate for each state transition at time t, as opposed to standard temporal difference learning where the learning rate is either a fixed free parameter for all transitions, or is decreased over time by some monotonically decreasing function. In either case, the learning rate is not automatic and must be experimentally tuned for good performance. The above derivation appears to theoretically solve this problem.

The first term in β_t seems to provide some type of normalisation to the learning rate, though the intuition behind this is not clear to us. The meaning of the second term, however, can be understood as follows: N_s^t measures how often we have visited state s in the recent past. Therefore, if N_s^t ≪ N_{s_{t+1}}^t then state s has a value estimate based on relatively few samples, while state s_{t+1} has a value estimate based on relatively many samples. In such a situation, the second term in β_t boosts the learning rate so that V_s^{t+1} moves more aggressively towards the presumably more accurate r_t + \gamma V_{s_{t+1}}^t. In the opposite situation, when s_{t+1} is a less visited state, the reverse occurs and the learning rate is reduced in order to maintain the existing value of V_s.

3 A simple Markov process

For our first test we consider a simple Markov process with 51 states. In each step the state number is either incremented or decremented by one with equal probability, unless the system is in state 0 or 50, in which case it always transitions to state 25 in the following step. When the state transitions from 0 to 25 a reward of 1.0 is generated, and for a transition from 50 to 25 a reward of -1.0 is generated. All other transitions have a reward of 0. We set the discount value γ = 0.99 and then computed the true discounted value of each state by running a brute force Monte Carlo simulation.

We ran our algorithm 10 times on the above Markov chain and computed the root mean squared error in the value estimate across the states at each time step, averaged across the runs.

Figure 1: 51 state Markov process averaged over 10 runs (RMSE vs. time; curves: HL(1.0), TD(0.9) with a = 0.1, and TD(0.9) with a = 0.2). The parameter a is the learning rate α.

Figure 2: 51 state Markov process averaged over 300 runs (RMSE vs. time; curves: HL(1.0), TD(0.9) with a = 8.0/sqrt(t), and TD(0.9) with a = 2.0/cbrt(t)).

The optimal value of λ for HL(λ) was 1.0, which was to be expected given that the environment is stationary and thus discounting old experience is not helpful. For TD(λ) we tried various different learning rates and values of λ. We could find no settings where TD(λ) was competitive with HL(λ). If the learning rate was set too high the system would learn as fast as HL(λ) briefly before becoming stuck. With a lower learning rate the final performance was improved, however the initial performance was then much worse than HL(λ). The results of these tests appear in Figure 1. Similar tests were performed with larger and smaller Markov chains, and with different values of γ. HL(λ) was consistently superior to TD(λ) across these tests.
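The 51 state chain used in this section is simple enough to sketch directly; the code below pairs with the hl_lambda_step function sketched in Section 2, and the specific coding choices (state numbering, run length, seed) are our own rather than the paper's.

```python
import numpy as np
# assumes hl_lambda_step from the earlier sketch is in scope

def chain_step(s, rng, n=51):
    """One transition of the simple Markov process: a random walk on 0..n-1,
    with state 0 jumping to the middle for reward +1.0 and state n-1 jumping
    to the middle for reward -1.0."""
    mid = n // 2
    if s == 0:
        return mid, 1.0
    if s == n - 1:
        return mid, -1.0
    return s + rng.choice((-1, 1)), 0.0

rng = np.random.default_rng(1)
V, N, E = np.zeros(51), np.ones(51), np.zeros(51)
s = 25
for t in range(20000):
    s_next, r = chain_step(s, rng)
    V, N, E = hl_lambda_step(V, N, E, s, s_next, r, gamma=0.99, lam=1.0)
    s = s_next
```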
One wonders whether this may be due to the fact that the implicit learning rate that HL(λ) uses is not fixed. To test this we explored the performance of a number of different learning rate functions on the 51 state Markov chain described above. We found that functions of the form c/t always performed poorly, however good performance was possible by setting c correctly for functions of the form c/\sqrt{t} and c/\sqrt[3]{t}. As the results were much closer, we averaged over 300 runs. These results appear in Figure 2. With a variable learning rate TD(λ) performs much better, however we were still unable to find an equation that reduced the learning rate in such a way that TD(λ) would outperform HL(λ). This is evidence that HL(λ) is adapting the learning rate effectively without the need for manual equation tuning.

4 Random Markov process

To test on a Markov process with a more complex transition structure, we created a random 50 state Markov process. We did this by creating a 50 by 50 transition matrix where each element was set to 0 with probability 0.9, and to a uniformly random number in the interval [0, 1] otherwise. We then scaled each row to sum to 1, and to transition between states we interpreted the ith row as a probability distribution over which state follows state i. To compute the reward associated with each transition we created a random matrix in the same way, but without normalising. We set γ = 0.9 and then ran a brute force Monte Carlo simulation to compute the true discounted value of each state.

The parameter λ for HL(λ) was simply set to 1.0 as the environment is stationary. For TD(λ) we experimented with a range of parameter settings and learning rate decrease functions. We found that a fixed learning rate of α = 0.2, and a decreasing rate of 1.5/\sqrt[3]{t}, performed reasonably well, but never as well as HL(λ). The results were generated by averaging over 10 runs, and are shown in Figure 3. Although the structure of this Markov process is quite different to that used in the previous experiment, the results are again similar: HL(λ) performs as well as or better than TD(λ) from the beginning to the end of the run. Furthermore, the stability of the error towards the end of the run is better with HL(λ), and no manual learning rate tuning was required for these performance gains.

Figure 3: Random 50 state Markov process (RMSE vs. time; curves: HL(1.0), TD(0.9) with a = 0.2, and TD(0.9) with a = 1.5/cbrt(t)). The parameter a is the learning rate α.

Figure 4: 21 state non-stationary Markov process (RMSE vs. time; curves: HL(0.9995), TD(0.8) with a = 0.05, and TD(0.9) with a = 0.05).

5 Non-stationary Markov process

The parameter λ in HL(λ), introduced in Equation (4), reduces the importance of old observations when computing the state value estimates. When the environment is stationary this is not useful and so we can set λ = 1.0; however, in a non-stationary environment we need to reduce this value so that the state values adapt properly to changes in the environment. The more rapidly the environment is changing, the lower we need to make λ in order to more rapidly forget old observations.

To test HL(λ) in such a setting, we used the Markov chain from Section 3, but reduced its size to 21 states to speed up convergence. We used this Markov chain for the first 5,000 time steps. At that point, we changed the reward for transitioning from the last state to the middle state from -1.0 to 0.5. At time 10,000 we then switched back to the original Markov chain, and so on, alternating between the two models of the environment every 5,000 steps. At each switch, we also changed the target state values that the algorithm was trying to estimate to match the current configuration of the environment. For this experiment we set γ = 0.9.
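A minimal sketch of the alternating reward schedule just described, for the 21 state version of the chain (states 0-20 with middle state 10); the exact indexing is our assumption.

```python
def nonstationary_reward(s, s_next, t, n=21):
    """Reward for the 21 state chain of this section: +1.0 for the jump from
    state 0 to the middle, while the jump from the last state to the middle
    pays either -1.0 or 0.5, alternating every 5,000 time steps."""
    mid = n // 2
    if s == 0 and s_next == mid:
        return 1.0
    if s == n - 1 and s_next == mid:
        return 0.5 if (t // 5000) % 2 == 1 else -1.0
    return 0.0
```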
As expected, the optimal value of λ for HL(λ) fell from 1 down to about 0.9995. This is about what we would expect given that each phase is 5,000 steps long. For TD(λ) the optimal value of λ was around 0.8 and the optimal learning rate was around 0.05. As we would expect, for both algorithms, pushing λ above its optimal value caused poor performance in the periods following each switch in the environment (these bad parameter settings are not shown in the results). On the other hand, setting λ too low produced initially fast adaptation to each environment switch, but poor performance after that until the next environment change. To get accurate statistics we averaged over 200 runs. The results of these tests appear in Figure 4.

For some reason HL(0.9995) learns faster than TD(0.8) in the first half of the first cycle, but only equally fast at the start of each following cycle. We are not sure why this is happening. We could improve the initial speed at which HL(λ) learnt in the last three cycles by reducing λ, however that comes at a performance cost in terms of the lowest mean squared error attained at the end of each cycle. In any case, in this non-stationary situation HL(λ) again performed well.

6 Windy Gridworld

Reinforcement learning algorithms such as Watkins' Q(λ) [8] and Sarsa(λ) [5, 4] are based on temporal difference updates. This suggests that new reinforcement learning algorithms based on HL(λ) should be possible.

For our first experiment we took the standard Sarsa(λ) algorithm and modified it in the obvious way to use an HL temporal difference update. In the presentation of this algorithm we have changed notation slightly to make it more consistent with that typical in reinforcement learning. Specifically, we have dropped the t superscript as this is implicit in the algorithm specification, and have defined Q(s, a) := V_{(s,a)}, E(s, a) := E_{(s,a)} and N(s, a) := N_{(s,a)}. Our new reinforcement learning algorithm, which we call HLS(λ), is given in Algorithm 1. Essentially the only changes to the standard Sarsa(λ) algorithm have been to add code to compute the visit counter N(s, a), add a loop to compute the β values, and replace α with β in the temporal difference update.

Algorithm 1 HLS(λ)
  Initialise Q(s, a) = 0, N(s, a) = 1 and E(s, a) = 0 for all s, a
  Initialise s and a
  repeat
    Take action a, observe r, s'
    Choose a' by using ε-greedy selection on Q(s', ·)
    Δ ← r + γ Q(s', a') − Q(s, a)
    E(s, a) ← E(s, a) + 1
    N(s, a) ← N(s, a) + 1
    for all s, a do
      β((s, a), (s', a')) ← N(s', a') / [ (N(s', a') − γ E(s', a')) N(s, a) ]
    end for
    for all s, a do
      Q(s, a) ← Q(s, a) + β((s, a), (s', a')) E(s, a) Δ
      E(s, a) ← γ λ E(s, a)
      N(s, a) ← λ N(s, a)
    end for
    s ← s'; a ← a'
  until end of run

To test HLS(λ) against standard Sarsa(λ) we used the Windy Gridworld environment described on page 146 of [6]. This world is a grid of 7 by 10 squares that the agent can move through by going either up, down, left or right. If the agent attempts to move off the grid it simply stays where it is. The agent starts in the 4th row of the 1st column and receives a reward of 1 when it finds its way to the 4th row of the 8th column. To make things more difficult, there is a "wind" blowing the agent up 1 row in columns 4, 5, 6, and 9, and a strong wind of 2 in columns 7 and 8. This is illustrated in Figure 5. Unlike in the original version, we have set up this problem to be a continuing discounted task with an automatic transition from the goal state back to the start state.
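The following sketch combines Algorithm 1 with the Windy Gridworld just described. The grid indexing, the wind convention, and the parameter values (taken from the legend of Figure 6) are our assumptions; the paper does not specify these implementation details.

```python
import numpy as np

WIND = [0, 0, 0, 1, 1, 1, 2, 2, 1, 0]          # upward push per column
START, GOAL = (3, 0), (3, 7)                   # 4th row of the 1st / 8th column
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]     # up, down, left, right

def grid_step(state, action):
    """Apply a move plus the wind; reaching the goal pays 1 and resets to START."""
    r, c = state
    dr, dc = MOVES[action]
    r = min(max(r + dr - WIND[c], 0), 6)
    c = min(max(c + dc, 0), 9)
    if (r, c) == GOAL:
        return START, 1.0
    return (r, c), 0.0

def run_hls(steps=50000, gamma=0.99, lam=0.995, eps=0.003, seed=0):
    """HLS(lambda), Algorithm 1, run as a continuing discounted task."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((7, 10, 4)); N = np.ones((7, 10, 4)); E = np.zeros((7, 10, 4))
    s, a = START, rng.integers(4)
    for t in range(steps):
        s2, reward = grid_step(s, a)
        a2 = rng.integers(4) if rng.random() < eps else int(np.argmax(Q[s2]))
        delta = reward + gamma * Q[s2][a2] - Q[s][a]
        E[s][a] += 1.0
        N[s][a] += 1.0
        beta = N[s2][a2] / ((N[s2][a2] - gamma * E[s2][a2]) * N)
        Q += beta * E * delta
        E *= gamma * lam
        N *= lam
        s, a = s2, a2
    return Q
```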
We set γ = 0.99 and in each run computed the empirical future discounted reward at each point in time. As this value oscillated we also ran a moving average through these values with a window length of 50. Each run lasted for 50,000 time steps as this allowed us to see at what level each learning algorithm topped out. These results appear in Figure 6 and were averaged over 500 runs to get accurate statistics.

Despite putting considerable effort into tuning the parameters of Sarsa(λ), we were unable to achieve a final future discounted reward above 5.0. The settings shown on the graph represent the best final value we could achieve. In comparison, HLS(λ) easily beat this result at the end of the run, while being slightly slower than Sarsa(λ) at the start. By setting λ = 0.99 we were able to achieve the same performance as Sarsa(λ) at the start of the run, however the performance at the end of the run was then only slightly better than Sarsa(λ). This combination of superior performance and fewer parameters to tune suggests that the benefits of HL(λ) carry over into the reinforcement learning setting.

Another popular reinforcement learning algorithm is Watkins' Q(λ). Similar to Sarsa(λ) above, we simply inserted the HL(λ) temporal difference update into the usual Q(λ) algorithm in the obvious way. We call this new algorithm HLQ(λ) (not shown). The test environment was exactly the same as we used with Sarsa(λ) above. The results this time were more competitive (these results are not shown). Nevertheless, despite spending a considerable amount of time fine tuning the parameters of Q(λ), we were unable to beat HLQ(λ). As the performance advantage was relatively modest, the main benefit of HLQ(λ) was that it achieved this level of performance without having to tune a learning rate.

Figure 5: [Windy Gridworld] S marks the start state and G the goal state, at which the agent jumps back to S with a reward of 1.

Figure 6: Sarsa(λ) vs. HLS(λ) in the Windy Gridworld (future discounted reward vs. time; curves: HLS(0.995) with e = 0.003, and Sarsa(0.5) with a = 0.4, e = 0.005). Performance averaged over 500 runs. On the graph, e represents the exploration parameter ε, and a the learning rate α.

7 Conclusions

We have derived a new equation for setting the learning rate in temporal difference learning with eligibility traces. The equation replaces the free learning rate parameter α, which is normally experimentally tuned by hand. In every setting tested, be it stationary Markov chains, non-stationary Markov chains or reinforcement learning, our new method produced superior results.

To further our theoretical understanding, the next step would be to try to prove that the method converges to correct estimates. This can be done for TD(λ) under certain assumptions on how the learning rate decreases over time. Hopefully, something similar can be proven for our new method. In terms of experimental results, it would be interesting to try different types of reinforcement learning problems and to more clearly identify where the ability to set the learning rate differently for different state transition pairs helps performance. It would also be good to generalise the result to episodic tasks. Finally, just as we have successfully merged HL(λ) with Sarsa(λ) and Watkins' Q(λ), we would also like to see if the same can be done with Peng's Q(λ) [3], and perhaps with other reinforcement learning algorithms.

Acknowledgements

This research was funded by the Swiss NSF grant 200020-107616.

References
[1] A. P. George and W. B. Powell. Adaptive stepsizes for recursive estimation with applications in approximate dynamic programming. Machine Learning, 65(1):167–198, 2006.
[2] M. G. Lagoudakis and R. Parr. Least-squares policy iteration. Journal of Machine Learning Research, 4:1107–1149, 2003.
[3] J. Peng and R. J. Williams. Incremental multi-step Q-learning. Machine Learning, 22:283–290, 1996.
[4] G. A. Rummery. Problem Solving with Reinforcement Learning. PhD thesis, Cambridge University, 1995.
[5] G. A. Rummery and M. Niranjan. On-line Q-learning using connectionist systems. Technical Report CUED/F-INFENG/TR 166, Engineering Department, Cambridge University, 1994.
[6] R. Sutton and A. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
[7] R. S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3:9–44, 1988.
[8] C. J. C. H. Watkins. Learning from Delayed Rewards. PhD thesis, King's College, Cambridge, 1989.
[9] I. H. Witten. An adaptive optimal controller for discrete-time Markov environments. Information and Control, 34:286–295, 1977.