Learning for Control from Multiple Demonstrations

Adam Coates (acoates@cs.stanford.edu), Pieter Abbeel (pabbeel@cs.stanford.edu), Andrew Y. Ng (ang@cs.stanford.edu)
Stanford University CS Department, 353 Serra Mall, Stanford, CA 94305 USA

Appearing in Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, 2008. Copyright 2008 by the author(s)/owner(s).

Abstract

We consider the problem of learning to follow a desired trajectory when given a small number of demonstrations from a sub-optimal expert. We present an algorithm that (i) extracts the (initially unknown) desired trajectory from the sub-optimal expert's demonstrations and (ii) learns a local model suitable for control along the learned trajectory. We apply our algorithm to the problem of autonomous helicopter flight. In all cases, the autonomous helicopter's performance exceeds that of our expert pilot's demonstrations. Moreover, our results significantly extend the state of the art in autonomous helicopter aerobatics. In particular, our results include the first autonomous tic-tocs, loops and hurricane, vastly superior performance on previously performed aerobatic maneuvers (such as in-place flips and rolls), and a complete airshow, which requires autonomous transitions between these and various other maneuvers.

1. Introduction

Many tasks in robotics can be described as a trajectory that the robot should follow. Unfortunately, specifying the desired trajectory and building an appropriate model for the robot dynamics along that trajectory are often non-trivial tasks. For example, when asked to describe the trajectory that a helicopter should follow to perform an aerobatic flip, one would have to specify a trajectory that (i) corresponds to the aerobatic flip task and (ii) is consistent with the helicopter's dynamics. The latter requires (iii) an accurate helicopter dynamics model for all of the flight regimes encountered in the vicinity of the trajectory. These coupled tasks are non-trivial for systems with complex dynamics, such as helicopters. Failing to adequately address these points leads to a significantly more difficult control problem.

In the apprenticeship learning setting, where an expert is available, rather than relying on a hand-engineered target trajectory, one can instead have the expert demonstrate the desired trajectory. The expert demonstration yields both a desired trajectory for the robot to follow and data with which to build a dynamics model in the vicinity of this trajectory. Unfortunately, perfect demonstrations can be hard (if not impossible) to obtain. However, repeated expert demonstrations are often suboptimal in different ways, suggesting that a large number of suboptimal expert demonstrations could implicitly encode the ideal trajectory the suboptimal expert is trying to demonstrate. In this paper we propose an algorithm that approximately extracts this implicitly encoded optimal demonstration from multiple suboptimal expert demonstrations, and then builds a model of the dynamics in the vicinity of this trajectory suitable for high-performance control. In doing so, the algorithm learns a target trajectory and a model that allow the robot not only to mimic the behavior of the expert but even to perform significantly better.

Properly extracting the underlying ideal trajectory from a set of suboptimal trajectories requires a significantly more sophisticated approach than merely averaging the states observed at each time step.
A simple arithmetic average of the states would result in a trajectory that does not even obey the constraints of the dynamics model. Moreover, in practice, each of the demonstrations will occur at a different rate, so attempting to combine states from the same time step in each trajectory will not work properly.

We propose a generative model that describes the expert demonstrations as noisy observations of the unobserved, intended target trajectory, where each demonstration is possibly warped along the time axis. We present an EM algorithm--which uses an (extended) Kalman smoother and an efficient dynamic programming algorithm to perform the E-step--to both infer the unobserved, intended target trajectory and a time-alignment of all the demonstrations. The time-aligned demonstrations provide the appropriate data to learn good local models in the vicinity of the trajectory; such trajectory-specific local models tend to greatly improve control performance.

Our algorithm allows one to easily incorporate prior knowledge to further improve the quality of the learned trajectory. For example, for a helicopter performing in-place flips, it is known that the helicopter can be roughly centered around the same position over the entire sequence of flips. Our algorithm incorporates this prior knowledge and successfully factors out the position drift in the expert demonstrations.

We apply our algorithm to learn trajectories and dynamics models for aerobatic flight with a remote controlled helicopter. Our experimental results show that (i) our algorithm successfully extracts a good trajectory from the multiple sub-optimal demonstrations, and (ii) the resulting flight performance significantly extends the state of the art in aerobatic helicopter flight (Abbeel et al., 2007; Gavrilets et al., 2002). Most importantly, our resulting controllers are the first to perform as well as, and often even better than, our expert pilot. We posted movies of our autonomous helicopter flights at: http://heli.stanford.edu

The remainder of this paper is organized as follows: Section 2 presents our generative model for (multiple) suboptimal demonstrations; Section 3 describes our trajectory learning algorithm in detail; Section 4 describes our local model learning algorithm; Section 5 describes our helicopter platform and experimental results; Section 6 discusses related work.
2. Generative Model

2.1. Basic Generative Model

We are given M demonstration trajectories of length $N^k$, for $k = 0..M-1$. Each trajectory is a sequence of states, $s_j^k$, and control inputs, $u_j^k$, composed into a single state vector:

$y_j^k = \begin{bmatrix} s_j^k \\ u_j^k \end{bmatrix}, \quad \text{for } j = 0..N^k-1,\; k = 0..M-1.$

Our goal is to estimate a "hidden" target trajectory of length T, denoted similarly:

$z_t = \begin{bmatrix} s_t^* \\ u_t^* \end{bmatrix}, \quad \text{for } t = 0..T-1.$

We use the following notation: $y = \{y_j^k \mid j = 0..N^k-1,\; k = 0..M-1\}$, $z = \{z_t \mid t = 0..T-1\}$, and similarly for other indexed variables.

The generative model for the ideal trajectory is given by an initial state distribution $z_0 \sim \mathcal{N}(\mu_0, \Sigma_0)$ and an approximate model of the dynamics

$z_{t+1} = f(z_t) + \omega_t^{(z)}, \qquad \omega_t^{(z)} \sim \mathcal{N}(0, \Sigma^{(z)}).$   (1)

The dynamics model does not need to be particularly accurate--in our experiments, we use a single generic model learned from a large corpus of data that is not specific to the trajectory we want to perform. In our experiments (Section 5) we provide some concrete examples showing how accurately the generic model captures the true dynamics for our helicopter. [Footnote 1: The state transition model also predicts the controls as a function of the previous state and controls. In our experiments we predict $u_{t+1}^*$ as $u_t^*$ plus Gaussian noise.]

Our generative model represents each demonstration as a set of independent "observations" of the hidden, ideal trajectory z. Specifically, our model assumes

$y_j^k = z_{\tau_j^k} + \omega_j^{(y)}, \qquad \omega_j^{(y)} \sim \mathcal{N}(0, \Sigma^{(y)}).$   (2)

Here $\tau_j^k$ is the time index in the hidden trajectory to which the observation $y_j^k$ is mapped. The noise term in the observation equation captures both inaccuracy in estimating the observed trajectories from sensor data and errors in the maneuver that are the result of the human pilot's imperfect demonstration. [Footnote 2: Even though our observations, y, are correlated over time with each other due to the dynamics governing the observed trajectory, our model assumes that the observations $y_j^k$ are independent for all $j = 0..N^k-1$ and $k = 0..M-1$.]

The time indices $\tau_j^k$ are unobserved, and our model assumes the following distribution with parameters $d_i^k$:

$P(\tau_{j+1}^k \mid \tau_j^k) = \begin{cases} d_1^k & \text{if } \tau_{j+1}^k - \tau_j^k = 1 \\ d_2^k & \text{if } \tau_{j+1}^k - \tau_j^k = 2 \\ d_3^k & \text{if } \tau_{j+1}^k - \tau_j^k = 3 \\ 0 & \text{otherwise,} \end{cases}$   (3)

$\tau_0^k \equiv 0.$   (4)

To accommodate small, gradual shifts in time between the hidden and observed trajectories, our model assumes the observed trajectories are subsampled versions of the hidden trajectory. We found that having a hidden trajectory length equal to twice the average length of the demonstrations, i.e., $T = \frac{2}{M}\sum_{k=1}^{M} N^k$, gives sufficient resolution.

Figure 1 depicts the graphical model corresponding to our basic generative model. [Figure 1. Graphical model representing our trajectory assumptions. (Shaded nodes are observed.)] Note that each observation $y_j^k$ depends on the hidden trajectory's state at time $\tau_j^k$, which means that for $\tau_j^k$ unobserved, $y_j^k$ depends on all states in the hidden trajectory that it could be associated with.

2.2. Extensions to the Generative Model

Thus far we have assumed that the expert demonstrations are misaligned copies of the ideal trajectory merely corrupted by Gaussian noise. Listgarten et al. have used this same basic generative model (for the case where f(·) is the identity function) to align speech signals and biological data (Listgarten, 2006; Listgarten et al., 2005). We now augment the basic model to account for other sources of error which are important for modeling and control.

2.2.1. Learning Local Model Parameters

For many systems, we can substantially improve our modeling accuracy by using a time-varying model $f_t(\cdot)$ that is specific to the vicinity of the intended trajectory at each time t. We express $f_t$ as our "crude" model, f, augmented with a bias term $\beta_t^*$:

$z_{t+1} = f_t(z_t) + \omega_t^{(z)} \equiv f(z_t) + \beta_t^* + \omega_t^{(z)}.$

[Footnote 3: Our generative model can incorporate richer local models. We discuss our choice of merely using biases in our generative trajectory model in more detail in Section 4.] To regularize our model, we assume that $\beta_t^*$ changes only slowly over time. We have $\beta_{t+1}^* \sim \mathcal{N}(\beta_t^*, \Sigma^{(\beta)})$. We incorporate the bias into our observation model by computing the observed bias $\beta_j^k = y_j^k - f(y_{j-1}^k)$ for each of the observed state transitions, and modeling this as a direct observation of the "true" model bias corrupted by Gaussian noise. The result of this modification is that the ideal trajectory must not only look similar to the demonstration trajectories, but it must also obey a dynamics model which includes those errors consistently observed in the demonstrations.

2.2.2. Factoring out Demonstration Drift

It is often difficult, even for an expert pilot, to keep the helicopter centered around a fixed position during aerobatic maneuvers. The recorded position trajectory will often drift around unintentionally. Since these position errors are highly correlated, they are not explained well by the Gaussian noise term in our observation model. To capture such slow drift in the demonstrated trajectories, we augment the latent trajectory's state with a "drift" vector $\delta_t^k$ for each time t and each demonstrated trajectory k. We model the drift as a zero-mean random walk with (relatively) small variance. The state observations are now noisy measurements of $z_t + \delta_t^k$ rather than merely $z_t$.

2.2.3. Incorporating Prior Knowledge

Even though it might be hard to specify the complete ideal trajectory in state space, we might still have prior knowledge about the trajectory. Hence, we introduce additional observations $\rho_t = \rho(z_t)$ corresponding to our prior knowledge about the ideal trajectory at time t. The function $\rho(z_t)$ computes some features of the hidden state $z_t$, and our expert supplies the value $\rho_t$ that this feature should take. For example, for the case of a helicopter performing an in-place flip, we use an observation that corresponds to our expert pilot's knowledge that the helicopter should stay at a fixed position while it is flipping. We assume that these observations may be corrupted by Gaussian noise, where the variance of the noise expresses our confidence in the accuracy of the expert's advice.
In the case of the flip, the variance expresses our knowledge that it is, in fact, impossible to flip perfectly in place and that the actual position of the helicopter may vary slightly from the position given by the expert. Incorporating prior knowledge of this kind can greatly enhance the learned ideal trajectory. We give more detailed examples in Section 5.

2.2.4. Model Summary

In summary, we have the following generative model:

$z_{t+1} = f(z_t) + \beta_t^* + \omega_t^{(z)},$   (5)
$\beta_{t+1}^* = \beta_t^* + \omega_t^{(\beta)},$   (6)
$\delta_{t+1}^k = \delta_t^k + \omega_t^{(\delta)},$   (7)
$\rho_t = \rho(z_t) + \omega_t^{(\rho)},$   (8)
$y_j^k = z_{\tau_j^k} + \delta_{\tau_j^k}^k + \omega_j^{(y)},$   (9)
$\tau_{j+1}^k \sim P(\tau_{j+1}^k \mid \tau_j^k).$   (10)

Here $\omega_t^{(z)}$, $\omega_t^{(\beta)}$, $\omega_t^{(\delta)}$, $\omega_t^{(\rho)}$, $\omega_j^{(y)}$ are zero-mean Gaussian random variables with respective covariance matrices $\Sigma^{(z)}$, $\Sigma^{(\beta)}$, $\Sigma^{(\delta)}$, $\Sigma^{(\rho)}$, $\Sigma^{(y)}$. The transition probabilities for $\tau_j^k$ are defined by Eqs. (3, 4) with parameters $d_1^k$, $d_2^k$, $d_3^k$ (collectively denoted d).
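To make the summarized model concrete, here is a minimal Python sketch that samples one time-warped, noisy demonstration from a given hidden trajectory, following Eqs. (3), (7) and (9). The toy dynamics f, the function name sample_demonstration, and all parameter values are our own illustrative assumptions (the bias term of Eq. (6) is omitted for brevity); this is a sketch of the generative process, not the authors' code.

```python
import numpy as np

def f(z):
    # Toy stand-in for the generic "crude" dynamics model f(z_t): a mild decay.
    return 0.99 * z

def sample_demonstration(z_hidden, d=(0.6, 0.3, 0.1),
                         sigma_y=0.05, sigma_delta=0.01, rng=None):
    """Sample one demonstration from the hidden trajectory z_0..z_{T-1}.

    Each observation is y_j = z_{tau_j} + delta_{tau_j} + noise (Eq. 9), where the
    time indices tau_j advance by 1, 2, or 3 steps with probabilities d1, d2, d3
    (Eq. 3) and delta is a slowly drifting, demonstration-specific offset (Eq. 7).
    """
    rng = np.random.default_rng() if rng is None else rng
    T, dim = z_hidden.shape
    tau = 0
    delta = np.zeros(dim)
    y = []
    while tau < T:
        delta = delta + sigma_delta * rng.standard_normal(dim)  # random-walk drift
        y.append(z_hidden[tau] + delta + sigma_y * rng.standard_normal(dim))
        tau += rng.choice([1, 2, 3], p=d)                       # time-index transition
    return np.array(y)

# Hidden trajectory sampled from the crude dynamics plus process noise
# (Eq. 5 with the bias term omitted).
rng = np.random.default_rng(0)
z = np.zeros((200, 2))
for t in range(199):
    z[t + 1] = f(z[t]) + 0.02 * rng.standard_normal(2)

demo = sample_demonstration(z, rng=rng)
print(demo.shape)  # shorter than the hidden trajectory, since tau skips steps
```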
3. Trajectory Learning Algorithm

Our learning algorithm automatically finds the time-alignment indices $\tau$, the time-index transition probabilities d, and the covariance matrices $\Sigma^{(\cdot)}$ by (approximately) maximizing the joint likelihood of the observed trajectories y and the observed prior knowledge about the ideal trajectory $\rho$, while marginalizing out over the unobserved, intended trajectory z. Concretely, our algorithm (approximately) solves

$\max_{\tau,\, \Sigma^{(\cdot)},\, d} \; \log P(y, \rho, \tau; \Sigma^{(\cdot)}, d).$   (11)

Then, once our algorithm has found $\tau$, d, $\Sigma^{(\cdot)}$, it finds the most likely hidden trajectory, namely the trajectory z that maximizes the joint likelihood of the observed trajectories y and the observed prior knowledge about the ideal trajectory $\rho$ for the learned parameters $\tau$, d, $\Sigma^{(\cdot)}$. [Footnote 4: Note that maximizing over the hidden trajectory and the covariance parameters simultaneously introduces undesirable local maxima: the likelihood score would be highest (namely infinity) for a hidden trajectory with a sequence of states exactly corresponding to the (crude) dynamics model f(·) and state-transition covariance matrices equal to all-zeros, as long as the observation covariances are nonzero. Hence we marginalize out the hidden trajectory to find $\tau$, d, $\Sigma^{(\cdot)}$.]

The joint optimization in Eq. (11) is difficult because (as can be seen in Figure 1) the lack of knowledge of the time-alignment index variables $\tau$ introduces a very large set of dependencies between all the variables. However, when $\tau$ is known, the optimization problem in Eq. (11) greatly simplifies thanks to context-specific independencies (Boutilier et al., 1996). When $\tau$ is fixed, we obtain a model such as the one shown in Figure 2. [Figure 2. Example of graphical model when $\tau$ is known. (Shaded nodes are observed.)] In this model we can directly estimate the multinomial parameters d in closed form, and we have a standard HMM parameter learning problem for the covariances $\Sigma^{(\cdot)}$, which can be solved using the EM algorithm (Dempster et al., 1977)--often referred to as Baum-Welch in the context of HMMs. Concretely, for our setting, the EM algorithm's E-step computes the pairwise marginals over sequential hidden state variables by running an (extended) Kalman smoother; the M-step then uses these marginals to update the covariances $\Sigma^{(\cdot)}$.

To also optimize over the time-indexing variables $\tau$, we propose an alternating optimization procedure. For fixed $\Sigma^{(\cdot)}$ and d, and for fixed z, we can find the optimal time-indexing variables $\tau$ using dynamic programming over the time-index assignments for each demonstration independently. The dynamic programming algorithm to find $\tau$ is known in the speech recognition literature as dynamic time warping (Sakoe & Chiba, 1978) and in the biological sequence alignment literature as the Needleman-Wunsch algorithm (Needleman & Wunsch, 1970). The fixed z we use is the one that maximizes the likelihood of the observations for the current setting of parameters $\tau$, d, $\Sigma^{(\cdot)}$. [Footnote 5: Fixing z means the dynamic time warping step only approximately optimizes the original objective. Unfortunately, without fixing z, the independencies required to obtain an efficient dynamic programming algorithm do not hold. In practice we find our approximation works very well.]

In practice, rather than alternating between complete optimizations over $\Sigma^{(\cdot)}$, d and $\tau$, we only partially optimize over $\Sigma^{(\cdot)}$, running only one iteration of the EM algorithm. We provide the complete details of our algorithm in the full paper (Coates et al., 2008).
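For intuition about the dynamic programming step described above, the sketch below aligns a single demonstration to a fixed candidate hidden trajectory z by maximizing the observation log-likelihood over time-index assignments that start at 0 and advance by 1, 2, or 3 steps, as in Eqs. (3)-(4). The isotropic Gaussian observation model, the function name align_demo, and the parameter values are simplifying assumptions of ours; the paper's E-step additionally handles the drift, bias, and prior-knowledge terms.

```python
import numpy as np

def align_demo(y, z, log_d=np.log([0.6, 0.3, 0.1]), sigma_y=0.05):
    """Find indices tau_0..tau_{N-1} (tau_0 = 0, steps of 1/2/3) maximizing
    sum_j [ log N(y_j; z_{tau_j}, sigma_y^2 I) + log d_{step_j} ] by dynamic programming."""
    N, T = len(y), len(z)
    # Per-pair observation log-likelihoods (up to an additive constant).
    ll = -0.5 * np.sum((y[:, None, :] - z[None, :, :]) ** 2, axis=2) / sigma_y ** 2
    NEG = -np.inf
    score = np.full((N, T), NEG)
    back = np.zeros((N, T), dtype=int)
    score[0, 0] = ll[0, 0]                    # Eq. (4): tau_0 = 0
    for j in range(1, N):
        for t in range(T):
            for s, ld in zip((1, 2, 3), log_d):   # allowed step sizes, Eq. (3)
                if t - s >= 0 and score[j - 1, t - s] > NEG:
                    cand = score[j - 1, t - s] + ld + ll[j, t]
                    if cand > score[j, t]:
                        score[j, t], back[j, t] = cand, s
    # Backtrack from the best final index.
    tau = [int(np.argmax(score[N - 1]))]
    for j in range(N - 1, 0, -1):
        tau.append(tau[-1] - back[j, tau[-1]])
    return tau[::-1]

# Tiny usage example: a synthetic trajectory and a subsampled, noisy copy of it.
rng = np.random.default_rng(1)
z = np.cumsum(0.1 * rng.standard_normal((60, 2)), axis=0)
true_tau = np.arange(0, 60, 2)                # demonstration observes every other step
y = z[true_tau] + 0.02 * rng.standard_normal((len(true_tau), 2))
print(align_demo(y, z)[:10])                  # should roughly track 0, 2, 4, ...
```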
4. Local Model Learning

For complex dynamical systems, the state $z_t$ used in the dynamics model often does not correspond to the "complete state" of the system, since the latter could involve large numbers of previous states or unobserved variables that make modeling difficult. [Footnote 6: This is particularly true for helicopters. Whereas the state of the helicopter is very crudely captured by the 12D rigid-body state representation we use for our controllers, the "true" physical state of the system includes, among others, the airflow around the helicopter, the rotor head speed, and the actuator dynamics.] However, when we only seek to model the system dynamics along a specific trajectory, knowledge of both $z_t$ and how far we are along that trajectory is often sufficient to accurately predict the next state $z_{t+1}$.

Once the alignments between the demonstrations are computed by our trajectory learning algorithm, we can use the time-aligned demonstration data to learn a sequence of trajectory-specific models. The time indices of the aligned demonstrations now accurately associate the demonstration data points with locations along the learned trajectory, allowing us to build models for the state at time t using the appropriate corresponding data from the demonstration trajectories. [Footnote 7: We could learn the richer local model within the trajectory alignment algorithm, updating the dynamics model during the M-step. We chose not to do so since these models are more computationally expensive to estimate. The richer local models have minimal influence on the alignment because the biases capture the average model error--the richer models capture the derivatives around it. Given the limited influence on the alignment, we chose to save computational time and only estimate the richer models after alignment.]

To construct an accurate nonlinear model to predict $z_{t+1}$ from $z_t$ using the aligned data, one could use locally weighted linear regression (Atkeson et al., 1997), where a linear model is learned based on a weighted dataset. Data points from our aligned demonstrations that are nearer to the current time index along the trajectory, t, and nearer the current state, $z_t$, would be weighted more highly than data far away. While this allows us to build a more accurate model from our time-aligned data, the weighted regression must be done online, since the weights depend on the current state, $z_t$. For performance reasons this may often be impractical. [Footnote 8: During real-time control execution, our model is queried roughly 52,000 times per second. Even with KD-tree or cover-tree data structures a full locally weighted model would be much too slow.] Thus, we weight data only based on the time index, and learn a parametric model in the remaining variables (which, in our experiments, has the same form as the global "crude" model, f(·)). Concretely, when estimating the model for the dynamics at time t, we weight a data point at time t' by

$W(t') = \exp\!\left(-\frac{(t - t')^2}{\sigma^2}\right),$

where $\sigma$ is a bandwidth parameter. Typical values for $\sigma$ are between one and two seconds in our experiments. [Footnote 9: In practice, the data points along a short segment of the trajectory lie in a low-dimensional subspace of the state space. This sometimes leads to an ill-conditioned parameter estimation problem. To mitigate this problem, we regularize our models toward the "crude" model f(·).] Since the weights for the data points now only depend on the time index, we can precompute all models $f_t(\cdot)$ along the entire trajectory. The ability to precompute the models is a feature crucial to our control algorithm, which relies heavily on fast simulation.
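A minimal sketch of the time-indexed model fitting described above: for each time t along the learned trajectory, fit an affine model to the aligned demonstration transitions, weighting each transition by $W(t') = \exp(-(t - t')^2/\sigma^2)$. The affine model class, the ridge regularizer (a stand-in for the paper's regularization toward the "crude" model f), and all names are our own simplifications; here $\sigma$ is expressed in time steps rather than seconds.

```python
import numpy as np

def fit_local_models(times, z_curr, z_next, T, sigma=1.5, reg=1e-3):
    """Precompute one weighted least-squares model (A_t, b_t) per trajectory time t,
    so that z_{t+1} is approximated by A_t z_t + b_t near time t.

    times:  (n,) aligned time index of each demonstration transition
    z_curr: (n, d) states;  z_next: (n, d) successor states
    """
    n, d = z_curr.shape
    X = np.hstack([z_curr, np.ones((n, 1))])          # affine features [z, 1]
    models = []
    for t in range(T):
        w = np.exp(-((times - t) ** 2) / sigma ** 2)  # time-based weights W(t')
        WX = X * w[:, None]
        # Ridge-regularized weighted least squares (the regularizer stands in for
        # shrinkage toward the global "crude" model).
        theta = np.linalg.solve(X.T @ WX + reg * np.eye(d + 1), WX.T @ z_next)
        models.append((theta[:d].T, theta[d]))        # (A_t, b_t)
    return models

# Usage: fit models from three synthetic "aligned demonstrations" of 50 steps each.
rng = np.random.default_rng(2)
times = np.repeat(np.arange(50), 3).astype(float)
z_curr = rng.standard_normal((150, 4))
z_next = 0.9 * z_curr + 0.1 + 0.01 * rng.standard_normal((150, 4))
models = fit_local_models(times, z_curr, z_next, T=50)
A0, b0 = models[0]
print(A0.shape, b0.shape)                             # (4, 4) (4,)
```

Because the weights depend only on the time index, the whole list of models can be computed once, offline, and then queried cheaply during fast simulation.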
5. Experimental Results

5.1. Experimental Setup

To test our algorithm, we had our expert helicopter pilot fly our XCell Tempest helicopter (Figure 3), which can perform professional, competition-level maneuvers. [Figure 3. Our XCell Tempest autonomous helicopter.] [Footnote 10: We instrumented the helicopter with a Microstrain 3DM-GX1 orientation sensor. A ground-based camera system measures the helicopter's position. A Kalman filter uses these measurements to track the helicopter's position, velocity, orientation and angular rate.]

We collected multiple demonstrations from our expert for a variety of aerobatic trajectories: continuous in-place flips and rolls, a continuous tail-down "tic-toc," and an airshow, which consists of the following maneuvers in rapid sequence: split-S, snap roll, stall-turn, loop, loop with pirouette, stall-turn with pirouette, "hurricane" (fast backward funnel), knife-edge, flips and rolls, tic-toc and inverted hover.

The (crude) helicopter dynamics f(·) is constructed using the method of Abbeel et al. (2006a). [Footnote 11: The model of Abbeel et al. (2006a) naturally generalizes to any orientation of the helicopter regardless of the flight regime from which data is collected. Hence, even without collecting data from aerobatic flight, we can reasonably attempt to use such a model for aerobatic flying, though we expect it to be relatively inaccurate.] The helicopter dynamics model predicts linear and angular accelerations as a function of the current state and inputs. The next state is then obtained by integrating forward in time using the standard rigid-body equations.

In the trajectory learning algorithm, we have bias terms $\beta_t^*$ for each of the predicted accelerations. We use the state-drift variables, $\delta_t^k$, for position only.

For the flips, rolls, and tic-tocs we incorporated our prior knowledge that the helicopter should stay in place. We added a measurement of the form

$0 = p(z_t) + \omega^{(\rho_0)}, \qquad \omega^{(\rho_0)} \sim \mathcal{N}(0, \Sigma^{(\rho_0)}),$

where p(·) is a function that returns the position coordinates of $z_t$, and $\Sigma^{(\rho_0)}$ is a diagonal covariance matrix. This measurement--which is a direct observation of the pilot's intended trajectory--is similar to advice given to a novice human pilot to describe the desired maneuver: a good flip, roll, or tic-toc trajectory stays close to the same position.

We also used additional advice in the airshow to indicate that the vertical loops, stall-turns and split-S should all lie in a single vertical plane, that the hurricanes should lie in a horizontal plane, and that a good knife-edge stays in a vertical plane. These measurements take the form

$c = N^{\top} p(z_t) + \omega^{(\rho_1)}, \qquad \omega^{(\rho_1)} \sim \mathcal{N}(0, \Sigma^{(\rho_1)}),$

where, again, $p(z_t)$ returns the position coordinates of $z_t$, N is a vector normal to the plane of the maneuver, c is a constant, and $\Sigma^{(\rho_1)}$ is a diagonal covariance matrix.
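The following sketch illustrates how prior-knowledge measurements of this form can be encoded: a feature function of the hidden state (position for the "stay in place" advice, a normal-projected position for the "in a plane" advice), an expert-supplied target value, and a diagonal noise covariance whose magnitude expresses confidence in the advice. The interfaces, the assumption that the first three state entries are position, and the numbers are all ours; in the paper these simply enter the extended Kalman smoother as extra observations.

```python
import numpy as np

# Assumption for this sketch: the first three entries of the hidden state z_t are
# the helicopter position (the real state also carries velocity, orientation, etc.).
def p(z):
    return z[:3]

def stay_in_place_measurement(std=0.5):
    """Advice 'a good flip stays near one position': observe p(z_t), target 0."""
    h = p                                   # measurement function rho(z_t)
    target = np.zeros(3)                    # rho_t supplied by the expert
    R = (std ** 2) * np.eye(3)              # larger std = weaker confidence
    return h, target, R

def stay_in_plane_measurement(normal, offset, std=0.5):
    """Advice 'the loop lies in a plane': observe N^T p(z_t), target c."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    h = lambda z: np.array([n @ p(z)])
    target = np.array([offset])
    R = np.array([[std ** 2]])
    return h, target, R

# Usage: evaluate the residual the smoother would weight by R for one state sample.
z_t = np.array([0.3, -0.2, 10.0, 1.0, 0.0, 0.0])   # toy state: position plus extras
h, rho_t, R = stay_in_plane_measurement(normal=[1.0, 0.0, 0.0], offset=0.0)
print(rho_t - h(z_t))
```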
5.2. Trajectory Learning Results

[Figure 4, panels (a)-(d). Colored lines: demonstrations. Black dotted line: trajectory inferred by our algorithm. (See text for details.)]

Figure 4(a) shows the horizontal and vertical position of the helicopter during the two loops flown during the airshow. The colored lines show the expert pilot's demonstrations. The black dotted line shows the inferred ideal path produced by our algorithm. The loops are more rounded and more consistent in the inferred ideal path; we did not incorporate any prior knowledge to this effect. Figure 4(b) shows a top-down view of the same demonstrations and inferred trajectory. The prior successfully encouraged the inferred trajectory to lie in a vertical plane, while obeying the system dynamics.

Figure 4(c) shows one of the bias terms, namely the model prediction errors for the Z-axis acceleration of the helicopter computed from the demonstrations, before time-alignment. Figure 4(d) shows the result after alignment (in color) as well as the inferred acceleration error (black dotted). We see that the unaligned bias measurements allude to errors approximately in the 1G to -2G range for the first 40 seconds of the airshow (a period that involves high-G maneuvering that is not predicted accurately by the "crude" model). However, only the aligned biases precisely show the magnitudes and locations of these errors along the trajectory. The alignment allows us to build our ideal trajectory based upon a much more accurate model that is tailored to match the dynamics observed in the demonstrations. Results for other maneuvers and state variables are similar.

At the URL provided in the introduction we posted movies which simultaneously replay the different demonstrations, before alignment and after alignment. The movies visualize the alignment results in many state dimensions simultaneously.

5.3. Flight Results

After constructing the idealized trajectory and models using our algorithm, we attempted to fly the trajectory on the actual helicopter. Our helicopter uses a receding-horizon differential dynamic programming (DDP) controller (Jacobson & Mayne, 1970). DDP approximately solves general continuous state-space optimal control problems by taking advantage of the fact that optimal control problems with linear dynamics and a quadratic reward function (known as linear quadratic regulator (LQR) problems) can be solved efficiently. It is well known that the solution to the (time-varying, finite-horizon) LQR problem is a sequence of linear feedback controllers. In short, DDP iteratively approximates the general control problem with LQR problems until convergence, resulting in a sequence of linear feedback controllers that are approximately optimal. In the receding-horizon algorithm, we not only run DDP initially to design the sequence of controllers, but also re-run DDP during control execution at every time step and recompute the optimal controller over a fixed-length time interval (the horizon), assuming the precomputed controller and cost-to-go are correct after this horizon.

As described in Section 4, our algorithm outputs a sequence of learned local parametric models, each of the form described by Abbeel et al. (2006a). Our implementation linearizes these models on the fly with a 2 second horizon (at 20Hz). Our reward function penalizes error from the target trajectory, $s_t^*$, as well as deviation from the desired controls, $u_t^*$, and the desired control velocities, $u_{t+1}^* - u_t^*$.
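For intuition about the controller, here is a generic finite-horizon, time-varying LQR backward pass of the kind DDP solves repeatedly: given linearized dynamics (A_t, B_t) and quadratic costs on deviation from the target trajectory and controls, it returns a sequence of linear feedback gains. This is a textbook sketch under assumed linear models, not the authors' DDP implementation, which re-linearizes the learned local models on the fly and re-plans at every time step.

```python
import numpy as np

def lqr_backward_pass(A, B, Q, R, Qf):
    """Finite-horizon time-varying LQR: returns gains K_t with u_t = -K_t x_t,
    where x_t and u_t are deviations from the target trajectory and controls."""
    T = len(A)
    P = Qf
    K = [None] * T
    for t in reversed(range(T)):
        # Riccati recursion.
        K[t] = np.linalg.solve(R + B[t].T @ P @ B[t], B[t].T @ P @ A[t])
        P = Q + A[t].T @ P @ (A[t] - B[t] @ K[t])
    return K

# Usage on a toy 2-state, 1-input double integrator over a 20-step horizon
# (stand-ins for the linearized trajectory-specific models f_t).
T, dt = 20, 0.05
A = [np.array([[1.0, dt], [0.0, 1.0]]) for _ in range(T)]
B = [np.array([[0.0], [dt]]) for _ in range(T)]
Q, R, Qf = np.eye(2), 0.1 * np.eye(1), 10.0 * np.eye(2)
K = lqr_backward_pass(A, B, Q, R, Qf)

# Receding-horizon flavor: at each step, apply only the first gain of a freshly
# solved problem, then re-solve at the next step (here the models do not change,
# so the gains coincide; with time-varying local models they would not).
x = np.array([1.0, 0.0])
for step in range(5):
    u = -K[0] @ x
    x = A[0] @ x + B[0] @ u
print(x)
```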
First we compare our results with the previous state of the art in aerobatic helicopter flight, namely the in-place rolls and flips of Abbeel et al. (2007). That work used hand-specified target trajectories and a single nonlinear model for the entire trajectory.

[Figure 5. Flight results. (a), (b) Solid black: our results. Dashed red: Abbeel et al. (2007). (c) Dotted black: autonomous tic-toc. Solid colored: expert demonstrations. (See text for details.)]

Figure 5(a) shows the Y-Z position [Footnote 12: These are the position coordinates projected into a plane orthogonal to the axis of rotation.] and the collective (thrust) control inputs for the in-place rolls for both their controller and ours. Our controller achieves (i) better position performance (standard deviation of approximately 2.3 meters in the Y-Z plane, compared to about 4.6 meters) and (ii) lower overall collective control values (which roughly represent the amount of energy being used to fly the maneuver). Similarly, Figure 5(b) shows the X-Z position and the collective control inputs for the in-place flips for both controllers. As with the rolls, we see that our controller significantly outperforms that of Abbeel et al. (2007), both in position accuracy and in control energy expended.

Besides flips and rolls, we also performed autonomous "tic-tocs"--widely considered to be an even more challenging aerobatic maneuver. During the (tail-down) tic-toc maneuver the helicopter pitches quickly backward and forward in place with the tail pointed toward the ground (resembling an inverted clock pendulum). The complex relationship between pitch angle, horizontal motion, vertical motion, and thrust makes it extremely difficult to create a feasible tic-toc trajectory by hand. Our attempts to use such a hand-coded trajectory with the DDP algorithm from Abbeel et al. (2007) failed repeatedly. By contrast, our algorithm readily yields an excellent feasible trajectory that was successfully flown on the first attempt. Figure 5(c) shows the expert trajectories (in color) and the autonomously flown tic-toc (black dotted). Our controller significantly outperforms the expert's demonstrations.

We also applied our algorithm to successfully fly a complete aerobatic airshow, which consists of the following maneuvers in rapid sequence: split-S, snap roll, stall-turn, loop, loop with pirouette, stall-turn with pirouette, "hurricane" (fast backward funnel), knife-edge, flips and rolls, tic-toc and inverted hover.

The trajectory-specific local model learning typically captures the dynamics well enough to fly all the aforementioned maneuvers reliably. Moreover, since our computer controller flies the trajectory very consistently, we can repeatedly acquire data from the same vicinity of the target trajectory on the real helicopter. Similar to Abbeel et al. (2007), we incorporate this flight data into our model learning, allowing us to improve flight accuracy even further. For example, during the first autonomous airshow our controller achieved an RMS position error of 3.29 meters, and this procedure improved performance to 1.75 meters RMS position error.

Videos of all our flights are available at: http://heli.stanford.edu

6. Related Work

Although no prior works span our entire setting of learning for control from multiple demonstrations, there are separate pieces of work that relate to various components of our approach. Atkeson and Schaal (1997) use multiple demonstrations to learn a model for a robot arm, and then find an optimal controller in their simulator, initializing their optimal control algorithm with one of the demonstrations. The work of Calinon et al. (2007) considered learning trajectories and constraints from demonstrations for robotic tasks. There, they do not consider the system's dynamics or provide a clear mechanism for the inclusion of prior knowledge.
Our formulation presents a principled, joint optimization which takes into account the multiple demonstrations, as well as the (complex) system dynamics and prior knowledge. While Calinon et al. (2007) also use some form of dynamic time warping, they do not try to optimize a joint objective capturing both the system dynamics and time-warping.

Among others, An et al. (1988) and, more recently, Abbeel et al. (2006b) have exploited the idea of trajectory-indexed model learning for control. However, contrary to our setting, their algorithms neither time-align nor coherently integrate data from multiple trajectories.

While the work by Listgarten et al. (Listgarten, 2006; Listgarten et al., 2005) does not consider robotic control and model learning, they also consider the problem of multiple continuous time series alignment with a hidden time series.

Our work also has strong similarities with recent work on inverse reinforcement learning, which extracts a reward function (rather than a trajectory) from the expert demonstrations. See, e.g., Ng and Russell (2000); Abbeel and Ng (2004); Ratliff et al. (2006); Neu and Szepesvari (2007); Ramachandran and Amir (2007); Syed and Schapire (2008).

Most prior work on autonomous helicopter flight only considers the flight regime close to hover. There are three notable exceptions. The aerobatic work of Gavrilets et al. (2002) comprises three maneuvers: split-S, snap-roll, and stall-turn, which we also include during the first 10 seconds of our airshow for comparison. They record pilot demonstrations, and then hand-engineer a sequence of desired angular rates and velocities, as well as transition points. Ng et al. (2004) have their autonomous helicopter perform sustained inverted hover. We compared the performance of our system with the work of Abbeel et al. (2007), by far the most advanced autonomous aerobatics results to date, in Section 5.
Y., & Russell, S. (2000). Algorithms for inverse reinforcement learning. Proc. ICML. Ramachandran, D., & Amir, E. (2007). Bayesian inverse reinforcement learning. Proc. IJCAI. Ratliff, N., Bagnell, J., & Zinkevich, M. (2006). Maximum margin planning. Proc. ICML. Sakoe, H., & Chiba, S. (1978). Dynamic programming algorithm optimization for spoken word recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing. Syed, U., & Schapire, R. E. (2008). A game-theoretic approach to apprenticeship learning. NIPS 20. 7. Conclusion We presented an algorithm that takes advantage of multiple suboptimal tra jectory demonstrations to (i) extract (an estimate of ) the ideal demonstration, (ii) learn a local model along this tra jectory. Our algorithm is generally applicable for learning tra jectories and dynamics models along tra jectories from multiple demonstrations. We showed the effectiveness of our algorithm for control by applying it to the challenging problem of autonomous helicopter aerobatics. The ideal target tra jectory and the local models output by our tra jectory learning algorithm enable our controllers to significantly outperform the prior state of the art. Acknowledgments We thank Garett Oku for piloting and building our helicopter. Adam Coates is supported by a Stanford Graduate Fellowship. This work was also supported in part by the DARPA Learning Locomotion program under contract number FA8650-05-C-7261. References Abbeel, P., Coates, A., Quigley, M., & Ng, A. Y. (2007). An application of reinforcement learning to aerobatic helicopter flight. NIPS 19. Abbeel, P., Ganapathi, V., & Ng, A. Y. (2006a). Learning vehicular dynamics with application to modeling helicopters. NIPS 18. Abbeel, P., & Ng, A. Y. (2004). Apprenticeship learning via inverse reinforcement learning. Proc. ICML. Abbeel, P., Quigley, M., & Ng, A. Y. (2006b). Using inaccurate models in reinforcement learning. Proc. ICML. An, C. H., Atkeson, C. G., & Hollerbach, J. M. (1988). Model-based control of a robot manipulator. MIT Press. Atkeson, C., & Schaal, S. (1997). Robot learning from demonstration. Proc. ICML. g Atkeson, C. G., Moore, A. W., & Schaal, S. (1997). Locally weighted learning for control. Artificial Intel ligence Review, 11.