Hilbert Space Embeddings of Hidden Markov Models

Le Song (lesong@cs.cmu.edu), School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA
Byron Boots (beb@cs.cmu.edu), School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA
Sajid M. Siddiqi (siddiqi@google.com), Google, Pittsburgh, PA 15213, USA
Geoffrey Gordon (ggordon@cs.cmu.edu), School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA
Alex Smola (alex@smola.org), Yahoo! Research, Santa Clara, CA 95051, USA

Abstract

Hidden Markov Models (HMMs) are important tools for modeling sequence data. However, they are restricted to discrete latent states, and are largely restricted to Gaussian and discrete observations. Moreover, learning algorithms for HMMs have predominantly relied on local search heuristics, with the exception of spectral methods such as those described below. We propose a nonparametric HMM that extends traditional HMMs to structured and non-Gaussian continuous distributions. Furthermore, we derive a local-minimum-free kernel spectral algorithm for learning these HMMs. We apply our method to robot vision data, slot car inertial sensor data and audio event classification data, and show that in these applications, embedded HMMs exceed the previous state-of-the-art performance.

Appearing in Proceedings of the 27th International Conference on Machine Learning, Haifa, Israel, 2010. Copyright 2010 by the author(s)/owner(s).

1. Introduction

Hidden Markov Models (HMMs) have successfully modeled sequence data in a wide range of applications including speech recognition, analysis of genomic sequences, and analysis of time series. HMMs are latent variable models of dynamical systems: they assume a latent state which evolves according to Markovian dynamics, as well as observations which depend only on the hidden state at a particular time.

Despite their simplicity and wide applicability, HMMs are limited in two major respects: first, they are usually restricted to discrete or Gaussian observations, and second, the latent state variable is usually restricted to have only moderate cardinality. For non-Gaussian continuous observations, and for structured observations with large cardinalities, standard inference algorithms for HMMs run into trouble: we need huge numbers of latent states to capture such observation distributions accurately, and marginalizing out these states during inference can be very computationally intensive. Furthermore, standard HMM learning algorithms are not able to fit the required transition and observation distributions accurately: local search heuristics, such as the EM algorithm, lead to bad local optima, and standard approaches to regularization result in under- or overfitting.

Recently, Hsu et al. (2009) proposed a spectral algorithm for learning HMMs with discrete observations and hidden states. At its core, the algorithm performs a singular value decomposition of a matrix of joint probabilities of past and future observations, and then uses the result, along with additional matrices of joint probabilities, to recover parameters which allow tracking or filtering. The algorithm employs an observable representation of an HMM, and avoids explicitly recovering the HMM transition and observation matrices. This implicit representation enables the algorithm to find a consistent estimate of the distribution of observation sequences, without resorting to local search. Unfortunately, this spectral algorithm is only formulated for HMMs with discrete observations. In contrast, many sources of sequential data are continuous or structured; the spectral algorithm does not apply to such data without discretization and flattening.
So, the goal of the current paper is to provide a new kernel-based representation and a kernelized spectral learning algorithm for HMMs; this new representation and algorithm will allow us to learn HMMs in any domain where we can define a kernel. Furthermore, our algorithm is free of local minima and admits finite-sample generalization guarantees.

In particular, we will represent HMMs using a recent concept called Hilbert space embedding (Smola et al., 2007; Sriperumbudur et al., 2008). The essence of Hilbert space embedding is to represent probability measures (in our case, corresponding to distributions over observations and latent states in an HMM) as points in Hilbert spaces. We can then perform inference in the HMM by updating these points, entirely in their Hilbert spaces, using covariance operators (Baker, 1973) and conditional embedding operators (Song et al., 2009). By making use of the Hilbert space's metric structure, our method works naturally with continuous and structured random variables, without the need for discretization.

In addition to generalizing HMMs to arbitrary domains where kernels are defined, our learning algorithm contributes to the theory of Hilbert space embeddings with hidden variables. Previously, Song et al. (2009) derived a kernel algorithm for HMMs; however, they only provided results for fully observable models, where the training data includes labels for the true latent states. By contrast, our algorithm only requires access to an (unlabeled) sequence of observations.

We provide experimental results comparing embedded HMMs learned by our spectral algorithm to several other well-known approaches to learning models of time series data. The results demonstrate that our novel algorithm exceeds the previous state-of-the-art performance, often beating the next best algorithm by a substantial margin.

2. Preliminaries

In this paper, we follow the convention that uppercase letters denote random variables (e.g. X_t, H_t) and lowercase letters their instantiations (e.g. x_t, h_t). We will use P to denote a probability distribution in the discrete case and a density in the continuous case. For matrices and vectors, we use the notation u = (u_i)_i and C = (C_{ij})_{ij} to list their entries. Following Hsu et al. (2009), we abbreviate a sequence (x_1, ..., x_t) by x_{1:t}, and its reverse (x_t, ..., x_1) by x_{t:1}. When we use a sequence as a subscript, we mean the product of the quantities indexed by the sequence elements (e.g. A_{x_{t:1}} = A_{x_t} · · · A_{x_1}). We use 1_m to denote an m × 1 column of ones.

A discrete HMM defines a probability distribution over sequences of hidden states, H_t ∈ {1, ..., N}, and observations, X_t ∈ {1, ..., M}. We assume N ≤ M, and let T ∈ ℝ^{N×N} be the state transition probability matrix with T_{ij} = P(H_{t+1} = i | H_t = j), O ∈ ℝ^{M×N} be the observation probability matrix with O_{ij} = P(X_t = i | H_t = j), and π ∈ ℝ^N be the stationary state distribution with π_i = P(H_t = i). The conditional independence properties of the HMM imply that T, O and π fully characterize the probability distribution of any sequence of states and observations.

2.1. Observable representation for HMMs

Jaeger (2000) demonstrated that discrete HMMs can be formulated in terms of 'observation operators' A_{x_t}. Each A_{x_t} is an N × N matrix with ij-th entry defined as P(H_{t+1} = i | H_t = j) P(X_t = x_t | H_t = j), or in matrix notation, A_{x_t} = T diag(O_{x_t,1}, ..., O_{x_t,N}). Then the probability of a sequence of observations, x_{1:t}, can be written as a sequence of matrix operations,

P(x_{1:t}) = 1_N^⊤ A_{x_t} · · · A_{x_1} π = 1_N^⊤ A_{x_{t:1}} π.   (1)

Essentially, each A_{x_t} incorporates information about one-step observation likelihoods and one-step hidden state transitions. The sequence of matrix multiplications in equation (1) effectively implements the marginalization steps for the sequence of hidden variables, H_{t+1:1}.
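To make the observable-operator form in equation (1) concrete, here is a minimal NumPy sketch (ours, not code from the paper) that builds A_x = T diag(O_{x,1}, ..., O_{x,N}) for a small synthetic discrete HMM and checks equation (1) against brute-force marginalization over hidden paths; all sizes and names are illustrative.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
N, M = 3, 4                      # hidden states, observation symbols

# Random column-stochastic transition T (N x N) and observation O (M x N) matrices.
T = rng.random((N, N)); T /= T.sum(axis=0, keepdims=True)
O = rng.random((M, N)); O /= O.sum(axis=0, keepdims=True)

# Stationary distribution pi: leading eigenvector of T, normalized to sum to one.
w, V = np.linalg.eig(T)
pi = np.real(V[:, np.argmax(np.real(w))]); pi /= pi.sum()

# Observation operators A_x = T diag(O_{x,1}, ..., O_{x,N}), one per symbol x.
A = [T @ np.diag(O[x]) for x in range(M)]

def prob_seq(seq):
    """P(x_{1:t}) = 1_N^T A_{x_t} ... A_{x_1} pi, as in equation (1)."""
    v = pi.copy()
    for x in seq:                # apply operators in the order x_1, ..., x_t
        v = A[x] @ v
    return np.ones(N) @ v

def prob_seq_bruteforce(seq):
    """Sum the joint probability over all hidden state paths (exponential, for checking only)."""
    t, total = len(seq), 0.0
    for path in product(range(N), repeat=t + 1):
        p = pi[path[0]]
        for s in range(t):
            p *= O[seq[s], path[s]] * T[path[s + 1], path[s]]
        total += p
    return total

seq = [0, 2, 1]
assert np.isclose(prob_seq(seq), prob_seq_bruteforce(seq))
```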
Likewise, the predictive distribution of the one-step future X_{t+1} given a history of observations can be written as a sequence of matrix multiplications,

(P(X_{t+1} = i | x_{1:t}))_{i=1}^M ∝ O A_{x_{t:1}} π.   (2)

The drawback of the representations in (1) and (2) is that they require exact knowledge of the transition matrix T and the observation matrix O, and neither quantity is available during training (since the latent states are usually not observable). A key observation concerning equations (1) and (2) is that if we are only interested in the final quantities 1_N^⊤ A_{x_{t:1}} π and O A_{x_{t:1}} π, we may not need to recover the A_{x_t}'s exactly. Instead, it suffices to recover them up to some invertible transformation. More specifically, for any invertible matrix S we can define a set of new quantities,

b_1 := S π,   b_∞ := O S^{-1},   B_x := S A_x S^{-1},   (3)

and equivalently compute O A_{x_{t:1}} π by cancelling out all the S's during the matrix multiplications, resulting in

O A_{x_{t:1}} π = O S^{-1} (S A_{x_t} S^{-1}) · · · (S A_{x_1} S^{-1}) (S π) = b_∞ B_{x_{t:1}} b_1.   (4)

The natural question is how to choose S such that b_1, b_∞ and B_x can be computed based purely on observation sequences, x_{1:t}. Hsu et al. (2009) show that S = U^⊤ O works, where U is the matrix of the top N left singular vectors of the joint probability matrix (assuming stationarity of the distribution)

C_{2,1} := (P(X_{t+1} = i, X_t = j))_{i,j=1}^M.   (5)

Furthermore, b_1, b_∞ and B_x can also be computed from observable quantities (assuming stationarity),

u_1 := (P(X_t = i))_{i=1}^M,   (6)
C_{3,x,1} := (P(X_{t+2} = i, X_{t+1} = x, X_t = j))_{i,j=1}^M,   (7)

which are the marginal probability vector of sequence singletons, and one slice of the joint probability matrix of sequence triples (i.e. the slice indexed by x of a 3-dimensional array). Hsu et al. (2009) showed that

b_1 = U^⊤ u_1,   b_∞ = C_{2,1} (U^⊤ C_{2,1})^†,   (8)
B_x = (U^⊤ C_{3,x,1}) (U^⊤ C_{2,1})^†.   (9)

2.2. A spectral algorithm for learning HMMs

The spectral algorithm for learning HMMs proceeds by first estimating u_1, C_{2,1} and C_{3,x,1}. Given a dataset of m i.i.d. triples {(x_1^l, x_2^l, x_3^l)}_{l=1}^m from an HMM (superscripts index training examples), we estimate

û_1 = (1/m) Σ_{l=1}^m φ(x_1^l),   (10)
Ĉ_{2,1} = (1/m) Σ_{l=1}^m φ(x_2^l) φ(x_1^l)^⊤,   (11)
Ĉ_{3,x,1} = (1/m) Σ_{l=1}^m I[x_2^l = x] φ(x_3^l) φ(x_1^l)^⊤,   (12)

where the delta function (or delta kernel) is defined as I[x_2^l = x] = 1 if x_2^l = x and 0 otherwise, and we have used the 1-of-M representation for discrete variables. In this representation, φ(x = i) is a vector of length M with all entries equal to zero except for a 1 in the i-th position. For instance, if x = 2, then φ(x) = (0, 1, 0, ..., 0)^⊤. Furthermore, we note that Ĉ_{3,x,1} is not a single matrix but a collection of matrices, each indexed by an x. Effectively, the delta function I[x_2^l = x] partitions the observation triples according to x, and each Ĉ_{3,x,1} only gets a fraction of the data for its estimation.

Next, a 'thin' SVD is computed for Ĉ_{2,1}. Let its top N left singular vectors be Û; then the observable representation for the HMM (b̂_1, b̂_∞ and B̂_x) can be estimated by replacing the population quantities in (8) and (9) with their corresponding finite-sample counterparts. A key feature of the algorithm is that it does not explicitly estimate the transition and observation models; instead it estimates a set of observable quantities that differ from them by an invertible transformation. The core part of the algorithm is an SVD, which is free of local minima. Furthermore, Hsu et al. (2009) also prove that under suitable conditions this spectral algorithm for HMMs efficiently estimates both the marginal and predictive distributions.
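The estimation procedure of Section 2.2 can likewise be sketched in a few lines. The following illustrative NumPy snippet (not the authors' code) samples i.i.d. triples from a synthetic discrete HMM, forms the empirical quantities (10)–(12), recovers b̂_1, b̂_∞ and B̂_x via (8)–(9), and compares b̂_∞ B̂_{x_{t:1}} b̂_1 with the true unnormalized predictive vector O A_{x_{t:1}} π from equation (2). All names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, m = 3, 5, 200_000          # states, symbols, number of training triples

T = rng.random((N, N)); T /= T.sum(axis=0, keepdims=True)
O = rng.random((M, N)); O /= O.sum(axis=0, keepdims=True)
w, V = np.linalg.eig(T)
pi = np.real(V[:, np.argmax(np.real(w))]); pi /= pi.sum()
A = [T @ np.diag(O[x]) for x in range(M)]          # true observation operators

def sample_triple():
    """Draw (x1, x2, x3) from three consecutive steps of the HMM started at stationarity."""
    h, xs = rng.choice(N, p=pi), []
    for _ in range(3):
        xs.append(rng.choice(M, p=O[:, h]))
        h = rng.choice(N, p=T[:, h])
    return xs

X = np.array([sample_triple() for _ in range(m)])
e = np.eye(M)                                       # 1-of-M feature map

# Empirical estimates (10)-(12).
u1 = e[X[:, 0]].mean(axis=0)
C21 = e[X[:, 1]].T @ e[X[:, 0]] / m
C3 = [e[X[:, 2]][X[:, 1] == x].T @ e[X[:, 0]][X[:, 1] == x] / m for x in range(M)]

# Observable representation (8)-(9).
U = np.linalg.svd(C21)[0][:, :N]
P = np.linalg.pinv(U.T @ C21)
b1, binf = U.T @ u1, C21 @ P
B = [U.T @ C3[x] @ P for x in range(M)]

# Filter a short sequence and compare with the unnormalized predictive vector in (2).
seq = [1, 3, 0]
est = binf @ np.linalg.multi_dot([B[x] for x in reversed(seq)]) @ b1
true = O @ np.linalg.multi_dot([A[x] for x in reversed(seq)]) @ pi
print(np.round(est, 4), np.round(true, 4))          # should agree up to sampling noise
```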
3. Hilbert Space Embeddings of HMMs

The spectral algorithm for HMMs derived by Hsu et al. (2009) is only formulated for discrete random variables. Based on their formulation, it is not clear how one can apply the algorithm to the general case of continuous and structured variables. For instance, a difficulty lies in estimating Ĉ_{3,x,1}. As we mentioned earlier, to estimate each Ĉ_{3,x,1} we need to partition the observation triples according to x, and each Ĉ_{3,x,1} only gets a fraction of the data for its estimation. For continuous observations, x can take an infinite number of possible values, which makes the partition estimator impractical. Alternatively, one can perform Parzen window density estimation for continuous variables. However, further approximations are needed in order to make Parzen windows compatible with this spectral algorithm (Siddiqi et al., 2009).

In the following, we will derive a new representation and a kernel spectral algorithm for HMMs using a recent concept called Hilbert space embeddings of distributions (Smola et al., 2007; Sriperumbudur et al., 2008). The essence of our method is to represent distributions as points in Hilbert spaces, and update these points entirely in the Hilbert spaces using operators (Song et al., 2009). This new approach avoids the need for partitioning the data, making it applicable to any domain where kernels can be defined.

3.1. Hilbert space embeddings

Let F be a reproducing kernel Hilbert space (RKHS) associated with a kernel k(x, x′) := ⟨φ(x), φ(x′)⟩_F. Then for all functions f ∈ F and x ∈ X we have the reproducing property: ⟨f, φ(x)⟩_F = f(x), i.e. the evaluation of the function f at x can be written as an inner product. Examples of kernels include the Gaussian RBF kernel k(x, x′) = exp(−s‖x − x′‖²); however, kernel functions have also been defined on strings, graphs, and other structured objects.

Let P be the set of probability distributions on X, and X the random variable with distribution P ∈ P. Following Smola et al. (2007), we define the mapping of P ∈ P to the RKHS F, µ_X := E_{X∼P}[φ(X)], as the Hilbert space embedding of P, or simply the mean map. For all f ∈ F, E_{X∼P}[f(X)] = ⟨f, µ_X⟩_F by the reproducing property. A characteristic RKHS is one for which the mean map is injective: that is, each distribution has a unique embedding (Sriperumbudur et al., 2008). This property holds for many commonly used kernels (e.g. the Gaussian and Laplace kernels when X = ℝ^d).

As a special case of the mean map, the marginal probability vector of a discrete variable X is a Hilbert space embedding, i.e. (P(X = i))_{i=1}^M = µ_X. Here the kernel is the delta function k(x, x′) = I[x = x′], and the feature map is the 1-of-M representation for discrete variables (see Section 2.2). Given m i.i.d. observations {x^l}_{l=1}^m, an estimate of the mean map is straightforward: µ̂_X := (1/m) Σ_{l=1}^m φ(x^l) = (1/m) Υ 1_m, where Υ := (φ(x^1), ..., φ(x^m)) is a conceptual arrangement of the feature maps into columns. Furthermore, this estimate computes an approximation within an error of O_p(m^{-1/2}) (Smola et al., 2007).
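A small sketch (ours, purely illustrative) of how the empirical mean map is used in practice: µ̂_X is never formed explicitly; instead, inner products with it reduce to averages of kernel evaluations, so that ⟨f, µ̂_X⟩_F = (1/m) Σ_l f(x^l) ≈ E_P[f(X)] for an RKHS function f = Σ_i α_i k(z_i, ·). The distribution, centers and coefficients below are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(a, b, s=1.0):
    """Gaussian RBF kernel matrix with entries exp(-s * ||a_i - b_j||^2)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-s * d2)

# Sample m points from the underlying distribution P (here a 2-D Gaussian).
m = 500
X = rng.normal(size=(m, 2))

# An RKHS function f = sum_i alpha_i k(z_i, .), defined by a few centers and weights.
Z = rng.normal(size=(3, 2))
alpha = np.array([0.5, -1.0, 2.0])

# <f, mu_hat_X>_F computed purely via kernel evaluations on the sample.
est = (alpha @ rbf(Z, X)).mean()

# High-accuracy Monte Carlo reference for E_P[f(X)]; the gap is O_p(m^{-1/2}).
X_big = rng.normal(size=(200_000, 2))
ref = (alpha @ rbf(Z, X_big)).mean()
print(est, ref)
```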
3.2. Covariance operators

The covariance operator is a generalization of the covariance matrix. Given a joint distribution P(X, Y) over two variables X on X and Y on Y¹, the uncentered covariance operator C_{XY} is (Baker, 1973)

C_{XY} := E_{XY}[φ(X) ⊗ φ(Y)],   (13)

where ⊗ denotes the tensor product. Alternatively, C_{XY} can simply be viewed as an embedding of the joint distribution P(X, Y) using the joint feature map φ(x, y) := φ(x) ⊗ φ(y) (in the tensor product RKHS F ⊗ G). For discrete variables X and Y with delta kernels on both domains, the covariance operator coincides with the joint probability table, i.e. (P(X = i, Y = j))_{i,j=1}^M = C_{XY} (see also Section 2.2).

Given m pairs of i.i.d. observations {(x^l, y^l)}_{l=1}^m, we denote by Υ = (φ(x^1), ..., φ(x^m)) and Φ = (φ(y^1), ..., φ(y^m)). Conceptually, the covariance operator C_{XY} can then be estimated as Ĉ_{XY} = (1/m) Υ Φ^⊤. This estimate also computes an approximation within an error of O_p(m^{-1/2}) (Smola et al., 2007).

¹ G is an RKHS defined on Y with an associated kernel l(y, y′) = ⟨φ(y), φ(y′)⟩_G.

3.3. Conditional embedding operators

By analogy with the embedding of marginal distributions, the conditional density P(Y|x) can also be represented as an RKHS element, µ_{Y|x} := E_{Y|x}[φ(Y)]. We emphasize that µ_{Y|x} now traces out a family of embeddings in G, with each element corresponding to a particular value of x. These conditional embeddings can be defined via a conditional embedding operator C_{Y|X} : F → G (Song et al., 2009),

µ_{Y|x} = C_{Y|X} φ(x) := C_{YX} C_{XX}^{-1} φ(x).   (14)

For discrete variables with delta kernels, conditional embedding operators correspond exactly to conditional probability tables (CPTs), i.e. (P(Y = i | X = j))_{i,j=1}^M = C_{Y|X}, and each individual conditional embedding corresponds to one column of the CPT, i.e. (P(Y = i | X = x))_{i=1}^M = µ_{Y|x}.

Given m i.i.d. pairs {(x^l, y^l)}_{l=1}^m from P(X, Y), the conditional embedding operator can be estimated as

Ĉ_{Y|X} = Φ (Υ^⊤Υ + λmI)^{-1} Υ^⊤ = Φ (K + λmI)^{-1} Υ^⊤,   (15)

where we have defined the kernel matrix K := Υ^⊤Υ with (i, j)-th entry k(x^i, x^j). The regularization parameter λ is used to avoid overfitting. Song et al. (2009) also showed that ‖µ̂_{Y|x} − µ_{Y|x}‖_G = O_p(λ^{1/2} + (λm)^{-1/2}).
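A small sketch of the estimator (15) (ours; the toy joint distribution and all names are assumptions): the coefficient vector (K + λmI)^{-1} Υ^⊤φ(x*) defines µ̂_{Y|x*}, and pairing this embedding with the coordinate function of Y (treated informally as an RKHS element) recovers a kernel-ridge-regression-style conditional mean estimate, which we can compare against the true conditional mean.

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf(a, b, s=10.0):
    return np.exp(-s * (a[:, None] - b[None, :]) ** 2)   # 1-D Gaussian RBF kernel

# Training pairs from a joint P(X, Y): here Y = sin(2*pi*X) + noise (illustrative only).
m, lam = 400, 1e-3
x = rng.uniform(0, 1, size=m)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=m)

K = rbf(x, x)                                              # K = Upsilon^T Upsilon
W = np.linalg.solve(K + lam * m * np.eye(m), np.eye(m))    # (K + lambda*m*I)^{-1}

# mu_hat_{Y|x*} = Phi (K + lambda*m*I)^{-1} Upsilon^T phi(x*), as in equation (15);
# beta(x*) are its coefficients over the training points y^l.
x_star = np.linspace(0, 1, 5)
beta = W @ rbf(x, x_star)
cond_mean = y @ beta                                       # estimate of E[Y | X = x*]
print(np.round(cond_mean, 2))
print(np.round(np.sin(2 * np.pi * x_star), 2))             # ground truth for comparison
```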
3.4. Hilbert space observable representation

We will focus on the embedding µ_{X_{t+1}|x_{1:t}} of the predictive density P(X_{t+1}|x_{1:t}) of an HMM. Analogously to the discrete case, we first express µ_{X_{t+1}|x_{1:t}} in terms of a set of Hilbert space 'observable operators' A_x. Specifically, let the kernels on the observations and the hidden states be k(x, x′) = ⟨φ(x), φ(x′)⟩_F and l(h, h′) = ⟨φ(h), φ(h′)⟩_G respectively. For sufficiently rich RKHSs, we define a linear operator A_x : G → G such that

A_x φ(h_t) = P(X_t = x | h_t) E_{H_{t+1}|h_t}[φ(H_{t+1})].   (16)

Then, by applying variable elimination, we have

µ_{X_{t+1}|x_{1:t}} = E_{H_{t+1}|x_{1:t}} E_{X_{t+1}|H_{t+1}}[φ(X_{t+1})]
 = C_{X_{t+1}|H_{t+1}} E_{H_{t+1}|x_{1:t}}[φ(H_{t+1})]
 = C_{X_{t+1}|H_{t+1}} A_{x_t} E_{H_t|x_{1:t-1}}[φ(H_t)]
 = C_{X_{t+1}|H_{t+1}} A_{x_{t:1}} µ_{H_1},   (17)

where we used the following recursive relation:

E_{H_{t+1}|x_{1:t}}[φ(H_{t+1})] = E_{H_t|x_{1:t-1}}[ P(X_t = x_t | H_t) E_{H_{t+1}|H_t}[φ(H_{t+1})] ] = A_{x_t} E_{H_t|x_{1:t-1}}[φ(H_t)].   (18)

If we let O := C_{X_{t+1}|H_{t+1}}, T := C_{H_{t+1}|H_t} and π := µ_{H_1}, we obtain a form µ_{X_{t+1}|x_{1:t}} = O A_{x_{t:1}} π analogous to the discrete case (equation (2)). The key difference is that the Hilbert space representation is applicable to general domains on which kernels are defined. As in the discrete case, the operators A_x cannot be directly estimated from the data, since the hidden states are not provided.

Therefore we derive a representation for µ_{X_{t+1}|x_{1:t}} based only on observable quantities (assuming stationarity of the distribution):

µ_1 := E_{X_t}[φ(X_t)] = µ_{X_t},   (19)
C_{2,1} := E_{X_{t+1}X_t}[φ(X_{t+1}) ⊗ φ(X_t)] = C_{X_{t+1}X_t},   (20)
C_{3,x,1} := E_{X_{t+2}(X_{t+1}=x)X_t}[φ(X_{t+2}) ⊗ φ(X_t)] = P(X_{t+1} = x) C_{3,1|2} φ(x),   (21)

where we have defined C_{3,1|2} := C_{X_{t+2}X_t|X_{t+1}}. First, we examine the relation between these observable quantities and the unobserved O, T and π:

µ_1 = E_{H_t}[ E_{X_t|H_t}[φ(X_t)] ] = C_{X_t|H_t} E_{H_t}[φ(H_t)] = O π,   (22)
C_{2,1} = E_{H_t}[ E_{X_{t+1}H_{t+1}|H_t}[φ(X_{t+1})] ⊗ E_{X_t|H_t}[φ(X_t)] ] = C_{X_{t+1}|H_{t+1}} C_{H_{t+1}|H_t} C_{H_tH_t} C_{X_t|H_t}^⊤ = O T C_{H_tH_t} O^⊤,   (23)
C_{3,x,1} = E_{H_t}[ (O A_x T φ(H_t)) ⊗ E_{X_t|H_t}[φ(X_t)] ] = O A_x T C_{H_tH_t} O^⊤.   (24)

In (24), we plugged in the following expansion:

E_{X_{t+2}H_{t+2}H_{t+1}(X_{t+1}=x)|H_t}[φ(X_{t+2})] = E_{H_{t+1}|H_t}[ P(x|H_{t+1}) E_{H_{t+2}|H_{t+1}} E_{X_{t+2}|H_{t+2}}[φ(X_{t+2})] ] = O A_x T φ(H_t).   (25)

Second, analogous to the discrete case, we perform a 'thin' SVD of the covariance operator C_{2,1}, and take its top N left singular vectors U, such that the operator U^⊤O is invertible. Some simple algebraic manipulations establish the relation between observable and unobservable quantities:

β_1 := U^⊤ µ_1 = (U^⊤O) π,   (26)
β_∞ := C_{2,1} (U^⊤ C_{2,1})^† = O (U^⊤O)^{-1},   (27)
B_x := (U^⊤ C_{3,x,1}) (U^⊤ C_{2,1})^† = (U^⊤O) A_x (U^⊤O)^{-1}.   (28)

With β_1, β_∞ and B_{x_{t:1}}, the embedding µ_{X_{t+1}|x_{1:t}} can be expressed as a product of observable quantities:

µ_{X_{t+1}|x_{1:t}} = β_∞ B_{x_{t:1}} β_1.   (29)

In practice, C_{3,x,1} (in equation (24)) is difficult to estimate, since it requires partitioning the training samples according to X_{t+1} = x. Instead, we use C_{3,1|2} φ(x), which does not require such partitioning and is only a fixed multiplicative scalar P(x) away from C_{3,x,1}. We define B̄_x := (U^⊤ (C_{3,1|2} φ(x))) (U^⊤ C_{2,1})^†, and we have µ_{X_{t+1}|x_{1:t}} ∝ β_∞ B̄_{x_{t:1}} β_1.

We may also want to predict i steps into the future, i.e. obtain the embedding µ_{X_{t+i}|x_{t:1}} instead of µ_{X_{t+1}|x_{t:1}}. This can be achieved by defining an i-step covariance operator C_{i+1,1} := E_{X_{t+i}X_t}[φ(X_{t+i}) ⊗ φ(X_t)] and replacing C_{2,1} in equation (27) by C_{i+1,1}. We then obtain the embedding µ_{X_{t+i}|x_{t:1}} ∝ β_∞^i B̄_{x_{t:1}} β_1, where we use β_∞^i to denote C_{i+1,1} (U^⊤ C_{2,1})^†.
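In the finite discrete case all of the operators above are ordinary matrices, so the identities (22)–(24) and (26)–(28) can be checked numerically. The following sketch (illustrative, not from the paper) builds the population quantities from known T, O and π, with C_{H_tH_t} = diag(π) at stationarity, and verifies the transformed quantities directly.

```python
import numpy as np

rng = np.random.default_rng(4)
N, M = 3, 6

T = rng.random((N, N)); T /= T.sum(axis=0, keepdims=True)
O = rng.random((M, N)); O /= O.sum(axis=0, keepdims=True)
w, V = np.linalg.eig(T)
pi = np.real(V[:, np.argmax(np.real(w))]); pi /= pi.sum()

A = [T @ np.diag(O[x]) for x in range(M)]     # A_x = T diag(O_{x,:})
C_HH = np.diag(pi)                            # stationary C_{H_t H_t}

# Population observable quantities, equations (22)-(24).
mu1 = O @ pi
C21 = O @ T @ C_HH @ O.T
C3 = [O @ A[x] @ T @ C_HH @ O.T for x in range(M)]

# Thin SVD and transformed quantities, equations (26)-(28).
U = np.linalg.svd(C21)[0][:, :N]
P = np.linalg.pinv(U.T @ C21)
beta1, beta_inf = U.T @ mu1, C21 @ P
B = [U.T @ C3[x] @ P for x in range(M)]

# Check the right-hand sides of (26)-(28) with S = U^T O.
S = U.T @ O
S_inv = np.linalg.inv(S)
assert np.allclose(beta1, S @ pi)                                  # (26)
assert np.allclose(beta_inf, O @ S_inv)                            # (27)
assert all(np.allclose(B[x], S @ A[x] @ S_inv) for x in range(M))  # (28)
```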
3.5. Kernel spectral algorithm for HMMs

Given a sample of m i.i.d. triples {(x_1^l, x_2^l, x_3^l)}_{l=1}^m from an HMM, the kernel spectral algorithm for HMMs proceeds by first performing a 'thin' SVD of the sample covariance Ĉ_{2,1}. Specifically, we denote the feature matrices Υ = (φ(x_1^1), ..., φ(x_1^m)) and Φ = (φ(x_2^1), ..., φ(x_2^m)), and estimate Ĉ_{2,1} = (1/m) Φ Υ^⊤. A left singular vector v = Φβ (β ∈ ℝ^m) can then be estimated as follows:

v = Φβ,  Ĉ_{2,1} Ĉ_{2,1}^⊤ v ∝ Φ K L β = ω Φ β  ⟺  L K L β = ω L β,  (β ∈ ℝ^m, ω ∈ ℝ),   (30)

where K = Υ^⊤Υ and L = Φ^⊤Φ are the kernel matrices, and β is the generalized eigenvector. After normalization, we have v^⊤v = β^⊤Lβ = 1. The Û operator in equations (26), (27) and (28) is then the column concatenation of the top N left singular vectors, i.e. Û = (v_1, ..., v_N). If we let A := (β_1, ..., β_N) ∈ ℝ^{m×N} be the column concatenation of the top N generalized eigenvectors β_i, and D := diag((β_1^⊤Lβ_1)^{-1/2}, ..., (β_N^⊤Lβ_N)^{-1/2}) ∈ ℝ^{N×N}, we can concisely express Û = ΦAD.

Next we estimate µ̂_1 = (1/m) Υ 1_m and, according to (26), β̂_1 = (1/m) D A^⊤ Φ^⊤ Υ 1_m. Similarly, according to (27), β̂_∞ = Ĉ_{2,1} (Û^⊤ Ĉ_{2,1})^†, where Û^⊤ Ĉ_{2,1} = (1/m) D A^⊤ L Υ^⊤; carrying out the pseudo-inverse gives β̂_∞ = Φ K L A D Σ^{-1}, where we have defined Σ := diag(ω_1, ..., ω_N) and used the relations L K L A = L A Σ and A^⊤ L A = D^{-2}. Last, denote Ψ = (φ(x_3^1), ..., φ(x_3^m)); then the conditional operator in (21) is estimated as Ĉ_{3,1|2} φ(·) = Ψ diag((L + λI)^{-1} Φ^⊤ φ(·)) Υ^⊤, which, plugged into (28), yields the estimate of B̄_x used in step 7 below.

Algorithm 1  Kernel Spectral Algorithm for HMMs
In: m i.i.d. triples {(x_1^l, x_2^l, x_3^l)}_{l=1}^m, a sequence x_{1:t}.
Out: µ̂_{X_{t+1}|x_{t:1}}
1: Denote feature matrices Υ = (φ(x_1^1), ..., φ(x_1^m)), Φ = (φ(x_2^1), ..., φ(x_2^m)) and Ψ = (φ(x_3^1), ..., φ(x_3^m)).
2: Compute kernel matrices K = Υ^⊤Υ, L = Φ^⊤Φ, G = Φ^⊤Υ and F = Φ^⊤Ψ.
3: Compute the top N generalized eigenvectors β_i using L K L β_i = ω_i L β_i (ω_i ∈ ℝ and β_i ∈ ℝ^m).
4: Denote A = (β_1, ..., β_N), Σ = diag(ω_1, ..., ω_N) and D = diag((β_1^⊤Lβ_1)^{-1/2}, ..., (β_N^⊤Lβ_N)^{-1/2}).
5: β̂_1 = (1/m) D A^⊤ G 1_m
6: β̂_∞ = Φ Q, where Q = K L A D Σ^{-1}
7: P(x_τ) B̂_{x_τ} = (1/m) D A^⊤ F diag((L + λI)^{-1} Φ^⊤ φ(x_τ)) Q, for τ = 1, ..., t
8: µ̂_{X_{t+1}|x_{t:1}} = β̂_∞ B̂_{x_{t:1}} β̂_1

The kernel spectral algorithm for HMMs is summarized in Algorithm 1. Note that in the algorithm we assume that the marginal probabilities P(x_τ) (τ = 1, ..., t) are provided; in practice, these quantities are never explicitly estimated. Therefore, the algorithm returns β̂_∞ B̄̂_{x_{t:1}} β̂_1, which is just a constant scaling away from µ̂_{X_{t+1}|x_{t:1}} (note that B̄_x := B_x / P(x)).
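For concreteness, here is a minimal NumPy/SciPy sketch that follows the structure of Algorithm 1 for vector-valued observations with a Gaussian RBF kernel. It is our reading of the algorithm, not the authors' implementation: the small ridge added to L, the placement of the regularizer λ, and all function and variable names are assumptions, and the returned quantities are only defined up to the constant scaling discussed above.

```python
import numpy as np
from scipy.linalg import eigh

def rbf(a, b, s):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-s * d2)

def train_embedded_hmm(x1, x2, x3, N=10, s=1.0, lam=1e-4):
    """Training part of Algorithm 1 (sketch) for m i.i.d. triples of d-dim observations."""
    m = len(x1)
    K, L = rbf(x1, x1, s), rbf(x2, x2, s)               # K = Ups^T Ups, L = Phi^T Phi
    G, F = rbf(x2, x1, s), rbf(x2, x3, s)               # G = Phi^T Ups, F = Phi^T Psi
    # Step 3: top-N generalized eigenvectors of L K L beta = omega L beta.
    # The tiny ridge keeps L positive definite (our addition for numerical stability).
    omega, Beta = eigh(L @ K @ L, L + 1e-8 * np.eye(m))  # eigenvalues in ascending order
    omega, A = omega[-N:], Beta[:, -N:]
    D = np.diag(1.0 / np.sqrt(np.einsum('ij,jk,ki->i', A.T, L, A)))
    Sigma_inv = np.diag(1.0 / omega)
    beta1 = D @ A.T @ G @ np.ones(m) / m                 # step 5
    Q = K @ L @ A @ D @ Sigma_inv                        # step 6: beta_inf = Phi Q
    W = np.linalg.solve(L + lam * np.eye(m), np.eye(m))  # (L + lam I)^{-1}, reused in step 7
    return dict(x2=x2, s=s, m=m, D=D, A=A, F=F, Q=Q, W=W, beta1=beta1)

def filter_coeffs(model, x_hist):
    """Steps 7-8: coefficients alpha such that mu_hat = Phi @ alpha (up to constant scaling)."""
    b = model['beta1']
    for x in x_hist:                                     # apply bar{B}_x in the order x_1..x_t
        kx = rbf(model['x2'], x[None, :], model['s']).ravel()
        Bx = (model['D'] @ model['A'].T @ model['F']
              @ np.diag(model['W'] @ kx) @ model['Q']) / model['m']
        b = Bx @ b
    return model['Q'] @ b                                # coefficients over phi(x_2^l)

def predictive_score(model, alpha, x_new):
    """Compatibility <mu_hat_{X_{t+1}|x_{1:t}}, phi(x_new)>, up to the constant scaling."""
    return float(rbf(x_new[None, :], model['x2'], model['s']) @ alpha)
```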
3.6. Sample complexity

In this section, we analyze the sample complexity of our kernel spectral algorithm for HMMs. In particular, we want to investigate how the difference between the estimated embedding µ̂_{X_{t+1}|x_{1:t}} and its population counterpart scales with the number m of training samples and the length t of the conditioning sequence x_{1:t}. We use Hilbert space distances as our error measure and obtain the following result (the proof follows the template of Hsu et al. (2009) and can be found in the appendix):

Theorem 1  Assume ‖φ(x)‖_F ≤ 1, ‖φ(h)‖_G ≤ 1 and max_x ‖A_x‖_2 ≤ 1. Then ‖µ̂_{X_{t+1}|x_{1:t}} − µ_{X_{t+1}|x_{1:t}}‖_F = O_p(t(λ^{1/2} + (λm)^{-1/2})).

We expect that Theorem 1 can be further improved. Currently it suggests that, for a sequence of length t, in order to obtain an unbiased estimator of µ_{X_{t+1}|x_{1:t}} we need to decrease λ with a schedule of O_p(m^{-1/2}), which yields an overall convergence rate of O_p(t m^{-1/4}). Second, the assumption max_x ‖A_x‖_2 ≤ 1 imposes smoothness constraints on the likelihood function P(x|H_t) for the theorem to hold. Finally, the current bound depends on the length t of the conditioning sequence. Hsu et al. (2009) provide a result that is independent of t using the KL-divergence as the error measure. For Hilbert space embeddings, it remains an open question how to estimate the KL-divergence and obtain a bound independent of t.

3.7. Predicting future observations

We have shown how to maintain the Hilbert space embedding µ_{X_{t+1}|x_{1:t}} of the predictive distribution P(X_{t+1}|x_{1:t}). The goal here is to determine the most probable future observation based on µ_{X_{t+1}|x_{1:t}}. We note that, in general, we cannot directly obtain the probability of a future observation from the embedding representation of the distribution. However, for a Gaussian RBF kernel defined over a compact subset of a real vector space, the embedding µ_{X_{t+1}|x_{1:t}} can be viewed as a nonparametric density estimator after proper normalization. In particular, let f be a constant function in the RKHS such that ⟨f, φ(X_{t+1})⟩_F = 1; then the normalization constant Z can be estimated as Ẑ = ⟨f, µ̂_{X_{t+1}|x_{1:t}}⟩_F. Since µ̂_{X_{t+1}|x_{1:t}} is represented as a weighted combination Σ_{l=1}^m α_l φ(x^l) of feature-mapped training points, Ẑ is simply Σ_{l=1}^m α_l. We can then find the maximum a posteriori (MAP) future observation by

x̂_{t+1} = argmax_{x_{t+1}} ⟨µ̂_{X_{t+1}|x_{1:t}}, φ(x_{t+1})⟩_F / Ẑ.   (31)

Since ‖φ(x)‖_F = 1 for a Gaussian RBF kernel, a geometric interpretation of the above MAP estimate is to find a delta distribution δ_{x_{t+1}} whose embedding φ(x_{t+1}) is closest to µ̂_{X_{t+1}|x_{1:t}}, i.e. x̂_{t+1} = argmin_{x_{t+1}} ‖φ(x_{t+1}) − µ̂_{X_{t+1}|x_{1:t}}‖_F. The optimization in (31) may be a hard problem in general. In some cases, however, it is possible to define the feature map φ(x) in such a way that an efficient algorithm for solving the optimization can be obtained, e.g. Cortes et al. (2005). In practice, we can also decode x̂_{t+1} by choosing the best candidate from the existing training examples.

3.8. Learning with sequences of observations

In the learning algorithm formulated above, each variable X_t corresponds to a single observation x_t from a data sequence. In this case, the operator C_{2,1} only captures the dependence between a single past observation and a single future observation (and similarly for C_{3,x,1}). In system identification theory, this corresponds to assuming 1-step observability (Van Overschee & De Moor, 1996), which is unduly restrictive for many partially observable real-world dynamical systems of interest. More complex sufficient statistics of the past and future may need to be modeled, such as the block Hankel matrix formulations for subspace methods (Van Overschee & De Moor, 1996), to identify linear systems that are not 1-step observable. To overcome this limitation, one can consider sequences of observations in the past and future and estimate the operators C_{future,past} and C_{future,x,past} accordingly (Figure 1). As long as the past and future sequences never overlap, these matrices have rank equal to that of the dynamics model, and the theoretical properties of the learning algorithm continue to hold (see Siddiqi et al. (2009) for details).

Figure 1. Operators C_{future,past} and C_{future,x,past} capture the dependence between sequences of k past and j future observations instead of single past and future observations.
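One simple way to realize the sequence-of-observations construction of Section 3.8 is to stack k consecutive observations into a single vector and form non-overlapping past/middle/future windows, which then play the role of the training triples in Algorithm 1. A sketch follows; the window length and the plain concatenation are illustrative choices (a product kernel over the window is an alternative), and triples sliced from a single long sequence are of course not strictly i.i.d.

```python
import numpy as np

def make_windowed_triples(X, k=20):
    """From a sequence X of shape (T, d), form stacked past/middle/future windows.

    Each training triple (x1, x2, x3) is replaced by three consecutive, non-overlapping
    windows of k observations, flattened into k*d-dimensional vectors.
    """
    T, d = X.shape
    n = T - 3 * k + 1                     # number of triples we can extract
    win = lambda s: np.stack([X[s + i : s + i + k].ravel() for i in range(n)])
    return win(0), win(k), win(2 * k)     # past, middle, future windows

# Example: a scalar time series of length 500 with k = 20 yields 441 stacked triples.
X = np.sin(np.linspace(0, 60, 500))[:, None] \
    + 0.05 * np.random.default_rng(5).normal(size=(500, 1))
x1, x2, x3 = make_windowed_triples(X, k=20)
print(x1.shape, x2.shape, x3.shape)       # (441, 20), (441, 20), (441, 20)
```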
4. Experimental Results

We designed three sets of experiments to evaluate the effectiveness of learning embedded HMMs for difficult real-world filtering and prediction tasks. In each case we compare the learned embedded HMM to several alternative time series models, including (I) linear dynamical systems (LDS) learned by Subspace Identification (Subspace ID) (Van Overschee & De Moor, 1996) with stability constraints (Siddiqi et al., 2008), (II) discrete HMMs learned by EM, and (III) the Reduced-rank HMM (RR-HMM) learned by spectral methods (Siddiqi et al., 2009). In these experiments we demonstrate that the kernel spectral learning algorithm for embedded HMMs achieves state-of-the-art performance.

Robot Vision. In this experiment, a video of 2000 frames was collected at 6 Hz from a Point Grey Bumblebee2 stereo camera mounted on a Botrics Obot d100 mobile robot platform circling a stationary obstacle (under imperfect human control) (Figure 2(A)), and 1500 frames were used as training data for each model. Each frame from the training data was reduced to 100 dimensions via SVD on single observations. The goal of this experiment was to learn a model of the noisy video and, after filtering, to predict future image observations. We trained a 50-dimensional² embedded HMM using Algorithm 1 with sequences of 20 consecutive observations (Section 3.8). Gaussian RBF kernels were used, with the bandwidth parameter set to the median squared distance between training points (the 'median trick') and the regularization parameter λ set to 10⁻⁴. For comparison, a 50-dimensional RR-HMM with Parzen windows was also learned with sequences of 20 observations (Siddiqi et al., 2009); a 50-dimensional LDS was learned using Subspace ID with Hankel matrices of 20 time steps; and finally, a 50-state discrete HMM with axis-aligned Gaussian observation models was learned using the EM algorithm run until convergence.

² Set N = 50 in Algorithm 1.

For each model, we performed filtering³ for different extents t_1 = 100, 101, ..., 250, then predicted an image a further t_2 steps in the future, for t_2 = 1, 2, ..., 100. The squared error of this prediction in pixel space was recorded and averaged over all the different filtering extents t_1 to obtain the means plotted in Figure 2(B). As baselines, we also plot the error obtained by using the mean of the filtered data as a predictor (Mean) and the error obtained by using the last filtered observation (Last). All of the more complex algorithms perform better than the baselines (though, as expected, the 'Last' predictor is a good one-step predictor), indicating that this is a nontrivial prediction problem. The embedded HMM learned by the kernel spectral algorithm yields significantly lower prediction error than each of the alternatives (including the recently published RR-HMM), consistently for the duration of the prediction horizon (100 timesteps, i.e. 16 seconds).

³ Update models online with incoming observations.

Figure 2. Robot vision data. (A) Sample images from the robot's camera, and the hallway environment with a central obstacle (black) and the path that the robot took through the environment (the red counterclockwise ellipse). (B) Squared error for prediction with different estimated models and baselines.

Slot Car Inertial Measurement. In a second experiment, the setup consisted of a track and a miniature car (1:32 scale model) guided by a slot cut into the track. Figure 3(A) shows the car and the attached IMU (an Intel Inertiadot) in the upper panel, and the 14 m racetrack, which contains elevation changes and banked curves, in the lower panel. At each time step we extracted the estimated 3-D acceleration of the car and the estimated difference in the 3-D orientation of the car from the previous time step, at a rate of 10 Hz. We collected 3000 successive measurements of this data while the slot car circled the track controlled by a constant policy. The goal was to learn a model of the noisy IMU data and, after filtering, to predict future readings.

Figure 3. Slot car inertial measurement data. (A) The slot car platform with the attached IMU (top) and the racetrack (bottom). (B) Squared error for prediction with different estimated models and baselines.

We trained a 20-dimensional embedded HMM using Algorithm 1 with sequences of 150 consecutive observations (Section 3.8). The bandwidth parameter of the Gaussian RBF kernels was set with the 'median trick', and the regularization parameter was 10⁻⁴. For comparison, a 20-dimensional RR-HMM with Parzen windows was learned, also with sequences of 150 observations; a 20-dimensional LDS was learned using Subspace ID with Hankel matrices of 150 time steps; and finally, a 20-state discrete HMM (with 400 levels of discretization for observations) was learned using the EM algorithm.
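The 'median trick' referred to above is a standard bandwidth heuristic; one common variant is sketched below (our formulation and normalization, not necessarily the exact one used in these experiments).

```python
import numpy as np

def median_trick_bandwidth(X, max_points=2000, seed=0):
    """Set the RBF kernel parameter s in k(x, x') = exp(-s * ||x - x'||^2).

    One common variant of the 'median trick': s = 1 / median(||x_i - x_j||^2) over
    (a subsample of) the training points. Conventions vary (some use the median
    distance rather than the squared distance, or 1/(2*sigma^2)); this is illustrative.
    """
    rng = np.random.default_rng(seed)
    if len(X) > max_points:                       # subsample to keep the O(n^2) cost small
        X = X[rng.choice(len(X), max_points, replace=False)]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    med = np.median(d2[np.triu_indices(len(X), k=1)])
    return 1.0 / med
```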
For each model, we performed filtering for different extents t_1 = 100, 101, ..., 250, then predicted the IMU reading a further t_2 steps in the future, for t_2 = 1, 2, ..., 100. The squared error of this prediction in the IMU's measurement space was recorded and averaged over all the different filtering extents t_1 to obtain the means plotted in Figure 3(B). Again, the embedded HMM yields lower prediction error than each of the alternatives, consistently for the duration of the prediction horizon.

Audio Event Classification. Our final experiment concerns an audio classification task. The data, recently presented in Ramos et al. (2010), consisted of sequences of 13-dimensional Mel-Frequency Cepstral Coefficients (MFCC) obtained from short clips of raw audio data recorded using a portable sensor device. Six classes of labeled audio clips were present in the data, one being Human speech. For this experiment we grouped the other five classes into a single class of Non-human sounds to formulate a binary Human vs. Non-human classification task. Since the original data had a disproportionately large amount of Human speech samples, this grouping resulted in a more balanced dataset, with 40 minutes 11 seconds of Human and 28 minutes 43 seconds of Non-human audio data. To reduce noise and training time, we averaged the data every 100 timesteps (equivalent to 1 second).

For each of the two classes, we trained embedded HMMs with 10, 20, ..., 50 latent dimensions using spectral learning and Gaussian RBF kernels with the bandwidth set by the 'median trick'; the regularization parameter was 10⁻¹. For comparison, regular HMMs with axis-aligned Gaussian observation models, LDSs and RR-HMMs were trained using multi-restart EM (to avoid local minima), stable Subspace ID, and the spectral algorithm of Siddiqi et al. (2009), respectively, also with 10, ..., 50 latent dimensions.

For RR-HMMs, regular HMMs and LDSs, the class-conditional data sequence likelihood is the scoring function for classification. For embedded HMMs, the scoring function for a test sequence x_{1:t} is the log of the product of the compatibility scores for each observation, i.e. Σ_{τ=1}^t log ⟨φ(x_τ), µ̂_{X_τ|x_{1:τ-1}}⟩_F.

For each model size, we performed 50 random 2:1 partitions of the data from each class and used the resulting datasets for training and testing respectively. The mean accuracy and 95% confidence intervals over these 50 randomizations are reported in Figure 4. The graph indicates that embedded HMMs have higher accuracy and lower variance than the other standard alternatives at every model size. Though other learning algorithms for HMMs and LDSs exist, our experiment shows this to be a non-trivial sequence classification problem on which embedded HMMs significantly outperform commonly used sequential models trained using typical learning and model selection methods.

Figure 4. Accuracies and 95% confidence intervals for Human vs. Non-human audio event classification, comparing embedded HMMs to other common sequential models at different latent state space sizes.
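The classification rule described above can be sketched as follows; `compat` stands for a compatibility-score function that can be assembled from the helpers sketched after Algorithm 1, all names here are illustrative, and the numerical floor is our addition rather than something stated in the paper.

```python
import numpy as np

def sequence_score(x_seq, compat, eps=1e-12):
    """Class-conditional score  sum_tau log <phi(x_tau), mu_hat_{X_tau | x_{1:tau-1}}>_F.

    `compat(history, x_next)` should return the compatibility score of x_next under the
    predictive embedding conditioned on `history`, for one trained class model.
    """
    total = 0.0
    for tau in range(len(x_seq)):
        c = compat(x_seq[:tau], x_seq[tau])
        total += np.log(max(c, eps))           # floor guards against non-positive scores
    return total

def classify(class_compat_fns, x_seq):
    """Assign the label whose embedded HMM yields the highest sequence score."""
    scores = {label: sequence_score(x_seq, fn) for label, fn in class_compat_fns.items()}
    return max(scores, key=scores.get)

# Usage sketch (hypothetical names): given models m_h, m_n trained with the earlier sketch,
#   fns = {'human':    lambda h, x: predictive_score(m_h, filter_coeffs(m_h, h), x),
#          'nonhuman': lambda h, x: predictive_score(m_n, filter_coeffs(m_n, h), x)}
#   label = classify(fns, test_sequence)
```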
5. Conclusion

We proposed a Hilbert space embedding of HMMs that extends traditional HMMs to structured and non-Gaussian continuous observation distributions. The essence of this new approach is to represent distributions as elements in Hilbert spaces, and to update these elements entirely in the Hilbert spaces using operators. This allows us to derive a local-minimum-free kernel spectral algorithm for learning embedded HMMs, which exceeds the previous state-of-the-art on challenging real-world problems. We believe that this new way of combining kernel methods and graphical models can potentially solve many other difficult problems in graphical models and advance kernel methods to more structured territory.

Acknowledgements

LS is supported by a Ray and Stephenie Lane fellowship. SMS was supported by the NSF under grant number 0000164, by the USAF under grant number FA8650-05C-7264, by the USDA under grant number 4400161514, and by a project with MobileFusion/TTC. BB was supported by the NSF under grant number EEEC-0540865. BB and GJG were supported by ONR MURI grant number N00014-09-1-1052.

References

Baker, C. (1973). Joint measures and cross-covariance operators. Trans. A.M.S., 186, 273–289.

Cortes, C., Mohri, M., & Weston, J. (2005). A general regression technique for learning transductions. In ICML.

Hsu, D., Kakade, S., & Zhang, T. (2009). A spectral algorithm for learning hidden Markov models. In COLT.

Jaeger, H. (2000). Observable operator models for discrete stochastic time series. Neural Computation, 12(6), 1371–1398.

Ramos, J., Siddiqi, S., Dubrawski, A., Gordon, G., & Sharma, A. (2010). Automatic state discovery for unstructured audio scene classification. In ICASSP.

Siddiqi, S., Boots, B., & Gordon, G. (2008). A constraint generation approach to learning stable linear dynamical systems. In NIPS.

Siddiqi, S., Boots, B., & Gordon, G. (2009). Reduced-rank hidden Markov models. http://arxiv.org/abs/0910.0902.

Smola, A., Gretton, A., Song, L., & Schölkopf, B. (2007). A Hilbert space embedding for distributions. In ALT.

Song, L., Huang, J., Smola, A., & Fukumizu, K. (2009). Hilbert space embeddings of conditional distributions. In ICML.

Sriperumbudur, B., Gretton, A., Fukumizu, K., Lanckriet, G., & Schölkopf, B. (2008). Injective Hilbert space embeddings of probability measures. In COLT.

Van Overschee, P., & De Moor, B. (1996). Subspace identification for linear systems: Theory, implementation, applications. Kluwer.