Estimation of (near) low-rank matrices with noise and high-dimensional scaling

Sahand Negahban (sahand_n@eecs.berkeley.edu), Department of EECS, University of California, Berkeley, CA 94720, USA
Martin J. Wainwright (wainwrig@eecs.berkeley.edu), Department of Statistics and EECS, University of California, Berkeley, CA 94720, USA

Abstract
We study an instance of high-dimensional statistical inference in which the goal is to use N noisy observations to estimate a matrix Θ* ∈ R^{k×p} that is assumed to be either exactly low rank, or "near" low-rank, meaning that it can be well-approximated by a matrix with low rank. We consider an M-estimator based on regularization by the trace or nuclear norm over matrices, and analyze its performance under high-dimensional scaling. We provide non-asymptotic bounds on the Frobenius norm error that hold for a general class of noisy observation models, and apply to both exactly low-rank and approximately low-rank matrices. We then illustrate their consequences for a number of specific learning models, including low-rank multivariate or multi-task regression, system identification in vector autoregressive processes, and recovery of low-rank matrices from random projections. Simulations show excellent agreement with the high-dimensional scaling of the error predicted by our theory.

1. Introduction
High-dimensional inference refers to instances of statistical estimation in which the ambient dimension of the data is comparable to (or possibly larger than) the sample size. Problems with a high-dimensional character arise in a variety of applications in science and engineering, including analysis of gene array data, medical imaging, remote sensing, and astronomical data analysis. In settings where the number of parameters may be large relative to the sample size, the utility of classical "fixed p" results is questionable, and accordingly, a line of on-going statistical research seeks to obtain results that hold under high-dimensional scaling, meaning that both the problem size and sample size (as well as other problem parameters) may tend to infinity simultaneously. It is usually impossible to obtain consistent procedures in such settings without imposing some sort of additional constraints. Accordingly, there are now various lines of work on high-dimensional inference based on imposing different types of structural constraints. A substantial body of past work has focused on models with sparsity constraints (e.g., (1; 2; 3)). A theme common to much of this work is the use of the ℓ1-penalty as a surrogate function to enforce the sparsity constraint.

In this paper, we focus on the problem of high-dimensional inference in the setting of matrix estimation. In contrast to past work, our interest is in estimating a matrix Θ* ∈ R^{k×p} that is either exactly low rank, meaning that it has at most r ≪ min{k, p} non-zero singular values, or more generally is near low-rank, meaning that it can be well-approximated by a matrix of low rank. As we discuss at more length in the sequel, such exact or approximate low-rank conditions are appropriate for many applications, including multivariate or multi-task forms of regression, system identification for autoregressive processes, collaborative filtering, and matrix recovery from random projections. Analogous to the use of an ℓ1-regularizer for enforcing sparsity, we consider the use of the nuclear norm (also known as the trace norm) for enforcing a rank constraint in the matrix setting. By definition, the nuclear norm is the sum of the singular values of a matrix, and so encourages sparsity in the vector of singular values, or equivalently encourages the matrix to be low-rank.
The problem of low-rank approximation has been studied by various researchers in optimization and machine learning (e.g., (4; 5; 6; 7; 8)), with the nuclear norm relaxation studied for (among other problems) noiseless random projections (9), as well as for matrix completion problems (10; 11). In addition, Bach (7) has provided some consistency results for nuclear norm regularization in the classical (fixed "p") setting, but not in the high-dimensional setting considered here.

The goal of this paper is to analyze the nuclear norm relaxation for a general class of noisy observation models, and to obtain non-asymptotic error bounds on the Frobenius norm that hold under high-dimensional scaling and are applicable to both exactly and approximately low-rank matrices. We begin by presenting a generic observation model, and illustrating how it can be specialized to several cases of interest, including low-rank multivariate regression, estimation of autoregressive processes, and random projection (compressed sensing) observations. Our theoretical results on these models are obtained by leveraging ideas from our own past work (12) on M-estimators with decomposable regularizers, where it is shown that error rates can be obtained by bounding the restricted strong convexity (RSC) parameter and specifying a suitable choice of the regularization parameter. Establishing bounds on these quantities for specific models can involve some non-trivial analysis, and in this paper, we use random matrix theory to provide the requisite control.

Notation: For the convenience of the reader, we collect standard pieces of notation here. For a pair of matrices A and B with commensurate dimensions, we let ⟨⟨A, B⟩⟩ = trace(B A^T) denote the trace inner product on matrix space. For a matrix Θ ∈ R^{k×p}, we let m = min{k, p}, and denote its (ordered) singular values by σ_1(Θ) ≥ σ_2(Θ) ≥ . . . ≥ σ_m(Θ) ≥ 0. We also use the notation σ_max(Θ) = σ_1(Θ) and σ_min(Θ) = σ_m(Θ) to refer to the maximal and minimal singular values, respectively. We use the notation ||| · ||| for various types of matrix norms based on these singular values, including the nuclear norm |||Θ|||_1 = Σ_{j=1}^m σ_j(Θ), the spectral or operator norm |||Θ|||_op = σ_1(Θ), and the Frobenius norm |||Θ|||_F = sqrt(trace(Θ Θ^T)) = sqrt(Σ_{j=1}^m σ_j^2(Θ)). We refer the reader to Horn and Johnson (13) for more background on these matrix norms and their properties.
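As a quick illustration of this notation, the singular-value norms above can be computed in a few lines of NumPy; the matrix below is an arbitrary example, not one drawn from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
Theta = rng.standard_normal((5, 8))           # any Theta in R^{k x p}

# Ordered singular values sigma_1(Theta) >= ... >= sigma_m(Theta), m = min(k, p).
sigma = np.linalg.svd(Theta, compute_uv=False)

nuclear_norm = sigma.sum()                    # |||Theta|||_1: sum of singular values
operator_norm = sigma[0]                      # |||Theta|||_op: maximal singular value
frobenius_norm = np.sqrt((sigma ** 2).sum())  # |||Theta|||_F

# Cross-check against NumPy's built-in matrix norms.
assert np.isclose(operator_norm, np.linalg.norm(Theta, 2))
assert np.isclose(frobenius_norm, np.linalg.norm(Theta, "fro"))
```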
2. Background and problem set-up
We begin with some background on problems and applications in which rank constraints arise, before describing a generic observation model. We then introduce the semidefinite program (SDP) based on nuclear norm regularization that we study in this paper.

2.1. Models with rank constraints
Imposing a rank r constraint on a matrix Θ* ∈ R^{k×p} is equivalent to requiring that the rows (or columns) of Θ* lie in some r-dimensional subspace of R^p (or R^k, respectively). Such types of rank constraints (or approximate forms thereof) arise in a variety of applications, as we discuss here. In some sense, rank constraints are a generalization of sparsity constraints; rather than assuming that the data is sparse in a known basis, a rank constraint implicitly imposes sparsity, but without assuming the basis.

We first consider the problem of multivariate regression, also referred to as multi-task learning in statistical machine learning. The goal of multivariate regression is to estimate a prediction function that maps covariates Z_j ∈ R^p to multi-dimensional output vectors Y_j ∈ R^k. More specifically, let us consider the linear model, specified by a matrix Θ* ∈ R^{k×p}, of the form

    Y_a = Θ* Z_a + W_a,   for a = 1, . . . , n,   (1)

where {W_a}_{a=1}^n is an i.i.d. sequence of k-dimensional zero-mean noise vectors. Given a collection of observations {Z_a, Y_a}_{a=1}^n of covariate-output pairs, our goal is to estimate the unknown matrix Θ*. This type of model has been used in many applications, including analysis of fMRI image data, neural response modeling, and analysis of financial data. This model and closely related ones also arise in the problem of collaborative filtering (5), in which the goal is to predict users' preferences for items (such as movies or music) based on their and other users' ratings of related items.

As a second (not unrelated) example, we consider the problem of system identification in vector autoregressive processes (see the book (14) for a detailed background). A vector autoregressive (VAR) process in p dimensions is a stochastic process {Z_t}_{t=1}^∞ specified by an initialization Z_1 ∈ R^p, followed by the recursion

    Z_{t+1} = Θ* Z_t + W_t,   for t = 1, 2, 3, . . . .   (2)

In this recursion, the sequence {W_t}_{t=1}^∞ consists of i.i.d. samples of innovations noise. We assume that each vector W_t ∈ R^p is zero-mean with covariance σ^2 I, so that the process {Z_t}_{t=1}^∞ is zero-mean and has a covariance matrix Σ given by the solution of the discrete-time Riccati equation

    Σ = Θ* Σ (Θ*)^T + σ^2 I.   (3)

The goal of system identification in a VAR process is to estimate the unknown matrix Θ* ∈ R^{p×p} on the basis of a sequence of samples {Z_t}_{t=1}^n. In many application domains, it is natural to expect that the system is controlled primarily by a low-dimensional subset of variables. For instance, models of financial data might have an ambient dimension p in the thousands (including stocks, bonds, and other financial instruments), but the behavior of the market might be governed by a much smaller set of macro-variables (combinations of these financial instruments). Similar statements apply to other types of time series data, including neural data, subspace tracking models in signal processing, and motion models in computer vision.

A third example that we consider in this paper is a compressed sensing observation model, in which one observes random projections of the unknown matrix Θ*. This observation model has been studied extensively in the context of estimating sparse vectors (2; 3), and Recht et al. (9) suggested and studied its extension to low-rank matrices. In their set-up, one observes trace inner products of the form ⟨⟨X_i, Θ*⟩⟩ = trace(X_i^T Θ*), where X_i ∈ R^{k×p} is a random matrix (for instance, filled with standard normal N(0, 1) entries). Like compressed sensing for sparse vectors, applications of this model include computationally efficient updating in large databases (where the matrix Θ* measures the difference between the database at two different time instants), and matrix denoising.

A fourth example that can also be treated within our framework is the matrix completion model, in which each observation matrix takes the form X_i = e_{a(i)} e_{b(i)}^T, so that X_i is non-zero only at a randomly chosen pair (a(i), b(i)) of row/column indices. This problem has been studied by several authors in recent work (e.g., (5; 10; 8; 11)).
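To make the first two models concrete, the sketch below simulates data from the linear model (1) and from the VAR recursion (2) with a stable, low-rank system matrix. The dimensions, the rank, and the rescaling used to enforce stability are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
k, p, r, n = 10, 15, 3, 200                       # illustrative sizes

# A rank-r matrix Theta* in R^{k x p}, built from two thin Gaussian factors.
Theta_star = rng.standard_normal((k, r)) @ rng.standard_normal((r, p))

# Multivariate regression, model (1): Y_a = Theta* Z_a + W_a, with sigma = 1.
Z = rng.standard_normal((n, p))                   # covariates Z_a ~ N(0, I_p)
W = rng.standard_normal((n, k))                   # noise W_a ~ N(0, I_k)
Y = Z @ Theta_star.T + W                          # row a holds Y_a^T

# Vector autoregression, model (2): Z_{t+1} = A Z_t + W_t, with A playing the
# role of a square, stable, rank-r Theta*.
A = rng.standard_normal((p, r)) @ rng.standard_normal((r, p))
A *= 0.9 / np.linalg.norm(A, 2)                   # enforce |||A|||_op = 0.9 < 1
Zt = np.zeros((n, p))
Zt[0] = rng.standard_normal(p)
for t in range(n - 1):
    Zt[t + 1] = A @ Zt[t] + rng.standard_normal(p)
```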
2.2. A generic observation model
We now introduce a generic observation model that allows us to deal with these different observation models in a unified manner. For pairs of matrices A, B ∈ R^{k×p}, recall the Frobenius or trace inner product ⟨⟨A, B⟩⟩ := trace(B A^T). We then consider a linear observation model of the form

    y_i = ⟨⟨X_i, Θ*⟩⟩ + ε_i,   for i = 1, 2, . . . , N,   (4)

which is specified by the sequence of observation matrices {X_i}_{i=1}^N and observation noise {ε_i}_{i=1}^N. This observation model can be written in a more compact manner using operator-theoretic notation. In particular, let us define the observation vector y = (y_1 . . . y_N)^T ∈ R^N, with a similar definition for ε ∈ R^N in terms of {ε_i}_{i=1}^N. We then use the observation matrices {X_i}_{i=1}^N to define an operator X: R^{k×p} → R^N via [X(Θ)]_i = ⟨⟨X_i, Θ⟩⟩. With this notation, the observation model (4) can be re-written as

    y = X(Θ*) + ε.   (5)

2.3. Regression with nuclear norm regularization
We now consider an estimator that is naturally suited to the problems described in the previous section. Recall that the nuclear or trace norm of a matrix Θ ∈ R^{k×p} is given by |||Θ|||_1 = Σ_{j=1}^{min{k,p}} σ_j(Θ), corresponding to the sum of its singular values. Given a collection of observations (y_i, X_i) ∈ R × R^{k×p}, for i = 1, . . . , N, from the observation model (4), we consider estimating the unknown Θ* by solving the optimization problem

    Θ̂ ∈ arg min_{Θ ∈ R^{k×p}} { (1/(2N)) ‖y - X(Θ)‖_2^2 + λ_N |||Θ|||_1 },   (6)

where λ_N > 0 is a regularization parameter. Note that the optimization problem (6) can be viewed as the analog of the Lasso estimator, tailored to low-rank matrices as opposed to sparse vectors.

An important property of the optimization problem (6) is that it can be solved in time polynomial in the sample size N and the matrix dimensions k and p. Indeed, the optimization problem (6) is an instance of a semidefinite program, a class of convex optimization problems that can be solved efficiently by various polynomial-time algorithms. For instance, when the problem parameters are small, interior point methods are a classical approach that can be employed for solving semidefinite programs. However, as we discuss in Section 4, there are a variety of other methods tailored to our specific M-estimation procedure that lend themselves to larger-scale problems. As in any typical M-estimation procedure for statistical inference, the regularization parameter λ_N is specified by the statistician. As part of the theoretical results in the next section, we provide suitable choices of this parameter that guarantee that the estimate Θ̂ is close in Frobenius norm to the unknown matrix Θ*.
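The operator notation above maps directly onto code. The following sketch instantiates the compressed sensing version of the observation matrices, forms y = X(Θ*) + ε as in (5), and evaluates the objective of the program (6); the problem sizes, noise level, and rank are assumptions made for illustration. The other observation models fit the same template by changing how the X_i are built (for instance, X_i = e_{a(i)} e_{b(i)}^T for matrix completion).

```python
import numpy as np

rng = np.random.default_rng(1)
k, p, N = 8, 12, 300

X = rng.standard_normal((N, k, p))            # observation matrices X_i with N(0,1) entries
Theta_star = rng.standard_normal((k, 2)) @ rng.standard_normal((2, p))   # rank-2 truth
eps = 0.1 * rng.standard_normal(N)            # observation noise

def X_op(Theta):
    """The operator X: R^{k x p} -> R^N with [X(Theta)]_i = <<X_i, Theta>>."""
    return np.einsum("nij,ij->n", X, Theta)

y = X_op(Theta_star) + eps                    # compact observation model (5)

def objective(Theta, lam):
    """Objective of the nuclear-norm regularized program (6)."""
    resid = y - X_op(Theta)
    nuclear = np.linalg.svd(Theta, compute_uv=False).sum()
    return resid @ resid / (2 * N) + lam * nuclear
```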
3. Main results and some consequences
In this section, we state our main results and discuss some of their consequences. Section 3.1 is devoted to results that apply to generic instances of low-rank problems, whereas Section 3.2 is devoted to the consequences of these results for more specific problem classes, including low-rank multivariate regression, estimation of vector autoregressive processes, and recovery of low-rank matrices from random projections.

3.1. Results for general model classes
We begin by introducing the key technical condition that allows us to control the error Θ̂ - Θ* between an SDP solution Θ̂ and the unknown matrix Θ*. We refer to it as the restricted strong convexity condition (12), since it amounts to guaranteeing that the quadratic loss function in the SDP (6) is strictly convex over a restricted set of directions. Letting C ⊆ R^{k×p} denote the restricted set of directions, we say that the operator X satisfies restricted strong convexity (RSC) over the set C if there exists some κ(X) > 0 such that

    (1/(2N)) ‖X(Δ)‖_2^2 ≥ κ(X) |||Δ|||_F^2   for all Δ ∈ C.   (7)

We note that analogous conditions have been used to establish error bounds in the context of sparse linear regression (1), in which case the set C corresponds to certain subsets of sparse vectors.

Of course, the definition (7) hinges on the choice of the restricted set C. In order to define this set, we require some additional notation. For any matrix Θ ∈ R^{k×p}, we let row(Θ) ⊆ R^p and col(Θ) ⊆ R^k denote its row space and column space, respectively. For a given positive integer r ≤ min{k, p}, any r-dimensional subspace of R^k can be represented by some orthogonal matrix U ∈ R^{k×r} (i.e., one that satisfies U^T U = I_{r×r}). In a similar fashion, any r-dimensional subspace of R^p can be represented by an orthogonal matrix V ∈ R^{p×r}. For any fixed pair of such matrices (U, V), we may define M(U, V) as the set of matrices Θ such that row(Θ) ⊆ V and col(Θ) ⊆ U, and M⊥(U, V) as the set of matrices Θ such that row(Θ) ⊆ V⊥ and col(Θ) ⊆ U⊥. Finally, we let Π_{M(U,V)} and Π_{M⊥(U,V)} denote the (respective) projection operators onto these subspaces. When the subspaces (U, V) are clear from context, we use the shorthand notation Δ'' = Π_{M⊥(U,V)}(Δ) and Δ' = Δ - Δ''.

Finally, for any positive integer r ≤ min{k, p}, we let (U^r, V^r) denote the subspace pair defined by the top r left and right singular vectors of Θ*. For a given integer r and tolerance δ > 0, we then define a subset of matrices as follows:

    C(r; δ) := { Δ ∈ R^{k×p} | |||Δ|||_F ≥ δ, |||Δ''|||_1 ≤ 3 |||Δ'|||_1 + 4 |||Π_{M⊥(U^r,V^r)}(Θ*)|||_1 }.   (8)

The next ingredient is the choice of the regularization parameter λ_N used in solving the SDP (6). Our theory specifies a choice for this quantity in terms of the adjoint of the operator X, namely the operator X*: R^N → R^{k×p} defined by

    X*(ε) := Σ_{i=1}^N ε_i X_i.   (9)

With this notation, we now state a deterministic result, analogous to the main result in our past work (12), which specifies two conditions (an RSC condition and a choice of the regularizer) that suffice to guarantee that any solution of the SDP (6) falls within a certain radius.

Theorem 1. Suppose that the operator X satisfies restricted strong convexity with parameter κ(X) > 0 over the set C(r; δ), and that the regularization parameter λ_N is chosen such that λ_N ≥ 2 |||X*(ε)|||_op / N. Then any solution Θ̂ to the semidefinite program (6) satisfies

    |||Θ̂ - Θ*|||_F ≤ max{ δ, 32 λ_N √r / κ(X), ( 16 λ_N |||Π_{M⊥(U^r,V^r)}(Θ*)|||_1 / κ(X) )^{1/2} }.   (10)

Apart from the tolerance parameter δ, the two main terms in the bound (10) have a natural interpretation. The first term (involving √r) corresponds to estimation error, capturing the difficulty of estimating a rank r matrix. The second is an approximation error, in which the projection onto the set M⊥(U^r, V^r) describes the gap between the true matrix Θ* and its rank r approximation.

Let us begin by illustrating the consequences of Theorem 1 when the true matrix Θ* has exactly rank r, in which case there is a very natural choice of the subspaces represented by U and V. In particular, we form U from the r non-zero left singular vectors of Θ*, and V from its r non-zero right singular vectors. Note that this choice of (U, V) ensures that Π_{M⊥(U,V)}(Θ*) = 0 (see the numerical sketch below).
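The role of the subspace pair (U^r, V^r) can be checked numerically. The sketch below builds an exactly rank-r matrix, forms the projection onto M⊥(U^r, V^r), and confirms that the approximation term in the bound (10) vanishes; the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
k, p, r = 10, 12, 3
Theta_star = rng.standard_normal((k, r)) @ rng.standard_normal((r, p))   # exactly rank r

# Top-r left and right singular vectors define the subspace pair (U^r, V^r).
U_full, s, Vt_full = np.linalg.svd(Theta_star)
U, V = U_full[:, :r], Vt_full[:r, :].T

def proj_M_perp(Theta):
    """Projection onto M_perp(U, V): row space in V-perp, column space in U-perp."""
    return (np.eye(k) - U @ U.T) @ Theta @ (np.eye(p) - V @ V.T)

# For an exactly rank-r matrix the approximation term |||Pi_{M_perp}(Theta*)|||_1 is zero,
# so only the estimation term 32 * lam_N * sqrt(r) / kappa(X) remains in the bound (10).
approx_term = np.linalg.svd(proj_M_perp(Theta_star), compute_uv=False).sum()
print(approx_term)   # numerically ~ 0
```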
For technical reasons, it suffices to set δ = 0 in the case of exact rank constraints, and we thus obtain the following result:

Corollary 1 (Exact low-rank recovery). Suppose that Θ* has rank r, and that X satisfies RSC with respect to C(r; 0). Then as long as λ_N ≥ 2 |||X*(ε)|||_op / N, any optimal solution Θ̂ to the SDP (6) satisfies the bound

    |||Θ̂ - Θ*|||_F ≤ 32 λ_N √r / κ(X).   (11)

Like Theorem 1, Corollary 1 is a deterministic statement on the SDP error. It takes a much simpler form since, when Θ* is exactly low rank, neither the tolerance parameter δ nor the approximation term is required.

As a more delicate example, suppose instead that Θ* is nearly low-rank, an assumption that we can formalize by requiring that its singular value sequence {σ_i(Θ*)}_{i=1}^{min{k,p}} decays quickly enough. In particular, for a parameter q ∈ [0, 1] and a positive radius R_q, we define the set

    B_q(R_q) := { Θ ∈ R^{k×p} | Σ_{i=1}^{min{k,p}} |σ_i(Θ)|^q ≤ R_q }.

Note that when q = 0, the set B_0(R_0) corresponds to the set of matrices with rank at most R_0.

Corollary 2 (Near low-rank recovery). Suppose that Θ* ∈ B_q(R_q), the regularization parameter is lower bounded as λ_N ≥ 2 |||X*(ε)|||_op / N, and the operator X satisfies RSC with parameter κ(X) ∈ (0, 1] over the set C(R_q λ_N^{-q}; δ). Then any solution Θ̂ to the SDP (6) satisfies

    |||Θ̂ - Θ*|||_F ≤ max{ δ, 32 √(R_q) ( λ_N / κ(X) )^{1-q/2} }.   (12)

Note that the error bound (12) reduces to the exact low-rank bound (11) when q = 0 and δ = 0. The quantity R_q λ_N^{-q} acts as the "effective rank" in this setting; this particular choice is designed to provide an optimal trade-off between the approximation and estimation error terms in Theorem 1. Since λ_N is chosen to decay to zero as the sample size N increases, this effective rank increases with the sample size, reflecting the fact that as we obtain more samples, we can afford to estimate more of the smaller singular values of the matrix Θ*.
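The notion of effective rank is easy to probe numerically. The sketch below evaluates Σ_i σ_i(Θ)^q for a matrix with a polynomially decaying spectrum, and shows how the effective rank R_q λ_N^{-q} grows as λ_N shrinks; the decay profile and the values of q and λ_N are illustrative assumptions.

```python
import numpy as np

def lq_radius(Theta, q):
    """Sum_i sigma_i(Theta)^q: Theta lies in B_q(R_q) iff this value is at most R_q."""
    sigma = np.linalg.svd(Theta, compute_uv=False)
    return (sigma ** q).sum()

rng = np.random.default_rng(3)
k = p = 50
U, _ = np.linalg.qr(rng.standard_normal((k, k)))
V, _ = np.linalg.qr(rng.standard_normal((p, p)))
decay = 1.0 / np.arange(1, k + 1) ** 2            # polynomially decaying singular values
Theta = U @ np.diag(decay) @ V.T                  # a "near low-rank" matrix

q = 0.5
R_q = lq_radius(Theta, q)
for lam_N in [1e-1, 1e-2, 1e-3]:                  # lam_N decays as the sample size grows
    print(lam_N, R_q * lam_N ** (-q))             # effective rank grows as lam_N shrinks
```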
3.2. Results for specific model classes
As stated, Corollaries 1 and 2 are fairly abstract in nature. More importantly, it is not immediately clear how the key underlying assumption, namely the RSC condition, can be verified, since it is specified via subspaces that depend on Θ*, which is itself the unknown quantity that we are trying to estimate. Nonetheless, we now show how, when specialized to more concrete models, these results yield concrete and readily interpretable rates. Each corollary requires overcoming two main technical obstacles: establishing that the appropriate form of the RSC property holds in a uniform sense (so that a priori knowledge of Θ* is not required), and specifying an appropriate choice of the regularization parameter λ_N. Each of these two steps is non-trivial, requiring some random matrix theory, but the end results are simply stated upper bounds that hold with high probability.

We begin with the case of rank-constrained multivariate regression. Recall that we observe pairs (Y_i, Z_i) ∈ R^k × R^p linked by the linear model Y_i = Θ* Z_i + W_i, where W_i ~ N(0, σ^2 I_{k×k}) is observation noise. Here we treat the case of random design regression, meaning that the covariates Z_i are modeled as random. In particular, in the following result, we assume that the Z_i ~ N(0, Σ) are i.i.d. for some p-dimensional covariance matrix Σ ≻ 0. Recalling that λ_max(Σ) and λ_min(Σ) denote the maximum and minimum eigenvalues of Σ, respectively, we have:

Corollary 3 (Low-rank multivariate regression). Consider the random design multivariate regression model where Θ* ∈ B_q(R_q). There are universal constants {c_i, i = 1, 2, 3} such that if we solve the SDP (6) with regularization parameter λ_N = 10 σ √(λ_max(Σ)) √((k + p)/n), we have

    |||Θ̂ - Θ*|||_F^2 ≤ c_1 ( σ^2 λ_max(Σ) / λ_min^2(Σ) )^{1-q/2} R_q ( (k + p)/n )^{1-q/2}   (13)

with probability greater than 1 - c_2 exp(-c_3 (k + p)).

Remarks: Corollary 3 takes a particularly simple form when Σ = I_{p×p}: then there exists a constant c_1 such that |||Θ̂ - Θ*|||_F^2 ≤ c_1 σ^{2-q} R_q ((k + p)/n)^{1-q/2}. When Θ* is exactly low rank (that is, q = 0 and Θ* has rank r = R_0), this simplifies even further to

    |||Θ̂ - Θ*|||_F^2 ≤ c_1 σ^2 r (k + p) / n.

The scaling in this error bound is easily interpretable: naturally, the squared error is proportional to the noise variance σ^2, and the quantity r(k + p) counts the number of degrees of freedom of a k × p matrix with rank r. Note that if we did not impose any constraints on Θ*, then since a k × p matrix has a total of kp free parameters, we would expect at best to obtain rates of the order |||Θ̂ - Θ*|||_F^2 = Ω(σ^2 kp / n). Thus, when Θ* is low rank (in particular, when r ≪ min{k, p}), the nuclear norm estimator achieves substantially faster rates. Finally, we note that as stated, the result requires that min{k, p} tend to infinity in order for the claim to hold with high probability. Although such high-dimensional scaling is the primary focus of this paper, we note that for application to the classical setting of fixed (k, p), the same statement (with different constants) holds with k + p replaced by log n.

Next we turn to the case of estimating the system matrix of an autoregressive (AR) model.

Corollary 4 (Autoregressive models). Suppose that we are given n samples {Z_t}_{t=1}^n from a p-dimensional autoregressive process (2) that is stationary, based on a system matrix Θ* that is stable (|||Θ*|||_op ≤ γ < 1) and approximately low-rank (Θ* ∈ B_q(R_q)). Then there are universal constants {c_i, i = 1, 2, 3} such that if we solve the SDP (6) with regularization parameter λ_N = (80 |||Σ|||_op / (1 - γ)) √(p/n), then any solution Θ̂ satisfies

    |||Θ̂ - Θ*|||_F^2 ≤ c_1 ( λ_max(Σ) / (λ_min(Σ)(1 - γ)) )^{2-q} R_q ( p/n )^{1-q/2}   (14)

with probability greater than 1 - c_2 exp(-c_3 p).

Remarks: Like Corollary 3, the result as stated requires that p tend to infinity, but the same bounds hold with p replaced by log n, yielding results suitable for classical (fixed dimension) scaling. Second, the factor (p/n)^{1-q/2}, like the analogous term in Corollary 3 (which involves k + p, since the matrix Θ* in that case may be non-square), shows that faster rates are obtained if Θ* can be well-approximated by a low-rank matrix, namely for choices of the parameter q ∈ [0, 1] that are closer to zero. Indeed, in the limit q = 0, we again reduce to the case of an exact rank constraint r = R_0, and the corresponding squared error scales as rp/n. In contrast to the case of multivariate regression, the error bound (14) also depends on the upper bound |||Θ*|||_op ≤ γ < 1 on the operator norm of the system matrix Θ*. Such dependence is to be expected, since the quantity γ controls the (in)stability and mixing rate of the autoregressive process. The dependence among the samples in the AR model introduces some technical challenges not present in the setting of multivariate regression.
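The setting of Corollary 4 can be simulated directly: the stationary covariance Σ comes from the Riccati/Lyapunov equation (3) (SciPy's analogue of the dlyap call used in Section 4), a sample path follows the recursion (2), and λ_N is set as in the corollary. The dimensions, rank, and the value γ = 0.9 are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(4)
p, r, n, sigma, gamma = 40, 5, 2000, 1.0, 0.9

# Stable, rank-r system matrix with |||Theta*|||_op = gamma < 1.
Theta_star = rng.standard_normal((p, r)) @ rng.standard_normal((r, p))
Theta_star *= gamma / np.linalg.norm(Theta_star, 2)

# Stationary covariance solving equation (3): Sigma = Theta* Sigma Theta*^T + sigma^2 I.
Sigma = solve_discrete_lyapunov(Theta_star, sigma**2 * np.eye(p))

# Draw Z_1 from the stationary distribution, then run the recursion (2).
L = np.linalg.cholesky(Sigma)
Z = np.zeros((n, p))
Z[0] = L @ rng.standard_normal(p)
for t in range(n - 1):
    Z[t + 1] = Theta_star @ Z[t] + sigma * rng.standard_normal(p)

# Regularization parameter as in Corollary 4.
lam_N = 80 * np.linalg.norm(Sigma, 2) / (1 - gamma) * np.sqrt(p / n)
```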
Finally, we turn to the analysis of the compressed sensing model for matrix recovery. The following result applies to the setting in which the observation matrices {X_i}_{i=1}^N are drawn i.i.d., with standard N(0, 1) entries. We assume that the observation noise vector ε ∈ R^N satisfies the bound ‖ε‖_2 ≤ 2ν√N for some constant ν, an assumption that holds for any bounded noise, and also holds with high probability for any random noise vector with sub-Gaussian entries with parameter ν (one example being Gaussian noise N(0, ν^2)).

Corollary 5 (Compressed sensing recovery). Suppose that Θ* ∈ B_q(R_q), and that the sample size is lower bounded as N > 4 max(k, p) (100 R_q)^{2/(2-q)}. Then any solution Θ̂ to the SDP (6) satisfies the bound

    |||Θ̂ - Θ*|||_F^2 ≤ 256 ν^{2-q} R_q ( (√k + √p)/√N )^{2-q}   (15)

with probability greater than 1 - c_1 exp(-c_2 (k + p)).

The central challenge in proving this result is in establishing an appropriate form of the RSC property. The following result on the random operator X may be of independent interest here:

Proposition 1. Under the stated conditions, the random operator X satisfies

    ‖X(Θ)‖_2 / √N ≥ (1/4) |||Θ|||_F - ( (√k + √p)/√N ) |||Θ|||_1   (16)

for all Θ ∈ R^{k×p}, with probability at least 1 - 2 exp(-N/32).

Proposition 1 also implies an interesting property of the null space of the operator X, one that can be used to establish a corollary about recovery of low-rank matrices when the observations are noiseless. In particular, suppose that we are given the noiseless observations y_i = ⟨⟨X_i, Θ*⟩⟩ for i = 1, . . . , N, and that we try to recover the unknown matrix Θ* by solving the SDP

    min_{Θ ∈ R^{k×p}} |||Θ|||_1   subject to   ⟨⟨X_i, Θ⟩⟩ = y_i for all i = 1, . . . , N,   (17)

a recovery procedure that was studied by Recht et al. (9). Proposition 1 allows us to obtain a sharp result on recovery using this method:

Corollary 6. Suppose that Θ* has rank r, and that we are given N > 40^2 r(k + p) noiseless samples. Then with probability at least 1 - 2 exp(-N/32), the SDP (17) recovers the matrix Θ* exactly.

This result removes some extra logarithmic factors that were included in the earlier work (9), and provides the appropriate analog to compressed sensing results for sparse vectors (2). Note that (as in most of our results) we have made little effort to obtain good constants: the important property is that the sample size N scales linearly in both r and k + p.

4. Experimental results
In this section, we report the results of various simulations that demonstrate the close agreement between the scaling predicted by our theory and the actual behavior of the SDP-based M-estimator (6) in practice. In all cases, we solved the convex program (6) using our own MATLAB implementation of an accelerated gradient descent method, which adapts a non-smooth convex optimization procedure (15) to the nuclear norm (16). We chose the regularization parameter λ_N in the manner suggested by our theoretical results; in doing so, we assumed knowledge of quantities such as the noise variance σ^2. (In practice, one would have to estimate such quantities from the data using standard methods.)
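For illustration, here is a minimal Python sketch of this family of accelerated proximal-gradient methods, whose key step soft-thresholds the singular values of the current iterate. It is a stand-in written under stated assumptions (a fixed step size equal to the inverse Lipschitz constant and a fixed iteration count), not the authors' MATLAB implementation.

```python
import numpy as np

def svd_soft_threshold(A, tau):
    """Prox of tau * |||.|||_1: soft-threshold the singular values of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def solve_program_6(X, y, lam, n_iter=500):
    """Accelerated proximal gradient (FISTA-style) sketch for the program (6).

    X: (N, k, p) array of observation matrices, y: (N,) vector of observations."""
    N, k, p = X.shape
    Xmat = X.reshape(N, k * p)
    L = np.linalg.norm(Xmat, 2) ** 2 / N          # Lipschitz constant of the smooth part
    Theta = np.zeros((k, p))
    Zmat, t = Theta.copy(), 1.0
    for _ in range(n_iter):
        grad = (Xmat.T @ (Xmat @ Zmat.ravel() - y)).reshape(k, p) / N
        Theta_new = svd_soft_threshold(Zmat - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2
        Zmat = Theta_new + ((t - 1) / t_new) * (Theta_new - Theta)
        Theta, t = Theta_new, t_new
    return Theta
```

Each iteration costs one SVD of a k × p matrix, which is what makes this family of methods practical at the problem sizes considered below.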
We report simulation results for problems of low-rank multivariate regression, estimation in vector autoregressive processes, and matrix recovery from random projections (compressed sensing). In each case, we solved instances of the SDP (6) for a square matrix Θ* ∈ R^{p×p}, where p ∈ {40, 80, 160} for the first two examples, and p ∈ {20, 40, 80} for the compressed sensing example. In all cases, we considered the setting of exact low-rank constraints, with rank(Θ*) = r = 10, and we generated Θ* by choosing the subspaces of its left and right singular vectors uniformly at random from the Grassmann manifold (see the sketch following the figure captions below). The observation or innovations noise had variance σ^2 = 1 in each case. The VAR process was generated by first solving for the covariance matrix Σ using the MATLAB function dlyap and then generating a sample path. For each setting of (r, p), we solved the SDP for various sample sizes N.

Figure 1 shows results for a multivariate regression model with the covariates chosen randomly from a N(0, I) distribution. Naturally, in each case, the error decays to zero as N increases, but larger matrices require larger sample sizes, as reflected by the rightward shift of the curves as p is increased. Panel (b) shows the exact same set of simulation results, but now with the Frobenius error plotted versus the rescaled sample size Ñ := N/(rp). As predicted by Corollary 3, the error plots are now all aligned with one another; the degree of alignment in this particular case is so close that the three plots are indistinguishable. (The blue curve is the only one visible, since it was plotted last by our routine.) Consequently, the figures show that N/(rp) acts as the effective sample size in this high-dimensional setting.

[Figure 1. Results of applying the SDP (6) to the problem of low-rank multivariate regression. (a) Plots of the Frobenius error |||Θ̂ - Θ*|||_F on a logarithmic scale versus the sample size N for three different matrix sizes p ∈ {40, 80, 160}, all with rank r = 10. (b) Plots of the same Frobenius error versus the rescaled sample size N/(rp). Consistent with theory, all three plots are now extremely well-aligned.]

Figure 2 shows similar results for the autoregressive model. The figure plots the Frobenius error versus the rescaled sample size N/(rp); as predicted by Corollary 4, the errors for different matrix sizes p are again quite well-aligned. In this case, we find (both in our theoretical analysis and in our experimental results) that the dependence in the autoregressive process slows down the rate at which concentration occurs, so that the results are not as crisp as in the low-rank multivariate setting of Figure 1.

[Figure 2. Results of applying the SDP (6) to estimating the system matrix of a vector autoregressive process: plot of the Frobenius error versus the rescaled sample size N/(rp). Consistent with theory, all three plots are now reasonably well-aligned.]

Finally, Figure 3 presents the same set of results for the compressed sensing observation model discussed above. Even though the observation matrices X_i here are qualitatively different (in comparison to the multivariate regression and autoregressive examples), we again see the "stacking" phenomenon of the curves when plotted versus the rescaled sample size N/(rp), as predicted by Corollary 5.

[Figure 3. Results of applying the SDP (6) to recovering a low-rank matrix on the basis of random projections (compressed sensing model): plot of the Frobenius error versus the rescaled sample size N/(rp). Consistent with theory, all three plots are now reasonably well-aligned.]
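For concreteness, the random rank-r matrices described above can be drawn as in the following sketch, which uses QR factorizations of Gaussian matrices to obtain uniformly (Haar) distributed singular subspaces; the choice of unit singular values is our assumption, since the paper does not spell it out.

```python
import numpy as np

def random_low_rank(p, r, rng):
    """Rank-r Theta* in R^{p x p} whose left and right singular subspaces are
    drawn uniformly at random from the Grassmann manifold."""
    U, _ = np.linalg.qr(rng.standard_normal((p, r)))   # Haar-distributed r-dimensional subspace
    V, _ = np.linalg.qr(rng.standard_normal((p, r)))
    s = np.ones(r)                                     # assumed singular values
    return U @ np.diag(s) @ V.T

rng = np.random.default_rng(6)
Theta_star = random_low_rank(80, 10, rng)
```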
5. Discussion
In this paper, we provided a detailed analysis of the nuclear norm relaxation for a general class of noisy observation models, and obtained non-asymptotic error bounds on the Frobenius norm that are valid under high-dimensional scaling. Our results are applicable to both exactly and approximately low-rank matrices. Exploiting a deterministic result that leverages our past work (12), we showed concrete and easily interpretable rates for various specific models, including low-rank multivariate regression, estimation of autoregressive processes, and matrix recovery from random projections. It is worth noting that our theory can also be applied to noisy matrix completion, yielding rates analogous to those reported here. Lastly, our simulation results showed very close agreement with the predictions from our theory.

References
[1] P. Bickel, Y. Ritov, and A. Tsybakov, "Simultaneous analysis of Lasso and Dantzig selector," Annals of Statistics, vol. 37, no. 4, pp. 1705-1732, 2009.
[2] D. Donoho, "Compressed sensing," IEEE Trans. Info. Theory, vol. 52, no. 4, pp. 1289-1306, April 2006.
[3] E. Candès and T. Tao, "Decoding by linear programming," IEEE Trans. Info. Theory, vol. 51, no. 12, pp. 4203-4215, December 2005.
[4] M. Fazel, "Matrix rank minimization with applications," Ph.D. dissertation, Stanford University, 2002.
[5] N. Srebro, J. Rennie, and T. S. Jaakkola, "Maximum-margin matrix factorization," in NIPS, Vancouver, Canada, December 2004.
[6] J. Abernethy, F. Bach, T. Evgeniou, and J. Stein, "Low-rank matrix factorization with attributes," Tech. Rep., September 2006.
[7] F. Bach, "Consistency of trace norm minimization," Journal of Machine Learning Research, vol. 9, pp. 1019-1048, June 2008.
[8] R. H. Keshavan, A. Montanari, and S. Oh, "Matrix completion from noisy entries," 2009, preprint available at http://arxiv.org/abs/0906.2027v1.
[9] B. Recht, M. Fazel, and P. A. Parrilo, "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization," SIAM Review, 2007.
[10] E. J. Candès and B. Recht, "Exact matrix completion via convex optimization," CoRR, vol. abs/0805.4471, 2008.
[11] R. Mazumder, T. Hastie, and R. Tibshirani, "Spectral regularization algorithms for learning large incomplete matrices," Stanford University, Tech. Rep., July 2009.
[12] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu, "A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers," in NIPS, December 2009.
[13] R. A. Horn and C. R. Johnson, Matrix Analysis. Cambridge: Cambridge University Press, 1985.
[14] H. Lütkepohl, New Introduction to Multiple Time Series Analysis. New York: Springer, 2006.
[15] Y. Nesterov, "Gradient methods for minimizing composite objective function," CORE, Université catholique de Louvain, Tech. Rep. 2007/76, 2007.
[16] S. Ji and J. Ye, "An accelerated gradient method for trace norm minimization," in ICML, 2009.