Compressed Regression

Shuheng Zhou, John Lafferty, Larry Wasserman
Computer Science Department, Department of Statistics, Machine Learning Department
Carnegie Mellon University, Pittsburgh, PA 15213

Abstract

Recent research has studied the role of sparsity in high dimensional regression and signal reconstruction, establishing theoretical limits for recovering sparse models from sparse data. In this paper we study a variant of this problem where the original n input variables are compressed by a random linear transformation to m ≪ n examples in p dimensions, and establish conditions under which a sparse linear model can be successfully recovered from the compressed data. A primary motivation for this compression procedure is to anonymize the data and preserve privacy by revealing little information about the original data. We characterize the number of random projections that are required for ℓ₁-regularized compressed regression to identify the nonzero coefficients in the true model with probability approaching one, a property called "sparsistence." In addition, we show that ℓ₁-regularized compressed regression asymptotically predicts as well as an oracle linear model, a property called "persistence." Finally, we characterize the privacy properties of the compression procedure in information-theoretic terms, establishing upper bounds on the rate of information communicated between the compressed and uncompressed data that decay to zero.

1 Introduction

Two issues facing the use of statistical learning methods in applications are scale and privacy. Scale is an issue in storing, manipulating and analyzing extremely large, high dimensional data. Privacy is, increasingly, a concern whenever large amounts of confidential data are manipulated within an organization. It is often important to allow researchers to analyze data without compromising the privacy of customers or leaking confidential information outside the organization.
In this paper we show that sparse regression for high dimensional data can be carried out directly on a compressed form of the data, in a manner that can be shown to guard privacy in an information theoretic sense. The approach we develop here compresses the data by a random linear or affine transformation, reducing the number of data records exponentially, while preserving the number of original input variables. These compressed data can then be made available for statistical analyses; we focus on the problem of sparse linear regression for high dimensional data. Informally, our theory ensures that the relevant predictors can be learned from the compressed data as well as they could be from the original uncompressed data. Moreover, the actual predictions based on new examples are as accurate as they would be had the original data been made available. However, the original data are not recoverable from the compressed data, and the compressed data effectively reveal no more information than would be revealed by a completely new sample. At the same time, the inference algorithms run faster and require fewer resources than the much larger uncompressed data would require. The original data need not be stored; they can be transformed "on the fly" as they come in.

In standard regression, a response variable Y = Xβ + ε ∈ Rⁿ is associated with the input variables, where the εᵢ are independent, mean zero additive noise variables. In compressed regression, we assume that the response is also compressed, resulting in the transformed response Ỹ ∈ Rᵐ given by Ỹ = ΦY = ΦXβ + Φε = X̃β + ε̃. Note that under compression the entries ε̃ᵢ, i ∈ {1, ..., m}, of the transformed noise ε̃ = Φε are no longer independent. In the sparse setting, the parameter β ∈ Rᵖ is sparse, with a relatively small number s = ‖β‖₀ of nonzero coefficients in β. The method we focus on is ℓ₁-regularized least squares, also known as the lasso [17].
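To make the compression step concrete, the following pure-Python sketch forms (X̃, Ỹ) = (ΦX, ΦY) with Gaussian entries Φᵢⱼ ~ N(0, 1/n), the distribution used in the simulations of Section 6. The helper name `compress` and the list-of-lists representation are our own illustrative choices, not part of the paper.

```python
import random

def compress(X, Y, m, seed=0):
    """Form compressed data (X~, Y~) = (Phi X, Phi Y) with a random
    Gaussian projection Phi of size m x n.

    X is an n x p list of lists, Y a length-n list; entries of Phi are
    drawn i.i.d. N(0, 1/n). Phi is generated on the fly and never
    returned, mirroring the privacy protocol described in the text.
    """
    rng = random.Random(seed)
    n, p = len(X), len(X[0])
    std = (1.0 / n) ** 0.5
    Phi = [[rng.gauss(0.0, std) for _ in range(n)] for _ in range(m)]
    # X~ = Phi X  (m x p)  and  Y~ = Phi Y  (length m)
    Xc = [[sum(Phi[i][k] * X[k][j] for k in range(n)) for j in range(p)]
          for i in range(m)]
    Yc = [sum(Phi[i][k] * Y[k] for k in range(n)) for i in range(m)]
    return Xc, Yc
```

Because the same Φ multiplies both X and Y, appending Y to X as an extra column and compressing once gives identical results; Φ itself can be discarded immediately after the multiply, matching the "on the fly" use described above.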
We study the ability of the compressed lasso estimator to identify the correct sparse set of relevant variables and to predict well. We omit details and technical assumptions in the following theorems for clarity. Our first result shows that the lasso is sparsistent under compression, meaning that the correct sparse set of relevant variables is identified asymptotically.

In more detail, the data are represented as an n × p matrix X. Each of the p columns is an attribute, and each of the n rows is the vector of attributes for an individual record. The data are compressed by a random linear transformation X ↦ X̃ = ΦX, where Φ is a random m × n matrix with m ≪ n. It is also natural to consider a random affine transformation X ↦ X̃ = ΦX + Δ, where Δ is a random m × p matrix. Such transformations have been called "matrix masking" in the privacy literature [6]. The entries of Φ and Δ are taken to be independent Gaussian random variables, but other distributions are possible. We think of X̃ as "public," while Φ and Δ are private and only needed at the time of compression. However, even with Δ = 0 and Φ known, recovering X from X̃ requires solving a highly under-determined linear system and comes with information theoretic privacy guarantees, as we demonstrate.

Sparsistence (Theorem 3.3): If the number of compressed examples m satisfies C₁ s² log(nps) ≤ m ≤ C₂ n / log n, and the regularization parameter λₘ satisfies λₘ → 0 and m λₘ² / log p → ∞, then the compressed lasso estimator β̃ₘ = argmin_β (1/2m) ‖Ỹ − X̃β‖₂² + λₘ ‖β‖₁ is sparsistent: P(supp(β̃ₘ) = supp(β)) → 1 as m → ∞, where supp(β) = {j : βⱼ ≠ 0}.

Our second result shows that the lasso is persistent under compression. Roughly speaking, persistence [10] means that the procedure predicts well, as measured by the predictive risk R(β) = E(Y − βᵀX)², where X ∈ Rᵖ is a new input vector and Y is the associated response. Persistence is a weaker condition than sparsistency, and in particular does not assume that the true model is linear; it holds as n → ∞ in case Lₙ,ₘ = o((m / log(np))^{1/4}).
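As a toy illustration of the ℓ₁-regularized estimator in the sparsistence statement, here is a minimal pure-Python coordinate-descent solver for an objective of the form (1/2m)‖y − Xβ‖₂² + λ‖β‖₁. This is only a sketch under our own naming; the paper's experiments use the LARS algorithm instead, and no claim is made that this matches the authors' code.

```python
def lasso_cd(X, y, lam, iters=500):
    """Minimal coordinate-descent solver for
    (1/(2m)) ||y - X b||_2^2 + lam * ||b||_1  (illustrative sketch).

    X is an m x p list of lists, y a length-m list.
    """
    m, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # partial residual excluding coordinate j
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(m)]
            rho = sum(X[i][j] * r[i] for i in range(m)) / m
            z = sum(X[i][j] ** 2 for i in range(m)) / m
            # soft-thresholding update for coordinate j
            b[j] = (max(abs(rho) - lam, 0.0) *
                    (1 if rho > 0 else -1)) / z if z > 0 else 0.0
    return b
```

On an orthogonal design the update reduces to coordinate-wise soft-thresholding, so coefficients that are zero in the true model stay exactly zero, which is the sign-recovery behavior the theorem formalizes.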
Our third result analyzes the privacy properties of compressed regression. We evaluate privacy in information theoretic terms by bounding the average mutual information I(X; X̃)/(np) per matrix entry in the original data matrix X, which can be viewed as a communication rate. Bounding this mutual information is intimately connected with the problem of computing the channel capacity of certain multiple-antenna wireless communication systems [13].

Information Resistance (Propositions 5.1 and 5.2): The rate at which information about X is revealed by the compressed data X̃ satisfies rₙ,ₘ = sup I(X; X̃)/(np) = O(m/n) → 0, where the supremum is over all distributions on the original data X.

Persistence (Theorem 4.1): Given a sequence of sets of estimators Bₙ,ₘ ⊆ Rᵖ with Bₙ,ₘ = {β : ‖β‖₁ ≤ Lₙ,ₘ} and log²(np) ≤ m ≤ n, the sequence of compressed lasso estimators β̃ₙ,ₘ = argmin_{‖β‖₁ ≤ Lₙ,ₘ} ‖Ỹ − X̃β‖₂² is persistent over uncompressed data with respect to Bₙ,ₘ, meaning that R(β̃ₙ,ₘ) − inf_{‖β‖₁ ≤ Lₙ,ₘ} R(β) →P 0, where R(β) = E(Y − βᵀX)² is the predictive risk.

As summarized by these results, compressed regression is a practical procedure for sparse learning in high dimensional data that has provably good properties. Connections with related literature are briefly reviewed in Section 2. Analyses of the sparsistence, persistence and privacy properties appear in Sections 3–5. Simulations for sparsistence and persistence of the compressed lasso are presented in Section 6. We conclude with a discussion that points out directions for future work in Section ??. The proofs are included in the full version of the paper, available at http://arxiv.org/abs/0706.0534.

2 Background and Related Work

In this section we briefly review related work in high dimensional statistical inference, compressed sensing, and privacy, to place our work in context.

Sparse Regression.
An estimator that has received much attention in the recent literature is the lasso [17], defined as β̂ₙ = argmin_β (1/2n) ‖Y − Xβ‖₂² + λₙ ‖β‖₁, where λₙ is a regularization parameter. In [14] it was shown that the lasso is consistent in the high dimensional setting under certain assumptions. Sparsistency proofs for high dimensional problems have appeared recently in [20] and [19]. The results and method of analysis of Wainwright [19], where X comes from a Gaussian ensemble and the εᵢ are i.i.d. Gaussian, are particularly relevant to the current paper. We describe this Gaussian ensemble result and compare our results to it in Sections 3 and 6. Given that under compression the noise ε̃ = Φε is not i.i.d., one cannot simply apply this result to the compressed case. Persistence for the lasso was first defined and studied by Greenshtein and Ritov in [10]; we review their result in Section 4.

Compressed Sensing. Compressed regression has close connections to, and draws motivation from, compressed sensing [4, 2]. However, in a sense, our motivation is the opposite of compressed sensing. While compressed sensing of X allows a sparse X to be reconstructed from a small number of random measurements, our goal is to reconstruct a sparse function of X. Indeed, from the point of view of privacy, approximately reconstructing X, which compressed sensing shows is possible if X is sparse, should be viewed as undesirable; we return to this point in Section ??. Several authors have considered variations on compressed sensing for statistical signal processing tasks [5, 11]. They focus on certain hypothesis testing problems under sparse random measurements, and a generalization to classification of a signal into two or more classes. Here one observes y = Φx, where y ∈ Rᵐ, x ∈ Rⁿ, and Φ is a known random measurement matrix. The problem is to select between the hypotheses Hᵢ : y = Φ(sᵢ + ε). The proofs use concentration properties of random projection, which underlie the celebrated Johnson-Lindenstrauss lemma.
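The concentration phenomenon behind the Johnson-Lindenstrauss lemma is easy to check empirically: a random Gaussian projection to m dimensions preserves the squared distance between two fixed vectors in expectation, with trial-to-trial fluctuations of order 1/√m. The helper below is our own sketch, not code from any of the cited works.

```python
import random

def jl_distortion(x1, x2, m, trials=200, seed=1):
    """Empirically check the concentration behind the
    Johnson-Lindenstrauss lemma.

    Projects the difference d = x1 - x2 through fresh Gaussian maps
    Phi with entries N(0, 1/m), so that E||Phi d||^2 = ||d||^2, and
    returns the true squared distance and the average projected one.
    """
    rng = random.Random(seed)
    d = [a - b for a, b in zip(x1, x2)]
    true_sq = sum(v * v for v in d)
    est = []
    for _ in range(trials):
        proj_sq = 0.0
        for _ in range(m):
            # one row of Phi applied to d, scaled by 1/sqrt(m)
            z = sum(rng.gauss(0.0, 1.0) * v for v in d) / m ** 0.5
            proj_sq += z * z
        est.append(proj_sq)
    return true_sq, sum(est) / trials
```

With m = 16 the relative fluctuation of a single projection is already around √(2/m) ≈ 35%, and averaging over trials shows the estimate is unbiased; this is the concentration property the hypothesis-testing results above rely on.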
The compressed regression problem we introduce can be considered a more challenging statistical inference task, where the problem is to select from an exponentially large set of linear models, each with a certain set of relevant variables with unknown parameters, or to predict as well as the best linear model in some class.

Privacy. Research on privacy in statistical data analysis has a long history, going back at least to [3]. We refer to [6] for discussion and further pointers into this literature; recent work includes [16]. The work of [12] is closely related to our work at a high level, in that it considers low rank random linear transformations of either the row space or column space of the data X. The authors note the Johnson-Lindenstrauss lemma, and argue heuristically that data mining procedures that exploit correlations or pairwise distances in the data are just as effective under random projection. The privacy analysis is restricted to observing that recovering X from X̃ requires solving an under-determined linear system. We are not aware of previous work that analyzes the asymptotic properties of a statistical estimator under random projection in the high dimensional setting, giving information-theoretic guarantees, although an information-theoretic quantification of privacy was proposed in [1]. We cast privacy in terms of the rate of information communicated about X through X̃, maximizing over all distributions on X, and identify this with the problem of bounding the Shannon capacity of a multi-antenna wireless channel, as modeled in [13]. Finally, it is important to mention the active area of cryptographic approaches to privacy from the theoretical computer science community, for instance [9, 7]; however, this line of work is quite different from our approach.

3 Compressed Regression is Sparsistent

In the standard setting, X is an n × p matrix, Y = Xβ + ε is a vector of noisy observations under a linear model, and p is considered to be a constant.
In the high-dimensional setting we allow p to grow with n. The lasso refers to the following problem: (P₁) min ‖Y − Xβ‖₂² such that ‖β‖₁ ≤ L. In Lagrangian form, this becomes: (P₂) min (1/2n) ‖Y − Xβ‖₂² + λₙ ‖β‖₁. For an appropriate choice of the regularization parameter λ = λ(Y, L), the solutions of these two problems coincide.

In compressed regression we project each column Xⱼ ∈ Rⁿ of X to a subspace of m dimensions, using an m × n random projection matrix Φ. Let X̃ = ΦX be the compressed design matrix, and let Ỹ = ΦY be the compressed response. Thus the transformed noise ε̃ = Φε is no longer i.i.d. The compressed lasso is the following optimization problem, for Ỹ = ΦXβ + Φε = X̃β + ε̃, with Ω̃ₘ being the set of optimal solutions:

(a) (P̃₂) min (1/2m) ‖Ỹ − X̃β‖₂² + λₘ ‖β‖₁, (b) β̃ₘ = argmin_β (1/2m) ‖Ỹ − X̃β‖₂² + λₘ ‖β‖₁. (1)

Although sparsistency is the primary goal in selecting the correct variables, our analysis establishes conditions for the stronger property of sign consistency:

Definition 3.1. (Sign Consistency) A set of estimators Ωₙ is sign consistent with the true β if P(∃ β̂ₙ ∈ Ωₙ s.t. sgn(β̂ₙ) = sgn(β)) → 1 as n → ∞, where sgn(·) is given by sgn(x) = 1, 0, or −1 for x > 0, x = 0, or x < 0 respectively. As a shorthand, denote by E(sgn(β̂ₙ) = sgn(β*)) the event that a sign consistent solution exists: {∃ β̂ₙ ∈ Ωₙ such that sgn(β̂ₙ) = sgn(β*)}.

Clearly, if a set of estimators is sign consistent then it is sparsistent. All recent work establishing results on sparsity recovery assumes some form of incoherence condition on the data matrix X. To formulate such a condition, it is convenient to introduce an additional piece of notation. Let S = {j : βⱼ ≠ 0} be the set of relevant variables and let Sᶜ = {1, ..., p} \ S be the set of irrelevant variables. Then X_S and X_{Sᶜ} denote the corresponding sets of columns of the matrix X. We will impose the following incoherence condition; related conditions are used by [18] in a deterministic setting. Let ‖A‖∞ = maxᵢ Σⱼ₌₁ᵖ |Aᵢⱼ| denote the matrix ∞-norm.

Definition 3.2.
(S-Incoherence) Let X be an n × p matrix and let S ⊂ {1, ..., p} be nonempty. We say that X is S-incoherent in case

‖(1/n) X_{Sᶜ}ᵀ X_S‖∞ + ‖(1/n) X_Sᵀ X_S − I_{|S|}‖∞ ≤ 1 − η, for some η ∈ (0, 1]. (2)

Although not explicitly required, we only apply this definition to X whose columns satisfy ‖Xⱼ‖₂² = Θ(n), j ∈ {1, ..., p}. We can now state our main result on sparsistency.

Theorem 3.3. Suppose that, before compression, Y = Xβ* + ε, where each column of X is normalized to have ℓ₂-norm √n, and ε ~ N(0, σ²Iₙ). Assume that X is S-incoherent, where S = supp(β*), and define s = |S| and ρₘ = min_{i∈S} |βᵢ*|. We observe, after compression, Ỹ = ΦY, X̃ = ΦX, and ε̃ = Φε, where Φᵢⱼ ~ N(0, 1/n). Let β̃ₘ be as in (1b). Suppose that

(1/η²) (4C₂ s + 6C₁ s² (ln p + 2 log n + log 2(s + 1))) ≤ m ≤ η² n / (16 log n), (3)

with C₁ ≈ 2.5044 and C₂ ≈ 7.6885, and that λₘ → 0 satisfies

(a) λₘ √(m / log(p − s)) → ∞, and (b) (1/ρₘ) (√(log s / m) + λₘ ‖((1/n) X_Sᵀ X_S)^{−1}‖∞) → 0. (4)

Then the compressed lasso is sparsistent: P(supp(β̃ₘ) = supp(β*)) → 1 as m → ∞.

4 Compressed Regression is Persistent

Persistence (Greenshtein and Ritov [10]) is a weaker condition than sparsistency. In particular, the assumption that E(Y | X) = βᵀX is dropped. Roughly speaking, persistence implies that a procedure predicts well. We review the arguments in [10] first; we then adapt them to the compressed case.

Uncompressed Persistence. Consider a new pair (X, Y) and suppose we want to predict Y from X. The predictive risk using predictor βᵀX is R(β) = E(Y − βᵀX)². Note that this is a well-defined quantity even though we do not assume that E(Y | X) = βᵀX. It is convenient to rewrite the risk in the following way: define Q = (Y, X₁, ..., Xₚ) and γ = (−1, β₁, ..., βₚ)ᵀ; then

R(β) = γᵀ Σ γ, where Σ = E(QQᵀ). (5)

Let Q = (Q₁ Q₂ ··· Qₙ)ᵀ, where Qᵢ = (Yᵢ, X₁ᵢ, ..., Xₚᵢ)ᵀ ~ Q, i = 1, ..., n, are i.i.d. random vectors, and the training error is

R̂ₙ(β) = (1/n) Σᵢ₌₁ⁿ (Yᵢ − Xᵢᵀβ)² = γᵀ Σ̂ₙ γ, where Σ̂ₙ = (1/n) Qᵀ Q. (6)
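The identity in (6), namely that the mean squared residual equals the quadratic form γᵀΣ̂ₙγ with Qᵢ = (Yᵢ, Xᵢ) and γ = (−1, β), can be verified numerically with a short pure-Python sketch (the helper name and data layout are ours):

```python
def empirical_risk(X, Y, beta):
    """Compute the training error two ways: directly as the mean
    squared residual, and via the quadratic form gamma^T Sigma_hat gamma
    with Q_i = (Y_i, X_i) and gamma = (-1, beta). The two agree exactly,
    since gamma^T Q_i = -(Y_i - X_i^T beta).
    """
    n, p = len(X), len(X[0])
    direct = sum((Y[i] - sum(X[i][j] * beta[j] for j in range(p))) ** 2
                 for i in range(n)) / n
    gamma = [-1.0] + list(beta)
    Q = [[Y[i]] + X[i] for i in range(n)]
    # Sigma_hat = (1/n) Q^T Q, a (p+1) x (p+1) matrix
    Sigma = [[sum(Q[i][a] * Q[i][b] for i in range(n)) / n
              for b in range(p + 1)] for a in range(p + 1)]
    quad = sum(gamma[a] * Sigma[a][b] * gamma[b]
               for a in range(p + 1) for b in range(p + 1))
    return direct, quad
```

This is the algebraic device that lets persistence arguments control the risk through entrywise convergence of Σ̂ₙ to Σ.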
Given Bₙ = {β : ‖β‖₁ ≤ Lₙ} for Lₙ = o((n / log n)^{1/4}), we define the oracle predictor β*,ₙ = argmin_{‖β‖₁ ≤ Lₙ} R(β), and the uncompressed lasso estimator β̂ₙ = argmin_{‖β‖₁ ≤ Lₙ} R̂ₙ(β). Following arguments in [10], it can be shown that, under Assumption 1 and given a sequence of sets of estimators Bₙ = {β : ‖β‖₁ ≤ Lₙ} for Lₙ = o((n / log n)^{1/4}), the sequence of uncompressed lasso estimators β̂ₙ = argmin_{β ∈ Bₙ} R̂ₙ(β) is persistent, i.e., R(β̂ₙ) − R(β*,ₙ) →P 0.

Assumption 1. Suppose that, for each j and k, E(|Z|^q) ≤ q! M^{q−2} s / 2 for every q ≥ 2 and some constants M and s, where Z = Qⱼ Qₖ − E(Qⱼ Qₖ).

Compressed Persistence. For the compressed case, again we want to predict (X, Y), but now the estimator β̃ₙ,ₘ is based on the lasso from the compressed data of size mₙ. Let γ = (−1, β₁, ..., βₚ)ᵀ as before, and replace R̂ₙ with

R̂ₙ,ₘ(β) = γᵀ Σ̂ₙ,ₘ γ, where Σ̂ₙ,ₘ = (1/mₙ) Qᵀ Φᵀ Φ Q. (7)

Theorem 4.1. Given compressed sample size mₙ, let Bₙ,ₘ = {β : ‖β‖₁ ≤ Lₙ,ₘ}, where Lₙ,ₘ = o((mₙ / log(npₙ))^{1/4}). We define the compressed oracle predictor β*,ₙ,ₘ = argmin_{‖β‖₁ ≤ Lₙ,ₘ} R(β) and the compressed lasso estimator β̃ₙ,ₘ = argmin_{‖β‖₁ ≤ Lₙ,ₘ} R̂ₙ,ₘ(β). Suppose Assumption 1 holds, let Q₁, ..., Q_{pₙ+1} denote the columns of Q, and let M₁ > 0 be a constant such that E‖Qⱼ‖₂² ≤ M₁ n for all j ∈ {1, ..., pₙ + 1}. Then for any sequence Bₙ,ₘ ⊆ Rᵖ with log²(npₙ) ≤ mₙ ≤ n, where Bₙ,ₘ consists of all coefficient vectors β such that ‖β‖₁ ≤ Lₙ,ₘ = o((mₙ / log(npₙ))^{1/4}), the sequence of compressed lasso estimators β̃ₙ,ₘ = argmin_{β ∈ Bₙ,ₘ} R̂ₙ,ₘ(β) is persistent: R(β̃ₙ,ₘ) − R(β*,ₙ,ₘ) →P 0, when pₙ = O(n^c) for some c < 1/2.

The main difference between the sequence of compressed lasso estimators and the original uncompressed sequence is that n and mₙ together define the sequence of estimators for the compressed data. Here mₙ is allowed to grow from Ω(log²(np)) to n; hence for each fixed n, {β̃ₙ,ₘ : log²(np) ≤ mₙ ≤ n} defines a subsequence of estimators.
In Section 6 we illustrate the compressed lasso persistency via simulations that compare the empirical risks with the oracle risks on such a subsequence for a fixed n.

5 Information Theoretic Analysis of Privacy

Next we derive bounds on the rate at which the compressed data X̃ reveal information about the uncompressed data X. Our general approach is to consider the mapping X ↦ ΦX + Δ as a noisy communication channel, where the channel is characterized by multiplicative noise Φ and additive noise Δ. Since the number of symbols in X is np, we normalize by this effective block length to define the information rate rₙ,ₘ per symbol as rₙ,ₘ = sup_{p(X)} I(X; X̃) / (np). Thus we seek bounds on the capacity of this channel. A privacy guarantee is given in terms of bounds on the rate rₙ,ₘ → 0 decaying to zero. Intuitively, if the mutual information satisfies I(X; X̃) = H(X) − H(X | X̃) ≈ 0, then the compressed data X̃ reveal, on average, no more information about the original data X than could be obtained from an independent sample.

The underlying channel is equivalent to the multiple antenna model for wireless communication [13], where there are n transmitter and m receiver antennas in a Rayleigh flat-fading environment. The propagation coefficients between pairs of transmitter and receiver antennas are modeled by the matrix entries Φᵢⱼ; they remain constant for a coherence interval of p time periods. Computing the channel capacity over multiple intervals requires optimization of the joint density of the pn transmitted signals, the problem studied in [13]. Formally, the channel is modeled as Z = ΦX + τΔ, where τ > 0, Δᵢⱼ ~ N(0, 1), Φᵢⱼ ~ N(0, 1/n), and (1/n) Σᵢ₌₁ⁿ E[Xᵢⱼ²] ≤ P, where the latter is a power constraint.

Theorem 5.1. Suppose that E[Xⱼ²] ≤ P and the compressed data are formed by Z = ΦX + τΔ, where Φ is m × n with independent entries Φᵢⱼ ~ N(0, 1/n) and Δ is m × p with independent entries Δᵢⱼ ~ N(0, 1). Then the information rate rₙ,ₘ = sup_{p(X)} I(X; Z) / (np) satisfies
rₙ,ₘ ≤ (m / 2n) log(1 + P/τ²). This result is implicitly contained in [13]. When τ = 0, or equivalently Δ = 0, which is the case assumed in our sparsistence and persistence results, the above analysis yields only a trivial bound on rₙ,ₘ. We thus derive a separate bound for this case; however, the resulting asymptotic order of the information rate is the same.

Theorem 5.2. Suppose that E[Xⱼ²] ≤ P and the compressed data are formed by Z = ΦX, where Φ is m × n with independent entries Φᵢⱼ ~ N(0, 1/n). Then the information rate rₙ,ₘ satisfies rₙ,ₘ = sup_{p(X)} I(X; Z) / (np) ≤ (m / 2n) log(2πeP).

Under our sparsistency lower bound on m, the above upper bounds are rₙ,ₘ = O(log(np)/n). We note that these bounds may not be the best possible, since they are obtained assuming knowledge of the compression matrix Φ, when in fact the privacy protocol requires that Φ and Δ are not public. Proofs of both results appear in Appendix D.

6 Experiments

In this section we report results of simulations designed to validate the theoretical analysis presented in previous sections. We first present results showing that the compressed lasso is comparable to the uncompressed lasso in recovering the sparsity pattern of the true linear model. We then show results on persistence that are in close agreement with the theoretical results of Section 4. We only include Figures 1–2 here; additional plots are included in the full version.

Sparsistency. Here we run simulations to compare the compressed lasso with the uncompressed lasso in terms of the probability of success in recovering the sparsity pattern of β*. We use random matrices for both X and Φ, and reproduce the experimental conditions of [19]. A design parameter is the compression factor f = n/m, which indicates how much the original data are compressed. The results show that when the compression factor f is large enough, the thresholding behaviors specified in (8) and (9) for the uncompressed lasso carry over to the compressed lasso, when X is drawn from a Gaussian ensemble.
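The bound of Theorem 5.2 is straightforward to evaluate; the sketch below (our own helper name) computes (m/2n) log(2πeP) and illustrates that the bound grows with m and decays with n, so that taking m polylogarithmic in np drives the rate to zero.

```python
import math

def info_rate_bound(n, m, P):
    """Upper bound (m / (2n)) * log(2*pi*e*P) on the per-symbol
    information rate r_{n,m} from Theorem 5.2 (Z = Phi X, no
    additive noise), with log taken as the natural logarithm
    (an assumption on our part; the theorem's base is not shown).
    """
    return (m / (2.0 * n)) * math.log(2.0 * math.pi * math.e * P)
```

For example, with n = 10000, m = 100, and P = 1 the bound is about 0.014 nats per symbol, and it halves when n doubles at fixed m.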
In general, the required compression factor f is well below the requirement of Theorem 3.3 for the case of deterministic X. In more detail, we consider the Gaussian ensemble for the projection matrix Φ, where the Φᵢⱼ ~ N(0, 1/n) are independent. The noise is ε ~ N(0, σ²), where σ² = 1. We consider Gaussian ensembles for the design matrix X with both diagonal and Toeplitz covariance. In the Toeplitz case, the covariance is given by T(ρ)ᵢⱼ = ρ^{|i−j|}; we use ρ = 0.1. [19] shows that when X comes from a Gaussian ensemble under these conditions, there exist fixed constants θₗ and θᵤ such that, for any δ > 0 and s = |supp(β*)|: if

n > 2(θᵤ + δ) s log(p − s) + s + 1, (8)

then the lasso identifies the true variables with probability approaching one. Conversely, if

n < 2(θₗ − δ) s log(p − s) + s + 1, (9)

then the probability of recovering the true variables using the lasso approaches zero. In the following simulations, we carry out the lasso using the procedure lars(Y, X), which implements the LARS algorithm of [8] to calculate the full regularization path. For the uncompressed case, we run lars(Y, X) with Y = Xβ* + ε, and for the compressed case we run lars(Ỹ, X̃) with Ỹ = X̃β* + Φε. The regularization parameter is λₘ = c √(log(p − s) log s / m).

Figure 1: Plots of the number of samples versus the probability of success for recovering sgn(β*). Each point on a curve, for a particular θ or m with m = 2θ s log(p − s) + s + 1, is an average over 200 trials; for each trial we randomly draw Xₙ×ₚ, Φₘ×ₙ, and ε ∈ Rⁿ. The covariance Σ = (1/n) E[XᵀX] and the model β* are fixed across all curves in each plot. The sparsity level is s(p) = 0.2 p^{1/2}. The four sets of curves in the left plot (identity covariance) are for p = 128, 256, 512 and 1024, with dashed lines marking m for θ = 1 and s = 2, 3, 5 and 6 respectively. In the plots on the right (Toeplitz covariance, ρ = 0.1), each curve has a compression factor f ∈ {5, 10, 20, 40, 80, 120} for the compressed lasso, so that n = fm; dashed lines mark θ = 1. For Σ = I, θᵤ = θₗ = 1, while for Σ = T(0.1), θᵤ ≈ 1.84 and θₗ ≈ 0.46 [19], for the uncompressed lasso in (8) and (9).

Figure 2: Risk versus compressed dimension (n = 9000, p = 128, s = 9). We fix n = 9000 and p = 128, with Lₙ = 2.6874. The model is β* = (−0.9, −1.7, 1.1, 1.3, −0.5, 2, −1.7, −1.3, −0.9, 0, ..., 0)ᵀ, so that ‖β*‖₁ > Lₙ and β* ∉ Bₙ, and the uncompressed oracle predictive risk is R* = 9.81. For each value of m, a data point corresponds to the mean empirical risk, defined in (7), over 100 trials, and each vertical bar shows one standard deviation. For each trial, we randomly draw Xₙ×ₚ with i.i.d. row vectors xᵢ ~ N(0, T(0.1)), and Y = Xβ* + ε.

Persistence. Here we solve the ℓ₁-constrained optimization problem β̂ = argmin_{‖β‖₁ ≤ L} ‖Y − Xβ‖₂² directly, based on algorithms described by [15]. We constrain the solution to lie in the ball Bₙ = {β : ‖β‖₁ ≤ Lₙ}, where Lₙ = n^{1/4} / √(log n). By [10], the uncompressed lasso estimator β̂ₙ is persistent over Bₙ. For the compressed lasso, given n and pₙ and a varying compressed sample size m, we take the ball Bₙ,ₘ = {β : ‖β‖₁ ≤ Lₙ,ₘ}, where Lₙ,ₘ = m^{1/4} / √(log(npₙ)). The compressed lasso estimator β̃ₙ,ₘ, for log²(npₙ) ≤ m ≤ n, is persistent over Bₙ,ₘ by Theorem 4.1. The simulations confirm this behavior.
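The x-axis rescaling used in Figure 1 can be computed directly: each curve is plotted against the control parameter θ through the sample size m = 2θ s log(p − s) + s + 1. A small helper for this threshold (the function name is ours):

```python
import math

def lasso_sample_threshold(p, s, theta=1.0):
    """Wainwright-style sample-size threshold
    m = 2 * theta * s * log(p - s) + s + 1, where theta is the
    control parameter on the x-axis of Figure 1."""
    return 2.0 * theta * s * math.log(p - s) + s + 1
```

For p = 1024 and s = 6 (the sparsity s(p) = 0.2 p^{1/2} rounded down), the θ = 1 threshold is about 90 samples, which matches the scale of the dashed lines in the left panel of Figure 1.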
The results show that the behavior under compression is close to the uncompressed case.

References

[1] D. Agrawal and C. C. Aggarwal. On the design and quantification of privacy preserving data mining algorithms. In Proceedings of the 20th Symposium on Principles of Database Systems, May 2001.
[2] E. Candès, J. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8):1207–1223, August 2006.
[3] T. Dalenius. Towards a methodology for statistical disclosure control. Statistik Tidskrift, 15:429–444, 1977.
[4] D. Donoho. Compressed sensing. IEEE Trans. Info. Theory, 52(4):1289–1306, April 2006.
[5] M. Duarte, M. Davenport, M. Wakin, and R. Baraniuk. Sparse signal detection from incoherent projections. In Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, 2006.
[6] G. Duncan and R. Pearson. Enhancing access to microdata while protecting confidentiality: Prospects for the future. Statistical Science, 6(3):219–232, August 1991.
[7] C. Dwork. Differential privacy. In 33rd International Colloquium on Automata, Languages and Programming (ICALP 2006), pages 1–12, 2006.
[8] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32(2):407–499, 2004.
[9] J. Feigenbaum, Y. Ishai, T. Malkin, K. Nissim, M. J. Strauss, and R. N. Wright. Secure multiparty computation of approximations. ACM Trans. Algorithms, 2(3):435–472, 2006.
[10] E. Greenshtein and Y. Ritov. Persistency in high dimensional linear predictor-selection and the virtue of over-parametrization. Bernoulli, 10:971–988, 2004.
[11] J. Haupt, R. Castro, R. Nowak, G. Fudge, and A. Yeh. Compressive sampling for signal classification. In Proc. Asilomar Conference on Signals, Systems, and Computers, October 2006.
[12] K. Liu, H. Kargupta, and J. Ryan. Random projection-based multiplicative data perturbation for privacy preserving distributed data mining. IEEE Trans.
on Knowl. and Data Engin., 18(1), January 2006.
[13] T. L. Marzetta and B. M. Hochwald. Capacity of a mobile multiple-antenna communication link in Rayleigh flat fading. IEEE Trans. Info. Theory, 45(1):139–157, January 1999.
[14] N. Meinshausen and B. Yu. Lasso-type recovery of sparse representations for high-dimensional data. Technical Report 720, Department of Statistics, UC Berkeley, 2006.
[15] M. Osborne, B. Presnell, and B. Turlach. On the lasso and its dual. J. Comp. and Graph. Stat., 9(2):319–337, 2000.
[16] A. P. Sanil, A. Karr, X. Lin, and J. P. Reiter. Privacy preserving regression modelling via distributed computation. In Proceedings of the Tenth ACM SIGKDD, 2004.
[17] R. Tibshirani. Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B, 58(1):267–288, 1996.
[18] J. Tropp. Greed is good: Algorithmic results for sparse approximation. IEEE Transactions on Information Theory, 50(10):2231–2242, 2004.
[19] M. Wainwright. Sharp thresholds for high-dimensional and noisy recovery of sparsity. Technical Report 709, Department of Statistics, UC Berkeley, May 2006.
[20] P. Zhao and B. Yu. On model selection consistency of lasso. J. Mach. Learn. Research, 7:2541–2567, 2007.