Bolasso: Model Consistent Lasso Estimation through the Bootstrap

Francis R. Bach (francis.bach@mines.org)
INRIA - WILLOW Project-Team, Laboratoire d'Informatique de l'École Normale Supérieure, Paris, France

Abstract

We consider the least-square linear regression problem with regularization by the ℓ1-norm, a problem usually referred to as the Lasso. In this paper, we present a detailed asymptotic analysis of model consistency of the Lasso. For various decays of the regularization parameter, we compute asymptotic equivalents of the probability of correct model selection (i.e., variable selection). For a specific rate decay, we show that the Lasso selects all the variables that should enter the model with probability tending to one exponentially fast, while it selects all other variables with strictly positive probability. We show that this property implies that if we run the Lasso for several bootstrapped replications of a given sample, then intersecting the supports of the Lasso bootstrap estimates leads to consistent model selection. This novel variable selection algorithm, referred to as the Bolasso, is compared favorably to other linear regression methods on synthetic data and datasets from the UCI machine learning repository.
1. Introduction

Regularization by the ℓ1-norm has attracted a lot of interest in recent years in machine learning, statistics and signal processing. In the context of least-square linear regression, the problem is usually referred to as the Lasso (Tibshirani, 1994). Much of the early effort has been dedicated to algorithms that solve the optimization problem efficiently. In particular, the Lars algorithm of Efron et al. (2004) allows one to find the entire regularization path (i.e., the set of solutions for all values of the regularization parameter) at the cost of a single matrix inversion.

Moreover, a well-known justification of the regularization by the ℓ1-norm is that it leads to sparse solutions, i.e., loading vectors with many zeros, and thus performs model selection. Recent works (Zhao & Yu, 2006; Yuan & Lin, 2007; Zou, 2006; Wainwright, 2006) have looked precisely at the model consistency of the Lasso, i.e., if we know that the data were generated from a sparse loading vector, does the Lasso actually recover the sparsity pattern when the number of observed data points grows? In the case of a fixed number of covariates, the Lasso recovers the sparsity pattern if and only if a certain simple condition on the generating covariance matrices is verified (Yuan & Lin, 2007). In particular, in low-correlation settings, the Lasso is indeed consistent. However, in the presence of strong correlations between relevant and irrelevant variables, the Lasso cannot be consistent, shedding light on potential problems of such procedures for variable selection. Adaptive versions, where data-dependent weights are added to the ℓ1-norm, then allow consistency to be kept in all situations (Zou, 2006).

In this paper, we first derive a detailed asymptotic analysis of sparsity pattern selection by the Lasso estimation procedure that extends previous analyses (Zhao & Yu, 2006; Yuan & Lin, 2007; Zou, 2006), by focusing on a specific decay of the regularization parameter. Namely, we show that when the decay is proportional to n^(-1/2), where n is the number of observations, the Lasso selects all the variables that should enter the model (the relevant variables) with probability tending to one exponentially fast with n, while it selects all other variables (the irrelevant variables) with strictly positive probability. If several datasets generated from the same distribution were available, the latter property would suggest considering the intersection of the supports of the Lasso estimates for each dataset: all relevant variables would always be selected for all datasets, while irrelevant variables would enter the models randomly, and intersecting the supports from sufficiently many different datasets would simply eliminate them. However, in practice, only one dataset is given; but resampling methods such as the bootstrap are exactly dedicated to mimicking the availability of several datasets by resampling from the same unique dataset (Efron & Tibshirani, 1998). In this paper, we show that when using the bootstrap and intersecting the supports, we actually get a consistent model estimate, without the consistency condition required by the regular Lasso. We refer to this new procedure as the Bolasso (bootstrap-enhanced least absolute shrinkage operator). Finally, our Bolasso framework could be seen as a voting scheme applied to the supports of the bootstrap Lasso estimates; however, our procedure may rather be considered a consensus combination scheme, as we keep the (largest) subset of variables on which all regressors agree in terms of variable selection, which in our case is provably consistent and also allows us to get rid of a potential additional hyperparameter.

The paper is organized as follows: in Section 2, we present the asymptotic analysis of model selection for the Lasso; in Section 3, we describe the Bolasso framework, while in Section 4, we illustrate our results on synthetic data, where the true sparse generating model is known, and on data from the UCI machine learning repository. Sketches of proofs can be found in Appendix A.

Notations. For a vector v ∈ R^p, we denote by ||v||_2 = (v^T v)^(1/2) its ℓ2-norm, by ||v||_∞ = max_{i ∈ {1,...,p}} |v_i| its ℓ∞-norm, and by ||v||_1 = Σ_{i=1}^p |v_i| its ℓ1-norm. For a ∈ R, sign(a) denotes the sign of a, defined as sign(a) = 1 if a > 0, -1 if a < 0, and 0 if a = 0. For a vector v ∈ R^p, sign(v) ∈ R^p denotes the vector of signs of the elements of v. Moreover, given a vector v ∈ R^p and a subset I of {1,...,p}, v_I denotes the vector in R^Card(I) of elements of v indexed by I. Similarly, for a matrix A ∈ R^{p×p}, A_{I,J} denotes the submatrix of A composed of the elements of A whose rows are in I and columns are in J.
2. Asymptotic Analysis of Model Selection for the Lasso

In this section, we describe existing and new asymptotic results regarding the model selection capabilities of the Lasso.

2.1. Assumptions

We consider the problem of predicting a response Y ∈ R from covariates X = (X_1, ..., X_p)^T ∈ R^p. The only assumptions that we make on the joint distribution P_XY of (X, Y) are the following:

(A1) The cumulant generating functions E exp(s ||X||_2^2) and E exp(s Y^2) are finite for some s > 0.

(A2) The joint matrix of second-order moments Q = E[X X^T] ∈ R^{p×p} is invertible.

(A3) E(Y|X) = X^T w and var(Y|X) = σ^2 a.s., for some w ∈ R^p and σ > 0.

We let J = {j, w_j ≠ 0} denote the sparsity pattern of w, s = sign(w) the sign pattern of w, and ε = Y − X^T w the additive noise.¹ Note that our assumption regarding cumulant generating functions is satisfied when X and ε have compact supports, and also when the densities of X and ε have light enough tails.

We consider independent and identically distributed (i.i.d.) data (x_i, y_i) ∈ R^p × R, i = 1, ..., n, sampled from P_XY; the data are given in the form of matrices Y ∈ R^n and X ∈ R^{n×p}. Note that the i.i.d. assumption, together with (A1-3), constitutes the simplest setting for studying the asymptotic behavior of the Lasso; it is of course of interest to allow more general assumptions, in particular a growing number of variables p, more general random variables, etc., which are outside the scope of this paper--see, e.g., Meinshausen and Yu (2008); Zhao and Yu (2006); Lounici (2008).

¹ Throughout this paper, boldface fonts were used in the original typesetting for population quantities.

2.2. Lasso Estimation

We consider the square loss function (1/2n) Σ_{i=1}^n (y_i − w^T x_i)^2 = (1/2n) ||Y − Xw||_2^2 and the regularization by the ℓ1-norm defined as ||w||_1 = Σ_{i=1}^p |w_i|. That is, we look at the following Lasso optimization problem (Tibshirani, 1994):

    min_{w ∈ R^p}  (1/2n) ||Y − Xw||_2^2 + µ_n ||w||_1,        (1)

where µ_n ≥ 0 is the regularization parameter. We denote by ŵ any global minimum of Eq. (1)--it may not be unique in general, but will be with probability tending to one exponentially fast under assumption (A2).

2.3. Model Consistency - General Results

In this section, we detail the asymptotic behavior of the Lasso estimate ŵ, both in terms of the difference in norm with the population value w (i.e., regular consistency) and of the sign pattern sign(ŵ), for all asymptotic behaviors of the regularization parameter µ_n. Note that information about the sign pattern includes information about the support, i.e., the indices i for which ŵ_i is different from zero; moreover, when ŵ is consistent, consistency of the sign pattern is in fact equivalent to consistency of the support.

We now consider five mutually exclusive situations which explain various portions of the regularization path (we assume (A1-3)); many of these results appear elsewhere (Yuan & Lin, 2007; Zhao & Yu, 2006; Fu & Knight, 2000; Zou, 2006; Bach, 2008; Lounici, 2008), but some of the finer results presented below are new (see Section 2.4).

1. If µ_n tends to infinity, then ŵ = 0 with probability tending to one.

2. If µ_n tends to a finite strictly positive constant µ_0, then ŵ converges in probability to the unique global minimum of (1/2)(v − w)^T Q (v − w) + µ_0 ||v||_1 over v ∈ R^p. Thus, the estimate ŵ never converges in probability to w, while the sign pattern tends to the one of this global minimum, which may or may not be the same as the one of w.²

3. If µ_n tends to zero slower than n^(-1/2), then ŵ converges in probability to w (regular consistency) and the sign pattern converges to the sign pattern of the global minimum of (1/2) v^T Q v + v_J^T sign(w_J) + ||v_{J^c}||_1. This sign pattern is equal to the population sign vector s = sign(w) if and only if the following consistency condition is satisfied:

    ||Q_{J^c J} Q_{JJ}^{-1} sign(w_J)||_∞ ≤ 1.        (2)

   Thus, if Eq. (2) is satisfied, the probability of correct sign estimation tends to one, and to zero otherwise (Yuan & Lin, 2007).

4. If µ_n = µ_0 n^(-1/2) for µ_0 ∈ (0, ∞), then the sign pattern of ŵ agrees on J with the one of w with probability tending to one, while for every sign pattern consistent on J with the one of w, the probability of obtaining this pattern tends to a limit in (0, 1) (in particular strictly positive); that is, all patterns consistent on J are possible with positive probability. See Section 2.4 for more details.

5. If µ_n tends to zero faster than n^(-1/2), then ŵ is consistent (i.e., converges in probability to w) but the support of ŵ is equal to {1, ..., p} with probability tending to one (the signs of the variables in J^c may be negative or positive). That is, the ℓ1-norm has no sparsifying effect.

² Here and in the third regime, we do not take into account the pathological cases where the sign pattern of the limit is unstable, i.e., the limit is exactly at a hinge point of the regularization path.

Among the five previous regimes, the only ones with consistent estimates (in norm) and a sparsity-inducing effect are µ_n tending to zero and µ_n n^(1/2) tending to a limit µ_0 ∈ (0, ∞] (i.e., potentially infinite). When µ_0 = +∞, we can only hope for model consistent estimates if the consistency condition in Eq. (2) is satisfied. This somewhat disappointing result for the Lasso has led to various improvements of the Lasso that ensure model consistency even when Eq. (2) is not satisfied (Yuan & Lin, 2007; Zou, 2006). Those are based on adaptive weights derived from the non-regularized least-square estimate. We propose in Section 3 an alternative which is based on resampling.
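The consistency condition in Eq. (2) depends only on the population matrix Q and on the signs of w on its support J, so it can be evaluated numerically for a given design. The following sketch (assuming numpy is available; the equicorrelated example at the bottom is hypothetical and not taken from the paper) simply computes ||Q_{J^c J} Q_{JJ}^{-1} sign(w_J)||_∞.

```python
import numpy as np

def lasso_consistency_condition(Q, J, w):
    """Evaluate ||Q_{Jc,J} Q_{J,J}^{-1} sign(w_J)||_inf from Eq. (2).

    Q : (p, p) population second-moment matrix E[X X^T]
    J : indices of the relevant variables (support of w)
    w : population loading vector
    Values <= 1 mean the Lasso can be sign-consistent for a slowly
    decaying regularization sequence; values > 1 mean it cannot.
    """
    p = Q.shape[0]
    J = np.asarray(J)
    Jc = np.setdiff1d(np.arange(p), J)
    s_J = np.sign(w[J])
    # Q_{Jc,J} Q_{J,J}^{-1} sign(w_J)
    v = Q[np.ix_(Jc, J)] @ np.linalg.solve(Q[np.ix_(J, J)], s_J)
    return np.abs(v).max()

# Hypothetical example: equicorrelated design with correlation 0.5.
p, rho = 8, 0.5
Q = (1 - rho) * np.eye(p) + rho * np.ones((p, p))
w = np.zeros(p)
w[:3] = [1.0, -1.0, 1.0]
print(lasso_consistency_condition(Q, [0, 1, 2], w))
```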
In this paper, we now consider the specific case where µ_n = µ_0 n^(-1/2) for µ_0 ∈ (0, ∞), for which we derive new asymptotic results. Indeed, in this situation, we get the correct signs of the relevant variables (those in J) with probability tending to one, but we also get all possible sign patterns consistent with this, i.e., all other variables (those not in J) may be non-zero with asymptotically strictly positive probability.

However, if we were to repeat the Lasso estimation for many datasets obtained from the same distribution, we would obtain, for each µ_0, a set of active variables, all of which include J with probability tending to one, but potentially containing any of the other subsets. By intersecting those, we would get exactly J. However, this requires multiple copies of the samples, which are not usually available. Instead, we consider bootstrapped samples, which exactly mimic the behavior of having multiple copies. See Section 3 for more details.

2.4. Model Consistency with Exact Root-n Regularization Decay

In this section we present detailed new results regarding pattern consistency for µ_n tending to zero exactly at rate n^(-1/2) (see proofs in Appendix A):

Proposition 1. Assume (A1-3) and µ_n = µ_0 n^(-1/2), with µ_0 > 0. Then, for any sign pattern s ∈ {-1, 0, 1}^p such that s_J = sign(w_J), P(sign(ŵ) = s) tends to a limit π(s, µ_0) ∈ (0, 1), and we have

    P(sign(ŵ) = s) − π(s, µ_0) = O(n^(-1/2) log n).

Proposition 2. Assume (A1-3) and µ_n = µ_0 n^(-1/2), with µ_0 > 0. Then, for any sign pattern s ∈ {-1, 0, 1}^p such that s_J ≠ sign(w_J), there exists a constant A(µ_0) > 0 such that

    log P(sign(ŵ) = s) ≤ −n A(µ_0) + O(n^(-1/2)).

The last two propositions state that we get all relevant variables with probability tending to one exponentially fast, while we get each of the other patterns with probability tending to a limit strictly between zero and one. Note that the results that we give in this paper are valid for finite n, i.e., we can derive actual bounds on the probability of sign pattern selection, with known constants that explicitly depend on w, Q and the joint distribution P_XY.
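As a quick empirical illustration of this root-n regime and of the repeated-dataset thought experiment above (a sketch under assumed settings, not the experimental protocol of Section 4): scikit-learn's Lasso minimizes (1/2n)||Y − Xw||_2^2 + α||w||_1, which matches Eq. (1) with α = µ_n, so one can fit the Lasso at µ_n = µ_0 n^(-1/2) on several independently drawn datasets and intersect the resulting supports.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
p, n, mu0, sigma = 10, 2000, 1.0, 0.5
w_true = np.zeros(p)
w_true[:3] = [1.0, -1.0, 0.5]           # population support J = {0, 1, 2}

supports = []
for _ in range(16):                      # 16 independent datasets from the same model
    X = rng.standard_normal((n, p))
    y = X @ w_true + sigma * rng.standard_normal(n)
    lasso = Lasso(alpha=mu0 / np.sqrt(n), fit_intercept=False)
    lasso.fit(X, y)
    supports.append(set(np.flatnonzero(lasso.coef_ != 0)))

print("intersection of supports:", set.intersection(*supports))
```

With these (assumed) settings, the relevant variables tend to appear in every support while irrelevant ones appear only occasionally, so the intersection typically recovers the true support, in line with regime 4 above.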
3. Bolasso: Bootstrapped Lasso

Given the n i.i.d. observations (x_i, y_i) ∈ R^p × R, i = 1, ..., n, put together into matrices X ∈ R^{n×p} and Y ∈ R^n, we consider m bootstrap replications of the n data points (Efron & Tibshirani, 1998); that is, for k = 1, ..., m, we consider a ghost sample (x_i^k, y_i^k) ∈ R^p × R, i = 1, ..., n, given by matrices X^k ∈ R^{n×p} and Y^k ∈ R^n. The n pairs (x_i^k, y_i^k), i = 1, ..., n, are sampled uniformly at random with replacement from the n original pairs in (X, Y), and the samplings of the nm pairs of observations are independent. In other words, we define the distribution of the ghost sample (X*, Y*) by sampling n points with replacement from (X, Y), and, given (X, Y), the m ghost samples are sampled independently and identically from the distribution of (X*, Y*).

The asymptotic analysis of Section 2 suggests estimating the supports J_k = {j, ŵ_j^k ≠ 0} of the Lasso estimates ŵ^k for the bootstrap samples, k = 1, ..., m, and intersecting them to define the Bolasso estimate of the support: Ĵ = ∩_{k=1}^m J_k. Once Ĵ is selected, we estimate w by the unregularized least-square fit restricted to the variables in Ĵ. The detailed procedure is given in Algorithm 1. The algorithm has only one extra parameter (the number of bootstrap samples m). Following Proposition 3, log(m) should be chosen to grow with n, asymptotically slower than n. In simulations, we always use m = 128 (except in Figure 3, where we study the influence of m).

Algorithm 1 Bolasso
  Input: data (X, Y) ∈ R^{n×(p+1)},
         number of bootstrap replicates m,
         regularization parameter µ
  for k = 1 to m do
     Generate bootstrap sample (X^k, Y^k) ∈ R^{n×(p+1)}
     Compute Lasso estimate ŵ^k from (X^k, Y^k)
     Compute support J_k = {j, ŵ_j^k ≠ 0}
  end for
  Compute Ĵ = ∩_{k=1}^m J_k
  Compute ŵ_Ĵ from (X_Ĵ, Y)

Note that in practice the Bolasso estimate can be computed simultaneously for a large number of regularization parameters because of the efficiency of the Lars algorithm (which we use in simulations), which allows one to find the entire regularization path for the Lasso at the (empirical) cost of a single matrix inversion (Efron et al., 2004). Thus the computational complexity of the Bolasso is O(m(p^3 + p^2 n)).

The following proposition (proved in Appendix A) shows that the previous algorithm leads to consistent model selection.

Proposition 3. Assume (A1-3) and µ_n = µ_0 n^(-1/2), with µ_0 > 0. Then, for all m > 1, the probability that the Bolasso does not exactly select the correct model, i.e., P(Ĵ ≠ J), has the following upper bound:

    P(Ĵ ≠ J) ≤ m A_1 e^(-A_2 n) + A_3 (log n)/n^(1/2) + A_4 (log m)/m,

where A_1, A_2, A_3, A_4 are strictly positive constants.

Therefore, if log(m) tends to infinity slower than n when n tends to infinity, the Bolasso asymptotically selects the correct active variables with overwhelming probability and, by regular consistency of the restricted least-square estimate, the correct sign pattern as well. Note that the previous bound holds whether or not the condition in Eq. (2) is satisfied, but could be improved when Eq. (2) is satisfied. See Section 4.1 for a detailed comparison with the Lasso on synthetic examples.
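A minimal Python sketch of Algorithm 1, assuming numpy and scikit-learn (the paper's experiments use the Lars path to handle many values of µ at once, whereas this sketch fixes a single regularization parameter µ, as in Algorithm 1):

```python
import numpy as np
from sklearn.linear_model import Lasso

def bolasso(X, Y, mu, m=128, rng=None):
    """Sketch of Algorithm 1 (Bolasso): intersect the Lasso supports over
    m bootstrap replications, then refit ordinary least squares on the
    selected variables."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    support = np.ones(p, dtype=bool)
    for _ in range(m):
        idx = rng.integers(0, n, size=n)              # bootstrap sample (with replacement)
        lasso = Lasso(alpha=mu, fit_intercept=False).fit(X[idx], Y[idx])
        support &= (lasso.coef_ != 0)                 # intersect supports J_k
    w = np.zeros(p)
    if support.any():                                 # unregularized refit on the intersection
        w[support] = np.linalg.lstsq(X[:, support], Y, rcond=None)[0]
    return w, np.flatnonzero(support)
```

In practice one would run this along a grid of µ values (e.g., the Lars breakpoints) and, as in Section 4.1, keep the most stable pattern across the grid.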
4. Simulations

In this section, we illustrate the consistency results obtained in this paper with a few simple simulations on synthetic examples and some medium-scale datasets from the UCI machine learning repository (Asuncion & Newman, 2007).

4.1. Synthetic examples

For a given dimension p, we sampled X ∈ R^p from a normal distribution with zero mean and a covariance matrix generated as follows: (a) sample a p×p matrix G with independent standard normal entries, (b) form Q = G G^T, (c) rescale Q to unit diagonal. We then selected the first Card(J) = r variables and sampled the non-zero loadings as follows: (a) sample each loading sign in {-1, 1} uniformly at random and (b) rescale those by a scaling factor which is uniform at random between 1/3 and 1 (to ensure min_{j ∈ J} |w_j| ≥ 1/3). Finally, we chose a constant noise level σ equal to 0.1 times (E(w^T X)^2)^(1/2), and the additive noise is normally distributed with zero mean and variance σ^2. Note that the joint distribution on (X, Y) thus defined satisfies, with probability one (with respect to the sampling of the covariance matrix), assumptions (A1-3).
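For reference, the sampling protocol just described can be sketched as follows (assuming numpy; this is an illustrative reimplementation of the description above, not the authors' code):

```python
import numpy as np

def sample_synthetic(n, p, r, rng):
    """Gaussian design with a random covariance rescaled to unit diagonal,
    r relevant variables with |w_j| in [1/3, 1], and noise std = 0.1 times
    the signal std, following Section 4.1."""
    G = rng.standard_normal((p, p))
    Q = G @ G.T
    d = np.sqrt(np.diag(Q))
    Q = Q / np.outer(d, d)                        # unit diagonal
    w = np.zeros(p)
    w[:r] = rng.choice([-1.0, 1.0], size=r) * rng.uniform(1/3, 1.0, size=r)
    X = rng.multivariate_normal(np.zeros(p), Q, size=n)
    signal_std = np.sqrt(w @ Q @ w)               # (E (w^T X)^2)^{1/2}
    sigma = 0.1 * signal_std
    Y = X @ w + sigma * rng.standard_normal(n)
    return X, Y, Q, w

X, Y, Q, w = sample_synthetic(n=1000, p=16, r=8, rng=np.random.default_rng(0))
```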
In Figure 1, we sampled two distributions P_XY with p = 16 and r = 8 relevant variables, one for which the consistency condition in Eq. (2) is satisfied (left), and one for which it is not (right). For a fixed number of samples n = 1000, we generated 256 replications and computed the empirical frequencies of selecting any given variable for the Lasso as the regularization parameter µ varies.

[Figure 1. Lasso: log-odd ratios of the probabilities of selection of each variable (white = large probabilities, black = small probabilities) vs. regularization parameter. Consistency condition in Eq. (2) satisfied (left) and not satisfied (right).]

Those plots show the various asymptotic regimes of the Lasso detailed in Section 2. In particular, on the right plot, although no µ leads to perfect selection (i.e., exactly the variables with indices less than r = 8 being selected), there is a range where all relevant variables are always selected, while all others are selected with probability within (0, 1).

In Figure 2, we plot the results under the same conditions for the Bolasso (with a fixed number of bootstrap replications m = 128). We can see that in the Lasso-consistent case (left), the Bolasso widens the consistency region, while in the Lasso-inconsistent case (right), the Bolasso "creates" a consistency region.

[Figure 2. Bolasso: log-odd ratios of the probabilities of selection of each variable (white = large probabilities, black = small probabilities) vs. regularization parameter. Consistency condition in Eq. (2) satisfied (left) and not satisfied (right).]

In Figure 3, we selected the same two distributions and compared the probability of exactly selecting the correct support pattern, for the Lasso and for the Bolasso with varying numbers of bootstrap replications (those probabilities are computed by averaging over 256 experiments with the same distribution). In Figure 3, we can see that in the Lasso-inconsistent case (right), the Bolasso indeed fixes the inability of the Lasso to find the correct pattern. Moreover, increasing m always looks beneficial; note that although this seems to contradict the asymptotic analysis in Section 3 (which imposes an upper bound on m for consistency), it is due to the fact that not selecting (at least) the relevant variables has very low probability and is not observed with only 256 replications.

[Figure 3. Bolasso (red, dashed) and Lasso (black, plain): probability of correct sign estimation vs. regularization parameter. Consistency condition in Eq. (2) satisfied (left) and not satisfied (right). The number of bootstrap replications m is in {2, 4, 8, 16, 32, 64, 128, 256}.]

Finally, in Figure 4, we compare various variable selection procedures for linear regression to the Bolasso, with two distributions where p = 64, r = 8 and n varies. For all the methods we consider, there is a natural way to select exactly r variables with no free parameters (for the Bolasso, we select the most stable pattern with r elements, i.e., the pattern which corresponds to most values of µ). We can see that the Bolasso outperforms all other variable selection methods, even in settings where the number of samples becomes of the order of the number of variables, which requires additional theoretical analysis, the subject of ongoing research.

[Figure 4. Comparison of several variable selection methods: Lasso (black circles), Bolasso (green crosses), forward greedy (magenta diamonds), thresholded LS estimate (red stars), adaptive Lasso (blue pluses). Consistency condition in Eq. (2) satisfied (left) and not satisfied (right). The averaged (over 32 replications) variable selection error is computed as the square distance between sparsity pattern indicator vectors.]

Note in particular that we compare with bagging of least-square regressions (Breiman, 1996a) followed by a thresholding of the loading vector, which is another simple way of using bootstrap samples: the Bolasso provides a more efficient way to use the extra information, not for usual stabilization purposes (Breiman, 1996b), but directly for model selection. Note, finally, that bagging of Lasso estimates requires an additional parameter and is thus not tested.

4.2. UCI datasets

The previous simulations have shown that the Bolasso is successful at performing model selection on synthetic examples. We now apply it to several linear regression problems and compare it to alternative methods for linear regression, namely ridge regression, the Lasso, bagging of Lasso estimates (Breiman, 1996a), and a soft version of the Bolasso (referred to as Bolasso-S), where, instead of intersecting the supports of the bootstrap replications, we select the variables which are present in at least 90% of the bootstrap replications.
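A sketch of this softened variant, under the same assumptions as the earlier Bolasso sketch (numpy and scikit-learn, a single fixed µ), where the hard intersection is replaced by a 90% frequency threshold:

```python
import numpy as np
from sklearn.linear_model import Lasso

def bolasso_soft(X, Y, mu, m=128, freq=0.9, rng=None):
    """Bolasso-S sketch: keep the variables whose Lasso coefficient is
    nonzero in at least `freq` of the m bootstrap replications, then
    refit ordinary least squares on the selected variables."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(m):
        idx = rng.integers(0, n, size=n)
        lasso = Lasso(alpha=mu, fit_intercept=False).fit(X[idx], Y[idx])
        counts += (lasso.coef_ != 0)
    support = counts >= freq * m
    w = np.zeros(p)
    if support.any():
        w[support] = np.linalg.lstsq(X[:, support], Y, rcond=None)[0]
    return w, np.flatnonzero(support)
```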
In Table 1, we consider data randomly generated as in Section 4.1 (with p = 32, r = 8, n = 64), where the true model is known to be composed of a sparse loading vector, while in Table 2 we consider regression datasets from the UCI machine learning repository, for which we have no indication regarding the sparsity of the best linear predictor. For all of those, we perform 10 replications of 10-fold cross-validation and, for all methods (which all have one free regularization parameter), we select the best regularization parameter on the 100 folds and report the mean squared prediction error and its standard deviation.

Note that when the generating model is actually sparse (Table 1), the Bolasso outperforms all other methods, while in the other cases (Table 2) the Bolasso is sometimes too strict in intersecting models, i.e., the softened version works better and is more competitive with the other methods. Studying the effects of this softened scheme (which is more similar to usual voting schemes), in particular in terms of the potential trade-off between good model selection and low prediction error, and under conditions where p is large, is the subject of ongoing work.

Table 1. Comparison of least-square estimation methods on data generated as described in Section 4.1, for several values of ||Q_{J^c J} Q_{JJ}^{-1} s_J||_∞ (cf. Eq. (2)), given in the first column. Performance is measured through mean squared prediction error (multiplied by 100).

  ||Q_{J^c J} Q_{JJ}^{-1} s_J||_∞   Ridge        Lasso        Bolasso       Bagging      Bolasso-S
  0.93                              8.8 ± 4.5    7.6 ± 3.8    5.4 ± 3.0     7.8 ± 4.7    5.7 ± 3.8
  1.20                              4.9 ± 2.5    4.4 ± 2.3    3.4 ± 2.4     4.6 ± 3.0    3.0 ± 2.3
  1.42                              7.3 ± 3.9    4.7 ± 2.5    3.4 ± 1.7     5.4 ± 4.1    3.1 ± 2.8
  1.28                              8.1 ± 8.6    5.1 ± 6.5    3.7 ± 10.2    5.8 ± 8.4    3.2 ± 8.2

Table 2. Comparison of least-square estimation methods on UCI regression datasets. Performance is measured through mean squared prediction error (multiplied by 100).

             Ridge         Lasso         Bolasso       Bagging       Bolasso-S
  Autompg    18.6 ± 4.9    18.6 ± 4.9    18.1 ± 4.7    18.6 ± 5.0    17.9 ± 5.0
  Imports     7.7 ± 4.8     7.8 ± 5.2    20.7 ± 9.8     8.0 ± 5.2     8.2 ± 4.9
  Machine     5.8 ± 18.6    5.8 ± 19.8    4.6 ± 21.4    6.0 ± 18.9    4.6 ± 19.9
  Housing    28.0 ± 5.9    28.0 ± 5.7    26.9 ± 2.5    28.1 ± 6.6    26.8 ± 6.4

5. Conclusion

We have presented a detailed analysis of the variable selection properties of a bootstrapped version of the Lasso. The resulting model estimation procedure, referred to as the Bolasso, is provably consistent under general assumptions. This work brings to light the fact that the poor variable selection behavior of the Lasso may be easily improved thanks to a simple parameter-free resampling procedure. Our contribution also suggests that the use of bootstrap samples by L. Breiman in Bagging/Arcing/Random Forests (Breiman, 1998) may have been so far slightly overlooked and considered a minor feature, while the use of bootstrap samples may actually be a key computational feature in such algorithms for good model selection performance, and eventually good prediction performance on real datasets.

The current work could be extended in various ways: first, we have focused on a fixed total number of variables, and allowing the number of variables to grow is important both in theory and in practice (Meinshausen & Yu, 2008). Second, the same technique can be applied to settings similar to least-square regression with the ℓ1-norm, namely regularization by block ℓ1-norms (Bach, 2008) and other losses such as general convex classification losses. Finally, theoretical and practical connections could be made with other work on resampling methods and boosting (Bühlmann, 2006).

A. Proof of Model Consistency Results

In this appendix, we give sketches of proofs for the asymptotic results presented in Section 2 and Section 3. The proofs rely on a well-known property of Lasso optimization problems, namely that if the sign pattern of the solution is known, then the solution can be obtained in closed form.

A.1. Optimality Conditions

We let ε = Y − X w ∈ R^n, Q̂ = X^T X / n ∈ R^{p×p} and q = X^T ε / n ∈ R^p (the hat distinguishes the empirical second-moment matrix from its population counterpart Q). First, we can equivalently rewrite Eq. (1) as

    min_{v ∈ R^p}  (1/2)(v − w)^T Q̂ (v − w) − q^T (v − w) + µ_n ||v||_1.        (3)
The optimality conditions for Eq. (3) can be written in terms of the sign pattern s = sign(v) and the sparsity pattern J_s = {j, v_j ≠ 0} of a candidate solution v (Yuan & Lin, 2007):

    ||(Q̂_{J_s^c J_s} Q̂_{J_s J_s}^{-1} Q̂_{J_s J} − Q̂_{J_s^c J}) w_J + (Q̂_{J_s^c J_s} Q̂_{J_s J_s}^{-1} q_{J_s} − q_{J_s^c}) − µ_n Q̂_{J_s^c J_s} Q̂_{J_s J_s}^{-1} s_{J_s}||_∞ ≤ µ_n,        (4)

    sign(Q̂_{J_s J_s}^{-1} Q̂_{J_s J} w_J + Q̂_{J_s J_s}^{-1} q_{J_s} − µ_n Q̂_{J_s J_s}^{-1} s_{J_s}) = s_{J_s}.        (5)

In this paper, we focus on regularization parameters µ_n of the form µ_n = µ_0 n^(-1/2). The main idea behind the results is to consider that (Q̂, q) are distributed according to their limiting distributions, obtained from the law of large numbers and the central limit theorem, i.e., Q̂ converges to Q a.s. and n^(1/2) q is asymptotically normally distributed with mean zero and covariance matrix σ^2 Q. Under this approximation, Propositions 1 and 2 are straightforward; the main effort is to make sure that we can safely replace (Q̂, q) by their limiting distributions. The following lemmas give sufficient conditions for correct estimation of the signs of the variables in J and for selecting a given pattern s (note that all constants could be expressed in terms of Q and w; details are omitted here):

Lemma 1. Assume (A2) and ||Q̂ − Q||_2 ≤ λ_min(Q)/2. Then sign(ŵ_J) ≠ sign(w_J) implies ||Q̂^{-1/2} q||_2 ≥ C_1 − µ_n C_2, where C_1, C_2 > 0.

Lemma 2. Assume (A2) and let s ∈ {-1, 0, 1}^p be such that s_J = sign(w_J). Let J_s = {j, s_j ≠ 0} ⊇ J. Assume

    ||Q̂ − Q||_2 ≤ min{η_1, λ_min(Q)/2},        (6)
    ||Q̂^{-1/2} q||_2 ≤ min{η_2, C_1 − µ_n C_4},        (7)
    ||Q̂_{J_s^c J_s} Q̂_{J_s J_s}^{-1} q_{J_s} − q_{J_s^c} − µ_n Q̂_{J_s^c J_s} Q̂_{J_s J_s}^{-1} s_{J_s}||_∞ ≤ µ_n − C_5 η_1 µ_n − C_6 η_1 η_2,        (8)
    for all i ∈ J_s \ J,  s_i [Q̂_{J_s J_s}^{-1}(q_{J_s} − µ_n s_{J_s})]_i ≥ µ_n C_7 (1 + C_8 η_1 η_2),        (9)

where C_4, C_5, C_6, C_7, C_8 are positive constants. Then sign(ŵ) = s.

These two lemmas are useful because they relate the optimality of certain sign patterns to quantities for which we can derive concentration inequalities.

A.2. Concentration Inequalities

Throughout the proofs, we need upper bounds on the quantities P(||Q̂^{-1/2} q||_2 > η) and P(||Q̂ − Q||_2 > η). Following standard arguments (Boucheron et al., 2004), we obtain, if η < C_9 and η < C_10 (where C_9, C_10 > 0 are constants),

    P(||Q̂^{-1/2} q||_2 > η) ≤ 4p exp(−n η^2 / (2p C_9)),
    P(||Q̂ − Q||_2 > η) ≤ 4p^2 exp(−n η^2 / (2p^2 C_10)).

We also use multivariate Berry-Esseen inequalities (Bentkus, 2003): the probability P(n^(1/2) q ∈ C) can be estimated by P(t ∈ C), where t is normal with mean zero and covariance matrix σ^2 Q, and the error |P(n^(1/2) q ∈ C) − P(t ∈ C)| is uniformly (over all convex sets C) upper bounded by

    400 p^(1/4) n^(-1/2) λ_min(Q)^(-3/2) E||ε X||_2^3 = C_11 n^(-1/2).

A.3. Proof of Proposition 1

By Lemma 2, for any A and n large enough, the probability that the sign pattern of ŵ differs from s is upper bounded by

    P(||Q̂^{-1/2} q||_2 > A (log n)^(1/2) n^(-1/2)) + P(||Q̂ − Q||_2 > A (log n)^(1/2) n^(-1/2)) + P{t ∉ C(s, µ_0 (1 − ζ))} + 2 C_11 n^(-1/2),

where ζ = O((log n) n^(-1/2)) and C(s, α) is the set of t ∈ R^p such that (a) ||Q_{J_s^c J_s} Q_{J_s J_s}^{-1} t_{J_s} − t_{J_s^c} − α Q_{J_s^c J_s} Q_{J_s J_s}^{-1} s_{J_s}||_∞ ≤ α and (b) for all i ∈ J_s \ J, s_i [Q_{J_s J_s}^{-1}(t_{J_s} − α s_{J_s})]_i ≥ 0. Note that since ζ tends to zero, we have P{t ∉ C(s, µ_0 (1 − ζ))} ≤ P{t ∉ C(s, µ_0)} + O(ζ). All additional terms (if A is large enough) are thus O((log n) n^(-1/2)). This shows that P(sign(ŵ) = s) ≤ π(s, µ_0) + O((log n) n^(-1/2)), where π(s, µ_0) = P{t ∈ C(s, µ_0)} ∈ (0, 1)--the probability is strictly between 0 and 1 because both the set and its complement have non-empty interiors and the normal distribution has an invertible covariance matrix σ^2 Q. The other inequality can be proved similarly. Note that the constant in the O((log n) n^(-1/2)) term depends on µ_0, but by carefully considering this dependence, the inequality can be made uniform in µ_0 as long as µ_0 tends to zero or infinity at most at logarithmic speed (i.e., µ_n deviates from n^(-1/2) by at most a logarithmic factor). It would also be interesting to consider uniform bounds on portions of the regularization path.
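The limits π(s, µ_0) of Proposition 1 can be approximated by Monte Carlo once Q, σ^2, J and a candidate pattern s are fixed: draw t ~ N(0, σ^2 Q) and count how often t falls in C(s, µ_0). A sketch assuming numpy (the membership test follows the definition of C(s, α) above; the example values at the bottom are hypothetical):

```python
import numpy as np

def pi_estimate(Q, sigma2, s, J_pop, mu0, n_samples=100_000, seed=0):
    """Monte Carlo estimate of pi(s, mu0) = P(t in C(s, mu0)), with
    t ~ N(0, sigma^2 Q), following the definition of C(s, alpha) in
    Section A.3. `s` is a candidate sign pattern agreeing with sign(w)
    on the population support `J_pop`."""
    rng = np.random.default_rng(seed)
    p = Q.shape[0]
    s = np.asarray(s, dtype=float)
    J = np.flatnonzero(s)                         # candidate support J_s (contains J_pop)
    Jc = np.setdiff1d(np.arange(p), J)
    extra = np.setdiff1d(J, J_pop)                # indices in J_s \ J_pop
    t = rng.multivariate_normal(np.zeros(p), sigma2 * Q, size=n_samples)
    # u = Q_{Js,Js}^{-1} (t_{Js} - mu0 * s_{Js}), one row per Monte Carlo sample
    u = np.linalg.solve(Q[np.ix_(J, J)], (t[:, J] - mu0 * s[J]).T).T
    # (a) sub-gradient bound on J_s^c  (Q symmetric, so Q_{Jc,J} u = u Q_{J,Jc})
    cond_a = np.abs(u @ Q[np.ix_(J, Jc)] - t[:, Jc]).max(axis=1) <= mu0
    # (b) sign agreement on the extra variables in J_s \ J_pop
    pos = np.searchsorted(J, extra)               # positions of the extra indices within J_s
    cond_b = (u[:, pos] * s[extra] >= 0).all(axis=1)
    return float(np.mean(cond_a & cond_b))

# Hypothetical example: p = 4, population support {0, 1}, candidate pattern
# also letting variable 2 enter with a positive sign.
Q = np.eye(4)
print(pi_estimate(Q, sigma2=1.0, s=[1, -1, 1, 0], J_pop=[0, 1], mu0=1.0))
```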
A.4. Proof of Proposition 2

From Lemma 1, the probability of not correctly selecting the signs of the variables in J is upper bounded by

    P(||Q̂^{-1/2} q||_2 > C_1 − µ_n C_2) + P(||Q̂ − Q||_2 > λ_min(Q)/2),

which is straightforwardly upper bounded (using Section A.2) by a term of the required form.

A.5. Proof of Proposition 3

In order to simplify the proof, we make the simplifying assumption that the random variables X and ε have compact supports. Extending the proof to the looser condition that ||X||_2^2 and ε^2 have finite cumulant generating functions (i.e., assumption (A1)) can be done with minor changes. The probability that Ĵ = ∩_{k=1}^m J_k differs from J is upper bounded by the sum of the following probabilities:

(a) Probability of missing at least one variable of J in at least one of the m replications: by Lemma 1, the probability that, for the k-th replication, one index in J is not selected is upper bounded by P(||Q̂^{-1/2} q*||_2 > C_1/2) + P(||Q̂* − Q||_2 > λ_min(Q)/2), where q* and Q̂* correspond to the ghost sample. As is common in theoretical analyses of the bootstrap, we relate q* to q as follows:

    P(||Q̂^{-1/2} q*||_2 > C_1/2) ≤ P(||Q̂^{-1/2}(q* − q)||_2 > C_1/4) + P(||Q̂^{-1/2} q||_2 > C_1/4)

(and similarly for P(||Q̂* − Q||_2 > λ_min(Q)/2)). Because we have assumed that X and ε have compact supports, the bootstrapped variables also have compact supports and we can use concentration inequalities (given the original variables X, ε, and also after expectation with respect to X, ε). Thus the probability for one bootstrap replication is upper bounded by B e^(-Cn), where B and C are strictly positive constants, and the overall contribution of this part is less than m B e^(-Cn).

(b) Probability of not selecting exactly J in all replications: note that this bound is not tight at all, since, on top of the relevant variables, which are selected with overwhelming probability, different additional variables may be selected in different replications and cancel out when intersecting. Our goal is thus to bound E[P(J_1 ≠ J | X, ε)^m], since, given the data, the m bootstrap supports are i.i.d. By Lemma 2, P(J_1 ≠ J | X, ε) is upper bounded by

    P(||Q̂^{-1/2} q*||_2 > A (log n)^(1/2) n^(-1/2) | X, ε) + P(||Q̂* − Q̂||_2 > A (log n)^(1/2) n^(-1/2) | X, ε) + P(t ∉ C(µ_0) | X, ε) + 2 C_11 n^(-1/2) + O((log n) n^(-1/2)),

where now, given X, Y, t is normally distributed with mean n^(1/2) q and covariance matrix (1/n) Σ_{i=1}^n ε_i^2 x_i x_i^T, and where C(µ_0) = {t ∈ R^p, ||Q_{J^c J} Q_{JJ}^{-1} t_J − t_{J^c} − µ_0 Q_{J^c J} Q_{JJ}^{-1} sign(w_J)||_∞ ≤ µ_0}.

As in (a), the first two terms and the last two ones are uniformly O((log n) n^(-1/2)) (if A is large enough). We then have to consider the remaining term. By Hoeffding's inequality, we can replace the covariance matrix that depends on X and Y by σ^2 Q, at cost O(n^(-1/2)). We thus have to bound P(n^(1/2) q + y ∉ C(µ_0) | q) for y normally distributed and C(µ_0) a fixed compact set. Because the set is compact, there exist constants A, B > 0 such that, if ||n^(1/2) q||_2 ≤ η with η large enough, then P(n^(1/2) q + y ∉ C(µ_0) | q) ≤ 1 − A e^(-B η^2). Thus, by truncation, we obtain a bound of the form

    E[P(J_1 ≠ J | X, ε)^m] ≤ (1 − A e^(-B η^2) + F (log n) n^(-1/2))^m + C e^(-η^2)
                          ≤ exp(−m A e^(-B η^2) + m F (log n) n^(-1/2)) + C e^(-η^2),

where we have used Hoeffding's inequality to upper bound P(||n^(1/2) q||_2 > η). By minimizing in closed form with respect to e^(-B η^2), i.e., with e^(-B η^2) = F (log n)/(A n^(1/2)) + log(mA/C)/(mA), we obtain the desired inequality.
Acknowledgements

I would like to thank Zaïd Harchaoui and Jean-Yves Audibert for fruitful discussions related to this work. This work was supported by a French grant from the Agence Nationale de la Recherche (MGA Project).

References

Asuncion, A., & Newman, D. (2007). UCI machine learning repository.

Bach, F. R. (2008). Consistency of the group Lasso and multiple kernel learning. J. Mach. Learn. Res., to appear.

Bentkus, V. (2003). On the dependence of the Berry-Esseen bound on dimension. Journal of Statistical Planning and Inference, 113, 385-402.

Boucheron, S., Lugosi, G., & Bousquet, O. (2004). Concentration inequalities. Advanced Lectures on Machine Learning. Springer.

Breiman, L. (1996a). Bagging predictors. Machine Learning, 24, 123-140.

Breiman, L. (1996b). Heuristics of instability and stabilization in model selection. Ann. Stat., 24, 2350-2383.

Breiman, L. (1998). Arcing classifiers. Ann. Stat., 26, 801-849.

Bühlmann, P. (2006). Boosting for high-dimensional linear models. Ann. Stat., 34, 559-583.

Efron, B., Hastie, T., Johnstone, I., & Tibshirani, R. (2004). Least angle regression. Ann. Stat., 32, 407.

Efron, B., & Tibshirani, R. J. (1998). An Introduction to the Bootstrap. Chapman & Hall.

Fu, W., & Knight, K. (2000). Asymptotics for Lasso-type estimators. Ann. Stat., 28, 1356-1378.

Lounici, K. (2008). Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators. Electronic Journal of Statistics, 2.

Meinshausen, N., & Yu, B. (2008). Lasso-type recovery of sparse representations for high-dimensional data. Ann. Stat., to appear.

Tibshirani, R. (1994). Regression shrinkage and selection via the Lasso. J. Roy. Stat. Soc. B, 58, 267-288.

Wainwright, M. J. (2006). Sharp thresholds for noisy and high-dimensional recovery of sparsity using ℓ1-constrained quadratic programming (Technical report 709). Department of Statistics, UC Berkeley.

Yuan, M., & Lin, Y. (2007). On the non-negative garrotte estimator. J. Roy. Stat. Soc. B, 69, 143-161.

Zhao, P., & Yu, B. (2006). On model selection consistency of Lasso. J. Mach. Learn. Res., 7, 2541-2563.

Zou, H. (2006). The adaptive Lasso and its oracle properties. J. Am. Stat. Assoc., 101, 1418-1429.