Proximal Methods for Sparse Hierarchical Dictionary Learning

Rodolphe Jenatton¹ (rodolphe.jenatton@inria.fr), Julien Mairal¹ (julien.mairal@inria.fr), Guillaume Obozinski (guillaume.obozinski@inria.fr), Francis Bach (francis.bach@inria.fr)
INRIA - WILLOW Project, Laboratoire d'Informatique de l'Ecole Normale Supérieure (INRIA/ENS/CNRS UMR 8548), 23, avenue d'Italie, 75214 Paris, France
¹ Contributed equally.

Abstract

We propose to combine two approaches for modeling data admitting sparse representations: on the one hand, dictionary learning has proven effective for various signal processing tasks. On the other hand, recent work on structured sparsity provides a natural framework for modeling dependencies between dictionary elements. We thus consider a tree-structured sparse regularization to learn dictionaries embedded in a hierarchy. The involved proximal operator is computable exactly via a primal-dual method, allowing the use of accelerated gradient techniques. Experiments show that for natural image patches, learned dictionary elements organize themselves in such a hierarchical structure, leading to an improved performance for restoration tasks. When applied to text documents, our method learns hierarchies of topics, thus providing a competitive alternative to probabilistic topic models.
1. Introduction

Learned sparse representations, initially introduced by Olshausen & Field (1997), have been the focus of much research in machine learning and signal processing, leading notably to state-of-the-art algorithms for several problems in image processing (Elad & Aharon, 2006). Modeling signals as a linear combination of a few "basis" vectors offers more flexibility than decompositions based on principal component analysis and its variants. Indeed, sparsity allows for overcomplete dictionaries, whose number of basis vectors is greater than the original signal dimension.

While much attention has been given to efficiently solving the corresponding optimization problem (Lee et al., 2007; Mairal et al., 2010), there have been, to the best of our knowledge, few attempts in the literature to make the model richer by adding structure between dictionary elements (Bengio et al., 2009; Kavukcuoglu et al., 2009). We propose to use recent work on structured sparsity (Zhao et al., 2009; Jenatton et al., 2009; Kim & Xing, 2009) to embed the dictionary elements in a hierarchy. Hierarchies of latent variables, typically used in neural networks and deep learning architectures (see Bengio, 2009, and references therein), have emerged as a natural structure in several applications, notably to model text documents. Indeed, in the context of topic models (Blei et al., 2003), hierarchical models using Bayesian non-parametric methods have been proposed by Blei et al. (2010). Quite recently, hierarchies have also been considered in the context of kernel methods (Bach, 2009). Structured sparsity has been used to regularize dictionary elements by Jenatton et al. (2010), but to the best of our knowledge, it has never been used to model dependencies between them.

This paper makes three contributions:
· We propose to use a structured sparse regularization to learn a dictionary embedded in a tree.
· We show that the proximal operator for a tree-structured sparse regularization can be computed exactly in a finite number of operations using a primal-dual approach, with a complexity linear, or close to linear, in the number of variables. Accelerated gradient methods (e.g., Nesterov, 2007) can then be applied to solve tree-structured sparse decomposition problems, which may be useful in other uses of tree-structured norms (Kim & Xing, 2009; Bach, 2009).
· Our method establishes a bridge between dictionary learning for sparse coding and hierarchical topic models (Blei et al., 2010), which builds upon the interpretation of topic models as multinomial PCA (Buntine, 2002), and can learn similar hierarchies of topics. See Section 5 for a discussion.

2. Problem Statement

2.1. Dictionary Learning

Let us consider a set $X = [x^1, \ldots, x^n] \in \mathbb{R}^{m \times n}$ of $n$ signals of dimension $m$. Dictionary learning is a matrix factorization problem that aims to represent these signals as linear combinations of dictionary elements, denoted here by the columns of a matrix $D = [d^1, \ldots, d^p] \in \mathbb{R}^{m \times p}$. More precisely, the dictionary $D$ is learned along with a matrix of decomposition coefficients $A = [\alpha^1, \ldots, \alpha^n] \in \mathbb{R}^{p \times n}$, so that $x^i \approx D\alpha^i$ for every signal $x^i$, as measured by any convex loss, e.g., the square loss in this paper. While learning simultaneously $D$ and $A$, one may want to encode specific prior knowledge about the task at hand, such as, for example, the positivity of the decomposition (Lee & Seung, 1999), or the sparsity of $A$ (Olshausen & Field, 1997; Lee et al., 2007; Mairal et al., 2010). This leads to penalizing or constraining $(D, A)$ and results in the following formulation:

$$\min_{D \in \mathcal{D},\, A \in \mathcal{A}} \; \frac{1}{n} \sum_{i=1}^{n} \Big[ \frac{1}{2} \|x^i - D\alpha^i\|_2^2 + \lambda \,\Omega(\alpha^i) \Big], \qquad (1)$$

where $\mathcal{A}$ and $\mathcal{D}$ denote two convex sets and $\Omega$ is a regularization term, usually a norm, whose effect is controlled by the regularization parameter $\lambda > 0$. Note that $\mathcal{D}$ is assumed to be bounded to avoid degenerate solutions of Eq. (1). For instance, the standard sparse coding formulation takes $\Omega$ to be the $\ell_1$ norm and $\mathcal{D}$ to be the set of matrices in $\mathbb{R}^{m \times p}$ whose columns are in the unit ball of the $\ell_2$ norm, with $\mathcal{A} = \mathbb{R}^{p \times n}$ (Lee et al., 2007; Mairal et al., 2010).

Figure 1. Left: example of a tree-structured set of groups $\mathcal{G}$ (dashed contours in red), corresponding to a tree $\mathcal{T} = \{1, \ldots, 6\}$ (in black). Right: example of a sparsity pattern: the groups $\{2,4\}$, $\{4\}$ and $\{6\}$ are set to zero, so that the corresponding nodes (in gray), which form subtrees of $\mathcal{T}$, are removed. The remaining nonzero variables $\{1, 3, 5\}$ are such that, if a node is selected, so are all of its ancestors.

However, this classical setting treats each dictionary element independently from the others, and does not exploit possible relationships between them. We address this potential limitation of the $\ell_1$ norm by embedding the dictionary in a tree structure, through a hierarchical norm introduced by Zhao et al. (2009) and Bach (2009), which we now present.
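To make the formulation concrete, the sketch below (assuming NumPy; the function name and the generic `omega` argument are illustrative conventions of this example, not part of the paper) evaluates the objective of Eq. (1) for a given pair $(D, A)$. With `omega` set to the $\ell_1$ norm it corresponds to the standard sparse coding setting mentioned above.

```python
import numpy as np

def dictionary_learning_objective(X, D, A, omega, lam):
    """Evaluate the objective of Eq. (1) for given D and A.

    X: (m, n) signals, D: (m, p) dictionary, A: (p, n) codes,
    omega: callable penalty on a single code vector, lam: lambda > 0.
    """
    n = X.shape[1]
    residual = X - D @ A                              # reconstruction errors for all signals
    data_fit = 0.5 * np.sum(residual ** 2, axis=0)    # 1/2 ||x^i - D alpha^i||_2^2 per signal
    penalty = lam * np.array([omega(A[:, i]) for i in range(n)])
    return float(np.mean(data_fit + penalty))

# Example with the l1 penalty (classical sparse coding):
# obj = dictionary_learning_objective(X, D, A, omega=lambda a: np.abs(a).sum(), lam=0.1)
```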
2.2. Hierarchical Sparsity-Inducing Norms

We organize the dictionary elements in a rooted tree $\mathcal{T}$ composed of $p$ nodes, one for each dictionary element $d^j$, $j \in \{1, \ldots, p\}$. We identify these indices $j$ in $\{1, \ldots, p\}$ with the nodes of $\mathcal{T}$. We want to exploit the structure of $\mathcal{T}$ in the following sense: the decomposition of any vector $x$ can involve a dictionary element $d^j$ only if the ancestors of $d^j$ in $\mathcal{T}$ are themselves part of the decomposition. Equivalently, when a dictionary element $d^j$ is not involved in the decomposition of a vector $x$, its descendants in $\mathcal{T}$ should not be part of the decomposition either. While these two views are equivalent, the latter leads to an intuitive penalization term. To obtain models with the desired property, one considers, for all $j$ in $\mathcal{T}$, the group $g_j \subseteq \{1, \ldots, p\}$ of dictionary elements that contains exactly $j$ and all its descendants, and penalizes the number of such groups that are involved in the decomposition of $x$ (a group being "involved in the decomposition" meaning here that at least one of its dictionary elements is part of the decomposition). We call $\mathcal{G}$ this set of groups (Figure 1).

While this penalization is non-convex, a convex proxy has been introduced by Zhao et al. (2009) and was further considered by Bach (2009) and Kim & Xing (2009) in the context of regression. For any vector $\alpha \in \mathbb{R}^p$, let us define

$$\Omega(\alpha) = \sum_{g \in \mathcal{G}} w_g \, \|\alpha_{|g}\|,$$

where $\alpha_{|g}$ is the vector of size $p$ whose coordinates are equal to those of $\alpha$ for indices in the set $g$, and $0$ otherwise.² Here, $\|\cdot\|$ stands for either the $\ell_\infty$ or the $\ell_2$ norm, and $(w_g)_{g \in \mathcal{G}}$ denotes some positive weights.³ As analyzed by Zhao et al. (2009), when penalizing by $\Omega$, some of the vectors $\alpha_{|g}$ are set to zero for some $g \in \mathcal{G}$. Therefore, the components of $\alpha$ corresponding to some entire subtrees of $\mathcal{T}$ are set to zero, which is exactly the desired effect (Figure 1).

Note that even though we have presented, for simplicity, this hierarchical norm in the context of a single tree with a single element at each node, it extends easily to forests of trees and/or to trees containing several dictionary elements at each node. More generally, this formulation can be extended with the notion of tree-structured groups, which we now present.

² Note the difference with the notation $\alpha_g$, often used in work on structured sparsity, where $\alpha_g$ is a vector of size $|g|$.
³ For a complete definition of $\Omega$ for any $\ell_q$ norm, a discussion of the choice of $q$, and a strategy for choosing the weights $w_g$, see Zhao et al. (2009) and Kim & Xing (2009).
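As an illustration, the following sketch (assuming NumPy; `descendant_groups`, `children` and `omega` are hypothetical names used only for this example) builds the set of groups $\mathcal{G}$ from a tree given by its children lists and evaluates $\Omega(\alpha)$ with the $\ell_2$ norm.

```python
import numpy as np

def descendant_groups(children, p):
    """Build the tree-structured set of groups G of Section 2.2: one group g_j per
    node j, containing j and all of its descendants. `children` maps a node index
    to the list of its children (a hypothetical 0-based encoding of the tree T),
    and p is the number of nodes."""
    groups = []
    for j in range(p):
        stack, g = [j], []
        while stack:
            node = stack.pop()
            g.append(node)
            stack.extend(children.get(node, []))
        groups.append(np.array(sorted(g)))
    return groups

def omega(alpha, groups, weights=None, norm=2):
    """Tree-structured penalty Omega(alpha) = sum_g w_g ||alpha_|g|| (l2 or l-infinity)."""
    weights = np.ones(len(groups)) if weights is None else weights
    return float(sum(w * np.linalg.norm(alpha[g], ord=norm)
                     for w, g in zip(weights, groups)))

# Example: the 6-node tree of Figure 1, re-indexed from 0, could be encoded as
# children = {0: [1, 2], 1: [3], 2: [4, 5]}, giving g_0 = {0,...,5}, g_1 = {1, 3},
# g_2 = {2, 4, 5}, g_3 = {3}, g_4 = {4}, g_5 = {5}.
```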
Definition 1 (Tree-structured set of groups) A set of groups $\mathcal{G} = \{g\}_{g \in \mathcal{G}}$ is said to be tree-structured in $\{1, \ldots, p\}$ if $\bigcup_{g \in \mathcal{G}} g = \{1, \ldots, p\}$ and if, for all $g, h \in \mathcal{G}$, $(g \cap h \neq \emptyset) \Rightarrow (g \subseteq h \text{ or } h \subseteq g)$. For such a set of groups, there exists a (non-unique) total order relation $\preceq$ such that $g \preceq h \Rightarrow \{g \subseteq h \text{ or } g \cap h = \emptyset\}$.

Sparse hierarchical norms having been introduced, we now address the optimization problems involving such norms.

3. Optimization

Optimization for dictionary learning has already been intensively studied, and a typical scheme alternating between the variables $D$ and $A = [\alpha^1, \ldots, \alpha^n]$, i.e., minimizing over one while keeping the other fixed, yields good results in general (Lee et al., 2007). The main difficulty of our problem lies in the optimization of the vectors $\alpha^i$, $i \in \{1, \ldots, n\}$, for $D$ fixed, since $n$ may be large and since it requires dealing with the nonsmooth regularization term $\Omega$. The optimization of the dictionary $D$ (for $A$ fixed) is in general easier, as discussed in Section 3.5. Within the context of regression, several optimization methods to cope with $\Omega$ have already been proposed. A boosting-like technique with a path-following strategy is used by Zhao et al. (2009). Kim & Xing (2009) use a reweighted least-squares scheme when $\|\cdot\|$ is the $\ell_2$ norm. The same approach is considered by Bach (2009), but built upon an active set strategy. In this paper, we propose to perform the updates of the vectors $\alpha^i$ with a proximal approach, which we now introduce.

3.1. Proximal Operator for the Norm $\Omega$

Proximal methods have drawn increasing attention in the machine learning community (e.g., Ji & Ye, 2009, and references therein), especially because of their convergence rates (optimal for the class of first-order techniques) and their ability to deal with large nonsmooth convex problems (e.g., Nesterov, 2007; Beck & Teboulle, 2009). In a nutshell, these methods can be seen as a natural extension of gradient-based techniques when the objective function to minimize has a nonsmooth part. In our context, when the dictionary $D$ is fixed and $\mathcal{A} = \mathbb{R}^{p \times n}$, we minimize for each signal $x$ the following convex nonsmooth objective function with respect to $\alpha \in \mathbb{R}^p$: $f(\alpha) + \lambda\Omega(\alpha)$, where $f(\alpha)$ stands for the data-fitting term $\frac{1}{2}\|x - D\alpha\|_2^2$. At each iteration of the proximal algorithm, $f$ is linearized around the current estimate $\hat{\alpha}$, and the current value of $\alpha$ is updated as the solution of the proximal problem:

$$\min_{\alpha \in \mathbb{R}^p} \; f(\hat{\alpha}) + (\alpha - \hat{\alpha})^\top \nabla f(\hat{\alpha}) + \lambda\Omega(\alpha) + \frac{L}{2}\|\alpha - \hat{\alpha}\|_2^2.$$

The quadratic term keeps the update in a neighborhood where $f$ is close to its linear approximation, and $L > 0$ is a parameter. This problem can be rewritten as

$$\min_{\alpha \in \mathbb{R}^p} \; \frac{1}{2}\Big\|\hat{\alpha} - \frac{1}{L}\nabla f(\hat{\alpha}) - \alpha\Big\|_2^2 + \frac{\lambda}{L}\,\Omega(\alpha).$$

Solving this problem efficiently and exactly is crucial to enjoy the fast convergence rates of proximal methods. In addition, when the nonsmooth term $\Omega$ is not present, the proximal problem exactly reduces to the standard gradient update rule. More generally, the proximal operator associated with our regularization term $\lambda\Omega$ is the function that maps a vector $u \in \mathbb{R}^p$ to the (unique) solution of

$$\min_{v \in \mathbb{R}^p} \; \frac{1}{2}\|u - v\|_2^2 + \lambda\Omega(v). \qquad (2)$$

In the simpler setting where $\mathcal{G}$ is the set of singletons, $\Omega$ is the $\ell_1$ norm, and the proximal operator is the (elementwise) soft-thresholding operator $u_j \mapsto \mathrm{sign}(u_j)\max(|u_j| - \lambda, 0)$, $j \in \{1, \ldots, p\}$. Similarly, when the groups in $\mathcal{G}$ form a partition of the set of variables, we have a group-Lasso-like penalty, and the proximal operator can be computed in closed form (see Bengio et al., 2009, and references therein). This is a priori no longer possible as soon as some groups in $\mathcal{G}$ overlap, which is always the case in our hierarchical setting with tree-structured groups.
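For reference, the two closed-form cases just mentioned can be sketched as follows (a minimal NumPy sketch, not the authors' code); the overlapping tree-structured case requires the primal-dual machinery of the next sections.

```python
import numpy as np

def prox_l1(u, lam):
    """Proximal operator of lam * ||.||_1: elementwise soft-thresholding."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def prox_group_lasso(u, groups, lam, weights=None):
    """Proximal operator of lam * sum_g w_g ||v_|g||_2 when the groups form a
    partition of {1,...,p}: blockwise shrinkage of each group's norm."""
    weights = np.ones(len(groups)) if weights is None else weights
    v = np.zeros_like(u)
    for w, g in zip(weights, groups):
        norm = np.linalg.norm(u[g])
        if norm > lam * w:
            v[g] = (1.0 - lam * w / norm) * u[g]
    return v
```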
3.2. Primal-Dual Interpretation

We now show that Eq. (2) can be solved with a primal-dual approach. The procedure solves a dual formulation of Eq. (2) involving the dual norm⁴ of $\|\cdot\|$, denoted by $\|\cdot\|_*$ and defined by $\|\kappa\|_* = \max_{\|z\| \leq 1} z^\top \kappa$ for any vector $\kappa$ in $\mathbb{R}^p$. The formulation is described in the following lemma, which relies on conic duality (Boyd & Vandenberghe, 2004):

⁴ It is easy to show that the dual norm of the $\ell_2$ norm is the $\ell_2$ norm itself. The dual norm of the $\ell_\infty$ norm is the $\ell_1$ norm.

Lemma 1 (Dual of the proximal problem) Let $u \in \mathbb{R}^p$ and let us consider the problem

$$\max_{\xi \in \mathbb{R}^{p \times |\mathcal{G}|}} \; -\frac{1}{2}\Big[ \Big\|u - \sum_{g \in \mathcal{G}} \xi^g\Big\|_2^2 - \|u\|_2^2 \Big] \quad \text{s.t.} \quad \forall g \in \mathcal{G},\;\; \|\xi^g\|_* \leq \lambda w_g \;\text{ and }\; \xi^g_j = 0 \text{ if } j \notin g, \qquad (3)$$

where $\xi = (\xi^g)_{g \in \mathcal{G}}$ and $\xi^g_j$ denotes the $j$-th coordinate of the vector $\xi^g$ in $\mathbb{R}^p$. Then, problems (2) and (3) are dual to each other and strong duality holds. In addition, the pair of primal-dual variables $\{v, \xi\}$ is optimal if and only if $\xi$ is a feasible point of the optimization problem (3), and

$$v = u - \sum_{g \in \mathcal{G}} \xi^g \quad \text{and} \quad \forall g \in \mathcal{G},\;\; \text{either } v_{|g} = 0, \text{ or } \xi^{g\top} v_{|g} = \|v_{|g}\|\,\|\xi^g\|_* \text{ with } \|\xi^g\|_* = \lambda w_g.$$

For space limitation reasons, we have omitted all the detailed proofs from this section; they will be available in a longer version of this paper. Note that we focus here on specific tree-structured groups, but the previous lemma is valid regardless of the nature of $\mathcal{G}$.

The structure of the dual problem of Eq. (3), i.e., the separability of the (convex) constraints for each vector $\xi^g$, $g \in \mathcal{G}$, makes it possible to use block coordinate ascent (Bertsekas, 1999). Such a procedure is presented in Algorithm 1. It optimizes Eq. (3) sequentially with respect to the variable $\xi^g$, while keeping the other variables $\xi^h$, $h \neq g$, fixed. It is easy to see from Eq. (3) that such an update for a group $g$ in $\mathcal{G}$ amounts to the orthogonal projection of the vector $u_{|g} - \sum_{h \neq g} \xi^h_{|g}$ onto the ball of radius $\lambda w_g$ of the dual norm $\|\cdot\|_*$. We denote this projection by $\Pi^g_{\lambda w_g}$.

Algorithm 1 Block coordinate ascent in the dual
  Inputs: $u \in \mathbb{R}^p$ and set of groups $\mathcal{G}$.
  Outputs: $(v, \xi)$ (primal-dual solutions).
  Initialization: $v = u$, $\xi = 0$.
  while maximum number of iterations not reached do
    for $g \in \mathcal{G}$ do
      $v \leftarrow u - \sum_{h \neq g} \xi^h$
      $\xi^g \leftarrow \Pi^g_{\lambda w_g}(v_{|g})$
    end for
  end while
  $v \leftarrow u - \sum_{g \in \mathcal{G}} \xi^g$
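The following is a minimal NumPy sketch of Algorithm 1 for the $\ell_2$ case, where the dual-norm ball is simply the $\ell_2$ ball of radius $\lambda w_g$; the function name and the group encoding (a list of index arrays) are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

def prox_tree(u, groups, lam, weights=None, n_passes=1):
    """Sketch of Algorithm 1 (block coordinate ascent in the dual) for the l2 case,
    where each dual update projects onto the l2 ball of radius lam * w_g.
    `groups` is a list of index arrays; returns the primal solution v of Eq. (2)."""
    p = u.shape[0]
    weights = np.ones(len(groups)) if weights is None else weights
    xi = np.zeros((len(groups), p))              # dual variables xi^g, supported on g
    for _ in range(n_passes):
        for k, g in enumerate(groups):
            # residual u - sum_{h != g} xi^h, restricted to the support of g
            r = u[g] - (xi[:, g].sum(axis=0) - xi[k, g])
            radius = lam * weights[k]
            norm = np.linalg.norm(r)
            # orthogonal projection onto the l2 ball of radius lam * w_g
            xi[k, g] = r if norm <= radius else (radius / norm) * r
    return u - xi.sum(axis=0)
```

With the groups ordered from the leaves to the root and `n_passes=1`, Proposition 1 below states that this single pass already returns the exact solution.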
3.3. Convergence in One Pass

In general, Algorithm 1 is not guaranteed to solve Eq. (2) exactly in a finite number of iterations. However, when $\|\cdot\|$ is the $\ell_2$ or $\ell_\infty$ norm, and provided that the groups in $\mathcal{G}$ are appropriately ordered, we now prove that only one pass of Algorithm 1, i.e., only one iteration over all groups, is sufficient to obtain the exact solution of Eq. (2). This result constitutes the main technical contribution of the paper. Before stating this result, we need a key lemma showing that, given two nested groups $g, h$ such that $g \subseteq h \subseteq \{1, \ldots, p\}$, if $\xi^g$ is updated before $\xi^h$ in Algorithm 1, then the optimality condition for $\xi^g$ is not perturbed by the update of $\xi^h$.

Lemma 2 (Projections with nested groups) Let $\|\cdot\|$ denote either the $\ell_2$ or $\ell_\infty$ norm, and let $g$ and $h$ be two nested groups, that is, $g \subseteq h \subseteq \{1, \ldots, p\}$. Let $v$ be a vector in $\mathbb{R}^p$, and let us consider the successive projections $\kappa^g = \Pi^g_{t_g}(v_{|g})$ and $\kappa^h = \Pi^h_{t_h}(v_{|h} - \kappa^g)$, with $t_g, t_h > 0$. Then, we also have $\kappa^g = \Pi^g_{t_g}(v_{|g} - \kappa^h_{|g})$.

The previous lemma establishes the convergence in one pass of Algorithm 1 in the case where $\mathcal{G}$ contains only two nested groups $g \subseteq h$, provided that $\xi^g$ is computed before $\xi^h$. In the following proposition, this lemma is extended to general tree-structured sets of groups $\mathcal{G}$:

Proposition 1 (Convergence in one pass) Suppose that the groups in $\mathcal{G}$ are ordered according to $\preceq$ and that the norm $\|\cdot\|$ is either the $\ell_2$ or $\ell_\infty$ norm.⁵ Then, after initializing $\xi$ to $0$, one pass of Algorithm 1 with the order $\preceq$ gives the solution of Eq. (2).

Proof sketch. The proof relies on Lemma 2. We proceed by induction, showing that the optimality conditions of Eq. (3) remain satisfied after each update in Algorithm 1. The induction is initialized at the leaves. Once the induction reaches the last group, i.e., after one complete pass over $\mathcal{G}$, the dual variable $\xi$ satisfies the optimality conditions of Eq. (3), which implies that $\{v, \xi\}$ is optimal. Since strong duality holds, $v$ is the solution of Eq. (2).

⁵ Interestingly, we have observed that this is not true in general when $\|\cdot\|$ is an $\ell_q$ norm with $q \neq 2$ and $q \neq \infty$.
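As a quick numerical illustration of Proposition 1 (reusing the hypothetical `prox_tree` sketch above, $\ell_2$ case), a single pass over groups ordered from the leaves to the root should coincide, up to numerical precision, with the fully iterated solution:

```python
import numpy as np

# Tree of Figure 1 re-indexed from 0: root 0 with children 1 and 2; 3 under 1; 4, 5 under 2.
# Groups listed from the leaves to the root, a valid order for Proposition 1.
groups = [np.array(g) for g in ([3], [4], [5], [1, 3], [2, 4, 5], [0, 1, 2, 3, 4, 5])]
rng = np.random.default_rng(0)
u, lam = rng.normal(size=6), 0.3

v_one_pass = prox_tree(u, groups, lam, n_passes=1)
v_many = prox_tree(u, groups, lam, n_passes=100)
print(np.max(np.abs(v_one_pass - v_many)))   # expected to be numerically negligible
```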
3.4. Efficient Computation of the Proximal Operator

Since one pass of Algorithm 1 involves $|\mathcal{G}| = p$ projections onto the ball of the dual norm (respectively the $\ell_2$ and the $\ell_1$ norms) of vectors in $\mathbb{R}^p$, a naive implementation leads to a complexity in $O(p^2)$, since each of these projections can be obtained in $O(p)$ operations (see Mairal et al., 2010, and references therein). However, the primal solution $v$, which is the quantity of interest, can be obtained with a better complexity, as stated below:

Proposition 2 (Complexity of the procedure) (i) For the $\ell_2$ norm, the primal solution $v$ of Algorithm 1 can be obtained in $O(p)$ operations. (ii) For the $\ell_\infty$ norm, $v$ can be obtained in $O(pd)$ operations, where $d$ is the depth of the tree.

The linear complexity in the $\ell_2$ case results from a recursive implementation. It exploits the fact that each projection amounts to a scaling, whose factor can be found without explicitly performing the full projection at each iteration. As for the $\ell_\infty$ norm, since all the groups at a given depth $k \in \{1, \ldots, d\}$ do not overlap, the cost of performing all the projections at depth $k$ is $O(p)$, which leads to a total complexity of $O(dp)$. Note that $d$ could depend on $p$ as well: in an unbalanced tree, the worst case is $d = O(p)$, whereas in a balanced tree, one has $d = O(\log(p))$. In practice, the structures we have considered are relatively flat, with a depth not exceeding $d = 5$. Details will be provided in a longer version of this paper.

3.5. Learning the Dictionary

We alternate between the updates of $D$ and $A$, minimizing over one while keeping the other variable fixed.

Updating $D$. We have chosen to follow the matrix-inversion-free procedure of Mairal et al. (2010) for updating the dictionary. This method consists in a block-coordinate scheme over the columns of $D$. Specifically, we assume that the domain set $\mathcal{D}$ has the form

$$\mathcal{D}_\mu = \big\{ D \in \mathbb{R}^{m \times p} : \; \mu\|d^j\|_1 + (1-\mu)\|d^j\|_2^2 \leq 1 \text{ for all } j \big\}, \qquad (4)$$

or $\mathcal{D}_\mu^+ = \mathcal{D}_\mu \cap \mathbb{R}^{m \times p}_+$, with $\mu \in [0, 1]$. The choice of these particular domain sets is motivated by the experiments of Section 4. For natural image patches, the dictionary elements are usually constrained to be in the unit $\ell_2$ norm ball (i.e., $\mathcal{D} = \mathcal{D}_0$), while for topic modeling, the dictionary elements are distributions of words and therefore belong to the simplex (i.e., $\mathcal{D} = \mathcal{D}_1^+$). The update of each dictionary element amounts to performing a Euclidean projection, which can be computed efficiently (Mairal et al., 2010). Concerning the stopping criterion, we follow the strategy from the same authors and go over the columns of $D$ only a few times, typically 5 in our experiments.

Updating the vectors $\alpha^i$. The procedure for updating the columns of $A$ is built upon the results derived in Section 3.2. We have shown that the proximal operator from Eq. (2) can be computed exactly and efficiently, which makes it possible to use fast proximal techniques suited to nonsmooth convex optimization. Specifically, we have tried the accelerated schemes from both Nesterov (2007) and Beck & Teboulle (2009), and finally opted for the latter since, for a comparable level of precision, fewer calls to the proximal operator are required. The procedure from Beck & Teboulle (2009) basically follows Section 3.1, except that the proximal operator is not applied directly to the current estimate, but to an auxiliary sequence of points that linearly combines past estimates. This algorithm has an optimal convergence rate in the class of first-order techniques, and also allows warm restarts, which is crucial in our alternating scheme. Furthermore, positivity constraints can be added on the domain of $A$ by noticing that, for our norm $\Omega$ and any $u \in \mathbb{R}^p$, adding these constraints when computing the proximal operator is equivalent to solving

$$\min_{v \in \mathbb{R}^p} \; \frac{1}{2}\|[u]_+ - v\|_2^2 + \lambda\Omega(v), \quad \text{with } ([u]_+)_j = \max\{u_j, 0\}.$$

We will indeed use positive decompositions to model text corpora in Section 4. Finally, we monitor the convergence of the algorithm by checking the relative decrease in the cost function. We also investigated the derivation of a duality gap, but this requires computing the dual norm $\Omega^*$, for which no closed form is available; computing approximations of $\Omega^*$ based on bounds from Jenatton et al. (2009) turned out to be too slow for our experiments.
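The update of a single code vector can be sketched as a FISTA-style loop (Beck & Teboulle, 2009) built on the hypothetical `prox_tree` sketch above; taking the step size $1/L$ with $L$ the largest eigenvalue of $D^\top D$ is an assumption of this sketch, and this is not the authors' implementation. For nonnegative decompositions, the vector fed to the proximal operator can first be clipped at zero, following the $[u]_+$ remark above.

```python
import numpy as np

def update_code_fista(x, D, groups, lam, n_iter=100):
    """FISTA-style update of one code vector alpha for 0.5*||x - D a||_2^2 + lam*Omega(a),
    using the prox_tree sketch above. Step size 1/L with L = lambda_max(D^T D)."""
    p = D.shape[1]
    L = np.linalg.eigvalsh(D.T @ D)[-1] + 1e-12   # Lipschitz constant of the gradient
    alpha = np.zeros(p)
    y, t = alpha.copy(), 1.0                      # auxiliary sequence and momentum term
    for _ in range(n_iter):
        grad = D.T @ (D @ y - x)                  # gradient of the data-fitting term at y
        alpha_new = prox_tree(y - grad / L, groups, lam / L, n_passes=1)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = alpha_new + ((t - 1.0) / t_new) * (alpha_new - alpha)
        alpha, t = alpha_new, t_new
    return alpha
```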
4. Experiments

4.1. Natural Image Patches

This experiment studies whether a hierarchical structure can help dictionaries denoise natural image patches, and in which noise regime the potential gain is significant. We aim at reconstructing corrupted patches from a test set, after having learned dictionaries on a training set of non-corrupted patches. Though not typical in machine learning, this setting is reasonable in the context of images, where lots of non-corrupted patches are easily available.⁶

⁶ Note that we study the ability of the model to reconstruct independent patches; additional work is required to apply our framework to a full image processing task, where patches usually overlap (Elad & Aharon, 2006).

We have extracted 100,000 patches of size m = 8 × 8 pixels from the Berkeley segmentation database of natural images⁷, which contains a high variability of scenes. We have then split this dataset into a training set $X_{tr}$, a validation set $X_{val}$, and a test set $X_{te}$, of size 50,000, 25,000, and 25,000 patches respectively. All the patches are centered and normalized to have unit $\ell_2$ norm.

⁷ www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/

For the first experiment, the dictionary $D$ is learned on $X_{tr}$ using the formulation of Eq. (1), with $\mu = 0$ for $\mathcal{D}_\mu$ defined in Eq. (4). The validation and test sets are corrupted by removing a certain percentage of pixels, the task being to reconstruct the missing pixels from the known ones. We thus introduce, for each element $x$ of the validation/test set, a vector $\tilde{x}$ equal to $x$ for the known pixel values and to 0 otherwise. In the same way, we define $\tilde{D}$ as the matrix equal to $D$, except for the rows corresponding to missing pixel values, which are set to 0. By decomposing $\tilde{x}$ on $\tilde{D}$, we obtain a sparse code $\alpha$, and the estimate of the reconstructed patch is defined as $D\alpha$. Note that this procedure assumes that we know, for every element $x$, which pixels are missing and which are not.

The parameters of the experiment are the regularization parameter $\lambda_{tr}$ used during the training step, the regularization parameter $\lambda_{te}$ used during the validation/test step, and the structure of the tree. For every reported result, these parameters have been selected as the ones offering the best performance on the validation set, before reporting any result from the test set. The values of $\lambda_{tr}, \lambda_{te}$ were tested on a logarithmic scale $\{2^{-10}, 2^{-9}, \ldots, 2^{2}\}$, and then further refined on a finer logarithmic scale with factor $2^{-1/4}$. For simplicity, we have chosen arbitrarily to use the $\ell_\infty$ norm in the structured norm $\Omega$, with all the weights equal to one. We have tested 21 balanced tree structures of depth 3 and 4, with different branching factors $p_1, p_2, \ldots, p_{d-1}$, where $d$ is the depth of the tree and $p_k$, $k \in \{1, \ldots, d-1\}$, is the number of children of the nodes at depth $k$. The branching factors tested for the trees of depth 3 were $p_1 \in \{5, 10, 20, 40, 60, 80, 100\}$ and $p_2 \in \{2, 3\}$, and for trees of depth 4, $p_1 \in \{5, 10, 20, 40\}$, $p_2 \in \{2, 3\}$ and $p_3 = 2$, giving 21 possible structures associated with dictionaries with at most 401 elements. For each tree structure, we evaluated the performance obtained with the tree-structured dictionary along with that of a non-structured dictionary containing the same number of elements. These experiments were carried out four times, each time with a different initialization and a different noise realization.

Table 1. Quantitative results of the reconstruction task on natural image patches: mean square error multiplied by 100 for classical sparse coding ("flat") and tree-structured sparse coding ("tree"), as a function of the percentage of missing pixels.

  Missing pixels | 50%        | 60%        | 70%        | 80%        | 90%
  flat           | 19.3 ± 0.1 | 26.8 ± 0.1 | 36.7 ± 0.1 | 50.6 ± 0.0 | 72.1 ± 0.0
  tree           | 18.6 ± 0.1 | 25.7 ± 0.1 | 35.0 ± 0.1 | 48.0 ± 0.0 | 65.9 ± 0.3

Quantitative results are reported in Table 1. For every fraction of missing pixels, the tree-structured dictionary outperforms the unstructured one, and the most significant improvement is obtained in the noisiest setting. Note that having more dictionary elements is worthwhile when using the tree structure. To study the influence of the chosen structure, we report in Figure 2 the results obtained by the 14 tested structures of depth 3, along with those obtained with unstructured dictionaries containing the same number of elements, when 90% of the pixels are missing. For every number of dictionary elements, the tree-structured dictionary significantly outperforms the unstructured ones.

Figure 2. Mean square error multiplied by 100 obtained with the 14 structures (with error bars), sorted by number of dictionary elements. Red plain bars represent the tree-structured dictionaries; white bars correspond to flat dictionaries containing the same number of elements as the tree-structured ones. For readability, the y-axis starts at 50.

An example of a learned tree-structured dictionary is presented in Figure 3. Dictionary elements naturally organize in groups of patches, often with low frequencies near the root of the tree and high frequencies near the leaves, and they tend to be highly correlated with their parents.

Figure 3. Learned dictionary with a tree structure of depth 4. The root of the tree is in the middle of the figure. The branching factors are $p_1 = 10$, $p_2 = 2$, $p_3 = 2$. The dictionary is learned on 50,000 patches of size 16 × 16 pixels.
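The masking-and-reconstruction protocol described in this section can be sketched as follows (reusing the hypothetical `update_code_fista` sketch above; the boolean `mask` encoding of observed pixels is an assumption of this example, not a detail given in the paper):

```python
import numpy as np

def reconstruct_patch(x, mask, D, groups, lam):
    """Reconstruct a patch with missing pixels, following Section 4.1:
    decompose x_tilde (unknown pixels zeroed) on D_tilde (corresponding rows zeroed),
    then rebuild the full patch as D @ alpha. `mask` is True for observed pixels."""
    x_tilde = np.where(mask, x, 0.0)
    D_tilde = D * mask[:, None]                 # zero out the rows of missing pixels
    alpha = update_code_fista(x_tilde, D_tilde, groups, lam)
    return D @ alpha
```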
4.2. Text Documents

This second set of experiments shows that our approach can also be applied to model text corpora. The goal of probabilistic topic models is to find a low-dimensional representation of a collection of documents, where the representation should provide a semantic description of the collection. Within a parametric Bayesian framework, latent Dirichlet allocation (LDA) (Blei et al., 2003) models documents as a mixture of a predefined number of latent topics, which are distributions over a fixed vocabulary. When one marginalizes over the topic random variable, one obtains multinomial PCA (Buntine, 2002). The number of topics is usually small compared to the size of the vocabulary (e.g., 100 against 10,000), so that the topic proportions of each document give a compact representation of the corpus. For instance, these new features can be used to feed a classifier in a subsequent classification task. We similarly use our dictionary learning approach to find low-dimensional representations of text corpora.

Suppose that the signals $X = [x^1, \ldots, x^n] \in \mathbb{R}^{m \times n}$ are $n$ documents over a vocabulary of $m$ words, the $k$-th component of $x^i$ standing for the frequency of the $k$-th word in document $i$. If we further assume that the entries of $D$ and $A$ are nonnegative, and that the dictionary elements $d^j$ have unit $\ell_1$ norm, the decomposition $DA$ can be seen as a mixture of $p$ topics. The regularization $\Omega$ organizes these topics in a tree, so that, if a document involves a certain topic, then all its ancestors in the tree are also present in the topic decomposition. Since the hierarchy is shared by all documents, the topics located at the top of the tree will be part of every decomposition and should therefore correspond to topics common to all documents. Conversely, the deeper the topics in the tree, the more specific they should be. It is worth mentioning the extension of LDA that considers hierarchies of topics from a non-parametric Bayesian viewpoint (Blei et al., 2010); we plan to compare with this model in future work.

Visualization of NIPS proceedings. We first qualitatively illustrate our dictionary learning approach on the NIPS proceedings papers from 1988 through 1999.⁸ After removing words appearing fewer than 10 times, the dataset is composed of 1714 articles, with a vocabulary of 8274 words. As explained above, we consider $\mathcal{D}_1^+$ and take $\mathcal{A}$ to be $\mathbb{R}^{p \times n}_+$. Figure 4 displays an example of a dictionary with 13 topics, learned using the $\ell_\infty$ norm in $\Omega$ and selecting $\lambda = 2^{-15}$ manually. Similarly to Blei et al. (2010), we capture the stopwords at the root of the tree, and the different subdomains of the conference, such as neuroscience, optimization or learning theory.

⁸ http://psiexp.ss.uci.edu/research/programs data/toolbox.htm

Figure 4. Example of a topic hierarchy estimated from 1714 NIPS proceedings papers (from 1988 through 1999). Each node corresponds to a topic whose 5 most important words are displayed. Single characters such as n, t, r are part of the vocabulary and often appear in NIPS papers, and their place in the hierarchy is semantically relevant to the children topics.
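Reading off a topic hierarchy like Figure 4 from a learned dictionary amounts to listing, for each column of $D$, its largest entries. A minimal sketch (the `vocabulary` lookup is a hypothetical input of this example):

```python
import numpy as np

def top_words_per_topic(D, vocabulary, k=5):
    """List the k most important words of each topic, as displayed in Figure 4.
    D is the (m, p) dictionary whose columns are word distributions (simplex);
    `vocabulary` maps a word index to the corresponding word string."""
    tops = []
    for j in range(D.shape[1]):
        idx = np.argsort(D[:, j])[::-1][:k]       # indices of the k largest weights
        tops.append([vocabulary[i] for i in idx])
    return tops
```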
Posting classification. We now consider a binary classification task on postings from the 20 Newsgroups data set.⁹ We classify the postings from the two newsgroups alt.atheism and talk.religion.misc, following the setting of Zhu et al. (2009). After removing words appearing fewer than 10 times and standard stopwords, these postings form a data set of 1425 documents over a vocabulary of 13312 words. We compare different dimensionality reduction techniques used to feed a linear SVM classifier, i.e., we consider (i) LDA (with the code from Blei et al., 2003), (ii) principal component analysis (PCA), (iii) nonnegative matrix factorization (NMF), (iv) standard sparse dictionary learning (denoted by SpDL) and (v) our sparse hierarchical approach (denoted by SpHDL). Both SpDL and SpHDL are optimized over $\mathcal{D}_1^+$ and $\mathcal{A} = \mathbb{R}^{p \times n}_+$, with the weights $w_g$ equal to 1. We proceed as follows: given a random split into a training/test set of 1000/425 postings, and given a number of topics $p$ (also the number of components for PCA, NMF, SpDL and SpHDL), we train an SVM classifier based on the low-dimensional representation of the postings. This is performed on the training set of 1000 postings, where the parameters $\lambda \in \{2^{-26}, \ldots, 2^{-5}\}$ and/or $C_{svm} \in \{4^{-3}, \ldots, 4^{1}\}$ are selected by 5-fold cross-validation. We report in Figure 5 the average classification scores on the test set of 425 postings, based on 10 random splits, for different numbers of topics. Unlike the experiment on image patches, we consider only one type of tree structure, namely complete binary trees with depths in $\{1, \ldots, 5\}$.

⁹ See http://people.csail.mit.edu/jrennie/20Newsgroups/

Figure 5. Binary classification of two newsgroups: classification accuracy for different dimensionality reduction techniques coupled with a linear SVM classifier. The bars and the error bars are respectively the mean and the standard deviation, based on 10 random splits of the data set. Best seen in color.

The results from Figure 5 show that SpDL and SpHDL perform better than the other dimensionality reduction techniques on this task. As a baseline, the SVM classifier applied directly to the raw data (the 13312 words) obtains a score of 90.9 ± 1.1, which is better than all the tested methods, but without dimensionality reduction (as already reported by Blei et al., 2003). Moreover, the error bars indicate that, though nonconvex, SpDL and SpHDL do not seem to suffer much from instability issues. Even though SpDL and SpHDL perform similarly, SpHDL has the advantage of yielding a topic mixture that is more interpretable in terms of hierarchy, which standard unstructured sparse coding cannot offer.
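The classification stage can be sketched with a standard linear SVM and cross-validation; the snippet below assumes scikit-learn, which the paper does not mention, and is only meant to illustrate the protocol (low-dimensional codes as features, with C selected by 5-fold cross-validation).

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import GridSearchCV

def classify_from_topics(A_train, y_train, A_test):
    """Feed low-dimensional document representations (columns of A) to a linear SVM,
    selecting C_svm in {4^-3, ..., 4^1} by 5-fold cross-validation."""
    grid = {"C": [4.0 ** k for k in range(-3, 2)]}
    clf = GridSearchCV(LinearSVC(), grid, cv=5)
    clf.fit(A_train.T, y_train)                    # documents as rows
    return clf.predict(A_test.T)
```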
5. Discussion

We have shown in this paper that tree-structured sparse decomposition problems can be solved at the same computational cost as classical decompositions based on the $\ell_1$ norm. We have used this approach to learn dictionaries embedded in trees, with applications to the representation of natural image patches and text documents. We believe that the connection established between sparse methods and probabilistic topic models should prove fruitful, as the two lines of work have focused on different aspects of the same unsupervised learning problem: our approach is based on convex optimization tools and provides experimentally more stable data representations. Moreover, it can easily be extended with the same tools to other types of structures corresponding to other norms (Jenatton et al., 2009; Jacob et al., 2009). However, it is not able to learn elegantly and automatically model parameters such as the dictionary size or the tree topology, which Bayesian methods can. Finally, another interesting common line of research to pursue is the supervised design of dictionaries, which has proved useful in both frameworks (Mairal et al., 2009; Blei & McAuliffe, 2008).

Acknowledgments

This paper was partially supported by grants from the Agence Nationale de la Recherche (MGA Project) and from the European Research Council (SIERRA Project).

References

Bach, F. High-dimensional non-linear variable selection through hierarchical kernel learning. Technical report, arXiv:0909.0844, 2009.
Beck, A. and Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imag. Sci., 2(1):183-202, 2009.
Bengio, S., Pereira, F., Singer, Y., and Strelow, D. Group sparse coding. In Adv. NIPS, 2009.
Bengio, Y. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1), 2009.
Bertsekas, D. P. Nonlinear programming. Athena Scientific, Belmont, 1999.
Blei, D., Ng, A., and Jordan, M. Latent Dirichlet allocation. J. Mach. Learn. Res., 3:993-1022, 2003.
Blei, D. and McAuliffe, J. Supervised topic models. In Adv. NIPS, 2008.
Blei, D., Griffiths, T., and Jordan, M. The nested Chinese restaurant process and Bayesian nonparametric inference of topic hierarchies. Journal of the ACM, 2010.
Boyd, S. P. and Vandenberghe, L. Convex optimization. Cambridge University Press, 2004.
Buntine, W. L. Variational extensions to EM and multinomial PCA. In Proc. ECML, 2002.
Elad, M. and Aharon, M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process., 54(12):3736-3745, 2006.
Jacob, L., Obozinski, G., and Vert, J.-P. Group Lasso with overlap and graph Lasso. In Proc. ICML, 2009.
Jenatton, R., Audibert, J.-Y., and Bach, F. Structured variable selection with sparsity-inducing norms. Technical report, arXiv:0904.3523, 2009.
Jenatton, R., Obozinski, G., and Bach, F. Structured sparse principal component analysis. In Proc. AISTATS, 2010.
Ji, S. and Ye, J. An accelerated gradient method for trace norm minimization. In Proc. ICML, 2009.
Kavukcuoglu, K., Ranzato, M., Fergus, R., and LeCun, Y. Learning invariant features through topographic filter maps. In Proc. CVPR, 2009.
Kim, S. and Xing, E. P. Tree-guided group lasso for multi-task regression with structured sparsity. Technical report, arXiv:0909.1373, 2009.
Lee, D. D. and Seung, H. S. Learning the parts of objects by non-negative matrix factorization. Nature, 401(6755):788-791, 1999.
Lee, H., Battle, A., Raina, R., and Ng, A. Y. Efficient sparse coding algorithms. In Adv. NIPS, 2007.
Mairal, J., Bach, F., Ponce, J., Sapiro, G., and Zisserman, A. Supervised dictionary learning. In Adv. NIPS, 2009.
Mairal, J., Bach, F., Ponce, J., and Sapiro, G. Online learning for matrix factorization and sparse coding. J. Mach. Learn. Res., 11:19-60, 2010.
Nesterov, Y. Gradient methods for minimizing composite objective function. Technical report, CORE, 2007.
Olshausen, B. A. and Field, D. J. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37:3311-3325, 1997.
Zhao, P., Rocha, G., and Yu, B. The composite absolute penalties family for grouped and hierarchical variable selection. Ann. Stat., 37(6A):3468-3497, 2009.
Zhu, J., Ahmed, A., and Xing, E. P. MedLDA: maximum margin supervised topic models for regression and classification. In Proc. ICML, 2009.