Sparse deep belief net model for visual area V2
Honglak Lee, Chaitanya Ekanadham, Andrew Y. Ng
Computer Science Department, Stanford University
Spotlight ID: W51

Sparse Deep Belief Net
Our goal: learn a hierarchical (or "deep"), sparse representation from unlabeled data. We develop a sparse variant of the deep belief network and present an unsupervised learning model that faithfully mimics certain properties of visual area V2.

Learning hierarchical features from natural images
We train two layers of nodes in the network on natural images. The first layer yields localized, oriented edge filters, similar to the Gabor functions known to model V1 cell receptive fields. The second layer encodes both collinear ("contour") features as well as corners and junctions.

[Figure: Learned first layer bases]

Comparing our model to biological V2 statistics
In our model, the encoding of these complex "corner" features matches well with the results from Ito & Komatsu's study of biological V2 responses. This suggests that our sparse variant of deep belief networks holds promise for modeling higher-order features.

[Figure: Learned second layer bases]
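
For concreteness, below is a minimal sketch of the kind of training step a sparse restricted Boltzmann machine layer uses: one contrastive-divergence (CD-1) update plus a regularizer that pushes each hidden unit's mean activation toward a small target value. The function name, parameter values, and the exact form of the sparsity penalty are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sparse_rbm_cd1_update(W, b, c, v0, lr=0.01,
                              target_sparsity=0.02, sparsity_cost=3.0):
        """One CD-1 step with a sparsity penalty (illustrative sketch).

        W: (n_visible, n_hidden) weights
        b: (n_hidden,) hidden biases, c: (n_visible,) visible biases
        v0: (batch, n_visible) data batch (e.g., whitened image patches)
        """
        # Positive phase: hidden activations given the data
        h0_prob = sigmoid(v0 @ W + b)
        h0_sample = (np.random.random(h0_prob.shape) < h0_prob).astype(float)

        # Negative phase: one step of Gibbs sampling (reconstruction)
        v1_prob = sigmoid(h0_sample @ W.T + c)
        h1_prob = sigmoid(v1_prob @ W + b)

        # Standard CD-1 gradients
        n = v0.shape[0]
        dW = (v0.T @ h0_prob - v1_prob.T @ h1_prob) / n
        db = (h0_prob - h1_prob).mean(axis=0)
        dc = (v0 - v1_prob).mean(axis=0)

        # Sparsity term: nudge each hidden unit's mean activation toward the
        # target; applying it only to the hidden biases is a common simplification.
        db += sparsity_cost * (target_sparsity - h0_prob.mean(axis=0))

        return W + lr * dW, b + lr * db, c + lr * dc

Stacking follows the usual greedy deep-belief-net recipe: train the first layer on image patches, then feed its hidden activations as "visible" data to a second sparse layer, which produces the second-layer bases that encode contours, corners, and junctions.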