Sequential Projection Learning for Hashing with Compact Codes

Jun Wang    jwang@ee.columbia.edu
Department of Electrical Engineering, Columbia University, New York, NY 10027, USA

Sanjiv Kumar    sanjivk@google.com
Google Research, New York, NY 10011, USA

Shih-Fu Chang    sfchang@ee.columbia.edu
Department of Electrical Engineering, Columbia University, New York, NY 10027, USA

Abstract

Hashing based Approximate Nearest Neighbor (ANN) search has attracted much attention due to its fast query time and drastically reduced storage. However, most hashing methods either use random projections or extract principal directions from the data to derive hash functions. The resulting embedding suffers from poor discrimination when compact codes are used. In this paper, we propose a novel data-dependent projection learning method in which each hash function is designed to sequentially correct the errors made by the previous one. The proposed method easily adapts to both unsupervised and semi-supervised scenarios and shows significant performance gains over state-of-the-art methods on two large datasets containing up to 1 million points.

1. Introduction

Nearest neighbor search is a fundamental step in many machine learning algorithms that have been quite successful in fields like computer vision and information retrieval. Nowadays, datasets containing millions or even billions of points are becoming quite common, with data dimensionality easily exceeding hundreds or thousands. Thus, exhaustive linear search is infeasible in such gigantic datasets. Moreover, storage of such large datasets also causes an implementation bottleneck.

Fortunately, in many applications it is sufficient to find Approximate Nearest Neighbors (ANNs) instead, which allows fast search in large databases. Tree-based methods and hashing techniques are two popular frameworks for ANN search. Tree-based methods require large memory, and their efficiency drops significantly when the data dimension is high. On the contrary, hashing approaches achieve fast query time and need substantially less storage by indexing data with compact hash codes. Hence, in this work we focus on hashing methods, particularly those representing each point as a binary code.

Binary hashing aims to map the original dataset $X \in \mathbb{R}^{D \times n}$, containing $n$ $D$-dimensional points, to a Hamming space such that near neighbors in the original space have similar binary codes. To generate a $K$-bit Hamming embedding $Y \in \mathbb{B}^{K \times n}$, $K$ binary hash functions are used. In this work, we focus on linear projection-based hashing methods since they are very simple and efficient. In linear projection-based hashing, the $k$-th hash function is defined as $h_k(x) = \mathrm{sgn}(f(w_k^\top x + b_k))$, where $x$ is a data point, $w_k$ is a projection vector and $b_k$ is a threshold. Since $h_k(x) \in \{-1, 1\}$, the corresponding binary hash bit can be simply expressed as $y_k(x) = (1 + h_k(x))/2$. Different choices of $w$ and $f(\cdot)$ lead to different hashing approaches. For example, Locality Sensitive Hashing (LSH) keeps $f(\cdot)$ an identity function and samples $w$ randomly from a $p$-stable distribution and $b$ from a uniform distribution (Datar et al., 2004). Shift-Invariant Kernel-based Hashing (SIKH) samples $w$ similarly to LSH but chooses $f(\cdot)$ to be a shifted cosine function (Raginsky & Lazebnik, 2009). Spectral Hashing (SH) uses a similar $f(\cdot)$ as SIKH, but $w$ is chosen to be a principal direction of the data (Weiss et al., 2008).
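As a concrete illustration of the linear projection-based hash functions described above, here is a short Python sketch that generates LSH-style codes from random Gaussian projections. The function name, the uniform range used for the thresholds and the toy data are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def lsh_hash_codes(X, K, rng=None):
    """Generate K-bit binary codes with random linear projections.

    X : (D, n) array of n points in D dimensions (columns are points),
        matching the X in R^{D x n} convention used in the text.
    Returns a (K, n) array of bits in {0, 1}, i.e., y_k = (1 + h_k)/2.
    """
    rng = np.random.default_rng(rng)
    D, n = X.shape
    W = rng.standard_normal((D, K))          # w_k from a Gaussian (2-stable) distribution
    b = rng.uniform(-1.0, 1.0, size=(K, 1))  # random thresholds (illustrative range)
    H = np.sign(W.T @ X + b)                 # h_k(x) = sgn(w_k^T x + b_k), values in {-1, 0, 1}
    H[H == 0] = 1                            # break ties arbitrarily
    return ((1 + H) // 2).astype(np.uint8)   # map {-1, 1} to {0, 1}

# toy usage: 1000 random 64-dimensional points, 16-bit codes
codes = lsh_hash_codes(np.random.randn(64, 1000), K=16, rng=0)
```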
Methods that use random projections, i.e., LSH and SIKH, have interesting asymptotic theoretical properties and have been shown to work well with a large number of bits. But for short codes, such data-independent projections do not give good discrimination. Moreover, if long codes are used for creating hash lookup tables, one needs many tables to get reasonable recall, since the collision probability decreases exponentially as the code length increases. This makes the search much slower and also leads to a significant increase in storage. To be able to deal with very large datasets, ideally one would like to use codes that are as compact as possible while keeping a desired accuracy level. For this, learning data-dependent projections is crucial, which is the focus of this work. SH learns data-dependent directions via principal component analysis and usually performs better than LSH or SIKH for short codes. But its performance may decrease substantially when most of the data variance is contained in the top few principal directions, a common characteristic of many real-world datasets.

In all of the above methods, each projection is learned independently, in the sense that the coding errors made by any bit do not influence the learning of the other bits. To address this shortcoming, in this work we propose to learn a series of data-dependent and bit-correlated hash functions sequentially. Each new bit is specifically designed to correct the errors made by the previous bits, leading to powerful compact codes. We use linear projection coupled with mean thresholding as the hash function, i.e., $h_k(x_i) = \mathrm{sgn}(w_k^\top x_i + b_k)$. Without loss of generality, let the data be normalized to have zero mean. Hence, $b_k = 0$ for mean thresholding, and $h_k(x_i) = \mathrm{sgn}(w_k^\top x_i)$. Let $H = [h_1, \ldots, h_K]$ be a sequence of hash functions and $W = [w_1, \ldots, w_K] \in \mathbb{R}^{D \times K}$ be a set of vectors in $\mathbb{R}^D$ with $\|w_k\|_2 = 1$ for all $k$. Our method sequentially learns the $w_k$'s and works under both unsupervised and semi-supervised scenarios. In the semi-supervised case, one is given a few labeled pairs indicating whether each specified pair contains neighboring or non-neighboring points. The sequential method maximizes the empirical accuracy on the labeled data while an information-theoretic term acts as a regularizer to avoid overfitting. In the unsupervised case, a set of pseudo labels is generated sequentially from the probable mistakes made by the previous bit. Each new hash function tries to minimize these errors subject to the same regularizer as in the semi-supervised case. We conduct experiments on the MNIST dataset for the semi-supervised case and the SIFT-1M dataset for the unsupervised case. Both experiments show superior performance of the proposed sequential learning method over existing state-of-the-art approaches.

The remainder of this paper is organized as follows. We first present sequential projection learning in the semi-supervised scenario in Section 2. We then extend our approach to the unsupervised case in Section 3, where the key idea is to generate pseudo labels from each hash bit. Section 4 gives the experimental results, including comparisons with several state-of-the-art approaches. Concluding remarks are provided in Section 5.
2. Semi-supervised Sequential Learning

For many real-world tasks, it is possible to obtain a few data pairs in which the points are known to be either neighbors or non-neighbors. In some cases, labels for a few points in the dataset are known, and it is trivial to generate such pairs from them. Suppose one is given a set of $n$ points, $X = [x_i]$, $i = 1, \ldots, n$, where $x_i \in \mathbb{R}^D$ and $X \in \mathbb{R}^{D \times n}$, in which a fraction of the points are associated with two categories of relationships, $M$ and $C$. Specifically, a pair of points $(x_i, x_j) \in M$ is called a neighbor pair when the points are either neighbors in a metric space or share common class labels. Similarly, a pair $(x_i, x_j) \in C$ is called a non-neighbor pair if the points are far away in a metric space or have different class labels. Further, suppose there are $l$ points, $l < n$, each of which is associated with at least one of the two categories $M$ or $C$. The matrix formed by these $l$ columns of $X$ is denoted $X_l \in \mathbb{R}^{D \times l}$. The goal is to learn hash functions that minimize the errors on the labeled training data $X_l$.

2.1. Empirical Accuracy

The empirical accuracy of a family of hash functions $H$ is defined as the difference between the total number of correctly classified pairs and the total number of wrongly classified pairs by each bit:

$$J(H) = \sum_k \Big[ \sum_{(x_i, x_j) \in M} h_k(x_i)\, h_k(x_j) - \sum_{(x_i, x_j) \in C} h_k(x_i)\, h_k(x_j) \Big]. \quad (1)$$

The above function is not differentiable since $h_k(x_i) = \mathrm{sgn}(w_k^\top x_i)$. Hence, we relax the objective $J(H)$ by replacing $\mathrm{sgn}(\cdot)$ with its signed magnitude and rewrite the objective as a function of $W$:

$$J(W) = \sum_k \Big[ \sum_{(x_i, x_j) \in M} w_k^\top x_i x_j^\top w_k - \sum_{(x_i, x_j) \in C} w_k^\top x_i x_j^\top w_k \Big]. \quad (2)$$

This relaxation is quite intuitive in the sense that it not only desires similar/dissimilar points to have the same/different signs but also large projection magnitudes. To simplify the above form, we define a label matrix $S \in \mathbb{R}^{l \times l}$ which incorporates the pairwise relationships between the points of $X_l$ as

$$S_{ij} = \begin{cases} 1 & : (x_i, x_j) \in M \\ -1 & : (x_i, x_j) \in C \\ 0 & : \text{otherwise.} \end{cases} \quad (3)$$

With this label matrix $S$, one can rewrite (2) as

$$J(W) = \frac{1}{2} \sum_k w_k^\top X_l S X_l^\top w_k = \frac{1}{2} \mathrm{tr}\big\{ W^\top X_l S X_l^\top W \big\}. \quad (4)$$
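The label matrix of Eq. (3) and the relaxed empirical accuracy of Eq. (4) are straightforward to compute. The following minimal NumPy sketch, with our own function names and assuming the pairs are given as index pairs over the columns of $X_l$, shows one way to do it.

```python
import numpy as np

def pairwise_label_matrix(l, neighbor_pairs, nonneighbor_pairs):
    """Build S in R^{l x l} from index pairs, as in Eq. (3)."""
    S = np.zeros((l, l))
    for i, j in neighbor_pairs:
        S[i, j] = S[j, i] = 1.0
    for i, j in nonneighbor_pairs:
        S[i, j] = S[j, i] = -1.0
    return S

def empirical_accuracy(W, Xl, S):
    """Relaxed empirical accuracy J(W) = 1/2 tr(W^T Xl S Xl^T W), Eq. (4)."""
    P = W.T @ Xl                      # (K, l) projections of the labeled points
    return 0.5 * np.trace(P @ S @ P.T)
```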
2.2. Information-Theoretic Regularizer

Maximizing empirical accuracy for just a few pairs can lead to severe overfitting, as illustrated in Figure 1. Hence, we use a regularizer which utilizes both labeled and unlabeled data. From the information-theoretic point of view, one would like to maximize the information provided by each bit (Baluja & Covell, 2008). By the maximum entropy principle, a binary bit that gives a balanced partitioning of $X$ provides maximum information. Thus, it is desired to have $\sum_{i=1}^{n} h_k(x_i) = 0$. However, finding mean-thresholded hash functions that meet the balancing requirement is an NP-hard problem (Weiss et al., 2008). Instead, we use this property to construct a regularizer for the empirical accuracy given in (4). We now show that a maximum entropy partition is equivalent to maximizing the variance of a bit.

Figure 1. An illustration of partitioning with maximum empirical fitness and entropy. Both partitions satisfy the given pairwise labels, while the right one is more informative due to higher entropy.

Proposition 2.1 (maximum variance condition). A hash function with maximum entropy $H(h_k(x))$ must maximize the variance of the hash values, and vice versa, i.e.,

$$\max H(h_k(x)) \iff \max \mathrm{var}[h_k(x)].$$

Proof. Assume $h_k$ assigns the hash value $h_k(x) = 1$ to a data point with probability $p$ and $h_k(x) = -1$ with probability $1 - p$. The entropy of $h_k(x)$ is

$$H(h_k(x)) = -p \log_2 p - (1 - p) \log_2 (1 - p).$$

It is easy to show that the maximum entropy, $\max H(h_k(x)) = 1$, is attained when the partition is balanced, i.e., $p = 1/2$. Now we show that balanced partitioning implies maximum bit variance. The mean of the hash value is $\mathrm{E}[h_k(x)] = \mu = 2p - 1$, and the variance is

$$\mathrm{var}[h_k(x)] = \mathrm{E}\big[(h_k(x) - \mu)^2\big] = 4(1 - p)^2 p + 4p^2 (1 - p) = 4p(1 - p).$$

Clearly, $\mathrm{var}[h_k(x)]$ is concave with respect to $p$ and its maximum is reached at $p = 1/2$, i.e., balanced partitioning. Also, since $\mathrm{var}[h_k(x)]$ has a unique maximum, the maximum variance partitioning also maximizes the entropy of the hash function.
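As a quick numerical sanity check of Proposition 2.1 (not part of the paper), the snippet below evaluates the bit entropy and the bit variance over a grid of $p$ and confirms that both peak at the balanced partition $p = 1/2$.

```python
import numpy as np

p = np.linspace(0.01, 0.99, 99)
entropy = -p * np.log2(p) - (1 - p) * np.log2(1 - p)  # H(h_k(x))
variance = 4 * p * (1 - p)                            # var[h_k(x)] = 4p(1-p)

print(p[np.argmax(entropy)], p[np.argmax(variance)])  # both report p = 0.5
```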
Using the above proposition, the regularizer term is defined as

$$R(W) = \sum_k \mathrm{var}[h_k(x)] = \sum_k \mathrm{var}\big[\mathrm{sgn}(w_k^\top x)\big]. \quad (5)$$

Maximizing the above function with respect to $W$ is still hard due to its non-differentiability. To overcome this problem, we first show that the maximum variance of a hash function is lower-bounded by the scaled variance of the projected data.

Proposition 2.2 (lower bound on maximum variance of a hash function). The maximum variance of a hash function is lower-bounded by the scaled variance of the projected data, i.e.,

$$\max \mathrm{var}[h_k(x)] \geq \alpha \cdot \mathrm{var}[w_k^\top x],$$

where $\alpha$ is a positive constant.

Proof. Suppose $\|x_i\|^2 \leq \beta$ for all $i$. Since $\|w_k\|_2 = 1$ for all $k$, from the Cauchy-Schwarz inequality,

$$(w_k^\top x)^2 \leq \|w_k\|^2 \cdot \|x\|^2 \leq \beta = \beta \cdot \big\|\mathrm{sgn}(w_k^\top x)\big\|^2$$
$$\Rightarrow \; \mathrm{E}\big[\|\mathrm{sgn}(w_k^\top x)\|^2\big] \geq \frac{1}{\beta}\, \mathrm{E}\big[(w_k^\top x)^2\big] \;\Rightarrow\; \max \mathrm{var}[h_k(x)] \geq \frac{1}{\beta}\, \mathrm{var}[w_k^\top x].$$

Here we have used the properties that the data is zero-centered, i.e., $\mathrm{E}[w_k^\top x] = 0$, and that for maximum bit variance $\mathrm{E}[\mathrm{sgn}(w_k^\top x)] = 0$.

Given the above proposition, we use the lower bound on the maximum variance of a hash function as a regularizer, which is easy to optimize, i.e.,

$$R(W) = \frac{1}{\beta} \sum_k \mathrm{E}\big[(w_k^\top x)^2\big] = \frac{1}{n\beta} \sum_k w_k^\top X X^\top w_k. \quad (6)$$

2.3. Total Objective

The overall objective function combines the empirical accuracy given in (4) and the regularizer given in (6) as

$$J(W) = \frac{1}{2} \sum_k w_k^\top X_l S X_l^\top w_k + \frac{\eta}{2}\, R(W),$$

where $\eta$ is the regularization coefficient. Absorbing constants like $n$ and $\beta$ into $\eta$, one can write the overall objective function in a more compact matrix form as

$$J(W) = \frac{1}{2} \mathrm{tr}\big\{ W^\top X_l S X_l^\top W \big\} + \frac{\eta}{2} \mathrm{tr}\big\{ W^\top X X^\top W \big\} = \frac{1}{2} \mathrm{tr}\big\{ W^\top \big[ X_l S X_l^\top + \eta X X^\top \big] W \big\}. \quad (7)$$

In summary, the above semi-supervised formulation consists of two components: a supervised term that measures the empirical accuracy of the learned hash functions over the labeled pairs, and an unsupervised term that acts as a regularizer and prefers directions maximizing the variance of the projected data.

The optimal solution $\hat{W} = \arg\max_W J(W)$ is obtained by maximizing (7). This can be done easily if additional orthogonality constraints are imposed on $W$, i.e., $W^\top W = I$. Such constraints try to minimize the redundancy in bits by effectively decorrelating them. In that case, the solution is simply given by the top $K$ eigenvectors of the matrix

$$M = X_l S X_l^\top + \eta X X^\top, \quad (8)$$

which can be obtained in a single shot without needing any sequential optimization. Mathematically, this is very similar to finding maximum variance directions using PCA, except that the data covariance matrix gets "adjusted" by another matrix arising from the labeled data. Since it is essentially a PCA-based hashing method, in this work we refer to it as PCAH.
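A minimal sketch of this single-shot PCAH solution is given below, assuming the conventions above (columns of $X$ are zero-mean points, $S$ is the label matrix of Eq. (3)); the parameter eta stands for the regularization coefficient $\eta$ and the function names are ours.

```python
import numpy as np

def pcah_projections(X, Xl, S, K, eta=1.0):
    """Single-shot maximization of Eq. (7): top-K eigenvectors of
    M = Xl S Xl^T + eta * X X^T (Eq. 8)."""
    M = Xl @ S @ Xl.T + eta * (X @ X.T)
    eigvals, eigvecs = np.linalg.eigh(M)            # M is symmetric; eigenvalues ascending
    idx = np.argsort(eigvals)[::-1][:K]             # indices of the K largest eigenvalues
    return eigvecs[:, idx]                          # (D, K), columns are the w_k

def hash_bits(W, X):
    """h_k(x) = sgn(w_k^T x) with zero-mean data; returns bits in {-1, 1}."""
    H = np.sign(W.T @ X)
    H[H == 0] = 1
    return H
```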
Even though computationally simple, the imposition of orthogonality constraints leads to a practical problem. It is well known that for many real-world datasets, most of the variance is contained in the top few principal directions. The orthogonality constraints force one to progressively pick directions that have low variance, thus substantially reducing the quality of the corresponding bits. A possible remedy is to relax the hard orthogonality constraints by adding a soft penalty term that penalizes non-orthogonal directions. The final solution can then be achieved via a single-shot adjustment over the orthogonal one, as discussed in our previous work (Wang et al., 2010). However, the resulting solution is sensitive to the choice of the penalty coefficient. Moreover, such a single-shot solution does not have the sequential error-correcting property in which each hash function tries to correct the errors made by the previous one. In the next section, we propose an alternative solution that learns a sequence of projections and can also be extended to the unsupervised case, as shown later.

2.4. Sequential Projection Learning

The idea of sequential projection learning is quite intuitive. The hash functions are learned iteratively such that in each iteration, the pairwise label matrix $S$ in (3) is updated by imposing higher weights on the point pairs violated by the previous hash function. This sequential process implicitly creates dependency between bits and progressively minimizes the empirical error. The sign of $S_{ij}$, representing the logical relationship of a point pair $(x_i, x_j)$, remains unchanged during the entire process; only its magnitude $|S_{ij}|$ is updated. Algorithm 1 describes the proposed semi-supervised sequential projection learning.

Algorithm 1 Semi-supervised sequential projection learning for hashing (S3PLH)
Input: data $X$, pairwise labeled data $X_l$, initial pairwise labels $S^1$, length of hash codes $K$, constant $\alpha$
for $k = 1$ to $K$ do
    Compute the adjusted covariance matrix: $M_k = X_l S^k X_l^\top + \eta X X^\top$
    Extract the first eigenvector $e$ of $M_k$ and set: $w_k = e$
    Update the labels using $w_k$: $S^{k+1} = S^k - \alpha\, T(\tilde{S}^k, S^k)$
    Compute the residual: $X = X - w_k w_k^\top X$
end for

Here, $\tilde{S}^k \in \mathbb{R}^{l \times l}$ measures the signed magnitude of the pairwise relationships of the $k$-th projections of $X_l$:

$$\tilde{S}^k = X_l^\top w_k w_k^\top X_l. \quad (9)$$

Mathematically, $\tilde{S}^k$ is simply the derivative of the empirical accuracy of the $k$-th hash function, i.e., $\tilde{S}^k = \nabla_S J_k$, where $J_k = w_k^\top X_l S X_l^\top w_k$. The function $T(\cdot)$ derives the truncated gradient of $J_k$:

$$T(\tilde{S}^k_{ij}, S_{ij}) = \begin{cases} \tilde{S}^k_{ij} & : \mathrm{sgn}(S_{ij} \cdot \tilde{S}^k_{ij}) < 0 \\ 0 & : \mathrm{sgn}(S_{ij} \cdot \tilde{S}^k_{ij}) \geq 0. \end{cases} \quad (10)$$

The condition $\mathrm{sgn}(S_{ij} \cdot \tilde{S}^k_{ij}) < 0$ for a pair $(x_i, x_j)$ indicates that the hash bits $h_k(x_i)$ and $h_k(x_j)$ contradict the given pairwise label: points in a neighbor pair $(x_i, x_j) \in M$ are assigned different bits, or points in a pair $(x_i, x_j) \in C$ are assigned the same bit. For each such violation, $S_{ij}$ is updated as $S_{ij} = S_{ij} - \alpha \tilde{S}^k_{ij}$. The step size $\alpha$ is chosen such that $\alpha \leq 1/\beta$, where $\beta = \max_i \|x_i\|^2$, ensuring $\alpha |\tilde{S}^k_{ij}| \leq 1$. This leads to numerically stable updates without changing the sign of $S_{ij}$. Those pairs for which the current hash function produces the correct bits, i.e., $\mathrm{sgn}(S_{ij} \cdot \tilde{S}^k_{ij}) > 0$, are kept unchanged by setting $T(\tilde{S}^k_{ij}, S_{ij}) = 0$. Thus, the labeled pairs for which the current hash function does not predict the bits correctly exert more influence on the learning of the next function, biasing the new projection to produce correct bits for such pairs. Intuitively, this has the flavor of boosting-based methods commonly used for classification.
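A compact NumPy sketch of Algorithm 1 under the stated conventions is given below; the default step size follows $\alpha \leq 1/\beta$ with $\beta = \max_i \|x_i\|^2$, and the parameter names eta and alpha are our own stand-ins for $\eta$ and $\alpha$.

```python
import numpy as np

def s3plh(X, Xl, S, K, eta=1.0, alpha=None):
    """Sketch of Algorithm 1 (S3PLH). X: (D, n) zero-mean data,
    Xl: (D, l) labeled columns, S: (l, l) initial label matrix (Eq. 3).
    Returns W of shape (D, K), one projection per bit."""
    X = X.astype(float).copy()
    S = S.astype(float).copy()
    if alpha is None:
        alpha = 1.0 / np.max(np.sum(X ** 2, axis=0))   # alpha <= 1/beta, beta = max_i ||x_i||^2
    W = []
    for _ in range(K):
        M = Xl @ S @ Xl.T + eta * (X @ X.T)            # adjusted covariance M_k
        _, vecs = np.linalg.eigh(M)
        w = vecs[:, -1]                                 # eigenvector of the largest eigenvalue
        W.append(w)
        p = Xl.T @ w                                    # projections of the labeled points
        S_tilde = np.outer(p, p)                        # Eq. (9): Xl^T w_k w_k^T Xl
        T = np.where(S * S_tilde < 0, S_tilde, 0.0)     # truncated gradient, Eq. (10)
        S = S - alpha * T                               # re-weight the violated pairs
        X = X - np.outer(w, w) @ X                      # remove the learned direction from X
    return np.stack(W, axis=1)
```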
After extracting a projection direction using $M_k$, the contribution of the subspace spanned by that direction is removed from $X$ to minimize the redundancy in bits. Note that this is not the same as imposing the orthogonality constraints on $W$ discussed in Section 2.3. Since the supervised term $X_l S^k X_l^\top$ still contains information potentially from the whole space spanned by the original $X$, the new direction may still have a component in the subspace spanned by the previous directions. Thus, the proposed formulation automatically decides the level of desired correlation between successive hash functions: if the empirical accuracy is not affected, it prefers to pick uncorrelated projections. Unlike the single-shot solution discussed earlier, the proposed sequential method therefore aggregates various desirable properties in a single formulation, leading to superior performance on real-world tasks as shown in Section 4. Next we show how one can adapt the sequential learning method to the unsupervised case.

3. Unsupervised Sequential Learning

Unlike the semi-supervised case, pairwise labels are not available in the unsupervised case. To apply the general framework of sequential projection learning in an unsupervised setting, we propose the idea of generating pseudo labels at each iteration of learning. In fact, while generating a bit via a binary hash function, one encounters two types of boundary errors due to the thresholding of the projected data. Suppose all the data points are projected onto a one-dimensional axis as shown in Figure 2, and the red vertical line is the partition boundary, i.e., $w_k^\top x = 0$. The points to the left of the boundary are assigned the hash value $h_k(x) = -1$ and those on the right are assigned the value $h_k(x) = 1$. The regions marked $r^-$ and $r^+$ are located very close to the boundary, and the regions $R^-$ and $R^+$ are located far from it. Due to thresholding, points in a pair $(x_i, x_j)$ with $x_i \in r^-$ and $x_j \in r^+$ are assigned different hash bits even though their projections are quite close. On the other hand, points in a pair $(x_i, x_j)$ with $x_i \in r^-$ and $x_j \in R^-$, or $x_i \in r^+$ and $x_j \in R^+$, are assigned the same hash bit even though their projected values are quite far apart.

Figure 2. Potential errors due to thresholding (red line) of the projected data to generate a bit. Points in $r^-$ and $r^+$ are assigned different bits even though they are quite close. Also, points in $R^-$ ($R^+$) and $r^-$ ($r^+$) are assigned the same bit even though they are quite far.

To correct these two types of boundary "errors", we first introduce a neighbor-pair set $M$ and a non-neighbor-pair set $C$:

$$M = \big\{(x_i, x_j) : h(x_i) \cdot h(x_j) = -1,\; |w^\top (x_i - x_j)| \leq \varepsilon \big\},$$
$$C = \big\{(x_i, x_j) : h(x_i) \cdot h(x_j) = 1,\; |w^\top (x_i - x_j)| \geq \zeta \big\}.$$

Then, given the current hash function, a desired number of point pairs are sampled from both $M$ and $C$. Suppose $X_{MC}$ contains all the points that are part of at least one sampled pair. Using the sampled pairs and $X_{MC}$, a pairwise label matrix $S_{MC}$ is constructed similarly to Equation (3). In other words, for a pair $(x_i, x_j) \in M$ a pseudo label $S_{MC,ij} = 1$ is assigned, while for a pair $(x_i, x_j) \in C$, $S_{MC,ij} = -1$ is assigned. In the next iteration, these pseudo labels enforce that the point pairs in $M$ are assigned the same hash values and those in $C$ different ones. Thus the method sequentially tries to correct the potential errors made by the previous hash functions.

Note that each hash function $h_k(\cdot)$ produces a pseudo label set $X^k_{MC}$ and the corresponding label matrix $S^k_{MC}$. The new label information is used to adjust the data covariance matrix in each iteration of sequential learning, similarly to the semi-supervised case. However, unlike the semi-supervised case, the unsupervised setting does not have a boosting-like update of the label matrix; each iteration generates its own label matrix depending on the current hash function. Hence, to learn a new projection, all the pairwise label matrices generated since the beginning are used, but their contribution is decayed exponentially by a factor $\lambda$ at each iteration. Note that one does not need to store these matrices explicitly, since an incremental update can be done at each iteration, resulting in the same memory and time complexity as in the semi-supervised case. The detailed learning procedure is described in Algorithm 2. Since no pseudo labels exist at the beginning, the first vector $w_1$ is simply the first principal direction of the data. Each subsequent hash function is then learned to satisfy the pseudo labels iteratively by adjusting the data covariance matrix, similarly to the S3PLH approach.

Algorithm 2 Unsupervised sequential projection learning for hashing (USPLH)
Input: data $X$, length of hash codes $K$
Initialize $X^0_{MC} = \emptyset$, $S^0_{MC} = 0$.
for $k = 1$ to $K$ do
    Compute the adjusted covariance matrix: $M_k = \sum_{i=0}^{k-1} \lambda^{k-i} X^i_{MC} S^i_{MC} X^{i\top}_{MC} + \eta X X^\top$
    Extract the first eigenvector $e$ of $M_k$ and set: $w_k = e$
    Generate pseudo labels from projection $w_k$: sample $X^k_{MC}$ and construct $S^k_{MC}$
    Compute the residual: $X = X - w_k w_k^\top X$
end for
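The pseudo-label generation step of Algorithm 2 could be sketched as follows. This is a naive rejection-sampling illustration: the thresholds eps and zeta, the number of pairs m and the function name are our own choices, and the paper does not prescribe a particular sampler.

```python
import numpy as np

def sample_pseudo_pairs(X, w, eps, zeta, m, rng=None):
    """Sample pseudo-labeled pairs around the boundary w^T x = 0 (a sketch).

    'Neighbor' pairs straddle the boundary with |w^T(x_i - x_j)| <= eps;
    'non-neighbor' pairs share a side with |w^T(x_i - x_j)| >= zeta.
    Returns two lists of column-index pairs (i, j).
    """
    rng = np.random.default_rng(rng)
    proj = w @ X                                        # 1-D projections of all points
    left, right = np.where(proj < 0)[0], np.where(proj >= 0)[0]

    neighbors, non_neighbors = [], []
    while len(neighbors) < m:
        i, j = rng.choice(left), rng.choice(right)      # opposite sides of the boundary
        if abs(proj[i] - proj[j]) <= eps:
            neighbors.append((i, j))
    while len(non_neighbors) < m:
        side = left if rng.random() < 0.5 else right
        i, j = rng.choice(side, size=2, replace=False)  # same side of the boundary
        if abs(proj[i] - proj[j]) >= zeta:
            non_neighbors.append((i, j))
    return neighbors, non_neighbors
```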
4. Experiments

To evaluate the performance of our sequential projection learning method in the semi-supervised (S3PLH) and unsupervised (USPLH) settings, we used two large datasets: MNIST (70K points) and SIFT-1M (1 million points). We compare against several state-of-the-art hashing approaches, including LSH, SH, and SIKH. In addition, the sequential learning results are contrasted with those from single-shot learning using the adjusted data covariance (PCAH), as discussed in Section 2.3. From the supervised domain, we also compare against two popular techniques: Restricted Boltzmann Machines (RBMs) (Hinton & Salakhutdinov, 2006; Torralba et al., 2008), and a boosting-style method based on standard LSH called Boosting SSC (BSSC) (Shakhnarovich, 2005). In BSSC, pairwise labels are used to gradually tune the threshold for binary partitioning and the weight of each hash bit, resulting in sequential error-correcting properties.

To perform real-time search, we adopt two methods commonly used in the literature: (i) Hamming ranking, in which all the points in the database are ranked according to their Hamming distance from the query and the desired neighbors are returned from the top of the ranked list; and (ii) hash lookup, in which a lookup table is constructed using the database codes, and all the points in the buckets that fall within a small Hamming radius of the query are returned. The complexity of Hamming ranking is linear, although it is very fast in practice. The complexity of hash lookup, on the other hand, is constant time. All the experiments were conducted using relatively short codes of length up to 64 bits.

We use several metrics to measure the quantitative performance of the different methods. For Hamming ranking based evaluation, Mean Average Precision (MAP) is computed, which approximates the area under the precision-recall curve (Turpin & Scholer, 2006). To get MAP, precision is first computed at each location of the ranked list where a desired neighbor lies and averaged over all such locations; the mean of the average precision over all queries then yields MAP. Next, recall is computed for the different methods via Hamming ranking by progressively scanning the entire dataset. We also compute the precision within the top M neighbors returned by Hamming ranking. Finally, precision is computed for hash lookup. Similar to (Weiss et al., 2008), a Hamming radius of 2 is used to retrieve the neighbors. If a query returns no neighbors, it is treated as a failed query with zero precision.
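For reference, the two search protocols can be sketched as below. The lookup table is assumed to be a plain Python dict mapping code tuples to lists of database ids, which is our own simplification of a hash table.

```python
import numpy as np
from itertools import combinations

def hamming_rank(query_code, db_codes):
    """Hamming ranking: sort database points by Hamming distance to the query.
    Codes are 0/1 integer arrays; db_codes has one column per database point."""
    dists = np.count_nonzero(db_codes != query_code[:, None], axis=0)
    return np.argsort(dists, kind="stable")

def hash_lookup(query_code, table, radius=2):
    """Hash lookup: return all ids whose code lies within the given Hamming
    radius of the query. For a K-bit code this probes 1 + K + K(K-1)/2 buckets
    when radius=2, i.e., constant time for fixed K."""
    K = len(query_code)
    hits = list(table.get(tuple(query_code), []))
    for r in range(1, radius + 1):
        for flip in combinations(range(K), r):      # flip r bits of the query code
            probe = query_code.copy()
            probe[list(flip)] ^= 1
            hits.extend(table.get(tuple(probe), []))
    return hits
```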
4.1. MNIST Digit Data

We applied our semi-supervised sequential learning method (S3PLH) to the MNIST dataset, which consists of 70K handwritten digit samples (available at http://yann.lecun.com/exdb/mnist/). Each sample is an image of 28 x 28 pixels, yielding a 784-dimensional data vector, and is associated with a class label (digit 0-9). Since this dataset is fully annotated, neighbor pairs can be obtained easily from the image labels. The entire dataset is partitioned into two parts: a training set with 69K samples and a test set with 1K samples. In addition to the unlabeled training data, S3PLH, RBMs and BSSC also use a small set of labeled training data while learning the binary encoding. Therefore, we randomly sample 1000 points from the training set to construct neighbor and non-neighbor pairs: a neighbor pair contains two samples that share the same digit label, while a non-neighbor pair contains samples with different labels. RBMs train neural networks using the labeled pairs for fine-tuning, while S3PLH and PCAH use them to construct the pairwise label matrix S. Finally, BSSC uses the labeled pairs directly in a boosting framework to learn the thresholds and weights for each hash function.

Figure 3. Results on MNIST dataset. (a) MAP for different numbers of bits using Hamming ranking; (b) precision within Hamming radius 2 using hash lookup.

Figure 3(a) shows MAP for the various methods using different numbers of bits. As discussed in Section 2.3, the performance of the single-shot adjusted PCA (PCAH) decreases for higher bits because the later bits are computed from low-variance projections. SH also uses PCA directions and suffers from the same problem. Even though RBMs perform better than the other techniques in the semi-supervised setting, our sequential method (S3PLH) provides the best MAP for all bits. Figure 3(b) presents the precision using hash lookup within Hamming radius 2 for different numbers of bits. The precision for all methods drops significantly when longer codes are used, because for longer codes the number of points falling in a bucket decreases exponentially. Thus, many queries fail by not returning any neighbor even within a Hamming ball of radius 2. This shows a practical problem with hash lookup tables, even though they have faster query response than Hamming ranking. Even in this case, S3PLH provides the best performance in most cases. Also, the drop in precision for longer codes is much smaller than for the other methods, indicating fewer failed queries for S3PLH.

Figure 4. Computational costs of different techniques. (a) Training time (sec, log10 scale); (b) test time (sec).

Figure 4 provides the training and test time for the different techniques. Clearly, RBMs are the most expensive to train, needing at least two orders of magnitude more time than any other approach. LSH and SIKH need the least training time because their projection directions and thresholds are randomly generated instead of being learned. PCAH and SH take similar training time. Being a sequential method, S3PLH needs longer training time than PCAH or SH, but it is still fast in practice, needing just a few hundred seconds to learn 64-bit codes (slightly faster than BSSC in our experiments). In terms of test time, RBMs are again the most expensive because they compute the binary code of a query using a multi-layer neural network. S3PLH, along with PCAH, LSH and BSSC, is among the four fastest techniques. SH and SIKH require a little more time than these methods due to the computation of nonlinear sinusoidal functions to generate each bit.

Figure 5. (a)-(b) Recall curves on MNIST dataset (a) with 24 bits and (b) with 48 bits. (c)-(d) Precision of the top M returned neighbors (c) with 24 bits and (d) with 48 bits.

Finally, the recall curves for the different methods using Hamming ranking are shown in Figure 5(a) for 24 bits and Figure 5(b) for 48 bits. S3PLH gives the best recall among all methods. Moreover, for higher bits, the recall of all other methods decreases except that of S3PLH. Figures 5(c) and 5(d) show the quality of the first M points retrieved using Hamming ranking, computed as the percentage of true neighbors among the M returned points. M is varied from 50 to 1000 and precision is computed for the same two code lengths as for the recall curves. S3PLH gives the best precision for almost all values of M, and the difference from the other methods is more pronounced at higher M.
4.2. SIFT-1M Dataset

Next, we compare the performance of our unsupervised sequential learning method (USPLH) with the other unsupervised methods on the SIFT-1M dataset. It contains 1 million local SIFT descriptors extracted from random images (Lowe, 2004). Each point in the dataset is a 128-dimensional vector representing histograms of gradient orientations. We use 1 million samples for training and an additional 10K for testing. Euclidean distance is used to determine the nearest neighbors. Following the criterion used in (Weiss et al., 2008; Wang et al., 2010), a returned point is considered a true neighbor if it lies in the top 2 percentile of points closest to the query. Since no labels are available in this experiment, PCAH has no adjustment term; it uses just the data covariance to compute the required projections. For USPLH, to learn each hash function sequentially, we select 2000 samples from each of the four boundary and margin regions $r^-$, $r^+$, $R^-$, $R^+$. A label matrix S is constructed by assigning pseudo labels to pairs generated from these samples, as described in Section 3.

Figure 6. Results on SIFT-1M dataset. (a) MAP for different numbers of bits using Hamming ranking; (b) precision within Hamming radius 2 using hash lookup; recall curves (c) with 24 bits and (d) with 48 bits.

Figure 6(a) shows MAP for the different methods while varying the number of bits. The methods that learn data-dependent projections, i.e., USPLH, SH and PCAH, generally perform much better than LSH and SIKH. SH performs better than PCAH for longer codes since, for this dataset, SH tends to pick the high-variance directions again. USPLH yields the best MAP for all bits. Figure 6(b) shows the precision curves for the different methods using hash lookup tables. USPLH again gives the best performance in most cases. Also, the performance of USPLH does not drop as rapidly as that of SH and PCAH with increasing bits. Thus, USPLH leads to fewer query failures than the other methods. Figures 6(c) and 6(d) show the recall curves for the different methods using 24-bit and 48-bit codes. The higher precision and recall of USPLH indicate the advantage of learning hash functions sequentially, even with noisy pseudo labels.

5. Conclusions

We have proposed a sequential projection learning paradigm for robust hashing with compact codes. Each new bit tends to minimize the errors made by the previous bit, resulting in learning correlated hash functions. When applied in a semi-supervised setting, the method maximizes empirical accuracy over the labeled data while regularizing the solution with an information-theoretic term. Further, in the unsupervised setting, we show how one can generate pseudo labels from the potential errors made by the current bit. The final algorithm in both settings is very simple, mainly requiring a few matrix multiplications followed by the extraction of a top eigenvector. Experiments on two large datasets clearly demonstrate the performance gains over several state-of-the-art methods. In the future, we would like to investigate theoretical properties of the codes learned by sequential methods.
References

Baluja, S. and Covell, M. Learning to hash: Forgiving hash functions and applications. Data Mining and Knowledge Discovery, 17(3):402-430, 2008.

Datar, M., Immorlica, N., Indyk, P., and Mirrokni, V. S. Locality-sensitive hashing scheme based on p-stable distributions. In Proc. of the 20th Annual Symposium on Computational Geometry, pp. 253-262, 2004.

Hinton, G. E. and Salakhutdinov, R. R. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.

Lowe, D. G. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110, 2004.

Raginsky, M. and Lazebnik, S. Locality-sensitive binary codes from shift-invariant kernels. In Neural Information Processing Systems, volume 22, 2009.

Shakhnarovich, G. Learning task-specific similarity. PhD thesis, Massachusetts Institute of Technology, 2005.

Torralba, A., Fergus, R., and Weiss, Y. Small codes and large image databases for recognition. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-8, 2008.

Turpin, A. and Scholer, F. User performance versus precision measures for simple search tasks. In Proc. of ACM SIGIR, pp. 11-18, 2006.

Wang, J., Kumar, S., and Chang, S.-F. Semi-supervised hashing for scalable image retrieval. In Proc. of IEEE Conference on Computer Vision and Pattern Recognition, 2010.

Weiss, Y., Torralba, A., and Fergus, R. Spectral hashing. In Neural Information Processing Systems, volume 21, pp. 1753-1760, 2008.