Feature Hashing for Large Scale Multitask Learning

Kilian Weinberger  KILIAN@YAHOO-INC.COM
Anirban Dasgupta  ANIRBAN@YAHOO-INC.COM
John Langford  JL@HUNCH.NET
Alex Smola  ALEX@SMOLA.ORG
Josh Attenberg  JOSH@CIS.POLY.EDU

Yahoo! Research, 2821 Mission College Blvd., Santa Clara, CA 95051 USA

Appearing in Proceedings of the 26th International Conference on Machine Learning, Montreal, Canada, 2009. Copyright 2009 by the author(s)/owner(s).

Abstract

Empirical evidence suggests that hashing is an effective strategy for dimensionality reduction and practical nonparametric estimation. In this paper we provide exponential tail bounds for feature hashing and show that the interaction between random subspaces is negligible with high probability. We demonstrate the feasibility of this approach with experimental results for a new use case: multitask learning with hundreds of thousands of tasks.

1. Introduction

Kernel methods use inner products as the basic tool for comparisons between objects. That is, given objects $x_1, \dots, x_n \in X$ for some domain $X$, they rely on

$$k(x_i, x_j) := \langle \phi(x_i), \phi(x_j) \rangle \qquad (1)$$

to compare the features $\phi(x_i)$ of $x_i$ and $\phi(x_j)$ of $x_j$, respectively. Eq. (1) is famously referred to as the kernel-trick: it allows the use of inner products between very high dimensional feature vectors $\phi(x_i)$ and $\phi(x_j)$ implicitly, through the definition of a positive semi-definite kernel matrix $k$, without ever having to compute a vector $\phi(x_i)$ directly.
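As a concrete illustration of Eq. (1), the short sketch below (ours, not part of the original paper; the function names are illustrative) contrasts an explicit degree-2 polynomial feature map with the equivalent implicit kernel evaluation.

```python
import numpy as np

def phi_poly2(x):
    """Explicit degree-2 feature map: all pairwise products x_i * x_j (d^2 dimensions)."""
    return np.outer(x, x).ravel()

def k_poly2(x, xp):
    """The same inner product computed implicitly via the kernel-trick: <x, x'>^2."""
    return float(np.dot(x, xp)) ** 2

x, xp = np.array([1.0, 2.0, 3.0]), np.array([0.5, -1.0, 2.0])

explicit = float(np.dot(phi_poly2(x), phi_poly2(xp)))  # forms phi(x) explicitly
implicit = k_poly2(x, xp)                              # never forms phi(x)
assert np.isclose(explicit, implicit)
print(explicit, implicit)
```

The hashing-trick discussed below addresses the opposite regime: the (possibly already linearly separable) input representation is so high dimensional that even storing a dense parameter vector becomes the bottleneck.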
This can be particularly powerful in classification settings where the original input representation has a non-linear decision boundary: often, linear separability can be achieved in a high dimensional feature space $\phi(x_i)$.

In practice, for example in text classification, researchers frequently encounter the opposite problem: the original input space is almost linearly separable (often because of the existence of handcrafted non-linear features), yet the training set may be prohibitively large in size and very high dimensional. In such a case, there is no need to map the input vectors into a higher dimensional feature space; instead, limited memory makes storing a kernel matrix infeasible.

For this common scenario several authors have recently proposed an alternative, but highly complementary, variation of the kernel-trick, which we refer to as the hashing-trick: one hashes the high dimensional input vectors $x$ into a lower dimensional feature space $\mathbb{R}^m$ with $\phi : X \to \mathbb{R}^m$ (Langford et al., 2007; Shi et al., 2009). The parameter vector of a classifier can therefore live in $\mathbb{R}^m$ instead of in the original input space $\mathbb{R}^d$ (or in $\mathbb{R}^n$ in the case of kernel matrices), where $m \ll n$ and $m \ll d$. Different from random projections, the hashing-trick preserves sparsity and introduces no additional overhead to store projection matrices.

To our knowledge, we are the first to provide exponential tail bounds on the canonical distortion of these hashed inner products. We also show that the hashing-trick can be particularly powerful in multi-task learning scenarios where the original feature spaces are the cross-product of the data, $X$, and the set of tasks, $U$. We show that one can use different hash functions for each task, $\phi_1, \dots, \phi_{|U|}$, to map the data into one joint space with little interference. Sharing amongst the different tasks is achieved with an additional hash function $\phi_0$ that also maps into the same joint space. The hash function $\phi_0$ is shared amongst all $|U|$ tasks and allows them to learn their common components.

While many potential applications exist for the hashing-trick, as a particular case study we focus on collaborative email spam filtering. In this scenario, hundreds of thousands of users collectively label emails as spam or not-spam, and each user expects a personalized classifier that reflects their particular preferences. Here, the set of tasks, $U$, is the set of email users (which can be very large for open systems such as Yahoo Mail or Gmail), and the feature space spans the union of vocabularies in multitudes of languages.

This paper makes four main contributions:

1. In section 2 we introduce specialized hash functions with unbiased inner products that are directly applicable to a large variety of kernel methods.
2. In section 3 we provide exponential tail bounds that help explain why hashed feature vectors have repeatedly led to, at times surprisingly, strong empirical results.
3. In the same section we show that the interference between independently hashed subspaces is negligible with high probability, which allows large-scale multi-task learning in a very compressed space.
4. In section 5 we introduce collaborative email-spam filtering as a novel application for hash representations and provide experimental results on large-scale real-world spam data sets.

2. Hash Functions

We introduce a variant on the hash kernel proposed by Shi et al. (2009). This scheme is modified through the introduction of a signed sum of hashed features, whereas the original hash kernels use an unsigned sum. This modification leads to an unbiased estimate, which we demonstrate and further utilize in the following section.

Definition 1. Denote by $h$ a hash function $h : \mathbb{N} \to \{1, \dots, m\}$. Moreover, denote by $\xi$ a hash function $\xi : \mathbb{N} \to \{\pm 1\}$. Then for vectors $x, x' \in \ell_2$ we define the hashed feature map $\phi$ and the corresponding inner product as

$$\phi_i^{(h,\xi)}(x) = \sum_{j : h(j) = i} \xi(j)\, x_j \qquad (2)$$

and

$$\langle x, x' \rangle_\phi := \langle \phi^{(h,\xi)}(x),\, \phi^{(h,\xi)}(x') \rangle. \qquad (3)$$

Although the hash functions in Definition 1 are defined over the natural numbers $\mathbb{N}$, in practice we often consider hash functions over arbitrary strings.
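To make Definition 1 concrete, the sketch below (ours, not part of the paper) implements the signed hashed feature map for sparse, string-keyed inputs; an MD5 digest serves as an arbitrary stand-in for the hash functions $h$ and $\xi$.

```python
import hashlib
import numpy as np

def _digest(token: str, salt: str) -> int:
    """Deterministic integer hash of a token; MD5 is an arbitrary stand-in."""
    return int.from_bytes(hashlib.md5((salt + token).encode()).digest()[:8], "big")

def hashed_features(x: dict, m: int) -> np.ndarray:
    """Signed hashed feature map of Eq. (2): phi_i(x) = sum_{j: h(j)=i} xi(j) * x_j.

    x is a sparse vector given as {feature_name: value}."""
    phi = np.zeros(m)
    for token, value in x.items():
        i = _digest(token, "h") % m                              # h : features -> {0, ..., m-1}
        sign = 1.0 if _digest(token, "xi") % 2 == 0 else -1.0    # xi : features -> {+1, -1}
        phi[i] += sign * value
    return phi

def hash_kernel(x: dict, xp: dict, m: int) -> float:
    """Hashed inner product of Eq. (3)."""
    return float(np.dot(hashed_features(x, m), hashed_features(xp, m)))

# Example: two small bag-of-words vectors hashed into m = 16 dimensions.
doc1 = {"free": 2.0, "viagra": 1.0, "meeting": 1.0}
doc2 = {"free": 1.0, "meeting": 3.0}
print(hash_kernel(doc1, doc2, m=16))  # close to the exact inner product 2*1 + 1*3 = 5, up to collisions
```

Because $h$ and $\xi$ are evaluated on the fly, no projection matrix is ever stored and the sparsity of the input is preserved, exactly the properties emphasized in the introduction.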
String-valued and integer-valued hash functions are equivalent, since each finite-length string can be represented by a unique natural number. Usually, we abbreviate the notation $\phi^{(h,\xi)}(\cdot)$ by just $\phi(\cdot)$. Two hash functions $\phi$ and $\phi'$ are different when $\phi = \phi^{(h,\xi)}$ and $\phi' = \phi^{(h',\xi')}$ such that either $h' \neq h$ or $\xi' \neq \xi$. The purpose of the binary hash $\xi$ is to remove the bias inherent in the hash kernel of Shi et al. (2009).

In a multi-task setting, we obtain instances in combination with tasks, $(x, u) \in X \times U$. We can naturally extend Definition 1 to hash pairs, and will write $\phi_u(x) = \phi((x, u))$.

3. Analysis

The following section is dedicated to the theoretical analysis of hash kernels and their applications. In this sense, the present paper continues where Shi et al. (2009) falls short: we prove exponential tail bounds. These bounds hold for general hash kernels, which we later apply to show how hashing enables us to do large-scale multitask learning efficiently. We start with a simple lemma about the bias and variance of the hash kernel. The proof of this lemma appears in Appendix A.

Lemma 2. The hash kernel is unbiased, that is $\mathbf{E}_\phi[\langle x, x' \rangle_\phi] = \langle x, x' \rangle$. Moreover, the variance is

$$\sigma^2_{x,x'} = \frac{1}{m} \sum_{i \neq j} \left( x_i^2\, {x'_j}^2 + x_i x'_i\, x_j x'_j \right),$$

and thus, for $\|x\|_2 = \|x'\|_2 = 1$, $\sigma^2_{x,x'} = O\!\left(\tfrac{1}{m}\right)$.

This suggests that typical values of the hash kernel should be concentrated within $O(\tfrac{1}{\sqrt{m}})$ of the target value. We use Chebyshev's inequality to show that half of all observations are within a range of $\sqrt{2}\,\sigma$. This, together with Talagrand's convex distance inequality, enables us to construct exponential tail bounds.

3.1. Concentration of Measure Bounds

In this subsection we show that under a hashed feature map the length of each vector is preserved with high probability. Talagrand's inequality (Ledoux, 2001) is a key tool for the proof of the following theorem (detailed in Appendix B).

Theorem 3. Let $\epsilon < 1$ be a fixed constant and $x$ be a given instance. Let $\eta = \frac{\|x\|_\infty}{\|x\|_2}$. Under the assumptions above, the hash kernel satisfies the following inequality:

$$\Pr\left[ \frac{\bigl|\, \|\phi(x)\|_2^2 - \|x\|_2^2 \,\bigr|}{\|x\|_2^2} \;\geq\; \sqrt{2}\,\sigma_{x,x} + \epsilon \right] \;\leq\; \exp\!\left( -\frac{\epsilon}{4\eta} \right).$$

Note that an analogous result would also hold for the original hash kernel of Shi et al. (2009); the only modification would be the associated bias terms. The above result can also be utilized to show a concentration bound on the inner product between two general vectors $x$ and $x'$.

Corollary 4. For two vectors $x$ and $x'$, let us define

$$\sigma := \max\bigl( \sigma_{x,x},\, \sigma_{x',x'},\, \sigma_{x-x',x-x'} \bigr), \qquad \eta := \min\!\left( \frac{\|x\|_\infty}{\|x\|_2},\, \frac{\|x'\|_\infty}{\|x'\|_2},\, \frac{\|x-x'\|_\infty}{\|x-x'\|_2} \right).$$

Also let $\Delta = \|x\|_2^2 + \|x'\|_2^2 + \|x - x'\|_2^2$. Under the assumptions above, we have that

$$\Pr\Bigl[ \bigl| \langle x, x' \rangle_\phi - \langle x, x' \rangle \bigr| > \bigl( \sqrt{2}\,\sigma + \epsilon \bigr)\,\Delta/2 \Bigr] \;<\; 3\, e^{-\frac{\epsilon}{4\eta}}.$$

The proof of this corollary can be found in Appendix C. We can also extend the bound in Theorem 3 to the maximal canonical distortion over large sets of distances between vectors as follows:

Corollary 5. Denote by $X = \{x_1, \dots, x_n\}$ a set of vectors which satisfy $\|x_i - x_j\|_\infty \leq \eta\, \|x_i - x_j\|_2$ for all pairs $i, j$. In this case, with probability $1 - \delta$ we have for all $i, j$:

$$\frac{\bigl|\, \|\phi(x_i) - \phi(x_j)\|_2^2 - \|x_i - x_j\|_2^2 \,\bigr|}{\|x_i - x_j\|_2^2} \;\leq\; \frac{2}{\sqrt{m}} + 4\,\eta\,\log\frac{n^2}{\delta}.$$

This means that the number of observations $n$ (or, correspondingly, the size of the un-hashed kernel matrix) enters the analysis only logarithmically.

Proof. We apply the bound of Theorem 3 to each distance individually. Note the bound $\sigma^2_{x,x} \leq \frac{2}{m}$ for all normalized vectors, so that $\sqrt{2}\,\sigma_{x,x} \leq \frac{2}{\sqrt{m}}$. Also, since we have $\frac{n(n-1)}{2}$ pairs of distances, the union bound yields a corresponding factor. Solving $\delta = \frac{n(n-1)}{2}\, e^{-\frac{\epsilon}{4\eta}}$ for $\epsilon$ and easy inequalities prove the claim.

3.2. Multiple Hashing

Note that the tightness of the union bound in Corollary 5 depends crucially on the magnitude of $\eta$. In other words, for large values of $\eta$, that is, whenever some terms in $x$ are very large, even a single collision can lead to significant distortions of the embedding. This issue can be amended by trading off sparsity with variance. A vector of unit length may be written as $(1, 0, 0, 0, \dots)$, or as $\bigl(\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}, 0, \dots\bigr)$, or more generally as a vector with $c$ nonzero terms of magnitude $c^{-\frac{1}{2}}$. This is relevant, for instance, whenever the magnitudes of $x$ follow a known pattern, e.g. when representing documents as bags of words, since we may simply hash frequent words several times. The following lemma gives an intuition as to how the confidence bounds scale in terms of the replications:

Lemma 6. If we let $\tilde{x} = \frac{1}{\sqrt{c}}(x, \dots, x)$, then:

1. It is norm preserving: $\|\tilde{x}\|_2 = \|x\|_2$.
2. It reduces component magnitude by $\frac{1}{\sqrt{c}}$: $\|\tilde{x}\|_\infty = \frac{\|x\|_\infty}{\sqrt{c}}$.
3. The variance increases to $\sigma^2_{\tilde{x},\tilde{x}} = \frac{1}{c}\,\sigma^2_{x,x} + \frac{c-1}{c}\,\frac{2\,\|x\|_2^4}{m}$.

Applying Lemma 6 to Theorem 3, a large magnitude can be decreased at the cost of an increased variance.
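The bias and variance statements of Lemma 2 are easy to verify numerically. The short simulation below is ours (not from the paper); independent draws of $(h, \xi)$ are simulated with a seeded random number generator rather than actual hash functions.

```python
import numpy as np

def hash_pair(d, m, rng):
    """One draw of (h, xi); a seeded RNG stands in for real hash functions."""
    return rng.integers(0, m, size=d), rng.choice([-1.0, 1.0], size=d)

def hashed(x, h, xi, m):
    """Signed hashed feature map of Eq. (2)."""
    phi = np.zeros(m)
    np.add.at(phi, h, xi * x)
    return phi

rng = np.random.default_rng(0)
d, m, trials = 1000, 64, 5000
x = rng.normal(size=d);  x /= np.linalg.norm(x)
xp = rng.normal(size=d); xp /= np.linalg.norm(xp)

estimates = np.empty(trials)
for t in range(trials):
    h, xi = hash_pair(d, m, rng)                           # the same (h, xi) is applied to both vectors
    estimates[t] = hashed(x, h, xi, m) @ hashed(xp, h, xi, m)

print("true inner product  :", float(x @ xp))
print("mean of hash kernel :", estimates.mean())           # matches the true value (Lemma 2: unbiased)
print("empirical variance  :", estimates.var(), " O(1/m) reference:", 2.0 / m)
```

The empirical mean matches $\langle x, x' \rangle$ and the empirical variance is on the order of $1/m$, in line with Lemma 2.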
3.3. Approximate Orthogonality

For multitask learning, we must learn a different parameter vector for each related task. When mapped into the same hash-feature space, we want to ensure that there is little interaction between the different parameter vectors. Let $U$ be a set of different tasks, with $u \in U$ being a specific one. Let $w$ be a combination of the parameter vectors of the tasks in $U \setminus \{u\}$. We show that for any observation $x$ for task $u$, the interaction of $w$ with $x$ in the hashed feature space is minimal. For each $x$, let the image of $x$ under the hash feature map for task $u$ be denoted as $\phi_u(x) = \phi^{(h,\xi)}((x, u))$.

Theorem 7. Let $w \in \mathbb{R}^m$ be a parameter vector for the tasks in $U \setminus \{u\}$. In this case the value of the inner product $\langle w, \phi_u(x) \rangle$ is bounded by

$$\Pr\bigl[\, |\langle w, \phi_u(x) \rangle| > \epsilon \,\bigr] \;\leq\; 2 \exp\!\left( -\frac{\epsilon^2/2}{m^{-1} \|w\|_2^2\, \|x\|_2^2 + \|w\|_\infty \|x\|_\infty\, \epsilon/3} \right).$$

Proof. We use Bernstein's inequality (Bernstein, 1946), which states that for independent random variables $X_j$ with $\mathbf{E}[X_j] = 0$, if $C > 0$ is such that $|X_j| \leq C$, then

$$\Pr\left[ \sum_{j=1}^{n} X_j > t \right] \;\leq\; \exp\!\left( -\frac{t^2/2}{\sum_{j=1}^{n} \mathbf{E}[X_j^2] + C t / 3} \right). \qquad (4)$$

We have to compute the concentration property of $\langle w, \phi_u(x) \rangle = \sum_j x_j\, \xi(j)\, w_{h(j)}$. Let $X_j = x_j\, \xi(j)\, w_{h(j)}$. By the definition of $h$ and $\xi$, the $X_j$ are independent. Also, for each $j$, since $w$ depends only on the hash functions for $U \setminus \{u\}$, $w_{h(j)}$ is independent of $\xi(j)$. Thus, $\mathbf{E}[X_j] = \mathbf{E}_{(\xi, h)}\bigl[ x_j\, \xi(j)\, w_{h(j)} \bigr] = 0$. For each $j$, we also have $|X_j| \leq \|x\|_\infty \|w\|_\infty =: C$. Finally, $\sum_j \mathbf{E}[X_j^2]$ is given by

$$\sum_j \mathbf{E}\bigl[ (x_j\, \xi(j)\, w_{h(j)})^2 \bigr] = \sum_j x_j^2\, \frac{\|w\|_2^2}{m} = \frac{\|x\|_2^2\, \|w\|_2^2}{m}.$$

The claim follows by plugging both terms and $C$ into the Bernstein inequality (4).

Theorem 7 bounds the influence of unrelated tasks on any particular instance. In section 5 we demonstrate the real-world applicability with empirical results on a large-scale multi-task learning problem.
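The approximate orthogonality of Theorem 7 can also be observed directly in a small simulation (ours, not from the paper; seeded random number generators stand in for the task-specific hash functions, and all sizes are illustrative):

```python
import numpy as np

m, d, num_other_tasks = 2 ** 22, 5000, 100
rng = np.random.default_rng(1)

def task_hash(task_seed, d, m):
    """Task-specific hash pair (h, xi); a seeded RNG stands in for a real hash function."""
    r = np.random.default_rng(task_seed)
    return r.integers(0, m, size=d), r.choice([-1.0, 1.0], size=d)

def phi(x, h, xi, m):
    out = np.zeros(m)
    np.add.at(out, h, xi * x)
    return out

# w aggregates the hashed weight vectors of 100 unrelated tasks (the role of w in Theorem 7).
w = np.zeros(m)
for task in range(1, num_other_tasks + 1):
    h, xi = task_hash(task, d, m)
    w += phi(rng.normal(size=d), h, xi, m)

# A sparse instance x for task u (seed 0), hashed with u's own hash pair.
x = np.zeros(d)
x[rng.choice(d, size=50, replace=False)] = 1.0
h_u, xi_u = task_hash(0, d, m)
phi_u_x = phi(x, h_u, xi_u, m)

print("interference <w, phi_u(x)>       :", float(w @ phi_u_x))        # fluctuates around zero
print("signal       <phi_u(x), phi_u(x)>:", float(phi_u_x @ phi_u_x))  # ~ ||x||_2^2 = 50
```

The interference term has standard deviation roughly $\|w\|_2 \|x\|_2 / \sqrt{m}$ and is therefore small compared with the signal once $m$ is large; this is precisely the property the applications in the next section exploit.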
4. Applications

The advantage of feature hashing is that it allows for significant storage compression of parameter vectors: storing $w$ in the raw feature space naively requires $O(d)$ numbers when $w \in \mathbb{R}^d$. By hashing, we are able to reduce this to $O(m)$ numbers while avoiding costly matrix-vector multiplications common in Locality Sensitive Hashing. In addition, the sparsity of the resulting vector is preserved.

The benefits of the hashing-trick lead to applications in almost all areas of machine learning and beyond. In particular, feature hashing is extremely useful whenever large numbers of parameters with redundancies need to be stored within bounded memory capacity.

Personalization. (Daume, 2007) introduced a very simple but strikingly effective method for multitask learning. Each task updates its own specific (local) weights and a set of common (global) weights that are shared amongst all tasks. Theorem 7 allows us to hash all of these multiple classifiers into one feature space with little interaction. To illustrate, we explore this setting in the context of spam-classifier personalization.

Suppose we have thousands of users $U$ and want to perform related but not identical classification tasks for each of them. Users provide labeled data by marking emails as spam or not-spam. Ideally, for each user $u \in U$, we want to learn a predictor $w_u$ based on the data of that user alone. However, webmail users are notoriously lazy in labeling emails, and even those who do not contribute to the training data expect a working spam filter. Therefore, we also need to learn an additional global predictor $w_0$ to allow data sharing amongst all users.

Storing all predictors $w_i$ requires $O(d \times (|U| + 1))$ memory. In a task like collaborative spam-filtering, $|U|$, the number of users, can be in the hundreds of thousands, and the size of the vocabulary is usually on the order of millions. The naive way of dealing with this is to eliminate all infrequent tokens. However, spammers target this memory vulnerability by maliciously misspelling words, thereby creating highly infrequent but spam-typical tokens that "fall under the radar" of conventional classifiers. Instead, if all words are hashed into a finite-sized feature vector, infrequent but class-indicative tokens get a chance to contribute to the classification outcome.

Further, large scale spam filters (e.g. Yahoo Mail or GMail) typically have severe memory and time constraints, since they have to handle billions of emails per day. To guarantee a finite-size memory footprint, we hash all weight vectors $w_0, \dots, w_{|U|}$ into a joint, significantly smaller, feature space $\mathbb{R}^m$ with different hash functions $\phi_0, \dots, \phi_{|U|}$. The resulting hashed weight vector $w_h \in \mathbb{R}^m$ can then be written as

$$w_h = \phi_0(w_0) + \sum_{u \in U} \phi_u(w_u). \qquad (5)$$

Note that in practice the weight vector $w_h$ can be learned directly in the hashed space; the un-hashed weight vectors never need to be computed. Given a new document/email $x$ of user $u \in U$, the prediction task now consists of calculating $\langle \phi_0(x) + \phi_u(x),\, w_h \rangle$. Due to hashing we have two sources of error: the distortion $\epsilon_d$ of the hashed inner products and the interference $\epsilon_i$ with other hashed weight vectors. More precisely:

$$\langle \phi_0(x) + \phi_u(x),\, w_h \rangle = \langle x,\, w_0 + w_u \rangle + \epsilon_d + \epsilon_i. \qquad (6)$$

The interference error consists of all collisions of $\phi_0(x)$ or $\phi_u(x)$ with the hash functions of other users,

$$\epsilon_i = \sum_{v \in U,\, v \neq 0} \langle \phi_0(x),\, \phi_v(w_v) \rangle + \sum_{v \in U,\, v \neq u} \langle \phi_u(x),\, \phi_v(w_v) \rangle. \qquad (7)$$

To show that $\epsilon_i$ is small with high probability, we can apply Theorem 7 twice, once for each term of (7). We consider each user's classification to be a separate task, and since $\sum_{v \in U, v \neq 0} w_v$ is independent of the hash function $\phi_0$, the conditions of Theorem 7 apply with $w = \sum_{v \neq 0} w_v$, and we can employ the theorem to bound the first term, $\sum_{v \in U, v \neq 0} \langle \phi_0(x), \phi_v(w_v) \rangle$. The second application is identical except that all subscripts "0" are substituted with "u". For lack of space we do not derive the exact bounds.

The distortion error occurs because each hash function that is utilized by user $u$ can self-collide:

$$\epsilon_d = \sum_{v \in \{u, 0\}} \bigl| \langle \phi_v(x),\, \phi_v(w_v) \rangle - \langle x,\, w_v \rangle \bigr|. \qquad (8)$$

To show that $\epsilon_d$ is small with high probability, we apply Corollary 4 once for each possible value of $v$.

In section 5 we show experimental results for this setting. The empirical results are stronger than the theoretical bounds derived in this subsection: our technique outperforms a single global classifier on hundreds of thousands of users. In the same section we provide an intuitive explanation for these strong results.

Massively Multiclass Estimation. We can also regard massively multi-class classification as a multitask problem and apply feature hashing in a way similar to the personalization setting. Instead of using a different hash function for each user, we use a different hash function for each class. Shi et al. (2009) apply feature hashing to problems with a high number of categories. They show empirically that joint hashing of the feature vector $\phi(x, y)$ can be efficiently achieved for problems with millions of features and thousands of classes.

Collaborative Filtering. Assume that we are given a very large sparse matrix $M$ where the entry $M_{ij}$ indicates what action user $i$ took on instance $j$. A common example of actions and instances is user ratings of movies (Bennett & Lanning, 2007). A successful method for finding common factors amongst users and instances for predicting unobserved actions is to factorize $M$ into $M = U^\top W$. If we have millions of users performing millions of actions, storing $U$ and $W$ in memory quickly becomes infeasible. Instead, we may choose to compress the matrices $U$ and $W$ using hashing. For $U, W \in \mathbb{R}^{n \times d}$, denote by $u, w \in \mathbb{R}^m$ vectors with

$$u_i = \sum_{j,k\,:\, h(j,k) = i} \xi(j,k)\, U_{jk} \qquad \text{and} \qquad w_i = \sum_{j,k\,:\, h'(j,k) = i} \xi'(j,k)\, W_{jk},$$

where $(h, \xi)$ and $(h', \xi')$ are independently chosen hash functions. This allows us to approximate matrix elements $M_{ij} = [U^\top W]_{ij}$ via

$$\hat{M}_{ij} := \sum_k \xi(k, i)\, \xi'(k, j)\, u_{h(k,i)}\, w_{h'(k,j)}.$$

This gives a compressed vector representation of $M$ that can be efficiently stored.
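A minimal sketch of this compression scheme (ours, not part of the paper; MD5 digests stand in for the hash pairs $(h,\xi)$ and $(h',\xi')$, and all sizes are illustrative):

```python
import hashlib
import numpy as np

def hash_pair(key: str, m: int):
    """Index hash in {0..m-1} and sign hash in {+1,-1} from an MD5 digest (an arbitrary stand-in)."""
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % m, (1.0 if digest[8] % 2 == 0 else -1.0)

def compress(A, m, salt):
    """Hash all entries A[k, i] into one m-dimensional signed-sum vector."""
    vec = np.zeros(m)
    for (k, i), value in np.ndenumerate(A):
        idx, sign = hash_pair(f"{salt}:{k}:{i}", m)
        vec[idx] += sign * value
    return vec

def approx_entry(u_vec, w_vec, i, j, n_factors, m):
    """Approximate M_ij = sum_k U[k, i] * W[k, j] from the two compressed vectors."""
    total = 0.0
    for k in range(n_factors):
        iu, su = hash_pair(f"U:{k}:{i}", m)
        iw, sw = hash_pair(f"W:{k}:{j}", m)
        total += su * sw * u_vec[iu] * w_vec[iw]
    return total

rng = np.random.default_rng(0)
n_factors, n_users, n_items, m = 5, 40, 60, 4096
U, W = rng.normal(size=(n_factors, n_users)), rng.normal(size=(n_factors, n_items))
u_vec, w_vec = compress(U, m, "U"), compress(W, m, "W")

print("exact  M[3,7]:", float(U[:, 3] @ W[:, 7]))
print("hashed M[3,7]:", approx_entry(u_vec, w_vec, 3, 7, n_factors, m))
```

Both factor matrices are stored as single $m$-dimensional vectors, and individual entries $\hat{M}_{ij}$ are reconstructed on demand; the approximation improves as $m$ grows relative to the number of hashed entries.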
5. Results

We evaluated our algorithm in the setting of personalization. As data set, we used a proprietary email spam-classification task of $n = 3.2$ million emails, properly anonymized, collected from $|U| = 433{,}167$ users. Each email is labeled as spam or not-spam by one user in $U$. After tokenization, the data set consists of 40 million unique words. For all experiments in this paper, we used the Vowpal Wabbit implementation (http://hunch.net/vw/) of stochastic gradient descent on a square loss.

In the mail-spam literature the misclassification of not-spam is considered to be much more harmful than the misclassification of spam. We therefore follow the convention of setting the classification threshold during test time such that exactly 1% of the not-spam test data is classified as spam.

Our implementation of the personalized hash functions is illustrated in Figure 1. To obtain a personalized hash function $\phi_u$ for user $u$, we concatenate a unique user-id to each word in the email and then hash the newly generated tokens with the same global hash function.

[Figure 1: The hashed personalization summarized in a schematic layout. Each token is duplicated and one copy is individualized, e.g. by concatenating each word with a unique user identifier (such as "Votre" and "USER123_Votre"). Then the global hash function maps all tokens into a low dimensional feature space where the document is classified. The pipeline is: text document (email), bag of words, personalized bag of words, hashed sparse vector $\phi_0(x) + \phi_u(x)$.]

The data set was collected over a span of 14 days. We used the first 10 days for training and the remaining 4 days for testing. As baseline, we chose the purely global classifier trained over all users and hashed into a $2^{26}$-dimensional space. As $2^{26}$ far exceeds the total number of unique words, we can regard the baseline as representative of classification without hashing. All results are reported as the amount of spam that passed the filter undetected, relative to this baseline (e.g. a value of 0.80 indicates a 20% reduction in spam for the user). As part of our data sharing agreement, we agreed not to include absolute classification error rates.

[Figure 2: The decrease of uncaught spam, relative to the baseline classifier and averaged over all users, as a function of the number of hash keys $m$. The classification threshold was chosen to keep the not-spam misclassification fixed at 1%. The hashed global classifier (global-hashed) converges relatively soon, showing that the distortion error $\epsilon_d$ vanishes. The personalized classifier results in an average improvement of up to 30%.]

Figure 2 displays the average amount of spam in users' inboxes as a function of the number of hash keys $m$, relative to the baseline above. In addition to the baseline, we evaluate two different settings. The global-hashed curve represents the relative spam catch-rate of the global classifier after hashing, $\langle \phi_0(w_0), \phi_0(x) \rangle$. At $m = 2^{26}$ this is identical to the baseline. Early convergence at $m = 2^{22}$ suggests that at this point hash collisions have no impact on the classification error and the baseline is indeed equivalent to that obtainable without hashing.

In the personalized setting each user $u \in U$ gets her own classifier $\phi_u(w_u)$ as well as the global classifier $\phi_0(w_0)$. Without hashing the feature space explodes, as the cross product of $u = 400$K users and $n = 40$M tokens results in 16 trillion possible unique personalized features. Figure 2 shows that, despite aggressive hashing, personalization results in a 30% spam reduction once the hash table is indexed by 22 bits.
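The token-duplication scheme of Figure 1 is straightforward to implement. The sketch below is ours (not the paper's code); the user-id format, the example tokens, and the MD5-based hash are illustrative stand-ins.

```python
import hashlib
import numpy as np

def hash_token(token: str, m: int):
    """Global hash function: bucket in {0..m-1} and a sign in {+1,-1} (MD5 as a stand-in)."""
    digest = hashlib.md5(token.encode()).digest()
    return int.from_bytes(digest[:8], "big") % m, (1.0 if digest[8] % 2 == 0 else -1.0)

def personalized_features(email_tokens, user_id: str, m: int) -> np.ndarray:
    """Hashed representation phi_0(x) + phi_u(x): every token is emitted twice,
    once as-is and once prefixed with the user id (cf. Figure 1)."""
    x = np.zeros(m)
    for token in email_tokens:
        for variant in (token, f"{user_id}_{token}"):
            i, sign = hash_token(variant, m)
            x[i] += sign
    return x

m = 2 ** 18                              # number of hash keys, far smaller than the vocabulary
w_h = np.zeros(m)                        # single joint weight vector, learned e.g. by online SGD
features = personalized_features(["NEU", "Votre", "Apotheke"], "USER123", m)
score = float(features @ w_h)            # prediction <phi_0(x) + phi_u(x), w_h>
print(int(np.count_nonzero(features)), score)
```

Training proceeds by online updates of the single joint weight vector $w_h$ against such hashed feature vectors, so the memory footprint is fixed at $m$ numbers regardless of the number of users.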
[Figure 3: Results for users clustered by the number of training emails. For example, the bucket [8, 15] consists of all users with eight to fifteen training emails. Although users in buckets with large amounts of training data benefit more from the personalized classifier (up to a 65% reduction in spam), even users that did not contribute to the training corpus at all obtain almost a 20% spam reduction.]

User clustering. One hypothesis for the strong results in Figure 2 might originate from the non-uniform distribution of user votes: it is possible that by using personalization and feature hashing we benefit a small number of users who have labeled many emails, while degrading the performance of most users (who have labeled few or no emails) in the process. In fact, in real life, a large fraction of email users do not contribute at all to the training corpus and only interact with the classifier during test time. The personalized version of such a test email, $\phi_u(x_u)$, is then hashed into buckets of other tokens and only adds interference noise $\epsilon_i$ to the classification.

To show that we improve the performance of most users, it is therefore important that we not only report averaged results over all emails, but explicitly examine the effects of the personalized classifier for users depending on their contribution to the training set. To this end, we place users into exponentially growing buckets based on their number of training emails and compute the relative reduction of uncaught spam for each bucket individually. Figure 3 shows the results on a per-bucket basis. We do not compare against a purely local approach, with no global component, since for a large fraction of users (those without training data) this approach cannot outperform random guessing.

It might appear surprising that users in the buckets with no or very few training emails (the line of bucket [0] is identical to bucket [1]) also benefit from personalization. After all, their personalized classifier was never trained and can only add noise at test time. The classifier improvement of this bucket can be explained by the subjective definition of spam and not-spam. In the personalized setting, the individual component of user labeling is absorbed by the local classifiers, and the global classifier represents the common definition of spam and not-spam. In other words, the global part of the personalized classifier obtains better generalization properties, benefiting all users.

6. Related Work

A number of researchers have tackled related, albeit different, problems. (Rahimi & Recht, 2008) use Bochner's theorem and sampling to obtain approximate inner products for Radial Basis Function kernels. (Rahimi & Recht, 2009) extend this to sparse approximation of weighted combinations of basis functions. This is computationally efficient for many function spaces. Note that the representation is dense.

(Li et al., 2007) take a complementary approach: for sparse feature vectors $\phi(x)$, they devise a scheme for reducing the number of nonzero terms even further. While this is in principle desirable, it does not resolve the problem of $\phi(x)$ being high dimensional. More succinctly, it is then necessary to express the function in the dual representation rather than as a linear function $f(x) = \langle \phi(x), w \rangle$, since $w$ is unlikely to be compactly represented.

(Achlioptas, 2003) provides computationally efficient randomization schemes for dimensionality reduction. Instead of performing a dense $d \cdot m$ dimensional matrix-vector multiplication to reduce the dimensionality of a vector from $d$ to $m$, as is required by the algorithm of (Gionis et al., 1999), he requires only a third of that computation by designing a matrix consisting only of entries in $\{-1, 0, 1\}$.

(Shi et al., 2009) propose a hash kernel to deal with the issue of computational efficiency by a very simple algorithm: high-dimensional vectors are compressed by adding up all coordinates which have the same hash value; one only needs to perform as many calculations as there are nonzero terms in the vector. This is a significant computational saving over locality sensitive hashing (Achlioptas, 2003; Gionis et al., 1999).

Several additional works provide motivation for the investigation of hashing representations. For example, (Ganchev & Dredze, 2008) provide empirical evidence that the hashing-trick can be used to effectively reduce the memory footprint on many sparse learning problems by an order of magnitude via removal of the dictionary. Our experimental results validate this, and show that much more radical compression levels are achievable.
In addition, (Langford et al., 2007) released the Vowpal Wabbit fast online learning software, which uses a hash representation similar to the one discussed here.

7. Conclusion

In this paper we analyze the hashing-trick for dimensionality reduction theoretically and empirically. As part of our theoretical analysis we introduce unbiased hash functions and provide exponential tail bounds for hash kernels. These give further insight into hash spaces and explain previously made empirical observations. We also derive that random subspaces of the hashed space are likely to not interact, which makes multitask learning with many tasks possible. Our empirical results validate this on a real-world application within the context of spam filtering. Here we demonstrate that even with a very large number of tasks and features, all mapped into a joint lower dimensional hash space, one can obtain impressive classification results with finite memory guarantees.
References

Achlioptas, D. (2003). Database-friendly random projections: Johnson-Lindenstrauss with binary coins. Journal of Computer and System Sciences, 66, 671-687.

Bennett, J., & Lanning, S. (2007). The Netflix Prize. Proceedings of the KDD Cup and Workshop 2007.

Bernstein, S. (1946). The theory of probabilities. Moscow: Gastehizdat Publishing House.

Daume, H. (2007). Frustratingly easy domain adaptation. Annual Meeting of the Association for Computational Linguistics (p. 256).

Ganchev, K., & Dredze, M. (2008). Small statistical models by random feature mixing. Workshop on Mobile Language Processing, Annual Meeting of the Association for Computational Linguistics.

Gionis, A., Indyk, P., & Motwani, R. (1999). Similarity search in high dimensions via hashing. Proceedings of the 25th VLDB Conference (pp. 518-529). Edinburgh, Scotland: Morgan Kaufmann.

Langford, J., Li, L., & Strehl, A. (2007). Vowpal Wabbit online learning project (Technical Report). http://hunch.net/?p=309.

Ledoux, M. (2001). The concentration of measure phenomenon. Providence, RI: AMS.

Li, P., Church, K., & Hastie, T. (2007). Conditional random sampling: A sketch-based sampling technique for sparse data. In B. Schölkopf, J. Platt and T. Hoffman (Eds.), Advances in Neural Information Processing Systems 19, 873-880. Cambridge, MA: MIT Press.

Rahimi, A., & Recht, B. (2008). Random features for large-scale kernel machines. In J. Platt, D. Koller, Y. Singer and S. Roweis (Eds.), Advances in Neural Information Processing Systems 20. Cambridge, MA: MIT Press.

Rahimi, A., & Recht, B. (2009). Randomized kitchen sinks. In L. Bottou, Y. Bengio, D. Schuurmans and D. Koller (Eds.), Advances in Neural Information Processing Systems 21. Cambridge, MA: MIT Press.

Shi, Q., Petterson, J., Dror, G., Langford, J., Smola, A., Strehl, A., & Vishwanathan, V. (2009). Hash kernels. Proc. Intl. Workshop on Artificial Intelligence and Statistics 12.

A. Mean and Variance

Proof [Lemma 2]. To compute the expectation we expand

$$\langle x, x' \rangle_\phi = \sum_{i,j} \xi(i)\,\xi(j)\, x_i\, x'_j\, \delta_{h(i),h(j)}. \qquad (9)$$

Since $\mathbf{E}_\phi[\langle x, x' \rangle_\phi] = \mathbf{E}_h\bigl[\mathbf{E}_\xi[\langle x, x' \rangle_\phi]\bigr]$, taking expectations over $\xi$ we see that only the terms with $i = j$ have nonzero value, which shows the first claim. For the variance we compute $\mathbf{E}_\phi\bigl[\langle x, x' \rangle_\phi^2\bigr]$. Expanding this, we get:

$$\langle x, x' \rangle_\phi^2 = \sum_{i,j,k,l} \xi(i)\,\xi(j)\,\xi(k)\,\xi(l)\, x_i\, x'_j\, x_k\, x'_l\, \delta_{h(i),h(j)}\, \delta_{h(k),h(l)}.$$

This expression can be simplified by noting that

$$\mathbf{E}_\xi\bigl[\xi(i)\,\xi(j)\,\xi(k)\,\xi(l)\bigr] = \delta_{ij}\,\delta_{kl} + \bigl[1 - \delta_{ijkl}\bigr]\bigl( \delta_{ik}\,\delta_{jl} + \delta_{il}\,\delta_{jk} \bigr).$$

Passing the expectation over $\xi$ through the sum, this allows us to break down the expansion of the variance into two terms:

$$\mathbf{E}_\phi\bigl[\langle x, x' \rangle_\phi^2\bigr] = \sum_{i,k} x_i\, x'_i\, x_k\, x'_k + \sum_{i \neq j} x_i^2\, {x'_j}^2\, \mathbf{E}_h\bigl[\delta_{h(i),h(j)}\bigr] + \sum_{i \neq j} x_i\, x'_i\, x_j\, x'_j\, \mathbf{E}_h\bigl[\delta_{h(i),h(j)}\bigr]$$
$$= \langle x, x' \rangle^2 + \frac{1}{m} \sum_{i \neq j} \bigl( x_i^2\, {x'_j}^2 + x_i\, x'_i\, x_j\, x'_j \bigr),$$

by noting that $\mathbf{E}_h[\delta_{h(i),h(j)}] = \frac{1}{m}$ for $i \neq j$. Using the fact that $\sigma^2_{x,x'} = \mathbf{E}_\phi[\langle x, x' \rangle_\phi^2] - \mathbf{E}_\phi[\langle x, x' \rangle_\phi]^2$ proves the claim.

B. Concentration of Measure

Our proof uses Talagrand's convex distance inequality. We first define a weighted Hamming distance between two hash functions $\phi$ and $\phi'$ as follows:

$$d(\phi, \phi') = \sup_{\|\alpha\|_2 \leq 1}\; \sum_i \alpha_i\, I\bigl( h(i) \neq h'(i) \text{ or } \xi(i) \neq \xi'(i) \bigr) = \sqrt{\bigl| \{ i : h(i) \neq h'(i) \text{ or } \xi(i) \neq \xi'(i) \} \bigr|}.$$

Next denote by $d(\phi, A)$ the distance between a hash function $\phi$ and a set $A$ of hash functions, that is, $d(\phi, A) = \inf_{\phi' \in A} d(\phi, \phi')$. In this case Talagrand's convex distance inequality (Ledoux, 2001) holds: if $\Pr(A)$ denotes the total probability mass of the set $A$, then

$$\Pr\{ d(\phi, A) \geq s \} \leq [\Pr(A)]^{-1}\, e^{-s^2/4}. \qquad (10)$$

Proof [Theorem 3]. Without loss of generality, assume that $\|x\|_2 = 1$; we can then easily generalize to the case of general $\|x\|_2$.
From Lemma 2 it follows that the variance of $\|\phi(x)\|_2^2$ is given by $\sigma^2_{x,x} = \frac{2}{m}\bigl[1 - \|x\|_4^4\bigr]$ and that $\mathbf{E}_\phi\bigl[\|\phi(x)\|_2^2\bigr] = 1$. Chebyshev's inequality states that $\Pr\bigl( |X - \mathbf{E}[X]| \geq \sqrt{2}\,\sigma \bigr) \leq \frac{1}{2}$. We can therefore define

$$A := \Bigl\{ \phi' : \bigl|\, \|\phi'(x)\|_2^2 - 1 \,\bigr| \leq \sqrt{2}\,\sigma_{x,x} \Bigr\}$$

and obtain $\Pr(A) \geq \frac{1}{2}$. From Talagrand's inequality (10) we know that $\Pr(\{\phi : d(\phi, A) \geq s\}) \leq 2 e^{-s^2/4}$.

Now assume that we have a pair of hash functions $\phi$ and $\phi'$ with $\phi' \in A$. Let us define the difference of their hashed inner products as $\Delta := \langle x, x \rangle_\phi - \langle x, x \rangle_{\phi'}$. By the triangle inequality, and because $\phi' \in A$, we can state that

$$\bigl|\, \|\phi(x)\|_2^2 - 1 \,\bigr| \leq |\Delta| + \sqrt{2}\,\sigma_{x,x}. \qquad (11)$$

Let us now denote the coordinate-wise difference between the hashed features as $v_i := \phi_i(x) - \phi'_i(x)$. With this definition, we can express $\Delta$ in terms of $v$:

$$\Delta = \sum_i \phi_i(x)^2 - \phi'_i(x)^2 = 2\,\langle \phi'(x), v \rangle + \|v\|_2^2.$$

By applying the Cauchy-Schwarz inequality to the inner product $\langle \phi'(x), v \rangle$, we obtain $|\Delta| \leq 2\,\|\phi'(x)\|_2\, \|v\|_2 + \|v\|_2^2$. Plugging this into (11) leads us to

$$\bigl|\, \|\phi(x)\|_2^2 - 1 \,\bigr| \leq 2\,\|\phi'(x)\|_2\, \|v\|_2 + \|v\|_2^2 + \sqrt{2}\,\sigma_{x,x}. \qquad (12)$$

Next, we bound $\|v\|_2$ in terms of $d(\phi, \phi')$. To do this, expand $v_i = \sum_j x_j\bigl( \xi'(j)\,\delta_{h'(j),i} - \xi(j)\,\delta_{h(j),i} \bigr)$. As $\xi(j), \xi'(j) \in \{+1, -1\}$, we know that $|\xi(j) - \xi'(j)| \leq 2$. Further, $x_j \leq \|x\|_\infty$, and we can write

$$|v_i| \leq 2\,\|x\|_\infty \sum_{j\,:\, h(j) \neq h'(j)\ \text{or}\ \xi(j) \neq \xi'(j)} \bigl( \delta_{h(j),i} + \delta_{h'(j),i} \bigr).$$

We can now make two observations. First, note that $\sum_i \sum_j \bigl( \delta_{h(j),i} + \delta_{h'(j),i} \bigr) \leq 2t$, where $t = \bigl| \{ j : h(j) \neq h'(j) \text{ or } \xi(j) \neq \xi'(j) \} \bigr|$. Second, from the definition of the distance function, we get that $d(\phi, \phi') \geq \sqrt{t}$. Putting these together,

$$\sum_i |v_i| \leq 4\,\|x\|_\infty\, t \leq 4\,\|x\|_\infty\, d^2(\phi, \phi') \qquad \text{and} \qquad \|v\|_2^2 = \sum_i |v_i|^2 \leq 16\,\|x\|_\infty^2\, d^4(\phi, \phi').$$

(The last inequality holds because, in the worst case, all mass is concentrated in a single entry of $v$.)

As a next step we express $\|\phi'(x)\|_2$ in terms of $\sigma_{x,x}$. Because $\phi' \in A$, we obtain that $\|\phi'(x)\|_2 = \langle x, x \rangle_{\phi'}^{1/2} \leq \bigl(1 + \sqrt{2}\,\sigma_{x,x}\bigr)^{1/2} \leq 1 + \sigma_{x,x}/\sqrt{2}$. To simplify our notation, let us define $\gamma = 1 + \sigma_{x,x}/\sqrt{2}$. Plugging our upper bounds for $\|v\|_2$ and $\|\phi'(x)\|_2$ into (12) leads to

$$\bigl|\, \|\phi(x)\|_2^2 - 1 \,\bigr| \leq 8\,\|x\|_\infty\, d^2(\phi, \phi')\bigl( \gamma + 2\,\|x\|_\infty\, d^2(\phi, \phi') \bigr) + \sqrt{2}\,\sigma_{x,x}.$$

As we have not specified our particular choice of $\phi'$, we can now choose it to be the closest element to $\phi$ within $A$, i.e. such that $d(\phi, \phi') = d(\phi, A)$. By Talagrand's inequality, we know that with probability at least $1 - 2 e^{-s^2/4}$ we obtain $d(\phi, A) \leq s$, and therefore with high probability

$$\bigl|\, \|\phi(x)\|_2^2 - 1 \,\bigr| \leq 8\,\gamma\,\|x\|_\infty\, s^2 + 16\,\|x\|_\infty^2\, s^4 + \sqrt{2}\,\sigma_{x,x}.$$

A change of variables, $s^2 = \frac{\sqrt{\gamma^2 + \epsilon} - \gamma}{4\,\|x\|_\infty}$, gives us that $\bigl|\, \|\phi(x)\|_2^2 - 1 \,\bigr| \leq \sqrt{2}\,\sigma_{x,x} + \epsilon$ with probability at least $1 - 2 e^{-s^2/4}$. Noting that $s^2 = \frac{\epsilon}{4\,\|x\|_\infty\bigl(\gamma + \sqrt{\gamma^2 + \epsilon}\bigr)}$ and using easy inequalities, we obtain our final result:

$$\bigl|\, \|\phi(x)\|_2^2 - 1 \,\bigr| \leq \sqrt{2}\,\sigma_{x,x} + \epsilon \qquad \text{with probability at least } 1 - \exp\!\left(-\frac{\epsilon}{4\,\|x\|_\infty}\right).$$

Finally, for a general $x$, we can apply the above result to $y = \frac{x}{\|x\|_2}$. Replacing $y$, we get the following version for general $x$:

$$\Pr\left[ \frac{\bigl|\, \|\phi(x)\|_2^2 - \|x\|_2^2 \,\bigr|}{\|x\|_2^2} \geq \sqrt{2}\,\sigma_{x,x} + \epsilon \right] \leq \exp\!\left( -\frac{\epsilon}{4\,\|x\|_\infty / \|x\|_2} \right).$$

C. Inner Product

Proof [Corollary 4]. We have that $2\,\langle x, x' \rangle = \|x\|_2^2 + \|x'\|_2^2 - \|x - x'\|_2^2$, and the analogous identity holds for the hashed inner product. Thus, by the triangle inequality,

$$\bigl| 2\,\langle \phi_u(x), \phi_u(x') \rangle - 2\,\langle x, x' \rangle \bigr| \leq \bigl|\, \|\phi_u(x)\|_2^2 - \|x\|_2^2 \,\bigr| + \bigl|\, \|\phi_u(x')\|_2^2 - \|x'\|_2^2 \,\bigr| + \bigl|\, \|\phi_u(x - x')\|_2^2 - \|x - x'\|_2^2 \,\bigr|. \qquad (13)$$

By the union bound, with probability $1 - 3\exp\!\left(-\frac{\epsilon}{4\eta}\right)$, each of the terms above is bounded using Theorem 3. Thus, putting the bounds together, we have that, with probability $1 - 3\exp\!\left(-\frac{\epsilon}{4\eta}\right)$,

$$\bigl| 2\,\langle \phi_u(x), \phi_u(x') \rangle - 2\,\langle x, x' \rangle \bigr| \leq \bigl( \sqrt{2}\,\sigma + \epsilon \bigr)\bigl( \|x\|_2^2 + \|x'\|_2^2 + \|x - x'\|_2^2 \bigr).$$

Dividing both sides by two proves the claim.