Fitting a Graph to Vector Data

Samuel I. Daitch (samuel.daitch@yale.edu), Department of Computer Science, Yale University, New Haven, CT 06511, USA
Jonathan A. Kelner (kelner@mit.edu), Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Daniel A. Spielman (spielman@cs.yale.edu), Department of Computer Science, Yale University, New Haven, CT 06511, USA

Appearing in Proceedings of the 26th International Conference on Machine Learning, Montreal, Canada, 2009. Copyright 2009 by the author(s)/owner(s).

Abstract

We introduce a measure of how well a combinatorial graph fits a collection of vectors. The optimal graphs under this measure may be computed by solving convex quadratic programs and have many interesting properties. For vectors in d-dimensional space, the graphs always have average degree at most 2(d + 1), and for vectors in 2 dimensions they are always planar. We compute these graphs for many standard data sets and show that they can be used to obtain good solutions to classification, regression and clustering problems.

1. Introduction

Given a collection of vectors $x_1, \ldots, x_n \in \mathbb{R}^d$, we ask the question, "What is the right graph to fit to this set of vectors?"

In recent years, a number of researchers have gained insight by fitting graphs to their data and then using these graphs to solve clustering, classification, or regression problems on their data, e.g. (Ng et al., 2001; Zhu et al., 2003; Belkin & Niyogi, 2003; Joachims, 2003; Zhou & Schölkopf, 2004a; Coifman et al., 2005). They have employed simply defined graphs that are easy to compute, associating a vertex of the graph with each data vector, and then connecting vertices whose vectors are sufficiently close, sometimes with weights depending on the distance. Not surprisingly, different results are obtained by the use of different graphs (Maier et al., 2008), and researchers have studied how to combine different graphs in a way that tends to give heavier weight to the better graphs (Argyriou et al., 2005). In this paper, we study what can be gained by choosing the graphs with more care.

For a set of vectors $x_1, \ldots, x_n$, we construct a weighted, undirected graph on n vertices, where $w_{i,j} = w_{j,i} \ge 0$ denotes the weight of edge (i, j), and $d_i = \sum_j w_{i,j}$ denotes the weighted degree of vertex i. When there is no edge (i, j), we have $w_{i,j} = 0$. We do not allow self-loops, so $w_{i,i} = 0$ for all i. We measure how well the graph with weights w fits the vectors by how small it makes the following function, which is a weighted sum of the squared distance from each vertex to the weighted average of its neighbors:

$$f(w) = \sum_i \Big\| d_i x_i - \sum_j w_{i,j} x_j \Big\|^2 .$$

If we let X be the n-by-d matrix with ith row $x_i$, and let L be the graph Laplacian matrix, defined as

$$L_{i,j} = \begin{cases} -w_{i,j} & \text{if } i \neq j, \\ d_i & \text{if } i = j, \end{cases}$$

then f may be rewritten as

$$f(w) = \| L X \|_F^2 ,$$

where $\|M\|_F$ is the Frobenius norm $\big(\sum_{i,j} M_{i,j}^2\big)^{1/2}$.
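The identity $f(w) = \|LX\|_F^2$ is easy to check numerically. The following is a minimal numpy sketch (our own illustration, not the authors' code) that evaluates the fit measure in both forms for a small example.

```python
import numpy as np

def fit_score(W, X):
    """f(w) = sum_i || d_i x_i - sum_j w_ij x_j ||^2  =  ||L X||_F^2.

    W : symmetric (n, n) array of nonnegative edge weights with zero diagonal.
    X : (n, d) array whose i-th row is the data vector x_i.
    """
    d = W.sum(axis=1)                                     # weighted degrees d_i
    L = np.diag(d) - W                                    # graph Laplacian
    per_vertex = np.sum((d[:, None] * X - W @ X) ** 2)    # first form of f
    frobenius = np.linalg.norm(L @ X, 'fro') ** 2         # ||L X||_F^2
    assert np.isclose(per_vertex, frobenius)
    return frobenius

# Three collinear points: the middle vertex equals the weighted average of
# its neighbors, so only the two endpoint terms contribute to f.
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
W = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
print(fit_score(W, X))   # 1.0
```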
Since f = 0 for a graph with no edges, we construct graphs that minimize f subject to constraints that bound the vertex degrees away from zero. We define a hard graph of the vectors $x_1, \ldots, x_n$ to be a graph minimizing f subject to each vertex having weighted degree at least 1. As some vectors could be outliers, we also consider allowing some vertices to have lower weighted degree. To this end, we define an ε-soft graph of $x_1, \ldots, x_n$ to be a graph minimizing f subject to

$$\sum_i \big( \max(0, 1 - d_i) \big)^2 \le \epsilon n. \qquad (1)$$

In special cases, such as when the vectors exhibit certain symmetries, the solutions to these programs will not be unique. Thus, when we refer to a hard graph or ε-soft graph for a set of vectors, this is not intended to imply that it is the unique such graph.

Our measure of quality of fit $f = \|LX\|_F^2$ is similar to the one found in the locally linear embedding algorithm of (Roweis & Saul, 2000). If the graph is a collection of k disjoint cliques, with the weight on each edge being the reciprocal of the number of vertices in its clique, then f is just the value of the k-means objective function of the partition of the vectors into sets corresponding to the cliques. Also, the singular value decomposition of X may be understood to compute for each k the k-dimensional projection matrix $\Pi$ minimizing $\|(I - \Pi)X\|_F$.

In Section 2, we sketch how we compute these graphs. In Section 3, we prove that the hard and ε-soft graphs of a point set in $\mathbb{R}^d$ are sparse and that they are planar for two-dimensional data. We expect these graphs will be discovered to have other interesting combinatorial properties. In Sections 4.1, 4.2 and 4.3 we present the results of using these graphs to solve classification, regression, and clustering problems on many standard data sets. The classification and regression experiments are done in the transductive setting, where unlabeled data is used to construct the graph. Even with no parameters to choose, our graphs provide very good answers to many of these problems.

Figure 1. The hard graph for a random set of vectors in two dimensions.

2. Computing the graphs

2.1. Hard Graph

Let us first show that a hard graph is obtained by solving a convex quadratic program. Let E be the set of all possible edges in the graph, and let m = |E|. We define U to be the n × m matrix such that the column of U corresponding to edge $e = (i, j) \in E$ has exactly two nonzero entries: $U_{i,e} = 1$ and $U_{j,e} = -1$. We also define the length-m vector w to contain all the edge weights, and we let W be the m × m diagonal matrix with diagonal entries given by w. The graph Laplacian may then be expressed as $L = U W U^T$.

Let $x^{(k)}$ be the kth column of X. Define the vector $y^{(k)} = U^T x^{(k)}$, and let $Y^{(k)}$ be the diagonal matrix containing the entries of $y^{(k)}$ on its diagonal. Then the edge weights w of the hard graph are the weights satisfying $d_i \ge 1$ that minimize

$$f(w) = \|LX\|_F^2 = \sum_{k=1}^d \|L x^{(k)}\|^2 = \sum_{k=1}^d \|U W U^T x^{(k)}\|^2 = \sum_{k=1}^d \|U W y^{(k)}\|^2 = \sum_{k=1}^d \|U Y^{(k)} w\|^2 = \|M w\|^2,$$

where $M^T = \big[\, Y^{(1)} U^T \;\cdots\; Y^{(d)} U^T \,\big]$.

Since there are $\binom{n}{2}$ edges that could appear in the hard graph, it is computationally infeasible to directly solve this quadratic program in all $\binom{n}{2}$ variables for even moderately large n. Instead, we solve the quadratic program on a small subset of edges. We then compute a small set of new edges that will improve the hard graph, add these in to the quadratic program, and compute the solution with the new edges, removing those edges whose weights have been set to zero. We repeat until the graph cannot be improved. Since these graphs are sparse, as will be proven in the next section, our quadratic programs never get too large.

To determine which edges to add to the quadratic program, we consider the Lagrange function

$$\ell(w, z) = f(w) - \sum_i z_i (d_i - 1) = \|Mw\|^2 - z^T(Aw - 1),$$

where A is the matrix obtained by taking the absolute values of the entries of U, so that Aw gives the vector of weighted degrees. The primal-dual solution pair $(w, z) \ge 0$ must satisfy the Karush-Kuhn-Tucker conditions, namely

$$\frac{\partial \ell}{\partial w} \ge 0, \quad \frac{\partial \ell}{\partial z} \le 0, \quad w^T \frac{\partial \ell}{\partial w} = 0, \quad z^T \frac{\partial \ell}{\partial z} = 0,$$

where

$$\frac{\partial \ell}{\partial w} = 2 M^T M w - A^T z \quad \text{and} \quad \frac{\partial \ell}{\partial z} = 1 - Aw.$$

When we solve the quadratic program on a subset of edges, we obtain a solution pair $(w, z) \ge 0$ that satisfies all of the KKT conditions on the full quadratic program, except that $\partial \ell / \partial w_{(i,j)}$ may be negative on the excluded edges. If there are any edges (i, j) for which $\partial \ell / \partial w_{(i,j)} < 0$, we add to our quadratic program the edges with the smallest $\partial \ell / \partial w_{(i,j)}$ values. If there are no such edges, then we have a solution to the full quadratic program, and we are done.

In our experiments, we use the Matlab package SDPT3 (Tütüncü et al., 2003; Toh et al., 1999) to solve the quadratic programs. In Table 1, we indicate how long it took to compute the hard graph for the data sets on which we performed experiments.
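The following is a small numpy/scipy sketch of one inner step of this procedure: it builds M and A for a fixed candidate edge set and solves the degree-constrained quadratic program on that set. It is only an illustration under our own choices (a dense SLSQP solve and an all-pairs candidate set on a tiny example), not the authors' SDPT3-based implementation; the helper names are ours.

```python
import numpy as np
from scipy.optimize import minimize

def build_M_A(X, edges):
    """Build M (dn x m) with f(w) = ||M w||^2, and A (n x m) with (A w)_i = d_i,
    for a list of candidate edges over the rows of X."""
    n, d = X.shape
    m = len(edges)
    U = np.zeros((n, m))
    A = np.zeros((n, m))
    for e, (i, j) in enumerate(edges):
        U[i, e], U[j, e] = 1.0, -1.0
        A[i, e], A[j, e] = 1.0, 1.0
    Y = U.T @ X                                       # Y[e, k] = y^(k)_e = x_i[k] - x_j[k]
    M = np.vstack([U * Y[:, k] for k in range(d)])    # stack of the blocks U Y^(k)
    return M, A

def hard_graph_weights(X, edges):
    """Solve min ||M w||^2 subject to A w >= 1 and w >= 0 on the candidate edge set."""
    M, A = build_M_A(X, edges)
    m = M.shape[1]
    objective = lambda w: float(w @ (M.T @ (M @ w)))
    gradient = lambda w: 2.0 * (M.T @ (M @ w))
    degree_cons = {'type': 'ineq', 'fun': lambda w: A @ w - 1.0,
                   'jac': lambda w: A}
    res = minimize(objective, np.ones(m), jac=gradient,
                   bounds=[(0.0, None)] * m, constraints=[degree_cons],
                   method='SLSQP')
    return res.x

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 2))
edges = [(i, j) for i in range(8) for j in range(i + 1, 8)]   # all pairs, small n only
w = hard_graph_weights(X, edges)
```

In the full procedure, one would then evaluate $2 M^T M w - A^T z$ on the excluded edges and add the edges with the most negative values, as described above.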
2.2. ε-Soft graph

Let us define $\sigma(w) = \| \max(0, 1 - Aw) \|^2$ to be the left-hand-side term in (1), which indicates to what extent the weighted degrees are smaller than 1. Since the value of f(w) is always improved by uniformly scaling down all edge weights, it is clear that an ε-soft graph must satisfy (1) with equality, that is, it must have $\sigma(w) = \epsilon n$.

It is inefficient to directly solve the optimization problem that yields an ε-soft graph. Instead, to compute the ε-soft graphs, we solve optimization problems of the form

$$\min \{\, f(w) + \mu \cdot \sigma(w) : w \ge 0 \,\} \qquad (2)$$

for various values of $\mu$. For any given $\mu$, if w is a solution to (2) then it must also be an ε-soft graph for $\epsilon = \sigma(w)/n$. Furthermore, note that as $\mu$ increases, $\epsilon$ decreases monotonically. Thus, to compute an ε-soft graph, we solve (2) using an initial guess for the value of $\mu$, and we then adjust $\mu$ up or down proportionally to how far $\sigma(w)/n$ is from the desired value of ε. We may repeat this until we are arbitrarily close to the desired ε. In our experiments, when we construct 0.1-soft graphs, we actually search for graphs which have ε in the range 0.1 ± 0.01.

To solve a convex program of the form in (2), note that it can be formulated as a non-negative least squares problem:

$$\min \big\{\, \|Mw\|^2 + \mu \, \|1 - Aw + s\|^2 : w, s \ge 0 \,\big\}.$$

(At the optimum, the slack variable takes the value $s = \max(0, Aw - 1)$, so the second term equals $\mu \, \sigma(w)$.) In our experiments, we solve these problems using Matlab's quadprog routine. As with the hard graphs, we use the technique described in Section 2.1 to reach the solution to the non-negative least squares problem by solving a sequence of such problems on subsets of the edges.
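To make the reduction concrete, here is a hedged sketch (again numpy/scipy rather than the authors' Matlab code) of a single solve of (2), written as one non-negative least squares problem in the stacked variable (w, s).

```python
import numpy as np
from scipy.optimize import nnls

def soft_graph_step(M, A, mu):
    """One inner solve of (2): min_{w, s >= 0} ||M w||^2 + mu * ||1 - A w + s||^2.

    Returns the weights w and the achieved eps = sigma(w) / n, which an outer
    loop would use to adjust mu toward the desired eps."""
    dn, m = M.shape
    n = A.shape[0]
    root_mu = np.sqrt(mu)
    # Stacked residual: [ M w ;  sqrt(mu) * (A w - s - 1) ]
    C = np.block([[M, np.zeros((dn, n))],
                  [root_mu * A, -root_mu * np.eye(n)]])
    b = np.concatenate([np.zeros(dn), root_mu * np.ones(n)])
    z, _ = nnls(C, b)
    w, s = z[:m], z[m:]
    sigma = np.sum(np.maximum(0.0, 1.0 - A @ w) ** 2)   # left-hand side of (1)
    return w, sigma / n
```

An outer loop would then increase μ when σ(w)/n is above the target ε and decrease it when below, as described above.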
3. Properties of the graphs

We begin by mentioning that for the set of vectors in $\{0,1\}^5$ with an even number of ones, neither the hard graph nor the ε-soft graph is unique. However, we conjecture that both graphs are unique for almost all sets of vectors (with probability 1 under arbitrarily small perturbations).

Let us prove that these graphs have average degree at most 2(d + 1). We suggest that the average degree of the hard or ε-soft graph of a set of vectors may be a useful measure of the essential dimensionality of those vectors, and that it will be small if they lie close to a low-dimensional manifold of low curvature. We also conjecture that for every set of vectors of arbitrarily high dimension, there is a sparse, approximately optimal solution to the programs defining the hard and ε-soft graphs.

Theorem 3.1. For every ε > 0, every set of n vectors in $\mathbb{R}^d$ has a hard and an ε-soft graph with at most (d + 1)n edges.

Proof. Recall that the objective function optimized by these graphs is given by a quadratic form $\|Mw\|^2$ on the weights w, where the matrix M has dn rows. Let us again write the vector of degree sums as Aw, where A has n rows. Suppose for the sake of contradiction that the minimum number of edges in a hard graph (or ε-soft graph) is m > (d + 1)n. Let w be the weights in such a graph. Then there must be some non-zero (but not necessarily positive) vector $\tilde{w}$, with non-zero entries restricted to these m edges, such that

$$\begin{bmatrix} M \\ A \end{bmatrix} \tilde{w} = 0.$$

Clearly there is some r such that the weights $w' = w + r\tilde{w}$ remain nonnegative and have at least one fewer positive edge than w. Since $Mw' = Mw$ and $Aw' = Aw$, the score and degrees have not changed. So these new weights still form a hard (or ε-soft) graph, but now with fewer than m edges, contradicting the minimality of m.

Theorem 3.2. For every ε > 0, every set of n vectors in $\mathbb{R}^2$ has a hard and an ε-soft graph that are planar.

We prove a more general statement, which implies Theorem 3.2 by treating edges as cliques of size 2.

Theorem 3.3. Let $S = \{x_1, \ldots, x_n\}$ and $T = \{y_1, \ldots, y_m\}$ be two sets of points the interiors of whose convex hulls intersect. Then, for every ε > 0, any hard or ε-soft graph of maximum total degree of a set containing $S \cup T$ either does not contain a clique on S or does not contain a clique on T.

Proof. Let z be a point in the intersection of the interiors of the convex hulls of S and T. We know that there exist $\alpha_i > 0$ such that $1 = \sum_i \alpha_i$ and $z = \sum_i \alpha_i x_i$, and there also exist $\beta_j > 0$ such that $1 = \sum_j \beta_j$ and $z = \sum_j \beta_j y_j$. Consider a hard graph (or ε-soft graph) that contains a clique on S and a clique on T. Suppose that we decrease the weight on every edge $(x_i, x_j)$ by $r \alpha_i \alpha_j$ and on every edge $(y_i, y_j)$ by $r \beta_i \beta_j$, while we increase the weight on every edge $(x_i, y_j)$ by $r \alpha_i \beta_j$. We choose r just large enough to eliminate some edge from one of the cliques. We will show that these changes increase the weighted degree of every vertex in $S \cup T$ and do not change the objective function, so we can always construct a hard (or ε-soft) graph with an edge missing from one of the two cliques and larger total degree, contradicting the assumption that we started with a graph of maximum total degree.

First let us confirm that the weighted degrees increase: indeed, the weighted degree of $x_i \in S$ increases by

$$r \alpha_i \sum_{j \in [m]} \beta_j \;-\; r \alpha_i \sum_{k \in [n],\, k \neq i} \alpha_k \;=\; r \alpha_i \big( 1 - (1 - \alpha_i) \big) \;=\; r \alpha_i^2 \;>\; 0.$$

A symmetric argument holds for the vertices in T.

Now let us show that the score of the graph does not change. For each vertex v, let us define the vector

$$\delta(v) = \sum_{(v, v') \in E} w_{(v, v')} (v - v'),$$

and note that the objective function is $\sum_v \|\delta(v)\|^2$. So it suffices for us to show that we are not changing the value of any $\delta(x_i)$ or $\delta(y_j)$. Indeed, the amount by which we change $\delta(x_i)$ is

$$\sum_{j \in [m]} r \alpha_i \beta_j (x_i - y_j) - \sum_{k \in [n]} r \alpha_i \alpha_k (x_i - x_k)
= r \alpha_i \Big[ \Big( \sum_{j \in [m]} \beta_j - \sum_{k \in [n]} \alpha_k \Big) x_i + \sum_{k \in [n]} \alpha_k x_k - \sum_{j \in [m]} \beta_j y_j \Big]
= r \alpha_i \big[ (1 - 1)\, x_i + (z - z) \big] = 0,$$

and a symmetric computation applies to each $\delta(y_j)$.
4. Experimental Results

Table 1 lists the data sets we used in our experiments. For each data set, we provide the average degrees of the hard and 0.1-soft graphs as $d_{hard}$ and $d_{soft}$. Before building our graphs, we always normalize the data by rescaling each dimension so that it has standard deviation one. Observe that the average degree of each graph is lower than that predicted by the analysis in Theorem 3.1. (Recall that the average degree of a graph is twice the number of edges, divided by the number of vertices.)

Table 1. Average degrees of our graphs for various data sets. Source indicates whether we obtained the data from the UCI Machine Learning Repository (Asuncion & Newman, 2007) or from LIBSVM (Chang & Lin, 2001). Type indicates whether the data come from a classification problem, and if so how many classes, or whether they come from a regression problem. Soft time and hard time indicate how many seconds it took to compute the soft and hard graphs on a single core of a Dell Precision 690 workstation with an Intel Xeon 2.66 GHz processor and 4GB of RAM. For each data set, we normalized every column to have variance one.

Data set     Source   Type        n     dim   d_hard   d_soft   soft time (sec)   hard time (sec)
abalone      libsvm   regression  4177  8     13.1     12.7     1,582             49,986
glass        UCI      6 classes   214   9     8.9      8.6      8                 22
heart        UCI      2 classes   270   13    11.1     11.0     12                36
housing      libsvm   regression  506   13    8.8      10.1     23                114
ionosphere   UCI      2 classes   351   34    13.0     11.9     72                148
iris         UCI      3 classes   150   4     7.0      7.0      4                 17
machine      UCI      regression  209   6     8.9      8.0      11                26
mpg          libsvm   regression  392   7     7.6      8.8      16                43
pima         UCI      2 classes   768   8     11.2     10.9     111               529
sonar        UCI      2 classes   208   60    12.7     13.0     38                50
vehicle      UCI      4 classes   846   18    11.8     11.5     159               889
vowel990     libsvm   11 classes  990   10    10.4     10.1     85                706
wine         UCI      3 classes   178   13    9.1      9.9      6                 15

In our experiments, we compare our graphs with those obtained by standard approaches of constructing graphs from vectors. The most common approach is to use weighted k-nearest neighbor graphs (denoted knn in the tables). These graphs are specified by two parameters, k and $\sigma$. Each vertex is connected to its k nearest neighbors, in Euclidean distance, with an edge weight of either 1, or of $\exp(-\ell^2 / 2\sigma^2)$, where $\ell$ is the length of the edge. We tried all $k \in \{3, 5, 7, 10, 15, 20, 25, 30, 40\}$, and for $\sigma$ every power of 2 between $2^{-10}$ and $2^{10}$ times $\sigma_0$, where $\sigma_0$ is the mean distance across edges in the k-nearest neighbor graph. We also tried forming graphs by connecting all pairs of vertices within a given distance r and considered weighting the edges as we did for the k-nearest neighbor graphs (denoted thresh in the tables). For r, we tried dividing the median inter-vertex distance by $2^{i/3}$ for every integer $0 \le i \le 27$.

4.1. Classification

We employed the simple algorithm of (Zhu et al., 2003) for learning labels of vertices in the graphs. We do not yet know if our results would be improved by using one of the algorithms from (Zhou & Schölkopf, 2004a; Sindhwani et al., 2005; Zhou & Schölkopf, 2004b; Zhou et al., 2003; Belkin et al., 2004; Wang et al., 2008).

We first explain how we handle the case of two classes. Let S be the set of vectors whose labels we know, and let $c \in \mathbb{R}^n$ be a vector such that $c_i \in \{0, 1\}$ for each $i \in S$, depending on the class of vector i. We then solve for the vector x minimizing

$$x^T L x = \sum_{(i,j) \in E} w_{i,j} (x_i - x_j)^2 \qquad (3)$$

subject to $x_i = c_i$ for $i \in S$. For every $j \notin S$, we then guess that the class of vector j is 0 if $x_j < 1/2$ and 1 otherwise. Note that x can be found by solving one linear equation in a diagonally-dominant matrix, so this can be done very quickly (Spielman & Teng, 2004). When we have k > 2 classes, we construct a vector $c^j$ for class j that is 1 for labeled examples in the class and 0 elsewhere. We then solve (3) for each class to obtain a vector $x^j$, and we guess that an unlabeled vertex i belongs to the class j for which $x^j_i$ is largest.
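The constrained minimization in (3) is the heart of the transductive step. Below is a minimal numpy sketch of it, using a dense solve for clarity; the paper notes that the system is diagonally dominant and can be solved much faster with the methods of (Spielman & Teng, 2004).

```python
import numpy as np

def harmonic_labels(W, labeled_idx, labels):
    """Minimize x^T L x subject to x_i = c_i on the labeled vertices, as in (3).

    W : (n, n) symmetric weight matrix of the fitted graph.
    labeled_idx : array of indices of labeled vertices (the set S).
    labels : float array of 0/1 values c_i for those vertices.
    Assumes every connected component of the graph contains a labeled vertex.
    """
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W
    unlabeled = np.setdiff1d(np.arange(n), labeled_idx)
    x = np.zeros(n)
    x[labeled_idx] = labels
    # Stationarity on the unlabeled block:  L_UU x_U = -L_US c_S
    x[unlabeled] = np.linalg.solve(L[np.ix_(unlabeled, unlabeled)],
                                   -L[np.ix_(unlabeled, labeled_idx)] @ labels)
    return x

# Two classes: threshold x at 1/2.  For k > 2 classes, call this once per
# class-indicator vector c^j and assign each unlabeled vertex to the class
# whose solution is largest there.
```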
Table 2 presents the results of performing classification using 10-fold cross-validation. For each experiment, we compute the hard graph and the 0.1-soft graph once. We then randomly partition the vectors into 10 sets of size as equal as possible, and for each set we use the algorithm above to infer the classes, using the labels on the other nine sets. We repeat each of these experiments 100 times, and report the average error. There were no parameters to train.

For comparison, we also report results of experiments done using conventional graphs, LIBSVM (Chang & Lin, 2001), and from experiments reported in (Su & Zhang, 2006) and (Kotsiantis et al., 2006). In the experiments that we performed on competing algorithms, we used the other nine sets to train parameters. In the columns knn and thresh we compare to the graphs described above, and chose parameters by leave-one-out cross validation on the nine other sets. In each experiment with LIBSVM, we called the easy routine on nine tenths of the data to choose parameters and train the support vector classifier. We then used this classifier on the remaining tenth of the data. The results are the averages of shifting over all 10 parts of the partition and repeating the whole process 50 times. The results we copy from (Su & Zhang, 2006) are for FBC: Full Bayes Classifier, AODE: Averaged One-Dependence Estimators (Webb et al., 2005), and HGC: the Hill Climbing BN Learning Algorithm (Heckerman). The results we copy from (Kotsiantis et al., 2006) are for NB: Naive Bayesian networks, C4.5 (Quinlan, 1993), BP: Back Propagation, and SMO: Sequential Minimal Optimization.

Table 2. Classification error (%), 10-fold cross validation. The best result for each data set is bold. The experiments that do not perform better than ours have a grey background. A dash indicates that no result is available for that data set.

Data set     hard    0.1-soft   knn     thresh   libsvm   FBC     AODE    HGC     NB      C4.5    BP      SMO
glass        27.78   28.30      26.92   33.30    31.44    37.56   38.27   41.64   50.55   32.37   32.68   42.64
heart        18.18   17.81      16.05   16.1     17.01    16.19   16.37   17.41   16.41   21.85   16.70   16.19
ionosphere   4.75    5.57       18.50   6.34     6.20     9.20    8.26    6.60    17.83   10.26   12.93   12.07
iris         4.87    4.21       4.46    6.20     3.87     6.27    6.00    3.93    4.47    5.27    15.20   15.13
pima         26.64   26.61      24.54   26.45    23.24    25.15   23.43   24.08   24.25   25.51   22.96   22.93
sonar        9.16    8.64       13.80   14.94    11.71    22.62   20.09   30.84   32.29   26.39   21.33   22.12
vehicle      23.03   22.47      27.70   29.98    14.87    25.77   28.35   31.90   55.32   27.72   18.89   25.92
vowel990     1.19    0.95       2.62    0.98     0.64     6.54    10.36   7.30    37.10   19.80   7.27    29.39
wine         2.92    2.62       2.86    3.64     2.57     --      --      --      2.54    6.80    1.98    1.24

These results suggest that our graphs provide very good classifiers. They perform exceptionally well on the ionosphere and sonar data sets. The only data set on which any algorithm performs significantly better is vehicle, on which LIBSVM does particularly well.

4.2. Regression

We again use the algorithm of (Zhu et al., 2003) to predict the values of the unlabeled vertices. Supposing that we know the values $f_i$ for $i \in S$, we predict the remaining $f_i$ values to be those that minimize

$$f^T L f = \sum_{(i,j) \in E} w_{i,j} (f_i - f_j)^2$$

subject to fixing the known $f_i$ values.

Table 3 presents the results of the regression experiments using 10-way cross-validation. Again, for each experiment, we compute the hard graph and the 0.1-soft graph once. We then randomly partition the vectors into 10 sets of size as equal as possible, and for each set we use the algorithm above to infer the function. We repeat each of these experiments 50 times and report the average error. For the Abalone data set, because of time constraints due to its larger size, we perform 2-way rather than 10-way cross-validation.
Table 3. Regression mean-square error, k-fold cross validation (k=2 for Abalone, k=10 for the other data sets). For each data set, the labels have been rescaled to have variance one. The best result for each data set is bold. The experiments that do not perform better than ours have a grey background. A dash indicates an experiment we were unable to run.

Data set   hard    0.1-soft   knn     thresh   epsilon-svr   gproc
abalone    0.479   0.482      0.492   0.657    --            --
housing    0.136   0.138      0.224   0.507    0.138         0.112
machine    0.170   0.185      0.164   0.608    0.394         0.890
mpg        0.120   0.118      0.137   0.145    0.128         0.129

Again we observed what happens when we replace our graphs with the knn and thresh graphs described above. In the knn and thresh regression experiments, we ran each experiment 10 times. The hard and 0.1-soft graphs outperform these graphs for all but one data set, and even in that data set the performance is very close.

We again also ran experiments using LIBSVM (Chang & Lin, 2001). We modified the classification routine easy to search the same parameter ranges but do support vector regression instead of classification. We fed this routine nine tenths of the data to choose parameters and train the support vector machine, which we then used to predict values on the remaining tenth of the data. We did this for each of the 10 partitions of data, and repeated the whole experiment 20 times. We also ran 10-way cross validation experiments using the gproc Gaussian process regression algorithm from Spider (Weston et al., 2008), repeating each experiment 5 times. Due to the size of Abalone, we were unable to run regression tests on it with LIBSVM or Spider. On the other three sets of regression data that we tested, our graphs outperformed LIBSVM on all of them, and Spider on all but one.

4.3. Clustering

Given the graph associated with a set of unlabeled vectors $x_1, \ldots, x_n$, one may obtain a clustering of the vectors into k subsets by finding a good k-partition of the corresponding graph. We did this with a spectral algorithm; it remains an interesting question whether other graph partitioning algorithms would improve the results.

The graph partitioning algorithm that we applied is essentially the same as that used by Ng, Jordan, and Weiss (Ng et al., 2001). We formed the normalized Laplacian $\mathcal{L} = D^{-1/2} L D^{-1/2}$, where D is the diagonal matrix whose ith diagonal entry is the weighted degree of vertex i. We then found its eigenvectors $v_1, \ldots, v_n$, where the $v_i$ are sorted in increasing order of the corresponding eigenvalues. Let $V = [v_1, \ldots, v_t]$ be the n × t matrix whose columns are given by the first t eigenvectors. (We shall discuss the correct value of t below.) Following Ng, Jordan, and Weiss, we let W be the matrix obtained by scaling the rows of V to each have norm 1. The rows of W give us n points in $\mathbb{R}^t$, which we clustered using k-means. We then lifted this clustering back to the original vectors.
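A compact sketch of this spectral step (our numpy/scipy rendering of the procedure just described, with scipy's kmeans2 standing in for whichever k-means implementation one prefers):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def spectral_clusters(W, k, t):
    """Partition the vertices of a fitted graph with weight matrix W into k
    clusters, using the first t eigenvectors of the normalized Laplacian."""
    d = W.sum(axis=1)                                    # weighted degrees
    d_isqrt = 1.0 / np.sqrt(d)                           # assumes all degrees > 0
    L_norm = np.eye(len(d)) - (d_isqrt[:, None] * W) * d_isqrt[None, :]
    _, eigvecs = np.linalg.eigh(L_norm)                  # eigenvalues in ascending order
    V = eigvecs[:, :t]                                   # first t eigenvectors
    rows = V / np.linalg.norm(V, axis=1, keepdims=True)  # scale rows to norm 1
    _, assignment = kmeans2(rows, k, minit='++')         # cluster the rows
    return assignment
```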
It is not immediately obvious how best to choose t. A standard answer, which was the one employed by Ng, Jordan, and Weiss, is to set it equal to the desired number of clusters k. However, we found that this was not the best strategy with our graphs.

Intuitively, one may view the matrix V as a low-rank approximation to $\mathcal{L}$. If one works in a model in which the graph is expected to have a very sparse cut that breaks it into k large and well-connected pieces, $\mathcal{L}$ will be well approximated by its rank-k approximation, and taking t = k is the right course of action. This occurs in most random graph models in which one assumes a good k-partition exists (McSherry, 2001). The theoretical basis for this arises from the existence in these models of a large gap between the kth and (k+1)st eigenvalues. However, this gap arises from the very strong clustering assumed to exist in these models, and it does not always occur in practice, even when there is a good partition into k clusters.

Ideally, one wants to include only as many eigenvectors as are necessary to get a good approximation of the data. Let $\rho_j = \sum_k (v_j \cdot x^{(k)})^2 = \|X^T v_j\|^2$ be the total squared norm of the projections of the data onto the jth eigenvector (here $x^{(k)}$ again denotes the kth column of X). In practice, there tend to be some number of reasonably large $\rho_j$, after which they fall off quite sharply. Setting t to be an index after which the $\rho_j$ are small yielded considerably better results than simply setting t = k. While the choice of the exact cutoff point is somewhat arbitrary, it tends to be fairly clear in practice, and the quality of the resulting clusterings tended not to be very sensitive to its exact value.
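A short sketch of this selection heuristic follows. The cutoff rule in choose_t (keep eigenvectors until $\rho_j$ drops below a fixed fraction of the largest $\rho_j$) is our own stand-in for the by-inspection choice described above, so both the rule and its threshold are assumptions.

```python
import numpy as np

def projection_energies(X, eigvecs):
    """rho_j = ||X^T v_j||^2 for each eigenvector v_j of the normalized Laplacian."""
    return np.sum((X.T @ eigvecs) ** 2, axis=0)

def choose_t(rho, drop=0.05):
    """Pick t as the first index at which rho_j becomes negligible
    (hypothetical rule: below `drop` times the largest rho_j)."""
    small = np.where(rho < drop * np.max(rho))[0]
    return int(small[0]) if small.size else len(rho)
```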
To evaluate the efficacy of this approach, we removed the class labels from the data sets from Section 4.1 and applied our algorithm. We then compared the resulting clustering to the actual correct classification and computed the fraction of the points that were misclassified. For comparison, we performed the same experiments using k-means and the Ng-Jordan-Weiss routines implemented in Spider (Weston et al., 2008). We computed the classification error for both the hard and 0.1-soft graphs. In order to allow a fair comparison to the Ng-Jordan-Weiss algorithm, which used exactly k values, we computed the performance of our method with t = k. In addition, we computed the clustering error when t was chosen heuristically as described above. We note that we selected the t at which the $\rho_j$ appeared to become negligible, not the t that gave the lowest error rate (as doing so would require using the actual classification, which was not given to us in the problem).

Table 4. Clustering error (fraction of points misclassified). We compare our results to k-means and the Ng-Jordan-Weiss algorithm. For both the hard and 0.1-soft graphs, we list the performance when t = k and when t is chosen heuristically as described in the text. In the "t chosen" columns, the selected value of t is listed in parentheses.

Data set     k-means   NJW    Hard, t = k   Hard, t chosen   0.1-soft, t = k   0.1-soft, t chosen
glass        0.41      0.43   0.44          0.43 (12)        0.45              0.45 (12)
heart        0.41      0.42   0.35          0.35 (2)         0.19              0.19 (2)
ionosphere   0.29      0.33   0.26          0.09 (15)        0.33              0.09 (15)
iris         0.11      0.33   0.33          0.15 (5)         0.17              0.09 (8)
pima         0.34      0.35   0.35          0.35 (12)        0.35              0.35 (12)
sonar        0.45      0.47   0.41          0.41 (35)        0.4               0.35 (35)
vehicle      0.55      0.62   0.58          0.54 (6)         0.56              0.53 (6)
vowel990     0.66      0.76   0.65          0.66 (30)        0.65              0.6 (30)
wine         0.3       0.33   0.03          0.03 (3)         0.03              0.03 (3)

For both the hard and 0.1-soft graphs, our algorithm with t = k was never significantly worse than the Ng-Jordan-Weiss algorithm, and it was usually significantly better. As the two algorithms performed the same operations on their respective graphs, this provides strong support for the notion that our graphs improve upon the traditional ones. When t was chosen more carefully, our graphs tended to significantly outperform both k-means and Ng-Jordan-Weiss. In particular, they provided extreme improvements on the ionosphere and wine datasets.

5. Conclusions and Future work

We have suggested optimizing a graph to fit a data set, and found a measure of fitness under which the optimal graphs are sparse, have interesting combinatorial properties, and provide good answers to classification, regression, and clustering problems. We ask if there are other natural graphs to fit to a data set, and if our graphs can be improved for these learning problems. For example, we ask if one can incorporate labeled examples into the construction of the graph. We also ask if there is a natural way to use our graphs to infer labels of vectors that were not used in the graph construction, such as was done in the work of (Yu et al., 2004; Sindhwani et al., 2005; Coifman & Lafon, 2006).

Acknowledgments

This work was partially supported by NSF Grants CCF-0634957 and CCF-0843915. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

References

Argyriou, A., Herbster, M., & Pontil, M. (2005). Combining graph Laplacians for semi-supervised learning. Adv. in Neural Inf. Proc. Sys. 18 (pp. 67–74).

Asuncion, A., & Newman, D. (2007). UCI machine learning repository. http://www.ics.uci.edu/mlearn/MLRepository.html.

Belkin, M., Matveeva, I., & Niyogi, P. (2004). Regularization and semi-supervised learning on large graphs. Proc. 17th Conf. on Learning Theory (pp. 624–638).

Belkin, M., & Niyogi, P. (2003). Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15, 1373–1396.

Chang, C.-C., & Lin, C.-J. (2001). LIBSVM: a library for support vector machines. Software available at http://www.csie.ntu.edu.tw/cjlin/libsvm.

Coifman, R. R., & Lafon, S. (2006). Geometric harmonics: A novel tool for multiscale out-of-sample extension of empirical functions. Applied and Computational Harmonic Analysis, 21, 31–52.

Coifman, R. R., Lafon, S., Lee, A. B., Maggioni, M., Nadler, B., Warner, F., & Zucker, S. (2005). Geometric diffusion as a tool for harmonic analysis and structure definition of data, part I: Diffusion maps. Proc. Nat. Acad. Sci., 102, 7426–7431.

Joachims, T. (2003). Transductive learning via spectral graph partitioning. Proc. 20th Int. Conf. on Mach. Learn. (pp. 290–297).

Kotsiantis, S. B., Zaharakis, I. D., & Pintelas, P. E. (2006). Machine learning: a review of classification and combining techniques. Artif. Intell. Rev., 26, 159–190.

Maier, M., von Luxburg, U., & Hein, M. (2008). Influence of graph construction on graph-based clustering measures. Adv. in Neural Inf. Proc. Sys. 21 (pp. 1025–1032).

McSherry, F. (2001). Spectral partitioning of random graphs. Proc. 42nd IEEE Symp. on Foundations of Computer Science (pp. 529–537).

Ng, A. Y., Jordan, M. I., & Weiss, Y. (2001). On spectral clustering: Analysis and an algorithm. Adv. in Neural Inf. Proc. Sys. 14 (pp. 849–856).

Quinlan, J. R. (1993). C4.5: Programs for machine learning. San Francisco: Morgan Kaufmann.

Roweis, S. T., & Saul, L. K. (2000). Nonlinear dimensionality reduction by locally linear embedding. Science, 290, 2323–2326.
Sindhwani, V., Niyogi, P., & Belkin, M. (2005). Beyond the point cloud: from transductive to semi-supervised learning. Proc. 22nd Int. Conf. on Mach. Learn. (pp. 824–831).

Spielman, D. A., & Teng, S.-H. (2004). Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems. Proc. 36th ACM Symp. on the Theory of Computing (pp. 81–90).

Su, J., & Zhang, H. (2006). Full Bayesian network classifiers. Proc. 23rd Int. Conf. on Mach. Learn. (pp. 897–904).

Toh, K. C., Todd, M. J., & Tütüncü, R. H. (1999). SDPT3 -- a MATLAB software package for semidefinite programming, version 1.3. Optim. Methods Softw., 11/12, 545–581.

Tütüncü, R. H., Toh, K. C., & Todd, M. J. (2003). Solving semidefinite-quadratic-linear programs using SDPT3. Math. Program., 95, 189–217.

Wang, J., Jebara, T., & Chang, S.-F. (2008). Graph transduction via alternating minimization. Proc. 25th Int. Conf. on Mach. Learn. (pp. 1144–1151).

Webb, G. I., Boughton, J., & Wang, Z. (2005). Aggregating one-dependence estimators. Machine Learning, 58, 5–24.

Weston, J., Elisseeff, A., BakIr, G., & Sinz, F. (2008). The Spider. http://www.kyb.mpg.de/bs/people/spider/.

Yu, K., Tresp, V., & Zhou, D. (2004). Semi-supervised induction with basis functions (Technical Report 141). Max Planck Inst. for Biological Cybernetics.

Zhou, D., Bousquet, O., Lal, T. N., Weston, J., & Schölkopf, B. (2003). Learning with local and global consistency. Adv. in Neural Inf. Proc. Sys. 16 (pp. 321–328).

Zhou, D., & Schölkopf, B. (2004a). Learning from labeled and unlabeled data using random walks. Pattern Recognition, 26th DAGM Symposium (pp. 237–244).

Zhou, D., & Schölkopf, B. (2004b). A regularization framework for learning from graph data. ICML Workshop on Statistical Relational Learning and Its Connections to Other Fields (pp. 132–137).

Zhu, X., Ghahramani, Z., & Lafferty, J. D. (2003). Semi-supervised learning using Gaussian fields and harmonic functions. Proc. 20th Int. Conf. on Mach. Learn.