Learning with Consistency between Inductive Functions and Kernels

Haixuan Yang*   Irwin King   Michael R. Lyu
Department of Computer Science & Engineering
The Chinese University of Hong Kong
Shatin, N.T., Hong Kong
{hxyang,king,lyu}@cse.cuhk.edu.hk

*He is now a postdoctoral researcher in the Department of Computer Science, Royal Holloway, University of London.

Abstract

Regularized Least Squares (RLS) algorithms have the ability to avoid over-fitting problems and to express solutions as kernel expansions. However, we observe that the current RLS algorithms cannot provide a satisfactory interpretation even for a constant function. On the other hand, while kernel-based methods have developed to the point that almost all learning algorithms are kernelized or are being kernelized, a basic fact is often ignored: the function learned from the data and the kernel fits the data well, but may not be consistent with the kernel. Based on these considerations, and on the intuition that a good kernel-based inductive function should be consistent with both the data and the kernel, a novel learning scheme is proposed. The advantages of this scheme lie in its corresponding Representer Theorem, its strong ability to explain which functions should not be penalized, and its promising accuracy improvements shown in a number of experiments. Furthermore, we provide a detailed technical description of heat kernels, which serves as an example for readers who wish to apply similar techniques to other kernels. Our work is a preliminary step in a new direction: exploring the varying consistency between inductive functions and kernels under various distributions.

1 Introduction

Regularized Least Squares (RLS) algorithms have been drawing people's attention since they were proposed, due to their ability to avoid over-fitting problems and to express solutions as kernel expansions in terms of the training data [4, 9, 12, 13]. Various modifications of RLS have been made to improve its performance, either from the viewpoint of manifolds [1] or in a more generalized form [7, 11]. However, despite these modifications, problems still remain. We observe that the previous RLS-related work has the following problem:

Over Penalization. For a constant function f = c, a nonzero term ||f||_K is penalized in both RLS and LapRLS [1]. As a result, for a distribution generated by a nonzero constant function, the regression function produced by both RLS and LapRLS is not a constant, as illustrated in the left diagram in Fig. 1. For such situations, there is an over-penalization.

In this work, we aim to provide a new viewpoint for supervised or semi-supervised learning problems. By such a viewpoint we can give a general condition under which constant functions should not be penalized. The basic idea is that, if a learning algorithm can learn an inductive function f(x) from examples generated by a joint probability distribution P on X × R, then the learned function f(x) and the marginal P_X represent a new distribution on X × R, from which there is a re-learned function r(x). The re-learned function should be consistent with the learned function in the sense that their expected difference under the distribution P_X is small. Because the re-learned function depends on the underlying kernel, the difference f(x) - r(x) depends on f(x) and the kernel, and from this point of view we name this work.

[Figure 1 appears here: three panels. Left ("RLS"): the ideal function, the labeled data, and RLS fits for γ = 0, 0.005, 0.01, 0.1. Middle ("The Re-learned Function and the Residual"): f(x), r(x), and f(x) - r(x). Right ("RLS vs PRLS"): RLS with γ = 0.005 versus PRLS with γ = 0, 0.001, 1, 1000.]

Figure 1: Illustration of over penalization. Left diagram: the training set contains 20 points whose x is randomly drawn from the interval [0, 1], whereas the test set contains another 20 points, and y is generated as 1 + 0.005·N(0, 1).
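To make the over-penalization in the left diagram concrete, the following minimal sketch (not the authors' code; the Gaussian kernel width σ = 0.2, the random seed, and the closed-form RLS solver are assumed choices) regenerates the toy data y = 1 + 0.005·N(0, 1) and fits plain RLS for several values of γ; as γ grows the fit is pulled away from the ideal constant 1, while γ = 0 leaves the noise unregularized:

import numpy as np

# Toy experiment of Fig. 1 (left): the ideal function is the constant 1, yet plain RLS
# is pulled away from it as gamma grows, because ||f||_K penalizes the constant itself.
rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, 20)
y_train = 1.0 + 0.005 * rng.standard_normal(20)
x_test = np.linspace(0.0, 1.0, 20)

def rbf(a, b, sigma=0.2):                       # assumed kernel width
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * sigma ** 2))

l = len(x_train)
K = rbf(x_train, x_train)
for gamma in [0.0, 0.005, 0.01, 0.1]:
    # standard RLS: alpha = (K + gamma*l*I)^{-1} y,  f(x) = sum_i alpha_i K(x_i, x)
    alpha = np.linalg.solve(K + gamma * l * np.eye(l) + 1e-10 * np.eye(l), y_train)
    f_test = rbf(x_test, x_train) @ alpha
    # larger gamma shrinks the kernel expansion toward zero, pulling the fit below 1
    print(f"gamma={gamma:6.3f}  max |f(x) - 1| on the test grid: {np.abs(f_test - 1.0).max():.4f}")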
The over-penalized constant function in the term ||f||_K causes the phenomenon that a smaller γ achieves better results. On the other hand, the overfitting observed when γ = 0 shows the necessity of the regularization term. Based on these observations, an appropriate penalization of a function is expected. Middle diagram: r(x) is very smooth, and f(x) - r(x) retains the uneven part of f(x); therefore f(x) - r(x) should be penalized, while f is over-penalized in ||f||_K. Right diagram: the proposed model is stable in that a large variation of γ results in only small changes of the curves, suggesting a right way of penalizing functions.

2 Background

The RKHS theory enables us to express solutions of RLS as kernel expansions in terms of the training data. Here we give a brief description of the concepts; for a complete discussion, see [2]. Let X be a compact domain or manifold, ν be a Borel measure on X, and K : X × X → R be a Mercer kernel. Then there is an associated reproducing kernel Hilbert space (RKHS) H_K of functions X → R with the corresponding norm || · ||_K. H_K satisfies the reproducing property, i.e., for all f ∈ H_K, f(x) = ⟨K_x, f⟩, where K_x is the function K(x, ·). Moreover, an operator L_K can be defined by

(L_K f)(x) = ∫_X f(y) K(x, y) dν(y),

where L^2_ν(X) is the Hilbert space of square integrable functions on X with the scalar product ⟨f, g⟩_ν = ∫_X f(x) g(x) dν(x).

Given a Mercer kernel and a set of labeled examples (x_i, y_i) (i = 1, ..., l), there are two popular inductive learning algorithms: RLS [12, 13] and the Nadaraya-Watson formula [5, 8, 14]. By the standard Tikhonov regularization, RLS is a special case of the following functional extremum problem:

f^* = arg min_{f ∈ H_K} (1/l) Σ_{i=1}^{l} V(x_i, y_i, f) + γ ||f||_K^2,    (1)

where V is some loss function. The Classical Representer Theorem states that the solution to this minimization problem exists in H_K and can be written as

f^*(x) = Σ_{i=1}^{l} α_i K(x_i, x).    (2)

Such a Representer Theorem is general because it plays an important role both in RLS, in the case when V(x, y, f) = (y - f(x))^2, and in SVM, in the case when V(x, y, f) = max(0, 1 - y f(x)).

The Nadaraya-Watson formula is based on local weighted averaging, and it comes in a closed form:

r(x) = Σ_{i=1}^{l} y_i K(x, x_i) / Σ_{i=1}^{l} K(x, x_i).    (3)

The formula has a similar appearance to Eq. (2), but it plays an important role in this paper because we can write it in an integral form, which makes our idea technically feasible, as follows. Let p(x) be a probability density function over X, P(x) be the corresponding cumulative distribution function, and f(x) be an inductive function. We observe that a re-learned function, based on f(x) and P(x), can be expressed as

r(x) = lim_{l→∞} [Σ_{i=1}^{l} f(x_i) K(x, x_i)] / [Σ_{i=1}^{l} K(x, x_i)] = [∫_X f(ξ) K(x, ξ) dP(ξ)] / [∫_X K(x, ξ) dP(ξ)] = L_K(f) / ∫_X K(x, ξ) dP(ξ).    (4)

From this form, we make two points: (1) If r(x) = f(x), then f(x) is completely predicted by itself through the Nadaraya-Watson formula, and so f(x) is considered to be completely consistent with the kernel K(x, y); if r(x) ≠ f(x), then the difference ||f(x) - r(x)||_K can measure how inconsistent f(x) is with the kernel K(x, y). (2) Intuitively, r(x) can also be understood as the smoothed version of f(x) through the kernel K. Consequently, f(x) - r(x) represents the intrinsically uneven part of f(x), which we will penalize. This intuition is illustrated in the middle diagram in Fig. 1.

Throughout this paper, we assume that ∫_X K(x, ξ) dP(ξ) is a constant, and for simplicity all kernels are normalized by K / ∫_X K(x, ξ) dP(ξ), so that r(x) = L_K(f). Moreover, we assume that X is compact, and the measure ν is specified as P(x).
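Before formalizing the penalty, a small numerical sketch may help to fix ideas. Under assumed choices (a Gaussian RBF kernel of width 0.1, a uniform P on [0, 1], and two example functions f; none of this is the authors' code), the empirical re-learned function of Eqs. (3)-(4) reproduces a constant f exactly, while a rough f is visibly smoothed, so that f - r isolates the uneven part that Section 3 will penalize:

import numpy as np

# Empirical re-learned function r(x) of Eq. (4): Nadaraya-Watson smoothing of an
# inductive function f through the kernel, using samples x_i drawn from P.
rng = np.random.default_rng(1)
x_samples = rng.uniform(0.0, 1.0, 2000)          # samples from P (assumed uniform)
grid = np.linspace(0.0, 1.0, 9)

def k_rbf(a, b, sigma=0.1):                      # assumed kernel width
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * sigma ** 2))

def relearn(f, x_eval):
    W = k_rbf(x_eval, x_samples)                 # W[i, j] = K(x_eval_i, x_j)
    return (W @ f(x_samples)) / W.sum(axis=1)    # Eq. (3) with y_i = f(x_i)

f_const = lambda x: np.ones_like(x)              # completely consistent: r = f
f_rough = lambda x: np.sin(2 * np.pi * x) + 0.3 * np.sin(40 * np.pi * x)

print("constant f: max |f - r| =", np.abs(f_const(grid) - relearn(f_const, grid)).max())
print("rough f:    max |f - r| =", np.abs(f_rough(grid) - relearn(f_rough, grid)).max())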
3 Partially-penalized Regularization

For a given kernel K and an inductive function f, L_K(f) is the prediction function produced by K through the Nadaraya-Watson formula. Based on Eq. (1), penalizing the inconsistent part f(x) - L_K(f) leads to the following Partially-penalized Regularization problem:

f^* = arg min_{f ∈ H_K} (1/l) Σ_{i=1}^{l} V(x_i, y_i, f) + γ ||f - L_K(f)||_K^2.    (5)

To obtain a Representer Theorem, we need one assumption.

Assumption 1 Let f_1, f_2 ∈ H_K. If ⟨f_1, f_2⟩_K = 0, then ||f_1 - L_K(f_1) + f_2 - L_K(f_2)||_K^2 = ||f_1 - L_K(f_1)||_K^2 + ||f_2 - L_K(f_2)||_K^2.

It is well known that the operator L_K is compact, self-adjoint, and positive with respect to L^2_ν(X); by the Spectral Theorem [2, 3], its eigenfunctions e_1(x), e_2(x), ... form an orthogonal basis of L^2_ν(X), and the corresponding eigenvalues λ_1 ≥ λ_2 ≥ ... are either finitely many nonzero, or infinitely many, in which case λ_k → 0. Let f_1 = Σ_i a_i e_i(x) and f_2 = Σ_i b_i e_i(x); then f_1 - L_K(f_1) = Σ_i a_i e_i(x) - L_K(Σ_i a_i e_i(x)) = Σ_i a_i e_i(x) - Σ_i λ_i a_i e_i(x) = Σ_i (1 - λ_i) a_i e_i(x), and similarly f_2 - L_K(f_2) = Σ_i (1 - λ_i) b_i e_i(x). By the discussion in [1], we have ⟨e_i, e_j⟩ = 0 if i ≠ j and ⟨e_i, e_i⟩ = 1, while ⟨e_i, e_j⟩_K = 0 if i ≠ j and ⟨e_i, e_i⟩_K = 1/λ_i. If we consider the situation where a_i, b_i ≥ 0 for all i ≥ 1, then ⟨f_1, f_2⟩_K = 0 implies that a_i b_i = 0 for all i ≥ 1, and consequently ⟨f_1 - L_K(f_1), f_2 - L_K(f_2)⟩_K = Σ_i (1 - λ_i)^2 a_i b_i ⟨e_i(x), e_i(x)⟩_K = 0. Therefore, under some constraints, this assumption holds as a fact.

Under this assumption, we have a Representer Theorem:

Theorem 2 Under Assumption 1, the minimizer of the optimization problem in Eq. (5) admits an expansion

f^*(x) = Σ_{j=1}^{o} β_j μ_j(x) + Σ_{i=1}^{l} α_i K(x_i, x),    (6)

where {μ_j(x)} is a basis of the null space H_0 of the operator I - L_K, i.e., H_0 = {f ∈ H_K | f - L_K(f) = 0}.

Proof of the Representer Theorem. Any function f ∈ H_K can be uniquely decomposed into a component f_∥ in the linear subspace spanned by the kernel functions {K(x_i, ·)}_{i=1}^{l} and a component f_⊥ orthogonal to it. Thus, f = f_∥ + f_⊥ = Σ_{i=1}^{l} α_i K(x_i, ·) + f_⊥. By the reproducing property and the fact that ⟨f_⊥, K(x_i, ·)⟩ = 0 for 1 ≤ i ≤ l, we have

f(x_j) = ⟨f, K(x_j, ·)⟩ = Σ_{i=1}^{l} α_i ⟨K(x_i, ·), K(x_j, ·)⟩ + ⟨f_⊥, K(x_j, ·)⟩ = Σ_{i=1}^{l} α_i ⟨K(x_i, ·), K(x_j, ·)⟩.

Thus the empirical terms involving the loss function in Eq. (5) depend only on the values of the coefficients {α_i}_{i=1}^{l} and the Gram matrix of the kernel function. By Assumption 1, we have

||f - L_K(f)||_K^2 = ||Σ_{i=1}^{l} α_i K(x_i, ·) - L_K(Σ_{i=1}^{l} α_i K(x_i, ·))||_K^2 + ||f_⊥ - L_K(f_⊥)||_K^2 ≥ ||Σ_{i=1}^{l} α_i K(x_i, ·) - L_K(Σ_{i=1}^{l} α_i K(x_i, ·))||_K^2.

It follows that the minimizer of Eq. (5) must have ||f_⊥ - L_K(f_⊥)||_K^2 = 0, and therefore admits a representation f^*(x) = f_⊥ + Σ_{i=1}^{l} α_i K(x_i, x) = Σ_{j=1}^{o} β_j μ_j(x) + Σ_{i=1}^{l} α_i K(x_i, x).
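Using Theorem 2 in practice requires a basis {μ_j} of H_0, i.e., the fixed points of the smoothing operator f ↦ L_K(f). As a purely numerical illustration (a discretized Nadaraya-Watson smoother for a Gaussian RBF kernel on [0, 1] with uniform P; these are assumed choices, not part of the theory above), the eigenvectors of the discretized operator with eigenvalue 1 approximate this null space, and here the recovered function is essentially the constant function, in line with Section 4.2:

import numpy as np

# Numerical sketch of H_0 = {f : f = L_K f} for a normalized Gaussian RBF kernel
# on X = [0, 1] with uniform P, with L_K approximated by quadrature on a grid.
n = 400
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / n)                                # quadrature weights for dP
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.1 ** 2))
K_norm = K / (K @ w)[:, None]                          # K / \int K(x, xi) dP(xi)
A = K_norm * w[None, :]                                # (A f)_i approximates (L_K f)(x_i)

vals, vecs = np.linalg.eig(A)
idx = np.argmin(np.abs(vals - 1.0))                    # eigenvalue closest to 1
mu = np.real(vecs[:, idx])
mu = mu / mu[np.argmax(np.abs(mu))]                    # rescale for readability
print("eigenvalue closest to 1:", np.real(vals[idx]))
print("variation of mu over [0, 1]:", mu.max() - mu.min())   # ~0: mu is (numerically) a constant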
3.1 Partially-penalized Regularized Least Squares (PRLS) Algorithm

In this section, we focus our attention on the case where V(x_i, y_i, f) = (y_i - f(x_i))^2, i.e., the Regularized Least Squares algorithm. In our setting, we aim to solve

min_{f ∈ H_K} (1/l) Σ_{i=1}^{l} (y_i - f(x_i))^2 + γ ||f - L_K(f)||_K^2.    (7)

By the Representer Theorem, the solution to Eq. (7) has the form

f^*(x) = Σ_{j=1}^{o} β_j μ_j(x) + Σ_{i=1}^{l} α_i K(x_i, x).    (8)

By the proof of Theorem 2, we have f_⊥ = Σ_{j=1}^{o} β_j μ_j(x) and ⟨f_⊥, Σ_{i=1}^{l} α_i K(x_i, x)⟩_K = 0. By Assumption 1 and the fact that f_⊥ belongs to the null space H_0 of the operator I - L_K, we have

||f^* - L_K(f^*)||_K^2 = ||f_⊥ - L_K(f_⊥)||_K^2 + ||Σ_{i=1}^{l} α_i K(x_i, x) - L_K(Σ_{i=1}^{l} α_i K(x_i, x))||_K^2
 = ||Σ_{i=1}^{l} α_i K(x_i, x) - Σ_{i=1}^{l} α_i L_K(K(x_i, x))||_K^2 = α^T (K - 2K̃ + K̄) α,    (9)

where α = [α_1, α_2, ..., α_l]^T, K is the l × l Gram matrix with K_ij = K(x_i, x_j), and K̃ and K̄ are reconstructed l × l matrices with K̃_ij = ⟨K(x_i, x), L_K(K(x_j, x))⟩_K and K̄_ij = ⟨L_K(K(x_i, x)), L_K(K(x_j, x))⟩_K. Substituting Eq. (8) and Eq. (9) into the problem in Eq. (7), we arrive at the following quadratic objective function of the l-dimensional variable α and the o-dimensional variable β = [β_1, β_2, ..., β_o]^T:

[α^*, β^*] = arg min_{α, β} (1/l) (Y - Kα - Φβ)^T (Y - Kα - Φβ) + γ α^T (K - 2K̃ + K̄) α,    (10)

where Φ is the l × o matrix with Φ_ij = μ_j(x_i), and Y = [y_1, y_2, ..., y_l]^T. Taking derivatives with respect to α and β, and using the fact that the derivative of the objective function vanishes at the minimizer, we obtain

(γ l (K - 2K̃ + K̄) + K^2) α + KΦβ = KY,    Φ^T (Y - Kα - Φβ) = 0.    (11)

In the term ||f - L_K(f)||_K, L_K(f) is subtracted from f, and so f is only partially penalized. For this reason, the resulting algorithm is referred to as the Partially-penalized Regularized Least Squares (PRLS) algorithm.

3.2 The PLapRLS Algorithm

The idea in the previous section can also be extended to LapRLS in the manifold regularization framework [1]. In the manifold setting, the smoothness over the data adjacency graph should be considered, and Eq. (5) is modified as

f^* = arg min_{f ∈ H_K} (1/l) Σ_{i=1}^{l} V(x_i, y_i, f) + γ_A ||f - L_K(f)||_K^2 + [γ_I / (u + l)^2] Σ_{i,j=1}^{l+u} (f(x_i) - f(x_j))^2 W_ij,    (12)

where W_ij are edge weights in the data adjacency graph. From W, the graph Laplacian L is given by L = D - W, where D is the diagonal matrix with D_ii = Σ_{j=1}^{l+u} W_ij. For this optimization problem, the result in Theorem 2 can be modified slightly as:

Theorem 3 Under Assumption 1, the minimizer of the optimization problem in Eq. (12) admits an expansion

f^*(x) = Σ_{j=1}^{o} β_j μ_j(x) + Σ_{i=1}^{l+u} α_i K(x_i, x).    (13)

Following Eq. (13), we continue to optimize the (l + u)-dimensional variable α = [α_1, α_2, ..., α_{l+u}]^T and the o-dimensional variable β = [β_1, β_2, ..., β_o]^T. In a similar way to the previous section and to LapRLS in [1], α and β are determined by the following linear system:

(KJK + γ_1 (K - 2K̃ + K̄) + γ_2 KLK) α + (KJΦ + γ_2 KLΦ) β = KJY,
(Φ^T JK + γ_2 Φ^T LK) α + (Φ^T JΦ + γ_2 Φ^T LΦ) β = Φ^T JY,    (14)

with γ_1 = γ_A l and γ_2 = γ_I l/(u + l)^2, where K, K̃, K̄ are the (l + u) × (l + u) Gram matrices over labeled and unlabeled points; Y is the (l + u)-dimensional label vector Y = [y_1, y_2, ..., y_l, 0, ..., 0]^T; J is the (l + u) × (l + u) diagonal matrix J = diag(1, 1, ..., 1, 0, ..., 0), with the first l diagonal entries equal to 1 and the rest 0; and Φ is the (l + u) × o matrix with Φ_ij = μ_j(x_i).
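To make Section 3.1 concrete, the following sketch solves the linear system of Eq. (11) in the special case of a Gaussian heat kernel on R^m, for which K̃ and K̄ reduce to K_{2t} and K_{3t} as derived in Section 4.1 below, and with H_0 taken to contain only constant functions (an assumption tied to the uniform-distribution condition of Section 4.2). It is an illustration under these assumptions, not the authors' implementation:

import numpy as np

def heat_kernel(A, B, t, m):
    """Gaussian heat kernel K_t(x, y) = (4*pi*t)^(-m/2) exp(-||x - y||^2 / (4t)) on R^m."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return (4.0 * np.pi * t) ** (-m / 2.0) * np.exp(-d2 / (4.0 * t))

def prls_fit(X, Y, t, gamma):
    """Solve Eq. (11) with H_0 spanned by the constant function (Phi = column of ones)."""
    l, m = X.shape
    K   = heat_kernel(X, X, t, m)           # K_ij  = K_t(x_i, x_j)
    Kt_ = heat_kernel(X, X, 2 * t, m)       # K~_ij = K_{2t}(x_i, x_j)   (Section 4.1)
    Kb  = heat_kernel(X, X, 3 * t, m)       # K-_ij = K_{3t}(x_i, x_j)
    Phi = np.ones((l, 1))
    top = np.hstack([gamma * l * (K - 2 * Kt_ + Kb) + K @ K, K @ Phi])
    bot = np.hstack([Phi.T @ K, Phi.T @ Phi])
    sol = np.linalg.solve(np.vstack([top, bot]), np.concatenate([K @ Y, Phi.T @ Y]))
    alpha, beta = sol[:l], sol[l:]
    return lambda Xnew: heat_kernel(Xnew, X, t, m) @ alpha + beta[0]

# Usage on the constant-function toy data of Fig. 1: even for a very large gamma,
# the fit should stay close to the unpenalized constant 1.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (20, 1)); Y = 1 + 0.005 * rng.standard_normal(20)
f = prls_fit(X, Y, t=0.02, gamma=1000.0)
print("max |f - 1| on a test grid:", np.abs(f(np.linspace(0, 1, 50)[:, None]) - 1).max())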
4 Discussions

4.1 Heat Kernels and the Computation of K̃ and K̄

In this section we illustrate the computation of K̃ and K̄ in the case of heat kernels. The basic facts about heat kernels are excerpted from [6]; for more material, see [10]. Given a manifold M and points x and y, the heat kernel K_t(x, y) is a special solution to the heat equation with a special initial condition called the delta function δ(x - y). More specifically, δ(x - y) describes a unit heat source at position y with no heat at other positions; namely, δ(x - y) = 0 for x ≠ y and ∫_{-∞}^{+∞} δ(x - y) dx = 1. If we let f_0(x) = δ(x - y), then K_t(x, y) is a solution to the following differential equation on a manifold M:

∂f/∂t - Lf = 0,    f(x, 0) = f_0(x),    (15)

where f(x, t) is the temperature at location x at time t, beginning with an initial distribution f_0(x) at time zero, and L is the Laplace-Beltrami operator. Equation (15) describes the heat flow throughout a geometric manifold with initial conditions.

Theorem 4 Let M be a complete Riemannian manifold. Then there exists a function K ∈ C^∞(R^+ × M × M), called the heat kernel, which satisfies the following properties for all x, y ∈ M, with K_t(x, y) = K(t, x, y): (1) K_t(x, y) defines a Mercer kernel; (2) K_t(x, y) = ∫_M K_{t-s}(x, z) K_s(z, y) dz for any 0 < s < t; (3) the solution to Eq. (15) is f(x, t) = ∫_M K_t(x, y) f_0(y) dy; (4) 1 = ∫_M K_t(x, y) · 1 dy; and (5) when M = R^m, Lf simplifies to Σ_i ∂²f/∂x_i², and the heat kernel takes the Gaussian RBF form K_t(x, y) = (4πt)^{-m/2} e^{-||x-y||²/(4t)}.

K̃_ij can be computed as follows:

K̃_ij = ⟨K_t(x_i, x), L_K(K_t(x_j, x))⟩_K    (by definition)
      = L_K(K_t(x_j, x))|_{x = x_i}    (by the reproducing property of a Mercer kernel)
      = ∫_X K_t(x_j, y) K_t(x_i, y) dν(y)    (by the definition of L_K)
      = K_{2t}(x_i, x_j)    (by Property 2 in Theorem 4).    (16)

Based on the fact that L_K is self-adjoint, we can similarly derive K̄_ij = K_{3t}(x_i, x_j). For other kernels, K̃ and K̄ can also be computed.
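As a quick sanity check of the identity used in Eq. (16) (a numerical illustration only, for M = R^1 with assumed values of t, x_i, x_j), the integral ∫ K_t(x_i, y) K_t(x_j, y) dy can be compared against K_{2t}(x_i, x_j) by simple quadrature:

import numpy as np

# Numerical check of the identity behind Eq. (16) on M = R^1:
#   integral of K_t(xi, y) * K_t(xj, y) dy  ==  K_{2t}(xi, xj)
def K_t(x, y, t):
    return (4.0 * np.pi * t) ** -0.5 * np.exp(-(x - y) ** 2 / (4.0 * t))

t, xi, xj = 0.05, 0.3, 0.8
y = np.linspace(-20.0, 20.0, 200001)                  # wide grid so the tails are negligible
lhs = np.sum(K_t(xi, y, t) * K_t(xj, y, t)) * (y[1] - y[0])   # Riemann-sum quadrature
rhs = K_t(xi, xj, 2 * t)                              # K_{2t}(xi, xj)
print(lhs, rhs)                                       # the two values agree up to quadrature error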
4.2 What should not be penalized?

From Theorem 2, we know that the functions in the null space H_0 = {f ∈ H_K | f - L_K(f) = 0} should not be penalized. Although there may be looser assumptions that can guarantee the validity of the result in Theorem 2, there are two assumptions in this work: X is compact, and ∫_X K(x, ξ) dP(ξ) in Eq. (4) is a constant. Next we discuss constant functions and linear functions.

Should constant functions be penalized? Under the two assumptions, a constant function c should not be penalized, because c = ∫_X c K(x, ξ) p(ξ) dξ / ∫_X K(x, ξ) p(ξ) dξ, i.e., c ∈ H_0. For heat kernels, if P(x) is uniformly distributed on M, then by Property 4 in Theorem 4, ∫_X K(x, ξ) dP(ξ) is a constant, and so c should not be penalized. For polynomial kernels, the theory cannot guarantee that constant functions should not be penalized, even with a uniform distribution P(x). For example, consider the polynomial kernel xy + 1 on the interval X = [0, 1] with the uniform distribution on X: ∫_X (xy + 1) dP(y) = ∫_0^1 (xy + 1) dy = x/2 + 1 is not a constant. As a counter example, we will show in Section 5.3 that not penalizing constant functions for polynomial kernels results in much worse accuracy. The reason for this phenomenon is that constant functions may not be smooth in the feature space produced by the polynomial kernel under some distributions. The reader can construct an example of p(x) for which ∫_0^1 (xy + 1) dP(y) happens to be a constant.

Should the linear function a^T x be penalized? In the case when X is a closed ball B_r with radius r, when P(x) is uniformly distributed over B_r, and when K is the Gaussian RBF kernel, a^T x should not be penalized when r is big enough.¹ Since r is big enough, we have ∫_{B_r} · dx ≈ ∫_{R^n} · dx and ∫_{B_r} K_t(x, y) dy ≈ 1, and so a^T x = ∫_{R^n} K_t(x, y) a^T y dy ≈ ∫_{B_r} K_t(x, y) a^T y dy ≈ L_K(a^T x). Consequently ||a^T x - L_K(a^T x)||_K will be small enough, and so the linear function a^T x need not be penalized. For other kernels, other spaces, or other P_X, the conclusion may not be true.

¹Note that a subset of R^n is compact if and only if it is closed and bounded. Since R^n is not bounded, it is not compact, and so the Representer Theorem cannot be established on it. This is the reason why we cannot work with R^n directly.
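The two cases above are easy to probe numerically. The sketch below (an illustration only; the grid, the kernel width, and the uniform P on [0, 1] are assumptions) approximates m(x) = ∫_X K(x, ξ) dP(ξ): for the polynomial kernel xy + 1 it reproduces x/2 + 1 and clearly varies with x, whereas for a narrow Gaussian kernel it is nearly constant away from the boundary, which is the condition under which constant functions belong to H_0:

import numpy as np

# Approximate m(x) = \int_0^1 K(x, xi) dP(xi) for a uniform P on [0, 1],
# for the polynomial kernel x*y + 1 and for a (narrow) Gaussian RBF kernel.
xi = np.linspace(0.0, 1.0, 100001)
w = 1.0 / len(xi)                                    # quadrature weight for dP
x_grid = np.linspace(0.2, 0.8, 7)                    # interior points, away from the boundary

m_poly = np.array([np.sum((x * xi + 1.0) * w) for x in x_grid])                  # -> x/2 + 1
m_rbf = np.array([np.sum(np.exp(-(x - xi) ** 2 / (2 * 0.05 ** 2)) * w) for x in x_grid])

print("polynomial kernel:", np.round(m_poly, 3))     # clearly varies with x
print("Gaussian kernel:  ", np.round(m_rbf, 3))      # essentially constant in the interior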
5 Experiments

In this section, we evaluate the proposed algorithms PRLS and PLapRLS on a toy dataset (size: 40), a medium-sized dataset (size: 3,119), and a large-sized dataset (size: 20,000), and provide a counter example for constant functions on another dataset (size: 9,298). We use Gaussian RBF kernels on the first three datasets, and polynomial kernels to provide a counter example on the last one. Without any prior knowledge about the data distribution, we assume that the examples are uniformly distributed, and so constant functions are considered to be in H_0 for the Gaussian RBF kernel, but linear functions are not, since it is rare for data to be distributed uniformly on a large ball. The data and results for the toy dataset are illustrated in the left and right diagrams in Fig. 1.

5.1 UCI Dataset Isolet about Spoken Letter Recognition

We follow the same semi-supervised setting as in [1] to compare RLS with PRLS, and LapRLS with PLapRLS, on the Isolet database. The dataset contains utterances of 150 subjects who pronounced the name of each letter of the English alphabet twice. The speakers were grouped into 5 sets of 30 speakers each. The data of the first 30 speakers forms a training set of 1,560 examples, and that of the last 29 speakers forms the test set. The task is to distinguish the first 13 letters from the last 13. To simulate a real-world situation, we consider 30 binary classification problems corresponding to 30 splits of the training data, where all 52 utterances of one speaker were labeled and all the rest were left unlabeled. All the algorithms use Gaussian RBF kernels. For RLS and LapRLS, the results were obtained with width σ = 10, γl = 0.05, and γ_A l = γ_I l/(u + l)² = 0.005. For PRLS and PLapRLS, the results were obtained with width σ = 4, γl = 0.01, and γ_A l = γ_I l/(u + l)² = 0.01. In Fig. 2, we can see that both PRLS and PLapRLS achieve significant performance improvements over their corresponding counterparts on both the unlabeled data and the test set.

[Figure 2 appears here: four panels, "RLS vs PRLS" and "LapRLS vs PLapRLS", plotting error rates on the unlabeled set and on the test set against the number of labeled speakers.]

Figure 2: Isolet experiment.

5.2 UCI Dataset Letter about Printed Letter Recognition

In Dataset Letter, there are 16 features for each example, and there are 26 classes representing the upper-case printed letters. The first 400 examples were taken to form the training set; the remaining 19,600 examples form the test set. The parameters are set as follows: σ = 1, γl = γ_A(l + u) = 0.25, and γ_I l/(u + l)² = 0.05. For each of the four algorithms RLS, PRLS, LapRLS, and PLapRLS, for each of the 26 one-versus-all binary classification tasks, and for each of 10 runs, two examples of each class were randomly labeled. For each algorithm, the averages over all 260 one-versus-all binary classification error rates on the 398 unlabeled examples and on the test set are, respectively: (5.79%, 5.23%) for RLS, (5.12%, 4.77%) for PRLS, (0%, 2.96%) for LapRLS, and (0%, 3.15%) for PLapRLS. From the results, we can see that RLS is improved on both the unlabeled examples and the test set. The fact that there is no error in any of the 260 tasks for LapRLS and PLapRLS on the unlabeled examples suggests that the data is distributed on a curved manifold. On a curved manifold, the heat kernels do not take the Gaussian RBF form, and so PLapRLS using the Gaussian RBF form cannot achieve its best performance. This is the reason why we observe that PLapRLS is slightly worse than LapRLS on the test set, and it suggests the need for further investigation of heat kernels on manifolds.

5.3 A Counter Example in Handwritten Digit Recognition

Polynomial kernels of degree 3 were used on the USPS dataset in [1], with 2 images of each class randomly labeled, and we follow the same experimental setting. For RLS, if we use Eq. (2), then the averages of the 45 pairwise binary classification error rates are 8.83% and 8.41% for the 398 unlabeled images and the 8,898 images in the test set, respectively. If constant functions are not penalized, then we should use f(x) = Σ_{i=1}^{l} α_i K(x_i, x) + a, and the corresponding error rates are 9.75% and 9.09%, respectively. By this example, we show that leaving constant functions outside the regularization term can be dangerous; fortunately, we have a theory to guide this choice in Section 4: if X is compact and ∫_X K(x, ξ) dP(ξ) in Eq. (4) is a constant, then constant functions should not be penalized.

6 Conclusion

A novel learning scheme is proposed based on a new viewpoint of penalizing the part of an inductive function that is inconsistent with the kernel. On the theoretical side, we make three claims: (1) on a compact domain or manifold, if the denominator in Eq. (4) is a constant, then there is a new Representer Theorem; (2) the same conditions give a sufficient condition under which constant functions should not be penalized; and (3) under the same conditions, a function belongs to the null space if and only if the function should not be penalized. Empirically, we claim that the novel learning scheme can achieve accuracy improvements in practical applications.

Acknowledgments

The work described in this paper was supported by two grants from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. CUHK4150/07E and Project No. CUHK4235/04E). The first author would like to thank Hao Ma for his helpful suggestions, and Kun Zhang and Wenye Li for useful discussions.

References

[1] Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7:2399–2434, 2006.
[2] F. Cucker and S. Smale. On the mathematical foundations of learning. Bulletin (New Series) of the American Mathematical Society, 39(1):1–49, 2002.
[3] Lokenath Debnath and Piotr Mikusinski. Introduction to Hilbert Spaces with Applications. Academic Press, San Diego, second edition, 1999.
[4] T. Evgeniou, M. Pontil, and T. Poggio. Regularization networks and support vector machines. Advances in Computational Mathematics, 13:1–50, 2000.
[5] T. Hastie and C. Loader. Local regression: Automatic kernel carpentry. Statistical Science, 8(1):120–129, 1993.
[6] John Lafferty and Guy Lebanon. Diffusion kernels on statistical manifolds. Journal of Machine Learning Research, 6:129–163, 2005.
[7] Wenye Li, Kin-Hong Lee, and Kwong-Sak Leung. Generalized regularized least-squares learning with predefined features in a Hilbert space. In NIPS, 2006.
[8] E. A. Nadaraya. On estimating regression. Theory of Probability and Its Applications, 9(1):141–142, 1964.
[9] R. M. Rifkin and R. A. Lippert. Notes on regularized least-squares. Technical Report 2007-019, Massachusetts Institute of Technology, 2007.
[10] S. Rosenberg. The Laplacian on a Riemannian Manifold. Cambridge University Press, 1997.
[11] Bernhard Schölkopf, Ralf Herbrich, and Alex J. Smola. A generalized representer theorem. In COLT, 2001.
[12] I. J. Schoenberg. Spline functions and the problem of graduation. Proc. Nat. Acad. Sci. USA, 52:947–950, 1964.
[13] A. N. Tikhonov and V. Y. Arsenin. Solutions of Ill-posed Problems. W. H. Winston, 1977.
[14] G. S. Watson. Smooth regression analysis. Sankhyā, Series A, 26:359–372, 1964.