Learning with Consistency between Inductive Functions and Kernels (W41)
Haixuan Yang, Irwin King & Michael R. Lyu
Department of Computer Science & Engineering, The Chinese University of Hong Kong

Regularized Least Squares (RLS):

$$\min_{f \in \mathcal{H}_K} \; \frac{1}{l} \sum_{i=1}^{l} \big( f(x_i) - y_i \big)^2 + \lambda \, \| f \|_K^2$$

Motivating example: the data are generated by 1000 + 2·N(0,1), so the ideal learned function is the constant y = 1000. But...

Problem: f(x) is over-penalized in RLS, because the regularizer charges even the ideal constant function for its full RKHS norm.

Our solution: penalize only the inconsistency between f and the kernel, through the integral operator L_K:

$$\min_{f \in \mathcal{H}_K} \; \frac{1}{l} \sum_{i=1}^{l} \big( f(x_i) - y_i \big)^2 + \lambda \, \| f - L_K(f) \|_K^2$$

See the paper or visit us if you want to know:
- What is the minimizer?
- What is the consistency between inductive functions and kernels?
- Are constant functions left unpenalized in every case?
- Should linear functions be penalized?
- Why are heat kernels interesting?
- How does the solution generalize to semi-supervised learning or manifold learning?
- When is f over-penalized, and when under-penalized?
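A minimal numerical sketch of the motivating example. All concrete choices here (Gaussian RBF kernel, sigma = 1, lambda = 0.1, l = 50 samples) are assumptions not stated on the poster; the point is only that the kernel RLS fit to data drawn from 1000 + 2·N(0,1) lands noticeably below the ideal constant 1000.

```python
import numpy as np

rng = np.random.default_rng(0)
l = 50
X = rng.uniform(-1.0, 1.0, size=(l, 1))       # inputs (arbitrary choice)
y = 1000.0 + 2.0 * rng.standard_normal(l)     # targets: 1000 + 2*N(0,1)

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian RBF kernel: K[i, j] = exp(-||a_i - b_j||^2 / (2 sigma^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

K = rbf_kernel(X, X)
lam = 0.1  # assumed regularization strength

# Representer theorem: f = sum_i alpha_i K(x_i, .); minimizing
# (1/l) sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2 gives
# alpha = (K + l * lam * I)^{-1} y.
alpha = np.linalg.solve(K + l * lam * np.eye(l), y)
f_hat = K @ alpha

print(f"ideal constant: 1000, mean RLS fit: {f_hat.mean():.1f}")
# The fit is pulled toward 0: ||f||_K^2 charges the constant function
# 1000 for its full RKHS norm, i.e. f is over-penalized.
```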
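And a correspondingly hedged sketch of the consistency idea. This is not the paper's minimizer (that is one of the teaser questions above): it substitutes a row-normalized kernel smoother for L_K and the Euclidean norm on sample values for ||·||_K, a simplified finite-sample surrogate. Because the smoother preserves constants, the regularizer no longer charges the constant function, and the fit stays near 1000.

```python
import numpy as np

rng = np.random.default_rng(0)
l = 50
X = rng.uniform(-1.0, 1.0, size=(l, 1))
y = 1000.0 + 2.0 * rng.standard_normal(l)

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-d2 / 2.0)                  # Gaussian RBF kernel, sigma = 1 (assumed)
lam = 0.1

S = K / K.sum(axis=1, keepdims=True)   # row-stochastic smoother: S @ ones = ones
B = np.eye(l) - S                      # deviation of f from its smoothed version

# Surrogate objective over the sample values f in R^l:
#   (1/l) * ||f - y||^2 + lam * ||(I - S) f||^2
# Setting the gradient to zero gives f = (I + l*lam*B^T B)^{-1} y.
f_cons = np.linalg.solve(np.eye(l) + l * lam * B.T @ B, y)

print(f"mean consistency-regularized fit: {f_cons.mean():.1f}")
# S c = c for any constant c, so (I - S) c = 0: the constant function
# escapes the penalty entirely, and the fit stays near y = 1000.
```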