Regularized Boost for Semi-supervised Learning

Ke Chen and Shihai Wang
School of Computer Science, The University of Manchester, Manchester M13 9PL, U.K.

Introduction
- We introduce a local smoothness regularizer into semi-supervised boosting.

Methodology

- Regularized universal margin cost functional of boosting:

  $$\tilde{\Phi}(F, f) = -\langle \nabla C(F), f \rangle - \sum_{x_i \in L \cup U} \epsilon_i \sum_{x_j \in L \cup U,\, j \neq i} W_{ij}\, C(-|y_i - y_j|), \qquad W_{ij} = \exp\!\left(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\right).$$

- Empirical data distribution encoding the local smoothness constraints:

  $$\tilde{D}(i) = \frac{\alpha_i\, C[y_i F(x_i)] - \epsilon_i \sum_{x_j \in L \cup U,\, j \neq i} W_{ij}\, C(-|y_i - y_j|)}{\sum_{x_i \in L \cup U} \left\{ \alpha_i\, C[y_i F(x_i)] - \epsilon_i \sum_{x_j \in L \cup U,\, j \neq i} W_{ij}\, C(-|y_i - y_j|) \right\}}, \qquad 1 \le i \le |L| + |U|.$$

- Maximizing $\tilde{\Phi}(F, f)$ is equivalent to finding a base learner $f$ that minimizes

  $$2 \sum_{f(x_i) \neq y_i} \tilde{D}(i) \;+\; 2 \sum_{f(x_i) = y_i} \frac{\epsilon_i \sum_{x_j \in L \cup U,\, j \neq i} W_{ij}\, C(-|y_i - y_j|)}{\sum_{x_i \in L \cup U} \left\{ \alpha_i\, C[y_i F(x_i)] - \epsilon_i \sum_{x_j \in L \cup U,\, j \neq i} W_{ij}\, C(-|y_i - y_j|) \right\}} \;-\; 1,$$

  where the first term accounts for misclassification and the second for the local class label incompatibility of correctly classified examples.
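As a concrete illustration, below is a minimal sketch (ours, not the poster's) of one regularized boosting round, assuming binary labels in {-1, +1}, the exponential margin cost C(z) = exp(-z), pseudo-labels sign(F(x_i)) for unlabeled points, and scikit-learn decision stumps as base learners; all function and variable names are hypothetical.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def regularized_boost_round(X, y, F, alpha, eps, sigma=1.0):
        """One boosting round with the local smoothness regularizer.

        X     : (n, d) pooled labeled and unlabeled inputs (L u U)
        y     : (n,) labels in {-1, +1}; pseudo-labels sign(F) for U
        F     : (n,) current ensemble outputs F(x_i)
        alpha : (n,) per-example weights, e.g. alpha_i = P(x_i)
        eps   : (n,) regularization coefficients epsilon_i
        """
        # affinity W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)), with W_ii = 0
        sq_dist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
        W = np.exp(-sq_dist / (2.0 * sigma ** 2))
        np.fill_diagonal(W, 0.0)

        C = lambda z: np.exp(-z)  # exponential margin cost (our assumption)

        # local class label incompatibility: R_i = sum_j W_ij * C(-|y_i - y_j|)
        R = (W * C(-np.abs(y[:, None] - y[None, :]))).sum(axis=1)

        # empirical distribution D~(i) from the definition above
        w = alpha * C(y * F) - eps * R
        w = np.clip(w, 1e-12, None)  # guard against negative weights (our safeguard)
        D = w / w.sum()

        # base learner trained on the D~-weighted sample (a decision stump here)
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=D)
        return stump, D

Fitting the stump on the D~-weighted sample targets the first (misclassification) term of the objective; a fuller implementation would also score candidate base learners on the second (incompatibility) term before selecting f.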


Conclusion
- Experiments on synthetic, benchmark, and real data sets demonstrate its effectiveness.
- Input data distribution information can be incorporated via $\alpha_i = P(x_i)$ (see the sketch below).
- Our method can be related to graph-based semi-supervised learning algorithms.
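For the second point, one illustrative option (an assumption on our part, not stated on the poster) is to estimate alpha_i = P(x_i) with a Gaussian kernel density estimate over all inputs; the estimator and bandwidth are placeholders.

    import numpy as np
    from sklearn.neighbors import KernelDensity

    def density_weights(X, bandwidth=0.5):
        # Gaussian KDE over all inputs; score_samples returns log-densities
        kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(X)
        return np.exp(kde.score_samples(X))  # alpha_i ~ P(x_i) under the KDE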
Poster ID: T29

NIPS'07