The Institute of Statistical Mathematics

The 1st Statistical Machine Learning Seminar

Date
Monday, November 29, 2010, 15:00-17:10
Venue
Seminar Room 5, The Institute of Statistical Mathematics (3rd floor, rooms D313 and D314)
Note: The talks will be given in English.
Speaker 1
Francesco Dinuzzo (Università di Pavia, Italy)
Title
Kernel machines with two layers
Abstract
Recently, the problem of "learning the kernel" has been attacked using a variety of techniques, such as semi-definite programming, hyper-kernels, and multiple kernel learning (MKL). In this talk, we will introduce "kernel machines with two layers", an extension of the classical framework of regularization in Reproducing Kernel Hilbert Spaces (RKHS) that provides a formal connection between multi-layer computational architectures and the theme of learning the kernel function from the data. We will present a representer theorem showing that, for a suitable class of regularization problems, optimal solutions can be expressed as the composition of two functions (layers). A variety of learning methodologies, including methods based on ℓ1-norm regularization, can be interpreted as kernel machines with two layers. Finally, we will show that a general class of MKL algorithms consists of particular instances of kernel machines with two layers in which the second layer is a linear function.
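As a notational sketch (our illustration, not a formula from the abstract): over training inputs x_1, ..., x_n, the representer theorem above says that an optimal solution can be written as the composition of two kernel expansions, e.g.

  g(x) = \sum_{j=1}^{n} c_j \, K_1(x_j, x), \qquad
  f(x) = \sum_{i=1}^{n} a_i \, K_2\bigl( g(x_i), \, g(x) \bigr),

where K_1 and K_2 are the kernels of the inner and outer layers and the coefficients c_j, a_i are determined by the data; the precise hypothesis space and regularizer are those of the speaker's framework.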
Speaker 2
Ryota Tomioka (University of Tokyo)
(joint work with Taiji Suzuki)
Title
Regularization Strategies and Empirical Bayesian Learning for MKL
Abstract
Multiple kernel learning (MKL) has received considerable attention recently.
In this talk, we show how different MKL algorithms can be understood as applications of different types of regularization on the kernel weights. More specifically, we show an *exact* correspondence between the Ivanov and Tikhonov regularization strategies for Lp-norm MKL proposed by Kloft et al. (2009), and propose a generalized block-norm formulation, which is connected to the Tikhonov regularization form through a convex upper-bounding technique.
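To make the two strategies concrete (an illustrative sketch in our own notation, not an excerpt from the paper): writing the predictor as f = \sum_m f_m, with f_m in the RKHS H_m of the m-th kernel and nonnegative kernel weights \theta_m, the Ivanov strategy constrains the weight vector while the Tikhonov strategy penalizes it:

  \text{(Ivanov)} \quad \min_{f, \, \theta \ge 0} \; L(f) + \frac{C}{2} \sum_m \frac{\| f_m \|_{H_m}^2}{\theta_m} \quad \text{s.t.} \quad \| \theta \|_p \le 1,

  \text{(Tikhonov)} \quad \min_{f, \, \theta \ge 0} \; L(f) + \frac{C}{2} \sum_m \frac{\| f_m \|_{H_m}^2}{\theta_m} + \mu \, \| \theta \|_p,

where L is the empirical loss; the correspondence asserted above is that the two problems yield the same solutions for suitably matched (C, \mu).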
Within the regularization view taken in this talk, the Tikhonov formulation of MKL allows us to consider a generative probabilistic model behind MKL. Based on this model, we propose learning algorithms for the kernel weights through maximization of the marginal likelihood.
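As a rough sketch of the empirical Bayesian idea (a generic Gaussian model with hypothetical function names and toy data, not the authors' algorithm): if the outputs are modeled as y ~ N(0, \sum_m \theta_m K_m + \sigma^2 I), the kernel weights \theta can be learned by maximizing the log marginal likelihood, for example:

  import numpy as np
  from scipy.optimize import minimize

  def neg_log_marginal_likelihood(log_theta, Ks, y, sigma2=0.1):
      # Negative log marginal likelihood of y ~ N(0, sum_m theta_m K_m + sigma2 I),
      # up to an additive constant; the log parameterization keeps theta positive.
      theta = np.exp(log_theta)
      C = sum(t * K for t, K in zip(theta, Ks)) + sigma2 * np.eye(len(y))
      _, logdet = np.linalg.slogdet(C)
      return 0.5 * (logdet + y @ np.linalg.solve(C, y))

  # Toy data: two Gaussian kernels of different widths (illustrative only).
  rng = np.random.default_rng(0)
  X = rng.uniform(-1, 1, size=30)
  Ks = [np.exp(-(X[:, None] - X[None, :]) ** 2 / (2 * s ** 2)) for s in (0.2, 1.0)]
  y = np.sin(3 * X) + 0.1 * rng.standard_normal(30)

  res = minimize(neg_log_marginal_likelihood, x0=np.zeros(2), args=(Ks, y))
  print("learned kernel weights:", np.exp(res.x))

Quasi-Newton ascent over log \theta is just one simple choice here; the algorithms proposed in the talk are specific to the MKL model.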
We also present some preliminary numerical results.