The Institute of Statistical Mathematics

Seminar by Kun Zhang and Francesco Dinuzzo

Date and time
November 22 (Tue), 2011, 15:00-17:00
No registration required; admission free.
Place
Seminar room 5 (3F, D313),
The Institute of Statistical Mathematics
Speaker 1
Kun Zhang
(Max Planck Institute for Intelligent Systems)
Title
Recent advances in causal discovery:
Conditional independence, non-Gaussianity, and nonlinearity
Abstract

Causal discovery from non-experimental data has attracted much interest in the past two decades. This talk briefly reports some recent advances in causal discovery for continuous variables. We start with constraint-based methods, which derive the causal diagram by exploiting (conditional) independence relationships. We propose a very general nonparametric method for conditional independence testing, called the Kernel-based Conditional Independence test (KCI-test). Experimental results show that it outperforms alternative methods, especially when the conditioning set is large or the sample size is not very large, situations in which other approaches encounter difficulties.
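To make the construction concrete, here is a minimal sketch of a kernel-based conditional independence statistic in the spirit of the KCI-test. It is an illustrative toy, not the authors' implementation: the kernel widths are fixed, the actual test augments X with Z before computing its kernel, and the calibration of the null distribution (done via a spectral approximation in the real KCI-test) is omitted entirely.

```python
import numpy as np

def rbf_gram(x, width=1.0):
    # RBF (Gaussian) Gram matrix; x is reshaped to (n, d).
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    d2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def kci_statistic(x, y, z, eps=1e-3):
    """Toy CI statistic: remove the influence of Z from the centered
    kernels of X and Y by kernel ridge regression, then measure the
    dependence left in the residual kernels with a trace statistic.
    Small values are consistent with X independent of Y given Z."""
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n              # centering matrix
    Kx, Ky, Kz = (H @ rbf_gram(v) @ H for v in (x, y, z))
    Rz = eps * np.linalg.inv(Kz + eps * np.eye(n))   # residual-maker for Z
    Kx_z = Rz @ Kx @ Rz                              # X after removing Z
    Ky_z = Rz @ Ky @ Rz                              # Y after removing Z
    return np.trace(Kx_z @ Ky_z) / n
```

In practice the statistic must be compared against a null distribution estimated from the data; turning the raw number into a p-value would require an additional permutation or spectral-approximation step.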

Constraint-based methods recover only a Markov equivalence class, which may contain multiple causal models implying the same conditional independence relations. Under appropriate assumptions, causal discovery based on functional causal models can avoid this disadvantage. We then review how the non-Gaussianity of the data helps to identify the causal model uniquely; in particular, the linear non-Gaussian acyclic model (LiNGAM) is discussed. Next, since nonlinearity is usually encountered in practice, we present the post-nonlinear (PNL) causal model, which takes into account the nonlinear effect of the cause, the influence of the inner noise, and possible measurement distortion in the effect. We report its identifiability conditions as well as results on real-world problems.
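The identifiability argument can be seen in a small simulation. Under assumed toy distributions (uniform cause, exponential noise; all choices here are illustrative), an ordinary least-squares regression leaves a residual that is independent of the regressor only in the true causal direction, a consequence of the Darmois-Skitovich theorem that LiNGAM-style methods exploit:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = rng.uniform(-1.0, 1.0, n)        # cause
e = rng.exponential(1.0, n) - 1.0    # non-Gaussian noise, mean zero
y = x + e                            # linear acyclic mechanism: x -> y

def dependence_after_regression(cause, effect):
    # Regress effect on cause, then probe a higher-order dependence
    # between residual and regressor; with non-Gaussian noise this
    # vanishes only in the true causal direction.
    slope, intercept = np.polyfit(cause, effect, 1)
    resid = effect - (slope * cause + intercept)
    return np.corrcoef(cause ** 2, resid)[0, 1]

print(dependence_after_regression(x, y))  # close to 0: correct direction
print(dependence_after_regression(y, x))  # markedly nonzero: wrong direction
```

With Gaussian noise both numbers would be near zero and the direction would be unidentifiable from the observational distribution alone, which is exactly the gap that non-Gaussianity closes.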

Speaker 2
Francesco Dinuzzo
(Max Planck Institute for Intelligent Systems)
Title
Learning kernels for the output space
Abstract
Recently, learning problems with multiple and structured outputs have been attracting considerable attention. Within the framework of kernel methods, one can embed prior knowledge about the relationships between the different output components by designing suitable kernels on both the input and the output set. However, the available prior knowledge may not be sufficient to design a good kernel in advance. In this talk, we discuss the framework of Output Kernel Learning (OKL), which allows one to learn simultaneously a vector-valued function and a kernel for the output space. The methodology makes it possible to solve a supervised learning problem with multiple outputs while revealing interesting structure in the output space.
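As a rough illustration of what learning a kernel for the output space can look like (and emphatically not the speakers' algorithm), one can alternate two steps: vector-valued kernel ridge regression with a separable kernel k(x, x')·L, whose coefficient matrix solves a Sylvester equation, and a re-estimation of the output kernel L from the fitted outputs. The update rule for L below is a heuristic assumed purely for this sketch:

```python
import numpy as np

def fit_okl_sketch(K, Y, lam=0.1, n_iter=20):
    """Toy alternating scheme: K is the n x n input Gram matrix,
    Y the n x m output matrix.  Ridge regression with the separable
    kernel k(x, x') * L amounts to solving K C L + lam * C = Y for
    the coefficient matrix C (a Sylvester equation), done here via
    eigendecompositions of K and L."""
    n, m = Y.shape
    L = np.eye(m)                            # initial output kernel
    for _ in range(n_iter):
        dk, Uk = np.linalg.eigh(K)
        dl, Ul = np.linalg.eigh(L)
        S = (Uk.T @ Y @ Ul) / (np.outer(dk, dl) + lam)
        C = Uk @ S @ Ul.T                    # solves K C L + lam C = Y
        F = K @ C @ L                        # fitted outputs
        L = F.T @ F / n + 1e-6 * np.eye(m)   # heuristic output-kernel update
    return C, L
```

The convex ridge step is standard for separable multi-output kernels; the actual OKL framework instead optimizes a joint regularized objective over the function and the output kernel, with properties this heuristic does not attempt to reproduce.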