Workshop on Bayesian Inference at ISM

Chaired by the Multimodal Data Project of ROIS


Taking advantage of the simultaneous presence of several international researchers in Tokyo in mid-August, an informal workshop on Bayesian Statistics will be held on August 21st, 2007 at the Institute of Statistical Mathematics, Tokyo, Japan. Many thanks to all speakers and participants!

NEW: Slides can be downloaded below!


[ Program ]
15:15 - 15:20 Opening Remarks
15:20 - 16:00 Tutorial: Inference methods for Nonparametric Bayes models (Naonori Ueda)
16:00 - 16:55 The Infinite Markov Model: A Nonparametric Bayesian approach (Daichi Mochihashi)
16:55 - 17:10 Break
17:10 - 18:05 Particle Markov chain Monte Carlo: Applications to Non-linear Dynamic Models (Arnaud Doucet)
18:05 - 19:00 Inference for Lévy driven stochastic volatility models via sequential Monte Carlo (Ajay Jasra)


[ Access ]
Please follow this link for access information. The Workshop will be held in the Kenshu-Shitsu room (2F).

[ Organizer ]
Please contact Tomoko Matsui for any questions regarding the workshop.

[ Detailed Program and Slides ]
[slides] Tutorial: Inference methods for Nonparametric Bayes models (Naonori Ueda)
Nonparametric Bayes modeling, specifically Dirichlet process mixture (DPM) modeling, has been at the center of recent research in machine learning. The major difference between parametric and nonparametric Bayes is that in the latter the number of mixture components can grow without bound as the number of observed samples increases. This flexible modeling has attracted both basic researchers and practitioners. In this talk, focusing on practical inference methods for DPM models, I will first briefly review the basics of DPM models, and then explain several inference methods based on variational approximation and Markov chain Monte Carlo. I will also present Gibbs sampling methods for hierarchical Dirichlet process mixture (HDM) models.
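
For readers who want to see the mechanics, here is a minimal sketch of collapsed Gibbs sampling for a one-dimensional Gaussian DPM under the Chinese Restaurant Process representation. It is not the code from the talk: the conjugate Gaussian prior, the known observation variance, and all constants (alpha, sigma2, mu0, tau2) are illustrative assumptions.

import numpy as np

def log_predictive(x, members, sigma2=1.0, mu0=0.0, tau2=10.0):
    # Log posterior-predictive density of x under a cluster containing
    # `members`, with known observation variance sigma2 and a
    # N(mu0, tau2) prior on the cluster mean (conjugate, so closed form).
    n = len(members)
    prec = 1.0 / tau2 + n / sigma2
    mean = (mu0 / tau2 + np.sum(members) / sigma2) / prec
    var = 1.0 / prec + sigma2
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def dpm_gibbs(x, alpha=1.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    z = np.zeros(len(x), dtype=int)              # all points start in one cluster
    for _ in range(iters):
        for i in range(len(x)):
            z[i] = -1                            # remove x[i] from its cluster
            labels = sorted(k for k in set(z) if k >= 0)
            logp = [np.log(np.sum(z == k)) + log_predictive(x[i], x[z == k])
                    for k in labels]
            logp.append(np.log(alpha) + log_predictive(x[i], x[0:0]))  # new table
            logp = np.array(logp)
            p = np.exp(logp - logp.max()); p /= p.sum()
            c = rng.choice(len(p), p=p)
            z[i] = labels[c] if c < len(labels) else max(labels, default=-1) + 1
        z = np.unique(z, return_inverse=True)[1]  # compact the labels
    return z

For example, dpm_gibbs(np.r_[np.random.normal(-2, 1, 50), np.random.normal(2, 1, 50)]) will typically settle on two dominant clusters without the number of components having been fixed in advance.
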
[slides] The Infinite Markov Model: A Nonparametric Bayesian approach (Daichi Mochihashi)
Markov models are simple but effective tools widely employed in discrete sequence modeling, for example in natural language processing, music modeling, compression, and bioinformatics. However, a crucial problem with a Markov model is that we must determine its order. Since the true Markov orders are not known in advance, this restriction often forces us to use short, fixed-range dependencies that are set heuristically to avoid an explosion in the number of parameters associated with the model. In this talk, we will present a complete nonparametric Bayesian generative model of variable order Markov sequences. By introducing a simple prior over the tree structures of hierarchical Chinese Restaurant processes, our model can infer the latent Markov orders from which each symbol originated. We show that it also admits efficient inference and yields scientifically interesting results on language streams of words and characters. This model can be interpreted as a complete Bayesian replacement for the pruning approaches to variable order Markov models of Bühlmann (1999) in statistics and Ron et al. (1994) in machine learning.
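
As a toy illustration of the latent-order idea (the generative view only, not the talk's hierarchical Chinese Restaurant Process inference), the sketch below emits each symbol from a context depth chosen by a per-step stopping probability; the fixed stop_prob and the hand-written contexts table are placeholders for what the model actually learns.

import random

def generate(length, contexts, stop_prob=0.5, vocab=("a", "b"), max_order=3):
    # Each symbol is generated from a latent Markov order: we lengthen the
    # conditioning suffix until a coin flip says "stop" (or we hit max_order).
    seq = []
    for _ in range(length):
        depth = 0
        while depth < min(len(seq), max_order) and random.random() > stop_prob:
            depth += 1
        ctx = tuple(seq[len(seq) - depth:])
        # back off to a uniform distribution for contexts not in the table
        dist = contexts.get(ctx, {s: 1.0 / len(vocab) for s in vocab})
        symbols, weights = zip(*dist.items())
        seq.append(random.choices(symbols, weights=weights)[0])
    return seq

# e.g. after "a" the next symbol is almost surely "b", otherwise uniform
contexts = {("a",): {"a": 0.05, "b": 0.95}}
print("".join(generate(40, contexts)))
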
[slides] Particle Markov chain Monte Carlo: Applications to Non-linear Dynamic Models (Arnaud Doucet)
Markov chain Monte Carlo (MCMC) methods are now routinely used to perform Bayesian inference but, for complex models, standard MCMC algorithms mix slowly and can easily get trapped in local maxima. In this talk, I will present a new method to build very high dimensional proposal distributions for MCMC and will demonstrate its performance on non-linear state-space models. Our method relies on Sequential Monte Carlo/particle filtering methods. It can be interpreted as an extension of the popular Configurational Bias Monte Carlo (CBMC) method developed in molecular simulation, but it enjoys much nicer theoretical properties than CBMC and outperforms the latter by several orders of magnitude in simulations.
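
A minimal sketch of one flavour of this idea, particle marginal Metropolis-Hastings, may help fix intuitions: a bootstrap particle filter supplies an unbiased likelihood estimate, which then drives an ordinary Metropolis-Hastings chain over a parameter. The benchmark nonlinear model, the flat prior, and the tuning constants below are assumptions for illustration, not the talk's settings.

import numpy as np

def pf_loglik(y, sv, n_part=500, rng=None):
    # Bootstrap particle filter: returns an unbiased estimate of the
    # marginal log-likelihood of y under the benchmark nonlinear model
    # x_t = x_{t-1}/2 + 25 x_{t-1}/(1 + x_{t-1}^2) + 8 cos(1.2 t) + sv*v_t,
    # y_t = x_t^2 / 20 + w_t,  with v_t, w_t ~ N(0, 1).
    rng = rng or np.random.default_rng(1)
    x = rng.normal(0.0, 1.0, n_part)
    ll = 0.0
    for t, yt in enumerate(y):
        x = 0.5 * x + 25 * x / (1 + x ** 2) + 8 * np.cos(1.2 * t) \
            + sv * rng.normal(size=n_part)               # propagate
        logw = -0.5 * (yt - x ** 2 / 20.0) ** 2          # N(0,1) observation noise
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())                       # likelihood increment
        x = x[rng.choice(n_part, n_part, p=w / w.sum())] # multinomial resampling
    return ll

def pmmh(y, iters=1000, step=0.2, seed=2):
    rng = np.random.default_rng(seed)
    sv, ll = 1.0, pf_loglik(y, 1.0, rng=rng)
    chain = []
    for _ in range(iters):
        prop = abs(sv + step * rng.normal())             # random walk, reflected at 0
        ll_prop = pf_loglik(y, prop, rng=rng)
        if np.log(rng.uniform()) < ll_prop - ll:         # flat prior on sv > 0
            sv, ll = prop, ll_prop
        chain.append(sv)
    return np.array(chain)
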
[slides] Inference for Lévy driven stochastic volatility models via sequential Monte Carlo (Ajay Jasra)
In this talk I investigate simulation and inference for a class of continuous-time stochastic volatility (SV) models in which the price includes an additive variance gamma process. This model assumes that movements of the log price of an asset are modelled through a Brownian and a Lévy component, with the volatility process following a correlated diffusion (a Cox-Ingersoll-Ross model). The infinite activity nature of the driving gamma process can capture the observed behaviour of many financial time series, and a discretized version has been found to be very useful for modelling such (daily) data. However, it is well known that when a fine-scale discretization of the diffusion is adopted, Markov chain Monte Carlo (MCMC) methods can mix very slowly. In this work we introduce an approach that can be more efficient: we can provide more accurate discretizations of the diffusions and simultaneously maintain a better mixing MCMC algorithm. In addition, for complex problems, we introduce a fully adaptive sequential Monte Carlo (SMC) sampler algorithm to simulate from the posterior density. Our approach can be adopted for any SV model where discretization is required (that is, where exact inference is not currently possible). We illustrate the methodology with an analysis of high frequency (5 minute) S&P 500 share index data. From the inferential point of view, we find that the discretized variance gamma model fails to capture, at the very least, the correlation structure of the volatility and is not always appropriate for high frequency financial data.
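
To give a feel for the adaptive SMC sampler ingredient (on a toy one-dimensional target rather than the SV model of the talk), the sketch below tempers a particle population from the prior towards the posterior, picking each temperature increment by bisection so that the effective sample size stays near a target, then rejuvenating the particles with a Metropolis move. Every model choice and constant here is an assumption for illustration.

import numpy as np

def log_prior(theta):
    return -0.5 * (theta / 5.0) ** 2                 # N(0, 5^2) prior

def log_lik(theta):
    return -0.5 * ((theta - 3.0) / 0.5) ** 2         # toy Gaussian likelihood

def next_temp(phi, ll, target_ess):
    # Bisect for the largest temperature whose incremental weights keep
    # the effective sample size above target_ess (standard adaptive rule).
    def ess(p):
        w = np.exp((p - phi) * (ll - ll.max()))
        return w.sum() ** 2 / (w ** 2).sum()
    if ess(1.0) >= target_ess:
        return 1.0
    lo, hi = phi, 1.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if ess(mid) >= target_ess else (lo, mid)
    return lo

def smc_sampler(n=1000, seed=3):
    rng = np.random.default_rng(seed)
    theta = rng.normal(0.0, 5.0, n)                  # particles from the prior
    phi = 0.0
    while phi < 1.0:
        ll = log_lik(theta)
        new_phi = next_temp(phi, ll, 0.5 * n)
        w = np.exp((new_phi - phi) * (ll - ll.max()))
        theta = theta[rng.choice(n, n, p=w / w.sum())]   # reweight and resample
        # one random-walk Metropolis rejuvenation step per particle,
        # targeting prior(theta) * likelihood(theta)^new_phi
        prop = theta + 0.5 * rng.normal(size=n)
        logr = new_phi * (log_lik(prop) - log_lik(theta)) \
             + log_prior(prop) - log_prior(theta)
        theta = np.where(np.log(rng.uniform(size=n)) < logr, prop, theta)
        phi = new_phi
    return theta
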